Extreme Markup in Montréal wrapped up a few days ago. The proceedings of the conference are available in different formats. There is a lot of good material there, including Interoperability.
But one specific paper by John L. Clark on Quality Assurance caught our eye: Structured Software Assurance. The paper starts with this introduction:
How can software users—either individuals or organizations—better determine the correctness of the software they are using? Similarly, how can software developers better determine the correctness of the software they are creating? These questions take on additional weight when considered with respect to the security or the safety of the product in question. What software is secure? What software is safe? How can users determine these qualities?
When developing a specification, the same types of concerns expressed in this article emerge. We could rephrase them roughly as:
- What makes a specification of good quality?
- What are the consequences of normative references?
- How do we determine that the normative references are correct and applied according to their normative criteria?
- How and when can the editor of a specification determine its quality?
- What level of prose and testable assertions must be written into the specification?
- What are the dangers and benefits of leaving features “free to implementations”?
- When is the right time to create a test case for a described feature?
These are only some of the questions specification editors face when developing a specification. Specification Guidelines is a tool to help editors and Working Groups ask the right questions when developing a specification, and Variability in Specifications addresses very specific issues by diving deeper into these topics.
In his article, John Clark proposes using RDF to model the relationships between the different component parts of a software project. The article contains a simple RDF schema for this model.
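To give a flavour of what such a model looks like, here is a small hypothetical fragment in Turtle syntax. It is a sketch only, not the schema from Clark's paper: the class and property names (`ex:Component`, `ex:dependsOn`, `ex:verifies`, and so on) are our own illustrative assumptions.

```turtle
# Hypothetical sketch only; not the schema from Clark's paper.
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/assurance#> .

# Classes for the parts of a software project
ex:Component   a rdfs:Class .
ex:Requirement a rdfs:Class .
ex:TestCase    a rdfs:Class .

# A component may depend on other components
ex:dependsOn a rdf:Property ;
    rdfs:domain ex:Component ;
    rdfs:range  ex:Component .

# A test case verifies a requirement
ex:verifies a rdf:Property ;
    rdfs:domain ex:TestCase ;
    rdfs:range  ex:Requirement .
```

With relationships expressed this way, standard RDF tooling can then answer questions such as "which requirements have no test case verifying them?" by querying the graph.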