Web Architecture and Quality: closing the loop

In the first meeting of the W3C Technical Architecture Group (TAG), when we introduced ourselves to each other, I said I wanted to be able to use bookmarks in my bank web site. Over the next three years, we boiled that down to a one-liner: To benefit from and increase the value of the World Wide Web, agents should provide URIs as identifiers for resources. But I still can't bookmark my bank account... because we haven't closed the loop with the people who design bank web sites. Working on the TAG highlights a tension that is everywhere in W3C work: we can write documents until we are blue in the face, but until we get consensus, i.e., until we get somebody to agree to them and act on them, it hardly makes a difference. Consensus and quality are intimately related, as the introduction to the W3C Process Document explains:

W3C follows processes that promote the development of high-quality standards based on the consensus of the Membership, Team, and public. W3C processes promote fairness, responsiveness, and progress: all facets of the W3C mission.

I learned one of my first lessons about consensus in 1991, when I started working on the HTML specification. As a math and computer science guy, it was obvious to me that we should have a formal grammar for the language. This SGML DTD stuff looked a little more quirky than lex and yacc (and turned out to be a lot more quirky as I learned more; but that's another story...) but it supported the same feedback loop with the computer:

  1. Write a specification of the language.
  2. Write a test document.
  3. Ask the computer if the test document matches the specification.
  4. If the results surprise you, tweak the language specification and/or the test document.
  5. Repeat steps 3 and 4 until satisfied.
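
Here's a minimal sketch of that loop as you might run it today, using XML and Python's lxml library as a stand-in for the SGML toolchain of the time (the file names are hypothetical):

    # feedback_loop.py -- a sketch of the specify/test/validate loop,
    # using XML and lxml in place of the SGML tools of the day.
    from lxml import etree

    # Step 1: the language specification, as a DTD (hypothetical file).
    dtd = etree.DTD(open("spec.dtd", "rb"))

    # Step 2: a test document written against that specification.
    tree = etree.parse("test-document.xml")

    # Step 3: ask the computer whether the document matches the spec.
    if dtd.validate(tree):
        print("The test document matches the specification.")
    else:
        # Step 4: if the results surprise you, read the diagnostics,
        # tweak the DTD and/or the document, and run again (step 5).
        for error in dtd.error_log:
            print(f"line {error.line}: {error.message}")

The tools have changed, but the loop is the same: the computer, not a committee, tells you whether the document and the specification agree.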

I was a little frustrated that the rest of the IETF HTML Working Group didn't see this as the obvious way to proceed. I was in the habit of downloading and compiling open source software (though we didn't call it that back then), and James Clark had provided an open source SGML parser, but somehow I couldn't convince people to play this game along with me. Eventually Mark Gaither and I figured we could make a forms/CGI interface to the SGML parser and provide a "zero install" service. The struggle to get people to learn the DTD notation continued, but at least folks were willing to try out the feedback loop.
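
The service amounted to little more than a thin layer over the parser: a form posts a document, the server validates it and returns the diagnostics. A hypothetical sketch of the idea (this is not the actual service code):

    #!/usr/bin/env python3
    # validate.cgi -- hypothetical sketch of a "zero install" validation
    # service in the spirit of the original forms/CGI wrapper.
    import os
    import sys
    from urllib.parse import parse_qs

    from lxml import etree

    # Read the form submission the CGI way: the web server hands us the
    # POST body on stdin and its length in an environment variable.
    length = int(os.environ.get("CONTENT_LENGTH", "0"))
    fields = parse_qs(sys.stdin.read(length))
    document = fields.get("document", [""])[0]

    print("Content-Type: text/plain")
    print()
    try:
        # Parse and validate against whatever DTD the document declares
        # in its DOCTYPE.
        parser = etree.XMLParser(dtd_validation=True)
        etree.fromstring(document.encode("utf-8"), parser)
        print("No errors found.")
    except etree.XMLSyntaxError as err:
        for e in err.error_log:
            print(f"line {e.line}: {e.message}")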

And then Gerald started adding "kinder, gentler" diagnostic messages and expanded the scope of the service from a design tool to something that authors could actually use to fix their markup, and the feedback loop expanded.

Sometimes you just have to write prose specifications and get the implementors to read them and slowly work out the bugs. But too often, that process just doesn't converge. As the history of CSS shows, we do a lot better when we add test suites to the mix. And even better than those tests, where the evaluation is done manually, are the thousands of automated tests for XML. And even better than those, which were mostly developed after-the-fact at the errata stage, are the hundreds of RDF and OWL tests that were developed concurrent with the Recommendations. (More on those separately.)
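
The pattern behind those automated suites is worth spelling out: each test pairs an input document with its expected result, and a runner checks every pair with no human in the loop. Here's a hypothetical sketch, with a made-up manifest format and a stand-in canonicalizer (the real suites each define their own):

    # run_tests.py -- hypothetical sketch of an automated conformance
    # runner: each test pairs an input with an expected result, so no
    # manual evaluation is needed.
    import csv

    def canonicalize(path):
        # Stand-in for a real parser: reduce a document to a canonical
        # form so that two equivalent documents compare equal.
        with open(path) as f:
            return " ".join(f.read().split())

    passed = failed = 0
    with open("manifest.csv") as manifest:  # columns: input, expected
        for case in csv.DictReader(manifest):
            if canonicalize(case["input"]) == canonicalize(case["expected"]):
                passed += 1
            else:
                failed += 1
                print(f"FAIL: {case['input']}")
    print(f"{passed} passed, {failed} failed")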

It's even better when we can make concise formal specifications, but often the formal specification tools and technologies aren't mature or expressive enough. The popular implementations of HTML have never matched the DTD, and even the DTD isn't keeping up with XHTML specifications, let alone SVG or RDF.

But even when we have nice, concise specifications for our technologies, we still need to understand the choices people face when they use them... or don't use them. I know all sorts of stuff about URIs and Web Architecture, but I have a lot to learn about how bank web sites are designed and built before URIs work like I want them to.
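
For what it's worth, the gap I want closed is the difference between something like the first of these addresses and something like the second (both hypothetical, of course):

    https://bank.example/accounts/checking/statements/2005-06
    https://bank.example/servlet/Session?id=8f2e9a01c447

The first names a resource: I can bookmark it, mail it to my accountant, and come back to it next month. The second names a conversation that will have evaporated by then.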
