A W3C QA Activity: Why, How, What

Position Paper for the W3C Workshop on Quality Assurance at W3C

Josef Dietl, Mozquito Technologies AG

Abstract

We see Quality Assurance and Conformance as critical activities of the W3C that span a wide range of important topics. The analysis provided in this paper yields the following set of criteria for the quality of a QA process:

All of that is closely connected to the motivation for the existence of W3C in the first place, but also to the motivation to join W3C as a Member or to implement W3C specifications. To lay out the arguments, we have abstracted our experience with W3C specifications and the process used to develop them. The results range widely and cover the following three core subjects: editorial quality, implementation quality and QA process quality.

QA: Why?

We begin our analysis with the question of why we care about this topic at all. For us, questions surrounding QA and conformance go to the very heart of our W3C membership and our contributions there, and we feel they also touch the core mission of the W3C. After all, the most visible aspect of the W3C mission "Leading the Web to its Full Potential" is to de-facto-standardize "the Web" (the universe of hyperlinked information). To that end, developing, recommending and providing interoperability specifications is an important starting point, but the "de facto" part does not actually occur until interoperable implementations are achieved.

QA and win-win scenarios: A glimpse at game theory

Let us take a look at a simplified, qualitative game-theory analysis of the above points. Without a QA effort, the game obviously has some zero-sum characteristics. First, we observe that standardization will inevitably make it easier for operators to switch equipment. In a scenario without QA, large vendors are therefore in a peculiar position: because they already have a huge installed base, they face a demand for backwards compatibility to maintain customer loyalty. Quite apart from the effect on technical decisions, this draws on the resources they can expend on implementing the standard. At the same time, holding a major fraction of the total market for a certain technology is believed to make it more likely (all other parameters being equal (1)) that the vendor experiences a net outflow of customers when technology lock-in is reduced in the move to support standards. Under these circumstances, a large vendor sees two arguments in favour of evolving the specification further in order to raise loyalty. Small vendors are forced to follow suit - which is not quite the concept of a de-facto standard.
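
To make the zero-sum flavour of this scenario concrete, here is a purely illustrative payoff matrix; the numbers are invented for this sketch and carry no empirical weight. Each vendor chooses between conforming strictly to the standard and extending it unilaterally; each cell shows (large vendor, small vendor) payoffs.

\[
\begin{array}{l|cc}
\text{no QA effort} & \text{small vendor: conform} & \text{small vendor: extend} \\
\hline
\text{large vendor: conform} & (1,\ 1) & (1,\ 0) \\
\text{large vendor: extend}  & (3,\ -1) & (2,\ 0)
\end{array}
\]

With these hypothetical numbers, extending is the dominant strategy for the large vendor, and the small vendor's best reply is to follow suit; the total payoff in the resulting equilibrium is no larger than under mutual conformance, so the game merely redistributes value from the small vendor to the large one.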

On the other hand, mechanisms like the above have been alleged repeatedly, and such allegations can occasionally hurt the brand image of large vendors. Sympathy often rests with the small vendors, and because there is no objective way to compare the implementations, both parties have reason to feel at a disadvantage.

How can QA change this?

The goal of a QA framework then must be to create a level playing field. Essentially, a QA effort should provide vendors with a valuable and credible statement that their effort pays off. Over the years, the W3C has earned a firm and respectable reputation as a competent and neutral body. That reputation should be leveraged to give vendors large and small an incentive to achieve their goals within the framework of the specification. The goals we have seen so far are:

With a visible quality mark (2), (3) and a specification providing well-defined extension interfaces, the above scenario changes as follows: the quality mark is an additional argument for customers to upgrade and thus dampens the demand for backwards compatibility. At the same time, the reputation of all vendors is generally better protected against allegations of changing the standard. Customer loyalty can still be supported by building on the extension framework, and large vendors in particular can put their strong knowledge of their customers to good use through target-group-specific extensions.

On the larger scale, the quality mark raises customer confidence, which in turn speeds up adoption of the new technology and creates a bigger total market for it.
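
Continuing the purely illustrative sketch from the game-theory section above - again with invented numbers - a visible quality mark plus a well-defined extension interface might shift the payoffs roughly as follows: conformance now earns the mark (and the customer confidence that comes with it), and mutual conformance grows the total market.

\[
\begin{array}{l|cc}
\text{with quality mark} & \text{small vendor: conform} & \text{small vendor: extend} \\
\hline
\text{large vendor: conform} & (4,\ 3) & (3,\ 0) \\
\text{large vendor: extend}  & (3,\ 1) & (2,\ 0)
\end{array}
\]

Under these assumptions, conforming becomes the dominant strategy for both players, and the total payoff in equilibrium exceeds that of the no-QA game: the quality mark turns a largely redistributive game into a win-win scenario.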

A working QA framework will benefit all parties involved:

At the same time, another challenge in standardization starts to disappear: features can be pushed out of the standard and into the extension interface. That way, the specifications themselves become easier to handle. This also enables a wide choice of implementations, ranging from ones that are basic in their feature set but available in resource-constrained environments, to implementations that are targeted towards the needs and resources of specific audiences.

Deploying a QA effort

For all these compelling benefits of a QA initiative, deployment of the effort needs to be done carefully. Obviously the boundary conditions must be set in a way that lets the above-mentioned advantages actually materialize, but beyond that, the deployment phase itself (both for W3C at large and, repeatedly, in every individual Working Group) must be handled with great care. After all, if the quality mark is meaningful, it will take a certain effort to achieve it, and participating in W3C activities is an expensive business in its own right. Leveraging a QA framework to actually encourage contributions to W3C's work requires confidence on the part of the contributors that they will ultimately reap the benefits of their efforts.

The one good way to develop this confidence is expectation management. The criteria to qualify for the quality mark must be known early on, or at least the process for establishing these criteria must be developed early, and in particular the expertise in the Working Group must be taken into account in order to make contributions to the WG worthwhile. Ideally, use cases and test cases would be developed early on, or even before the creation of the Working Group; the XML Query Group is having very good experience with this approach. Among the many commendable "outside" efforts to provide test suites, there have also been conformance tests developed by outside organizations that clearly failed to meet the approval of the Working Group. In order to keep the action where the Working Group is, the WG members' influence on W3C QA efforts must be assured (see also "QA Process Quality" below).

Three Levels of QA

There are three levels of Quality Assurance for consideration, each one building on the previous.

  1. Editorial quality: The quality of the specification
  2. Product quality: The quality of the implementation
  3. QA Process quality: The quality of the test

Editorial Quality

Arguably, the quality of specifications is the first milestone to achieve before implementations can be tested in later steps.

One serious gap is that there are no criteria for what makes an editorially good specification. For the purpose of this discussion, we propose two criteria: a specification must be easy to verify, and it must be easy to read. Perfection is - as everywhere - impossible to achieve; still, W3C specifications past and present are fairly heterogeneous in terms of ambiguity, precision, contradictions and mere readability. As explained earlier (see "Deploying a QA effort" above), certainty about the criteria is critical for smooth deployment of QA, and meaningful test suites depend on precise specifications. Hopefully, stronger use of extension mechanisms will have an intrinsic positive effect. However, as far as we can tell, the main limits in this area are resource constraints that need to be overcome either with more (or better suited) resources or by changing the timeframes. Every improvement in this area - however it is achieved - is most welcome and will certainly improve product quality.

Another debate has long been going on in the W3C about the technical quality of W3C specifications. The current approach is to add quality assurance stages to the W3C Process. We still remember the good days when the W3C Process had only three stages - Working Draft, Proposed Recommendation and Recommendation - and QA was done by peer review through intense contact within the community, be it from Working Group to Working Group or from Working Group to the implementation community. In addition to the impact on the quality of the specifications, adding stages to the W3C Process has had two less pleasant side effects: first, there is a near-combinatorial explosion of resources spent on dependencies (reflected partly in a growth of WG lifetime from 12-18 months four years ago to 18-24 months now), and second, the process of moving from Working Draft to Recommendation has grown from about 10% of a Working Group's lifetime to about 25%. While speed is no longer as much of an issue for W3C as it used to be, it is worth contemplating more focused mechanisms to achieve the same quality.

First approaches include this QA workshop, the Technical Architecture Group now being formed, and the New Member Introduction at every AC Meeting. Extending the New Member Introduction to Working Groups and their members could probably improve the quality aspect of "architectural integrity" and reduce the resources spent on coordination and communication with other Working Groups.

Implementation Quality

We understand "implementations" to be documents and software (4), where software can often be further split into producers and consumers of documents, like document editors and browsers, respectively.

Let us, for just one example, look at the so-called "browser war". At that time, the driving force was a virtuous cycle of documents and document-consuming software. It is notable, however, that HTML editors could have made all the difference, had they made it easier to create conforming documents. Alas, GUI tools entered the race relatively late, and by that time the status quo had already deteriorated enough to force Web designers to choose between documents that meet demand and documents that stick to the standards. Needless to say, most followed the demand. Among others, the efforts of the CSS Working Group (test suite, validator and core style sheets, followed later by a proof-of-concept CSS parser), the HTML validator and the efforts of the Web Standards Project have helped to improve the situation.

In the long run, it turns out that the W3C Team proverb "software is cheap, documents are expensive" indeed governs the dynamics of conformance through the growing legacy. The once virtuous cycle between documents and browsers has since turned vicious, and it becomes increasingly difficult to deploy gradual improvements: they are not used in documents because "the browsers don't support it", and the browsers see no need to support them because "the demand just isn't there". (5)

From that experience, we conclude that the output of document production facilities - and, more generally, documents - need the most careful scrutiny.

QA Process Quality

This brief section serves only to list the requirements on the QA process that we have found in the course of this discussion:

Summary

Of course, a paper like this cannot shed light on all aspects of Quality Assurance in an organization as interesting as the W3C. The most prominent omission is probably the lack of coverage of education and outreach. We look forward to learning more at the QA Workshop in Washington.


(1) Of course, the assumption that all other parameters are the same is far-fetched. The famous "economies of scale" are probably the most important correction to be made.

(2) Note that the quality mark itself must also convey long-term respectability.

(3) "visible" in this sense is not meant to express a visual representation, but merely a "visibility in the market". Actually, a "visible" mark that is both visual and auditive would be very compelling.

(4) This omits the very important aspect of re-use of a technology in another specification. "Quality Assurance" in such a context is a delicate, but here minor, issue: the Technical Architecture Group is in charge there.

(5) The Mozquito transformation is actually an approach to break this deadlock.