Re: test suite distinctions [was: Re: Feedback on "The Matrix"]

On Thu, 28 Feb 2002, Mark Skall wrote:

> I think of this checklist at a higher level - a meta level.  If
> this is a test, it's not a test of the technical "goodness" of a
> test suite but more like a test to ensure that certain process
> steps were followed (e.g., is every feature tested, is there
> documented traceability from each test to a requirement in the
> recommendation, is there a test harness, etc.).
> 
> I believe it is not only possible to develop this checklist
> (test), but really quite straight-forward.

I agree, except that I am sure lots of e-mails will be wasted on
defining exactly what "feature", "traceability", and "harness" are.

In my opinion, such a meta-level checklist would have very little
utility and should not be used for rating test materials:

The things you mention are obvious qualities of a good test suite:
any sane test suite author tries to implement them, and nobody gets
them 100% right. Thus, there is little value in spending time on
making the "obvious" checklist available. Worse, it would be
impossible to use that checklist for rating (i.e., assigning
comparable scores to) "test materials", because meta criteria cannot
be converted to scores in an algorithmic fashion. Consider:

	Test suite A tests 75% of MUSTs in RFC 2616 (HTTP)
	Test suite B tests 95% of MUSTs in XML 1.0 Recommendation

Which test suite is "better"? Which should get the higher score?
Would many XML-processing users even care about an HTTP test score?
Etc., etc.
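
To make the incomparability concrete, here is a minimal Python
sketch (the suite names and coverage numbers are just the
hypothetical ones above; nothing here refers to a real tool):

	suites = {
	    "A": ("RFC 2616 (HTTP)", 75),
	    "B": ("XML 1.0 Recommendation", 95),
	}

	for name, (spec, pct) in suites.items():
	    print("Suite %s tests %d%% of MUSTs in %s" % (name, pct, spec))

	# A naive rating just compares the raw percentages...
	best = max(suites, key=lambda name: suites[name][1])
	print("Naive 'winner': suite %s" % best)
	# ...but that silently assumes a MUST in HTTP is worth the same
	# as a MUST in XML 1.0 to every user -- exactly the conversion
	# that cannot be done algorithmically.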

Publishing a table that meta-compares test materials _without_
ranking them might be a good idea but, again, it would not be very
useful in general. It would only help if you had, say, 5 test suites
that all test the same kind of thing.

$0.02,

Alex.
