test case evaluation criteria?

Re: my action item from the Aug 4 WCAG Techniques Task Force telecon to provide 
some metrics by which to evaluate candidate test cases, here are some thoughts:

(1) test cases should be simple (atomic?) and "short" - with simplicity and 
length measured as objectively as possible?

(2) test cases should be self-explanatory (unambiguous?) - it should be easy 
and quick to determine the result (measured as objectively as possible?)

(3) test cases should always provide a definite binary result (pass/fail) - 
the result should always be obvious (objective)? (a rough sketch illustrating 
this follows the list)

(4) test cases should be non-overlapping (should not duplicate the 
functionality of other test cases?)

(5) test cases should adhere to (be relevant to) the specification 
(guidelines and/or success criteria)?

(6) test cases should test only the thing being tested (clear test 
purpose), not other things inadvertently (no side effects?)

(7) test requirements should always be clearly stated (any supporting files 
included, any other prerequisites listed, and the test environment explained 
clearly and objectively?)

(8) coverage should be as complete as possible (every success criterion 
should have at least one associated test case?)

(9) the quality of code and documentation must be "good" (measured as objectively as possible?)

(10) any underlying scenarios for the test methodology must be included
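
To make the binary pass/fail idea in (3) concrete, here is a rough sketch - 
not an agreed-upon WCAG test case; the particular check (images must carry an 
alt attribute), the function names, and the sample input are just illustrative 
assumptions - of an atomic test that yields exactly one pass/fail result:

  # Illustrative sketch only, not an official WCAG test case.
  from html.parser import HTMLParser

  class ImgAltChecker(HTMLParser):
      # Collects img elements that have no alt attribute at all.
      def __init__(self):
          super().__init__()
          self.failures = []

      def handle_starttag(self, tag, attrs):
          if tag == "img" and "alt" not in dict(attrs):
              self.failures.append(attrs)

  def test_images_have_alt_attribute(html_source):
      # Atomic check with one binary outcome: True = pass, False = fail.
      checker = ImgAltChecker()
      checker.feed(html_source)
      return len(checker.failures) == 0

  # One test case = one small input document plus one expected result.
  sample = '<p><img src="logo.png" alt="Company logo"></p>'
  print("pass" if test_images_have_alt_attribute(sample) else "fail")

The point of the sketch is that each candidate test case would pair one small 
input with one expected result, which keeps evaluation against criteria (1), 
(2), and (3) reasonably objective.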

Thanks, Tim Boland  NIST
