Requirements for UAAG 1.0 test suite
Status of this Document
This document describes the expectations for a test suite to be developed
by students at UIUC under Jon Gunderson.
Goal
Produce a test suite for UAAG 1.0 for:
- HTML 4, CSS1 user agents.
- Windows (2000/XP), Macintosh operating environments.
Resources
- Jon Gunderson (2 hours/week, until May 2002)
- Colin (3-4 hours/week, until May 2002)
- Dominique (3-4 hours/week, until May 2002)
- Wilson (unknown commitment level)
Test suite structure
Organization and navigation
The suite should:
- Organize the tests as a hierarchy, following the guidelines and
checkpoints organization. Each checkpoint consists of one or more
(bulleted) statements, each of which may be a requirement, a sufficient
technique, or an exception. The test suite structure will likely need to
allow the user to navigate to each requirement and then find the
relevant test information on another page.
- Let the user navigate efficiently through the guidelines/checkpoints
hierarchy, but also linearly through every test ("next test" and
"previous test").
- Link to the Techniques Document at the checkpoint level.
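The linear "next test"/"previous test" navigation above can be derived from the hierarchy itself. A minimal sketch (the hierarchy shape, test identifiers, and function names here are illustrative, not prescribed by UAAG 1.0):

```python
# Hypothetical hierarchy: guideline -> checkpoint -> test ids.
hierarchy = {
    "Guideline 1": {"1.1": ["1.1-a", "1.1-b"], "1.2": ["1.2-a"]},
    "Guideline 2": {"2.1": ["2.1-a"]},
}

def linearize(hierarchy):
    """Walk the hierarchy in document order and return a flat test list."""
    order = []
    for checkpoints in hierarchy.values():
        for tests in checkpoints.values():
            order.extend(tests)
    return order

def neighbors(order):
    """Map each test id to its (previous, next) test, or None at the ends."""
    return {
        test: (order[i - 1] if i > 0 else None,
               order[i + 1] if i < len(order) - 1 else None)
        for i, test in enumerate(order)
    }

order = linearize(hierarchy)
links = neighbors(order)
# links["1.1-b"] gives the ("previous", "next") pair for that test.
```

Generating the links this way keeps the hierarchical and linear views consistent whenever tests are added or reordered.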
Structure of a test
For each testable requirement, provide the following information:
- The statement of the requirement (with enough context, such as
checkpoint number and bullet number).
- One or more test cases (for instance, one test case per element or
attribute that is relevant to a given requirement of UAAG 1.0).
- Each test case should explain the expected behavior when the test is
run. Note that the code for a given test may be reused in other parts of
the test suite but may require a different description in each context.
It may be a good idea to keep the number of test files to a minimum and
wrap them with descriptions and other metadata appropriate to the
context where each file is used.
- When a test case is about a format (e.g., HTML), the test should
include a link to the relevant description of the element, attribute,
etc. in the format specification.
- When a test refers to conformance to another specification (e.g.,
documentation must conform to WCAG 1.0), link to:
- The relevant specification, and
- Any test suites available for that specification.
- When a test refers to other guidelines (e.g., operating environment
user interface design guidelines), include a link to those
guidelines.
- Exception statements do not require tests. Instead, these statements
should be part of the description of expected behavior for a required
feature.
- Sufficient techniques may have tests associated, or may just be part of
a description of expected behavior for a required feature.
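The per-test information above can be captured in a small data structure. The field names, example paths, and example spec anchor below are illustrative assumptions, not part of UAAG 1.0:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_file: str            # path to the (possibly reusable) test document
    expected_behavior: str    # context-specific description of what should happen
    format_links: list = field(default_factory=list)  # e.g. HTML 4 spec anchors

@dataclass
class TestableRequirement:
    checkpoint: str           # e.g. "1.2"
    bullet: int               # which bulleted statement within the checkpoint
    statement: str            # text of the requirement, with context
    cases: list = field(default_factory=list)
    other_specs: list = field(default_factory=list)   # e.g. WCAG 1.0, OS guidelines

# Hypothetical example record:
req = TestableRequirement(
    checkpoint="1.2", bullet=1,
    statement="Hypothetical requirement text goes here.",
    cases=[TestCase("tests/blockquote.html",
                    "The element is rendered as a distinct block.",
                    ["http://www.w3.org/TR/html4/struct/text.html#edef-BLOCKQUOTE"])],
)
```

Keeping the description in the wrapper record rather than in the test file itself is what allows one test file to be reused under several requirements with different expected-behavior text.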
Interaction
The test suite user should be able to record the results of the tests as
they are carried out. In order to facilitate evaluation, the test suite
should allow the user to interact as follows:
- Allow the user to supply metadata about the tester (name, email,
affiliation), user agent(s) being tested (product name, version number),
operating environment data (name, version number), and date(s) of the
evaluation.
- Allow the user to filter the test suite based on various parameters
described in "Discretionary Behavior in UAAG 1.0". Based on the
parameter values chosen (e.g., which content type labels are recognized,
whether there is a text selection, whether style sheets are supported),
generate a filtered view of the test suite with only the relevant tests.
- For each test, allow the user to rate how well the product(s) being
evaluated satisfy the test. For example, allow the test suite user to
select a radio button for each test with the appropriate rating code.
For a (very) few of the tests, it may be possible for the user agent to
inform the user automatically of whether it passes the test. Automatic
testing and reporting is not required even in these cases. Note: The
rating application and the test suite can be separate (and cross-link,
for example). A single form with radio buttons for each checkpoint might
be easier to implement.
- An evaluation tool should allow for comments on how each
checkpoint is satisfied.
- Allow the test suite user to save the results of each evaluation.
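A rough sketch of how one evaluation session could capture the metadata and per-test ratings listed above and save them for later reuse. All field names and rating codes here are invented for illustration:

```python
import json

# Hypothetical evaluation record; the real form fields are up to the implementers.
evaluation = {
    "tester": {"name": "A. Tester", "email": "tester@example.org",
               "affiliation": "Example Org"},
    "user_agent": {"product": "ExampleBrowser", "version": "1.0"},
    "environment": {"os": "Windows 2000", "version": "5.0"},
    "date": "2002-02-08",
    "results": {},   # test id -> {"rating": ..., "comment": ...}
}

def record(evaluation, test_id, rating, comment=""):
    """Record one rating; the rating codes are illustrative only."""
    assert rating in ("pass", "fail", "not-applicable")
    evaluation["results"][test_id] = {"rating": rating, "comment": comment}

def save(evaluation, path):
    """Save the form so it can be reloaded and changed incrementally."""
    with open(path, "w") as f:
        json.dump(evaluation, f, indent=2)

record(evaluation, "1.1-a", "pass", "Works with keyboard only.")
```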
Optional:
- Use EARL as the format for representing the evaluation.
- Allow the test suite user to read an existing evaluation form and
change it incrementally (as someone might want to do when evaluating a
minor product release).
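For illustration only, one result from the evaluation above might be expressed in EARL roughly as follows. The EARL vocabulary was still a draft at the time of writing, so the namespace URI, class names, and property names below are assumptions and should be checked against the current EARL schema; the tester, user agent, and test identifiers are hypothetical:

```
@prefix earl: <http://www.w3.org/ns/earl#> .   # namespace URI is an assumption
@prefix dc:   <http://purl.org/dc/elements/1.1/> .

<#assertion-1> a earl:Assertion ;
    earl:assertedBy <mailto:tester@example.org> ;     # hypothetical tester
    earl:subject   <#ExampleBrowser-1.0> ;            # hypothetical user agent
    earl:test      <#uaag10-checkpoint-1.1-test-a> ;  # hypothetical test id
    earl:result    [ a earl:TestResult ;
                     earl:outcome earl:passed ;
                     dc:description "Works with keyboard only." ] .
```

Using a machine-readable format like this would let evaluations be merged, compared across user agents, and reloaded for incremental updates.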
Jon Gunderson (jongund@uiuc.edu)
Last modified: $Date: 2002/02/08 18:11:37 $ by $Author: ijacobs $