Testing/Requirements/Cooper

Use Cases

The testing project supports specific use cases. The following use cases have been identified.

CR testing

The test tool supports the procedures necessary to verify that a technical specification is able to advance beyond Candidate Recommendation by identifying at least two interoperable implementations of every normative requirement.

Test precise technical requirements of specs

Many specifications have precise technical requirements such as parsing and validity rules that must be tested.

Test aggregate technical requirements of specs

Some requirements can only be tested in the context of other requirements, so a procedure for testing them together, without confounding the results, is needed.

Test more general conformance requirements

Some requirements are more general requirements for specification conformance that cannot be evaluated with simple unit tests. It must be possible to test such general conformance requirements as well.

User agent conformance testing

It may not be an immediate goal to perform user agent conformance testing, but the creation of a test harness naturally meets many of the requirements for this, and there is likely to be interest in using the test harness for this purpose. The test harness should be designed with this functionality in mind.

Acid tests

Acid tests are generally complex, all-in-one tests that make overall interoperable handling by various user agents easy to see at a glance.

Interoperability reports

While a W3C goal is to test specification conformance, more important to the community may be interoperability testing. Knowing which user agents produce what results for a given test, regardless of specification requirements related to that test, allows identification of areas of generally consistent and generally inconsistent user agent behaviour.

“Accessibility support database”

The accessibility support database is a proposal from the WCAG WG to produce interoperability reports for user agents on particular WCAG requirements, which differ from technical specification requirements. The needed structure of this database is similar to the structure of a well-constructed test harness; some particular requirements such as comparison of multiple testers’ results improve the general test harness even if not considered core.

General Requirements

The following requirements are overall requirements for the test harness.

Distinguish the roles of test files, test repository, test cases, test case repository, test harness, test results repository

The structure should address the following roles and treat them as separable from each other. In an actual implementation some layers may be combined, but the possibility of handling them separately is important.

  • Test files
  • Test file repository
  • Test cases
  • Test case repository
  • Test harness
  • Test results repository

Allow many-to-many relationships among test files, test cases, and test results

There should not be an assumption of a one-to-one relationship between elements at the various layers. A given test case may require several test files. A given test file may be used by several test cases. A given test execution may be repeated by different users and stored separately.
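A minimal sketch of one such layered, many-to-many data model, in TypeScript; all type and field names are hypothetical illustrations, not a defined W3C schema. Test cases reference test files by ID rather than containing them, and results reference cases, so each relationship is free to be many-to-many.

    // Hypothetical data model; names are illustrative only.
    interface TestFile {
      id: string;   // unique ID, e.g. a URI (see the test file requirements below)
      uri: string;  // location of the file in the test file repository
    }

    interface TestCase {
      id: string;
      // One test case may require several files; one file may serve several cases.
      fileIds: string[];
      instructions: string;  // what the tester (human or automated) must do
    }

    interface TestResult {
      id: string;
      caseId: string;
      testerId: string;
      userAgent: string;
      outcome: "pass" | "fail" | "cannot-tell";
      // Repeated executions by different users are stored as separate records.
      timestamp: string;
    }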

Test Cases agnostic to how they’re run

Test cases should in general be structured to be agnostic to the method of execution. Some test cases may be automatable, but there may be reasons to execute the test case manually. Some test cases may not seem automatable now or with current resources, but may become so in the future. Making the test cases agnostic allows repurposing.

Some test cases may be designed not to be agnostic, such as self-executing tests. While this is supported, it may necessitate the provision of redundant alternate test cases to meet all use cases. The default preference should be for an execution-agnostic design.
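As an illustration of the execution-agnostic preference, a sketch (hypothetical names, not an existing harness API) in which the test case states only what to verify, while automated and manual runners are interchangeable consumers of the same case:

    // The case describes *what* to check, not *how* it is executed.
    interface AgnosticTestCase {
      id: string;
      fileIds: string[];
      expectation: string;  // human-readable pass criterion, e.g. "the box is green"
    }

    // Any runner, automated or human-driven, yields the same result shape,
    // so the same case can be repurposed as execution methods change.
    type Runner = (tc: AgnosticTestCase) => Promise<"pass" | "fail">;

    const automatedRunner: Runner = async (tc) => {
      // Load the test files and inspect the outcome programmatically...
      return "pass";  // placeholder
    };

    const manualRunner: Runner = async (tc) => {
      // Present tc.expectation to a human tester and record their judgement...
      return "pass";  // placeholder
    };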

Positive tests for conformance to requirements

It must be possible to define positive tests of specification requirements.

Negative tests for error conditions

It must be possible to define negative tests that actively test failure to meet specification requirements or test error handling behaviour.

Test cases / test files can be used for multiple purposes / shared among WGs

Test files and test cases should be designed as neutrally as possible so they can be repurposed. Multiple Working Groups may have reasons to re-use test files and should not be forced to create redundant versions. Even within a specification, a given test file may be used to test multiple things.

Test Files

The following requirements apply to test files.

Metadata about test case to be separate from test file

Metadata about a test case should be stored separately from the test file whenever possible. Metadata inside the test file makes it impossible to re-use the file for multiple test cases, and can introduce validity problems where the user agent reacts to the metadata rather than to the actual intended test subject.

Allow metadata about test case to be in the test file

Notwithstanding the above, the harness must allow test case metadata to be included in test files. Generally this is used to facilitate automation in various ways. Test case developers should consider the consequences to the reusability and validity of the test file when doing this.

Allow self-executing test files

The repository must support test files that execute simply by being loaded in the user agent and that automatically record a result to the repository. The result record must be complete, with information about the test case, tester, user agent, etc.
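Such a file might contain a script along the lines of the following sketch; the results endpoint, payload shape, and IDs are assumptions for illustration, not a defined reporting interface:

    // Runs when the page loads and reports its own result to the results
    // repository. Endpoint and payload are hypothetical.
    window.addEventListener("load", async () => {
      const passed =
        document.getElementById("target")?.textContent === "expected";
      await fetch("https://example.org/results", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          caseId: "example-case-001",        // which test case this run belongs to
          testerId: "anonymous",             // who ran it
          userAgent: navigator.userAgent,    // on which user agent
          outcome: passed ? "pass" : "fail", // the result itself
        }),
      });
    });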

Test files have unique ID

Test files must be externally identifiable by a unique ID. A URI may be sufficient. The ID should not be expected to contain metadata about the test in its lexical form, although as a convenience many IDs may have some structure.

Dependencies

It must be possible for test files to have dependencies of the following types.

Singleton test files

Test files consisting of a single file with no external dependencies are preferred for simplicity and portability.

Dependencies on external resources

Notwithstanding the above it must be possible for test files to have dependencies on external resources such as images, scripts, etc.

Shared resources

It must be possible for resources, such as images, scripts, etc., to be shared by multiple test files. The test file repository structure must accommodate actual “test files” as well as resources that are not themselves considered test files.

Multi-file tests

Some tests may require external resources but not expect to share those resources with other test files (particularly images). It must be possible to store such related resources and collectively identify them as related to a single test file.
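One way to record these dependency types is a per-file manifest that names supporting resources and distinguishes shared from private ones; the sketch below is hypothetical, with illustrative field names and paths:

    // Hypothetical repository manifest entry for a test file's dependencies.
    interface TestFileManifest {
      fileId: string;             // the test file proper (a singleton if both lists are empty)
      sharedResources: string[];  // resources reused across many test files
      privateResources: string[]; // resources belonging to this test file alone
    }

    const example: TestFileManifest = {
      fileId: "backgrounds/bg-repeat-001.html",
      sharedResources: ["support/1x1-green.png"],
      privateResources: ["backgrounds/bg-repeat-001-expected.png"],
    };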

Types of test files

The repository should support the following types of test files (non-exhaustive list).

  • Unit tests
  • Feature tests
  • Multiple-feature tests (interaction between the features)
  • Acid tests
  • Parsing
  • Memory model of feature (e.g., DOM)
  • Presentation of feature
  • API exposure of feature
  • Reaction to input

Test Cases

Test cases are the actual instructions for testing; they use particular test files and are the units against which results are reported.

Metadata

Test cases must provide the following metadata.

Associate with particular spec requirement(s)

The specific specification requirement(s) addressed by the test case must be uniquely and unambiguously identified.

Passing or failing test

The test case must identify whether an affirmative result indicates a passing or failing condition of the spec requirement.

Indicate whether test case is a feature test or error condition test

The test case must indicate whether it is a feature test or an error condition test.

Applicability to user agents

Test cases may be valid only for certain user agents, because of intrinsic characteristics of those user agents. It must be possible to record this applicability.
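Gathered into one sketch (TypeScript, hypothetical field names), the metadata requirements above might look like:

    // Hypothetical test case metadata record covering the fields above.
    interface TestCaseMetadata {
      specRequirements: string[];           // URIs of the spec requirement(s) tested
      affirmativeMeans: "pass" | "fail";    // what an affirmative result indicates
      kind: "feature" | "error-condition";  // feature test vs. error condition test
      applicableUserAgents?: string[];      // omitted when the case applies to all
    }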

Allow different test cases to be stored for the same test

More than one test case may be required to completely test a specific specification requirement; storing multiple test cases for the same test must be allowed.

Allow multiple test cases to use same test files

Various test cases may use the same test files; that is, a given test file may be referenced by more than one test case. For example, a test file may be used to test both user agent behaviour and conformance checker behaviour; the test cases are different.

Automatable test cases

Procedures or support for automated execution of test cases may be provided.

Non-automatable test cases

It must be possible to provide test cases that are not expected to be automatable.

Repository

The following requirements apply to the repository or repositories.

Storage of test files

A repository must be provided to store test files, conforming to the requirements set out in the test files section.

Storage of test cases

A repository must be provided to store test cases, conforming to the requirements set out in the test cases section.

Storage of testing results

A repository must be provided to store test results. This must store the following information:

  • Execution of particular test cases
  • Using particular test files
  • On particular user agent
  • By particular tester
  • Yielding particular results

Multiple tests

It must be possible to store more than one set of results for a given combination of the above features.

“Authoritative” result

When multiple test results exist for a given test case, there must be a mechanism to compare the results and determine an authoritative result. This capability must be limited to privileged users.
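For example, designating the authoritative result might work along the lines of this sketch; the privilege model and record shapes are assumptions for illustration:

    interface StoredResult {
      id: string;
      caseId: string;
      outcome: "pass" | "fail";
      authoritative: boolean;
    }

    interface User {
      id: string;
      privileged: boolean;
    }

    // Marks one stored result as authoritative for its test case.
    function markAuthoritative(
      results: StoredResult[],
      resultId: string,
      user: User,
    ): void {
      if (!user.privileged) {
        throw new Error("only privileged users may set the authoritative result");
      }
      const target = results.find((r) => r.id === resultId);
      if (!target) {
        throw new Error(`no such result: ${resultId}`);
      }
      // At most one result per test case is authoritative at a time.
      for (const r of results) {
        if (r.caseId === target.caseId) {
          r.authoritative = r.id === resultId;
        }
      }
    }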

Allow test files and test cases to be submitted from diverse sources

The repository must support input from multiple types of sources.

Bulk upload

It must be possible to upload, in bulk, batches of tests that were auto-generated or externally developed.

Community contribution

Members of the public should be able to submit tests, test cases, and test results.

Provide review and acceptance procedure for submitted tests

A review and acceptance mechanism will be needed; submitted content should not be automatically assumed to be valid or useful.

Non-technical users

It must be possible to submit content to the repository without requiring deep technical knowledge or insider knowledge on the submitter’s part.

Clear license for anything submitted to the repository

The license applied to content contributed to the repository must be clear to all users.

Harness

The test harness is a support tool that facilitates execution of test cases, using particular test files, and storage of results in the repository. It may also be the interface that allows contribution of test files and test cases. In general the test harness should do the following (a sketch of one possible interface follows the list):

  • Expose test case and test file information to testers
  • Provide break-down of every tested requirement for a spec
  • Collect test results one-by-one and in bulk
  • Manage workflow for comprehensive test of spec
  • Support automated testing
  • Support manual testing
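The sketch below shows one possible harness interface reflecting these responsibilities; the method names and shapes are hypothetical, not a defined API:

    // Hypothetical harness interface; none of these names are defined by W3C.
    interface TestHarness {
      // Expose test case and test file information to testers.
      describeCase(caseId: string): Promise<{ instructions: string; fileIds: string[] }>;
      // Break down, per spec requirement, which test cases cover it.
      coverage(specUri: string): Promise<Record<string, string[]>>;
      // Collect test results one-by-one and in bulk.
      submitResult(result: object): Promise<void>;
      submitResults(results: object[]): Promise<void>;
      // Manage workflow: hand the tester the next untested case, if any.
      nextCase(specUri: string, testerId: string): Promise<string | null>;
      // Support both automated and manual execution.
      run(caseId: string, mode: "automated" | "manual"): Promise<"pass" | "fail">;
    }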