Appendix to the agenda: Requirements draft

Dear Eval TF,

In our call, we will further discuss the questions on the list. Please also respond online. Following our last call, below you find a first draft of the possible requirements for the methodology. We will discuss this further tomorrow in our call:

First Draft Section on Requirements

* Objectives:
The main objective is an internationally harmonized methodology for evaluating the conformance of websites to WCAG 2.0. This methodology will support different contexts, such as self-assessment or third-party evaluation of small or large websites.
It is intended to cover recommendations for sampling web pages and for expressing the scope of a conformance claim, critical path analyses, computer-assisted and manual content selection, the evaluation of web pages, and the integration and aggregation of evaluation results and conformance statements. The methodology will also address tolerance metrics.
The Methodology also includes recommendations for harmonized (machine-readable) reporting.
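To make the sampling idea above concrete, here is a minimal sketch of computer-assisted content selection combined with a fixed, manually chosen set of pages. The split between a "critical path" set and a random sample, and all names and URLs, are illustrative assumptions only; the methodology has not yet defined how sampling would actually work.

```python
import random

def select_sample(all_pages, critical_path_pages, random_size, seed=0):
    """Return manually chosen critical-path pages plus a reproducible
    random sample of the remaining pages (hypothetical approach)."""
    remaining = [p for p in all_pages if p not in critical_path_pages]
    rng = random.Random(seed)  # fixed seed supports replicability (cf. R04)
    extra = rng.sample(remaining, min(random_size, len(remaining)))
    return list(critical_path_pages) + extra

# Hypothetical site with 50 pages, two of which lie on a critical path
pages = [f"http://example.org/page{i}" for i in range(50)]
critical = ["http://example.org/page0", "http://example.org/page1"]
sample = select_sample(pages, critical, random_size=5)
print(sample)
```

Seeding the random generator is one way different evaluators could draw the same sample from the same site, which bears on the replicability requirement.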

This work complements other related W3C/WAI activities around evaluation and testing.
More on the EvalTF page.

* Target Audience:
A01: All organizations evaluating one or more websites
A02: Web accessibility benchmarking organizations
A03: Web content producers wishing to evaluate their content
A04: Developers of Evaluation and Repair Tools
A05: Policy makers and website owners wishing to evaluate websites

The person(s) using the Methodology should be knowledgeable about the Guidelines and about people with disabilities.

* Requirements:
R01: Technical conformance to existing Web Accessibility Initiative (WAI) Recommendations and Techniques documents.
R02: Tool- and browser-independent
R03: Unique interpretation
R04: Replicability: different Web accessibility evaluators who perform the same tests on the same site should get the same results within a given tolerance.
R05: Translatable
R06: The methodology points to the existing tests in the techniques documents and does not reproduce them.
R07: Support for both manual and automated evaluation.
R08: Addresses the users listed in the target audience (see above)
R09: Support for different contexts (e.g. self-assessment, third-party evaluation of small or large websites).
R10: Includes recommendations for sampling web pages and for expressing the scope of a conformance claim
R11: Describes critical path analyses.
R12: Covers computer-assisted and manual content selection.
R13: Includes integration and aggregation of the evaluation results and related conformance statements.
R14: Includes tolerance metrics.
R15: The Methodology includes recommendations for harmonized (machine-readable) reporting.
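As an illustration of R13–R15, here is a minimal sketch of what aggregating per-page results into a harmonized, machine-readable report might look like. The JSON structure, field names, and outcome values are assumptions for illustration only; a harmonized format such as W3C's EARL (Evaluation and Report Language) would define the actual schema.

```python
import json

def report_entry(page_url, success_criterion, outcome):
    """One result record for a single WCAG 2.0 success criterion on one page.
    Field names are hypothetical, not part of any agreed format."""
    return {
        "page": page_url,
        "criterion": success_criterion,  # e.g. "1.1.1"
        "outcome": outcome,              # "pass" | "fail" | "cannotTell"
    }

results = [
    report_entry("http://example.org/", "1.1.1", "pass"),
    report_entry("http://example.org/contact", "1.3.1", "fail"),
    report_entry("http://example.org/contact", "1.1.1", "pass"),
]

# Aggregation (R13) with a simple pass rate as a stand-in tolerance
# metric (R14); the real methodology would define how this is computed.
pass_rate = sum(r["outcome"] == "pass" for r in results) / len(results)

report = {
    "scope": "http://example.org/",           # scope of the claim (R10)
    "conformanceTarget": "WCAG 2.0 Level AA",
    "passRate": pass_rate,
    "results": results,
}

print(json.dumps(report, indent=2))           # machine-readable output (R15)
```

Serializing to a common structure like this is what would let results from different evaluators and tools be compared and aggregated.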

The methodology describes the expected level of expertise for persons carrying out the evaluation and the possibility of conducting evaluations in teams with defined roles. It also describes the necessity of involving people with disabilities.

Received on Wednesday, 31 August 2011 11:58:42 UTC