Requirements for "Techniques for Automated and Semi-Automated Evaluation Tools"
This document provides an initial set of requirements to be incorporated into the document "Techniques for Automated and Semi-Automated Evaluation Tools". Further refinements of this document will take place through discussions in the Evaluation and Repair Tools Working Group (ERT WG).
- This version:
- http://www.w3.org/WAI/ER/WD-AERT/ED-requirements20130517
- Previous published version:
- http://www.w3.org/WAI/ER/WD-AERT/ED-requirements20130514
- Editor:
- Carlos A Velasco, Fraunhofer Institute for Applied Information Technology FIT
Copyright © 2013 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
Purpose of the document
This document gathers requirements for the document "Techniques for Automated and Semi-Automated Evaluation Tools", hereafter called the main document. This requirements document will also present some scenarios for the use of the main document.
The purpose of the document "Techniques for Automated and Semi-Automated Evaluation Tools" is to present typical features of web accessibility evaluation tools that will support the reader in defining different tool profiles.
Objectives of the document
The objectives of the document "Techniques for Automated and Semi-Automated Evaluation Tools" include the following:
- Describe to developers of web accessibility evaluation tools the typical features of such tools, and briefly present possible issues with these features (examples of such features are listed in the section "Typical features of a web accessibility evaluation tool").
- Define profiles of web accessibility evaluation tools according to different combinations of the aforementioned features.
- Help developers of web accessibility evaluation tools understand the different types of web accessibility tests: automatic, semi-automatic, and manual.
- Help developers of web accessibility evaluation tools understand how to use WCAG 2.0 success criteria, sufficient techniques, advisory techniques, and common failures for web accessibility testing.
In addition, the document may provide further information to help developers of web accessibility evaluation tools present test results to different audiences and integrate their tools into different development workflows.
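To illustrate the distinction between automatic and semi-automatic tests mentioned above, the following is a minimal sketch (not part of the main document, and deliberately simplified): a missing alt attribute on an image can be failed automatically against WCAG 2.0 Success Criterion 1.1.1 (common failure F65), whereas an existing alt text can only be flagged for human review, since a machine cannot judge whether it is an adequate text alternative.

```python
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Sketch of an accessibility check for <img> elements
    against WCAG 2.0 SC 1.1.1 (text alternatives)."""

    def __init__(self):
        super().__init__()
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            # Automatic test: a missing alt attribute is always a
            # failure (WCAG 2.0 common failure F65).
            self.results.append(("fail", attrs.get("src")))
        else:
            # Semi-automatic test: only a human can decide whether
            # the alt text is an adequate text alternative.
            self.results.append(("needs-review", attrs.get("src")))

checker = ImgAltChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print(checker.results)
# → [('fail', 'logo.png'), ('needs-review', 'chart.png')]
```

A real tool would, of course, cover many more checks and report machine-verifiable outcomes separately from those requiring human judgement.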
Audience of the document
The document "Techniques for Automated and Semi-Automated Evaluation Tools" is targeted mainly at development managers and developers of web accessibility evaluation tools. In this scope, we will not distinguish between commercial and open source developers, although there are use cases and issues that may be more relevant to one group than to the other.
A secondary audience of this document is users of accessibility evaluation tools, such as accessibility experts or web developers.
Types of tools included
Examples of tools that are
included are:
- Industrial/commercial and open source tools, which test complete web
sites or web applications.
- Focused tools, which test a specific aspect of accessibility, for instance contrast of images, accessibility of forms, ARIA implementation, etc.
- Tools that support research with users or developers on specific aspects of accessibility.
Typical features of a web accessibility evaluation tool
The document will contain descriptions of different features
that are included in accessibility evaluation tools, which help to classify
them and to identify their limitations. Typical examples include:
- ability to crawl large web sites or portals
- types of web technologies handled by the tool, for instance HTML
markup, stylesheets, PDF documents, Flash applications, multimedia,
etc.
- ability to integrate dynamic content generated via scripting
(dynamic modification of the Document Object Model according to the user
interaction with the application, etc.)
- support for testing APIs, such as the WebDriver API
- support for standard reporting languages like EARL
- support for different accessibility compliance environments in
different countries
- integration into the web development workflow as a plug-in or add-on for different Integrated Development Environments (open source or commercial)
- multilinguality and internationalization
- etc.
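To illustrate the reporting feature listed above, a tool could serialize each test result as an assertion in the EARL 1.0 vocabulary. The following sketch builds one such assertion as a Turtle snippet; the subject, assertor, and test URIs are hypothetical examples, not values prescribed by the main document.

```python
def earl_assertion(subject, test, outcome):
    """Return a single earl:Assertion serialized as Turtle.

    `outcome` is an EARL outcome term such as "passed", "failed",
    or "cantTell" (the latter suits semi-automatic results).
    """
    return f"""@prefix earl: <http://www.w3.org/ns/earl#> .

[] a earl:Assertion ;
   earl:assertedBy <https://example.org/tool> ;
   earl:subject <{subject}> ;
   earl:test <{test}> ;
   earl:result [ a earl:TestResult ;
                 earl:outcome earl:{outcome} ] .
"""

# Hypothetical example: reporting a failure of WCAG 2.0 SC 1.1.1
snippet = earl_assertion(
    "https://example.org/page.html",
    "https://www.w3.org/TR/WCAG20/#text-equiv",
    "failed",
)
print(snippet)
```

Emitting results in a standard RDF vocabulary like EARL lets reports from different tools be aggregated and compared, which is one motivation for listing it among the typical features.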
Scenarios
Here we will present two or more scenarios that put the recommendations of the document into context.
John: a development manager
John is a development manager at a small software company creating testing tools for mobile and desktop web applications. Due to increasing demand from customers, the company is evaluating the possibility of extending its software to evaluate web accessibility. John consults the document to get a general overview of the typical features of accessibility evaluation tools. He also gathers information about resources that help him understand the implications of this new functionality and how the company's existing tools map onto the profiles defined in the document. He creates a matrix comparing the existing characteristics of his tool with the features of accessibility tools. With the result of this comparison, he is able to estimate the effort necessary to implement the new features and create an implementation roadmap.
Issues not covered in this document
The following issues are not covered in this
document:
- Procurement and acquisition issues for this type of tool, which are covered elsewhere
- Interpretation of WCAG 2.0 success criteria and
techniques
- How to interpret standards and recommendations related to web
technologies
References
- Web Content Accessibility Guidelines (WCAG) 2.0
- Website Accessibility Conformance Evaluation Methodology 1.0
- Developer Guide for Evaluation and Report Language (EARL) 1.0
- UWEM, Unified Web Evaluation Methodology, version 1.2
- Requirements for web developers and web commissioners in ubiquitous Web 2.0 design and development (January 2012)
- ACCESSIBLE project
Table of contents
What follows is a preliminary table of contents for the document [Editorial note: urgent feedback from the working group is needed]:
- Abstract
- Status of this document
- Introduction
- Audience of this document
- Document conventions
- Complementary resources
- Typical features of an evaluation tool
- Example profiles of evaluation tools
- References
- Appendix A: Customising results to different audiences
- Appendix B: Integrating the evaluation procedure into the development testing workflows