This document describes features that any web evaluation tool (including web quality assurance tools) can incorporate to support the evaluation of accessibility requirements such as the Web Content Accessibility Guidelines (WCAG) 2.0. Its main purpose is to raise awareness of such features and to provide introductory guidance for tool developers on the kinds of features they could provide in future implementations of their tools. The document can also be used to compare the features provided by different types of tools in other scenarios, for example during the procurement of such tools.

The features in scope of this document include capabilities to help specify, manage, carry out and report the results from accessibility evaluations. For example, some of the described features relate to crawling websites, interacting with tool users during semi-automated evaluation, and providing evaluation results in a machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0.

This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. It neither prioritizes nor requires any particular accessibility evaluation feature or specific type of evaluation tool.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

Table of contents

1 Introduction

Designing, developing, monitoring and managing a website typically involves a variety of tasks and actors who use different types of tools. For example, a web developer might use an integrated development environment (IDE) or a text editor to create templates for a content management system (CMS), while an editor/content author will typically use the content-editing facility provided by the CMS to create the web pages. Ideally accessibility evaluation is carried out throughout the process and by everyone involved. For example, web developers should ensure that any headings provided in templates are coded appropriately, whilst content authors should ensure that any images added to web pages have appropriate text alternatives.

Evaluation tools can assist accessibility evaluation in many different ways. For example, tools can assist:

In the following, these actors are referred to as the users of the web evaluation tool, unless a feature demands a specific profile or expertise.

In the context of this document, an evaluation tool is a software application that enables its users to test web content against specific quality assurance criteria. This includes but is not limited to the following (non-mutually-exclusive) types of tools:

The accessibility evaluation features listed and described in Section 2 can be incorporated by evaluation tools to provide support for accessibility evaluation. Section 3 provides example profiles of evaluation tools with accessibility evaluation features. This document does not describe the evaluation of specific web content features, which is addressed by WCAG 2.0.

W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria such as the features listed in this document.

2 Features of an accessibility evaluation tool

[Review Note: Feedback on this section is particularly welcome, specifically with suggestions for accessibility evaluation features that are not listed below and with comments to refine listed accessibility evaluation features.]

The accessibility evaluation features listed and described below are not exhaustive, and it may be neither possible nor desirable for a single tool to implement all of them. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can use this document to learn about relevant features to look for.

The features of an accessibility evaluation tool are presented below from different perspectives: the subject to be evaluated (i.e., web content and the environment that enables its rendering by the user agent to the end user), the testing requirements, the reporting customization capabilities of the tool, and other usage characteristics such as integration into the user's development and editing workflow. In this document, the terms feature and characteristic are used interchangeably.

2.1 Test subject and its environment

This category includes features that help to retrieve and render different types of content. Some tools retrieve the content to be analyzed from the file system or from a database, but the majority do so over the network via the HTTP(S) protocol. The rest of the document focuses mostly on this scenario.

Due to the characteristics of the HTTP(S) protocol, rendering a web resource implies the manipulation and storage of many other components associated with it, such as request and response headers, session information, cookies and authentication information. These associated components are what we call the environment of the test subject; they are described in the following sections.

2.1.1 Content types

Although the majority of web resources are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources like CSS stylesheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or via user interaction. Many accessibility tests depend on the interpretation of these resources, which makes them important for an accessibility evaluation.

In general, the following types of content formats can be distinguished:

The accessibility evaluation tool should describe which of the aforementioned formats can be parsed and evaluated with the tool.

2.1.2 Content encoding and content language

This feature identifies which content languages and encodings are supported by the evaluation tool. The web is a multilingual and multicultural space in which information can be presented in different languages, so evaluation tools should be in a position to address this. Furthermore, web content can be transmitted using different character encodings and sets (such as ISO-8859-1, UTF-8 or UTF-16), which evaluation tools must be able to handle.

More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].
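
As a minimal sketch of this capability, a tool might pick the character encoding out of the Content-Type response header before decoding the body; the function name and fallback are assumptions, not part of any standard API:

```python
def decode_body(body: bytes, content_type: str, default: str = "utf-8") -> str:
    """Decode an HTTP response body using the charset declared in Content-Type.

    Falls back to a default encoding when no charset parameter is present.
    """
    charset = default
    for part in content_type.split(";"):
        part = part.strip()
        if part.lower().startswith("charset="):
            charset = part.split("=", 1)[1].strip('"').strip("'")
    return body.decode(charset)

# A Latin-1 body with an accented character is decoded correctly:
text = decode_body("café".encode("iso-8859-1"), "text/html; charset=ISO-8859-1")
```

A production tool would also honor encoding declarations inside the document itself (e.g., a meta element), which can disagree with the HTTP header.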

2.1.3 DOM document fragments

Many websites are generated dynamically by combining code templates with HTML snippets that are created by website editors. Some evaluation tools may be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to test these snippets as developers and/or editors create them.

Usually this is implemented in the evaluation tools by creating DOM document fragments [DOM] from these snippets. Evaluation tools may also filter the accessibility tests according to their relevance to the document fragment.
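
To illustrate testing a snippet in isolation, the sketch below parses an HTML fragment with Python's standard-library parser and flags img elements without a text alternative; treating an empty alt as needing human review is an assumption of this sketch (empty alt is valid for decorative images):

```python
from html.parser import HTMLParser

class FragmentImgCheck(HTMLParser):
    """Collect <img> elements in an HTML snippet that lack a text alternative."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Absent or empty alt is flagged here for human review
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

checker = FragmentImgCheck()
checker.feed('<p>Logo: <img src="logo.png"> <img src="ok.png" alt="Our logo"></p>')
```

Because only the fragment is parsed, page-level tests (e.g., document title or landmark structure) would be filtered out as irrelevant.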

2.1.4 Dynamic content

Web and cloud applications are increasingly common. These applications present interaction patterns similar to those of desktop applications and contain dynamic content and interface updates. Tools that evaluate such applications should emulate and record different user actions (e.g., activating interface components by clicking with the mouse, swiping a touch screen or using the keyboard) that modify the state of the current page or load new resources. The evaluation tool needs to define and record these intermediate steps so that they can later be interpreted by the tool (see the section on web testing APIs).
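
One way to record such intermediate steps is as replayable data. The model below is hypothetical (the action kinds and selector format are assumptions), but it shows the shape such a recording might take:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One recorded user action in an interaction script (hypothetical model)."""
    kind: str      # e.g. "click", "type", "swipe", "key"
    target: str    # CSS selector of the affected component
    value: str = ""

# A script reaching an intermediate state of a dynamic page:
script = [
    Action("click", "#menu-toggle"),
    Action("type", "#search", "accessibility"),
    Action("key", "#search", "Enter"),
]

# A tool would replay each step, re-evaluating the DOM after every action.
targets = [a.target for a in script]
```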

2.1.5 Content negotiation

Content negotiation is a characteristic of the HTTP(S) protocol that enables web servers to customize the representation sent for a requested resource according to the demands of the client user agent. Because of this, identifying a resource on the web by its Uniform Resource Identifier (URI) alone may not be sufficient. To support content negotiation, the testing tool customizes and stores the HTTP headers according to different criteria (see discussion in the following sections):
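
For example, a tool might re-request the same URI with different header sets and store each negotiated variant alongside the headers that produced it. The profile names and User-Agent strings below are invented for illustration:

```python
from urllib.request import Request

# Hypothetical header profiles a tool might switch between:
profiles = {
    "desktop-en": {"Accept": "text/html", "Accept-Language": "en",
                   "User-Agent": "EvalTool/1.0 (desktop)"},
    "mobile-de":  {"Accept": "text/html", "Accept-Language": "de",
                   "User-Agent": "EvalTool/1.0 (mobile)"},
}

# Build (but do not send) a request for one variant; the tool can record
# these headers next to the response it eventually receives.
req = Request("http://example.org/", headers=profiles["mobile-de"])
lang = req.get_header("Accept-language")
```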

2.1.6 Cookies

A cookie is a name-value pair that is stored by the user agent [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information exchanged between the client and the server, which, as seen before, may be relevant for content negotiation.

A tool that supports cookies may store the cookie information provided by the server in an HTTP response and reuse it in subsequent requests. It may also allow the user to manually set cookie information to be used in the HTTP requests.
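
A minimal sketch of this store-and-reuse cycle, using the standard-library cookie parser (real tools would also honor Path, Domain and expiry attributes, which this sketch ignores):

```python
from http.cookies import SimpleCookie

jar = {}

def store_response_cookies(set_cookie_header: str) -> None:
    """Parse a Set-Cookie header and keep the name/value pairs for reuse."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    for name, morsel in cookie.items():
        jar[name] = morsel.value

def cookie_header() -> str:
    """Build the Cookie header for a subsequent request."""
    return "; ".join(f"{n}={v}" for n, v in jar.items())

store_response_cookies("SESSIONID=abc123; Path=/; HttpOnly")
```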

2.1.7 Authentication

Websites sometimes require authentication (e.g., HTTP authentication, OpenID, etc.) to control access to certain parts of the website or to present customized content to authenticated users.

A tool that supports authentication allows users to provide their credentials beforehand, so that they are used when accessing protected resources, or it prompts users to enter their credentials upon the server's request. The tool may also support the use of different credentials for different parts of a website.
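
For the HTTP Basic scheme, the pre-configured credentials translate directly into a request header. The per-area credential map is a hypothetical illustration of using different credentials for different parts of a site:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authentication header value (RFC 7617)."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Hypothetical credentials configured per protected site area:
credentials = {"/admin/": ("admin", "secret"), "/members/": ("alice", "pw")}

header = basic_auth_header(*credentials["/members/"])
```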

2.1.8 Session tracking

Within HTTP, session information can be used for different purposes, such as implementing security mechanisms (login information, logging out a user after a long inactivity period) or tracking the interaction paths of users.

Session information can be stored, for example, in the user agent's local storage, in a session ID embedded in the URL, or in a cookie. An evaluation tool that supports session tracking should be able to handle these different scenarios.
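
A sketch of recognizing the two transport-visible cases, URL parameter and cookie (the parameter and cookie names checked here are common conventions, not a standard; local storage would have to be inspected through the rendering engine instead):

```python
import re
from urllib.parse import urlparse, parse_qs

def find_session_id(url="", cookie_header=""):
    """Look for a session identifier in the URL query or in a Cookie header."""
    query = parse_qs(urlparse(url).query)
    for key in ("sessionid", "jsessionid", "sid"):   # assumed common names
        if key in query:
            return query[key][0]
    match = re.search(r"(?:^|;\s*)(?:SESSIONID|PHPSESSID)=([^;]+)", cookie_header)
    return match.group(1) if match else None

sid = find_session_id("http://example.org/cart?sid=xyz789")
```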

2.1.9 Crawling

Some evaluation tools incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks from web resources. Many types of resources on the web contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong results in the evaluation process.

A web crawler is configured with a starting point and a set of options. The most common configuration capabilities of a web crawler are:
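
For instance, a breadth-first crawl bounded by a page limit and restricted to the starting host can be sketched as follows. The fetch function is injected so the logic runs without network access; collecting both href and src reflects that non-HTML resources are also linked:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href/src hyperlinks from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def crawl(start, fetch, max_pages=10):
    """Breadth-first crawl limited to the start page's host."""
    host = urlparse(start).netloc
    seen, queue = set(), [start]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != host:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

# A fake two-page site stands in for HTTP fetching:
pages = {"http://a.example/": '<a href="/b">b</a> <a href="http://other.example/">x</a>',
         "http://a.example/b": '<img src="/logo.png">'}
found = crawl("http://a.example/", lambda u: pages.get(u, ""))
```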

2.2 Testing functionality

This category includes features targeted to the configuration of the tests to be performed.

2.2.1 Selection of evaluation tests

Accessibility evaluation tools may offer the possibility to select a given subset of evaluation tests, or even a single one. Typical examples are testing against one of the conformance levels (A, AA or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests for a single technique or common failure.

This feature should not be confused with tools that focus on testing a single characteristic of a web page, such as a tool that tests color contrast.
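
Level-based selection can be sketched as filtering a test catalog. The catalog structure is hypothetical; the technique identifiers are real WCAG 2.0 techniques (H37: alt attributes on img; G18/G17: contrast ratios):

```python
# Hypothetical test catalog keyed by WCAG 2.0 conformance level:
catalog = [
    {"id": "H37", "name": "img elements have alt attributes", "level": "A"},
    {"id": "G18", "name": "contrast ratio of at least 4.5:1", "level": "AA"},
    {"id": "G17", "name": "contrast ratio of at least 7:1",   "level": "AAA"},
]

def select_tests(max_level):
    """Return the subset of tests up to the requested conformance level."""
    order = {"A": 1, "AA": 2, "AAA": 3}
    return [t for t in catalog if order[t["level"]] <= order[max_level]]

aa_tests = [t["id"] for t in select_tests("AA")]
```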

2.2.2 Test modes: automatic, semiautomatic and manual

According to the Evaluation and Report Language (EARL) specification [EARL10], there are three types of modes to perform accessibility tests:

Some evaluation tools support accessibility experts in performing semiautomatic or manual tests. This support is normally provided by highlighting areas in the source code or in the rendered document that could cause accessibility problems or that need human intervention (for instance, judging the adequacy of a given text alternative for an image).

Tools may keep provenance information (i.e., which part of the report was automatically generated by the tool and which was manually modified).
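
A minimal sketch of a result record carrying its test mode and provenance. The mode and outcome terms (earl:automatic, earl:semiAuto, earl:manual; earl:passed, earl:failed) come from EARL 1.0, while the record structure itself is an assumption of this sketch:

```python
# EARL 1.0 test modes:
MODES = {"earl:automatic", "earl:semiAuto", "earl:manual"}

def make_result(test, outcome, mode):
    """Create a test result annotated with its EARL test mode."""
    if mode not in MODES:
        raise ValueError(f"unknown test mode: {mode}")
    return {"test": test, "outcome": outcome, "mode": mode,
            "edited_by_human": False}

result = make_result("H37", "earl:failed", "earl:automatic")
# A human later overrides the outcome; the tool records the provenance:
result.update(outcome="earl:passed", mode="earl:manual", edited_by_human=True)
```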

Some tools fail to declare that they only perform automatic testing. Since automatic tests cover only a small set of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts who test in manual and semiautomatic modes.

2.2.3 Development of own tests and test extensions

Developers and quality assurance engineers sometimes need to implement their own tests. For that purpose, some tools define an API through which developers can create their own tests, responding to internal demands within their organization.

2.2.4 Test automation

When evaluating the accessibility of websites and applications it is sometimes desirable to create scripts that emulate user interaction. With the growing complexity of web applications, there has been an effort to standardize such interfaces; one example is the WebDriver API [WebDriver]. Tools that support this API enable testers to write tests that automate the application's and end users' behavior.
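
The shape of such a scripted interaction can be sketched as follows. The driver here is a stand-in so the example runs without a browser; a real tool would use a WebDriver client, and the selectors and URL are hypothetical:

```python
class FakeDriver:
    """Stand-in for a WebDriver client; it only records the commands issued."""
    def __init__(self):
        self.log = []
    def get(self, url):
        self.log.append(("get", url))
    def click(self, selector):
        self.log.append(("click", selector))
    def send_keys(self, selector, text):
        self.log.append(("type", selector, text))

def login_scenario(driver):
    """An automated interaction a tester might script before evaluating the
    resulting page state."""
    driver.get("http://example.org/login")
    driver.send_keys("#user", "alice")
    driver.send_keys("#password", "pw")
    driver.click("#submit")

driver = FakeDriver()
login_scenario(driver)
```

Decoupling the scenario from the driver in this way also lets the same script run against different browsers or device emulators.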

2.3 Reporting and monitoring

This category includes features related to the tool's ability to present, store, import, export and compare testing results in different ways. In this section the term report must be interpreted in its widest sense: it could be a set of screens presenting different tables and graphics, a set of icons superimposed on the displayed content indicating different types of errors and warnings, an HTML document summarizing the evaluation results, a word processor document summarizing the evaluation results, etc.

2.3.1 Standard reporting languages

Support for standard reporting languages like EARL [EARL10] is a requirement for many users. Tool users may want to exchange results, compare evaluation results across tools, import and export results (for instance, when tool A does not test a given problem but tool B does), filter results, etc., and support for a standardized language facilitates such tasks.

Although they may not be considered standardized, some tools support exporting test results in other formats like Comma-Separated Values (CSV) [CSV, TABDATA].
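
A CSV export of results can be sketched with the standard library; the column names and result records are assumptions of this sketch:

```python
import csv, io

# Hypothetical evaluation results to be exported:
results = [
    {"url": "http://example.org/", "test": "H37", "outcome": "earl:failed"},
    {"url": "http://example.org/", "test": "G18", "outcome": "earl:passed"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["url", "test", "outcome"])
writer.writeheader()
writer.writerows(results)
exported = buffer.getvalue()
```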

2.3.2 Persistence of results

The implementation of monitoring features requires that the tool have a persistence layer (a database, for example) where results can be stored and later retrieved to compare different evaluation rounds.

2.3.3 Import/export functionality

In many evaluation methodologies, accessibility experts and quality assurance engineers use different tools. If the evaluation tool supports import and export of test results (for instance, in EARL format [EARL10], as JSON [JSON], in a CSV [CSV, TABDATA] file, etc.), the tool may be easily integrated in such environments.

2.3.4 Report customization

This feature allows the customization of the resulting report according to different criteria, such as the target audience, the type of results, the part of the site being analyzed, the type of content, etc. This feature may also allow the developer or the accessibility expert to add additional comments in the report.

2.3.5 Results aggregation

The presentation of evaluation results is influenced by the underlying hierarchy of accessibility guidelines, success criteria and techniques. Aggregation is also related to the structure of the page: for instance, accessibility errors may be listed for a whole web resource or presented per component, such as images, videos, tables or forms.
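
Both aggregation axes can be sketched as counting a flat result list two ways. The result records are hypothetical; the success criteria cited (1.1.1 Non-text Content, 1.4.3 Contrast) are real WCAG 2.0 criteria:

```python
from collections import Counter

# Hypothetical flat result list; each result knows its success criterion
# and the component type it applies to:
results = [
    {"criterion": "1.1.1", "component": "image", "outcome": "fail"},
    {"criterion": "1.1.1", "component": "image", "outcome": "pass"},
    {"criterion": "1.4.3", "component": "text",  "outcome": "fail"},
]

# Aggregate failures by success criterion and by page component:
by_criterion = Counter(r["criterion"] for r in results if r["outcome"] == "fail")
by_component = Counter(r["component"] for r in results if r["outcome"] == "fail")
```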

2.3.6 Conformance

Many customers demand conformance statements to quickly assess the status of their website. When issuing such statements it is necessary to take into account the different types of accessibility techniques (i.e., common failures, sufficient techniques, etc.) and to aggregate results as described in the previous section.

As described in Section 2.2.2, full accessibility conformance can only be verified when manual testing has been carried out.

2.3.7 Error repair

The majority of web developers have little or no knowledge of web accessibility. Together with their reporting capabilities, tools may provide additional information that supports developers and accessibility experts in correcting the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. If the evaluation tool is part of an authoring tool as described in the Authoring Tool Accessibility Guidelines 2.0 [ATAG], it will thereby meet success criterion B.3.2.1.

Automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects. Repair support may instead take the form of, for example, a step-by-step wizard that guides the evaluator through correcting the problems found.

2.4 Tool usage

This section includes characteristics that describe the tool's integration into the user's development and editing workflow, or that are targeted at customizing different aspects of the tool depending on its audience, such as user interface language, user interface functionality and user interface accessibility.

2.4.1 Workflow integration

Accessibility evaluation tools present different interfaces that allow their integration into the user's standard development workflow. The typical ones are the following:

2.4.2 Localization and internationalization

Localization and internationalization are important for addressing worldwide markets. Tool users may not speak English, so it is necessary to adapt the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N].

From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines. Note also that some accessibility tests, such as those related to readability, need to be customized to other languages.

2.4.3 Functionality customization to different audiences

Typically, evaluation tools are targeted at web accessibility experts with deep knowledge of the topic. However, some tools allow the customization of the evaluation results, or even the user interface functionality, for other audiences such as:

The availability of such characteristics should be declared explicitly and presented in a way that is adequate for these target user groups.

2.4.4 Policy environments

Although there is an international effort toward harmonization of legislation regarding web accessibility, there are still minor differences between the accessibility policies of different countries. The tool should specify in its documentation which policy environments are supported. Most tools focus on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], as it is the most common reference for such policies worldwide.

2.4.5 Tool accessibility

Accessibility evaluation teams may include people with disabilities. It is therefore relevant that the tool itself can be used with different assistive technologies and that it integrates with the accessibility APIs of the operating system it runs on. Compliance with the Authoring Tool Accessibility Guidelines 2.0 [ATAG] thus becomes an important feature, both from the perspective of the tool's user interface and of access to its results.

Additionally, reports produced by the tool (for instance, in HTML format) should themselves be accessible and comply with the Web Content Accessibility Guidelines 2.0 [WCAG20].

3 Example profiles of evaluation tools

[Review Note: The editors and the Working Group will welcome suggestions on new profiles from tools to extend the listed examples. Furthermore, we welcome feedback on the granularity of these profiles and their tabular presentation below.]

This section presents three examples of accessibility evaluation tools. They are provided for illustration purposes and do not represent existing products. Each subsection highlights some of the key features of the tool, and the table at the end of the chapter summarizes and complements these descriptions.

3.1 Tool A: Browser plug-in evaluating a rendered HTML page

Tool A is a browser plug-in with which the user can perform a quick automatic accessibility evaluation on a rendered HTML page. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.2 Tool B: Large-scale accessibility evaluation tool

Tool B is a large-scale accessibility evaluation tool used to analyze large web sites. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.3 Tool C: Accessibility evaluation tool for mobile applications

Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment, based on a virtual machine, that emulates the accessibility API of some devices. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.4 Overview

This section presents a tabular overview of the characteristics of the tools described previously.

Table 1. List of features for the example tools described in section 3.
Category | Feature | Tool A | Tool B | Tool C
Test subject and its environment | Content types | HTML, CSS and JavaScript | HTML, CSS and JavaScript | HTML, CSS and JavaScript
 | Content encoding and content language | ISO-8859-1, UTF-8, UTF-16; any language supported by these encodings | ISO-8859-1, UTF-8; any language supported by these encodings | ISO-8859-1, UTF-8; any language supported by these encodings
 | DOM document fragments | no | no | no
 | Dynamic content | relies on browser capabilities | yes | relies on browser capabilities
 | Content negotiation | relies on browser capabilities; not configurable | yes | relies on browser capabilities; not configurable
 | Cookies | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable
 | Authentication | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable
 | Session tracking | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable
 | Crawling | no | yes | no
Testing functionality | Selection of evaluation tests | no | yes | no
 | Test modes: automatic, semiautomatic and manual | only automatic | all | all
 | Development of own tests and test extensions | no | no | no
 | Test automation | no | no | yes
Reporting and monitoring | Standard reporting languages | EARL | EARL | none
 | Persistence of results | no | yes | no
 | Import/export functionality | EARL | EARL, CSV | no
 | Report customization | no | comments/results added by evaluator | no
 | Results aggregation | no | yes | no
 | Conformance | no | yes | no
 | Error repair | inline hints | in report | yes
Tool usage | Workflow integration | in browser | standalone | standalone
 | Localization and internationalization | en | en, de, fr, es, ja | en
 | Functionality customization to different audiences | developers | developers, commissioners | developers
 | Policy environments | no | Section 508 (USA), BITV (Germany) | no
 | Tool accessibility | not accessible | accessible in MS Windows (MSAA) | not accessible

4 References

The following are references cited in the document.

Authoring Tool Accessibility Guidelines (ATAG) 2.0. W3C Candidate Recommendation 7 November 2013. Jan Richards, Jeanne Spellman, Jutta Treviranus (editors). Available at: http://www.w3.org/TR/ATAG20/
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS Current Status is available at: http://www.w3.org/standards/techs/css#w3c_all
Common Format and MIME Type for Comma-Separated Values (CSV) Files. Y. Shafranovich. Internet Engineering Task Force (IETF). Request for Comments: 4180, 2005. Available at: http://tools.ietf.org/rfc/rfc4180.txt
W3C DOM4. W3C Last Call Working Draft 04 February 2014. Anne van Kesteren, Aryeh Gregor, Ms2ger, Alex Russell, Robin Berjon (editors). Available at: http://www.w3.org/TR/dom/
Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 04 February 2014. Robin Berjon, Steve Faulkner, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer, Ian Hickson (editors). Available at: http://www.w3.org/TR/html5/
HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
Internationalization and localization. Wikipedia. Available at: http://en.wikipedia.org/wiki/Internationalization_and_localization
The JSON Data Interchange Format. Standard ECMA-404 1st Edition / October 2013. Available at: http://www.ecma-international.org/publications/standards/Ecma-404.htm
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
Ecma international. TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
Model for Tabular Data and Metadata on the Web. W3C First Public Working Draft 27 March 2014. Jeni Tennison, Gregg Kellogg (editors). Available at: http://www.w3.org/TR/tabular-data-model/
W3C Internationalization (I18n) Activity. Available at: http://www.w3.org/International/
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Recommendation 20 March 2014. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 8 April 2014. Michael Cooper, Andrew Kirkpatrick, Joshue O Connor (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
Web crawler. Wikipedia. http://en.wikipedia.org/wiki/Web_crawler
WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/


The editors would like to thank the Evaluation and Repair Tools Working Group (ERT WG) for its contributions, and especially Yod Samuel Martín, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.