The purpose of this document is to support developers of web accessibility evaluation tools by identifying typical features of such tools and showing how tools can be classified according to different combinations of these features.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

Table of contents

[Editor note: it needs update!]

1 Introduction

There is a wide variety of web accessibility evaluation tools. This document intends to support evaluation tool developers in identifying the key characteristics of their tools. To achieve that, this document:

1.1 Audience of this document

This document is targeted mainly at development managers and developers of web accessibility evaluation tools.

A secondary audience of this document is users of accessibility evaluation tools, such as accessibility experts or web developers.

Examples of tools that are within the scope of the document include:

1.2 Background documents

This document must be seen in the context of several others. It is recommended that the reader review the following documents:

Throughout the document you will find additional pointers to other resources, such as standards, recommendations, and technical specifications, which are relevant to any developer interested in implementing an accessibility evaluation tool.

2 Typical features of an accessibility evaluation tool

In this section we describe typical features and functionalities of accessibility evaluation tools. The features of an accessibility evaluation tool can be presented from different perspectives: the subject being tested, the target audiences of the tool, the reporting and presentation of results, its configurability, etc. We have tried to be as complete as possible, but some features of existing or future evaluation tools may be omitted. The following list of characteristics does not follow any particular order.

It is very important that you analyse and describe, for your own development process and for your customers, which of these features your tool supports, and that you declare any limitations of your tool.

2.1 Types of document formats analyzed

Although the vast majority of web documents are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources like CSS stylesheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or via user interaction. Many accessibility tests depend on the interpretation of those resources, which are therefore important for an accessibility evaluation.

In general, we can distinguish these types of formats:

Most accessibility evaluation tools concentrate on markup evaluation, but the most advanced ones are able to process many of the formats described above.

2.2 Cookies

A cookie is a name-value pair that is stored in the browser of the user [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information. This information is also relevant to other tool features, such as crawling.
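As an illustration, an evaluation tool that fetches pages directly (rather than through a browser) has to store cookies from responses and resend them with subsequent requests itself. The following minimal sketch uses Python's standard library; the class and method names are illustrative, not part of any real tool:

```python
from http.cookies import SimpleCookie

class CookieJar:
    """Minimal cookie storage for a fetch-based evaluation tool (illustrative)."""

    def __init__(self):
        self.cookies = {}  # name -> value

    def store(self, set_cookie_header):
        # Parse a Set-Cookie response header and remember the name-value pair.
        parsed = SimpleCookie(set_cookie_header)
        for name, morsel in parsed.items():
            self.cookies[name] = morsel.value

    def cookie_header(self):
        # Build the Cookie request header to send back to the same site.
        return "; ".join(f"{n}={v}" for n, v in sorted(self.cookies.items()))

jar = CookieJar()
jar.store("SESSIONID=abc123; Path=/; HttpOnly")
jar.store("lang=en; Path=/")
```

A complete implementation would also honour cookie attributes such as `Domain`, `Path`, and `Expires`, as defined in [HTTPCOOKIES].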

2.3 Authentication support

Many sites require some kind of authentication (e.g., HTTP authentication, OpenID, etc.). An accessibility testing tool should be able to support the typical authentication scenarios. This is important because many sites present different content to authenticated users.
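One of the simplest schemes a tool can support is HTTP Basic authentication, where the credentials (supplied, for instance, in the tool's configuration) are base64-encoded into a request header. A minimal sketch:

```python
import base64

def basic_auth_header(username, password):
    """Build the value of the HTTP Authorization header for Basic
    authentication: base64("username:password") prefixed with "Basic"."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"
```

Form-based logins, OpenID, and similar schemes require more interaction (submitting forms, following redirects) and are better handled through a browser-automation interface such as the one discussed in section 2.14.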

2.4 Session tracking

For security reasons, some sites include the session ID in the URL or in a cookie. With this session information, websites can implement security mechanisms, for instance logging out a user after a long period of inactivity, or track typical user interaction paths.
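A tool that crawls or re-fetches pages may need to recognise session identifiers embedded in URLs, for instance to avoid treating the same page with different session IDs as different resources. A minimal sketch (the set of parameter names is an assumption for illustration; real sites vary widely):

```python
from urllib.parse import urlsplit, parse_qs

# Commonly seen session parameter names (illustrative, not exhaustive).
SESSION_PARAMS = {"sessionid", "jsessionid", "phpsessid", "sid"}

def extract_session_id(url):
    """Return the first session-like query parameter found in the URL, or None."""
    query = parse_qs(urlsplit(url).query)
    for name, values in query.items():
        if name.lower() in SESSION_PARAMS:
            return values[0]
    return None
```

Note that some frameworks place the session ID in the URL path rather than the query string, which this sketch does not cover.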

2.5 Crawling of sites

There are tools that incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks from web resources. It must be kept in mind that, as seen earlier, there are many types of resources on the web that contain hyperlinks. The misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.

A web crawler is configured with a starting point and a set of options. The critical features of a web crawler relate to its configuration capabilities. Among them, we can highlight:
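The core of the link-extraction step can be sketched as follows, using Python's standard library. This covers HTML only; as noted above, a complete crawler would also need to extract links from CSS, scripts, PDF documents, and other formats:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect hyperlinks from an HTML document, resolved against a base URL
    (a minimal sketch; it only looks at <a href> attributes)."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if tag == "a" and name == "href" and value:
                # Resolve relative references against the page's own URL.
                self.links.append(urljoin(self.base_url, value))

html = '<p><a href="/about">About</a> <a href="http://other.example/">Other</a></p>'
extractor = LinkExtractor("http://example.org/index.html")
extractor.feed(html)
```

A real crawler would feed the collected links back into a queue, filter them against its configured scope, and track visited URLs to avoid loops.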

2.6 Support for aggregation of results

Evaluation results can be presented in different ways. The presentation of results is influenced by the underlying hierarchy of accessibility techniques, guidelines, and success criteria. Aggregation is also related to the structure of the page: for instance, accessibility errors may be listed for a whole web resource or presented for individual components such as images.
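In essence, aggregation means grouping a flat list of individual test results along one axis or another. A minimal sketch (the result records and field names are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical flat list of results, as a tool's checker might emit them:
# each entry names a WCAG success criterion and the component it applies to.
results = [
    {"criterion": "1.1.1", "component": "img#logo",  "outcome": "failed"},
    {"criterion": "1.1.1", "component": "img#photo", "outcome": "failed"},
    {"criterion": "2.4.4", "component": "a#more",    "outcome": "passed"},
]

def aggregate(results, key):
    """Group outcomes either per success criterion or per page component."""
    groups = defaultdict(list)
    for r in results:
        groups[r[key]].append(r["outcome"])
    return dict(groups)
```

Calling `aggregate(results, "criterion")` yields the guideline-oriented view, while `aggregate(results, "component")` yields the page-structure-oriented view.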

Also, when issuing conformance statements, it is necessary to address the different types of techniques (e.g., common failures, sufficient techniques) and their implications.

2.7 Support for standard reporting languages

Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results between tools, import results (for instance, when tool A does not test a given problem but tool B does), filter results, etc. Due to its semantic nature, EARL is an adequate framework for exchanging and comparing results.
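To give a flavour of what such output looks like, the sketch below serializes one minimal EARL assertion as RDF/Turtle. It is deliberately reduced: a real report would also include the assertor, the test mode, and richer result metadata as defined in [EARL10]:

```python
EARL = "http://www.w3.org/ns/earl#"

def earl_assertion(subject_url, test_url, outcome):
    """Serialize one minimal EARL assertion as Turtle (illustrative only).
    `outcome` is an EARL outcome value such as "passed" or "failed"."""
    return (
        f"@prefix earl: <{EARL}> .\n"
        "[] a earl:Assertion ;\n"
        f"   earl:subject <{subject_url}> ;\n"
        f"   earl:test <{test_url}> ;\n"
        "   earl:result [ a earl:TestResult ;\n"
        f"                 earl:outcome earl:{outcome} ] .\n"
    )

snippet = earl_assertion("http://example.org/page.html",
                         "http://www.w3.org/TR/WCAG20/#text-equiv-all",
                         "failed")
```

In practice, most tools build such output through an RDF library rather than by string concatenation.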

2.8 Report customization according to different criteria

The results of your evaluation can be used in different circumstances. To support this, results could be filtered depending on:

2.9 Customization to different audiences

[Editor note: is it necessary to distinguish between UI customization and customization of the results' presentation?]

Typically, evaluation tools are targeted at web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results, or even the user interface, for other audiences, such as:

2.10 Localization

Localization is important for addressing worldwide markets. There may be cases where your customers do not speak English and you need to present your user interface and your reports in other languages. As a starting point, you can look into the authorized translations of the Web Content Accessibility Guidelines.

2.11 Support for different policy environments

Although there is an international effort toward harmonisation of legislation regarding web accessibility, there are still minor differences in accessibility policy between countries. It is important that you clearly define which of those policy environments your tool supports. Most tools focus on implementing the Web Content Accessibility Guidelines 2.0 [WCAG20], because it is the most common reference for those policies worldwide.

2.12 Evaluating document fragments

Nowadays it is often necessary to test fragments of HTML documents, coming for instance from a web editor in a Content Management System. For those cases, the tool must be able to turn the fragment into a testable document. Furthermore, the tool needs to filter the accessibility tests according to their relevance to the document fragment.
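One common approach is to embed the fragment in a minimal complete document before running the checker, so that parsers and document-level checks behave predictably. A sketch (the function name and wrapper markup are illustrative):

```python
def wrap_fragment(fragment, lang="en", title="Fragment under test"):
    """Embed an HTML fragment in a minimal complete document so it can be
    parsed and evaluated like a full page. Tests that only apply at the
    document level (e.g., checks on the page title) should additionally be
    filtered out when reporting on the fragment itself."""
    return (
        f'<!DOCTYPE html>\n<html lang="{lang}">\n<head>\n'
        f'<meta charset="utf-8">\n<title>{title}</title>\n</head>\n'
        f'<body>\n{fragment}\n</body>\n</html>'
    )

doc = wrap_fragment('<img src="logo.png">')
```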

2.13 Evaluating web applications

Web and cloud applications are becoming increasingly common on the web. These applications present interaction patterns similar to those of desktop applications. Tools that evaluate such applications must emulate the different user actions (e.g., activating interface components or filling in and submitting forms) that modify the state of the current page or load new resources. The user of such a tool would need to define these intermediate steps so that they can later be interpreted by the tool (see the following section).

2.14 Support for web testing APIs

When evaluating the accessibility of websites and applications, it is sometimes desirable to have custom scripts that emulate user interaction. With the growing complexity of web applications, there have been efforts to standardize such interfaces; one of them is the WebDriver API [WebDriver]. With such APIs, it is possible to write tests that automate browser behaviour.
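At the protocol level, WebDriver defines HTTP commands that a client sends to a driver controlling the browser. The sketch below constructs two such commands as (method, path, body) tuples; the endpoint paths and payload shapes follow the W3C WebDriver drafts, but you should check the specification and your driver's documentation for the exact form they expect:

```python
import json

def new_session_command(browser_name="firefox"):
    """Command to open a new browser session with the requested capabilities."""
    body = {"capabilities": {"alwaysMatch": {"browserName": browser_name}}}
    return ("POST", "/session", json.dumps(body))

def navigate_command(session_id, url):
    """Command asking the browser session to navigate to a URL."""
    return ("POST", f"/session/{session_id}/url", json.dumps({"url": url}))

method, path, body = navigate_command("abc123", "http://example.org/")
```

In practice, tools use a client library that wraps these commands, navigates through the intermediate steps defined by the user (see section 2.13), and hands the resulting DOM to the accessibility checker.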

2.15 Support for semiautomatic and manual tests

According to the Evaluation and Report Language specification [EARL10], there are three modes in which accessibility tests can be performed:

Most tools concentrate on testing those accessibility requirements that can be checked automatically, although some also support accessibility experts in performing the other two types of tests. This support is normally provided by highlighting areas in the source code or in the rendered page that could be causing accessibility problems or where human intervention is needed (for instance, to judge the adequacy of a given text alternative for an image).
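Internally, a tool can record the mode of each test using the EARL vocabulary and use it to decide which results still need an expert's attention. A minimal sketch:

```python
from enum import Enum

class TestMode(Enum):
    """The three test modes distinguished by EARL [EARL10]."""
    AUTOMATIC = "earl:automatic"
    SEMIAUTO = "earl:semiAuto"
    MANUAL = "earl:manual"

def needs_human_review(mode):
    """Semi-automatic and manual tests require an expert's judgement."""
    return mode is not TestMode.AUTOMATIC
```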

Sometimes tools do not openly declare that they only perform automatic testing. Since it is a known fact that automatic tests cover only a small subset of accessibility issues, accessibility conformance can only be ensured by supporting developers and accessibility experts with manual and semiautomatic testing as well.

2.16 Integration in the web development workflow

Accessibility evaluation tools present different interfaces. What is important is how these tools integrate into the workflow of the web developer. Among the typical integration approaches, we can highlight the following:

2.17 Support for repair

The majority of web developers have little or no knowledge of web accessibility. Some tools therefore complement their reporting capabilities with additional information that supports developers in correcting the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. Fully automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.

2.18 Persistence of results and monitoring over time

Managers and quality assurance engineers of large websites and portals need to monitor the level of compliance and the progress made in improving different sections of a portal. For this, the persistence of results and the ability to compare them over time are important. Some tools offer a dashboard that can be easily configured to the needs of its users.
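Persistence can be as simple as appending each evaluation run to a database and querying the history per URL or site section. A minimal sketch using SQLite (the table layout is an assumption for illustration):

```python
import sqlite3

# In-memory database for the sketch; a real tool would use a persistent store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE runs (
    url        TEXT,     -- resource that was evaluated
    checked_on TEXT,     -- ISO date of the evaluation run
    errors     INTEGER   -- number of accessibility errors found
)""")
conn.executemany("INSERT INTO runs VALUES (?, ?, ?)", [
    ("http://example.org/", "2013-01-01", 12),
    ("http://example.org/", "2013-02-01", 7),
])

# Retrieve the error history for one resource, oldest run first.
history = conn.execute(
    "SELECT checked_on, errors FROM runs WHERE url = ? ORDER BY checked_on",
    ("http://example.org/",)).fetchall()
```

A dashboard would build on such queries to chart trends per section of the portal.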

2.19 Development of own tests and test extensions

It is sometimes desirable for developers and quality assurance engineers to implement their own tests. For this purpose, some advanced tools offer an API so that developers can create their own tests.
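Such an API often takes the form of a plug-in registry: the tool exposes a way to register a test function, and runs registered tests alongside its built-in ones. A minimal sketch (the registry, test signature, and the naive string check are all assumptions for illustration):

```python
# Registry of custom tests, keyed by a developer-chosen test identifier.
CUSTOM_TESTS = {}

def register_test(test_id):
    """Decorator with which developers add their own tests to the tool."""
    def decorator(func):
        CUSTOM_TESTS[test_id] = func
        return func
    return decorator

@register_test("org-policy-1")
def images_have_alt(document):
    """Example custom check: flag <img> tags written without an alt
    attribute (naive string matching, for illustration only)."""
    return "<img" not in document or "alt=" in document

def run_custom_tests(document):
    """Run every registered test and report pass/fail per test identifier."""
    return {tid: test(document) for tid, test in CUSTOM_TESTS.items()}
```

A production API would hand tests a parsed DOM rather than raw markup, and let them report structured results rather than booleans.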

2.20 Customization of the performed tests

Depending on the development workflow of the customer, it is sometimes desirable to perform only a reduced set of tests. Some tools offer different possibilities to customize the tests performed and adjust the reporting output of the tool accordingly.

3 Example profiles of evaluation tools

As mentioned earlier, there is a wide landscape of accessibility evaluation tools available on the web. The following sections describe some examples of such tools. These examples do not represent any existing tool; they are provided here as an illustration of how to present a tool profile and its features.

3.1 Tool A: Browser plug-in evaluating a rendered HTML page

Tool A is a simple browser plug-in that the user can download to perform a quick automatic accessibility evaluation of a rendered HTML page. The tool tests only those Web Content Accessibility Guidelines 2.0 techniques that can be automatically analysed. The configuration options of the tool are limited to selecting one of the three WCAG conformance levels. After the test is run, the tool presents an alert next to each component where an error is found. When selecting an alert, the author is informed about the problem and given hints on ways to solve the error. Since the tool works directly in the browser, it is not integrated into the workflow of authors who use IDEs in their development.

Table 1 presents an overview of the matching features as described in section 2.

3.2 Tool B


3.3 Tool C


3.x Tabular overview

This section presents a tabular overview of the characteristics of the tools described previously.

[Editor note: Proposed categories]

Table 1. List of features for the example tools described.
Category | Feature | Tool A | Tool B | Tool C
Subject being tested | Types of document formats analyzed | HTML (CSS and JavaScript interpretation is provided because the plug-in has access to the rendered DOM within the browser) | |
 | Cookies | yes | |
 | Authentication support | yes | |
 | Session tracking | no | |
 | Crawling of sites | no | |
 | Evaluating web applications | yes | |
 | Evaluating document fragments | no | |
Test customization | Support for web testing APIs | no | |
 | Support for semiautomatic and manual tests | no | |
 | Customization of the performed tests | no | |
 | Development of own tests and test extensions | no | |
Tool audience | Localization | no | |
 | Customization to different audiences | no | |
 | Support for different policy environments | no | |
Reporting | Support for standard reporting languages | no | |
 | Report customization according to different criteria | WCAG 2.0 level | |
 | Support for aggregation of results | no | |
Monitoring and workflow integration | Integration in the web development workflow | no | |
 | Support for repair | yes | |
 | Persistence of results and monitoring over time | no | |

4 References

The following are references cited in the document.

Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS Current Status is available at: http://www.w3.org/standards/techs/css#w3c_all
[EARL10] Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 17 December 2012. Robin Berjon, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer (editors). Available at: http://www.w3.org/TR/html5/
[HTTPCOOKIES] HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
Ecma international. TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Candidate Recommendation 18 January 2011. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
[WCAG20] Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 3 January 2012. Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
[WEBCRAWLER] Web crawler. Wikipedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
[WebDriver] WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/


Appendix A: Customising results to different audiences

Appendix B: Integrating the evaluation procedure into the development testing workflows