

Abstract

The purpose of this document is to support developers of web accessibility evaluation tools by identifying typical features of such tools and showing how tools can be characterized according to different combinations of those features.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.


1 Introduction

There is a wide variety of web accessibility evaluation tools. This document intends to support evaluation tool developers in identifying the key characteristics or features of their tools (the two terms are used as synonyms in this document). To achieve that, it describes typical features of such tools (section 2) and presents example tool profiles that combine those features (section 3).

This document can also be used as a guide to assess accessibility evaluation tools according to the presented features and the corresponding grouping profiles.

1.1 Audience of this document

This document is targeted mainly at development managers and developers of web accessibility evaluation tools.

A secondary audience of this document is users of accessibility evaluation tools, such as accessibility experts or web developers.

Examples of tools that are within the scope of this document include the profiles presented in section 3 (the list is not exhaustive and further use cases could be added).

1.2 Background resources

This document must be seen in the context of several others. Complementary information can be found in the documents cited in the references section.

Throughout this document you will also find pointers to other resources, such as standards, recommendations and technical specifications, which are relevant to any developer interested in implementing an accessibility evaluation tool.

2 Features of an accessibility evaluation tool

In this section, we will describe common features and functionalities of accessibility evaluation tools. These features can be presented from different perspectives: the subject tested, the target audiences of the tool, the reporting and presentation of the results, the tool's configurability, etc. We have tried to be as complete as possible, but some features of existing or future evaluation tools may nevertheless be omitted. The following list of characteristics is grouped according to some of the criteria mentioned above.

It is recommended that you analyse your own development process, describe for your customers which of these features your tool supports, and declare any of its limitations.

2.1 Test subjects and their environment

Under this category we include characteristics that help to identify and evaluate different types of content.

2.1.1 Content-types

Although the vast majority of web documents are HTML documents, many other types of resources need to be considered when analyzing web accessibility. For example, resources like CSS style sheets or JavaScript code can modify markup documents in the user agent, either when they are loaded or through user interaction. Many accessibility tests depend on the interpretation of those resources, which makes them important for an accessibility evaluation.

In general, we can distinguish the following types of content formats:

- markup languages like HTML [HTML4], [HTML5];
- style sheets like CSS [CSS2], [CSS3];
- scripting languages like ECMAScript [ECMAScript];
- accessibility metadata like WAI-ARIA [WAI-ARIA];
- other document formats like PDF [PDF], ODF [ODF] or OOXML [OOXML].

Most accessibility evaluation tools concentrate on markup evaluation, but the most advanced ones are able to process many of the formats described above.

2.1.2 Content encoding and language

The web is a multilingual and multicultural space in which information can be presented in different languages. Furthermore, this content can be transmitted using different character sets and encodings. Some accessibility evaluation tools can process such variations and present their results adequately. More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].

2.1.3 Markup fragments

Nowadays, it is often necessary to test fragments of HTML documents, coming for instance from a web editor in a Content Management System. For those cases, the tool could embed the fragment in a complete document to be tested, as in the sketch below. Furthermore, the tool could filter the accessibility tests according to their relevance to the document fragment.

Tools with this feature are normally plug-ins that are integrated within a Content Management System (CMS) or within an Integrated Development Environment (IDE).
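As a minimal illustration, the following Python sketch wraps a markup fragment in a complete document so that it can be handed to an evaluation engine; the template and function name are our own illustrative assumptions, not part of any particular tool.

```python
# A minimal sketch (not from any particular tool) of embedding an HTML
# fragment in a complete document so an evaluation engine can process it.

FRAGMENT_TEMPLATE = """<!DOCTYPE html>
<html lang="{lang}">
<head><meta charset="utf-8"><title>Fragment under test</title></head>
<body>
{fragment}
</body>
</html>"""

def wrap_fragment(fragment: str, lang: str = "en") -> str:
    """Return a minimal, complete HTML document containing the fragment."""
    return FRAGMENT_TEMPLATE.format(lang=lang, fragment=fragment)

# Example: a CMS editor hands over a bare fragment for testing.
print(wrap_fragment('<img src="logo.png">'))
```

Document-level tests (for instance, those regarding the page title) would then be filtered out as irrelevant to the fragment.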

2.1.4 Dynamic content

Web and cloud applications are becoming increasingly common. These applications present interaction patterns similar to those of desktop applications and contain dynamic content and interface updates. Tools that evaluate such applications should be able to emulate and record different user actions (e.g., activating interface components or filling in and sending forms) that modify the state of the current page or load new resources. The user of such a tool would need to define these intermediate steps so that they can later be interpreted by the tool (see the section on web testing APIs).

2.1.5 Cookies

A cookie is a name-value pair that is stored in the browser of the user [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information. This information is also relevant to other features, like the crawling functionality described later.
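For illustration, the following Python sketch uses the third-party requests library to fetch a page with a session cookie attached, as a tool might do when evaluating content behind a cookie-based session; the cookie name, value and URL are hypothetical.

```python
# A sketch using the third-party "requests" library: fetch a page with a
# session cookie attached, as an evaluation tool might when testing
# content behind a cookie-based session.
import requests

response = requests.get(
    "https://example.org/members/",
    cookies={"SESSIONID": "abc123"},  # a name-value pair, per [HTTPCOOKIES]
)
html = response.text  # markup to hand over to the evaluation engine
```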

2.1.6 Authentication

Many sites require some kind of authentication (e.g., HTTP authentication, OpenID, etc.). An accessibility testing tool should be able to support common authentication scenarios, because many sites present customized content to authenticated users.
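The following sketch shows one such scenario, HTTP Basic authentication, again using the requests library with placeholder credentials; form-based login or OpenID flows would require additional steps.

```python
# A sketch of fetching a page behind HTTP Basic authentication with the
# "requests" library; credentials and URL are placeholders.
import requests
from requests.auth import HTTPBasicAuth

response = requests.get(
    "https://example.org/intranet/",
    auth=HTTPBasicAuth("evaluator", "secret"),
)
response.raise_for_status()  # fail early if the credentials were rejected
```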

2.1.7 Session tracking

For security reasons, some sites include the session ID in the URL or in a cookie, for example. With the support of session information, websites may implement security mechanisms, for instance logging out a user after a long period of inactivity, or track the interaction paths of their users.
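A Python-based tool could, for instance, rely on requests.Session to preserve such session state across requests, as in this sketch with a hypothetical login form.

```python
# A sketch of session tracking with requests.Session: cookies set by the
# server (e.g., a session ID) are stored and re-sent automatically, so all
# fetched pages belong to one continuous user session.
import requests

with requests.Session() as session:
    session.post("https://example.org/login",          # hypothetical login form
                 data={"user": "evaluator", "password": "secret"})
    page = session.get("https://example.org/account")  # same session ID re-used
```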

2.1.8 Content negotiation

Identifying a resource on the web by its Uniform Resource Identifier (URI) alone may not be sufficient, as other factors such as HTTP content negotiation might come into play. To support content negotiation, the testing tool should be able to send and customize different HTTP headers according to different criteria, combine this with some of the features presented earlier, and interpret the response of the server.

This issue is significant for accessibility, as some sites to be tested may present different content in different languages, encodings, etc., as described in previous sections.
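As an illustration, the following sketch requests the same URI twice with different Accept-Language headers, so that each returned variant could then be evaluated separately; the URL is a placeholder.

```python
# A sketch of content negotiation: the same URI is requested with two
# different Accept-Language headers, and each variant would be evaluated
# separately.
import requests

url = "https://example.org/"
for lang in ("en", "de"):
    response = requests.get(url, headers={"Accept-Language": lang})
    print(lang, response.headers.get("Content-Language"), len(response.text))
```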

2.1.9 Crawling

Some tools incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks from web resources. Keep in mind that many types of resources on the web contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.

A web crawler is given a starting point and a set of options. The most common configuration capabilities of a web crawler include the crawling depth or page limit, the areas of the site to be included or excluded, and the handling of cookies, authentication and content negotiation; a minimal crawler is sketched below.

Some of these characteristics were presented earlier or are described later in the document.
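The following is a deliberately minimal same-site crawler sketch using only the Python standard library; a production crawler would also honour robots.txt, respect crawl delays, check content types, and extract links from non-HTML resources.

```python
# A minimal same-site crawler: start from one URL, extract hyperlinks from
# fetched pages, and visit pages up to a fixed limit.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href values of all anchor elements in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url: str, limit: int = 50) -> set:
    site = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != site:
            continue  # skip already-visited and off-site URLs
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(urlopen(url).read().decode("utf-8", "replace"))
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen
```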

2.2 Test customization

This category includes characteristics targeted at the selection of the tests to be performed.

2.2.1 Customization of the performed tests

Depending on the workflow that the customer uses for development, it is sometimes desirable to perform only a reduced set of tests. Some tools offer different possibilities to customize the tests performed and to match the reporting output and, when applicable, the interface of the tool accordingly. A typical example could be performing only the tests for one of the conformance levels (A, AA or AAA) of the Web Content Accessibility Guidelines 2.0 [WCAG20], or selecting individual tests for a single technique or common failure [WCAG20-TECHS].
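A minimal sketch of such filtering follows; the catalogue structure is hypothetical, though the test identifiers correspond to real WCAG 2.0 techniques.

```python
# A sketch of filtering a tool's test catalogue by WCAG 2.0 conformance
# level. The catalogue structure is a hypothetical example.
TESTS = [
    {"id": "H37", "level": "A",   "description": "img elements have alt text"},
    {"id": "G18", "level": "AA",  "description": "contrast ratio of at least 4.5:1"},
    {"id": "G17", "level": "AAA", "description": "contrast ratio of at least 7:1"},
]

def select_tests(target_level: str):
    """Return the tests required up to the requested conformance level."""
    order = {"A": 1, "AA": 2, "AAA": 3}
    return [t for t in TESTS if order[t["level"]] <= order[target_level]]

print([t["id"] for t in select_tests("AA")])  # -> ['H37', 'G18']
```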

2.2.2 Semiautomatic and manual testing

According to the Evaluation and Report Language (EARL) specification [EARL10], there are three modes in which accessibility tests can be performed:

- automatic: the test is carried out entirely by the tool, without human intervention;
- semiautomatic: the tool identifies candidate problems, which a human evaluator has to confirm;
- manual: the test is carried out entirely by a human evaluator.

Most tools concentrate on testing accessibility requirements that can be checked automatically, although some support accessibility experts in performing the other two types of tests. This support is normally provided by highlighting, in the source code or in the rendered document, areas that could be causing accessibility problems or where human intervention is needed (for instance, to judge the adequacy of a given text alternative for an image).

Some tools fail to declare that they only perform automatic testing. Since it is a known fact that automatic tests cover only a small subset of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts in manual and semiautomatic testing as well.

2.2.3 Development of own tests and test extensions

Developers and quality assurance engineers sometimes need to implement their own tests, for instance to respond to internal demands within their organisation. For that purpose, some tools define an API through which developers can create their own tests.
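Such an API could, for example, take the form of a registration mechanism like the following Python sketch; all names here are hypothetical and not drawn from any existing tool.

```python
# A sketch of a hypothetical extension API: the tool exposes a decorator
# to register a custom check that receives the tool's document model and
# returns a list of failure messages.
CUSTOM_TESTS = []

def register_test(test_id: str):
    def decorator(func):
        CUSTOM_TESTS.append((test_id, func))
        return func
    return decorator

@register_test("ORG-001")
def no_layout_tables(document):
    """Organisation-specific rule: layout tables are not allowed at all."""
    return [f"Layout table found: {t}" for t in document.get("layout_tables", [])]

# The tool would call each registered test with its internal document model:
for test_id, test in CUSTOM_TESTS:
    print(test_id, test({"layout_tables": ["#main-table"]}))
```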

2.2.4 Web testing APIs

When evaluating the accessibility of web sites and applications, it is sometimes desirable to create scripts that emulate user interaction. With the growing complexity of web applications, there have been efforts to standardize such interfaces; one of them is the WebDriver API [WebDriver]. With such interfaces it is possible to write tests that automate the behaviour of the application and its users.
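As an illustration, the following sketch drives a browser through the third-party selenium Python bindings (one implementation of the WebDriver protocol), fills in and submits a form, and captures the resulting page state for evaluation; the URL and element names are placeholders.

```python
# A sketch of scripted interaction via the "selenium" WebDriver bindings:
# fill and submit a form, then capture the post-interaction DOM so that
# the resulting page state can be evaluated.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.org/search")
    driver.find_element(By.NAME, "q").send_keys("accessibility")
    driver.find_element(By.NAME, "q").submit()
    snapshot = driver.page_source  # markup after the interaction
finally:
    driver.quit()
```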

2.3 Reporting

This category includes characteristics related to the ability of the tool to present testing results in different ways, including filtering, manipulating and graphically displaying those results.

2.3.1 Standard reporting languages

Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results across tools, import results (for instance, when tool A does not test a given problem but tool B does), filter results, etc. Due to its semantic, RDF-based nature, EARL is an adequate framework for exchanging and comparing results.
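As an illustration, the following Python sketch builds a single EARL assertion with the third-party rdflib library; the subject URL and test reference are placeholders, and [EARL10] defines the full vocabulary.

```python
# A sketch of producing one EARL assertion with "rdflib": a test subject,
# the test applied, and a failed outcome, serialized as Turtle.
from rdflib import Graph, Namespace, URIRef, BNode, RDF

EARL = Namespace("http://www.w3.org/ns/earl#")

g = Graph()
g.bind("earl", EARL)
assertion, result = BNode(), BNode()
g.add((assertion, RDF.type, EARL.Assertion))
g.add((assertion, EARL.subject, URIRef("https://example.org/")))
g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20-TECHS/H37")))
g.add((assertion, EARL.result, result))
g.add((result, RDF.type, EARL.TestResult))
g.add((result, EARL.outcome, EARL.failed))
print(g.serialize(format="turtle"))
```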

2.3.2 Report customization and filtering according to different criteria

The results of an evaluation can be used in different circumstances. With that aim, results could be filtered depending on criteria such as the content types tested, the targeted conformance level, or the test mode used (see the previous sections).

2.3.3 Conformance and results aggregation

Evaluation results can be presented in different ways. Their presentation is influenced by the underlying hierarchy of accessibility techniques, guidelines and success criteria. Aggregation is also related to the structure of the page: for instance, accessibility errors can be listed for a whole web resource or presented for concrete components like images, videos, tables or forms.

Conformance statements are demanded by many customers as a way to assess the status of their website quickly. When issuing such conformance statements, it is thus necessary to take into account the different types of techniques (i.e., sufficient techniques, common failures, etc.) and their implications.

2.4 Tool audience

This section includes characteristics targeted at customizing different aspects of the tool depending on its audience, for instance the reporting and user interface language, or the user interface functionality.

2.4.1 Localization and internationalization

Localization and internationalization are important for addressing worldwide markets. There may be cases where your customers do not speak English and you need to adapt your user interface (e.g., icons, directionality, layout, units, etc.) and your reports to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N].

From the accessibility standpoint, we also recommend using the authorized translations of the Web Content Accessibility Guidelines. Consider as well that some accessibility tests, for instance those related to readability, need to be customized for other languages.

2.4.2 Functionality customization to different audiences

Typically, evaluation tools are targeted at web accessibility experts with deep knowledge of the topic. However, some tools allow the customization of the evaluation results, or even of the user interface functionality, to other audiences, for instance web developers (see Appendix A).

The availability of such characteristics must be declared explicitly and presented in a way that is adequate for these target user groups.

2.4.3 Policy environments

Although there is an international effort towards harmonising web accessibility legislation, there are still minor differences between accessibility policies in different countries. You should clearly declare which of those policy environments your tool supports. Most tools focus on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], as it is the most common reference for such policies worldwide.

2.4.4 Tool accessibility

Accessibility evaluation teams and web development teams may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and that it integrates with the accessibility APIs of the operating system it runs on.

2.5 Monitoring and workflow integration

The following sections describe aspects related to the integration of the tool into the standard development workflow of the customer.

2.5.1 Error repair

The majority of web developers have little or no knowledge of web accessibility. Some tools therefore complement their reporting capabilities with additional information that supports developers and accessibility experts in correcting the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. Fully automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.

Such support may also take the form of a step-by-step wizard that guides the evaluator through correcting the problems found.

2.5.2 Integration in the web development workflow

Accessibility evaluation tools present different interfaces, and it is important how these tools integrate into the workflow of the web developer. Typical integration approaches include browser plug-ins, plug-ins for Content Management Systems or Integrated Development Environments, and standalone large-scale evaluation services.

2.5.3 Persistence of results and monitoring over time

Managers and quality assurance engineers of large websites and portals need to be able to monitor the level of compliance and the progress in improving different sections of a portal. For that, persisting the results and being able to compare them over time is important; a minimal sketch follows. Some tools offer dashboard functionality that is configurable depending on the needs of their users.
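As a minimal illustration of such persistence, the following Python sketch stores per-run results in an SQLite database and compares failure counts across runs; the schema is purely illustrative.

```python
# A sketch of persisting evaluation results over time with the standard
# library's sqlite3 module, so a dashboard could compare runs.
import sqlite3
from datetime import date

con = sqlite3.connect("results.db")
con.execute("""CREATE TABLE IF NOT EXISTS results
               (run_date TEXT, url TEXT, test_id TEXT, outcome TEXT)""")
con.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
            (date.today().isoformat(), "https://example.org/", "H37", "failed"))
con.commit()

# Compare the number of failures between runs:
for row in con.execute("""SELECT run_date, COUNT(*) FROM results
                          WHERE outcome = 'failed' GROUP BY run_date"""):
    print(row)
con.close()
```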

3 Example profiles of evaluation tools

As mentioned earlier, there is a wide landscape of accessibility evaluation tools available on the web. The following sections describe some examples of such tools. These examples do not represent any existing tool; they are provided as an illustration of how to present a profile and its features.

3.1 Tool A: Browser plug-in evaluating a rendered HTML page

Tool A is a simple browser plug-in that the user can download to perform a quick automatic accessibility evaluation of a rendered HTML page. The tool tests only those Web Content Accessibility Guidelines 2.0 techniques that can be analysed automatically. Its configuration options are limited to selecting one of the three WCAG conformance levels (A, AA or AAA).

After the tests are run, the tool presents an alert next to each component where an error is found. When selecting an alert, the author is informed about the problem and given hints on ways to solve it. Since the tool works directly in the browser, it is not integrated into the workflow of authors who use IDEs in their development.

Table 1 presents an overview of the matching features as described in section 2.

3.2 Tool B: Large-scale accessibility evaluation tool

Tool B is a large-scale accessibility evaluation tool that lets its users crawl and analyze complete websites. Users can customise which parts of a website are analysed by including or excluding different areas of the site from the crawl. Results are persisted in a relational database, and a dashboard allows results from different dates to be compared.

The tool supports authentication, sessions, cookies and content negotiation by customising the HTTP headers used in the crawling process. It autonomously performs the automatic WCAG tests.

The tool also offers a customized view, where experts can select a subset of the crawled pages, complement the automatic results by performing the semiautomatic and manual tests on those pages, and store the results in the database.

The reports of the tool can be exported as an EARL report (serialized as RDF/XML), as a spreadsheet, or as a PDF document.

The tool implements the corresponding interfaces to the accessibility APIs of the operating system it runs on.

Table 1 presents an overview of the matching features as described in section 2.

3.3 Tool C: Accessibility evaluation tool for mobile applications

Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment that gives the application access to the device APIs.

The tool can emulate different user agents running on different mobile operating systems, and it can simulate the typical display sizes of mainstream smartphones and tablets. It supports HTML, CSS and JavaScript, and provides testers with an implementation of the WebDriver API [WebDriver], supporting automatic and manual evaluation.

3.4 Overview

This section presents a tabular overview of the characteristics of the tools described previously.

Table 1. List of features for the example tools described.

Category | Feature | Tool A | Tool B | Tool C
Test subjects and their environment | Content-types | HTML (CSS and JavaScript interpretation is provided because the plug-in has access to the rendered DOM within the browser) | HTML and CSS only | HTML, CSS and JavaScript
 | Content encoding and language | yes | yes | yes
 | Markup fragments | no | no | no
 | Dynamic content | yes | no | yes
 | Cookies | yes | yes | yes
 | Authentication | yes | yes | yes
 | Session tracking | no | yes | yes
 | Content negotiation | no | yes | yes
 | Crawling | no | yes | no
Test customization | Customization of the performed tests | no | yes | no
 | Semiautomatic and manual testing | no | yes | yes
 | Development of own tests and test extensions | no | no | no
 | Web testing APIs | no | no | yes
Reporting | Standard reporting languages | no | yes | no
 | Report customization and filtering according to different criteria | yes | yes | no
 | Conformance and results aggregation | no | yes | yes
Tool audience | Localization and internationalization | no | no | yes
 | Functionality customization to different audiences | no | yes | no
 | Policy environments | no | no | no
 | Tool accessibility | no | yes | no
Monitoring and workflow integration | Error repair | yes | no | yes
 | Integration in the web development workflow | no | yes | no
 | Persistence of results and monitoring over time | no | yes | yes

4 References

The following are references cited in the document.

CSS2
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS3
CSS Current Status is available at: http://www.w3.org/standards/techs/css#w3c_all
EARL10
Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
ECMAScript
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML4
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 17 December 2012. Robin Berjon, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer (editors). Available at: http://www.w3.org/TR/html5/
HTTPCOOKIES
HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
I18N
Internationalization and localization. Wikipedia. Available at: http://en.wikipedia.org/wiki/Internationalization_and_localization
ODF
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
OOXML
TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
RFC2119
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
W3Ci18n
W3C Internationalization (I18n) Activity. Available at: http://www.w3.org/International/
WAI-ARIA
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Candidate Recommendation 18 January 2011. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
WCAG20
Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
WCAG20-TECHS
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 3 January 2012. Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
WEBCRAWLER
Web crawler. Wikipedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
WebDriver
WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/

Acknowledgements

The editors would like to thank the Evaluation and Repair Tools Working Group (ERT WG) for its contributions, especially Shadi Abou-Zahra, Yod Samuel Martín, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.

Appendix A: Customising results to different audiences

Appendix B: Integrating the evaluation procedure into the development testing workflows