

Abstract

The purpose of this document is to support developers of web accessibility evaluation tools by identifying typical features of such tools and describing how tools can be classified according to different combinations of these features.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

Table of contents

[Editor note: it needs update!]

1 Introduction

There is a wide variety of web accessibility evaluation tools. This document intends to support evaluation tool developers in identifying the key characteristics of their tools. To achieve that, this document:

1.1 Audience of this document

This document is targeted mainly at development managers and developers of web accessibility evaluation tools.

A secondary audience of this document consists of users of accessibility evaluation tools, such as accessibility experts or web developers.

Examples of tools that are within the scope of the document include:

1.2 Background documents

This document must be seen in the context of several others. It is recommended that readers also review the following documents:

Throughout the document you will find additional pointers to other resources, such as standards, recommendations, and technical specifications, that are relevant to any developer interested in implementing an accessibility evaluation tool.

2 Typical features of an accessibility evaluation tool

In this section we describe typical features and functionalities of accessibility evaluation tools. These features can be presented from different perspectives: the subject tested, the target audiences of the tool, the reporting and presentation of the results, the configurability of the tool, etc. We have tried to be as complete as possible, but some features of existing or future evaluation tools may have been omitted. The following list of characteristics is grouped according to some overarching criteria.

It is very important that you analyse your own development process and describe for your customers which of these features your tool supports, declaring any of its limitations.

2.1 Test subjects and their environment

Under this category we include characteristics that help to identify and evaluate different types of content.

2.1.1 Content-types

Although the vast majority of web documents are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources such as CSS style sheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or through user interaction. Many accessibility tests depend on the interpretation of these resources, which are therefore important for an accessibility evaluation.

In general, we can distinguish these types of content formats:

Most accessibility evaluation tools concentrate on markup evaluation, but the most advanced ones are able to process many of the content types described above.

2.1.2 Document fragments

Nowadays, it is also necessary to test fragments of HTML documents, coming for instance from a web editor in a Content Management System. For those cases, the tool must be able to handle the document fragment to be tested. Furthermore, the tool needs to filter the accessibility tests according to their relevance to the document fragment.
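
As an illustration, the following sketch (in Python, assuming the lxml library; the fragment content and the single check shown are purely illustrative) wraps a fragment received from a CMS editor and runs only checks that are relevant at fragment level, skipping document-level checks such as page title or language declaration:

  # A minimal sketch, not a complete implementation.
  from lxml import html

  fragment = '<p>Introduction</p><img src="chart.png">'  # e.g. received from a CMS editor

  # Parse the fragment without requiring a complete HTML document.
  tree = html.fragment_fromstring(fragment, create_parent="div")

  # Only fragment-relevant checks are run; document-level checks are skipped.
  problems = []
  for img in tree.iter("img"):
      if img.get("alt") is None:
          problems.append("img element without alt attribute: %s" % img.get("src"))

  for problem in problems:
      print(problem)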

2.1.3 Web applications

Web and cloud applications are becoming increasingly common. These applications present interaction patterns similar to those of desktop applications. Tools that evaluate such applications must emulate different user actions (e.g., activating interface components or filling in and submitting forms) that modify the state of the current page or load new resources. The user of such a tool would need to define these intermediate steps, which can later be interpreted by the tool (see the section on Web testing APIs).

2.1.4 Cookies

A cookie is a name-value pair that is stored in the user's browser [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information. This information is also relevant to other use cases, such as crawling.
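
As an illustration, the sketch below (in Python, assuming the requests library; the URL and the cookie value are placeholders) shows how a tool could resend a previously captured session cookie when fetching a page, so that it retrieves the same content a logged-in user would see:

  import requests

  # Hypothetical session cookie captured from an authenticated browser session.
  cookies = {"sessionid": "placeholder-value"}

  response = requests.get("https://example.org/members/", cookies=cookies)
  print(response.status_code, len(response.text))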

2.1.5 Authentication

Many sites require some kind of authentication (e.g., HTTP authentication or OpenID). An accessibility testing tool shall be able to support common authentication scenarios. This is important because many sites present different content to authenticated users.
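
For example, HTTP Basic authentication could be handled as sketched below (in Python, assuming the requests library; the URL and the credentials are placeholders):

  import requests
  from requests.auth import HTTPBasicAuth

  # Placeholder credentials supplied by the tool user.
  response = requests.get("https://example.org/protected/",
                          auth=HTTPBasicAuth("evaluator", "secret"))

  if response.status_code == 200:
      print("Authenticated content retrieved (%d bytes)" % len(response.text))
  else:
      print("Authentication failed with status %d" % response.status_code)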

2.1.6 Session tracking

For security reasons, some sites include the session ID in the URL or in a cookie. With this session information, websites can implement mechanisms such as logging a user out after a long period of inactivity or tracking typical user interaction paths.

2.1.7 Crawling

There are tools that incorporate a web crawler [WEBCRAWLER] that is able to extract hyperlinks from web resources. Keep in mind that, as seen in the previous sections, there are many types of resources on the web that contain hyperlinks. The misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.
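
The sketch below (in Python, assuming the requests and lxml libraries; the starting URL and the page limit are placeholders) shows a very small crawler that only inspects HTML resources, which is precisely the limitation described above:

  import urllib.parse
  import requests
  from lxml import html

  start_url = "https://example.org/"   # placeholder starting point
  max_pages = 10                       # a typical crawler configuration option

  visited, queue = set(), [start_url]
  while queue and len(visited) < max_pages:
      url = queue.pop(0)
      if url in visited:
          continue
      visited.add(url)
      response = requests.get(url)
      # NOTE: only HTML is inspected here; a more complete crawler would also
      # extract references from CSS, PDF and other resource types.
      if "text/html" not in response.headers.get("Content-Type", ""):
          continue
      tree = html.fromstring(response.content)
      for href in tree.xpath("//a/@href"):
          link = urllib.parse.urljoin(url, href)
          if urllib.parse.urlparse(link).netloc == urllib.parse.urlparse(start_url).netloc:
              queue.append(link)

  print("Crawled %d resources" % len(visited))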

A web crawler is configured with a starting point and a set of options. The critical features of a web crawler are related to its configuration capabilities. Among them, we can highlight:

2.2 Test customization

This category includes characteristics targeted to the selection of the tests to be performed.

2.2.1 Customization of the performed tests

Depending on the workflow that the customer uses for development, it is sometimes desirable to perform only a reduced set of tests. Some tools offer different possibilities to customize the tests performed and to match the reporting output of the tool accordingly. A typical example is performing only the tests for one of the conformance levels (A, AA, or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests corresponding to a single technique or failure.
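
A minimal sketch of such a filtering mechanism is shown below (in Python; the test identifiers and their level assignments are invented for illustration):

  # Hypothetical registry mapping test identifiers to WCAG 2.0 levels.
  TESTS = {
      "img-alt":        "A",
      "page-title":     "A",
      "contrast-ratio": "AA",
      "sign-language":  "AAA",
  }

  LEVELS = ["A", "AA", "AAA"]

  def select_tests(target_level):
      """Return the identifiers of all tests up to the requested conformance level."""
      allowed = LEVELS[:LEVELS.index(target_level) + 1]
      return [name for name, level in TESTS.items() if level in allowed]

  print(select_tests("AA"))   # img-alt, page-title, contrast-ratio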

2.2.2 Semiautomatic and manual testing

According to the Evaluation and Report Language specification [EARL10], there are three modes in which accessibility tests can be performed:

• automatic: the test is carried out entirely by the tool, without human intervention;
• semiautomatic: the test is partially carried out by the tool, but requires human input or judgement;
• manual: the test is carried out by a human evaluator.

Most tools concentrate on testing accessibility requirements that can be checked automatically, although some also support accessibility experts in performing the other two types of tests. This support is normally provided by highlighting, in the source code or in the rendered document, areas that could be causing accessibility problems or where human intervention is needed (for instance, to judge whether a given text alternative is adequate for an image).

Some tools do not declare that they only perform automatic testing. Since automatic tests cover only a small subset of accessibility issues, accessibility conformance can only be ensured by supporting developers and accessibility experts while testing in manual and semiautomatic mode.

2.2.3 Development of own tests and test extensions

It is sometimes desirable for developers and quality assurance engineers to implement their own tests. For that purpose, some advanced tools offer an API so that developers can create their own tests.
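
The shape of such an API varies from tool to tool; the sketch below (in Python; the decorator name, the registry, and the example rule are hypothetical) shows one possible way to let developers register additional tests that receive the parsed document as input:

  # Hypothetical extension API: user-defined tests are collected in a registry.
  CUSTOM_TESTS = []

  def accessibility_test(func):
      CUSTOM_TESTS.append(func)
      return func

  @accessibility_test
  def tables_have_captions(document):
      """Organisation-specific rule: every data table must have a caption."""
      return [t for t in document.iter("table") if t.find("caption") is None]

  def run_custom_tests(document):
      # Returns the offending elements found by each registered test.
      return {test.__name__: test(document) for test in CUSTOM_TESTS}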

2.2.4 Web testing APIs

When evaluating the accessibility of web sites and applications, it is sometimes desirable to create scripts that emulate some kind of user interaction. With the growing complexity of web applications, there have been efforts to standardize such interfaces; one example is the WebDriver API [WebDriver]. With such interfaces, it is possible to write tests that automate the application and emulate users' behaviour.
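
For instance, the sketch below uses the Selenium bindings for the WebDriver API (in Python; the URL, the form field names, and the follow-up check are placeholders) to emulate a user logging in and then inspect the resulting page:

  from selenium import webdriver
  from selenium.webdriver.common.by import By

  driver = webdriver.Firefox()
  driver.get("https://example.org/login")   # placeholder application

  # Emulate the user filling in and submitting a form.
  driver.find_element(By.NAME, "user").send_keys("evaluator")
  driver.find_element(By.NAME, "password").send_keys("secret")
  driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

  # The page reached after the interaction can now be evaluated, e.g. by
  # inspecting the rendered DOM for images without a text alternative.
  missing_alt = driver.find_elements(By.CSS_SELECTOR, "img:not([alt])")
  print("%d images without alt attribute" % len(missing_alt))

  driver.quit()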

2.3 Reporting

This category includes characteristics related to the ability of the tool to present the testing results in different ways, including filtering and displaying these results graphically.

2.3.1 Standard reporting languages

Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results across tools, import results (for instance, when tool A does not test a given problem but tool B does), filter results, etc. Due to its semantic nature, EARL is an adequate framework to exchange and compare results.
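
The sketch below (in Python, assuming the rdflib library; the test subject and test criterion URIs are placeholders) shows how a single automatic test result could be expressed using the EARL vocabulary:

  from rdflib import Graph, Namespace, URIRef, BNode, RDF

  EARL = Namespace("http://www.w3.org/ns/earl#")

  g = Graph()
  g.bind("earl", EARL)

  assertion, result = BNode(), BNode()
  g.add((assertion, RDF.type, EARL.Assertion))
  g.add((assertion, EARL.subject, URIRef("https://example.org/page.html")))  # placeholder
  g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20/#text-equiv-all")))
  g.add((assertion, EARL.mode, EARL.automatic))
  g.add((assertion, EARL.result, result))
  g.add((result, RDF.type, EARL.TestResult))
  g.add((result, EARL.outcome, EARL.failed))

  print(g.serialize(format="turtle"))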

2.3.2 Report customization according to different criteria

The results of your evaluation can be used in different circumstances. To that end, results could be filtered depending on:

2.3.3 Conformance and results aggregation

Evaluation results can be presented in different ways. This presentation is also influenced by the underlying hierarchy of accessibility techniques, success criteria, and guidelines. Aggregation is also related to the structure of the page: for instance, accessibility errors may be listed for a whole web resource or presented for concrete components such as images.
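
As a simple illustration of aggregation (in Python; the result records are invented), individual outcomes can be rolled up per success criterion before a page-level summary is produced:

  from collections import defaultdict

  # Hypothetical flat list of individual test results for one page.
  results = [
      {"criterion": "1.1.1", "outcome": "failed"},
      {"criterion": "1.1.1", "outcome": "passed"},
      {"criterion": "2.4.2", "outcome": "passed"},
  ]

  by_criterion = defaultdict(list)
  for r in results:
      by_criterion[r["criterion"]].append(r["outcome"])

  # A success criterion is only reported as passed if none of its tests failed.
  summary = {c: ("failed" if "failed" in outcomes else "passed")
             for c, outcomes in by_criterion.items()}
  print(summary)   # {'1.1.1': 'failed', '2.4.2': 'passed'}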

Conformance statements are demanded by many customers to quickly assess the status of their website. When issuing such conformance statements it is therefore necessary to take into account the different types of techniques (e.g., common failures, sufficient techniques) and their implications.

2.4 Tool audience

This section includes characteristics related to the customization of different aspects of the tool, such as its language or user interface, depending on its audience.

2.4.1 Localization

Localization is important to address worldwide markets. There may be cases where your customers are not able to speak English and you need to present your user interface and your reports in other languages. To that end, you can start by looking into the authorized translations of the Web Content Accessibility Guidelines.

2.4.2 Customization to different audiences

[Editor note: is it necessary to distinguish between UI customization and customization of the results' presentation?]

Typically, evaluation tools are targeted at web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results, or even of the user interface, for other audiences, such as:

2.4.3 Policy environments

Although there is an international effort towards the harmonisation of legislation regarding web accessibility, there are still minor differences in accessibility policy between countries. It is important that you clearly define which of those policy environments your tool supports. Most tools focus on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], because they are the most common reference for such policies worldwide.

2.4.4 Tool accessibility

Accessibility evaluation teams and web developers may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and is integrated with the relevant APIs of the operating system.

2.5 Monitoring and workflow integration

In the following sections we describe aspects related to the integration of the tool into the standard development workflow of the customer.

2.5.1 Error repair

The majority of web developers have little or no knowledge about web accessibility. Some tools provide, together with their reporting capabilities, additional information to support developers in correcting the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. Automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.

2.5.2 Integration in the web development workflow

Accessibility evaluation tools present different interfaces. What is important is how these tools integrate into the workflow of the web developer. Among the most typical integration points, we can highlight the following:

2.5.3 Persistence of results and monitoring over time

Managers and quality assurance engineers of large websites and portals need to be able to monitor the level of compliance and the progress made in improving different sections of a portal. For that purpose, it is important that the results are persisted and can be compared over time. Some tools offer dashboard functionality that can be easily configured depending on the needs of the users.
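
A minimal sketch of such persistence is shown below (in Python, using the built-in sqlite3 module; the database schema and the stored fields are illustrative only):

  import sqlite3
  from datetime import datetime, timezone

  conn = sqlite3.connect("evaluation-history.db")
  conn.execute("""CREATE TABLE IF NOT EXISTS results
                  (url TEXT, checked TEXT, errors INTEGER)""")

  # Store the outcome of one evaluation run.
  conn.execute("INSERT INTO results VALUES (?, ?, ?)",
               ("https://example.org/", datetime.now(timezone.utc).isoformat(), 12))
  conn.commit()

  # Compare with previous runs of the same resource, e.g. to feed a dashboard.
  for row in conn.execute("SELECT checked, errors FROM results "
                          "WHERE url = ? ORDER BY checked",
                          ("https://example.org/",)):
      print(row)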

3 Example profiles of evaluation tools

As mentioned earlier, there is a wide landscape of accessibility evaluation tools available on the web. In the following sections we describe some examples of such tools. These examples do not represent any existing tool; they are provided here as an illustration of how a profile and its features could be presented.

3.1 Tool A: Browser plug-in evaluating a rendered HTML page

Tool A is a simple browser plug-in that the user can download to perform a quick automatic accessibility evaluation of a rendered HTML page. The tool tests only those Web Content Accessibility Guidelines 2.0 techniques that can be automatically analysed. The configuration options of the tool are limited to selecting one of the three conformance levels of WCAG. After the tests are run, the tool presents an alert next to the components where an error is found. When selecting an alert, the author is informed about the problem and given hints on ways to solve the error. Since the tool works directly in the browser, it is not integrated into the workflow of authors who use IDEs for their development.

Table 1 presents an overview of the matching features as described in section 2.

3.2 Tool B: Large-scale accessibility evaluation tool

-

3.3 Tool C: Accessibility simulation tool for mobile applications

-

3.x Tabular overview

This section presents a tabular overview of the characteristics of the tools described previously.

[Editor note: Proposed categories]

Table 1. List of features for the example tools described.
Category | Feature | Tool A | Tool B | Tool C
Test subjects and their environment | Content-types | HTML (CSS and JavaScript interpretation is provided because the plug-in has access to the rendered DOM within the browser) | |
Test subjects and their environment | Document fragments | no | |
Test subjects and their environment | Web applications | yes | |
Test subjects and their environment | Cookies | yes | |
Test subjects and their environment | Authentication | yes | |
Test subjects and their environment | Session tracking | no | |
Test subjects and their environment | Crawling | no | |
Test customization | Customization of the performed tests | no | |
Test customization | Semiautomatic and manual testing | no | |
Test customization | Development of own tests and test extensions | no | |
Test customization | Web testing APIs | no | |
Reporting | Standard reporting languages | no | |
Reporting | Report customization according to different criteria | WCAG 2.0 level | |
Reporting | Conformance and results aggregation | no | |
Tool audience | Localization | no | |
Tool audience | Customization to different audiences | no | |
Tool audience | Policy environments | no | |
Tool audience | Tool accessibility | no | |
Monitoring and workflow integration | Error repair | yes | |
Monitoring and workflow integration | Integration in the web development workflow | no | |
Monitoring and workflow integration | Persistence of results and monitoring over time | no | |

4 References

The following are references cited in the document.

CSS2
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS3
CSS Current Status is available at: http://www.w3.org/standards/techs/css#w3c_all
EARL10
Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
ECMAScript
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML4
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 17 December 2012. Robin Berjon, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer (editors). Available at: http://www.w3.org/TR/html5/
HTTPCOOKIES
HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
ODF
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
OOXML
Ecma international. TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
RFC2119
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
WAI-ARIA
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Candidate Recommendation 18 January 2011. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
WCAG20
Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
WCAG20-TECHS
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 3 January 2012. Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
WEBCRAWLER
Web crawler. Wikipedia, The Free Encyclopedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
WebDriver
WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/

Acknowledgements

Appendix A: Customising results to different audiences

Appendix B: Integrating the evaluation procedure into the development testing workflows