Copyright © 2013 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
The purpose of this document is to support developers of web accessibility evaluation tools by identifying typical features of those tools and how to classify them according to different combinations of those features.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
[Editor's note: this section needs to be updated.]
There is a wide variety of web accessibility evaluation tools. This document is intended to help evaluation tool developers identify their tools' key characteristics. To achieve that, this document:
This document is targeted mainly at development managers and developers of web accessibility evaluation tools.
A secondary audience is users of accessibility evaluation tools, such as accessibility experts and web developers.
Examples of tools that are within the scope of the document include:
This document must be seen in the context of several others. It is recommended that the reader review the following documents:
Throughout the document you will find additional pointers to other resources, such as standards, recommendations, and technical specifications, that are relevant to any developer implementing an accessibility evaluation tool.
In this section, we describe typical features and functionalities of accessibility evaluation tools. The features of an accessibility evaluation tool can be presented from different perspectives: the subject being tested, the target audiences of the tool, the reporting and presentation of the results, its configurability, etc. We have tried to be as complete as possible, but some features of existing or future evaluation tools may be omitted. The following list of characteristics does not follow any particular order.
It is very important that you analyse and describe, both for your own development process and for your customers, which of these features your tool supports, and that you declare any limitations of your tool.
In general, we can distinguish these types of formats:
Most accessibility evaluation tools concentrate on markup evaluation, but the most advanced ones can process many of the formats described above.
A cookie is a name-value pair that is stored in the browser of the user [HTTPCOOKIES]. Cookies contain information relevant to the website that is being rendered and often include authentication and session information. This information is relevant to other use cases, like a crawling tool.
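As a minimal sketch of what cookie handling involves, the following parses a `Set-Cookie` header with Python's standard library, as a crawling or evaluation tool might before replaying the session cookie on subsequent requests (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header so its name-value pair and attributes
# can be stored and replayed on later requests to the same site.
cookie = SimpleCookie()
cookie.load("sessionid=38afes7a8; Path=/; HttpOnly")

# Each "morsel" holds the value plus attributes such as Path.
morsel = cookie["sessionid"]
print(morsel.value)    # 38afes7a8
print(morsel["path"])  # /
```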
Many sites require some kind of authentication (e.g., HTTP authentication, OpenID). An accessibility testing tool should support the typical authentication scenarios, because many sites present different content to authenticated users.
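For the simplest of those scenarios, HTTP Basic authentication, the credentials are transmitted as a base64-encoded header value (RFC 7617). A tool that fetches pages on behalf of a user can construct it as follows (the user name and password are placeholders):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value (RFC 7617)."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("alice", "secret"))  # Basic YWxpY2U6c2VjcmV0
```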
For security reasons, some sites include the session ID in the URL or in a cookie. With support for session information, websites can implement security mechanisms, such as logging a user out after a long period of inactivity, or track typical user interaction paths.
There are tools that incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks from web resources. Keep in mind that, as seen in the previous section, many types of web resources contain hyperlinks. The misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.
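A minimal sketch of the link-extraction step, using Python's standard HTML parser; note that even within HTML, hyperlinks appear in more elements than just `a` (the element-to-attribute mapping below is a deliberately small, illustrative subset):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect hyperlink targets from HTML; links occur in more
    elements than <a> (e.g., <area>, <link>, <iframe>)."""
    LINK_ATTRS = {"a": "href", "area": "href", "link": "href", "iframe": "src"}

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        wanted = self.LINK_ATTRS.get(tag)
        for name, value in attrs:
            if name == wanted and value:
                self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<a href="/page2">next</a> <link rel="stylesheet" href="style.css">')
print(extractor.links)  # ['/page2', 'style.css']
```

A real crawler would additionally resolve these targets against the base URL and apply its scope configuration before queueing them.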
A web crawler is configured with a starting point and a set of options. The critical features of a web crawler relate to its configuration capabilities. Among them, we can highlight:
Evaluation results can be presented in different ways. This presentation of results is also influenced by the underlying hierarchy of the accessibility techniques, guidelines, and success criteria. Aggregation is also related to the structure of the page: for instance, accessibility errors may be listed for a whole web resource or presented for specific components, such as images.
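One common aggregation view groups flat result records by success criterion. A minimal sketch, assuming hypothetical result records with `criterion`, `element`, and `outcome` fields (the field names are illustrative, not a standard schema):

```python
from collections import defaultdict

# Hypothetical flat result records produced by one evaluation run.
results = [
    {"criterion": "1.1.1", "element": "img#logo", "outcome": "failed"},
    {"criterion": "1.1.1", "element": "img#icon", "outcome": "failed"},
    {"criterion": "2.4.4", "element": "a#next",  "outcome": "passed"},
]

# Aggregate per success criterion, as many report views do; the same
# records could instead be grouped per element for a component view.
by_criterion = defaultdict(list)
for record in results:
    by_criterion[record["criterion"]].append(record)

print({c: len(rs) for c, rs in by_criterion.items()})  # {'1.1.1': 2, '2.4.4': 1}
```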
When issuing conformance statements, it is also necessary to address the different types of techniques (e.g., common failures, sufficient techniques) and their implications.
Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results across tools, import results (for instance, when tool A does not test a given problem but tool B does), filter results, etc. Due to its semantic nature, EARL is an adequate framework for exchanging and comparing results.
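As a sketch of what an EARL report contains, the following builds a single assertion using the EARL vocabulary (assertor, test subject, test criterion, and result); the subject and test URIs are illustrative placeholders, not normative identifiers:

```python
import json

# A minimal EARL 1.0 assertion serialized as JSON-LD.
# The tool, subject, and test URIs below are placeholders.
assertion = {
    "@context": {"earl": "http://www.w3.org/ns/earl#"},
    "@type": "earl:Assertion",
    "earl:assertedBy": "https://example.org/tools/checker",
    "earl:subject": "https://example.org/page.html",
    "earl:test": "https://example.org/tests/non-text-content",
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": "earl:failed",
    },
}
print(json.dumps(assertion, indent=2))
```

Because the vocabulary is shared, assertions like this one can be merged with, or compared against, assertions produced by other tools.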
The results of your evaluation can be used in different circumstances. To that end, results could be filtered depending on:
Typically, evaluation tools are targeted at web accessibility experts with deep knowledge of the topic. However, some tools allow customization of the evaluation results, or even of the user interface, for other audiences, such as:
Localization is important for addressing worldwide markets. Your customers may not speak English, and you may need to present your user interface and your reports in other languages. To that end, you can start by looking into the authorized translations of the Web Content Accessibility Guidelines.
Although there is an international effort to harmonise legislation regarding web accessibility, there are still minor differences between the accessibility policies of different countries. It is important that you clearly define which of those policy environments your tool supports. Most tools focus on implementing the Web Content Accessibility Guidelines 2.0 [WCAG20], because it is the most common reference for such policies worldwide.
It is now common to need to test fragments of HTML documents, coming for instance from a web editor in a Content Management System. For those cases, the tool must be able to process a document fragment as the test subject. Furthermore, the tool needs to filter the accessibility tests according to their relevance to the document fragment.
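One possible approach, sketched below under the assumption that the tool's checks expect a complete document, is to embed the fragment in a minimal valid page before evaluation; page-level tests (e.g., for `title` or `lang`) must then be filtered out, since those parts were supplied by the wrapper, not the author:

```python
def wrap_fragment(fragment: str, lang: str = "en") -> str:
    """Embed an HTML fragment in a minimal complete document so that
    element-level checks can run; document-level checks should be
    skipped because the wrapper, not the author, provides that markup."""
    return (
        "<!DOCTYPE html>"
        f'<html lang="{lang}"><head><title>Fragment under test</title></head>'
        f"<body>{fragment}</body></html>"
    )

print(wrap_fragment('<img src="logo.png">'))
```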
Web and cloud applications are becoming increasingly common. These applications present interaction patterns similar to those of desktop applications. Tools that evaluate such applications must emulate different user actions (e.g., activating interface components, or filling in and sending forms) that modify the state of the current page or load new resources. The user of such a tool would need to define these intermediate steps so that they can later be interpreted by the tool (see the following section).
When evaluating the accessibility of web sites and applications, it is sometimes desirable to have custom scripts that emulate some kind of user interaction. With the growing complexity of web applications, there has been an effort to standardize such interfaces; one of them is the WebDriver API [WebDriver]. With such interfaces, it is possible to write tests that automate browser behaviour.
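As a rough sketch of what such a standardized interface looks like: the WebDriver protocol is plain HTTP with JSON payloads. Creating a browser session and activating an element reduce to requests of roughly the following shape (the browser name is only an example; placeholders in braces stay as supplied by the server):

```
POST /session
{"capabilities": {"alwaysMatch": {"browserName": "firefox"}}}

POST /session/{session id}/element/{element id}/click
{}
```

Through this interface, an evaluation tool can drive a real browser, replay the intermediate steps a user has defined, and evaluate the page state after each one.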
According to the Evaluation and Report Language specification [EARL10], there are three modes in which accessibility tests can be performed:
Most tools concentrate on testing accessibility requirements that can be checked automatically, although some support accessibility experts in performing the other two types of tests.
Some tools do not openly declare that they only perform automatic testing. Since automatic tests cover only a small subset of accessibility issues, accessibility conformance can only be ensured by supporting developers and accessibility experts with manual and semiautomatic testing as well.
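A fully automatic check and its limits can be illustrated with a simple example: detecting `img` elements that lack an `alt` attribute is mechanical, but judging whether a present `alt` text is appropriate still requires a human, which is why the manual and semiautomatic modes remain necessary. A minimal sketch using Python's standard HTML parser:

```python
from html.parser import HTMLParser

class ImgAltCheck(HTMLParser):
    """Automatic check: flag <img> elements with no alt attribute.
    An inappropriate-but-present alt text is NOT caught here; that
    judgement belongs to semiautomatic or manual testing."""

    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)

checker = ImgAltCheck()
checker.feed('<img src="a.png"><img src="b.png" alt="Logo">')
print(checker.violations)  # 1
```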
Accessibility evaluation tools present different interfaces. What is important is how these tools integrate into the workflow of the web developer. Among the typical ones, we can highlight the following:
The majority of web developers have little or no knowledge of web accessibility. Some tools provide, alongside their reporting capabilities, additional information to support developers in correcting the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc.
Managers and quality assurance engineers of big websites and portals need to be able to monitor the level of compliance and the progress made in improving different sections of a portal. For that, persistence of results and their comparison over time are important. Some tools offer dashboard functionality that is easily configurable to the needs of the users.
It is sometimes desirable for developers and quality assurance engineers to implement their own tests. For that purpose, some advanced tools offer an API so that developers can create their own tests.
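One way such an API can look, sketched as a hypothetical plugin interface (the registry, decorator, test ID, and page structure below are all illustrative assumptions, not an existing tool's API): users register check functions that receive a parsed page and return their findings.

```python
# Hypothetical plugin registry: tools can let users register their
# own checks, each receiving the parsed page and returning findings.
CUSTOM_TESTS = {}

def register_test(test_id):
    def decorator(func):
        CUSTOM_TESTS[test_id] = func
        return func
    return decorator

@register_test("no-empty-links")
def no_empty_links(page):
    """Flag links whose text is empty or whitespace-only."""
    return [link for link in page["links"] if not link["text"].strip()]

# Illustrative parsed-page structure the tool would hand to each test.
page = {"links": [{"href": "/a", "text": ""}, {"href": "/b", "text": "Next"}]}
findings = CUSTOM_TESTS["no-empty-links"](page)
print(len(findings))  # 1
```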
Depending on the workflow that the customer uses for development, it is sometimes desirable to perform only a reduced set of tests. Some tools offer different possibilities to customize the tests performed and adjust the reporting output of the tool accordingly.
This section presents a tabular summary of the characteristics of the tools described previously.
| Feature | Tool A | Tool B |
| --- | --- | --- |
| Types of document formats analyzed | | |
| Crawling of sites | | |
| Support for aggregation of results | | |
| Support for standard reporting languages | | |
| Report customization according to different criteria | | |
| Customization to different audiences | | |
| Support for different policy environments | | |
| Evaluating document fragments | | |
| Evaluating web applications | | |
| Support for web testing APIs | | |
| Support for semiautomatic and manual tests | | |
| Integration in the web development workflow | | |
| Support for repair | | |
| Persistence of results and monitoring over time | | |
| Development of own tests and test extensions | | |
| Customization of the performed tests | | |
The following are references cited in the document.