Evaluation features of web accessibility evaluation tools
Guidance for developers
- This version:
- Latest published version:
- Latest internal version:
- Previous published version:
- Previous internal version:
- Carlos A Velasco, Fraunhofer Institute for Applied Information Technology FIT
- Philip Ackermann, Fraunhofer Institute for Applied Information Technology FIT
- Evangelos Vlachogiannis, Fraunhofer Institute for Applied Information Technology FIT
- Shadi Abou-Zahra, W3C Web Accessibility Initiative (WAI)
This document describes web accessibility evaluation features that any web evaluation tool (including web quality assurance tools) can incorporate, so that they support the evaluation of accessibility requirements like the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to promote awareness of such accessibility evaluation features and to provide guidance for tool developers on what kind of features they could provide in future implementations of their tools. The document can also be used to help compare the features provided by different types of tools, for example during the procurement of such tools.
The features in scope of this document include capabilities to help specify, manage, carry out and report the results from accessibility evaluations. For example, some of the described features relate to crawling of websites, interacting with tool users to carry out semi-automated evaluation and providing evaluation results in machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0 Success Criteria.
This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document does not prioritize nor require any particular accessibility evaluation feature or specific type of evaluation tools.
Status of this document
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
Table of contents
- Status of this document
- Table of contents
- 1 Introduction
- 2 Features of an accessibility evaluation tool
- 2.1 Test subjects and their environment
- 2.2 Testing functionality
- 2.3 Reporting and monitoring
- 2.4 Workflow integration
- 2.5 Tool usage
- 3 Example profiles of evaluation tools
- 4 References
Designing, developing, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an integrated development environment (IDE) to create templates for a content management system (CMS) of a website while an editor will typically use the content-editing facility provided by the CMS to create the web pages. Ideally accessibility evaluation is carried out throughout the process and by everyone involved. For example, web developers should ensure that any headings provided in templates are coded appropriately while content authors should ensure that any images added to web pages have appropriate text alternatives.
Evaluation tools can assist accessibility evaluation in many different ways. For example, tools can assist:
- Designers by detecting color combinations with insufficient contrast (luminosity) between foreground and background;
- Web developers by detecting elements, attributes, and other code segments that reduce the accessibility of the code;
- Authors by detecting structural components such as headings, lists, and tables that are not properly marked as such;
- Quality assurance testers in performing accessibility checks on specific aspects of a web application;
- Accessibility evaluators in selecting representative web page samples to evaluate the accessibility of entire websites;
- Website owners and commissioners in monitoring the overall accessibility performance and progression of a particular website.
This document lists and describes these types of accessibility evaluation features that can be provided by evaluation tools. It does not describe the evaluation of specific web content features, which is addressed by WCAG 2.0 Success Criteria.
In the context of this document, an evaluation tool is a software application that enables its users to test web content against specific quality assurance criteria. This includes but is not limited to the following (non-mutually-exclusive) types of tools:
- Web quality assurance tool - Any software application that is specifically designed to test web content against quality assurance criteria (that are usually broader than accessibility criteria alone);
- Web accessibility evaluation tool - Any software application that is specifically designed to test web content against accessibility criteria, such as the Web Content Accessibility Guidelines (WCAG) 2.0.
The accessibility evaluation features listed and described in this document can be incorporated by evaluation tools to provide support for accessibility evaluation. Section 3 provides example profiles of evaluation tools with accessibility evaluation features.
W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria such as the features listed in this document.
2 Features of an accessibility evaluation tool
[Review Note: Feedback on this section is particularly welcome, specifically with suggestions for accessibility evaluation features that are not listed below and with comments to refine listed accessibility evaluation features.]
The accessibility evaluation features listed and described below are not exhaustive. It may not be possible, nor desirable, for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can use this document to learn about relevant features to look for.
The features of an accessibility evaluation tool are presented in the following sections from different perspectives: the subject to be evaluated (i.e., web content and the environment that enables its rendering to the end user), the testing requirements, the reporting and monitoring capabilities of the tool, and the integration into the customer's development and editing workflow.
2.1 Test subjects and their environment
This category includes characteristics that help to retrieve, render and evaluate different types of content. Some tools may retrieve the content to analyse from the file system or from a database; however, the vast majority do so over the network via the HTTP protocol. The rest of the document focuses mostly on this scenario.
Due to the characteristics of the HTTP(S) protocol, the rendering of a web resource implies the manipulation of many other elements associated with it, such as the request and response headers, sessions and cookies, authentication information, etc. These associated elements are what we call the environment of the test subject. These components are explained in the following sections.
2.1.1 Content types
In general, the following types of content formats can be distinguished:
- Markup documents. These are normally HTML [HTML4, HTML5] or XML documents. Processing these resources requires the ability to build a Document Object Model (DOM) [DOM] according to the different specifications, which can then be parsed and analysed.
- Style resources. These are presentation modifiers conformant to the different CSS specifications [CSS2, CSS3], which are processed by the user agents. The interpretation of stylesheets is important for many accessibility techniques.
- Multimedia resources. These are images, movies or audio tracks that are standalone or embedded in a web resource. From the accessibility standpoint, the evaluation of these resources is very relevant, especially for issues like colour contrast, colour blindness, or media alternatives, for instance.
- PDF documents. Many reports and other administrative information are available in this format, especially in government websites. The format was initially developed by Adobe [PDF] and was later on standardised by ISO.
- Resources with other proprietary or open formats. Although not very frequent on the web, it is possible to encounter other formats like office documents (such as Apache OpenOffice [ODF] or Microsoft Office [OOXML] documents), Adobe Flash movies and applications, etc. Depending on the tool customers' needs, it may be necessary to parse and evaluate these types of resources.
2.1.2 Content encoding and content language
This component identifies which content languages and encodings are supported by the evaluation tool. The web is a multilingual and multicultural space in which information can be presented in different languages, and evaluation tools should be able to address this. Furthermore, content can be transmitted using different character encodings and sets (like ISO-8859-1, UTF-8, UTF-16, etc.). More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].
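As a minimal sketch of handling declared encodings, the hypothetical helper below (Python standard library only) decodes a response body according to the charset parameter of its Content-Type header, falling back to UTF-8 when no charset is declared:

```python
from email.message import Message

def decode_body(content_type_header, body_bytes):
    """Decode an HTTP response body using the charset declared in its
    Content-Type header, falling back to UTF-8 when none is given."""
    msg = Message()
    msg["Content-Type"] = content_type_header
    charset = msg.get_param("charset", "utf-8")
    return body_bytes.decode(charset)
```

A real tool would additionally need to handle in-document declarations (e.g., the meta charset element), which may disagree with the transport-level header.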
2.1.3 DOM document fragments
Many websites are generated dynamically by combining code templates with HTML snippets that are created by website editors. Some evaluation tools may be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to test these snippets as developers and/or editors create them. Usually this is done by creating DOM [DOM] document fragments from these snippets. Evaluation tools may also filter the accessibility tests according to their relevance to the document fragment.
2.1.4 Dynamic content
Web and cloud applications are becoming very frequent on the web. These applications present interaction patterns similar to those of desktop applications and contain dynamic content and interface updates. Tools that evaluate such applications should emulate and record different user actions (e.g., activating interface components by clicking with the mouse, swiping with the fingers on a touch screen or using the keyboard) that modify the status of the current page or load new resources. The evaluation tool needs to define and record these intermediate steps so that they can later be interpreted by the tool (see the section on web testing APIs).
2.1.5 Content negotiation
Content negotiation is a mechanism of the HTTP protocol that allows web servers to customise the resources sent according to the demands of the client user agent. Because of this, identifying a resource on the web by its Uniform Resource Identifier (URI) alone may not be sufficient. To support content negotiation, the testing tool customizes the HTTP headers according to different criteria, for example to fetch particular variants of the content (such as a mobile version or a specific language version of a website) or to control session and authentication information, as presented in the following sections.
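As a minimal sketch, a tool might customize the request headers along these lines; the `build_request` helper and the mobile user-agent string are assumptions for illustration, not part of any standard:

```python
import urllib.request

def build_request(url, language="en", mobile=False):
    """Build an HTTP request whose headers ask the server for a
    particular content variant (language and/or mobile version)."""
    headers = {
        "Accept": "text/html",
        "Accept-Language": language,  # request a specific language variant
    }
    if mobile:
        # Hypothetical mobile user-agent string to trigger the mobile variant.
        headers["User-Agent"] = "Mozilla/5.0 (Linux; Android 10; Mobile)"
    return urllib.request.Request(url, headers=headers)
```

The request object can then be passed to `urllib.request.urlopen` (or an opener carrying cookies and authentication, as discussed below) to fetch the desired variant.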
2.1.6 Cookies
A cookie is a name-value pair that is stored by the user agent [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information exchanged between the client and the server, which, as seen before, may be relevant for content negotiation.
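For illustration, a hypothetical helper using only Python's standard library that parses the Set-Cookie values received from a server and rebuilds the Cookie request header an evaluation tool would send back on subsequent requests:

```python
from http.cookies import SimpleCookie

def cookie_header(set_cookie_values):
    """Parse Set-Cookie header values and build the Cookie
    request header to send back on the next request."""
    jar = SimpleCookie()
    for value in set_cookie_values:
        jar.load(value)  # parses name=value plus attributes (Path, HttpOnly, ...)
    return "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
```

A production crawler would normally delegate this bookkeeping to `http.cookiejar` (or an equivalent), including expiry and domain/path matching, which this sketch omits.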
2.1.7 Authentication
Websites sometimes require authentication (e.g., HTTP authentication, OpenID, etc.) to control access to given parts of the website or to present customized content to authenticated users.
2.1.8 Session tracking
Within HTTP, session information can be used for different purposes, such as implementing security mechanisms (login information, logging out a user after a long inactivity period) or tracking the interaction paths of users. Session information can be stored in the user agent's local storage, in a session ID embedded in the URL, or in a cookie, for example.
2.1.9 Crawling
There are tools that incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks out of web resources. There are many types of resources on the web that contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.
A web crawler is configured with a starting point and a set of options. The most common configuration capabilities of a web crawler are:
- Types of content-types crawled.
- Capability to define inclusion and exclusion filters. Customers may require analysis of concrete parts of the website or may not want to include others.
- Multithreaded crawling. For a large site, it may be important to optimize performance by having a tool able to crawl in parallel threads.
- Avoidance of duplicate downloads. It is typical for web resources to link many times to the same resource. If the crawler is not able to identify such duplicates, this may lead to a great performance loss.
- Capabilities related to previous features like: content negotiation, authentication support or session tracking.
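The following sketch illustrates two of the crawler features above, inclusion filters and duplicate avoidance, using only the Python standard library. The `fetch` function is injected so the example needs no network access; multithreaded crawling and content negotiation are omitted for brevity:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute link targets from the a elements of an HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, fetch, include=lambda url: True, limit=100):
    """Breadth-first crawl. fetch(url) -> HTML string is injected;
    include() implements inclusion/exclusion filters; the visited set
    avoids downloading the same resource twice."""
    visited, queue = set(), [start_url]
    while queue and len(visited) < limit:
        url = queue.pop(0)
        if url in visited or not include(url):
            continue
        visited.add(url)
        parser = LinkExtractor(url)
        parser.feed(fetch(url))
        queue.extend(parser.links)
    return visited
```

Note that this only follows links in HTML; as stated above, a complete crawler should also extract links from other content types (CSS, PDF, etc.).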
2.2 Testing functionality
This category includes features targeted to the configuration of the tests to be performed or to the configuration of the resources to be tested.
2.2.1 Selection of evaluation tests
Accessibility evaluation tools may offer the possibility to select a given subset of evaluation tests, or even a single one. A typical example is performing the tests for one of the three conformance levels (A, AA or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests for a single technique or common failure.
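A sketch of such test selection, assuming the tool keeps a catalogue of checks tagged with the conformance level of the success criterion they relate to. The catalogue entries below are illustrative; associating a technique directly with a level is a simplification, since WCAG levels formally apply to success criteria:

```python
# Hypothetical catalogue of checks, each tagged with a WCAG 2.0 level.
CHECKS = [
    {"id": "H37", "level": "A",   "description": "img elements have alt attributes"},
    {"id": "G18", "level": "AA",  "description": "contrast ratio of at least 4.5:1"},
    {"id": "G17", "level": "AAA", "description": "contrast ratio of at least 7:1"},
]

def select_checks(checks, target_level="AA"):
    """Keep only the checks required up to the chosen conformance level."""
    order = {"A": 1, "AA": 2, "AAA": 3}
    return [c for c in checks if order[c["level"]] <= order[target_level]]
```

Selecting level AA would, for instance, keep the A and AA checks and drop the AAA-only ones.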
2.2.2 Automatic, semiautomatic and manual testing
According to the Evaluation and Report Language (EARL) specification [EARL10], there are three modes in which accessibility tests can be performed:
- Automatic - where the test was carried out automatically by the software tool and without any human intervention.
- Manual - where the test was carried out by human evaluators. This includes the case where the evaluators are aided by instructions or guidance provided by software tools, but where the evaluators carried out the actual test procedure.
- Semi-Automatic - where the test was partially carried out by software tools, but where human input or judgment was still required to decide or help decide the outcome of the test.
Some evaluation tools support accessibility experts in performing semiautomatic or manual tests. This support is typically provided by highlighting, in the source code or in the rendered document, areas that could give rise to accessibility problems or where human intervention is needed (for instance, to judge the adequacy of a given text alternative for an image).
Some tools do not make explicit that they only perform automatic testing. Since it is well known that automatic tests cover only a small subset of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts while testing in manual and semiautomatic mode.
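As an illustration of the boundary between automatic and semiautomatic testing, the sketch below (Python standard library; the class name is invented) automatically flags img elements that lack an alt attribute. Judging whether an *existing* alt text is adequate still requires human input, which is what makes the complete check semiautomatic:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Automatic check: record img elements without an alt attribute.
    The adequacy of a present alt text is left to a human evaluator."""
    def __init__(self):
        super().__init__()
        self.problems = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.problems.append(attrs.get("src", "?"))

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Sales chart"></p>')
```

Here the tool can report `logo.png` automatically, while `chart.png` would be queued for the evaluator to confirm that "Sales chart" is an adequate text alternative.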
2.2.3 Development of own tests and test extensions
Developers and quality assurance engineers sometimes need to implement their own tests. For that purpose, some tools define an API so that developers can create their own tests, responding to internal demands within their organisation.
2.2.4 Test automation
When evaluating the accessibility of websites and web applications, it is sometimes desirable to create scripts that emulate user interaction. With the growing complexity of web applications, there has been an effort to standardize such interfaces; one example is the WebDriver API [WebDriver]. With such tools, it is possible to write tests that automate the application's and users' behaviour.
2.3 Reporting and monitoring
This category includes features related to the ability of the tool to present, store, import, export and compare the testing results in different ways. In this section the term report must be interpreted in its widest sense. It could be a set of computer screens presenting different tables and graphics, a word processor document summarizing results, etc.
2.3.1 Standard reporting languages
Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results with other tools, import/export results (for instance, when tool A does not test a given problem but tool B does), filter results, etc.
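As a rough illustration, an EARL assertion could be serialized as JSON-LD along the following lines. The tool and subject URIs are invented, the property names follow the EARL 1.0 schema, and real-world reports are often serialized as RDF/XML or Turtle instead:

```python
import json

# A minimal EARL-style assertion as JSON-LD. Context and outcome values
# are shown for illustration; consult [EARL10] for the normative schema.
assertion = {
    "@context": "http://www.w3.org/ns/earl",
    "@type": "Assertion",
    "assertor": "http://example.org/tools/checker",  # hypothetical tool URI
    "subject": "http://example.org/page.html",       # page under test
    "test": "http://www.w3.org/TR/WCAG20-TECHS/H37",
    "mode": "earl:automatic",
    "result": {"@type": "TestResult", "outcome": "earl:failed"},
}
report = json.dumps(assertion, indent=2)
```

Because the report is machine-readable, another tool can parse it back and, for example, merge it with its own results or filter by outcome.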
2.3.2 Persistence of results
The implementation of monitoring features requires that the tool has a persistence layer (a database, for example) where results could be stored and retrieved.
2.3.3 Import/export functionality
In many evaluation methodologies accessibility experts and quality assurance engineers use different tools. If the evaluation tool supports import and export of test results (for instance, in EARL format, as JSON objects, in a CSV file, etc.), it may be easily integrated in such environments.
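For instance, a sketch of a CSV export with Python's standard library; the column names and result shape are assumptions for illustration, not a standard interchange format:

```python
import csv
import io

# Hypothetical in-memory evaluation results.
results = [
    {"page": "http://example.org/", "test": "H37", "outcome": "failed"},
    {"page": "http://example.org/about", "test": "H37", "outcome": "passed"},
]

def to_csv(rows):
    """Export evaluation results as CSV so other tools can import them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["page", "test", "outcome"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The same rows could equally be serialized as JSON objects or as an EARL report, depending on what the receiving tool can import.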
2.3.4 Report customization
This feature allows the customization of the resulting report according to different criteria, such as the target audience, the type of results, the part of the site being analyzed, the type of content, etc.
2.3.5 Results aggregation
The presentation of evaluation results is influenced by the underlying hierarchy of accessibility guidelines, success criteria and techniques. Aggregation is also related to the structure of the page. For instance, accessibility errors may be listed for a whole web resource or presented for concrete components like images, videos, tables, forms, etc.
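A minimal sketch of component-level aggregation, assuming the tool holds its findings as a flat list of records (the record shape is invented for illustration):

```python
from collections import Counter

# Hypothetical flat list of findings produced by the evaluation run.
findings = [
    {"element": "img",   "criterion": "1.1.1"},
    {"element": "img",   "criterion": "1.1.1"},
    {"element": "table", "criterion": "1.3.1"},
]

# Aggregate per component type, so errors can be presented grouped
# by images, tables, forms, etc. rather than as one long list.
by_component = Counter(f["element"] for f in findings)
```

The same grouping could instead key on the success criterion or guideline to aggregate along the WCAG hierarchy.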
2.3.6 Conformance statements
Conformance statements are demanded by many customers to quickly assess the status of their website. When issuing such conformance statements it is therefore necessary to account for the different types of accessibility techniques (i.e., common failures, sufficient techniques, etc.) and aggregate results as described in the previous section.
2.3.7 Error repair
The majority of web developers have little or no knowledge about web accessibility. Together with their reporting capabilities, tools may provide additional information to help developers and accessibility experts correct the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. Automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.
Such support may include a step-by-step wizard that guides the evaluator through correcting the problems found.
2.4 Workflow integration
Accessibility evaluation tools present different interfaces, which allow their integration into the customer's standard development workflow. Typical examples include:
- plug-ins or extensions in web browsers;
- plug-ins for Integrated Development Environments (IDEs);
- plug-ins for Content Management Systems (CMS);
- stand-alone tools (desktop applications, mobile apps, online tools).
2.5 Tool usage
This section includes characteristics that are targeted to the customization of different aspects of the tool depending on its audience, like for instance, user interface language, user interface functionality, user interface accessibility, etc.
2.5.1 Localization and internationalization
Localization and internationalization are important to address worldwide markets. Tool users may not be able to speak English and it is necessary to present the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports customized to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N].
From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines. Note as well that some accessibility tests, such as those related to readability, need to be customized for other languages.
2.5.2 Functionality customization to different audiences
Typically, evaluation tools are targeted to web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results or even the user interface functionality to other audiences like, for instance:
- web developers with no or little knowledge of accessibility, who need detailed information on how to correct the problems found to implement appropriate solutions; or
- web commissioners, who need an aggregated view of the evaluation results and tools to support the monitoring of their own websites.
The availability of such characteristics must be declared explicitly and presented in an adequate way to these target user groups.
2.5.3 Policy environments
Although there is an international effort to harmonise legislation in regard to web accessibility, there are still minor differences between the accessibility policies of different countries. The tool should specify in its documentation which policy environments are supported. Most tools focus on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], because it is the most common reference for those policies worldwide.
2.5.4 Tool accessibility
Accessibility evaluation teams and web developers may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and is integrated with the accessibility APIs of the underlying operating system.
3 Example profiles of evaluation tools
[Review Note: This section needs review once section 2 is accepted by the working group.]
As mentioned earlier, there is a wide landscape of accessibility evaluation tools available on the web. The following sections describe some examples of such tools. These examples do not represent any existing tool; they are provided here as an illustration of how to present a profile and its features.
3.1 Tool A: Browser plug-in evaluating a rendered HTML page
Tool A is a simple browser plug-in that the user can download to perform a quick automatic accessibility evaluation on a rendered HTML page. The tool tests only the Web Content Accessibility Guidelines 2.0 techniques that can be automatically analysed. Its configuration options are limited to selecting one of the three WCAG conformance levels.
After the test is run, the tool presents an alert at the side of the components where an error is found. When selecting the alert, the author is informed about the problem and hints are given on ways to solve the error. Since the tool works directly on the browser, it is not integrated in the workflow of some authors who use IDEs in their development.
Table 1 presents an overview of the matching features as described in section 2.
3.2 Tool B: Large-scale accessibility evaluation tool
Tool B is a large-scale accessibility evaluation tool. It offers its users the possibility to crawl and analyze complete websites, and to customise which parts of the website are analysed by defining inclusion and exclusion filters for the areas of the site to be crawled. Results are persisted in a relational database and there is a dashboard to compare results at different dates.
The tool supports authentication, sessions, cookies and content negotiation by customising the HTTP headers used in the crawling process. It autonomously performs the automatable WCAG tests.
The tool offers a customized view, where experts can select a subset of the crawled pages and complete the automatic and semiautomatic tests by inspecting the selected pages and store the results in the database.
The reports of the tool can be exported as an EARL report (serialized as RDF/XML), as a spreadsheet, and as a PDF document.
The tool incorporates the corresponding interfaces to the accessibility APIs of its operating system.
Table 1 presents an overview of the matching features as described in section 2.
3.3 Tool C: Accessibility evaluation tool for mobile applications
Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment that gives the application access to the Device APIs.
This section presents a tabular overview (Table 1) of the characteristics of the tools described previously.
Table 1: Overview of the accessibility evaluation features of the example tools.

|Category|Feature|Tool A|Tool B|Tool C|
|---|---|---|---|---|
|Test subject|Content encoding and language|yes|yes|yes|
|Test customization|Customization of the performed tests|no|yes|no|
| |Semiautomatic and manual testing|no|yes|yes|
| |Development of own tests and test extensions|no|no|no|
| |Web testing APIs|no|no|yes|
|Reporting|Standard reporting languages|no|yes|no|
| |Report customization and filtering according to different criteria|yes|yes|no|
| |Conformance and results aggregation|no|yes|yes|
|Tool audience|Localization and internationalization|no|no|yes|
| |Functionality customization to different audiences|no|yes|no|
|Monitoring and workflow integration|Error repair|yes|no|yes|
| |Integration in the web development workflow|no|yes|no|
| |Persistence of results and monitoring over time|no|yes|yes|
4 References
The following are references cited in the document.
- [CSS2] Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
- [CSS3] CSS Current Status. Available at: http://www.w3.org/standards/techs/css#w3c_all
- [DOM] W3C DOM4. W3C First Public Working Draft 07 November 2013. Anne van Kesteren, Aryeh Gregor, Ms2ger, Alex Russell, Robin Berjon (editors). Available at: http://www.w3.org/TR/dom/
- [EARL10] Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
- [ECMASCRIPT] ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
- [HTML4] HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
- [HTML5] HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 17 December 2012. Robin Berjon, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer (editors). Available at: http://www.w3.org/TR/html5/
- [HTTPCOOKIES] HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
- [I18N] Internationalization and localization. Wikipedia. Available at: http://en.wikipedia.org/wiki/Internationalization_and_localization
- [ODF] Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
- [OOXML] TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
- [PDF] PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
- [RFC2119] Key words for use in RFCs to Indicate Requirement Levels. IETF RFC 2119, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
- [W3Ci18n] W3C Internationalization (I18n) Activity. Available at: http://www.w3.org/International/
- [WAI-ARIA] Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Candidate Recommendation 18 January 2011. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
- [WCAG20] Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
- [WCAG20-TECHS] Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 3 January 2012. Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
- [WEBCRAWLER] Web crawler. Wikipedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
- [WebDriver] WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/
Acknowledgements
The editors would like to thank the Evaluation and Repair Tools Working Group (ERT WG) for its contributions, and especially Yod Samuel Martín, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.