
Abstract

This document describes features that quality assurance and web authoring tools may incorporate to support the evaluation of accessibility requirements, such as those defined by the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to promote awareness on such features and to give an introductory guidance for tool developers on what kinds of features they could provide in future implementations of their tools. This list of features could also be used to help compare different types of evaluation tools, for example, during the procurement of such tools.

The features in scope of this document include tool capabilities to specify, manage, carry out, and report the results from web accessibility evaluations. For example, some of the described features relate to crawling websites, interacting with tool users to carry out semiautomated evaluation, or providing evaluation results in a machine-readable format. This document does not describe the actual assessment of web content features, which is addressed by WCAG 2.0 and its supporting documents.

This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document neither prescribes nor prioritizes any particular accessibility evaluation feature or specific type of evaluation tools. It describes features that can be provided by tools that support fully-automated, semiautomated and manual web accessibility evaluation. Following this document can help tool developers to meet accessibility checking requirements defined by the Authoring Tool Accessibility Guidelines (ATAG).

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This 15 December 2014 Editor's Draft of Guide to Features of Web Accessibility Evaluation Tools (ETF) is intended to be published and maintained as a W3C Working Group Note after review and refinement. It provides a complete draft for final review, and incorporates all the comments received on previous drafts.

The Evaluation and Repair Tools Working Group (ERT WG) believes it has completed a first version of this document and invites remaining feedback on it from web accessibility evaluation tool developers, web authoring and quality assurance tool developers, evaluators, researchers, and others with an interest in web accessibility evaluation tools. In particular, ERT WG is looking for feedback on:

Please send comments on this Guide to Features of Web Accessibility Evaluation Tools document (ETF) by 22 January 2015 to public-wai-ert-tools@w3.org (publicly visible mailing list archive).

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document has been produced by the Evaluation and Repair Tools Working Group (ERT WG), as part of the Web Accessibility Initiative (WAI) Technical Activity.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.


1 Introduction

Designing, developing, monitoring, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an Integrated Development Environment (IDE) to create templates for a Content Management System (CMS), while a web content author might use the content-editing facility provided by the CMS to create and edit the content of individual web pages. Ideally all these tools provide features to support accessibility evaluation by everyone involved throughout the design and development process. For example, an IDE could provide functionality for developers to check individual web page components and document fragments during their development, while a CMS could provide functionality for webmasters to monitor the overall accessibility of the website. This document lists and describes such types of features that can be provided by tools to support accessibility evaluation of web content in a variety of situations and contexts.

1.1 Evaluation Tools

In the context of this document, an evaluation tool is a (web-based or non-web-based) software application that enables its users to evaluate web content according to specific quality criteria, such as web accessibility requirements. This includes but is not limited to the following (non-mutually-exclusive) types of tools:

Note that these terms are not mutually exclusive. A web accessibility evaluation tool is a particular type of web quality assurance tool. In other cases an evaluation tool could be considered to be a web authoring tool, for example, if it provides repair functionality that modifies the content. Also, a web quality assurance tool might not check for accessibility criteria but might provide other functionality, such as managing quality assurance processes and reporting evaluation results, which may be useful for accessibility evaluation of web content. This document refers to any of these tools collectively as evaluation tools.

W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria, including some of the features listed in this document.

2 List of Features

The features of web accessibility evaluation tools listed below in this section are categorized according to:

The list of evaluation tool features provided below in this section is not exhaustive. It may be neither possible nor desirable for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. As mentioned in the abstract, the features presented in this section are provided as a reference. This document does not prescribe any of them to developers of accessibility evaluation tools. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Also, others interested in acquiring and using evaluation tools can use this document to learn about relevant features to consider.

2.1 Resources to be Evaluated

This category includes features that help evaluation tools to retrieve and render different types of web content. While there are evaluation tools that retrieve the content to be analyzed directly from file systems or from databases, the majority of evaluation tools retrieve the content through a network connection using the HTTP(S) protocol. This section focuses primarily on this latter situation.

Besides the primary web content resources, such as the HTML and CSS code, other aspects of the HTTP(S) protocol are relevant to retrieving and rendering web content. For example, the HTTP request and response headers may include important information about the content that was retrieved and provided to the user. Also session and authentication information, cookies, and other aspects might impact the particular content retrieved and rendered, especially in web applications. These characteristics of the HTTP(S) protocol are considered in this document.

2.1.1 Content Formats

Feature: State the specific resource formats that your tool supports for evaluating the accessibility of web content.

Although the majority of web content is in HTML format, there are many other formats that need to be considered during the evaluation of web content. For example, HTML code is often intended to be rendered together with associated CSS and JavaScript code. Many accessibility tests result from interpreting the combined rendering of different resources, so supporting formats beyond HTML alone is important for accessibility evaluation. There are also many formats beyond HTML in use on the Web.

Some of the resource types and corresponding content formats to consider include:

2.1.2 Content Negotiation

Feature: Provide mechanisms in your tool for tool users to configure different HTTP content negotiation parameters.

Content negotiation is a characteristic of the HTTP(S) protocol that enables web browsers to select different variants of the same web resources. For example, a web page located at a particular web address (URL/URI) can be provided in different languages, in mobile vs. desktop versions, or in other variants depending on the settings of the web browser and other context information sent to the web server. To test these different variants of the web content, evaluation tool users need to be able to configure the content negotiation parameters to emulate different types of web browsers and server requests.
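
As a simple illustration, the following sketch uses the Python requests library (an assumption; any HTTP client offers equivalent controls) to emulate a mobile browser requesting the German-language variant of a page; the URL and header values are placeholders a tool would expose as configuration options:

    import requests

    # Emulate a German-language mobile browser; the URL and header values
    # are placeholders that a tool would expose as configuration options.
    headers = {
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "de-DE,de;q=0.8",
        "User-Agent": "Mozilla/5.0 (Linux; Android 10) Mobile",
    }
    response = requests.get("http://example.com/", headers=headers)
    print(response.headers.get("Content-Language"))  # variant actually served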

Note: This feature relates to other features on cookies, authentication, and session tracking that are listed as sub-sections.

2.1.2.1 Cookies

Feature: Provide mechanisms in your tool to support the exchange of HTTP cookies, and for tool users to configure them.

Cookies [HTTPCOOKIES] are exchanged between the web browser and server as part of the content negotiation. They typically include information about website users, such as authentication and session tracking information. Cookies can also contain user preference settings for the website and references to user profiles stored on the server. Thus the web content generated may be substantially different depending on the cookie parameters exchanged. Evaluation tools need to be able to at least exchange and store cookies between different HTTP request and response exchanges, to facilitate authentication and session tracking. Ideally evaluation tool users are also able to configure cookie parameters, to emulate different types of web users and user profiles.
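
A minimal sketch of cookie handling, again assuming the Python requests library; the cookie name and value are hypothetical:

    import requests

    # A Session stores cookies across request/response exchanges, the
    # minimum needed for authentication and session tracking to work.
    session = requests.Session()
    session.get("http://example.com/login")    # server may set cookies here

    # Ideally tool users can also set cookie parameters directly, e.g. to
    # emulate a stored user profile (name and value here are hypothetical).
    session.cookies.set("user_profile", "high-contrast")
    page = session.get("http://example.com/")  # stored cookies sent along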

2.1.2.2 Authentication

Feature: Provide mechanisms in your tool to support different types of authentication, and for tool users to configure them.

Many websites have restricted areas that require authentication (typically using passwords) to access. Websites may also provide different presentation and functionality depending on the user credentials. For example, the administrator of an online shop might have more information and options presented than typical website users. Different authentication mechanisms, such as HTTP authentication and OpenID, are commonly used by websites, and sometimes different authentication mechanisms are used for different parts of the same website. Evaluation tools need to support these different types of user authentication mechanisms, and ideally allow tool users to configure authentication parameters.
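
For illustration, a sketch of HTTP Basic authentication using the Python requests library; the URL and credentials are placeholders that a tool would expose as configuration options:

    import requests
    from requests.auth import HTTPBasicAuth

    # HTTP Basic authentication for a restricted area; the URL and the
    # credentials are placeholders exposed as tool configuration options.
    response = requests.get(
        "http://example.com/admin/",
        auth=HTTPBasicAuth("evaluator", "secret"),
    )
    if response.status_code == 401:
        print("Authentication failed; check the configured credentials.")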

Note: The authentication mechanisms in scope of this section are based on HTTP exchanges between the web browser and server. Other (application-based) forms of authentication require other features, in particular the test automation feature.

2.1.2.3 Session Tracking

Feature: Provide mechanisms in your tool to support different types of session tracking, and for tool users to configure them.

Websites often track the activity of users during a session (visit). For example, a website tracks which products a user has selected for purchase, the credit card information provided, and other interactions during a process. Different mechanisms can be used for session tracking, for example local storage in the web browser, a session ID in the URL/URI, and cookies. As with authentication, different mechanisms may be used on a single website. Evaluation tools need to be able to support these different types of session tracking mechanisms and, where relevant, allow tool users to configure their parameters to emulate specific users and situations.

Note: The user emulation mechanisms in scope of this section are based on HTTP exchanges between the web browser and server. Other (application-based) forms of user emulation require other features, in particular the test automation feature.

2.1.3 Character Encoding

Feature: State the character encodings that your tool supports for evaluating web content accessibility. Support at least UTF-8.

Web content can be transmitted using different character sets and encodings. The correct rendering of web content depends upon the correct interpretation and processing of these character sets and encodings. It is recommended to support at least UTF-8, though websites often use other (legacy) character encodings as well.
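
A minimal sketch of such handling in Python, assuming the requests library: prefer the charset declared by the server and fall back to UTF-8, the recommended minimum:

    import requests

    response = requests.get("http://example.com/")
    # Prefer the charset declared in the Content-Type header; fall back to
    # UTF-8, the recommended minimum, if the declaration is absent or invalid.
    declared = response.encoding or "utf-8"
    try:
        text = response.content.decode(declared)
    except (LookupError, UnicodeDecodeError):
        text = response.content.decode("utf-8", errors="replace")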

Tip: More information about character encoding is provided by the W3C Internationalization Activity [W3Ci18n].

2.1.4 Content Language

Feature: State the natural languages that your tool supports for evaluating the accessibility of web content.

Some web accessibility requirements relate to the readability of web content. Ideally evaluation tools can carry out evaluation of such requirements for a variety of natural languages.

Note: This feature relates to the language of the web content being evaluated rather than of the evaluation tool. The latter aspect is addressed by the feature on localization and internationalization.

2.1.5 Content Rendering

Feature: Evaluate the accessibility of web content based on the rendered DOM rather than the source code.

As previously mentioned, individual resources, such as the HTML code, are often intended to be rendered together with associated resources, such as the CSS and JavaScript code. Thus evaluation tools need to check the rendered DOM that is used to present the generated content to the end-user, rather than the static (raw) code of individual resources, which can sometimes be meaningless.

Note: Accomplishing this typically requires integration with a web browser engine, which can sometimes lead to different rendering depending on the particular engine used. Understanding the selected web browser engine is essential in this context. In some cases evaluation tools can be used with different web browser engines interchangeably.
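
For illustration, a sketch using the Selenium Python bindings for the W3C WebDriver API [WebDriver] to obtain the rendered DOM from a headless Firefox engine (the choice of engine is an assumption, not a recommendation):

    from selenium import webdriver

    # Load the page in a real browser engine (headless Firefox here, an
    # arbitrary choice) so that CSS and JavaScript are applied first.
    options = webdriver.FirefoxOptions()
    options.add_argument("-headless")
    driver = webdriver.Firefox(options=options)
    driver.get("http://example.com/")

    # Serialize the rendered DOM rather than reading the raw source; this
    # reflects content that scripts generated after the page was loaded.
    rendered_dom = driver.execute_script(
        "return document.documentElement.outerHTML;")
    driver.quit()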

2.1.6 Document Fragments

Feature: Provide mechanisms in your tool to support the accessibility evaluation of DOM document fragments.

Many websites are generated by combining HTML code snippets with pre-set templates. Evaluation tools, especially those that are integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE), can help check such code snippets during their development. Such checking is usually done by creating DOM document fragments [DOM] from the code snippets, which are then evaluated as web content.

Note: This feature relates to other features on content formats, content rendering, and tests selection.
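
A minimal sketch of a fragment-level check, using only the Python standard library; the check itself (images without alt attributes) is just an example:

    from html.parser import HTMLParser

    class ImgAltCheck(HTMLParser):
        # Minimal fragment-level check: report img elements without alt.
        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                print("img element is missing an alt attribute")

    # A snippet as it might come from a CMS template editor, checked on its
    # own without a surrounding document.
    ImgAltCheck().feed('<div><img src="logo.png"></div>')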

2.1.7 Crawling

Feature: Provide mechanisms to crawl across entire websites, and for tool users to configure the crawling parameters.

While some evaluation tools focus on evaluating individual web pages (or even web page components), other evaluation tools provide capabilities to crawl entire websites. This relies on a web crawler [WEBCRAWLER] that is able to extract hyperlinks out of web resources, which are in many cases in formats other than HTML alone (see related feature on content formats).

Options that need to be configurable to help tool users to manage the crawling behavior include:

Tip: Managing performance is an issue in this context, especially for large websites. Strategies such as multi-threaded crawling (spawning parallel threads for the crawling activities), avoiding duplicate downloads (detecting that resources have already been downloaded), and avoiding recursive loops (detecting links that have already been visited) can help address this, as in the sketch below.
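
A minimal sketch of a breadth-first crawler in Python that applies the duplicate-download and loop-avoidance strategies mentioned in the tip above; the link extraction is deliberately naive:

    from collections import deque
    from urllib.parse import urljoin, urlparse
    import re
    import requests

    def crawl(start_url, limit=100):
        # Breadth-first crawl that avoids duplicate downloads and loops by
        # remembering every URL it has queued before.
        seen, queue, pages = {start_url}, deque([start_url]), []
        host = urlparse(start_url).netloc
        while queue and len(pages) < limit:
            url = queue.popleft()
            html = requests.get(url).text
            pages.append((url, html))
            # Deliberately naive link extraction; a real crawler would parse
            # the DOM and also follow links found in CSS, scripts, sitemaps.
            for href in re.findall(r'href="([^"]+)"', html):
                link = urljoin(url, href).split("#")[0]
                if urlparse(link).netloc == host and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return pages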

Note: This feature relates to other features on content formats, content rendering, and content negotiation (including cookies, authentication, and session tracking).

2.1.8 Sampling

Feature: Provide mechanisms to suggest different types of web content samples from a website for detailed analysis.

Evaluation tools that crawl entire websites or otherwise monitor overall website activity, for example through integration in Content Management System (CMS), can also identify samples of web content to be analyzed more closely. For example, evaluation tools could highlight web pages with particular characteristics, such as:

Such tools can also help to randomly select web pages from a website for further inspection (detailed analysis by expert evaluators).
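
For illustration, a sketch of a sampling helper that combines flagged pages with a random selection of the remaining pages; the function and parameter names are hypothetical:

    import random

    def sample_pages(all_urls, flagged, size=20):
        # Pages flagged by heuristics (e.g. unusually complex ones) are always
        # included; the rest of the sample is drawn at random.
        rest = [u for u in all_urls if u not in flagged]
        return list(flagged) + random.sample(rest, min(size, len(rest)))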

2.2 Performing Evaluation Checks

This category includes features related to the different modalities, customization options, and automation aspects of how tools evaluate the accessibility of web content. As previously noted, this document does not describe the actual assessment of web content features, which is addressed by WCAG 2.0 and its supporting documents.

2.2.1 Test Coverage

Feature: State each of the accessibility requirements that your tool can evaluate, and map them to WCAG 2.0.

Evaluation tools vary significantly in the scope of accessibility requirements that they can evaluate. This could be at the level of WCAG 2.0 Success Criteria, for example 1.1.1 Non-text Content, or at a more granular level such as the individual WCAG 2.0 Techniques (which are non-normative). It is important to clearly communicate which accessibility requirements are addressed by your tool, to help users understand how it meets their own needs.
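
One possible way to document such a mapping in a machine-usable form is a simple table of check identifiers against Success Criteria; the check identifiers below are hypothetical:

    # Hypothetical mapping of a tool's internal checks to WCAG 2.0 Success
    # Criteria and (non-normative) Techniques; identifiers are illustrative.
    TEST_COVERAGE = {
        "img-alt":        {"wcag_sc": "1.1.1", "technique": "H37"},
        "page-lang":      {"wcag_sc": "3.1.1", "technique": "H57"},
        "color-contrast": {"wcag_sc": "1.4.3", "technique": "G18"},
    }

    # Users (or the tool's documentation) can then derive the covered criteria:
    print(sorted({entry["wcag_sc"] for entry in TEST_COVERAGE.values()}))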

2.2.2 Testing Modes

Feature: State the testing modes that your tool supports for each accessibility requirement it can evaluate.

Evaluation tools help users to evaluate the accessibility of web content in the following testing modes:

(These definitions are intended to be consistent with corresponding definitions in Evaluation and Report Language 1.0 [EARL10] and Authoring Tool Accessibility Guidelines 2.0 [ATAG20]. In case of conflict, the definitions of the most up-to-date version of these documents are to be used as the reference.)

Thus evaluation tools vary significantly in how they carry out the actual testing. It is important to clearly communicate how your tool supports each accessibility requirement that it claims to support, so that tool users can understand how it works in practice and how it meets their own needs.

2.2.3 Custom Tests

Feature: Provide mechanisms to allow tool users to add to and modify the default set of tests provided by your tool.

In some cases website owners have particular quality assurance requirements that need additional tests or modification of existing tests. Some evaluation tools provide APIs, rule-based configuration files, or other means to define the tests that the tool then carries out. This functionality is useful for users who want to customize the default set of tests provided by your tool.
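
A minimal sketch of a rule registry in Python that lets tool users register custom tests alongside the tool's defaults; all names are hypothetical:

    # Sketch of a rule registry that lets tool users add custom tests
    # alongside the tool's defaults; all names are hypothetical.
    RULES = {}

    def rule(rule_id):
        def register(func):
            RULES[rule_id] = func
            return func
        return register

    @rule("org-title-prefix")
    def check_title_prefix(page_title):
        # Organization-specific requirement: page titles must name the site.
        return page_title.startswith("Example Corp - ")

    # The tool would run every registered rule, defaults and custom alike.
    print(RULES["org-title-prefix"]("Example Corp - Contact"))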

2.2.4 Tests Selection

Feature: Provide options for tool users to select the individual accessibility evaluation checks to be carried out.

Evaluation tool users may sometimes want to focus on a particular subset of evaluation checks. For example, a designer may be interested in particular checks, while programmers and content authors may be interested in others. Tool users may also want to filter checks by their corresponding WCAG conformance levels (A, AA, or AAA). Evaluation tools can support these users by providing options to filter and select the individual accessibility evaluation checks that they want to focus on.

Note: Some tools are specifically focused on evaluating particular requirements, such as color contrast, in any case.

2.2.5 Test Automation

Feature: Provide mechanisms to allow the emulation of user interaction and the automation of tests being carried out.

When evaluating the accessibility of websites, in particular interactive applications, it is sometimes convenient to create scripts that emulate user interaction. For example, your evaluation tool could be configured to activate certain links, buttons, and other user interface components, emulate swiping activity on a touch-screen, and provide input through the keyboard interface when it encounters particular web pages. This is particularly useful to replicate the behavior of end-users on a website, for example to evaluate certain tasks (paths) on a website.

Different approaches can be taken to realize this feature, and there are different efforts to develop common APIs. The W3C WebDriver API [WebDriver] is an international effort to standardize such an API.
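
For illustration, a sketch using the Selenium Python bindings (one implementation of the WebDriver approach) to script a login path; the element names are placeholders for the application under test:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    driver.get("http://example.com/login")

    # Scripted interaction replicating an end-user task (a login path);
    # the element names are placeholders for the application under test.
    driver.find_element(By.NAME, "username").send_keys("evaluator")
    driver.find_element(By.NAME, "password").send_keys("secret" + Keys.RETURN)
    driver.find_element(By.ID, "checkout").click()

    # At this point the tool would evaluate the page state reached above.
    driver.quit()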

2.2.6 Manual Evaluation

Feature: Provide mechanisms to support tool users in manually evaluating web content, and state the functionality provided.

While some evaluation tools focus largely on automatic evaluation, others also provide functionality to support tool users in manually evaluating individual web pages. Providing such functionality is essential because the majority of web content cannot be evaluated automatically (see testing modes for more background). Mechanisms to support manual evaluation can include:

Note: Some tools also provide functionality to help designers and developers learn about the types of accessibility barriers that people with disabilities experience, rather than to evaluate the content. For example, this can include adding unsteady motion to the movement of the screen pointer, distorting the display, changing color schemes, removing the audio or adding noise, and others.

2.3 Reporting and Monitoring

This category includes features related to the ability of evaluation tools to present, store, import, export, and compare evaluation results. In this context, the term reports is used in its widest sense. Reports could be a set of computer screens presenting different tables and graphs, a set of icons superimposed on top of the web content displayed to the user to indicate different types of results, a document generated in HTML or other formats, and many other possibilities for providing feedback about the evaluation results.

2.3.1 Human-Readable Reports

Feature: Provide functionality in your tool to generate reports in human-readable formats.

While most evaluation tools present their results to the tool users, some only work in the background, for example to check the code typed into the editor of a Content Management System (CMS) or Integrated Development Environment (IDE), and do not provide feedback directly to the users (they relay the results to other tools that present the feedback). For evaluation tools that provide feedback directly to the users, this feature could entail providing several modalities for reporting. For example, evaluation tools could provide interactive widgets, such as sortable tables and dynamically generated graphs, to help users explore and analyze the results. They could also generate static documents in HTML and other formats that can be read independently from the tool (for example, bug reports for developers to fix). Different formats will be suitable for different audiences.

2.3.2 Machine-Readable Reports

Feature: Provide functionality in your tool to generate reports in machine-readable formats.

Providing evaluation results in machine-readable formats, including the results of individual tests and aggregated results, allows other tools to process these results. For example, people can build scripts and tools to analyze the results for monitoring and research purposes. This also facilitates the longevity of the data, especially when stored in open formats.

Many formats can be used to provide such machine-readable results, ranging from the Comma-Separated Values (CSV) format [CSV, TABDATA] to the more structured XML [XML10, XML11] and JSON [JSON] formats. The W3C Evaluation and Report Language (EARL) 1.0 [EARL10] is an international effort to standardize a vendor-neutral and platform-independent format that is specifically designed to facilitate the declaration and exchange of test results.
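
A minimal sketch of a machine-readable result set serialized as JSON; the structure is illustrative and is not the EARL vocabulary itself, which is expressed in RDF:

    import json

    # Illustrative machine-readable result set; this JSON structure is not
    # the EARL vocabulary itself (EARL is expressed in RDF), just a sketch.
    report = {
        "tool": "ExampleChecker 1.0",
        "subject": "http://example.com/",
        "results": [
            {"test": "img-alt", "wcag_sc": "1.1.1", "outcome": "failed",
             "pointer": "/html/body/img[3]"},
            {"test": "page-lang", "wcag_sc": "3.1.1", "outcome": "passed"},
        ],
    }
    print(json.dumps(report, indent=2))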

2.3.3 Report Customization

Feature: Provide functionality in your tool to allow tool users to customize the reports generated by the tool.

In addition to providing reports in different formats and modalities, it is often helpful to be able to customize the generated reports as well. For example, tool users might want to organize the evaluation results according to the different accessibility requirements, the target audiences, the types of results, the areas of the website that were evaluated, the type of content, and many other aspects. This also applies to graphs that can be generated, data that can be exported, and other aspects of your tool reporting.

2.3.3.1 Results Aggregation

Feature: Provide functionality in your tool to allow tool users to aggregate the results by different criteria.

A specific aspect of customization is the ability of the evaluation tool to aggregate the results according to different criteria. This could include the following aspects:

2.3.3.2 Importing Results

Feature: Provide mechanisms in your tool to import evaluation results from other evaluation tools.

Importing results from other evaluation tools can be useful, for example to extend the coverage of accessibility tests carried out (see test coverage for more background), and to combine, compare, or further analyze evaluation results. Together with results exporting functionality this feature provides flexibility that allows your evaluation tool to be integrated in a variety of development and testing environments. This may require specific APIs and the use of formats such as those described in machine-readable reports.

2.3.4 Persistence of Results

Feature: Provide mechanisms in your tool to support persistence of results over time and sessions.

To facilitate monitoring of progress over time, evaluation tools can help by storing and maintaining evaluation results, and retrieving them at later stages. This could include the use of a database or other persistence layers in the evaluation tool that allow the data to be kept across different sessions. Together with the report customization options discussed, this feature could allow the analysis and comparison of website accessibility performance over time.
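
A minimal sketch of such a persistence layer using SQLite from the Python standard library; the schema is hypothetical:

    import sqlite3

    # Sketch of a persistence layer: results are keyed by URL, test, and
    # evaluation date so that progress can be compared across sessions.
    db = sqlite3.connect("results.db")
    db.execute("""CREATE TABLE IF NOT EXISTS results
                  (url TEXT, test TEXT, outcome TEXT, checked_on TEXT)""")
    db.execute("INSERT INTO results VALUES (?, ?, ?, date('now'))",
               ("http://example.com/", "img-alt", "failed"))
    db.commit()

    # In a later session: compare failure counts over time.
    for row in db.execute("""SELECT checked_on, COUNT(*) FROM results
                             WHERE outcome = 'failed' GROUP BY checked_on"""):
        print(row)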

2.3.5 Repair Guidance

Feature: Provide functionality in your tool to supplement generated reports with repair guidance.

In addition to providing evaluation results, tools can also help designers and developers, especially those with less accessibility expertise, by providing repair suggestions for identified errors. This could include information about the accessibility requirements, the errors, examples, and links to tutorials and other resources. W3C provides useful material in WCAG 2.0 and its supporting documents, in particular in the Techniques for WCAG 2.0, Understanding WCAG 2.0, and the Web Accessibility Tutorials.

Evaluation tools can also actively help tool users in applying accessibility repairs to the web content, for example by prompting the tool users to provide input. However, fully automated repair is generally discouraged as it can often cause undesired side effects, such as introducing new errors. More guidance on providing such repair functionality can be found in the Implementing ATAG 2.0 guide, specifically in Part B and the appendices of the guide.

Note: As noted earlier, evaluation tools that provide repair functionality that modifies the content could be considered to be web authoring tools (see evaluation tools). At the same time, this feature of an evaluation tool can help authoring tools in meeting requirements defined by the Authoring Tool Accessibility Guidelines (ATAG) 2.0 [ATAG20].

2.4 Tool Usage

This section includes features that impact the overall usage of the evaluation tool, such as integration into the development environment and workflow, localization and customization of the user interface, and its compatibility with different platforms and technologies.

2.4.1 Workflow Integration

Feature: Provide mechanisms in your tool to support its integration in different workflow environments.

Ideally evaluation tools are integrated seamlessly throughout the design and development workflow, to facilitate evaluation throughout the process. There are several mechanisms in which evaluation tools can support workflow integration, such as the following:

2.4.2 Platform Support

Feature: State the specific operating systems and platforms that your tool can be used on.

Evaluation tools run on different operating systems and platforms. For example, software-based (desktop) evaluation tools may be available for one operating system but not for another. Also, evaluation tools that are used as part of other tools, for example as plug-ins and add-ons for web browsers, Content Management Systems (CMS), and Integrated Development Environments (IDE), might only support specific versions of these host tools. For example, a plug-in for one web browser may not be compatible with another web browser, or with the same web browser on a different operating system. Evaluation tools might also not provide the same functionality on different operating systems and platforms, and may require certain pre-installations and pre-configurations.

Given this complexity, it is essential to clearly communicate the specific operating systems and platforms that your evaluation tool can be used on. This includes relevant version numbers, configurations, and other aspects that are prerequisites for the tool to run properly. Also state any limitations that the tool may have on different platforms and configurations, where relevant.

2.4.3 Localization and Internationalization

Feature: State the specific natural languages and localization settings that your tool supports.

Providing localization and internationalization features is important to allow your evaluation tool to be usable in different languages and regions around the world. For example, all text provided by the tool, including text on the user interface, reports, and guidance materials, needs to be available in different languages to address different audiences. This typically implies supporting variable word lengths (words in one language may be longer or shorter in another) and text directions (e.g. left-to-right vs. right-to-left). Also, the use of units and measurements (e.g. comma vs. period as decimal separators) and the meanings of symbols and icons can differ significantly from one region to another. The layout of the user interface can also be affected by local customs and preferences. Respecting the operating system and platform settings and configuration is important in this context.

Tip: More information about localization and internationalization is provided by the W3C Internationalization Activity [W3Ci18n].

Tip: W3C Web Accessibility Initiative (WAI) provides references to WCAG 2.0 Translations, including its supporting documents.

Note: This feature relates to the evaluation tool itself rather than the language of the web content being evaluated. The latter is addressed by the feature content language.

2.4.4 Target Audience

Feature: Provide functionality in your tool to allow tool users to customize it to different target audiences.

Ideally evaluation tools address different audiences throughout the design and development process, for example those with more and less coding responsibility, and people with varying expertise in accessibility. Being able to address these different audiences supports the creation and maintenance of accessible web content.

Part of this can be accomplished through the report customization feature. For example, to provide detailed listings of bug reports for web developers, and aggregated summaries for web project managers and website owners. However, in some cases the user interface can be rearranged or otherwise adapted to better meet typical usage and expectations of different users. For example, the menu items and functionality presented by the user interface may be more or less appropriate for different tool users.

2.4.5 Tool Accessibility

Feature: Ensure that the user interface provided by your tool is accessible to people with disabilities.

While evaluation tools may often need to present the inaccessible web content being evaluated, for example to highlight errors or to seek confirmation of an error by the tool user, the surrounding user interface provided by the tool needs to be accessible to people with disabilities. The following resources are relevant to making your evaluation tool accessible:

2.4.6 Accessibility Standards

Feature: State the specific accessibility standards that your tool supports for evaluation.

The Web Content Accessibility Guidelines (WCAG) 2.0 [WCAG20] are widely recognized by organizations and governments around the world. In support of harmonization of web accessibility standards, it is recommended to support at least WCAG 2.0 (also published as ISO/IEC 40500), as the international standard. Where needed, for example due to regional derivatives and variations of WCAG 2.0, your tool could also provide support for local standards. Stating which standards your tool supports is important for clarity.

Note: This feature relates to the more granular features test coverage, testing modes, custom tests, and tests selection.

3 Example Profiles

This section presents a few examples of web accessibility evaluation tools, to illustrate how the features listed in the previous section can be combined in practice. The examples are realistic but fictitious; they are provided for illustration purposes only and do not represent any particular existing product. The example tools are described in the following sections, and their features are compared side-by-side in tabular form later on in this section.

3.1 Example Tool A: In-Page Evaluation

Example Tool A is a plug-in for a specific web browser. It builds on the rendering engine of that web browser. It performs automatic accessibility checks on the rendered web content and presents the results directly in the web page by injecting icons into the content being presented to the tool user. More specifically:

The tabular comparison below maps this tool to each of the features described in section list of features.

3.2 Example Tool B: Large-Scale Evaluation

Example Tool B is a crawler that is designed to monitor many hundreds of websites, each with large volumes of web pages. It provides a service for website owners to obtain reports about their websites, and to provide authentication and configuration parameters so that the crawler can access more web pages. More specifically:

The tabular comparison below maps this tool to each of the features described in section list of features.

3.3 Example Tool C: Mobile Applications

Example Tool C is a desktop application that helps evaluate web-based mobile applications during their development. It is designed for application developers. It emulates different mobile operating systems and web browsers, and provides additional functionality, such as listing the triggered events, to assist testing. More specifically:

The tabular comparison below maps this tool to each of the features described in section list of features.

3.4 Tabular Comparison

This section provides a table that lists each of the example tools described in the preceding sections, and maps them to each of the features listed in section list of features. The tools are listed side-by-side to facilitate comparison of the different types of features that each tool provides. The purpose of this mapping is to illustrate how the tool features are combined in different types of tools.

Table 1. Tabular mapping and comparison of the example tool features.
Category | Tool Feature | Example Tool A | Example Tool B | Example Tool C
Resources to be Evaluated | Content Formats | HTML, CSS and JavaScript | HTML, CSS and JavaScript | HTML, CSS and JavaScript
 | Content Negotiation | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable
 | Cookies | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable
 | Authentication | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable
 | Session Tracking | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable
 | Character Encoding | ISO-8859-1, UTF-8, UTF-16 | ISO-8859-1, UTF-8 | ISO-8859-1, UTF-8
 | Content Language | any language supported by these encodings: ISO-8859-1, UTF-8, UTF-16 | any language supported by these encodings: ISO-8859-1, UTF-8 | any language supported by these encodings: ISO-8859-1, UTF-8
 | Content Rendering | rendered DOM (relies on browser capabilities) | rendered DOM (rendering engine) | rendered DOM (rendering engine)
 | Document Fragments | no | no | no
 | Crawling | no | yes | no
 | Sampling | no | yes | no
Performing Evaluation Checks | Test Coverage | no | yes | no
 | Testing Modes | only automatic | all | all
 | Custom Tests | no | no | no
 | Tests Selection | no | yes | no
 | Test Automation | no | no | yes
 | Manual Evaluation | no | no | yes
Reporting and Monitoring | Human-Readable Reports | via UI icons | dashboard; HTML report | dashboard
 | Machine-Readable Reports | EARL | EARL | none
 | Report Customization | no | comments/results added by evaluator | no
 | Results Aggregation | no | yes | no
 | Importing Results | EARL | EARL, CSV | no
 | Persistence of Results | no | yes | no
 | Repair Guidance | inline hints | in report | yes
Tool Usage | Workflow Integration | browser plug-in | stand-alone client+server application | stand-alone desktop application
 | Platform Support | browser add-on | distributed enterprise application with an external database | desktop application
 | Localization and Internationalization | en | en, de, fr, es, jp | en
 | Target Audience | developers | developers, commissioners | developers
 | Tool Accessibility | not accessible | accessible under Microsoft Windows | not accessible
 | Accessibility Standards | WCAG 2.0 | WCAG 2.0, Section 508 (USA), BITV 2.0 (Germany) | WCAG 2.0

4 References

ATAG20
Authoring Tool Accessibility Guidelines (ATAG) 2.0. W3C Candidate Recommendation 7 November 2013. Jan Richards, Jeanne Spellman, Jutta Treviranus (editors). Available at: http://www.w3.org/TR/ATAG20/
See also ATAG Overview
CSS2
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS3
CSS Current Status is available at: http://www.w3.org/standards/techs/css
CSV
Common Format and MIME Type for Comma-Separated Values (CSV) Files. Y. Shafranovich. Internet Engineering Task Force (IETF). Request for Comments: 4180, 2005. Available at: http://tools.ietf.org/rfc/rfc4180.txt
DOM
W3C DOM4. W3C Last Call Working Draft 10 July 2014. Anne van Kesteren, Aryeh Gregor, Ms2ger, Alex Russell, Robin Berjon (editors). Available at: http://www.w3.org/TR/dom/
EARL10
Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
See also EARL Overview
ECMAScript
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML4
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Recommendation 28 October 2014. Robin Berjon, Steve Faulkner, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer, Ian Hickson (editors). Available at: http://www.w3.org/TR/html5/
HTTPCOOKIES
HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
I18N
Internationalization and localization. Wikipedia. Available at: http://en.wikipedia.org/wiki/Internationalization_and_localization
JSON
The JSON Data Interchange Format. Standard ECMA-404 1st Edition / October 2013. Available at: http://www.ecma-international.org/publications/standards/Ecma-404.htm
MathML
Mathematical Markup Language (MathML) Version 3.0 2nd Edition. W3C Recommendation 10 April 2014. David Carlisle, Patrick Ion, Robert Miner (editors). Available at: http://www.w3.org/TR/MathML/
ODF
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
OOXML
TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
RFC2119
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
TABDATA
Model for Tabular Data and Metadata on the Web. W3C First Public Working Draft 27 March 2014. Jeni Tennison, Gregg Kellogg (editors). Available at: http://www.w3.org/TR/tabular-data-model/
SMIL
Synchronized Multimedia Integration Language (SMIL 3.0). W3C Recommendation 1 December 2008. Dick Bulterman, Jack Jansen, Pablo Cesar, Sjoerd Mullender, Eric Hyche, Marisa DeMeglio, Julien Quint, Hiroshi Kawamura, Daniel Weck, Xabiel García Pañeda, David Melendi, Samuel Cruz-Lara, Marcin Hanclik, Daniel F. Zucker, Thierry Michel (editors). Available at: http://www.w3.org/TR/SMIL/
SVG
Scalable Vector Graphics (SVG) 1.1 (Second Edition). W3C Recommendation 16 August 2011. Erik Dahlström, Patrick Dengler, Anthony Grasso, Chris Lilley, Cameron McCormack, Doug Schepers, Jonathan Watt (editors). Available at: http://www.w3.org/TR/SVG/
W3Ci18n
W3C Internationalization (I18n) Activity. Available at: http://www.w3.org/International/
WAI-ARIA
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Recommendation 20 March 2014. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
WCAG20
Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
See also WCAG Overview
WCAG20-TECHS
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 16 September 2014. Michael Cooper, Andrew Kirkpatrick, Joshue O Connor (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
WCAG2ICT
Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT). W3C Working Group Note 5 September 2013. Michael Cooper, Peter Korn, Andi Snow-Weaver, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG2ICT/
See also WCAG2ICT Overview
WEBCRAWLER
Web crawler. Wikipedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
WebDriver
WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/
XHTML10
XHTML™ 1.0 The Extensible HyperText Markup Language (Second Edition). A Reformulation of HTML 4 in XML 1.0. W3C Recommendation 26 January 2000, revised 1 August 2002. Available at: http://www.w3.org/TR/xhtml1/
XML10
Extensible Markup Language (XML) 1.0 (Fifth Edition). W3C Recommendation 26 November 2008. Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau (editors). Available at: http://www.w3.org/TR/REC-xml/
XML11
Extensible Markup Language (XML) 1.1 (Second Edition). W3C Recommendation 16 August 2006, edited in place 29 September 2006. Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau, John Cowan (editors). Available at: http://www.w3.org/TR/xml11/

Acknowledgements

The editors would like to thank the contributions from the Evaluation and Repair Tools Working Group (ERT WG), and especially from Yod Samuel Martín, Philip Ackermann, Evangelos Vlachogiannis, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo, and Konstantinos Votis.

This publication was developed with support from the WAI-ACT project, co-funded by the ICT initiative under the European Commission's Seventh Framework Programme.