This implementation report describes how the Protocols and Formats Working Group tested and demonstrated implementability of the two specifications above. Accessible Rich Internet Applications (WAI-ARIA) 1.0 was published as a W3C Candidate Recommendation on 18 January 2011. WAI-ARIA 1.0 User Agent Implementation Guide was published as a W3C Candidate Recommendation on 17 December 2013. Candidate Recommendation is the maturity stage for a specification at which it is believed to be complete and implementable. To move to the next stage, the Working Group demonstrates implementability of the requirements, usually by documenting at least two interoperable implementations of each feature. The mechanism to do this depends on the nature of the specification. Because these two specifications are interdependent, implementation experience for both is provided in this combined report.
This is an implementation report for Accessible Rich Internet Applications (WAI-ARIA) 1.0 and for the WAI-ARIA 1.0 User Agent Implementation Guide (UAIG). The report for these two specifications is combined because of the tight inter-dependency between them, described further below.
WAI-ARIA 1.0 transitioned to Candidate Recommendation on 18 January 2011; the Candidate Recommendation transition meetings took place on 22 December 2010 and 7 January 2011. WAI-ARIA 1.0 User Agent Implementation Guide transitioned to Candidate Recommendation on 17 December 2013; its transition meeting took place on 13 December 2013.
WAI-ARIA and the UAIG are closely related specifications. While they entered Candidate Recommendation on different dates, it was always the plan—stated in the ARIA Candidate Recommendation—that they would advance to Proposed Recommendation together.
Though the two specifications are closely inter-related, they are published separately. The UAIG primarily provides user agent implementation procedures for the features of WAI-ARIA, and thus depends on WAI-ARIA. In turn, implementation of WAI-ARIA is demonstrated by testing the implementation requirements specified by the UAIG, so WAI-ARIA also depends on the UAIG. Because each specification requires the other to demonstrate its implementation requirements, the implementation report for both is provided together here.
The WAI-ARIA Exit Criteria and the UAIG Exit Criteria are quite similar. They specify that tests will be prepared and executed, and passing results on two implementations will constitute demonstration of interoperability. Because many of the tests depend on mapping to accessibility APIs which often have only one implementation, successful mapping of a given WAI-ARIA feature need only be demonstrated for any accessibility API, not necessarily each accessibility API.
In general, a distinct implementation is counted as a distinct user agent on a given operating system mapping to a particular platform. For instance, Firefox on Windows mapping to MSAA + UIA Express is one implementation, and Safari on OS X mapping to AXAPI is another. This pattern ensures that implementations are fully distinct from each other. For approximately 5% of test cases, however, versions of a single user agent mapping to different platforms were used to demonstrate interoperability. These were Firefox on Windows mapping to MSAA + UIA Express, and Firefox on Linux mapping to ATK / AT-SPI. Because the same user agent was used, it was important to verify that the implementations were nonetheless distinct. Working with the implementer, the Working Group confirmed that the code base for WAI-ARIA mapping on the two platforms is separate and maintained by different developers. Therefore these two versions of Firefox can be considered distinct for purposes of demonstrating interoperability of the specification; this is further demonstrated in portions of the test report where Firefox passes on one platform while failing on the other (such as test case 8). Even after assuring itself that the two Firefox implementations were in fact distinct, the Working Group went to some lengths to minimize reliance on them: only when it was clear, after several rounds of discussion, that other user agents would not be able to repair bugs in a realistic timeframe did the group rely on the two versions of Firefox.
Only sections of the specifications that contain normative requirements are tested. Non-normative recommendations, while valuable, were not treated as requirements because they are not implementable in all situations; RFC 2119 "SHOULD" and "MAY" statements were therefore considered non-normative for this purpose. The normative requirements tested comprise RFC 2119 "MUST" statements and other requirements in sections stating "This section is normative". For the WAI-ARIA specification, this includes The Roles Model, Supported States and Properties, Implementation in Host Languages, and Conformance; for the UAIG, this includes Supporting Keyboard Navigation, Mapping WAI-ARIA to Accessibility APIs, and Special Document Handling Procedures.
The design of WAI-ARIA supports specialized interfaces, called accessibility APIs (AAPI), which can be used to communicate accessibility information about user interfaces to assistive technologies. The UAIG addresses several platform accessibility APIs, including MSAA + UIA Express, Microsoft UIA, ATK / AT-SPI, and AXAPI.
These APIs are each defined for a particular operating system and together form a platform. The accessibility features of these platforms are defined by operating system vendors and are beyond the scope of WAI-ARIA. User agents expose semantics provided by WAI-ARIA markup to the platform as prescribed by the UAIG. Therefore, the implementations tested are these user agents on specific platforms; those tested include Firefox (on Windows and Linux) and Safari (on OS X).
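To make the mapping task concrete, the following is a hypothetical fragment of the kind of markup a test file might contain (illustrative only, not taken from the test suite). Per the UAIG mapping tables, a conforming user agent exposes the slider semantics through whatever AAPI it supports, for example as ROLE_SYSTEM_SLIDER under MSAA or AXSlider under AXAPI:

```html
<!-- Hypothetical ARIA widget. A conforming user agent exposes the
     slider role and its current/minimum/maximum values through the
     platform accessibility API. -->
<div id="volume" role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="42">
</div>
```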
Various versions of user agents were tested because implementation of WAI-ARIA progressed over the course of the testing period. Testing also uncovered implementation bugs, which were fixed, and updated versions of the user agents were then tested. A passing result in any version of a user agent was counted as one of the two implementations required to demonstrate interoperable implementability of the relevant ARIA feature.
Assistive technologies (AT) use the information exposed to the platform AAPI to support accessible interaction according to the needs of a specific user. How AT do this is specific to the particular AT and user and is not specified by WAI-ARIA. Therefore, AT support for WAI-ARIA was not tested.
A test suite was prepared including testable statements, test files, and expected results. Test files are located in a Mercurial repository. Testable statements, expected results, and a framework to support test execution and results collection are in the test harness.
A key characteristic of the UAIG is that it defines mappings, for each WAI-ARIA feature, to more than one platform. The appropriate expected result for a given test case therefore depends on the platform being tested, so the test harness allows multiple expected results to be defined for each test case and associates each with the appropriate platform. Testing of ARIA features depends on these mappings; the UAIG is thus a critical resource for ARIA testing, and is itself largely tested by the ARIA testing process.
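As an illustration of why per-platform expected results are needed, consider a hypothetical checkbox test case; the property a tester inspects differs by AAPI. The mappings in the comment below are indicative of the general pattern, not quoted from the test harness:

```html
<!-- Hypothetical test case: the checked state must be exposed to the AAPI. -->
<div role="checkbox" aria-checked="true" tabindex="0">Subscribe</div>
<!-- Indicative expected results, one per platform:
     MSAA:        state includes STATE_SYSTEM_CHECKED
     ATK/AT-SPI:  state set includes CHECKED
     AXAPI:       AXValue of 1 -->
```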
Testing is organized into "test runs," which contain metadata about the user agent and platform being tested; when testers change user agents, or even versions, they create a new test run. This ensures that test results are clearly associated with a particular version of a user agent and remain valid for that version, while allowing collection of new results for updated versions. For this reason the test report may include both failing and passing results for some tests on a given user agent, representing older and newer test runs.
When a tester executes a test run, they are presented with a list of applicable testable statements, with the option to test each one. Opening the test window presents the testable statement and the expected result appropriate to the platform, the test file itself, and buttons to indicate the test result. Testers can enter only one result per test case per test run, but can edit previous results.
The majority of ARIA test cases investigate how ARIA features, normally provided in an HTML document, are exposed to platform accessibility APIs. Because assistive technology was excluded from testing to avoid confounding results, determining test results requires inspection of the accessibility API with dedicated tools. Testers would open the test file in a user agent, perform any interaction instructions, and then use the AAPI inspector to find the object corresponding to the test element in the HTML file. Expected results for the test cases indicate the object properties of interest, which the tester could compare to those found in the inspector. This was a manual process: AAPI inspectors are mostly utility programs that do not provide complex automation support, and the group did not take time to develop an automated solution.
The following AAPI inspectors were used by testers:
Some tests, particularly for the UAIG, do not require AAPI inspection; they test the behavior of the user agent under certain conditions. Although such tests are more easily automated than AAPI inspection tests, they were also run manually because their number was too small to merit automation. In some cases, script set up an environment for the test after the page loaded, or testers were instructed to perform a particular interaction and observe the result.
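A sketch of what such a behavioral test might look like (hypothetical; the actual test files live in the Mercurial repository noted above). Script sets up the condition after the page loads, and the tester follows the embedded instruction:

```html
<!-- Hypothetical behavioral test requiring no AAPI inspection. -->
<div id="target" role="button" tabindex="-1">Activate me</div>
<p>Instruction: after the page loads, verify that the element above has
   received focus even though tabindex="-1" keeps it out of the tab order.</p>
<script>
  // Set up the test environment once the page has loaded.
  window.onload = function () {
    document.getElementById("target").focus();
  };
</script>
```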
Implementation of WAI-ARIA was done in close collaboration with the Protocols and Formats Working Group. Preliminary implementations were created years prior to the ARIA Candidate Recommendation, as an iterative way to develop the candidate technology. Many implementers kept their implementations up to date as the specification matured, and submitted feedback. When formal testing began, implementations were believed to be mature; however, many bugs that had previously been overlooked for lack of test cases came to light. During the testing period, most implementers had representatives in the Working Group, who worked together via task forces to define implementation expectations, compare progress, discuss issues, etc. Implementers without direct representation had identified contact persons with whom the group could coordinate. These two channels allowed the group to track the progress of implementations, file bugs directly with implementers, verify implementation fixes, and resolve interoperability issues.
After WAI-ARIA 1.0 was published as a Candidate Recommendation, the Protocols and Formats Working Group set about collecting implementations to demonstrate satisfaction of the Candidate Recommendation Exit Criteria. Work included:
When a specification is published as a Candidate Recommendation, the Working Group may identify that some features are "at risk". This means that the group is not sure if it will find implementation of the feature, and specifies what action will be taken at the end of the Candidate Recommendation period. This is the only mechanism allowing normative change after the publication of the Candidate Recommendation; any normative requirements not marked as "at risk" must either be implemented or the specification returned to Working Draft.
WAI-ARIA defined one feature at risk in the Candidate Recommendation:
The Text Alternative Computation (Section 5.2.7.3), step 2B, may be changed from a normative requirement to an informative recommendation if interoperable implementations are not found. This does not affect the rest of the Text Alternative Computation.
Over the course of testing and implementation, this item was successfully implemented and the risk action has not been taken.
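For readers unfamiliar with the feature, the text alternative computation derives an accessible name for an element from markup. A minimal illustration of the computation in general (hypothetical, and deliberately not isolating the at-risk step 2B):

```html
<!-- The accessible name of the input is computed from the element
     referenced by aria-labelledby: "Search the site". -->
<span id="search-label">Search the site</span>
<input type="text" aria-labelledby="search-label">
```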
The UAIG defined four features at risk in the Candidate Recommendation:
- Steps 10 and 11 of Controlling focus with tabindex (Section 4.2), which broaden keyboard accessibility by event simulation, may be removed as a normative requirement if interoperable implementations are not found. The remaining steps in this section are not at risk and would not be removed.
- The Microsoft UIA column in the mapping tables for sections 4.1, 5.4.1, 5.5.1, 5.8.1, 5.8.2, 5.8.3, and 5.8.4 may be removed if an implementation is not found. The remaining AAPI columns would not be removed.
- The Text Alternative Computation (Section 5.6.1.1), step 2B, may be changed from a normative requirement to an informative recommendation if interoperable implementations are not found. This does not affect the rest of the Text Alternative Computation.
- Section 5.8.4 Special Events for Menus is only implemented on one platform. If an additional implementation is not found, requirements in this section will be changed to recommendations. Their status as requirements would be reconsidered for a future version of this specification.
The first two items have not found sufficient implementation. Accordingly, the Working Group has removed steps 10 and 11 of Section 4.2, and removed the Microsoft UIA column from the mapping tables for sections 4.1, 5.4.1, 5.5.1, 5.8.1, 5.8.2, 5.8.3, and 5.8.4.
The group was able to document implementation of the text alternative computation and special events for menus. Therefore, the risk action has not been taken for items 3 and 4 in the list above.
Below is a report of the implementation of each of the normative requirements of WAI-ARIA 1.0. Columns show the user agents and platforms tested. For each requirement, a user agent may pass or fail; empty cells indicate that no data was collected. Testers could also indicate that they were uncertain about the outcome or thought the test case invalid, triggering test case review and retest. Working Group review determined when "pass" results were valid in the face of other recorded results, e.g., because the test case had been changed or a new version of the user agent had been released with the relevant bug fixed.
The link in the first column of the table, ID, shows details about the test results collected for the particular test case, including tester, date of test, and user agent and version tested. In some test cases, failures were originally reported; the same or another tester later retested with an updated version of the user agent and reported a pass.
The link in the second column of the table, Ref, links to the section of the specification relevant to the test case, when available in the database. Many testable requirements in the specification are expressed in tables of properties that lack individual IDs, so test cases reference the closest relevant section heading; there are therefore sometimes many test cases associated with a particular section.