The Automated WCAG Monitoring community group, founded in 2014, is a W3C community group focused on developing reliable test cases that can be fully or semi-automated. We aim to help developers of test tools improve the accuracy and completeness of their tools.
Creating (semi-)automated tests for WCAG is key to affordable, large-scale research. The tests are designed to be usable by people with a variety of skills. The results, too, should be informative not just to developers, but to website managers, policy makers, disability advocates and other interested parties.
Web accessibility testing is highly reliant on human judgement. It also requires a significant understanding of both web technologies and assistive technologies. This makes automated testing, and testing by people lacking these skills, challenging. The Auto-WCAG community group believes that by taking on these challenges we can enable developers to solve part of the accessibility question before ever involving accessibility experts. Problems can then be caught earlier in development, and accessibility experts can use their time more efficiently. Both of these lead to more accessible products.
What We Do
The objective of this community is to create and maintain tests that can be implemented in large-scale monitoring tools for web accessibility. These tests are either automated, or semi-automated, in which tools assist non-expert users in evaluating web accessibility. The test cases are small, atomic tests that check whether specific elements on a web page meet WCAG 2.0 success criteria. Each test case has a selector, which picks out one ‘type’ of content on a web page. That piece of content is then run through a series of automatic or manual steps. The test case returns one of the following values: Passed, CannotTell or Failed.
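As an illustration, the shape of such an atomic test can be sketched in a few lines of Python. This is not an Auto-WCAG artifact; the element representation and function names are invented here, but the structure follows the description above: a selector narrows the page down to one type of content, a step examines each selected element, and the outcome is one of Passed, CannotTell or Failed.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    PASSED = "Passed"
    CANNOT_TELL = "CannotTell"
    FAILED = "Failed"

@dataclass
class TestResult:
    element: str
    outcome: Outcome

def select_images(elements):
    """Selector: pick out the one 'type' of content this test applies to."""
    return [e for e in elements if e["tag"] == "img"]

def check_text_alternative(element):
    """Automated step for an image text-alternative check (in the spirit of
    WCAG 2.0 SC 1.1.1): decide Passed / CannotTell / Failed for one element."""
    alt = element.get("alt")
    if alt is None:
        return Outcome.FAILED       # no alt attribute at all
    if alt.strip() == "":
        return Outcome.CANNOT_TELL  # empty alt may be decorative; a human must judge
    return Outcome.PASSED

def run_test(page):
    """Run selector and step over a page (here: a list of element dicts)."""
    return [TestResult(e["src"], check_text_alternative(e))
            for e in select_images(page)]

# A toy page: three images and a paragraph (which the selector skips).
page = [
    {"tag": "img", "src": "logo.png", "alt": "ACME Corp"},
    {"tag": "img", "src": "spacer.gif", "alt": ""},
    {"tag": "p"},
    {"tag": "img", "src": "chart.png"},
]
results = run_test(page)
```

Running this over the toy page yields Passed for the labelled logo, CannotTell for the empty-alt spacer (a manual step would resolve it), and Failed for the image with no alt attribute at all.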
By comparing the test results with results from expert accessibility evaluators, we aim to track the accuracy of the tests we’ve developed. This allows iterative improvement and adjustment of the tests as web development practices change and evolve. It also provides the statistical basis on which large-scale accessibility monitoring and benchmarking can be built.
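One simple way to quantify such a comparison, sketched here under the assumption that both the tool and the expert assign an outcome string to each examined element, is plain agreement: the fraction of elements on which the automated outcome matches the expert verdict. Auto-WCAG's actual methodology may be more elaborate; this only illustrates the idea.

```python
def agreement(tool_results, expert_results):
    """Fraction of commonly examined elements where the automated outcome
    matches the expert verdict. Inputs map element identifiers to outcomes.
    Returns None if the two evaluations share no elements."""
    common = tool_results.keys() & expert_results.keys()
    if not common:
        return None
    matches = sum(1 for k in common if tool_results[k] == expert_results[k])
    return matches / len(common)

tool = {"logo.png": "Passed", "chart.png": "Failed", "spacer.gif": "CannotTell"}
expert = {"logo.png": "Passed", "chart.png": "Failed", "spacer.gif": "Passed"}
score = agreement(tool, expert)  # 2 of 3 elements agree
```

Tracking this score over time, per test case, is what makes the iterative improvement measurable: a change to a test should move its agreement with expert judgement up, not down.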