

Automated WCAG Monitoring Community Group

This group has been replaced by the ACT Rules Community Group - it was closed on March 2, 2020.

Creating (semi-)automated tests for WCAG is key to affordable, large scale research. The tests are designed in a way that they are usable by people with a variety of skills. The results too should be informative, not just to developers, but to website managers, policy makers, disability advocates and others. The objective of this community is to create and maintain tests that can be implemented in large scale monitoring tools for web accessibility. These tests will be either automated or semi-automated, where tools assist non-expert users to evaluate web accessibility. By comparing the test results with results from expert accessibility evaluators, we aim to track the accuracy of the tests we've developed. This allows for an iterative improvement and adjustment of the tests as web development practices change and evolve. It also provides the statistical basis on which large scale accessibility monitoring and benchmarking can be built. This group will not publish specifications.


Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.


Auto-WCAG workshop speakers confirmed!

We are proud to announce Shadi Abou-Zahra, Jesse Beach and Eric Velleman as speakers for the first Auto-WCAG workshop in June.

Shadi Abou-Zahra (W3C) will give a presentation about EARL, Jesse Beach (Facebook) will share her knowledge about Quail 3, and Eric Velleman (Accessibility Foundation) will talk about WCAG-EM.

The presentations will be held in the afternoon and will be broadcast live. The exact times and days will soon be announced.

About the event

The workshop is a three day event from the 15th to the 17th of June in Utrecht, The Netherlands. The primary focus of the workshop is to write and review additional test cases for automating WCAG conformance testing. The afternoon sessions will be broadcast for those unable to attend in person to participate online.

The program and extra information can be found on the Auto-WCAG wiki: Workshop June 2015

It is still possible to register. To sign up for the workshop, send an e-mail to info@accessibility.nl and let us know whether you will be attending in person or online.

4 Technical Difficulties of Automated Accessibility Testing

Jesse Beach is a software developer who builds tools for other developers. Visit her on Github: https://github.com/jessebeach

Accessibility evaluation is a method to determine deviation from best practices and standards. Humans do this, for the most part, very well and slowly. Machines do this, for the most part, unevenly and yet at great speed.

During the several years I’ve spent exploring techniques for automated accessibility testing in Quail, I recognized a few types of persistent challenges. Let’s go through them.

Identifying generic DOM elements

By identify, I don't mean finding elements. That's pretty easy to do with selectors. The difficulty is uniquely identifying a single element so that it can be referenced later. Let's say we have a list of links with images in them, used for navigation, marked up something like this:

<ul>
  <li><a href="/home"><img src="home.png" alt="Home"></a></li>
  <li><a href="/news"><img src="news.png" alt="News"></a></li>
  <li><a href="/contact"><img src="contact.png" alt="Contact"></a></li>
</ul>

Providing a unique selector for, as an example, the second link in the list, is difficult.

ul li:nth-child(2) a {}

Certainly we could be more specific about the parent-child relationship by using the '>' child combinator.

ul > li:nth-child(2) > a {}

And perhaps even include the href attribute.

ul > li:nth-child(2) > a[href="/news"] {}

It's likely that this selector will be unique on the page, but it's not guaranteed. With Quail, we take into account several attributes, like href, to help make a DOM element unique. Obviously, we also look for IDs, which presumably are only used once on a page, but even that isn't guaranteed! In writing this article, I realized we should also be including relative DOM ordering. That's why we write articles about our work, in order to learn.
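
As a rough, hypothetical sketch of the idea (the function name and details are illustrative, not Quail's actual implementation), a selector builder might walk up from the element, using an ID when one exists and falling back to tag names, attributes like href and :nth-child positions:

// Illustrative sketch only, not Quail's implementation.
function buildSelector(element) {
  var parts = [];
  var current = element;
  while (current && current.nodeType === 1 && current !== document.documentElement) {
    // An ID should be unique, so we can stop walking up (though even
    // that isn't guaranteed in the wild).
    if (current.id) {
      parts.unshift('#' + current.id);
      break;
    }
    var part = current.tagName.toLowerCase();
    if (part === 'a' && current.getAttribute('href')) {
      part += '[href="' + current.getAttribute('href') + '"]';
    }
    // Record the element's position among its element siblings.
    var index = 1;
    var sibling = current.previousSibling;
    while (sibling) {
      if (sibling.nodeType === 1) {
        index++;
      }
      sibling = sibling.previousSibling;
    }
    part += ':nth-child(' + index + ')';
    parts.unshift(part);
    current = current.parentNode;
  }
  return parts.join(' > ');
}

For the navigation example above, a sketch like this would produce something along the lines of body:nth-child(2) > ul:nth-child(1) > li:nth-child(2) > a[href="/news"]:nth-child(1), which is unique but brittle.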

In a perfect system, any specification violation would be associated with a unique selector identifying the element in a document associated with the violation. In the wild, unwashed web, we can never rely on this to be true.

Testing the DOM in the DOM

For some standards, such as color contrast, we need to generate a rendered web page. This is necessary because CSS is a complex beast. Browser vendors have a decade and a half of experience interpreting and rendering DOM representations with CSS applied. End users use browsers as well. So they are the best tools we have to understand how HTML and CSS will combine to produce visual output. What I'm saying is, the environment we consume content in is also the environment we're running our tests in. In other words, the inmates are definitely running this asylum.
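
To make that concrete, here is a minimal, hypothetical sketch of the kind of check that only makes sense against a rendered page: it reads the computed colors with getComputedStyle and applies the WCAG contrast ratio formula. The function names are illustrative, not Quail's API.

// Illustrative sketch only. A real tool also has to walk up the tree to
// find the effective background, since backgroundColor is often
// "rgba(0, 0, 0, 0)" (transparent) on the element itself.
function relativeLuminance(rgb) {
  var channels = [];
  for (var i = 0; i < 3; i++) {
    var c = rgb[i] / 255;
    channels[i] = c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  }
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

function parseColor(cssColor) {
  // Naive: expects the "rgb(r, g, b)" form that getComputedStyle returns.
  var match = /rgba?\((\d+),\s*(\d+),\s*(\d+)/.exec(cssColor);
  return match ? [+match[1], +match[2], +match[3]] : [0, 0, 0];
}

function contrastRatio(element) {
  var style = window.getComputedStyle(element);
  var fg = relativeLuminance(parseColor(style.color));
  var bg = relativeLuminance(parseColor(style.backgroundColor));
  var lighter = Math.max(fg, bg);
  var darker = Math.min(fg, bg);
  return (lighter + 0.05) / (darker + 0.05);
}

None of this works on a raw HTML string; it needs the browser's own computed values.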

My favorite and most frustrating set of evaluations concerns invalid HTML. Here is an example.

<p><em>pots of gold</p></em>

Notice that the closing tags are incorrectly nested. Here’s another.

<p>baskets of fruit

Notice the p tag isn't closed. This one is tricky because it's actually valid HTML; the closing </p> tag is optional. A browser will handle this just fine. Now, what about this example below.

<p><div>strings of pearls</div></p>

That will cause a ruckus on your page. A browser will attempt to fix this nesting error, but the results are unpredictable. And remember, when we query an HTML document, we’re not querying the text that was sent to the browser, we’re querying the DOM. The DOM is a model of the HTML text document it received from the server. Our automated assessments are interacting with a mediated HTML representation, which is often fine. But in the case of malformed HTML, it hides the problems from us.

In the current version of Quail, we make an AJAX call for the host page, get the text payload and run that string through tests for malformed HTML. It’s by no means ideal. The better solution would be to do assessments of malformed HTML outside a browser altogether and this is something we will implement in the future.
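
As a rough sketch of what that looks like in practice (the function names are illustrative, not Quail's actual code), the raw markup is fetched and scanned outside of the DOM:

// Illustrative sketch only. Fetch the raw HTML text of the current page;
// the DOM would already have "repaired" any malformed markup.
function fetchPageSource(callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', window.location.href, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText);
    }
  };
  xhr.send();
}

// Deliberately naive nesting check: it also flags omitted optional end
// tags and gets confused by tags inside scripts or comments. A real
// implementation needs proper HTML parsing rules.
function findNestingErrors(html) {
  var voidElements = { area: 1, base: 1, br: 1, col: 1, embed: 1, hr: 1,
    img: 1, input: 1, link: 1, meta: 1, param: 1, source: 1, track: 1, wbr: 1 };
  var tagPattern = /<(\/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  var stack = [];
  var errors = [];
  var match;
  while ((match = tagPattern.exec(html)) !== null) {
    var isClosing = match[1] === '/';
    var name = match[2].toLowerCase();
    if (voidElements[name]) {
      continue;
    }
    if (!isClosing) {
      stack.push(name);
    } else if (stack.length && stack[stack.length - 1] === name) {
      stack.pop();
    } else {
      errors.push('Closing tag out of order: </' + name + '>');
    }
  }
  return errors;
}

fetchPageSource(function (source) {
  console.log(findNestingErrors(source));
});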

Writing really basic JavaScript

JavaScript was born a humble scripting language. In the years since its introduction, it has become a richer, more versatile language. If you work in an environment, like a server, where you can control the version of JavaScript that you run, then you probably revel in taking advantage of the latest language improvements. These include better Object and Array methods, for example.

In the world of automated accessibility testing, we want to provide the widest coverage for tests that we can. Practically this means our assessments must be written in the most plain of plain JavaScript possible. Bare-metal JS I call it. There are no forEach methods or Weak Maps. jQuery is best used around the 1.6 version when its feature set firmed up and it had excellent support for finicky, older browsers.
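
As a hypothetical illustration, a loop that a modern codebase might write with querySelectorAll and forEach gets spelled out by hand instead:

// Illustrative only: collect the images inside links without forEach,
// querySelectorAll or any other API an older browser might not have.
function getLinkedImages(container) {
  var results = [];
  var anchors = container.getElementsByTagName('a');
  for (var i = 0; i < anchors.length; i++) {
    var images = anchors[i].getElementsByTagName('img');
    for (var j = 0; j < images.length; j++) {
      results.push(images[j]);
    }
  }
  return results;
}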

Cross-browser testing

One of the shortcomings of Quail early on was our singular testing platform — PhantomJS. PhantomJS is what is known as a headless browser. Unlike a web browser you use to view the internet, a headless browser has no visual component. It is lightweight and meant to render pages as if they were to be displayed — it just doesn't display them. PhantomJS has also been on the cusp of a major version release for years now. It's a wonderful tool, but not without frustrating shortcomings.

To really test a web page, you need to run it in browsers: various versions on various platforms. To do this, the assessments need to be run through a web driver that knows how to talk to various browser executables. This infrastructure is much more complex than a test runner that spins up PhantomJS. Quail (a wonderful tool with frustrating shortcomings), is itself on the cusp of a major version release. We are introducing support for test runs using WebdriverIO and Selenium.

Selenium will allow us to run assessments in different browsers. Many thanks to OpenSauce for making this sort of testing available to open source projects!
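
For the curious, here is a minimal sketch of what a WebdriverIO-driven run of that era might look like; the capabilities and URL are placeholders, and the actual Quail assessment step is left out:

var webdriverio = require('webdriverio');

// Placeholder capabilities; Sauce Labs credentials and the Quail
// assessment run itself are omitted.
var options = {
  desiredCapabilities: {
    browserName: 'firefox'
  }
};

webdriverio
  .remote(options)
  .init()
  .url('http://example.com/')
  .getTitle().then(function (title) {
    console.log('Evaluating: ' + title);
  })
  .end();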

Summing up

Writing automated tests for accessibility specifications will challenge you as a developer and in a good way. You’ll need to understand the fundamentals of the technologies that you use every day to get your work done. It’s like doing sit-ups or jogging; you’re in better shape for any sort of physical activity if you practice these basics often. Anyone is welcome to join the Quail team in translating the work of the Auto-WCAG Monitoring Group into coded examples of accessibility guideline assessments. Visit us on our Github project page, check out the tasks and propose a Pull Request!

About the author


Jesse Beach is a software developer at Facebook who builds tools for other developers. Visit her on Github: https://github.com/jessebeach

First Test Case of Success Criterion 1.1.1 finished

The group has finished the test case for provision of short text alternatives as required by WCAG Success Criterion 1.1.1.

Its core component is the text computation algorithm as defined in the current UAAG as well as the WAI-ARIA recommendation. This algorithm specifies how user agents should handle the different attributes that may be used to provide a textual alternative.

It covers not only images, but also input elements of type image, areas of an image map, and embed and object elements.

Assuming that the attributes are accessibility supported, all these elements are semi-automatically tested for the correct use of sufficient techniques for the provision of a textual alternative. All circumstances in which WCAG allows a text alternative to be omitted, such as for grouped images or images that are part of a link, are considered. Additionally, the test checks for the correct hiding of purely decorative content from assistive technologies and for common failures, like the use of placeholders or filenames as an alternative.
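
As a hypothetical illustration of one automatable step (not the group's actual test definition), a tool might flag images whose alt attribute looks like a filename:

// Illustrative only: flag img elements whose alt text looks like a
// filename, one of the common failures mentioned above.
function looksLikeFilename(text) {
  var trimmed = text.replace(/^\s+|\s+$/g, '');
  return /^[\w\s,-]+\.(png|jpe?g|gif|svg|bmp|webp)$/i.test(trimmed);
}

function findSuspiciousAltText(doc) {
  var suspects = [];
  var images = doc.getElementsByTagName('img');
  for (var i = 0; i < images.length; i++) {
    var alt = images[i].getAttribute('alt');
    if (alt !== null && looksLikeFilename(alt)) {
      suspects.push(images[i]);
    }
  }
  return suspects;
}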

For most elements, human evaluation is still needed to complete the test. The tests are semi-automated, so automated tools can implement many of the 18 steps. The Auto-WCAG community is looking forward to seeing what developers can do with this great new test case.

Do Accessibility Checkers have a place in QA?

There are many great tools on the market that can check the accessibility of web pages. The Web Accessibility Evaluation Tools List is a great resource for finding checkers for different types of content. Many of them focus on testing specific aspects of accessibility, such as color contrast or parsing. But some have a broader scope and will check many different aspects and report conformance to WCAG success criteria.

I encourage all web professionals to use an accessibility checker in their daily work. But as an accessibility auditor with 8 years of experience, I must confess that I don’t use any of these checkers myself. To test HTML pages, the only tools I use are a DOM inspector, a color analyzer and a validator. So why the difference?

Test Accuracy

Automated accessibility testing is tricky. WCAG was never designed to be automated. There is a good argument to be made that by definition, automated testing of accessibility is impossible. Think about it. If you want to test if some piece of content is accessible, you should compare the existing implementation to what the component should be like when it is accessible. To automate this, you need two things. You need to automatically determine what an accessible version would be like and you need some way to compare it to the current situation.

The first part of this is important. Imagine a tool that could reliably determine what the text alternative of an image should be. We could compare that to the actual alternative and we would have our test, right? However, if there was such a tool, assistive technologies could also implement it. And if they did, we wouldn’t have an accessibility problem with text alternatives anymore.

This idea seems to be true for most accessibility problems: if we can automatically determine the solution, the problem goes away. Because of this, accessibility checkers are mostly unable to determine if a success criterion was met, except where no assistive technologies are involved. But what our tools certainly can do is look for symptoms of accessibility barriers and fail a success criterion based on those.

Symptoms Of Inaccessibility

If you've done anything with HTML in the past 10 years, you probably know that you shouldn't use the <font> element. It is an outdated solution to styling text. There is nothing inherently wrong with the <font> element, but many accessibility checkers flag the <font> element as an error. Why do that for an element that is not inherently inaccessible?

One way you could use the <font> element to create an accessibility problem is the following:

Pick a color:
<button type="submit" value="r"><font color="red">A</font></button>
<button type="submit" value="g"><font color="green">B</font></button>

Here the font element is used to provide information that is not available in text. This is a failure of criterion 1.4.1. A checker that fails the criterion for using the <font> element would be correct in doing so in this situation. It assumes the <font> element is often used to provide information that is not otherwise available, and fails the criterion based on that assumption.
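
A symptom-based check of that kind could be as simple as the following hypothetical sketch; it flags every <font> element and leaves the final judgment on 1.4.1 to a human:

// Illustrative only: report every font element as a potential 1.4.1 problem.
function flagFontElements(doc) {
  var findings = [];
  var fonts = doc.getElementsByTagName('font');
  for (var i = 0; i < fonts.length; i++) {
    findings.push({
      element: fonts[i],
      message: 'font element found; check that color is not the only way ' +
        'this information is conveyed (SC 1.4.1)'
    });
  }
  return findings;
}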

Assumptions are the basis of automated accessibility tests. Checkers look for symptoms of accessibility barriers, such as the use of a <font> element, and assume they found a barrier. Every automated test I know of works on assumptions in one way or another. Even a test such as color contrast assumes there is no conforming alternative version. The important question then becomes: how accurate are these assumptions?

Dealing With Assumptions

Most tests in tools are based on the test designer's experience with front end development practices. This experience greatly influences the accuracy of an accessibility checker. The required accuracy of the tests depends quite a lot on who is using the tool. As an external accessibility auditor, I need a very high degree of accuracy. Double checking the results of a checker takes a lot of time, often more than it would take to do the test manually. Therefore I tend not to use these tools.

For web developers and QA teams, accuracy is less of an issue. It may be okay to flag <font> elements as errors, as using them is not a good idea anyway. Similarly, you could fail <table> elements without <th> elements, or <select> elements with an onchange="" attribute. There are many tests that checkers can use that can be very meaningful to your organisation, even if they don't always accurately identify accessibility errors.
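
Checks along those lines could look like this hypothetical sketch; neither proves a WCAG failure, but both point at code worth reviewing:

// Illustrative only: flag tables without header cells and selects with an
// inline onchange handler (listeners attached from scripts are not caught).
function findTablesWithoutHeaders(doc) {
  var flagged = [];
  var tables = doc.getElementsByTagName('table');
  for (var i = 0; i < tables.length; i++) {
    if (tables[i].getElementsByTagName('th').length === 0) {
      flagged.push(tables[i]);
    }
  }
  return flagged;
}

function findSelectsWithOnchange(doc) {
  var flagged = [];
  var selects = doc.getElementsByTagName('select');
  for (var i = 0; i < selects.length; i++) {
    if (selects[i].getAttribute('onchange') !== null) {
      flagged.push(selects[i]);
    }
  }
  return flagged;
}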

Conclusion

Accessibility checker tools are great! They provide a quick and relatively inexpensive way to find accessibility barriers on your website. They are useful during development to encourage a style of coding that avoids accessibility barriers. They also provide a good starting point for anyone who wants to build accessibility into their quality assurance process, though they don't give you the whole picture.

Accessibility checkers have limitations. Being aware of those means you can make better decisions about the tools you use. The field of web accessibility has long been focused on manual audits, but there is a clear precedent for the use of tools. As long as we understand their limitations, we can manage them and become better and more efficient because of them.

About The Author

Wilco Fiers is a web accessibility consultant and auditor at Accessibility Foundation NL. He is founder and chair of the Auto-WCAG community group. Wilco has participated in a variety of accessibility projects such as WAI-AGE, WAI-ACT and EIII, as well as being a developer on open source projects such as QuailJS and the WCAG-EM Report Tool.

WCAG-EM Report Tool: Website Accessibility Evaluation Report Generator

Last week the first version of the WCAG-EM Report Tool: Website Accessibility Evaluation Report Generator was published. The tool was developed by the Education and Outreach Working Group (EOWG). It helps generate website accessibility evaluation reports according to the Website Accessibility Conformance Evaluation Methodology (WCAG-EM), and guides you through the steps of WCAG-EM to create a structured evaluation report.

The WCAG-EM Report Tool project is closely related to the work of the Automated WCAG Monitoring community group. Both groups are working on the development of supporting tools for testing websites for accessibility. Where the Auto-WCAG group focuses on the development of (semi-)automated tests for WCAG, the Report Tool focuses on the creation of evaluation reports.

Learn more about the WCAG-EM Report Tool.

Call for Participation in Automated WCAG Monitoring Community Group

The Automated WCAG Monitoring Community Group has been launched:


Creating (semi-)automated tests for WCAG is key to affordable, large scale research. The tests are designed in a way that they are usable by people with a variety of skills. The results too should be informative, not just to developers, but to website managers, policy makers, disability advocates and others.

The objective of this community is to create and maintain tests that can be implemented in large scale monitoring tools for web accessibility. These tests will be either automated or semi-automated, where tools assist non-expert users to evaluate web accessibility. By comparing the test results with results from expert accessibility evaluators, we aim to track the accuracy of the tests we've developed. This allows for an iterative improvement and adjustment of the tests as web development practices change and evolve. It also provides the statistical basis on which large scale accessibility monitoring and benchmarking can be built.

This group will not publish specifications.


In order to join the group, you will need a W3C account.

This is a community initiative. This group was originally proposed on 2014-05-08 by Wilco Fiers. The following people supported its creation: Wilco Fiers, Annika Nietzio, Jeroen Hulscher, Raph de Rooij, Anand Balachandran Pillai. W3C’s hosting of this group does not imply endorsement of its activities.

The group now has access to W3C-hosted services for email, blog, wikis, irc, tracking tools, and more. Read more about tools and services available by default and upon request.

If you believe that there is an issue with this group that requires the attention of the W3C staff, please send us email on site-comments@w3.org

Thank you,
W3C Community Development Team