Do Accessibility Checkers have a place in QA?
There are many great tools on the market that can check the accessibility of web pages. The Web Accessibility Evaluation Tools List is a good resource for finding checkers for different types of content. Many of them focus on testing specific aspects of accessibility, such as color contrast or parsing, but some have a broader scope and will check many different aspects and report conformance to WCAG success criteria.
I encourage all web professionals to use an accessibility checker in their daily work. But as an accessibility auditor with 8 years of experience, I must confess that I don’t use any of these checkers myself. To test HTML pages, the only tools I use are a DOM inspector, a color analyzer and a validator. So why the difference?
Automated accessibility testing is tricky. WCAG was never designed to be automated. There is a good argument to be made that, by definition, automated testing of accessibility is impossible. Think about it: to test whether some piece of content is accessible, you have to compare the existing implementation to what that content would be like if it were accessible. To automate this, you need two things. You need to automatically determine what an accessible version would look like, and you need some way to compare it to the current situation.
The first part of this is important. Imagine a tool that could reliably determine what the text alternative of an image should be. We could compare that to the actual alternative and we would have our test, right? However, if such a tool existed, assistive technologies could also implement it. And if they did, we wouldn’t have an accessibility problem with text alternatives anymore.
This idea seems to hold for most accessibility problems: if we can automatically determine the solution, the problem goes away. Because of this, accessibility checkers are mostly unable to determine whether a success criterion was met, except where no assistive technologies are involved. But what our tools certainly can do is look for symptoms of accessibility barriers and fail a success criterion based on those.
Symptoms Of Inaccessibility
If you’ve done anything with HTML in the past 10 years, you probably know that you shouldn’t use the <font> element. It is an outdated way to style text. There is nothing inherently wrong with the <font> element, yet many accessibility checkers flag it as an error. Why do that for an element that is not inherently inaccessible?
One way you could use the <font> element to create an accessibility problem is the following:
Pick a color:
<button type="submit" value="r"><font color="red">A</font></button>
<button type="submit" value="g"><font color="green">B</font></button>
Here the <font> element is used to provide information that is not available in text. This is a failure of success criterion 1.4.1 (Use of Color). A checker that fails the criterion for using the <font> element would be correct in doing so in this situation. It assumes the <font> element is often used to provide information that is not otherwise available, and it fails the criterion based on that assumption.
Assumptions are the basis of automated accessibility tests. Checkers look for symptoms of accessibility barriers, such as the use of a <font> element, and assume they found a barrier. Every automated test I know of works on assumptions in one way or another. Even a test such as color contrast assumes there is no conforming alternative version. The important question then becomes: how accurate are these assumptions?
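To make this concrete, here is a minimal sketch of what such a symptom-based check could look like, using Python’s built-in HTML parser. The class name and warning text are my own invention, not taken from any real checker; the point is only that the tool flags a symptom and assumes a barrier, without knowing whether the color actually conveys information.

```python
from html.parser import HTMLParser

class FontSymptomChecker(HTMLParser):
    """Flags every <font> element as a *potential* SC 1.4.1 problem.

    The checker cannot tell whether the color really conveys
    information, so each warning is an assumption, not proof.
    """
    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        if tag == "font":
            # getpos() gives (line, offset) of the tag in the input
            line, _ = self.getpos()
            self.warnings.append(
                f"Possible SC 1.4.1 failure: <font> on line {line} "
                "may convey information by color alone"
            )

checker = FontSymptomChecker()
checker.feed('<button type="submit" value="r">'
             '<font color="red">A</font></button>')
print(len(checker.warnings))  # 1: one <font> element was found
```

A human still has to decide whether each warning is a real barrier; the tool only reports that the symptom is present.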
Dealing With Assumptions
Most tests in these tools are based on the test designer’s experience with front-end development practices. This experience greatly influences the accuracy of an accessibility checker. The required accuracy of the tests depends quite a lot on who is using the tool. As an external accessibility auditor, I need a very high degree of accuracy. Double-checking the results of a checker takes a lot of time, often more than it would take to do the test manually. Therefore I tend not to use these tools.
For web developers and QA teams, accuracy is less of an issue. It may be okay to flag <font> elements as errors, as using them is not a good idea anyway. Similarly, you could fail <table> elements without <th> elements, or <select> elements with an onchange="" attribute. There are many tests that checkers can use that are very meaningful to your organisation, even though they do not always accurately identify accessibility errors.
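The two checks just mentioned can be sketched the same way. This is an illustrative simplification, assuming a flat document (no nested tables) and a team policy of flagging the patterns outright; the error labels are hypothetical.

```python
from html.parser import HTMLParser

class QASymptomChecker(HTMLParser):
    """Flags tables without <th> and selects with onchange.

    Simplified sketch: does not handle nested tables, and treats
    every hit as an error per team policy, not as a proven
    WCAG failure.
    """
    def __init__(self):
        super().__init__()
        self.errors = []
        self._in_table = False
        self._table_has_th = False

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self._in_table = True
            self._table_has_th = False
        elif tag == "th" and self._in_table:
            self._table_has_th = True
        elif tag == "select" and any(n == "onchange" for n, _ in attrs):
            self.errors.append("select-with-onchange")

    def handle_endtag(self, tag):
        if tag == "table":
            if not self._table_has_th:
                self.errors.append("table-without-th")
            self._in_table = False

checker = QASymptomChecker()
checker.feed('<table><tr><td>1</td></tr></table>'
             '<select onchange="submit()"><option>x</option></select>')
print(checker.errors)  # ['table-without-th', 'select-with-onchange']
```

Neither check proves a page is inaccessible; a layout table or a harmless onchange handler would be flagged too. That trade-off is exactly what makes such tests acceptable in QA but too coarse for an audit.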
Accessibility checker tools are great! They provide a quick and relatively inexpensive way to find accessibility barriers on your website. They are useful during development to encourage a style of coding that avoids accessibility barriers. They also provide a good starting point for anyone who wants to build accessibility into their quality assurance process, though they don’t give you the whole picture.
Accessibility checkers have limitations. Being aware of those means you can make better decisions about the tools you use. The field of web accessibility has long been focused on manual audits, but there is a clear precedent for the use of tools. As long as we understand their limitations, we can manage them and become better and more efficient because of them.
About The Author
Wilco Fiers is a web accessibility consultant and auditor at Accessibility Foundation NL. He is founder and chair of the Auto-WCAG community group. Wilco has participated in a variety of accessibility projects such as WAI-AGE, WAI-ACT and EIII as well as being a developer in open source projects such as QuailJS and WCAG-EM Report Tool.
This is an excellent overview. IMO the most important statement you made was “accessibility checkers are mostly unable to determine if a success criteria was met”. I feel it is vital for users of automated tools to understand that the best use of such tools is to find potential problems. They should not be used to verify that the tested page is “good”. The absence of issues reported by a tool does not mean anything more than that the tool didn’t find anything.