
4 Technical Difficulties of Automated Accessibility Testing

Jesse Beach is a software developer who builds tools for other developers. Visit her on Github: https://github.com/jessebeach

Accessibility evaluation is a method to determine deviation from best practices and standards. Humans do this, for the most part, very well and slowly. Machines do this, for the most part, unevenly and yet at great speed.

During the several years I’ve spent exploring techniques for automated accessibility testing in Quail, I’ve recognized a few types of persistent challenges. Let’s go through them.

Identifying generic DOM elements

By identify, I don’t mean finding elements. That’s pretty easy to do with selectors. The difficulty is uniquely identifying a single element so that it can be referenced later. Let’s say we have a list of links with images in them, used for navigation.
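
For instance, markup along these lines (the link targets here are illustrative, apart from /news, which appears in a selector below):

<ul>
  <li><a href="/home"><img src="home.png" alt="Home"></a></li>
  <li><a href="/news"><img src="news.png" alt="News"></a></li>
  <li><a href="/about"><img src="about.png" alt="About"></a></li>
</ul>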

Providing a unique selector for, say, the second link in the list is difficult.

ul li:nth-child(2) a {}

Certainly we could be more specific about the parent-child relationship by using >, the CSS child combinator.

ul > li:nth-child(2) > a {}

And perhaps even include the href attribute.

ul > li:nth-child(2) > a[href="/news"] {}

It’s likely that this selector will be unique on the page, but it’s not guaranteed. With Quail, we take into account several attributes, like href, that help make a DOM element unique. Obviously, we also look for IDs, which presumably are only used once on a page, but even that isn’t guaranteed! In writing this article, I realized we should also be including relative DOM ordering. That’s why we write articles about our work: in order to learn.
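
A minimal sketch of the idea in plain JavaScript (the function name and the exact attribute handling are mine for illustration, not Quail’s API; it combines IDs, the href attribute, and relative DOM ordering):

function getUniqueSelector(element) {
  var parts = [];
  var el = element;
  while (el && el.nodeType === 1 && el.tagName.toLowerCase() !== 'html') {
    if (el.id) {
      // IDs are presumably unique, so we can stop climbing here.
      parts.unshift('#' + el.id);
      break;
    }
    var part = el.tagName.toLowerCase();
    if (el.getAttribute('href')) {
      part += '[href="' + el.getAttribute('href') + '"]';
    }
    // Record relative DOM ordering with :nth-child, counting only
    // element siblings (no previousElementSibling in old browsers).
    var index = 1;
    var sibling = el.previousSibling;
    while (sibling) {
      if (sibling.nodeType === 1) {
        index += 1;
      }
      sibling = sibling.previousSibling;
    }
    parts.unshift(part + ':nth-child(' + index + ')');
    el = el.parentNode;
  }
  return parts.join(' > ');
}

For the second link in the list above, this would produce something like ul:nth-child(1) > li:nth-child(2) > a[href="/news"]:nth-child(1), depending on where the list sits in the document.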

In a perfect system, every specification violation would come with a unique selector identifying the violating element in the document. In the wild, unwashed web, we can never rely on this being true.

Testing the DOM in the DOM

For some standards, such as color contrast, we need to generate a rendered web page. This is necessary because CSS is a complex beast. Browser vendors have a decade and a half of experience interpreting and rendering DOM representations with CSS applied. End users consume content in browsers as well. So browsers are the best tools we have to understand how HTML and CSS will combine to produce visual output. What I’m saying is, the environment we consume content in is also the environment we’re running our tests in. In other words, the inmates are definitely running this asylum.
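
As a small illustration, asking the rendering engine for the final, cascaded values is straightforward; the contrast arithmetic itself is omitted here:

var link = document.querySelector('a');
// getComputedStyle resolves the cascade, inheritance, and browser defaults.
var style = window.getComputedStyle(link);
var foreground = style.color;           // e.g. "rgb(51, 51, 51)"
var background = style.backgroundColor; // often "transparent"; you have to
                                        // walk up the ancestors for the real one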

My favorite and most frustrating set of evaluations concerns invalid HTML. Here is an example.

<strong><em>pots of gold</strong></em>

Notice that the closing tags are incorrectly nested. Here’s another.

<p>baskets of fruit

Notice the p tag isn’t closed. This one is tricky because it’s actually valid HTML; the closing `</p>` tag is optional. A browser will handle this just fine. Now, what about this example below.

<span><p>strings of pearls</span></p>

That will cause a ruckus on your page. A browser will attempt to fix this nesting error, but the results are unpredictable. And remember, when we query an HTML document, we’re not querying the text that was sent to the browser; we’re querying the DOM. The DOM is the browser’s model of the HTML text document it received from the server. Our automated assessments are interacting with a mediated HTML representation, which is often fine. But in the case of malformed HTML, it hides the problems from us.

In the current version of Quail, we make an AJAX call for the host page, get the text payload, and run that string through tests for malformed HTML. It’s by no means ideal. The better solution would be to do assessments of malformed HTML outside a browser altogether, and this is something we will implement in the future.
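
In outline, the approach looks something like this (a sketch, not Quail’s actual code; checkForMalformedHTML is a hypothetical stand-in for the string-based assessments):

var xhr = new XMLHttpRequest();
xhr.open('GET', window.location.href, true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // responseText is the raw markup the server sent, before the
    // browser repaired it into a DOM.
    checkForMalformedHTML(xhr.responseText); // hypothetical stand-in
  }
};
xhr.send();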

Writing really basic JavaScript

JavaScript was born a humble scripting language. In the years since its introduction, it has become a richer, more versatile language. If you work in an environment, like a server, where you can control the version of JavaScript that you run, then you probably revel in taking advantage of the latest language improvements: better Object and Array methods, for example.

In the world of automated accessibility testing, we want to provide the widest coverage for tests that we can. Practically, this means our assessments must be written in the plainest of plain JavaScript. Bare-metal JS, I call it. There are no forEach methods or WeakMaps. jQuery is best used around the 1.6 version, when its feature set firmed up and it had excellent support for finicky, older browsers.
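
In practice, that means writing this sort of thing: the same logic a forEach call would express, unrolled for older engines.

var flagged = [];
var images = document.getElementsByTagName('img');
for (var i = 0; i < images.length; i++) {
  // getAttribute, not the alt property, so a missing attribute
  // can be told apart from an empty one.
  if (images[i].getAttribute('alt') === null) {
    flagged.push(images[i]);
  }
}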

Cross-browser testing

One of the shortcomings of Quail early on was our singular testing platform: PhantomJS. PhantomJS is what is known as a headless browser. Unlike the web browser you use to view the internet, a headless browser has no visual component. It is lightweight and meant to render pages as if they were to be displayed; it just doesn’t display them. PhantomJS has also been on the cusp of a major version release for years now. It’s a wonderful tool, but not without frustrating shortcomings.

To really test a web page, you need to run it in browsers: various versions on various platforms. To do this, the assessments need to be run through a web driver that knows how to talk to various browser executables. This infrastructure is much more complex than a test runner that spins up PhantomJS. Quail (a wonderful tool with frustrating shortcomings) is itself on the cusp of a major version release. We are introducing support for test runs using WebdriverIO and Selenium.
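
The shape such a setup takes, sketched in WebdriverIO’s configuration format (the spec path and browser list are illustrative, not Quail’s actual test matrix):

// wdio.conf.js
exports.config = {
  specs: ['test/specs/*.js'],
  capabilities: [
    { browserName: 'firefox' },
    { browserName: 'chrome' },
    { browserName: 'internet explorer', version: '9' }
  ]
};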

Selenium will allow us to run assessments in different browsers. Much thanks to OpenSauce for making this sort of testing available to open source projects!

Summing up

Writing automated tests for accessibility specifications will challenge you as a developer and in a good way. You’ll need to understand the fundamentals of the technologies that you use every day to get your work done. It’s like doing sit-ups or jogging; you’re in better shape for any sort of physical activity if you practice these basics often. Anyone is welcome to join the Quail team in translating the work of the Auto-WCAG Monitoring Group into coded examples of accessibility guideline assessments. Visit us on our Github project page, check out the tasks and propose a Pull Request!

