Using Combined Expertise to Evaluate Web Accessibility

Introduction

Evaluating the accessibility of Web content for people with disabilities requires diverse kinds of expertise and perspectives. While individuals with training and experience across a broad range of disciplines can evaluate Web accessibility effectively on their own, it is rare for one individual to have all the expertise that a collaborative approach can bring.

This document describes the expertise recommended for evaluating Web accessibility, approaches for collaborative evaluation, and considerations in combining expertise across a group of evaluators.

References to related evaluation resources are mentioned throughout this document. Most of these resources can be found in Evaluating Web Accessibility Overview.

Recommended Expertise

Effective evaluation of Web accessibility requires more than simply running an evaluation tool on a Web site. Comprehensive and effective evaluations require evaluators who understand Web technologies, evaluation tools, the barriers that people with disabilities experience, the assistive technologies and approaches that they use, and accessibility guidelines and techniques.
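
As a rough illustration of why tools alone are not enough, the sketch below runs one automated check against a page and lists the rule violations it reports. It assumes a Node.js environment with the open-source puppeteer and @axe-core/puppeteer packages, neither of which is named in this document. Automated checks of this kind catch only a subset of barriers; the expertise described in this document is still needed to interpret the results and to find problems that tools cannot detect.

    import puppeteer from 'puppeteer';
    import { AxePuppeteer } from '@axe-core/puppeteer';

    // Run the axe-core automated rules against one page and print the violations.
    // A minimal sketch under the assumptions noted above, not a complete evaluation.
    async function checkPage(url: string): Promise<void> {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto(url);

      const results = await new AxePuppeteer(page).analyze();
      for (const violation of results.violations) {
        console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} instances)`);
      }

      await browser.close();
    }

    checkPage('https://example.org/').catch(console.error);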

The following list includes recommended expertise across a variety of areas, with links to initial resources, listed in the Appendix at the end of this document, that may be useful for learning more about those areas:

- Web technologies used to develop and maintain the site
- Web accessibility evaluation tools and methods
- Barriers that people with disabilities experience on the Web
- Assistive technologies and approaches that people with disabilities use
- Web accessibility guidelines and techniques

Approaches for Collaborative Evaluation

When organizations first conduct a Web accessibility evaluation, they often assign the task to an individual within the organization, or outsource it. However, many organizations use a collaborative evaluation process that draws on the skills and perspectives of multiple evaluators. This approach allows an organization to use in-house expertise as well as outside experts where needed.

Collaborative evaluation processes can involve:

- in-house evaluators, whether in a centralized team or distributed across the organization
- external experts who fill gaps in internal expertise
- users with disabilities who review the site and contribute to evaluations

Considerations in Combining Expertise

Centralized versus distributed evaluation capability

Organizations with in-house evaluation capacity sometimes use a centralized group of evaluators, and sometimes use evaluators who are distributed across the organization. A centralized team can serve as a resource for the rest of the organization. Evaluation capability that is distributed across an organization may offer more possibilities for integrating accessibility work into Web development processes throughout the organization. It may help in identifying more diverse expertise since one can look beyond the boundaries of a centralized team. In addition, it can help in developing a shared organizational mission for continual improvement of accessibility, rather than leaving oversight of accessibility as the responsibility of a single office.

Identification of external expertise

Once gaps in internal expertise are clear, an organization can prioritize its needs for external expertise. The internal gaps are often in areas of knowledge specific to disability and/or accessibility; for instance, Web accessibility guidelines, cross-disability accessibility barriers, or use of assistive technologies. In addition, even organizations with established user testing processes may need guidance on how to get feedback from users with disabilities. It can be valuable in some cases to bring in more than one outside expert to cover this range of issues effectively, or to look for feedback in online communities focusing on Web accessibility.

Involving users in evaluation

Beyond their individual technical contributions to the evaluation, people with disabilities in a collaborative group can contribute to a better understanding of accessibility issues within the organization and help maintain awareness of the urgency of addressing accessibility barriers on a site.

Regardless of the collective expertise of a collaborative group of evaluators in conducting conformance evaluations, an organization may want to ensure periodic review by users with a variety of disabilities. There are many factors to consider in effectively involving users in Web accessibility evaluations, including ensuring diversity in disabilities represented, types of assistive technology used, and experience with the Web.

Facilitating collaboration through shared tools and templates

A group of evaluators may want to arrange for shared access to certain evaluation tools, or to ensure that they have access to a broad range of evaluation tools across the group as a whole.

Using an agreed-upon template for reporting the results of evaluations can greatly facilitate coordination between different evaluators. Such a template might be an adaptation of the Template for Accessibility Evaluation Reports.
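
As a rough illustration only, the sketch below shows the kind of fields a group might agree to record for each finding so that results from different evaluators can be merged; the field names are hypothetical and are not taken from the W3C Template for Accessibility Evaluation Reports.

    // A hypothetical shared record for one evaluation finding (illustrative only).
    interface EvaluationFinding {
      evaluator: string;          // who reported the finding
      pageUrl: string;            // page or component where the barrier was found
      successCriterion: string;   // e.g. WCAG success criterion "1.1.1"
      result: 'pass' | 'fail' | 'cannot tell' | 'not applicable';
      description: string;        // what was observed, and how
      recommendation?: string;    // suggested repair, if any
    }

    // Example entry from one evaluator in the group.
    const finding: EvaluationFinding = {
      evaluator: 'Evaluator A',
      pageUrl: 'https://example.org/contact',
      successCriterion: '1.1.1',
      result: 'fail',
      description: 'The logo image has no text alternative.',
      recommendation: 'Add an alt attribute that describes the logo.',
    };

    console.log(finding);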

Communicating results

Collaborative teams may want to give particular attention to communicating their evaluation results clearly to their customers, since their reports represent the combined perspectives of different evaluators.

Getting and giving feedback

Providing a mechanism for feedback within an organization on the usefulness of the evaluation process and resulting report may assist collaborative evaluators in ongoing identification of gaps in expertise, and contribute to long-term improvement in the quality of evaluations.

Feedback from experienced groups of evaluators on evaluation resources such as W3C/WAI’s resources in Evaluating Web Accessibility Overview can, over time, also help improve the quality of evaluation support resources available to the broader Web community. Feedback links are available in the footers of pages in this Evaluation resource suite.

Appendix

This appendix includes links to resources related to key areas of expertise needed for Web accessibility evaluation.
