Accessible Platform Architectures Working Group - Publications
This document summarizes relevant research, then outlines accessibility-related user needs and associated requirements for the synchronization of audio and visual media. The scope of the discussion includes synchronization of accessibility-related components of multimedia, such as captions, sign language interpretation, and descriptions. The requirements identified herein are applicable to multimedia content in general, as well as real-time communication applications and media occurring in immersive environments.
This document is a gap analysis and roadmap for the state of accessibility for people with learning and cognitive disabilities when using the Web and information technologies. It builds on the information presented in Cognitive Accessibility User Research and Cognitive Accessibility Issue Papers to evaluate where user needs remain to be met in technologies and accessibility guidelines. For various accessibility issues, this document provides a summary of issues and techniques, then identifies gaps and unmet user needs and suggests ways to meet these needs.
Lists user needs and requirements for people with disabilities when using real-time communications (RTC).
Lists user needs and requirements for people with disabilities when using virtual reality or immersive environments, augmented or mixed reality, and other related technologies (XR).
This document summarizes considerations of accessibility that arise in the conduct of remote and hybrid meetings. Such meetings are mediated, for some or all participants, by real-time communication software typically built upon Web technologies. Issues of software selection, and the roles of meeting hosts and participants in providing access, are explained. Relevant W3C documents are referred to, where applicable, as sources of more detailed and, in some instances, normative guidance.
Candidate Recommendation Snapshots
This specification provides web content authors a standard approach to support web users with various cognitive and learning disabilities who: customarily communicate using symbolic languages, generally known as Augmentative and Alternative Communication (AAC); or need more familiar icons (and other graphical symbols) in order to comprehend page content.
This document is a gap analysis and roadmap for the state of accessibility for people with learning and cognitive disabilities when using the Web and information technologies.
Accurate pronunciation by text-to-speech (TTS) synthesis is very important in many contexts, and critical in education, publishing, communication, and entertainment, among other domains. TTS has become an important technology for providing access to digital content on the web. Yet there is no way to mark up content today that will ensure correct presentation of TTS-generated output across commonly used TTS engines and operating environments.
This document lists examples of the defined values for tools. It is an extension of Personalization Explainer 1.0 and was developed by the Personalization Task Force to provide a vocabulary of terms that can be used to enhance web tools.
This document lists examples of the personalized help and support properties. It is an extension of Personalization Explainer 1.0, covering the properties literal, numberfree, easylang, alternative, explain, feedback, moreinfo, extrahelp, and helptype. It was developed by the Personalization Task Force to provide a vocabulary of terms that can be used to enhance help and support functions on the web.
First Public Working Drafts
The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text-to-speech (TTS) synthesis. This document defines a standard mechanism that allows content authors to include spoken presentation guidance in HTML content. It also describes two identified approaches and enumerates their advantages and disadvantages.
The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text-to-speech (TTS) synthesis. This document presents the results of the Pronunciation Task Force's work on an HTML standard. It includes an introduction with a historical perspective, an enumeration of the core requirements, a listing of approach use cases, and finally a gap analysis. A gap is identified when a requirement has no corresponding use-case approach by which it can be authored in HTML.
The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text-to-speech (TTS) synthesis. This document provides various user scenarios highlighting the need for standardization of pronunciation markup to ensure consistent and accurate representation of the content. The requirements derived from these user scenarios provide the basis for the technical requirements and specifications.
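As a purely illustrative sketch (the `data-ssml-*` attribute names and JSON value shape below are assumptions drawn from the draft approaches under discussion, not final normative syntax), authored pronunciation guidance in HTML might look like:

```html
<!-- Hedged sketch: attribute names reflect the draft's proposed
     attribute-based approaches and may change before standardization. -->
<p>
  The acronym
  <span data-ssml-say-as="characters">WAI</span>
  would be spelled out letter by letter, while
  <span data-ssml='{"phoneme": {"ph": "t&#601;&#712;m&#593;&#720;to&#650;", "alphabet": "ipa"}}'>tomato</span>
  carries an explicit phonemic hint in a single JSON-valued attribute.
</p>
```

The two spans illustrate the two approaches the document weighs against each other: multiple single-purpose attributes versus one attribute carrying structured, SSML-like JSON.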
This document outlines various accessibility-related user needs, requirements and scenarios for collaboration tools. The tools of interest are distinguished by their support for one or more specific collaborative features. These features include real-time editing of content by multiple authors, the use of comments or annotations, and revision control. A Web-based text editor or word processor offering all of these features would be a central example of such a collaboration tool.
Various approaches have been employed over many years to distinguish human users of web sites from robots. While the traditional CAPTCHA approach of asking the user to identify obscured text in an image remains common, other mechanisms are gaining in prominence. These approaches generally require users to perform a task believed to be possible for humans and difficult for robots, but the nature of the task inherently excludes many people with disabilities, resulting in an incorrect denial of service to these users. Research findings also indicate that many popular CAPTCHA techniques are no longer particularly effective or secure, so it is necessary to consider alternative approaches to block robots, yet ensure these approaches support access for people with disabilities. This document examines a number of potential solutions that allow systems to test for human users, and the extent to which these solutions adequately accommodate people with disabilities.
This is a requirements document for Personalization Semantics; it contains use cases, user stories, and requirements for personalization semantics.
This document outlines accessibility user needs, requirements, and scenarios for natural language interfaces. These user needs should influence accessibility requirements in related specifications and be considered in the design of applications that include natural language interfaces.
Defines standard semantics to enable user-driven personalization, such as the association of user-preferred symbols with elements that carry those semantics.
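As an illustrative sketch only (the `adapt-symbol` attribute name and the numeric Blissymbolics concept reference are assumptions based on the draft module, not guaranteed final syntax), symbol semantics might be attached to content like this:

```html
<!-- Hedged sketch: "adapt-symbol" and the Bliss concept ID are
     illustrative; consult the WAI-Adapt drafts for the actual syntax. -->
<p>
  Would you like a
  <span adapt-symbol="13621">cup of tea</span>?
</p>
```

A user agent or extension that knows the user's preferred AAC symbol set could then look up the referenced concept and render a matching symbol alongside, or in place of, the text.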
The W3C Accessibility Maturity Model is a guide for organizations to evaluate and improve their business processes in order to produce digital products that are accessible to people with disabilities. It provides organizations with informative guidance (guidance that is not normative and does not set requirements) on improving accessibility policies, processes, and outcomes.
This is a W3C registry of symbols used in augmentative and alternative communication (AAC). It is co-published with Blissymbolics Communication International and contains their full set of authorized symbols. With over 5000 symbols, this set can function as a basis for semantic mappings between different AAC symbol sets in use around the world, using the entries in this registry to identify common meanings. This functionality is necessary for interoperable implementation of the WAI-Adapt: Symbols Module (draft symbols module, currently part of the WAI-Adapt: Content Module [adapt-content]). This registry is available for other use cases as well.