Implementing UAAG 2.0
A guide to understanding and implementing User Agent Accessibility Guidelines 2.0
W3C Working Draft 4 October 2012
- This version:
- http://www.w3.org/TR/2012/WD-IMPLEMENTING-UAAG20-20121004/
- Latest version:
- http://www.w3.org/TR/IMPLEMENTING-UAAG20/
- Previous version:
- http://www.w3.org/TR/2011/WD-IMPLEMENTING-UAAG20-20110719/
- Editors:
- James Allan, Texas School for the Blind and Visually Impaired
- Kelly Ford, Microsoft
- Kim Patch, Redstart Systems
- Jeanne Spellman, W3C/Web Accessibility Initiative
- Previous Editors:
- Jan Richards, Inclusive Design Institute, OCAD University
Copyright ©2012 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document provides supporting information for the User Agent Accessibility Guidelines (UAAG) 2.0 for designing user
agents that lower barriers to Web accessibility for people with
disabilities. User agents include browsers and other types of software that
retrieve and render Web content. A user agent that
conforms to these guidelines will promote
accessibility through its own user interface and through other internal
facilities, including its ability to communicate with other technologies
(especially assistive
technologies). Furthermore, all users, not just users with disabilities,
should find conforming user agents to be more usable. In addition to helping developers of browsers and media players, this
document will also benefit developers of assistive technologies because it
explains what types of information and control an assistive technology may
expect from a conforming user agent.
This document provides explanation of the intent of UAAG 2.0 success criteria, examples of implementation of the guidelines, best practice recommendations and additional resources for the guideline.
The "User Agent Accessibility Guidelines 2.0" (UAAG 2.0) is part
of a series of accessibility guidelines published by the W3C Web Accessibility
Initiative (WAI).
May be Superseded
This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current
W3C publications and
the latest revision of this technical report can be found in the W3C technical reports
index at http://www.w3.org/TR/.
Working Draft of Implementing UAAG 2.0
This is the W3C Working Draft of 4 October 2012. This working draft is a refinement of the success criteria based on feedback, and further review after writing the Implementing UAAG 2.0 document. The working group is in the process of reviewing the success criterion levels (e.g. A, AA, AAA) for consistency, and expect that some levels may change in the next working draft.
- Customize text-display and focus-highlight - added and refined success criteria so users can customize the display of text and the highlight of focus location, which is especially important to meet the needs of people with low vision, dyslexia, and other conditions that impact reading.
- Input methods: speech, touch & gesture - added and refined success criteria to support broad methods of input including speech, touch and gesture.
- Mobile Accessibility - reviewed success criteria for applicability to mobile and added examples to Implementing UAAG 2.0.
- Clarity - all success criteria were reviewed for clarity, consistency and ease of testing.
The Working Group seeks feedback on the following points for this draft:
- Does UAAG 2.0 do enough to ensure support of other technologies that support accessibility, e.g. WAI-ARIA, HTML5 Canvas, IndieUI?
- 5.1.1 - If you are developing a user agent for a specific experience, say a Macintosh experience to run on a Windows platform, should you have to meet the platform accessibility requirements of the Windows platform?
- What accessibility needs specific to mobile devices have we not addressed that should be addressed by UAAG 2.0?
- Some UAAG 2.0 requirements may be met by the platform (e.g. keyboard shortcuts on a mobile phone). If the user agent software runs on multiple platforms (e.g. a web app) should the user agent be able to claim conformance even if the feature doesn't work on all platforms?
Comments on this draft should be sent to public-uaag2-comments@w3.org (Public Archive) by 9 November 2012.
UAAG 2.0 is currently informative only. After the User Agent Working Group (UAWG) is
rechartered to produce W3C Recommendations under the W3C Patent Policy, the
group expects to advance UAAG 2.0 through the Recommendation track. Until
that time User Agent Accessibility
Guidelines 1.0 (UAAG 1.0) [UAAG10] is the stable,
referenceable version. This Working Draft does not supersede UAAG 1.0.
Web Accessibility Initiative
This document has been produced as part of the W3C Web Accessibility Initiative (WAI). The
goals of the User Agent Working Group (UAWG) are discussed in the Working Group charter. The
UAWG is part of the WAI Technical
Activity.
No Endorsement
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a
draft document and may be updated, replaced or obsoleted by other documents
at any time. It is inappropriate to cite this document as other than work in
progress.
Patents
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
A user agent is any software that retrieves, renders and facilitates end-user interaction with Web content.
Users interacting with a web browser may do so using one or more input methods including keyboard, mouse, speech, touch, and gesture. It's critical that each user be free to use whatever input method or combination of methods works best for a given situation. Therefore every potential user task must be accessible via modality independent controls that any input technology can access.
For instance, if a user can't use or doesn't have access to a mouse, but can use and access a keyboard, the keyboard can call a modality independent control to activate an OnMouseOver event.
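One way to picture a modality-independent control is as a single activation handler that every input method reaches through the same path. The following is a minimal, hypothetical sketch (the event type names are assumptions for illustration, not a real browser event model):

```javascript
// Hypothetical sketch of a modality-independent control: keyboard,
// mouse/touch, and speech input all route to one onActivate handler,
// so no single input technology is required to operate the control.
function createControl(onActivate) {
  return {
    // Returns true when the event activated the control.
    handleEvent(event) {
      switch (event.type) {
        case "click":           // mouse click or touch tap
        case "keydown-enter":   // keyboard activation
        case "speech-activate": // speech command, e.g. "click Submit"
          onActivate();
          return true;
        default:
          return false;         // not an activation event
      }
    },
  };
}

// Usage: any modality reaches the same underlying action.
let activated = 0;
const control = createControl(() => { activated += 1; });
control.handleEvent({ type: "keydown-enter" });
control.handleEvent({ type: "speech-activate" });
// activated === 2
```

The design point is that the control exposes one action, not one action per input device; adding a new modality means adding a new route to the same handler.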
What qualifies as a User Agent?
These guidelines employ the following tests to determine if software qualifies as a user agent. UAAG 2.0 divides potential user agents into three categories:
- platform-based application
- extension or plug-in
- web-based application
If the following three conditions are met, then it is a platform-based application:
- It is a standalone application, and
- It interprets any W3C-specified language, and
- It provides a user interface or interprets a procedural
or declarative language that may be used to provide a user interface
If the following two conditions are met then it is an extension or plug-in:
- It is launched by, or extends the functionality of a platform-based application, and
- Post-launch user interaction is included in, or is
within the bounds of the platform-based application
If the following three conditions are met then it is a web-based application:
- The user interface is generated by a procedural or declarative language; and
- The user interface is embedded in an application that renders web content, and
- User interaction is controlled by a procedural or declarative language, or user interaction does
not modify the Document Object Model of its containing document.
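The three-way classification above can be sketched as a small decision function. This is an illustrative sketch only; the property names are assumptions, not UAAG terminology.

```javascript
// Hypothetical classifier for the three user agent categories described
// above. All property names (standalone, launchedByHost, etc.) are
// illustrative stand-ins for the conditions in the prose.
function classifyUserAgent(ua) {
  // Platform-based application: standalone, interprets a W3C language,
  // and provides (or can generate) a user interface.
  if (ua.standalone && ua.interpretsW3CLanguage && ua.providesUI) {
    return "platform-based application";
  }
  // Extension or plug-in: launched by a host application, and all
  // post-launch interaction stays within that host's bounds.
  if (ua.launchedByHost && ua.interactionWithinHost) {
    return "extension or plug-in";
  }
  // Web-based application: UI generated by and embedded in rendered web
  // content, with interaction controlled by a language (or not touching
  // the containing document's DOM).
  if (ua.uiGeneratedByLanguage && ua.embeddedInRenderer &&
      (ua.interactionControlledByLanguage || !ua.modifiesContainingDOM)) {
    return "web-based application";
  }
  return "not a user agent";
}

// A desktop browser: standalone, interprets HTML/CSS, provides a UI.
classifyUserAgent({ standalone: true, interpretsW3CLanguage: true, providesUI: true });
// → "platform-based application"
```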
Relationship to the Authoring Tool Accessibility Guidelines (ATAG) 2.0
While it is common to think of user agents retrieving and rendering web content for one group of people (end-users) that was previously authored by another group (authors), user agents are also frequently involved with the process of authoring content.
For these cases, it is important for user agent developers to consider the application of another W3C-WAI Recommendation, the Authoring Tool Accessibility Guidelines (ATAG). ATAG (currently 2.0 is in draft) provides guidance to the developers of tools regarding the accessibility of authoring interfaces to authors (ATAG 2.0 Part A) and ways in which all authors can be supported in producing accessible web content (ATAG 2.0 Part B).
The Role of User Agents in Web Authoring
The following is a list of several ways in which user agents are commonly involved in web content authoring and the relationship between UAAG 2.0 and ATAG 2.0.
- Preview tool: Authors often preview their work in user agents to test how the content will appear and operate. ATAG 2.0 includes a special exception when previews are implemented with pre-existing user agents, so there are no additional requirements on user agent developers in this case.
- Checking tool: Authors often make use of user agent error panels (e.g. HTML validity, JavaScript errors) during authoring. ATAG 2.0 Part A applies, but may not include additional accessibility requirements beyond the UAAG 2.0 success criteria. If a user agent includes an "accessibility checker", the developer should consult checker implementation guidance in ATAG 2.0 Part B.
- Edit modes: Some user agents include a mode where the user can edit and save changes to the web content, modifying the experience of other users. In this mode, the user agent is acting as an authoring tool and all of ATAG 2.0 applies.
- Automatic content changes: Some user agents or plug-ins may automatically change retrieved web content before it's rendered. This functionality is not considered an authoring tool because changes are made to the user's own experience, not the experience of other users.
- Providing a platform for web-based authoring tools: Many web applications serve as authoring tools and make use of user agent features to deliver functionality (e.g., undo text entry, adjust font size of the authoring tool user interface etc.)
User agent developers should consult ATAG 2.0 to understand the ways in which web-based authoring tools can depend on user agent features.
UAAG 2.0 Guidelines
The success criteria and applicability notes in this section are normative. Guideline summaries are informative.
PRINCIPLE 1 - Ensure that the user interface
and rendered content are perceivable
Implementing
Guideline 1.1 - Provide access to alternative content.
Summary: The user can choose to render any type of alternative content available (1.1.1). The user can also choose at least one alternative such as alt text to be always displayed (1.1.2), but it's recommended that users also be able to specify a cascade (1.1.4), such as alt text if it's there, otherwise longdesc, otherwise filename, etc. It's recommended that the user can configure the caption text, and that the text or sign language alternative does not obscure the video or its controls (1.1.3). The user can configure the size and position of media alternatives (1.1.5).
1.1.1 Render Alternative Content [was 1.1.3]:
For any content element, the user can choose to render any types of alternative content that are present. (Level A)
- Intent of Success Criterion 1.1.1:
Alternative content is wasted if users cannot find it. The user agent should make the presence of alternative content evident to the user. Users should not have to hunt through the content every time to check whether alternative content is present, because such searching can be time-consuming, especially for users whose disability makes input difficult, tiring, or painful. The user should be able to easily identify which items have alternative content, rather than being merely informed that alternative content is somewhere in the view.
- Examples of Success Criterion 1.1.1:
- Tinan has repetitive strain injuries and seeks to limit scrolling. The user agent renders distinct visual icons in proximity of content that has short text alternatives, long descriptions, and captions. If the icon forces the text to extend beyond a fixed size container the user agent uses global preference settings to determine whether to expand the container, provide scroll bars, or truncate the content.
- Aosa is blind. When rendering a Web page using synthesized speech, the user agent generates an audible tone to signify that the word being read is an acronym, and Aosa can press the * key to hear the expansion. When the phrase being read is the Alt text for an image, another tone indicates that Aosa can press + to hear the long description.
- Brin is deaf. The video player she is using has a button displayed beneath the playing video that indicates that captions are available. She clicks the button to toggle the captions on so she can understand the video.
- Related Resources for Success Criterion 1.1.1:
1.1.2 Configurable Alternative Content Defaults [was 1.1.1]:
For each type of non-text content, the user can specify a type of alternative content that, if present, will be rendered by default. (Level AA)
- Intent of Success Criterion 1.1.2:
Alternative content is wasted if the user agent doesn't render it for users who need it. Default alternative content is a global option because it is an unreasonable burden for users to change the rendering options every time they visit a new page.
- Examples of Success Criterion 1.1.2:
- Sally is blind. In the browser's preferences dialog box, Sally specifies that she wants alt text displayed in place of images, and that the document should reflow to allow the entire alt text to be displayed rather than truncated.
- Ben has low vision. In the browser's preferences dialog box, he chooses to always display the alternative ("fallback") content for embedded objects, such as videos.
- Brin is deaf. She toggles a menu item which turns on the display of all captions for video and audio content.
- Related Resources for Success Criterion 1.1.2:
1.1.3 Display of Time-Based Media Alternatives:
For recognized on-screen alternatives for time-based media (e.g. captions, sign language video), the following are all true: (Level AA)
- Do not obscure primary media: The user can specify that displaying media alternatives doesn't obscure the primary time-based media; and
- Do not obscure controls: The user can specify that displaying media alternatives doesn't obscure recognized controls for the primary time-based media; and
- Configurable text: The user can configure recognized text within media alternatives (e.g. captions) in conformance with 1.4.1.
Note: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size to meet this requirement.
- Intent of Success Criterion 1.1.3 [was 2.11.11]:
Users who require or can benefit from alternative media tracks in video or audio may not find that the default or authored position and size of those tracks is usable. Enabling the user to move and scale any displayed alternate media tracks (e.g. captions) allows displayed content to be positioned and sized to meet the needs of the user.
- Examples of Success Criterion 1.1.3:
- Justin has low vision and works in a noisy environment that makes it difficult to listen to instructional videos. When he enlarges the text of the captions to a viewable size, they block most of the video image. Justin selects an option that displays the caption track in a separate window, which he positions below the video image so the captions don't block the video image.
- Jaime is deaf and is taking courses from on online university. She prefers to use ASL if it is available for online media. A course she is taking offers captions and a signing avatar for the recorded lectures. The default size of the avatar window is small, making it difficult to follow the signing. The avatar also overlays a significant part of the lecture video. Jaime drags the avatar out of the video and enlarges it, so that they are side by side and equally sized.
- Related Resources for Success Criterion 1.1.3:
1.1.4 Default Rendering of Alternative Content (Enhanced):
For each type of non-text content, the user can specify the cascade order in which to render different types of alternative content when preferred types are not present. (Level AAA)
- Intent of Success Criterion 1.1.4:
For a given piece of non-text content the author may provide one or more alternatives. For example, an image may have different versions based on resolution, ‘alt text’ (@alt) or a link to a long description (@longdesc). A video may have bandwidth alternatives, caption files in different languages, and audio descriptions in different languages. Users can choose which item(s) to render by default, and specify the order of the cascade of alternatives to be rendered if the author provided multiple alternatives.
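The cascade behavior described above can be sketched as a small selection function: walk the user's preferred order and return the first alternative the author actually provided. This is a hypothetical sketch; the function and field names are illustrative.

```javascript
// Hypothetical sketch of a user-configured cascade for alternative
// content. `cascade` is the user's preferred order; `available` maps
// alternative types to author-provided values.
function chooseAlternative(cascade, available) {
  for (const type of cascade) {
    // Skip alternatives the author omitted or left empty.
    if (available[type] !== undefined && available[type] !== "") {
      return { type, value: available[type] };
    }
  }
  return null; // nothing usable; the caller may fall back to repair text
}

// Mary's preference: long description first, then alt text, then file name.
const choice = chooseAlternative(
  ["longdesc", "alt", "filename"],
  { alt: "A red canoe on a lake", filename: "canoe.png" }
);
// choice.type === "alt" (no long description was provided)
```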
- Examples of Success Criterion 1.1.4:
- Mary has a learning disability. She finds looking at images on a webpage very distracting. Mary would like to see all images rendered in the following order: First, for images with long descriptions, render the long description in place of the image. If the long description does not exist, render the ‘alt text’. If neither is available, render the file name. Added functionality would allow Mary to right click (context menu) on an image to list and select the preferred order of available alternatives (thumbnail, original size, full screen, low resolution, high resolution, alt text, long description, file name).
- Juan is hard of hearing. He always wants to see video on the page. Juan wants to use the Spanish language track and Spanish captions as a default. If these are not available, he wants to see the video with English audio and captions. If no captions are available, Juan wants video with English audio. Added functionality would allow Juan to right click (context menu) on a video to list and select the preferred order of the available alternatives (still image, caption languages, audio languages, audio-description languages).
- Related Resources for Success Criterion 1.1.4:
1.1.5 Size and Position of Time-Based Media Alternatives:
The user can configure recognized on-screen alternatives for time-based media (e.g. captions, sign language video) as follows: (Level AAA)
- The user can resize the media alternatives up to the size of the user agent's viewport.
- The user can reposition the media alternatives to at least above, below, to the right, to the left, and overlapping the primary time-based media.
Note 1: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size or hidden to meet this requirement.
Note 2: Implementation may involve displaying media alternatives in a separate window, but this is not required.
- Intent of Success Criterion 1.1.5:
Users may want to reposition the alternative in close proximity to the most important portion of the main media to reduce the visual scanning distance between them. For example, if the video frequently includes on-screen text near the top of the video then the captions will be easier to read if they are located above the video.
- Examples of Success Criterion 1.1.5:
- Maximilian adjusts the position of his captions depending on what he's watching. When watching a sporting event with a dashboard displaying statistics at the top, he positions the captions immediately above the top so the captions are close to the statistics. However, when he watches a movie, he positions the captions so that they overlap the video frame near the bottom. When he watches financial news with a stock ticker along the bottom, he moves the captions to be immediately below the ticker.
- When Tom watches narrow-aspect video on a wide-aspect screen, he moves the window displaying sign language interpretation to the side, allowing the primary video to take up the entire height of the screen without the interpretation getting in the way.
- Raymond has one functioning hand. He positions captions so that they're not covered by the hand he's using to hold his tablet.
- Related Resources for Success Criterion 1.1.5:
Implementing
Guideline 1.2 - Repair missing content.
Summary: The user can request useful alternative content when the author fails to provide it. For example, showing metadata in place of missing or empty (1.2.1) alt text. The user can ask the browser to predict missing structural information, such as field labels, table headings or section headings (1.2.2).
1.2.1 Support Repair by Assistive Technologies:
If text alternatives for non-text content are missing or empty then both of the following are true: (Level AA)
- the user
agent does not attempt to repair the text alternatives with text values that are also available to assistive technologies.
- the user agent makes metadata related to the non-text content available programmatically (and not via fields reserved for text alternatives).
- Intent of Success Criterion 1.2.1:
When alternative content is missing, it is sometimes useful for users
to have access to alternative information, such as the
filename. Users need to control the flow of this
information, because an uncontrolled flow can be distracting and time-consuming. This is particularly important for users who can't use some forms of content (e.g. images) or may need to avoid some forms of content (e.g. animations) and replace them with alternative content.
Users also need control over how content is laid out when this information is added, because truncating the content to fit its container may make the document unusable (e.g. if important information becomes hidden). In other cases, expanding the container will make the document unusable (e.g. when important cues no longer line up correctly).
Note that repair text is only required for alternative content for images. For example, it would not require the user agent to generate a transcript of audio using speech recognition.
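The canoe.png example below can be sketched as a small repair function. This is a hedged illustration, not required behavior; the function and field names are assumptions, and an author-supplied text alternative is always left untouched.

```javascript
// Hypothetical sketch: build a repair string from available metadata
// when a text alternative is missing or empty. Field names (alt,
// filename) are illustrative assumptions.
function repairText(img) {
  // An author-provided, non-empty alternative always wins; no repair.
  if (img.alt && img.alt.trim() !== "") return img.alt;
  // Fall back to the file name, the only metadata in the example below.
  if (img.filename) return "(image " + img.filename + ")";
  return "(image)"; // no usable metadata at all
}

repairText({ alt: "", filename: "canoe.png" });
// → "(image canoe.png)"
```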
- Examples of Success Criterion 1.2.1:
- Ray is blind and counts on alternative text descriptions for images. There is an image in web content that does not have alternative
text provided. The browser displays the string '(image canoe.png)', which includes the file name because that is the
only available information about the image.
- Bintu is deaf and relies on captions to replace audio. Bintu selects a caption
button for a video she wants to watch, and is informed that no captions exist. The player
then analyzes the video soundtrack and provides speech-to-text
translation served as captions. Note: this is an advanced example, not a requirement.
- Related Resources for Success Criterion 1.2.1:
1.2.2 Repair Missing Structure:
The user can specify whether or not the
user
agent should attempt to insert the following types of structural markup on the basis of author-specified presentation attributes (e.g.. position and appearance):
(Level AAA)
- Labels
- Headers (e.g. heading markup, table headers)
- Intent of Success Criterion 1.2.2:
When an author neglects to provide labels and/or headers as necessary for accessibility, the user agent can sometimes use heuristics to determine potential labels and/or headers from presentation attributes. Once potential headings and/or labels have been identified, the user agent can proceed (e.g. with communication via platform accessibility services) as if the relationship was defined in the markup. The user can specify whether heuristics should be applied because some users want to experience the content as the author provided it (e.g. to perform evaluations or when heuristics fail).
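A heuristic like the checkbox case can be sketched as a small geometric test: pick the nearest static text to the right of the unlabeled control, on roughly the same row. This is a hypothetical sketch; the geometry fields and thresholds are assumptions, and real heuristics are considerably more involved.

```javascript
// Hypothetical label-inference heuristic, loosely following the
// checkbox example: among static text nodes to the right of the
// control and roughly on the same row, take the nearest one.
function inferLabel(control, textNodes) {
  const candidates = textNodes
    .filter(t =>
      t.x > control.x + control.width &&           // strictly to the right
      Math.abs(t.y - control.y) < control.height)  // roughly the same row
    .sort((a, b) => a.x - b.x);                    // nearest first
  return candidates.length > 0 ? candidates[0].text : null;
}

// A checkbox with "Subscribe to newsletter" immediately to its right.
inferLabel(
  { x: 10, y: 20, width: 16, height: 16 },
  [{ x: 32, y: 22, text: "Subscribe to newsletter" },
   { x: 300, y: 22, text: "Unrelated text" }]
);
// → "Subscribe to newsletter"
```

Because heuristics like this can fail, the success criterion makes them user-switchable rather than always-on.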
- Examples of Success Criterion 1.2.2:
- George uses speech input. When content markup includes a checkbox without a specified label, the user agent detects that a static text component is positioned immediately to the right of the checkbox and no other static text components are nearby. The nearby text is treated as the label. This enables George to toggle the checkbox by speaking its name.
- Maria uses a screen reader. When a table lacks marked up header rows, the user agent gives her the option to have the first row treated as the table header row.
- Li uses a scanning keyboard and makes use of the user agent's outline view to more efficiently navigate web pages. The content markup includes instances of static text that differ from surrounding text because they are sentence fragments and styled with a larger font and bold weight. The instances are treated as headings and therefore appear in the outline view, enabling Li to navigate to them more efficiently.
- Related Resources for Success Criterion 1.2.2:
Implementing
Guideline 1.3 - Provide highlighting for
selection, keyboard focus, enabled elements, visited links.
Summary: The user can visually distinguish selected, focused, and enabled items, and recently visited links (1.3.1), with a choice of highlighting options that at least include foreground and background colors, and border color and thickness (1.3.2).
1.3.1 Highlighted Items:
The user can specify that the following classes be highlighted so that each is uniquely distinguished: (Level A)
- selection
- active keyboard focus (indicated by focus cursors and/or text cursors)
- recognized enabled input elements (distinguished from disabled elements)
- elements with alternative content
- recently visited links
- Intent of Success Criterion 1.3.1:
Users need to be able to easily discover web content they can interact with. One effective way to do this is to highlight enabled elements and links (including recently visited links). Highlighted selection and content focus let people who use keyboard, gesture, and speech input know where they are working. On some pages controls may be difficult to discern amid a large amount of other content, or may be styled so that they are difficult to distinguish from other content. This can be particularly difficult for people with visual impairments, who may not be able to distinguish subtle visual differences, and for people with some cognitive impairments, who may have difficulty distinguishing between items with similar or non-standard appearance. Visually distinguishing these items reduces the amount of time or number of commands these users need to examine a page.
Note: In addition to these required categories, it is recommended that user agents also allow the user to highlight the active viewport, even when it is a frame or similar within the active window. This makes it much easier for the user to visually locate the active focus.
Note: Platform
conventions will dictate whether or not keyboard focus in an inactive viewport is visually indicated by an inactive cursor.
Note: the definition of visited and unvisited links is up to the user agent. Visited links may be links visited during the current session or in the browser's history.
- Examples of Success Criterion 1.3.1:
- Jerry has low vision. He goes to a website that uses styles to override visited link color. He wants to know what links have yet to be explored. The user agent provides a dialog box for setting overrides to author-selected link colors. Jerry uses the dialog box to override the author styles so visited links are indicated.
- Jerry goes to a website with styles that remove the content focus outline. The user agent provides a dialog box for setting overrides to the author's focus outline declaration. Jerry uses the dialog box to display the content focus outline so he can tell where the focus is on the page.
- Binh gets easily frustrated when he cannot locate the buttons and links on a page. This happens when buttons and links don't have the standard appearance he's used to. The user agent provides a dialog box for setting overrides to author-selected link colors. Binh turns on the option to have all links appear in bright purple, and buttons drawn with a bright purple border so he can easily scan the page and find the items he's looking for.
- Related Resources for Success Criterion 1.3.1:
1.3.2 Highlighting Options:
When highlighting classes specified by 1.3.1 Highlighted Items, the user can specify highlighting options that include at least: (Level AA)
- foreground colors,
- background colors, and
- borders (configurable color, style, and thickness)
- Intent of Success Criterion 1.3.2:
Low vision users and users with some cognitive disabilities need control over visual properties to meet their individual needs. These include foreground colors, background colors, and visual borders (with the same configurable range as the operating environment's conventional selection utilities).
- Examples of Success Criterion 1.3.2:
- Alex has low vision. He sometimes has difficulty distinguishing fields on web forms. The user
agent provides a dialog box allowing the user to override any author
settings. He chooses to have all form fields displayed with a yellow background and outlined with a thick black border.
- Marcy has a cognitive disorder that makes it difficult to stay focused on the task she wants to accomplish. The user
agent provides a dialog box allowing the user to override any author
settings. She chooses to have all form fields displayed with a yellow background and outlined with a thick black border.
- Related Resources for Success Criterion 1.3.2:
Implementing
Guideline 1.4 - Provide text configuration.
Summary: The user can control text font, color, and size (1.4.1), including whether all text should be shown at the same size (1.4.2).
1.4.1 Configure Rendered Text:
The
user can globally set any or all of the following
characteristics of visually rendered text content, overriding any specified by the author or user
agent defaults: (Level A)
- text scale (the general size
of text)
- font family
- text color (foreground and
background)
- line spacing
- character spacing
- Intent of Success Criterion 1.4.1:
Users need to access a wide range of
font sizes, styles, colors, and other attributes in order to find the
combination that works best for their needs. In providing preferences, it is important to avoid making assumptions. For example,
some users may increase font size to make text more legible, while
other users may reduce the font size to decrease the need to scroll the content.
- Examples of Success Criterion 1.4.1:
- Lee has low vision from albinism and has difficulty with screen resolution and brightness. She chooses to have all text displayed in Palatino font, with white text on a black background, and at least 16 points. The serif Palatino font has character spacing that resolves better for her vision, while the white on black reduces glare and the larger size allows her to distinguish fine detail more clearly.
- Lee uses a browser on her mobile phone that supports only 3 font sizes: small, medium, and large. Lee needs to use a font size of 16 pt, which is between the medium and large sizes. The mobile phone settings provide an option to override the 3 font sizes with the operating system font range, so that Lee can select the 16 pt font size she needs.
- Mike has a reading disability. A website uses a fancy script font for the headings that he cannot understand. He chooses to have all text displayed in a plain font that he can read.
- Related Resources for Success Criterion 1.4.1:
1.4.2 Preserving
Size Distinctions:
The user can specify whether or not distinctions in the size of rendered text are preserved when that text is rescaled (e.g. headers continue to be larger than body text). (Level A)
- Intent of Success Criterion 1.4.2:
The relative size of text provides visual cues that help in understanding and navigating web content. Some content may be authored in a way that makes it difficult or impossible to understand when font size distinctions are removed, such as headlines that are in a larger font than body text. It's important that users who need to enlarge or reduce text size be able to preserve these visual cues. It is also important that magnification users, who may find that text size distinctions greatly increase scrolling and fatigue, be able to display all text at the same size.
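The two rescaling modes described above can be sketched as one function with a toggle: proportional scaling preserves size distinctions, while uniform scaling renders everything at the user's chosen size. The function and role names are illustrative assumptions.

```javascript
// Hypothetical sketch of SC 1.4.2's two modes. `sizes` maps text roles
// to authored point sizes; `bodySize` is the user's chosen body size.
function rescaleText(sizes, bodySize, preserveDistinctions) {
  const factor = bodySize / sizes.body; // uniform scale factor from body text
  const out = {};
  for (const [role, size] of Object.entries(sizes)) {
    // Preserve: scale every role proportionally.
    // Flatten: render every role at the chosen body size.
    out[role] = preserveDistinctions ? size * factor : bodySize;
  }
  return out;
}

// Lee preserves distinctions: 12pt body → 16pt keeps headings larger.
rescaleText({ body: 12, heading: 18 }, 16, true);
// → { body: 16, heading: 24 }

// Tomas flattens everything to one readable size.
rescaleText({ body: 12, heading: 18 }, 16, false);
// → { body: 16, heading: 16 }
```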
- Examples of Success Criterion 1.4.2:
- Lee has low vision. She finds text easiest to read at 16 pt Palatino and chooses to have her browser display body text in the 16 pt Palatino. She needs the headlines to scale proportionally (e.g. 24 pt) in order to preserve headline prominence.
- Tomas has low vision and uses a screen magnifier. He chooses to have his browser display all text the same size, and sets that size as large as he can without making the letters too tall for his screen. Tomas chooses not to have headings be proportionately larger than normal text because that would make them taller than his screen and so unreadable.
- Related Resources for Success Criterion 1.4.2:
Implementing
Guideline 1.5 - Provide volume configuration.
Summary: The user can adjust the volume of each audio track relative to the global volume level (1.5.1).
1.5.1 Global Volume:
The user can independently
adjust the volume of all audio tracks, relative to the global volume level set
through operating environment mechanisms. (Level A)
- Intent of Success Criterion 1.5.1:
User agents can render audio tracks from a variety of sources, and in
some cases, multiple audio tracks may be present on a single page.
Users should be able to globally set the volume of audio tracks, rather
than having to adjust the volume of each audio track being played.
- Examples of Success Criterion 1.5.1:
- An operating system provides a master audio volume control that
applies to all audio tracks rendered within the environment, including
the user agent. The user may define a default volume level through a
preferences dialog that is retained across sessions.
- A user encounters a page with two advertisements and one video which
begins playback on page load complete. A global mute command, supported
via a mute key on the user's keyboard, allows the user to immediately
silence the playing audio tracks.
- Related Resources for Success Criterion 1.5.1:
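The relationship described above, where each track keeps its own level relative to a single global setting, can be sketched as follows. This is a minimal illustration, not UAAG-mandated behavior; the function names (`effectiveVolume`, `muteAll`) and the track object shape are assumptions for the example.

```javascript
// Sketch: honoring a global volume level while letting each audio
// track keep its own relative level. Levels are normalized to 0.0-1.0.
function effectiveVolume(trackVolume, globalVolume) {
  const clamp = (v) => Math.min(1, Math.max(0, v));
  // The rendered volume is the track's level scaled by the global level.
  return clamp(trackVolume) * clamp(globalVolume);
}

// A global mute command zeroes the rendered output without discarding
// each track's own setting, so unmuting restores the previous mix.
function muteAll(tracks) {
  return tracks.map((t) => ({ ...t, muted: true }));
}
```

Scaling rather than overwriting per-track levels means a user who has balanced an advertisement against a video keeps that balance when adjusting the master volume.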
Implementing
Guideline 1.6 - Provide synthesized speech configuration.
Summary: If synthesized speech is produced, the user can specify speech rate and volume (1.6.1), pitch and pitch range (1.6.2), and synthesizer speech characteristics like emphasis (1.6.3) and features like spelling (1.6.4).
1.6.1 Speech Rate, Volume, and Voice:
If synthesized speech is produced, the user can specify the following: (Level A)
- speech rate,
- speech volume (independently of
other sources of audio), and
- voice, when more than one voice option is available
1.6.2 Speech Pitch and Range:
If synthesized speech is produced, the user can specify the following: (Level AA)
- pitch (average frequency of the speaking voice), and
- pitch range (variation in average frequency)
1.6.3 Advanced Speech Characteristics:
The
user can adjust all of the speech characteristics offered by the speech
synthesizer. (Level AAA)
- Intent of Success Criterion 1.6.1, 1.6.2, 1.6.3:
These success criteria allow users to control speech characteristics so they can perceive and understand the audio information.
For example, a user may need to increase the volume to a level within the user's range of perception. Or a user may increase the rate of synthesized speech presentation because the user understands it at a rate faster than the default setting of the user agent.
Success criterion 1.6.1 covers characteristics that users most commonly need to adjust and that are adjustable in most technologies. Success criterion 1.6.2 covers characteristics that are less widely altered and less widely supported.
- Examples of Success Criterion 1.6.1, 1.6.2, 1.6.3:
- Jamie is blind. He uses a mobile-based web browser to read a web page. He presses a key to increase the rate at which the information is read back. He also uses the mobile browser in noisy environments such as a crowded subway. With a key press, Jamie quickly increases the volume.
- Randy has a hearing disability where speech at lower pitches is difficult to hear. He is using an audio browser that reads web pages back to him. He issues a voice command saying "raise pitch" and the overall pitch of the synthetic speech is raised.
- Related Resources for Success Criterion 1.6.1, 1.6.2, 1.6.3:
1.6.4 Synthesized Speech Features:
If synthesized speech is produced, the following features are provided: (Level AA)
- user-defined extensions to the
synthesized speech dictionary,
- "spell-out", where text is spelled
one character at a time, or according to language-dependent pronunciation
rules,
- at least two ways of speaking numerals:
spoken as individual digits and punctuation (e.g. "one two zero three point five" for 1203.5 or "one comma two zero three point five" for 1,203.5), and
spoken as full numbers are spoken (e.g. "one thousand, two hundred
and three point five" for 1203.5),
- at least two ways of speaking
punctuation: spoken literally, and with punctuation understood from natural pauses.
- Intent of Success Criterion 1.6.4:
The synthetic speech presentation of text can be difficult to understand. These success criteria improve understandability by giving the user the ability to adjust the way the speech synthesizer presents text.
- Examples of Success Criterion 1.6.4:
- Penny has a reading disability. She is using a browser that reads web pages to her so she can review her most recent banking transactions. She has configured her browser to speak currency instead of digits. With this setting she hears a transaction as "Deposit, two hundred fifty five dollars".
- Penny's speech synthesizer is repeating a phone number. She wishes to copy this number, so she switches to the mode where each digit is spoken as a unique word (e.g. five, five, five, seven, nine).
- George is blind. His speech synthesizer incorrectly pronounces technical terms employed in his organization. The terms are consistently mispronounced in a way that makes it difficult for George to distinguish them. A dictionary allows George to enter a spelling of each term that causes the synthetic speech to produce the correct pronunciation.
- Related Resources for Success Criterion 1.6.4:
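The "individual digits and punctuation" speaking mode above is straightforward to implement; the sketch below shows one possible form. It is illustrative only, English-only, and the table name `DIGIT_WORDS` and function name `speakAsDigits` are assumptions, not part of any API.

```javascript
// Sketch: reading a numeral one character at a time, as in the
// "individual digits and punctuation" mode of SC 1.6.4.
const DIGIT_WORDS = {
  "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
  "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
  ".": "point", ",": "comma",
};

function speakAsDigits(numeral) {
  // Map each character to its spoken word; pass unknown characters through.
  return [...numeral].map((ch) => DIGIT_WORDS[ch] ?? ch).join(" ");
}
```

With this mode, "1203.5" is spoken as "one two zero three point five" and "1,203.5" as "one comma two zero three point five", matching the wording of the success criterion; the full-number mode requires a separate number-to-words conversion.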
Implementing
Guideline 1.7 - Enable Configuration of User Stylesheets.
Summary: The user agent supports user stylesheets (1.7.1) and the user can choose which, if any, user-supplied (1.7.2) and author-supplied (1.7.3) stylesheets to use. The user agent allows users to save user stylesheets (1.7.4).
1.7.1 Support User Stylesheets:
If the user agent supports a mechanism for authors to supply stylesheets, the user agent also provides a mechanism for users to supply stylesheets. (Level A)
1.7.2 Apply User Stylesheets:
If user style sheets are supported, then the user can enable or disable user stylesheets for: (Level A)
- all pages on specified websites, or
- all pages
1.7.3 Author Style Sheets:
If the user agent supports a mechanism for authors to supply stylesheets, the user can disable the use of author style sheets on the current page.
(Level A)
- Intent of Success Criterion 1.7.1, 1.7.2 & 1.7.3:
CSS stylesheets allow users and authors to customize the rendering of web content. Such customization is frequently used to make web content accessible to a wide range of user needs. These success criteria ensure that users can take full advantage of this ability to customize stylesheets. Since different websites may require different style changes to be readable, it is recommended that user agents provide a feature that lets the user specify which stylesheet should be automatically applied to different web pages as they are loaded (e.g. based on a list of domain names or URL templates).
- Examples of Success Criterion 1.7.1, 1.7.2 & 1.7.3:
- Tanya has low vision. She finds yellow text on a black background easiest to read. When a website loads, the user agent provides a menu that allows Tanya to select among several stylesheets that the web author has created for the website. Tanya selects the "yellow on black" stylesheet from the menu. The web content is then rendered using this stylesheet.
- Tanya has changed a website that is normally in full color to yellow text on a black background. When her husband Jeromy sits at her computer to look at the site, he goes to the user agent menu to quickly de-select the user-defined stylesheet that Tanya previously applied to the web page. The website is now rendered in full color.
- Related Resources for Success Criterion 1.7.1, 1.7.2 & 1.7.3:
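The recommendation above, automatically applying the right user stylesheet per site as pages load, could be implemented with a simple rule list keyed by domain. The following is a hedged sketch; the rule shape (`{ site, stylesheet }`), the `"*"` fallback entry, and the function name are all assumptions for illustration.

```javascript
// Sketch: selecting which user stylesheet (if any) to apply to a page,
// based on a user-maintained list of site rules. A rule whose
// stylesheet is null disables user styling for that site.
function stylesheetFor(hostname, rules) {
  for (const rule of rules) {
    // Match the exact domain or any of its subdomains.
    if (rule.site === hostname || hostname.endsWith("." + rule.site)) {
      return rule.stylesheet;
    }
  }
  // Fall back to a catch-all rule, if the user defined one.
  const fallback = rules.find((r) => r.site === "*");
  return fallback ? fallback.stylesheet : null;
}
```

A lookup like this would let Tanya's "yellow on black" stylesheet load automatically on the sites where she needs it, while leaving other sites in their default colors.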
1.7.4 Save Copies of Stylesheets:
The user can save copies of the stylesheets referenced by the current page, in order to edit and load the copies as user stylesheets. (Level AA)
- Intent of Success Criterion 1.7.4:
Stylesheets provide for powerful customization of rendered content. Occasionally a user may need to make slight modifications to the author-supplied external stylesheets used on a website to satisfy certain accessibility needs. At other times a web author may have created a stylesheet that a user with a disability finds helpful. The intent of this success criterion is to allow users to easily save the CSS for a website and make needed modifications without having to create a full stylesheet of their own, and to apply well-designed stylesheets to other web pages where they find them helpful.
- Examples of Success Criterion 1.7.4:
- Mikki browses to a new website and discovers that the Arial 14-point type used for all headlines does not work well with her level of vision. A friend helps Mikki discover that the headlines are created with a CSS stylesheet. They save this stylesheet and study how it is created. Mikki modifies this stylesheet to adjust the headline text to a larger font and a different typeface. She uses the modified stylesheet by applying it as a user stylesheet.
- Related Resources for Success Criterion 1.7.4:
Implementing
Guideline 1.8 - Help users to use and orient within windows and viewports.
Summary: The user agent provides programmatic and visual cues to keep the user oriented. These include highlighting the viewport (1.8.1), keeping the focus within the viewport (1.8.2 & 1.8.7), resizing the viewport (1.8.3), providing scrollbar(s) that identify when content is outside the visible region (1.8.4) and which portion is visible (1.8.5), changing the size of graphical content with zoom (1.8.6 & 1.8.12), and restoring the focus and point of regard when the user returns to a previously viewed page (1.8.8). Users can specify if and how new viewports open (1.8.9) and whether new viewports automatically get focus (1.8.10). Additionally, the user can specify that all viewports have the same user interface elements (1.8.11). The user can mark items in a webpage and use shortcuts to navigate back to marked items (1.8.13).
1.8.1 Highlight Viewport:
The viewport with the input focus is highlighted and the user can customize attributes of the highlighting mechanism (e.g. shape, size, stroke width, color, blink rate). The viewport can include nested viewports and containers. (Level A)
- Intent of Success Criterion 1.8.1:
When a user agent presents content using multiple viewports, users
benefit from a clear indication of which viewport has focus. Text foreground and background colors
may not provide enough indication of viewport focus for users
with low vision. These users need to customize highlighting of viewport frames using color,
contrast, and border thickness
to provide multiple visual cues.
- Examples of Success Criterion 1.8.1:
- Tanya has low vision. Her favorite music website allows her to select which of the top 10 songs
are available for listening. Each song is represented by a thumbnail of the album cover – a graphical viewport containing a music player. Tanya uses a keyboard-based screen
magnification tool to tab between songs. This highlights the
currently selected player with a thick, yellow
border against a dark gray background.
- Related Resources for Success Criterion 1.8.1:
1.8.2 Move Viewport to Selection and Focus:
When a viewport's selection or input focus changes, the viewport's content moves as necessary to ensure that the new selection or input focus location is at least partially in the visible portion of the viewport. (Level A)
- Intent of Success Criterion 1.8.2:
When content extends
horizontally or vertically beyond the visible bounds of its viewport,
users must be able to move to one or more selectable elements
that may be out of view and to have the selected content
automatically move into view. This gives keyboard users and screen magnification users an efficient means to
view selected content without having to scroll to
locate and view the selection.
- Examples of Success Criterion 1.8.2:
- John has low vision and uses a screen magnifier. He is spellchecking his blog, which is contained within a scrollable viewport. The blog text exceeds the vertical size of the viewport. The
blogging software provides a command to move to the first, and then any
subsequent, unrecognized words. With two unrecognized words in the
posting, John ignores the first selected word, and presses the
key to move to the next unrecognized word, which is out of view. As the key is pressed, the viewport
scrolls to show the selected word.
- George uses a screen reader. He is showing a sighted colleague how to
complete a registration form that's contained within a viewport. The form
exceeds the vertical bounds of the viewport, requiring George to
scroll vertically to view the complete form content. When George
completes each form entry, if the next form field is not already visible in the viewport, it scrolls into view.
- Related Resources for Success Criterion 1.8.2:
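The behavior above, scrolling only as far as needed to bring the new focus at least partially into view, can be sketched as a small scroll calculation. This is an illustration under assumptions: a one-dimensional viewport, coordinates in content pixels, and invented names (`scrollToReveal`).

```javascript
// Sketch: the minimal scroll adjustment that brings a newly focused
// element at least partially into a vertical viewport.
function scrollToReveal(scrollTop, viewportHeight, targetTop, targetBottom) {
  if (targetBottom <= scrollTop) {
    // Target is entirely above the viewport: scroll up to its top.
    return targetTop;
  }
  if (targetTop >= scrollTop + viewportHeight) {
    // Target is entirely below: scroll down until its bottom is visible.
    return targetBottom - viewportHeight;
  }
  // Already at least partially visible: leave the viewport alone.
  return scrollTop;
}
```

Leaving the viewport untouched when the target is already visible avoids the disorienting jumps that SC 1.8.7 warns about.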
1.8.3 Resize Viewport:
The user can resize graphical viewports within the limits of the display, overriding any values
specified by the author. (Level A)
- Intent of Success Criterion 1.8.3:
If a graphical viewport contains content that exceeds the dimensions
of the viewport, users can increase the size of
the viewport – up to the limits of the physical display screen – to allow the full image to be displayed without
scrolling. This
benefits keyboard users who may find it difficult to scroll content,
and users with cognitive or learning disabilities whose understanding
of the content is aided by viewing the complete image.
- Examples of Success Criterion 1.8.3:
- Perttu has a learning disability. He is studying an organization chart and has difficulty maintaining a
mental representation of the organizational linkages for items out of
view. In order to facilitate his understanding of the organization,
he sizes the viewport to allow
the entire chart to be displayed.
- Related Resources for Success Criterion 1.8.3:
1.8.4 Viewport Scrollbars:
When the rendered content extends beyond the viewport dimensions, users can have graphical viewports include scrollbars,
overriding any values specified by the author.
(Level A)
- Intent of Success Criterion 1.8.4:
When rendered content exceeds the bounds of a graphical viewport, horizontal or vertical scrollbars show that not all of the rendered content is currently visible within the viewport, and provide a means of navigating to that content.
- Examples of Success Criterion 1.8.4:
- Tanya has low vision. She is reading a recipe on a website with a fixed-size popup window. Tanya has set a preference in her browser to override the author setting and always display scrollbars on content overflow. She increases the font size, causing the recipe to exceed the vertical and horizontal dimensions of the viewport. The presence of the scrollbar shows her that additional ingredients may be present. She uses the scrollbar to move them into view.
- Related Resources for Success Criterion 1.8.4:
1.8.5 Indicate Viewport Position [was 1.8.5]:
The user can determine the viewport's position relative to the full extent of the rendered
content. (Level A)
- Intent of Success Criterion 1.8.5:
Users who have fine-motor problems that make it difficult to scroll, users who have cognitive issues that make it difficult to orient on the page, and screen reader users, who rely on audio to scan the page, all need to quickly assess the amount of content on the page and where they are located within the content.
- Examples of Success Criterion 1.8.5:
- Ally has cognitive issues that make it difficult to orient. She navigates to a lengthy web page and begins paging through the content. A scroll bar indicates her position within the content as she pages and shows that with each paging action only a small portion of the content is rendered.
- George uses a screen reader to access a lengthy web page. His screen reader speaks the percentage that the page is scrolled, allowing George to know where the cursor is relative to the entire page.
- Related Resources for Success Criterion 1.8.5:
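The "percentage scrolled" figure a screen reader might announce can be derived directly from the viewport's scroll state. A minimal sketch, assuming the CSSOM-style trio of `scrollTop`, `scrollHeight`, and `clientHeight` as inputs; the function name is invented.

```javascript
// Sketch: position within the content, expressed as a percentage of
// the scrollable range, suitable for a scrollbar or spoken announcement.
function percentScrolled(scrollTop, scrollHeight, clientHeight) {
  const scrollable = scrollHeight - clientHeight;
  if (scrollable <= 0) return 100; // everything is already visible
  return Math.round((scrollTop / scrollable) * 100);
}
```

Note the divisor is the scrollable range, not the full content height, so the announcement reaches 100% exactly when the end of the content is in view.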
1.8.6: Zoom [was 1.8.X]:
The user can rescale content within graphical viewports as follows: (Level A)
- Zoom in: to at least 500% of the default size; and
- Zoom out: to at least 10% of the default size, so the content fits within the height or width of the viewport.
- Intent of Success Criterion 1.8.6:
Some users want to be able to magnify content so that it is more legible. Some users want to be able to shrink content so that more of it is visible onscreen. This can help them understand the structure of the content and their position in the content, even if text has become too small to read.
- Examples of Success Criterion 1.8.6:
- Tanya has low vision. She opens a web application that uses small text fonts mixed with graphical elements. She tries increasing the text size alone, but the graphical elements remain at their original sizes. She can't interpret the small graphical elements and the text flows improperly. However, when Tanya uses the zoom feature, the graphical elements are legible and the page flows correctly.
- Perttu has a learning disability. He is studying an organization chart but has difficulty maintaining a mental representation of the organizational linkages for items out of view. In order to facilitate his understanding of the organization, Perttu zooms out to allow the entire chart to be displayed.
- Related Resources for Success Criterion
1.8.6:
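The numeric bounds in this success criterion translate directly into a clamp on the requested zoom factor. A minimal sketch; the constant and function names are assumptions, and a real user agent may of course support a wider range than the required minimum.

```javascript
// Sketch: clamping a requested zoom factor to the range SC 1.8.6
// requires: zoom in to at least 500% and out to at least 10% of the
// default size (1.0).
const MIN_ZOOM = 0.1; // 10% of default size
const MAX_ZOOM = 5.0; // 500% of default size

function clampZoom(requested) {
  return Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, requested));
}
```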
1.8.7 Maintain Point of Regard [was 1.8.Z]:
To the extent possible, the point of regard remains visible and at the same location within the viewport when the viewport is resized, when content is zoomed or scaled, or when content formatting is changed. (Level A)
- Intent of Success Criterion 1.8.7:
Users can be confused and disoriented when the area where they are working suddenly shifts outside the visible region of the viewport. When this happens, users may have to expend considerable time and effort to re-navigate back to their previous point of regard. Just as the location in audio does not change when the user increases the volume, the point of regard should not change when the user changes the size of the window or zooms the content.
The point of regard is the information within the viewport that is visible to the user. When there is focused or selected content inside a viewport, and the viewport is resized, or content is zoomed, scaled, or formatted differently, that content will remain visible in the viewport. Otherwise, the user agent should maintain the same top-left (top-right for text read right-to-left) corner as the initial viewport.
Note: User agents are encouraged to allow user to override author instructions not to wrap content (e.g., nowrap).
- Examples of Success Criterion 1.8.7:
- Jorge has low vision. While viewing a webpage he sees a picture with a caption that is too small to read. He highlights the caption, then uses the browser's zoom feature to increase the size of the content so he can read the caption. Throughout the zooming process the highlighted caption remains in the viewport, allowing Jorge to keep oriented on the caption and begin reading it when the appropriate content size is reached. Later, while reading on a page, Jorge finds some text that's too large to read. The beginning of the large text is at the top of the browser content area. Jorge uses the zoom feature to make the content smaller. The text is reduced to a comfortable reading size and the beginning of the text remains at the top of the browser window.
- Melissa has a distraction disorder. She is doing research for school. She scrolls the content so the relevant section heading is the top line of the browser. She resizes the browser so she can see her notes and the browser at the same time. After resizing the browser the heading is still the top line.
- Xu has a reading disability. He is reading a page with footnotes that are too small to read. Xu places the footnote at the top of the browser, and using the increase font-size feature, he increases the font-size of the text on the page. The footnotes stay on the top of the viewport.
- Related Resources for Success Criterion
1.8.7:
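Preserving the point of regard through a zoom change reduces to a small scroll calculation: find where the point currently sits inside the viewport, then solve for the scroll offset that puts it back there at the new scale. A sketch under assumptions (one axis, content-pixel coordinates, invented function name):

```javascript
// Sketch: new scroll offset that keeps a content point at the same
// place in the viewport across a zoom change. `contentY` is the
// point's offset in unscaled content coordinates.
function scrollAfterZoom(contentY, scrollTop, oldZoom, newZoom) {
  // Where the point currently sits inside the viewport...
  const viewportY = contentY * oldZoom - scrollTop;
  // ...is preserved by solving for the new scroll offset.
  return contentY * newZoom - viewportY;
}
```

For example, a point 50px down the viewport before zooming stays 50px down the viewport afterward, so the user's eye (or magnifier) never has to chase the content.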
1.8.8 Viewport History [was 1.8.5]:
For user
agents that implement a viewport history mechanism (e.g. "back" button), the user can return to any state in the viewport history that is allowed by the content, including a restored point of regard, input focus and selection. (Level AA)
1.8.9 Open on Request [was 1.8.6]:
The user can specify whether author content can open new top-level viewports (e.g. windows or tabs). (Level AA)
1.8.10 Do Not Take Focus:
If new top-level viewports (e.g. windows or tabs) are configured to open without explicit user request, the user can specify whether or not top-level viewports take the active keyboard focus when they open. (Level AA)
- Intent of Success Criteria 1.8.8, 1.8.9 & 1.8.10:
Unexpected focus and viewport changes can be disorienting for all users, requiring time and effort for the user to orient to the change. These success criteria are intended to allow the user to be in control of when viewport changes happen so the user can orient to the changes in a predictable fashion.
- Examples of Success Criteria 1.8.8, 1.8.9 & 1.8.10:
- Justin has an attention deficit disorder. He is studying an online college course and finds a word that he doesn't know. He follows a link to the definition of the word. After reading the definition, he uses the back button (viewport history) to return. The word remains selected, so he doesn't have to reorient or find it on the page.
- George uses a screen reader. He visits a web page that ordinarily opens a popup page. George receives an audio alert that additional content is available. He can choose to have this pop-up content open on request.
- John has low vision and uses a screen magnifier to access his browser. He configures his browser so popup windows do not take focus because the popups can open outside the view of his magnifier.
- Frank has repetitive strain injuries and uses speech input. He configures his browser so that pop-ups always take focus, so he doesn't have to take multiple steps to locate the window and take the desired action.
- Related Resources for Success Criteria 1.8.8, 1.8.9 & 1.8.10:
1.8.11 Same UI:
The user can specify that all top-level viewports (e.g. windows or tabs) follow the defined user interface configuration. (Level AA)
- Intent of Success Criterion 1.8.11:
Users orient themselves to a browsing environment with a variety of techniques. This success criterion is designed to ensure that the user does not have to learn multiple strategies to use the browsing viewport.
- Examples of Success Criterion 1.8.11:
- Robert uses magnification software. After setting up his user agent, he knows that web content begins predictably one inch from the top of the window, so he can configure his magnification software to present content starting at that location.
- Courtney has difficulty understanding and using complex user interfaces. She has worked with her sister to set up her browser to have only a few of the most common browser controls displayed and she knows to expect them on every browser window.
- Related Resources for Success Criterion 1.8.11:
1.8.12: Reflowing Zoom:
The user can request that when reflowable content in a graphical viewport is rescaled, it is reflowed so that one dimension of the content fits within the height or width of the viewport. (Level AA)
Note: User agents are encouraged to allow users to override author instructions not to wrap content (e.g., nowrap).
- Intent of Success Criterion 1.8.12:
It's important that reflowed content remains easily usable. Content is not easily usable if the user has to scroll back and forth to see a single line of text. This is an especially acute issue for users who find it difficult or impossible to use the mouse to scroll and for users who find it difficult to reorient when the content changes. This does not require or prohibit the user agent from providing an option to turn off reflow.
- Examples of Success Criterion 1.8.12:
- Frank has repetitive strain injuries and uses speech input. As he's gotten older he sometimes needs to increase the font size to comfortably read text on his computer screen. If the text does not flow properly, he overuses his voice to carry out multiple speech commands, or uses the mouse, which exacerbates his injuries, or simply gives up.
- Maggie has cognitive issues that make it difficult for her to reorient when the computer screen changes. She occasionally zooms in to read small text. She finds it easier to reorient after zooming when at least one dimension of the content fits within the height or width of the viewport.
- Related Resources for Success Criterion 1.8.12:
1.8.13 Webpage Bookmarks [was 1.8.m, 1.8.13]:
The user can mark items in a webpage, then use shortcuts to navigate back to marked items. The user can specify whether a navigation mark disappears after a session, or is persistent across sessions. (Level AAA)
- Intent of Success Criterion 1.8.13:
This success criterion is crucial for users who have trouble navigating a webpage. People who use speech input, have memory problems, or use small screens may be able to go from one area of a webpage to another area once or twice, but may have trouble frequently repeating the action. The ability to mark areas of the page allows these types of users to navigate more quickly with less fatigue.
- Examples of Success Criterion 1.8.13:
- Jamie is a quadriplegic who uses speech input. She is a professor who reads long documents online and often finds herself comparing different portions of the same document. It is tedious carrying out multiple scrolling commands by speech every time she needs to change to another portion of the document. She sets several bookmarks instead. This allows her to instantly jump among sections, eliminating the time and effort penalties she usually has to pay for slow scrolling.
- Julie has memory problems. She finds it difficult to remember key points from a document she has just read. She uses bookmarks to mark important points she needs to read again. Without the ability to bookmark them, she wouldn't remember which points she needs to reread.
- George is blind and occasionally travels to unfamiliar places for work. He sometimes uses an iPhone to orient himself within a map of the building he's in. Once he's found a key place on the map – a room where a conference session is held, or the bathroom – he bookmarks it to build a map of useful places. This makes navigation easier the second time around. He sets the marks to be persistent across sessions so next time he visits he won't have to repeat his work.
- Mary has repetitive strain injuries that make it painful to use a mouse. She is a college professor who uses an elaborate web application to correct papers. After putting a comment in a comment field she has to scroll all the way to the bottom of the document to enter the comment. Bookmarks make even this badly designed application something she can use successfully without hurting herself.
- Related Resources for Success Criterion
1.8.13:
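The session/persistent distinction this success criterion draws can be modeled with a small mark store. The sketch below is illustrative only; the mark shape, the `endSession()` behavior, and all names are assumptions (a real user agent would also persist the surviving marks to storage).

```javascript
// Sketch: an in-page bookmark store with session-only and persistent
// marks, as SC 1.8.13 describes.
function createMarkStore() {
  let marks = [];
  return {
    add(id, target, persistent = false) {
      marks.push({ id, target, persistent });
    },
    jumpTo(id) {
      // A shortcut command would scroll/focus the returned target.
      const mark = marks.find((m) => m.id === id);
      return mark ? mark.target : null;
    },
    endSession() {
      // Session-only marks disappear; persistent marks survive.
      marks = marks.filter((m) => m.persistent);
    },
    count() {
      return marks.length;
    },
  };
}
```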
Implementing
Guideline 1.9 - Provide alternative views. [was 1.10]
Summary: The user can view the source of content (1.9.2), or an outline view of important elements (1.9.1).
1.9.1 Outline View:
Users can view a navigable outline of rendered content composed of labels for important structural elements, and can move focus efficiently to these elements in the main viewport. (Level AA)
Note: The important structural elements depend on the web content technology, but may include headings, table captions, and content sections.
- Intent of Success Criterion 1.9.1:
Outline views allow users to get a simplified view or overview of a
document. They are particularly useful for users with memory or
cognitive disabilities, blind users, and users who find it difficult or impossible to use a mouse. A navigable outline view reduces orientation and navigation time and fatigue.
- Examples of Success Criterion 1.9.1:
- Frank has repetitive strain injuries and uses speech input. He uses a web browser to read financial information. His browser provides an optional panel displaying a hierarchical list
of the headers and tables in the current document.
Frank is able to expand or shrink portions of the outline view for
faster access to the information he wants.
- George uses a screen reader. He reads long standards documents and uses the headings to navigate quickly so he can compare sections of the standards.
- George also finds the outline view useful when he is quickly checking a reference on his mobile phone.
- Related Resources for Success Criterion 1.9.1:
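An outline panel like the one Frank uses amounts to flattening the document's structural elements into an indented, navigable list. A minimal sketch; the input shape (`{ level, label, id }`) stands in for whatever the user agent extracts from the DOM, and the names are assumptions.

```javascript
// Sketch: deriving a navigable outline from a flat list of structural
// elements (e.g. headings), indenting each entry by its depth so the
// panel can render a collapsible tree.
function buildOutline(elements) {
  return elements.map((el) => ({
    id: el.id, // used to move focus to the element in the main viewport
    label: "  ".repeat(el.level - 1) + el.label,
  }));
}
```

Selecting an outline entry would then move focus to the element with the matching `id`, satisfying the "move focus efficiently" clause of the success criterion.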
1.9.2 Source View:
The user can view all source text that is available to the user
agent. (Level AAA)
- Intent of Success Criterion 1.9.2:
The source view is the ultimate fallback for a person with disabilities when the browser cannot
properly render some content, or when the user cannot take advantage of
the content as rendered or using the mechanisms provided.
- Examples of Success Criterion 1.9.2:
- George uses a screen reader. He visits a web page where the content author failed to provide alt text or a long description for an
image he wants to access. As a last resort, George examines the source to see the image's URI, class, and similar attributes. He sees that part of the URI for the image is "home.jpg" and concludes that he can click on that image to return to the home page of the site.
- Mikki has low vision. She wants to create a user stylesheet for her web-based email program. She examines the source code to identify the CSS class of the email heading. She then creates a user stylesheet that increases the size and contrast of the email headings.
- Related Resources for Success Criterion 1.9.2:
Implementing
Guideline 1.10 - Provide element information. [was 1.11]
Summary: The user agent presents information about content relationships (e.g. form labels, table headers) (1.10.1), and extended link information (e.g. title, internal vs. external) (1.10.2).
1.10.1 Access Relationships:
The user can access explicitly-defined relationships based on the user's position in content (e.g. show the label of a form control, show the headers of a table cell). (Level AA)
- Intent of Success Criterion 1.10.1:
Some users have difficulty perceiving, remembering, or understanding the relationships between elements and their descriptions. Certain elements relate to others in a recognizable manner, such as through 'id' attributes and parent-child relationships (e.g. in Ajax widgets and form elements). This success criterion lets users better understand these relationships even if the elements are not adjacent on the screen or in the DOM.
- Examples of Success Criterion 1.10.1:
- John has low vision and uses a screen magnifier to access his browser. When interacting with tables and spreadsheets John has to move the viewport of the magnifier to understand the row and column titles of the cell with which he is interacting. This takes additional time and effort and is therefore frustrating. John switches to a browser that presents the row and column titles when he hovers over a cell - this makes him much more productive at his accounting job.
- Courtney has a cognitive disability that makes it difficult for her to comprehend complex user interfaces. She is completing an online job application. It is not clear where she should enter her phone number. She mouses over a blank box and sees a tooltip that says "home phone number". She is able to complete the form.
- Related Resources for Success Criterion 1.10.1:
1.10.2 Access to Element Hierarchy:
The user can determine the path of element nodes going from the root element of the element hierarchy to the currently focused or selected element. (Level AAA)
- Intent of Success Criterion 1.10.2:
Users who have difficulty working with a web page or document can use user style sheets or scripts to modify page presentation or interaction so they can gain information or accomplish a task. Style sheets and scripts may require the user to identify specific elements, element attributes, and an element's position in the hierarchy. The user agent can facilitate this process by allowing the user to navigate to an element (or select it, if it's not navigable) and query for element information. If this feature is not provided, the user may be forced to hunt for the corresponding element in the source view or the entire document tree.
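The path from root to focused element can be sketched as a walk up a node tree. The tree below is a hypothetical reconstruction of Jack's example in this section; the node shapes are simplified, not a real DOM API.

```javascript
// Sketch: compute the path of element nodes from the document root to a
// focused node, as a user agent might expose it for 1.10.2.
function pathTo(node) {
  const path = [];
  for (let n = node; n; n = n.parent) {
    path.unshift(n.cls ? `${n.tag}.${n.cls}` : n.tag);
  }
  return path.join(" > ");
}

// Hypothetical tree matching Jack's example: text with class "story" inside
// a paragraph inside a DIV with class "Premiere".
const html = { tag: "html" };
const div = { tag: "div", cls: "Premiere", parent: html };
const p = { tag: "p", parent: div };
const story = { tag: "span", cls: "story", parent: p };

console.log(pathTo(story)); // "html > div.Premiere > p > span.story"
```

The resulting path tells the user exactly which combination of classes and element types to specify in a user style sheet selector.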
- Examples of Success Criterion 1.10.2:
- Jack has low vision. He wants certain content on a web page to be displayed in a larger font, and wants to create a user style sheet that would modify its appearance. He needs to identify the class or ID of the particular element, so he selects the text he's interested in, opens the browser's debug window, which shows him that the selected text is an element with class "story" inside a paragraph inside a DIV with class "Premiere". He then knows the combination of classes and element types to specify in the user style sheet.
- Related Resources for Success Criterion 1.10.2:
PRINCIPLE 2. Ensure that the user interface is operable
Summary: Users can operate all functions (2.1.1) and move focus (2.1.2) using just the keyboard. Users can activate important or common features with shortcut keys (2.1.6), escape keyboard traps (2.1.3), specify that moving to an item in a dropdown list or menu does not activate that item or move to a new web page (2.1.4), and use standard keys for the platform (2.1.5).
2.1.1 Keyboard Operation:
All functionality can be operated via the keyboard using sequential or direct keyboard commands that do not require specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g. freehand drawing). This does not forbid and should not discourage providing other input methods in addition to keyboard operation, including mouse, touch, gesture and speech. (Level A)
- Intent of Success Criterion 2.1.1:
A user has many ways to input information into a computer or device, including mouse, keyboard, gesture, and speech. The keyboard paradigm is the most universal interface for text input; even devices that lack a physical keyboard (like mobile phones) provide a software keyboard interface. A user should be able to navigate, read and use all of the web page or application without needing to use a mouse. Some users do not use a mouse. Others can only use a pointing device that uses the keyboard API. It's important that these users be able to interact with enabled components, select content, navigate viewports, configure the user agent, access documentation, install the user agent, and operate user interface controls, all entirely through keyboard input.
User agents generally support at least three types of keyboard operation:
- Direct (e.g. keyboard shortcuts such as "F1" to open the help menu; see checkpoint 11.4 for single-key access requirements)
- Sequential (e.g. navigation through cascading menus)
- Spatial (e.g. when the keyboard is used to move the pointing device in two-dimensional visual space to manipulate a bitmap image)
User agents should support direct or sequential keyboard operation for all functionality. The user agent should offer a combination of keyboard-operable user interface controls (e.g. keyboard-operable print menus and settings) and direct keyboard shortcuts (e.g. to print the current page).
- Examples of Success Criterion 2.1.1:
- Amal has a medical condition that prevents him from using the mouse. Jeremy finds it difficult or impossible to use a mouse or keyboard and therefore uses speech to control the keyboard. Tanya is blind and uses a screenreader and must be able to accomplish all tasks with the keyboard. Jamie uses a pointing device to control the computer. Each of these users must be able to do the following through keyboard alone, speech input alone, or pointing device alone:
- Select content and operate on it. For example, if it is possible to select rendered text with the mouse and make it the content of a new link by pushing a button, these users also need to be able to do so through the keyboard and other supported devices. Other examples include cut, copy, and paste.
- Set the focus on viewports and on enabled elements.
- Install, configure, uninstall, and update the user agent software.
- Amal, Jeremy and Jamie use the graphical user interface menus even though they cannot use the mouse.
- Fill out forms.
- Access documentation.
- Amal is reading a web page where the author used the CSS overflow property to constrain the size of a block of content. Amal's browser provides scroll bars to display text that overflows the container. He uses the keyboard to enter the element and operate the scrollbars to read all the content, then uses the keyboard to return focus to the main web page. (see 2.1.3 No Keyboard Trap)
- Tanya needs to use a volume control widget to change the volume on a video she is watching. She uses the keyboard to navigate to the widget, uses the arrow keys to increase the volume, then uses the keyboard to navigate to the comments below the video player.
- Cade has low vision. He can't identify an icon and so he needs to read its alternative text. Mouse users could hover the mouse over the icon and have its alternative text displayed as a pop-up or tooltip, but Cade cannot use a mouse. He uses the keyboard to move focus to the icon. The browser automatically displays the "hover" tooltip, or allows him to press a key to have it displayed.
- Jeremy is a speech-input user who cannot use his hands to control his computer. It is much easier for him to speak keyboard shortcuts than to click on an icon in a new program. He needs to be able to see tooltips to discover keyboard shortcuts without having to use the mouse.
- Related Resources for Success Criterion 2.1.1:
2.1.2 Keyboard Focus (former 1.9.2):
Every viewport has an active or inactive keyboard focus at all times. (Level A)
- Intent of Success Criterion 2.1.2:
Both the user and some types of assistive technology need to know what will be affected by any keyboard input, so it's important that they be able to tell which window, viewport, and controls have the keyboard focus at any time. This applies whether window and viewport are active (active keyboard focus) or inactive (inactive keyboard focus). Even when a window is inactive, it can be affected by simulated keyboard input sent by assistive technology tools. Active keyboard focus is indicated to the user by focus cursors and text cursors, as required by Guideline 1.3, and made available to assistive technology, as required by Success Criterion 4.1.6.
- Examples of Success Criterion 2.1.2:
- Amal has a medical condition which prevents him from using the mouse. He launches a web browser and navigates to a web page. Initially the keyboard focus is on the entire document, which is exposed to assistive technology, but there is no visible cursor. When Amal presses the tab key, the focus moves to the first link on the web page, and a cursor in the form of a dotted rectangle appears around that link.
- Amal launches a web browser and navigates to a web page that has an enabled edit field. The browser places the keyboard focus on the edit field so Amal can immediately start entering text, and its location is shown using a text cursor (usually a vertical line or i-beam). As he types, the text cursor moves to show where the next character will appear. If Amal activates another window, the browser may hide the cursor in the now inactive window, but its location is still available to assistive technology.
- Raymond has low vision. As the keyboard focus moves from one control to another, or one window to another, his screen enlarger utility detects the focus change and pans its viewport to keep the focus location visible.
- Related Resources for Success Criterion 2.1.2:
2.1.3 No Keyboard Trap (former 2.1.5):
If keyboard focus can be moved to a component using a keyboard interface (including nested user agents), then focus can be moved away from that component using only a keyboard interface. If this requires more than unmodified arrow or tab keys (or other standard exit methods), users are advised of the method for moving focus away. (Level A)
- Intent of Success Criterion 2.1.3:
If users can put focus on an element, they can remove focus and move on to the next element. This is often not possible with embedded objects. The user agent needs to provide a way to always return to the previous or next element in the content, or a known location such as the address bar. The user agent also needs to take control back from the embedded object, no matter what it is.
- Examples of Success Criterion 2.1.3:
- Ari has repetitive strain injuries that are exacerbated when he uses the mouse. He is using a video hosting site where each page hosts a nested media player. He presses Tab until the focus is on the media player, then presses Enter to activate and put the keyboard focus on it. When he's finished watching the video, he presses Tab to navigate to the comments below the video, but cannot get the focus to leave the video player. He presses Alt+Left to return to the previous page, but that also fails because the video player is consuming those keystrokes. Luckily, Ari knows that Shift+Esc will return focus from a nested user agent, with or without its cooperation. Thus, even a badly behaved nested user agent cannot prevent Ari from getting on with his work.
- Ari moves the focus to a toolbar extension that does not relinquish control back to the user agent. He presses Alt-D to move the focus to the address bar.
- Mary is a blind user who does not use the mouse. She moves the focus to an embedded scripted application that was poorly programmed. She presses a documented key combination – Alt+N – to override the scripting and move the focus to the next element in the content.
- Katan cannot use the mouse and has trouble with short-term memory. He is using a virtual machine. The escape sequence is Shift+Ctrl+Escape. Every time Katan opens the program, it briefly shows the escape sequence near the top right corner of the screen. The virtual machine also has an Exit option in its menu system. The Escape keystrokes are also indicated on the menu.
- Related Resources for Success Criterion 2.1.3:
- Compound documents [@@ which W3C document does this refer to? There are many@@]
2.1.4 Separate Selection from Activation (former 2.1.4):
The user can specify that focus and selection can be moved without causing further changes in focus, selection, or the state of controls, by either the user agent or author content. (Level A)
- Intent of Success Criterion 2.1.4:
People do not expect side effects when moving the keyboard focus regardless of whether the side effect is caused by the user agent or author content. If users fail to notice side effects, they could end up doing something disastrous. This is especially likely for users of assistive technology who cannot see changes happening elsewhere on the screen. Users may also find it confusing or disorienting if the effect causes unexpected focus movement or changes in context. If the user agent does implement side effects to keyboard navigation, it is recommended that it provide a user preference setting to disable them. However, in some cases it may be more appropriate to provide a separate navigation mechanism that avoids side effects, such as allowing the user to hold down the Ctrl key while navigating to avoid changing selection or choice.
Note: It may not be possible for the user agent to detect or prevent side effects implemented by scripts in the content, but the user agent is required to prevent side effects that are under its control.
- Examples of Success Criterion 2.1.4:
- Murray uses a screen magnifier that allows him to see the element with the focus and a small area around it. He explores a dialog box by repeatedly pressing the Tab key to move to, and read, each control in succession. He uses the arrow keys to navigate through a dropdown menu. When he moves the focus to the first option, the browser immediately navigates to that page, so he can never reach the third selection he wants. Fortunately, the platform also has a convention that holding down the Ctrl key while navigating moves the focus without changing the selection or option choice. Murray uses this while exploring. His web browser implements its own form controls and navigation mechanisms rather than using the platform's infrastructure, but also implements this Ctrl-key mechanism for users like Murray.
- Related Resources for Success Criterion 2.1.4:
2.1.5 Follow Text Keyboard Conventions (former 2.1.7):
The user agent follows keyboard conventions for the operating environment. (Level A)
- Intent of Success Criterion 2.1.5:
Keyboard users rely on the user agent to provide keyboard support that is full-featured and consistent among applications. Following platform conventions for keyboard access helps ensure that the functions that people rely on are not accidentally omitted. In addition, making these inputs consistent within and across programs greatly reduces learning curve, cognitive load, and errors. User agents are encouraged to add keyboard commands when the commands provide additional features or benefit for users. User agents should avoid omitting the standard commands, or assigning them to different keys.
- Examples of Success Criterion 2.1.5:
- Jack is blind and uses a screenreader. He edits blog posts in his browser's text area control, and can use the same keys for navigation, editing, and formatting that he's used to using in other applications on the platform (e.g. arrow keys, commands to move to the beginning and end of the line, cut, copy, paste, ctrl-b for bolding).
- Jack's user agent uses a custom tree control showing a hierarchical outline view of the document headings. He can navigate within that control using the same keys that work in the native tree controls used in other applications (e.g. up, down, left and right arrow keys, and navigating directly to entries by typing the beginning of their text).
- Jack puts his browser into a caret browsing mode so he can move the text cursor through the text on the page. The browser supports the same keys for navigation and editing as are used in other applications.
- Related Resources for Success Criterion 2.1.5:
2.1.6 Efficient Keyboard Access:
The user agent user interface includes mechanisms to make keyboard access more efficient than sequential keyboard access. (Level A)
- Intent of Success Criterion 2.1.6:
Efficient keyboard navigation is especially important for people who cannot easily use a mouse, are quickly fatigued, or find it difficult to memorize the menu structure for sequential navigation. This is important in all types of user agent environments.
- In a browser: A browser provides keyboard shortcuts for its menu functions as well as access keys in the design of its menus and dialog boxes. The choice of shortcut keys follows platform conventions where applicable (e.g. for open document, save document, cut, copy, paste).
- In a mobile environment: A social networking application on a mobile device has only a few keyboard shortcuts available on its targeted devices. These few keyboard shortcuts are used for the most commonly accessed functions of the application (e.g. home, list of friends).
- In a media player: An embedded media player provides shortcut keys for commonly used functions (e.g. pause and play).
- Examples of Success Criterion 2.1.6:
- Jack is blind and uses a screenreader. He opens his browser to write his email. He uses a shortcut to start a new message. He uses the same CTRL-N command he uses in his desktop email application.
- Jean has a medical condition that makes it difficult to visualize complex visual structures like dropdown menus. She is writing a new blog entry and discovers that her blogging application has updated and all the menus have changed. She cannot find the menu option to format the text, but is pleased to find that CTRL-B still works to bold the text she wants.
- Related Resources for Success Criterion 2.1.6:
2.1.9 [deleted] Allow Override of User Interface Keyboard Commands:
Implementing Guideline 2.2 - Provide sequential navigation [new, includes former 2.1.8 and 1.9.8, and a new SC]
[Return to Guideline]
Summary: Users can use the keyboard to navigate sequentially (2.2.3) to all the operable elements (2.2.1) in the viewport as well as between viewports (2.2.2). Users can optionally disable wrapping or request a signal when wrapping occurs (2.2.4).
2.2.1 Sequential Navigation Between Elements [replaces 1.9.8 Bi-Directional and 2.1.8 Keyboard Navigation]:
The user can move the keyboard focus backwards and forwards through all recognized enabled elements in the current viewport. (Level A)
- Intent of Success Criterion 2.2.1:
Sequential keyboard navigation is a fundamental, universal method of keyboard access. While it can be slower and require more input than other methods (such as direct, structural, or search-based navigation) it is a simpler mechanism that requires very little cognitive load or memorization, and is consistent across contexts. Users need keyboard access to all viewports and all enabled elements so that they can manipulate them, view them with screen magnifiers, or have them described by screen readers. The ability to move both forward and backward through the navigation order greatly reduces the number of keystrokes and allows the user to more easily recover from mistakes in overshooting a destination.
- Examples of Success Criterion 2.2.1:
- Sooj cannot use a pointing device, so she moves the keyboard focus to the next enabled element by pressing the Tab key, and to the previous enabled element by pressing Shift+Tab. Within list boxes and radio button groups she uses the up and down arrow keys to move to the next and previous items.
- Related Resources for Success Criterion 2.2.1:
2.2.2 Sequential Navigation Between Viewports [new]:
The user can move the keyboard focus backwards and forwards between viewports, without having to sequentially navigate all the elements in a viewport. (Level A)
- Intent of Success Criterion 2.2.2:
It is important for the user to be able to jump directly to the next or previous viewport without having to visit every element in a viewport on the way, because this can add an exorbitant number of navigation commands to operations that should be easy and efficient. Users need keyboard access to all viewports and enabled elements so that they can manipulate them, view them with screen magnifiers, or have them described by screen readers. The ability to move both forward and backward through the navigation order greatly reduces the number of keystrokes and allows the user to more easily recover from mistakes in overshooting a destination. This navigation can be among applications, windows, or viewports within an application, including the user agent's user interface, extensions to the user interface (e.g. add-ons), and content.
- Examples of Success Criterion 2.2.2:
- Sooj has a repetitive stress injury and cannot use a mouse, so she moves the keyboard focus to the next pane by pressing F6 or to the previous pane by pressing Shift+F6. She moves between tabbed document views by pressing Ctrl+Tab and Shift+Ctrl+Tab.
- Sooj is working in her web browser, where one document window (viewport) is active (has the active keyboard focus). When she switches to her word processor, the web browser's window and its keyboard focus become inactive, and it hides its cursor. When Sooj switches back to the browser window, it reactivates that viewport, its keyboard focus becomes active again, and its cursor reappears in the same location as when she switched to a different application.
- A developer creates an extension to a user agent that allows the user to add notes about each web page being visited. Sooj can press a shortcut key to move focus to the user interface of this extension and interact with the functionality offered by the extension. Similarly, Sooj presses another key to move focus back to the main viewport for the user agent in the same location as when she moved to the plugin.
- Related Resources for Success Criterion 2.2.2:
2.2.3 Default Navigation Order (former 1.9.9):
If the author has not specified a navigation order, the default sequential navigation order is the document order. (Level A)
- Intent of Success Criterion 2.2.3:
If the content author has not explicitly defined a consistent tab order, the browser provides one. Users need to have a mental map of where the focus will land when they press the Tab key or use other sequential navigation commands. If the focus jumps in seemingly random fashion, skipping up or down, it becomes impossible to use this method efficiently because users must stop, find the focus, reorient, and determine what direction to proceed every time they press navigation keys. This is a particular problem for users with some cognitive limitations or whose disability makes input difficult, tiring, or painful. Content authors are expected to define a logical navigation order in their documents, but if they have not, this success criterion ensures that the order will at least be consistent between user agents.
- Examples of Success Criterion 2.2.3:
- Alec is filling out an HTML form. Because the form's author has not specified a navigation order using the tabindex attribute, when Alec presses the Tab key the focus moves to the next control in the order defined in the underlying HTML. This order is logical as long as the author is not using styles to change the visual order. Even if the author uses styles to change the visual order, Alec will have the same experience completing this form on his mobile phone.
- Related Resources for Success Criterion 2.2.3:
2.2.4 Options for Wrapping in Navigation (new):
The user can prevent sequential navigation from wrapping the focus at the beginning or end of a document, and can request notification when such wrapping occurs. (Level AA)
- Intent of Success Criterion 2.2.4:
Users need a good mental map of the navigation sequence and behavior, and particularly need to know when they have started over again so they can maintain that mental map and not waste time and energy inadvertently revisiting information. This is a greater problem for users who have limited short-term memory, perceive a narrow field of vision, or use a screen magnifier, screen reader or small screen device. This also prevents people with mobility issues from having to use extra navigation commands.
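The behavior this success criterion asks for can be sketched as a focus-advance routine in which wrapping can be disabled and a callback announces when wrapping occurs. The function and option names below are illustrative, not a real user agent API.

```javascript
// Sketch of a focus-advance routine for 2.2.4: wrapping can be turned off,
// and a callback can announce when wrapping happens.
function nextFocus(index, count, { wrap = true, onWrap = null } = {}) {
  if (index + 1 < count) return index + 1;
  if (!wrap) return index;          // wrapping disabled: stay on the last element
  if (onWrap) onWrap();             // e.g. play a sound or display a message
  return 0;                         // wrap to the first element
}

let announced = false;
// At the last of 5 elements, with wrapping on and notification requested:
const wrapped = nextFocus(4, 5, { wrap: true, onWrap: () => { announced = true; } });
// With wrapping disabled, focus stays put:
const stayed = nextFocus(4, 5, { wrap: false });
console.log(wrapped, stayed, announced); // 0 4 true
```

Keeping both the wrap option and the notification under user control matches the last sentence of Betsy's example below: some keyboard users want the signal, others find it an interruption.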
- Examples of Success Criterion 2.2.4:
- Betsy has low vision. She is using a screen magnifier that only shows her a single line of text. She's navigating through a long list of unsorted items in a list box, searching for an entry that she does not realize is not in the list. Each time she presses the down arrow she is presented with the next item in the list. The list box wraps: after she reads the final entry and presses the down arrow, she is once again presented with the first entry in the list. Unfortunately, it takes her a long time to realize that she's scrolling through the same set of items again and again. To avoid this, she can turn on an option to prevent wrapping, or have the user agent play a sound or display a message to indicate that it is wrapping back to the first item. This behavior should be under the user's control, because keyboard users who can see the entire screen may not want to be interrupted by a pop-up dialog box.
- Related Resources for Success Criterion 2.2.4:
Implementing Guideline 2.3 - Provide direct navigation and activation [includes former 2.1.6, 2.1.7, 2.1.11]
[Return to Guideline]
Summary: Users can navigate directly (e.g. via keyboard shortcuts) to important elements (2.3.1), with the option of immediately activating operable elements (2.3.3). Direct commands are displayed with their associated elements to make them easier for users to discover (2.3.2 & 2.3.4). The user can remap and save direct commands (2.3.5).
2.3.1 Direct Navigation to Important Elements (former 2.7.4):
The user can navigate directly to any important (e.g. structural or operable) element in rendered content. (Level A)
- Intent of Success Criterion 2.3.1:
It is often difficult for some people to use a mouse to move the viewport and focus to important elements. In this case some other form of direct navigation - such as numbers or key combinations assigned to important elements - should be available. Direct navigation can be accessed via keyboard or speech input.
- Examples of Success Criterion 2.3.1:
- Mary cannot use the mouse or keyboard due to a repetitive strain injury. She uses speech input with a mouseless browsing plug-in for her browser. The plug-in overlays each link with a number that can then be used to directly select it (e.g. by speaking the command "link 12"). This prevents Mary from having to say "tab" numerous times to select a link.
- Related Resources for Success Criterion 2.3.1:
2.3.2 Present Direct Commands in Rendered Content (former 2.1.6):
The user can have any recognized direct commands in rendered content (e.g. accesskey, landmark) be presented with their associated elements. (Level A)
- Intent of Success Criterion 2.3.2:
For many users, including those who use the keyboard or an input method such as speech, the keyboard is often a primary method of user agent control. It is important that direct keyboard commands assigned to user agent functionality be discoverable, including in rendered content. If direct commands are not presented in content, many users will not discover them.
- Examples of Success Criterion 2.3.2:
- Fiona uses an audio browser. When the system reads form controls in the rendered content, it reads the label of the form followed by the accesskey (e.g. "name alt plus n").
- Mary cannot use the mouse or keyboard. She uses speech input. She is composing an email and wants to attach a file. She sees ctrl-shift-A next to the link to attach a file. She quickly uses the command to add the attachment.
- Related Resources for Success Criterion 2.3.2:
2.3.3 Direct Activation (former 2.7.6):
The user can move directly to and activate any operable elements in rendered content. (Level AA)
- Intent of Success Criterion 2.3.3:
It is often difficult for some people to use a pointing device (the standard method of direct navigation) to move the viewport and focus to important elements. In this case some other form of direct navigation - such as numbers or key combinations assigned to important elements - should be available which can then be accessed via the keyboard or speech control technology.
- Examples of Success Criterion 2.3.3:
- Mary cannot use the mouse or keyboard due to a repetitive strain injury; instead she uses voice control technology with a mouse-less browsing plug-in for her browser. The plug-in overlays each hyperlink with a number that can then be used to directly select it (e.g. by speaking the command "select link twelve"). This prevents Mary from having to say the word 'tab' numerous times to get to her desired hyperlink.
- Related Resources for Success Criterion 2.3.3:
2.3.4 Present Direct Commands in User Interface (former 2.1.7):
The user can have any direct commands (e.g. keyboard shortcuts) in the user agent user interface be presented with their associated user interface controls (e.g. "Ctrl+S" displayed on the "Save" menu item and toolbar button). (Level AA)
- Intent of Success Criterion 2.3.4:
For many users, including those who use the keyboard or an input method such as speech, the keyboard is often the primary method of user agent control. It is important that direct keyboard commands assigned to user agent functionality be discoverable as the user is exploring the user agent. If direct commands are not presented in the user interface, most users won't know about them, and so the commands effectively won't exist.
- Examples of Success Criterion 2.3.4:
- Vlad is a keyboard-only user who uses a browser on the Mac OS operating system. When he needs to perform a new operation with the browser user interface, he searches for it in the menus and notes whether the menu item has a " ⌘ " label (e.g. "Copy ⌘-C"), which indicates the direct activation command he can use in the future to avoid having to traverse the menus.
- Amir uses ability switches to control an onscreen keyboard for the Windows operating system. When he presses the "alt" key the available browser user interface accesskeys are shown as overlays on the appropriate user interface controls (e.g. "File with 'F' in an overlay").
- Related Resources for Success Criterion 2.3.4:
2.3.5 Customize Keyboard Commands:
The user can override any keyboard shortcut including recognized author supplied shortcuts (e.g. accesskeys) and user agent user interface controls, except for conventional bindings for the operating environment (e.g. arrow keys for navigating within menus). The rebinding options must include single-key and key-plus-modifier keys if available in the operating environment. The user must be able to save these settings beyond the current session. (Level AA)
- Intent of Success Criterion 2.3.5:
People using a keyboard interface need the ability to remap the user agent's keyboard shortcuts in order to avoid keystroke conflicts with assistive technology, reduce the number of keystrokes, use familiar keystroke combinations, and optimize keyboard layout (e.g. for one-handed use). This is important for people with dexterity issues, for whom every keystroke can be time consuming, tiring or painful. It is also important for people using assistive technologies such as screen readers, where many keystrokes are already in use by the assistive technology. The goal of this SC is to enable users to be in control of what happens when a given key is pressed, to use the keyboard commands that meet their needs, and to save those modifications.
Content authors may use the accesskey attribute to define shortcut keys that allow quick access to specific elements, actions, or parts of their web content. The author-selected shortcuts may use keystrokes that are unique to their site, differing from conventions used by, and familiar to, users of other similar sites or sites offering similar functionality. Users of assistive technologies who rely upon keyboard input may wish to have a consistent mapping of shortcut keys to similar or common actions or functions across the sites they visit.
User agents should allow users to define a preferred key combination for specific instances of author-defined accesskeys. The user should have the option to make any defined override persistent across browsing sessions.
User agents may also offer the user the option to automatically apply preferred key combinations for content which has author supplied accesskey bindings, based upon the associated text, label, or ARIA role, and which override any author specified keybinding for that page content.
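The override mechanism described above can be sketched as a user table consulted before author bindings. The keys and action names below are illustrative (they mirror the search-field scenario in the examples that follow), not a real browser API.

```javascript
// Sketch of a persistent shortcut-override table for 2.3.5. The author
// assigned "G" to the search field; the user remaps the action to "S" and
// disables the author's "G" binding.
const authorBindings = { G: "focus-search" };
const userOverrides = { S: "focus-search", G: null }; // null disables "G"

function resolve(key) {
  if (key in userOverrides) return userOverrides[key];  // user wins
  return authorBindings[key] ?? null;                   // fall back to author
}

// Overrides could be serialized (e.g. JSON.stringify(userOverrides)) and
// stored alongside other preferences so they persist beyond the session.
console.log(resolve("S"), resolve("G")); // "focus-search" null
```

Consulting the user table first is what makes the override total: the author's binding is reachable only when the user has expressed no preference for that key.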
- Examples of Success Criterion 2.3.5:
- A speech recognition user has defined standard commands to access commonly used parts of a website. For example, speaking the command "site search" will take the user to a website's search function. A site author may assign an access key to set focus to the search input field, basing the accesskey on the first letter of the search engine used (e.g. G for Google or B for Bing, rather than the mnemonic S for search). The speech user has specified an override key mapping of S, which is consistent with the keystroke issued by the speech recognizer they are using.
- A mobile device user, whose primary keyboard interface is their phone's numeric keypad, maps common website actions to numeric shortcut keys. For example, the user prefers the 1 key to activate a site's "skip to content" function. An author of a site visited daily by this user defines "S" as the accesskey for the skip-to-content function. The user overrides the author-defined accesskey of "S" with "1".
- Laura types with one hand and finds keys on the left side of the keyboard easier to press. She browses to a web page and notices that the author has assigned access keys using keys from the right side of the keyboard. She opens a dialog in the user agent and reassigns the access keys from the web page to the left side of the keyboard home row. She also uses the user agent's preferences/options to redefine its keyboard shortcuts in the same way.
- Elaine's screen magnification program uses alt+m to increase the size of the magnified area of the screen. She notices that in her web browser, alt+m is a hotkey for activating a home button that stops her from being able to control her magnification software. She opens a hotkey reassignment feature in the user agent, and sets alt+o to be the new hotkey for the home button. Her screen magnification software now works correctly.
- George uses a screen reader. The screen reader uses Ctrl+f to read the item with focus. Since this is a common user agent command for "find", the user agent allows him to reassign the find command to a non-conflicting key binding. The user agent provides a list of user interface features and default keyboard assignments, with options for the user to assign new key combinations. The user agent saves keyboard customizations in the same way as other user preferences.
- Related Resources for Success Criterion 2.3.5:
Summary: Users can search rendered content (2.4.1) forward or backward (2.4.2) and can have the matched content highlighted in the viewport (2.4.3). The user is notified if there is no match (2.4.4). Users can also search by case and for text within alternative content (2.4.5).
2.4.1 Text Search:
The user can perform a search within rendered content (e.g. not hidden with a style), including rendered text alternatives and rendered generated content, for any sequence of printing characters from the document character set. (Level A)
- Intent of Success Criterion 2.4.1:
The find or text search function in a user agent allows the user to easily locate desired information in rendered content. People who read or navigate slowly or with difficulty due to a disability rely more heavily on the ability to search for text, rather than scanning or reading an entire document to find it. The ability to search alternative content allows screen reader users to find content they have heard through their speech output. Users with hearing impairments use text search as an efficient method of jumping to specific points in a video. Users who find it difficult to use the mouse or keyboard, and must limit their physical operations, save movements by using search.
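Searching "rendered content" means the search corpus includes rendered text alternatives and generated content, but excludes content hidden with a style. One way to model this is to flatten the content tree into a searchable string, folding each node's text alternative in next to its rendered text and skipping hidden subtrees. The node shape below is a deliberately simplified stand-in for a DOM, not a real browser data structure:

```python
# Each node: (text, alt, hidden, children) — a simplified stand-in for a DOM.
def searchable_text(node):
    text, alt, hidden, children = node
    if hidden:  # e.g. display:none — not rendered, so not searched
        return ""
    parts = [text or "", alt or ""]  # rendered text plus its text alternative
    parts += [searchable_text(c) for c in children]
    return " ".join(p for p in parts if p)

def find(node, term):
    """Case-insensitive search over everything the user would perceive."""
    return term.lower() in searchable_text(node).lower()
```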
- Examples of Success Criterion 2.4.1:
- Marvin has a dexterity impairment. He needs to move efficiently to specific text in the document. The user agent provides a local search function that is available using speech commands. When Marvin says "find box", a text box with a search button appears. He speaks the word he is looking for, then says "enter", which executes the search.
- Betty, who has low vision, is attempting to create a user stylesheet for a site. She needs to know the 'class' attribute value for navigation headers. Betty gets the source view of the current page and searches for the specific phrase used in a navigation list to find the class associated with the navigation element.
- Joe, a user with a distraction disorder, is taking an online exam. He is working on the 6th question when he realizes he wrote something wrong in the essay on question 2. He uses the search function in the browser to find the text in error inside the textarea of question 2.
- Sam is a screen reader user. He has images off, so the alternative content for images is revealed. He wants to send the flow chart image on the page to a colleague. Sam searches for the word "flowchart" that he heard spoken as part of the 'alt' text for the image. He then uses the context menu to select the address of the image and sends it to his colleague.
- Agnes is deaf. She is watching a video with captions turned on. Agnes uses the search function to search through the captions and jump to the point in the video where the search term is located in the timeline.
- Greta has a reading impairment. She is trying to efficiently locate some information in a large, detailed graphic. She is using a browser with native support for SVG. When Greta searches for a term, text within the SVG is searched along with the HTML content.
- Related Resources for Success Criterion 2.4.1:
2.4.2 Find Direction:
The user can search forward or backward in rendered content. (Level A)
- Intent of Success Criterion 2.4.2:
People who read slowly or with difficulty due to a disability rely more heavily on the ability to search for the text they're looking for, rather than scanning or reading an entire document to find it. A local find function in a user agent allows the user to easily locate desired information in rendered content. The ability to search alternative text content allows screen reader users to find content they have heard through their speech output. Users with hearing impairments use text search as an efficient method of jumping to specific points in a video.
- Examples of Success Criterion 2.4.2:
- A user has been reading through a web page and wants to quickly locate a phrase previously read. When opening the browser's page search feature, the user has options to search forward and backward from the current location. If the search reaches an endpoint in the document, the user is notified that the search has wrapped around, such as with an alert box or other indication.
- Related Resources for Success Criterion 2.4.2:
2.4.3 Match Found:
When a search operation produces a match, the matched content is highlighted, the viewport is scrolled if necessary so that the matched content is within its visible area, and the user can search from the location of the match. (Level A)
- Intent of Success Criterion 2.4.3:
It is important for the user to easily recognize that a search term has been found and that the term is revealed to the user in context. The user agent moves the viewport to include the found term, and the term is highlighted in some fashion. The point of regard is the found element in the viewport. Any subsequent searches on the same term, or other navigation tasks (e.g. tabbing to the next anchor), begin from this point.
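The "scroll if necessary" behavior reduces to a small viewport calculation: if the match is already visible, do nothing; otherwise move the viewport just far enough to bring it into view. A minimal sketch in one dimension (the line-based coordinates are an illustrative simplification):

```python
def scroll_to_match(match_pos, viewport_top, viewport_height):
    """Return the new viewport top so that match_pos is visible.

    Moves the viewport as little as possible, so the user keeps
    their sense of place in the document.
    """
    if viewport_top <= match_pos < viewport_top + viewport_height:
        return viewport_top                  # already visible: no scroll
    if match_pos < viewport_top:
        return match_pos                     # scroll up: match at the top
    return match_pos - viewport_height + 1   # scroll down: match at the bottom
```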
- Examples of Success Criterion 2.4.3:
- Jules has low vision and uses a magnified screen. She frequently searches for terms that appear multiple times in a document that contains a lot of repetition. It is important that the viewport moves and, if necessary, her screen scrolls after each search so she can easily track where she is in the document.
- Related Resources for Success Criterion 2.4.3:
2.4.4 Alert on Wrap or No Match:
The user can be notified when there is no match to a search operation. The user can be notified when the search continues from the beginning or end of content. (Level A)
- Intent of Success Criterion 2.4.4:
It is important for users to get clear, timely feedback so they don't waste time waiting or, worse, issue a command based on a wrong assumption. It is important during a search that users are informed when there is no match, or when the search has wrapped past the beginning or end of the document.
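This feedback, together with the directional search of 2.4.2, can be sketched as a search routine that reports not just a position but a status the user agent can announce: "found", "wrapped", or "no-match". The function below is an illustrative model over a flat text string, not a description of any browser's implementation:

```python
def search(text, term, start, forward=True):
    """Return (index, status): status is 'found', 'wrapped', or 'no-match'."""
    text_l, term_l = text.lower(), term.lower()
    if forward:
        i = text_l.find(term_l, start)
        if i != -1:
            return i, "found"
        i = text_l.find(term_l)              # wrap to the beginning
    else:
        i = text_l.rfind(term_l, 0, start)
        if i != -1:
            return i, "found"
        i = text_l.rfind(term_l)             # wrap to the end
    return (i, "wrapped") if i != -1 else (-1, "no-match")
```

A user agent would surface the "wrapped" and "no-match" statuses as an alert or other notification, so a screen reader user like Dennis never has to re-run a search just to confirm the whole document was covered.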
- Examples of Success Criterion 2.4.4:
- Dennis uses a screen reader. As soon as he gets a message that there is no match he goes on to search for something else. If he does not get a message he wastes time retrying the search to make sure the entire document has been searched.
- Related Resources for Success Criterion 2.4.4:
2.4.5 Search alternative content:
The user can perform text searches within textual alternative content (e.g. text alternatives for non-text content, captions) even when the textual alternative content is not rendered onscreen. (Level AA)
- Intent of Success Criterion 2.4.5: Authors frequently provide alternative content to meet web content accessibility guidelines, which users with disabilities will experience as part of the content. The purpose of this success criterion is to ensure that text search allows users to locate this content, even if it is not visibly rendered.
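The key difference from ordinary text search is that a match in unrendered alternative content must be reported against the rendered element that carries it, so the user agent can highlight that element (as in Ronda's example below). A minimal sketch, using an illustrative (element_id, rendered_text, alt_text) triple rather than a real DOM:

```python
# Each element: (element_id, rendered_text, alt_text). A match found only
# in the unrendered alternative text is still attributed to its element,
# so the user agent can highlight the element itself.
def find_element(elements, term):
    term = term.lower()
    for el_id, rendered, alt in elements:
        if term in (rendered or "").lower() or term in (alt or "").lower():
            return el_id
    return None
```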
- Examples of Success Criterion 2.4.5:
- Ronda typically browses the web with images turned off, so she sees the alternative text for images rendered in place of the images. She visits a favorite website on a friend's computer and wants to find a link to her favorite artist's photo gallery. On her own computer this link is rendered as the artist's name, so she searches for that name on her friend's computer. The alt text is not displayed in this case, but the artist's image is highlighted because the 'text search' command has associated the alt text with the image.
- Dennis remembers a phrase in a captioned training movie that he wants to replay. He goes to the web page with the embedded movie. He types the phrase into the "Find" box and the user agent moves to the point in the movie where the phrase is found.
- Related Resources for Success Criterion 2.4.5:
Summary: Users can view (2.5.1), navigate (2.5.2) and configure (2.5.3) content hierarchy.
2.5.1 Location in Hierarchy [was 2.5.3]:
When the user agent is presenting hierarchical information, but the hierarchy is not reflected in a standardized fashion in the DOM or platform accessibility services, the user can view the path of nodes leading from the root of the hierarchy to a specified element. (Level AA)
- Intent of Success Criterion 2.5.1:
Many users rely on assistive technology to interpret information displayed on the screen and convey it to them in different ways. Sometimes a user agent displays pieces of information whose relationships it understands but cannot easily convey through either the DOM or platform accessibility services, leaving the assistive technology unable to infer and interpret them. In these cases the user agent should provide the ability to present those relationships in a simplified way that can be understood by the assistive technology, and by users who have difficulty interpreting complex visual presentations.
This is also required when the hierarchy is included in the DOM but in a non-standardized fashion, such as using proprietary attributes or distributed extensibility microdata, because such proprietary information would not be useful to assistive technology.
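Computing the "path of nodes leading from the root" is a straightforward tree walk: descend from the root, recording each label, until the specified element is reached. A minimal sketch, using an illustrative (label, children) node shape modeled on Armand's genre/artist/album/track example:

```python
def path_to(node, target, path=()):
    """Depth-first search returning the chain of labels from root to target."""
    label, children = node
    path = path + (label,)
    if label == target:
        return path
    for child in children:
        found = path_to(child, target, path)
        if found:
            return found
    return None
```

The user agent could render the returned path in a static text field (e.g. "Jazz > Artist A > Album 1") that updates as focus moves, which an assistive technology can then read on request.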
- Examples of Success Criterion 2.5.1:
- Armand is blind. His media player uses an XML data structure that describes each audio piece in terms of genre, artist, album, and track. It displays columns for each of those categories, where he can select one entry in order to filter the contents displayed in the subsequent columns (e.g. he selects one entry in the Artist column, causing the Albums and Tracks columns to only show items associated with the selected artist). The user agent understands the hierarchical nature of these columns, but since that cannot be conveyed in the HTML used for presentation, they are not in the DOM for access by his screen reader. Therefore, when Armand moves the focus to an item in the Tracks column, the user agent updates a static text field that shows the path of genre, artist, and album associated with the track, and Armand can have this field read to him, either on request or automatically when it updates.
- Related Resources for Success Criterion 2.5.1:
2.5.2 Navigate by structural element [was 2.5.5]:
The user agent provides at least the following types of structural navigation, where the structure types exist: (Level AA)
- by heading
- within tables
- Intent of Success Criterion 2.5.2:
Users who find it difficult or impossible to use the mouse require an efficient way to jump among elements without having to navigate through intervening content. Navigating by heading is especially important when scanning a webpage to find a pertinent section. Navigating by table element is especially important when building or reading tables.
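Navigating by structural element amounts to scanning forward through the document's element sequence for the next element of a requested type, skipping everything in between. A minimal sketch over an illustrative flat list of (type, text) pairs:

```python
def next_of_type(elements, current_index, element_type):
    """Jump to the next element of the given type after the current position."""
    for i in range(current_index + 1, len(elements)):
        if elements[i][0] == element_type:
            return i
    return None  # no further element of that type; the UA should report this
```

A "previous heading" command would be the mirror image, scanning backward from the current position.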
- Examples of Success Criterion 2.5.2:
- Jamie is blind. When he reads the New York Times he scans the headlines to find an interesting story. In order to do this he needs a way to go from headline to headline.
- Billie is a paraplegic who uses speech. She writes long legal documents. When she is proofing a document she needs to scan each section header. It takes far fewer speech commands to navigate the section headers when she can jump directly to them. Billie also makes extensive use of tables. It takes far fewer speech commands to navigate tables when she can jump directly among elements. Direct navigation to headlines and table elements allows her to do her job without overusing her vocal cords and within required time constraints.
- Celia has short-term memory issues and is easily distracted. When looking for any particular section of the document she finds it easier to scan through headings by jumping from heading to heading rather than having to scan through an entire page of potentially distracting text. Celia also finds it useful to be able to move from a subheading to a major heading and back to orient herself within the context without becoming confused.
- Related Resources for Success Criterion 2.5.2:
2.5.3 Configure Elements for Structural Navigation [was 2.5.7]:
The user can configure the sets of important elements (including element types) for structured navigation and hierarchical/outline view. (Level AAA)
- Intent of Success Criterion 2.5.3:
Sometimes authors will visually convey relationships between elements by spatially grouping them, by giving them the same coloration or background, and so forth. Users may not be able to perceive those attributes, such as when using a screen reader, or when strong magnification makes it difficult to build a mental model of the screen layout. In those cases the user agent can assist by providing a view of the data that groups elements whose presentation the user agent perceives as implying relationships.
Often the user agent will choose by default the elements it considers important for structured navigation, however these may not be relevant in all circumstances. It may be that the user wishes to navigate via informal mechanisms such as microformats or via a particular styling which is used to convey a structure in the visual navigation, but which does not exist in the element hierarchy.
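The configuration described here can be modeled as a user-editable set of "important" element identifiers: the default set covers standard structural elements, and the user can add informal markers such as a class name that conveys structure only visually. A minimal sketch; the class and default set are illustrative assumptions:

```python
class StructureNavigator:
    def __init__(self, important=("h1", "h2", "h3", "table")):
        self.important = set(important)  # default set, user-configurable

    def configure(self, important):
        """Replace the set of elements used as navigation stops."""
        self.important = set(important)

    def stops(self, elements):
        # elements: list of (tag, css_class). An element is a stop if its
        # tag is important, or if the user has added its class name — e.g.
        # a "menu" class that conveys structure only in the visual layout.
        return [i for i, (tag, cls) in enumerate(elements)
                if tag in self.important or cls in self.important]
```

In Fred's example, adding the menu's class name to the set makes each menu item a navigation stop even though the underlying markup is just nested lists.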
- Examples of Success Criterion 2.5.3:
- Fred is blind and wishes to navigate through the menu structure using the Tab key; however, the menu is a set of nested list elements with a particular HTML class attribute denoting the menu-submenu relationship. Because Fred's user agent allows him to configure important elements, he can explicitly include the class name as an important element for navigation. He then assigns a keyboard shortcut to navigate to the next element with the same class name as the element that has focus.
- Jane uses a mobile device (and is often situationally impaired) and often encounters tables laid out using floating DIV elements with specific class names denoting the visual styling. In this case Jane cannot use the cursor keys to move around these tabular layouts having instead to use the tab key to move sequentially left-to-right top-to-bottom. Jane's browser allows her to configure important elements and so she can pick out the classes associated with the element, and therefore use the cursor key to move logically through columns or rows.
- Related Resources for Success Criterion 2.5.3:
Summary:Users can interact with web content by mouse, keyboard, voice input, gesture, or a combination of input methods. Users can discover what event handlers (e.g. onmouseover) are available at the element and activate an element's events individually (2.6.1).
2.6.1 Access to input methods:
The user can discover recognized input methods explicitly associated with an element, and activate those methods in a modality independent manner. (Level AA)
- Intent of Success Criterion 2.6.1:
Users interacting with a web browser may be doing so with one or more input technologies, including keyboard, mouse, speech, touch, and gesture. Sometimes a web page is scripted so that it assumes and requires an input method that is not available to the user, such as handling drag events for a user relying solely on the keyboard. In those cases, the user needs to be able to determine what methods are available and activate the element with a modality-independent method.
In addition, one input method should not hold back another. For instance, people who don't use a mouse shouldn't have to map their input methods onto the same steps a mouse user would take, such as when a user agent constrains event ordering by not allowing a mouseup before a mousedown.
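One way to support this is for the user agent to track the event handlers registered on each element, expose that list to the user (e.g. in a context-menu "Inputs" submenu), and let the user fire any handler directly, regardless of the input hardware that would normally produce the event. A minimal sketch; the class and method names are illustrative, not the DOM API:

```python
class Element:
    """Illustrative model of an element and its registered event handlers."""

    def __init__(self):
        self.handlers = {}  # event type -> callback

    def add_event_listener(self, event, callback):
        self.handlers[event] = callback

    def input_methods(self):
        # What the user can discover, e.g. via a context-menu "Inputs" submenu.
        return sorted(self.handlers)

    def activate(self, event):
        # Modality-independent activation: fire the handler directly,
        # however the event is normally produced (hover, drag, swipe, ...).
        self.handlers[event]()
```

In Jeremy's example, the "Inputs" submenu would list "hover" among the registered events, and choosing it would invoke the hover handler without any pointer movement.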
- Examples of Success Criterion 2.6.1:
- Jeremy cannot use a mouse. He needs to activate a flyout menu that appears when one hovers the mouse over a button. He navigates to the button and pulls up its context menu, which includes an "Inputs" submenu that lists the available input methods (e.g. events that were registered for this element: click, double-click, swipe left, and hover), and from that he chooses the "Hover" command.
- Ken is a speech input user. In order to get his work done in a reasonable amount of time and without overtaxing his voice he uses a single speech command phrase to move the mouse up, left and click.
- Karen cannot use a mouse. She presses a single key to activate both events for a link that has both an onmousedown and an onmouseup handler.
- Related Resources for Success Criterion 2.6.1:
Summary: Users can restore preference settings to default (2.7.2), and accessibility settings persist between sessions (2.7.1). Users can manage multiple sets of preference settings (2.7.3), and adjust preference settings outside the user interface so the current user interface does not prevent access (2.7.4). It's also recommended that groups of settings can be transported to compatible systems (2.7.5).
2.7.1 Persistent Accessibility Settings [was 2.7.2]:
User agent accessibility preference settings persist between sessions. (Level A)
- Intent of Success Criterion 2.7.1:
When a user has customized settings within the user agent to maximize accessibility, this success criterion ensures that the customization is saved between browsing sessions. Those settings are then automatically applied in subsequent browsing sessions.
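Persistence typically means serializing the user's customizations at shutdown and layering them over the built-in defaults at startup, so a missing or partial settings file still yields a working configuration. A minimal sketch; the file format, setting names, and defaults are illustrative assumptions:

```python
import json
import os

# Hypothetical default accessibility settings; the keys are illustrative.
DEFAULTS = {"zoom": 1.0, "font_size": 16, "high_contrast": False}

def save_settings(settings, path):
    """Persist the user's customized settings beyond the current session."""
    with open(path, "w") as f:
        json.dump(settings, f)

def load_settings(path):
    """On startup, apply any saved settings on top of the defaults."""
    settings = dict(DEFAULTS)  # unknown or missing keys fall back to defaults
    if os.path.exists(path):
        with open(path) as f:
            settings.update(json.load(f))
    return settings
```

Layering over defaults also keeps settings stable across upgrades: a new version that adds settings simply supplies defaults for the keys the saved file lacks, which matters to users like Brian who depend on their configuration surviving an upgrade.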
- Examples of Success Criterion 2.7.1:
- Lynn has moderately low vision, and sets the default zoom level, font size, and colors to make pages easier for her to read. Because those settings are persistent, she doesn't have to manually restore her settings every time she starts the browser.
- Brian has to adjust some settings in his browser to make it fully compatible with his speech input system. It's difficult for him to get it set up, since he can't fully operate the browser until it's done, so once it's configured this way he relies on it staying in that configuration even if he upgrades the browser, restarts his system and so forth.
- Related Resources for Success Criterion 2.7.1:
2.7.2 Restore all to default [was 2.7.3]:
The user can restore all preference settings to default values. (Level A)
- Intent of Success Criterion 2.7.2:
Users who customize settings may find that their chosen settings are not suitable and decide to restore them to their default values. Some users may find it difficult to recall every modified setting, while others may find it difficult to navigate to each modified setting, especially if a particular setting has impaired their ability to do so. This success criterion provides a means for a user to easily restore all preference settings to their default values with a single function or action.
- Examples of Success Criterion 2.7.2:
- Ron accidentally changes a browser setting that makes his browser incompatible with his screen reader, preventing him from changing it back. He instead restarts his browser using a command line option that starts it in the default configuration, which he's able to use, and from there he can change the setting that caused him problems.
- Related Resources for Success Criterion 2.7.2:
2.7.3 Multiple Sets of Preference Settings [was 2.7.4]:
The user can save and retrieve multiple sets of user agent preference settings. (Level AA)
- Intent of Success Criterion 2.7.3:
Some users may need to change their setting preferences under different circumstances such as varying levels of user fatigue or changes in environmental noise or lighting conditions. Providing an easy method for saving and switching between a set of preferences helps the user complete intended tasks in different situations.
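A preference store supporting this might keep the built-in defaults, the currently active settings, and any number of named profiles, so the user can snapshot a configuration and switch back to it in one action. A minimal sketch; the class and profile names are illustrative, and the restore_defaults method is the single-action reset described in 2.7.2:

```python
class PreferenceStore:
    def __init__(self, defaults):
        self.defaults = dict(defaults)
        self.current = dict(defaults)
        self.profiles = {}  # name -> saved snapshot of settings

    def save_profile(self, name):
        """Snapshot the current settings under a user-chosen name."""
        self.profiles[name] = dict(self.current)

    def load_profile(self, name):
        """Switch to a previously saved set of preferences in one action."""
        self.current = dict(self.profiles[name])

    def restore_defaults(self):
        """Single action resetting every preference to its default value."""
        self.current = dict(self.defaults)
```

Davy's day and night configurations, or Hiroki's touchscreen and desktop setups, would each be one named profile.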
- Examples of Success Criterion 2.7.3:
- When Hiroki is carrying his tablet computer he operates it with the built-in touchscreen, but when at his desk he links it to a Bluetooth keyboard and mouse, and redirects the display to a full-size computer monitor. The browser allows him to quickly switch between two completely different configurations for these different environments.
- Davy has moderately low vision and prefers to adjust the contrast of media differently during the day than at night. Because this requires a number of different steps, for different types of media and aspects of the browser's display, he has two user profiles, one for each environment.
- Aaron usually uses a keyboard and mouse, but when his repetitive strain injury is bothering him he prefers to use the mouse and avoid the keyboard as much as possible. At those times he uses his browser's user preference profiles to load a different configuration that's optimized for the mouse, including custom toolbars that make most of the commands he uses available as toolbar buttons.
- Related Resources for Success Criterion 2.7.3:
2.7.4 Change preference settings outside the user interface [was 2.7.6]:
The user can adjust any preference settings required to meet the User Agent Accessibility Guidelines (UAAG) 2.0 from outside the user agent user interface. (Level AA)
- Intent of Success Criterion 2.7.4:
Users with a disability may find that they cannot use a user agent with its current preference settings. This can occur when they are initially setting up the product and its default settings don't meet their needs, or after they, or another product or user, change an option from the setting they need. In these cases they may not be able to use the user agent's own user interface to adjust the settings, and so need a way to adjust or reset those options from outside the user agent. There are multiple ways this can be accomplished, including: detecting and implementing the platform accessibility settings, providing an external settings file to modify, providing access to settings from a separate utility program, providing accessibility options in the installation program, or providing command-line switches to change the user agent's behavior.
Note: User agents are encouraged to allow all user preferences to be adjusted.
- Examples of Success Criterion 2.7.4:
- Aosa is blind, and her web browser is incompatible with her screen reader unless she changes one of the browser's advanced settings. Because she cannot activate and use the appropriate dialog box until the setting is changed, she needs to start the browser using a dedicated command-line switch that changes the behavior.
- Bintu is deaf and enjoys watching captioned videos. Since different video players may not have accessible settings, she sets her browser to always display captions, knowing the video player will respect the browser's request to display captions. In this way the options in one user agent (the embedded video player) are set outside its user interface by being set in another program (the hosting browser).
- Sasha requires high contrast to be able to discriminate the shape of letters. She has set the accessibility preferences on her mobile phone to use the high contrast mode. When she launches her mobile browser, it detects that she is using high contrast and adjusts the font and color settings for its user interface to reflect those settings.
- Justin has an attention deficit disorder. He is setting up his new e-book reader and is interrupted while setting the default font colors, then finds he accidentally sets his background and font color to white on white. He cannot read the settings screen to recover his default settings, so he exits the reader and follows the instructions on the vendor's website to edit the "settings.ini" file to adjust the colors. He then restarts the reader with the corrected color settings.
- Related Resources for Success Criterion 2.7.4:
2.7.5 Portable Preference Settings [was 2.7.7]:
The user can transfer all compatible user agent preference settings between computers. (Level AAA)
- Intent of Success Criterion 2.7.5:
Configuring a user agent may be a complex and time-consuming task. Some users hire assistive technology professional trainers to do their system setup. Users who have spent time customizing accessibility preferences to meet their requirements need to easily migrate those preference settings to another compatible device. Schools and universities also need to maintain accessibility settings across multiple machines.
- Examples of Success Criterion 2.7.5:
- Lori is a rehabilitation specialist who works with clients to customize their electronics and assistive technology to their needs. She sets up a new system for Ray, and saves a backup of the accessibility settings for Ray and for herself. She keeps electronic backups of her clients' configurations that she can quickly email to them if they lose their settings.
- Ray is blind and uses a screenreader on his desktop and the voice application on his phone. He sets up the web-based email application accessibility preferences on a mobile device. These preferences are automatically reflected in the desktop version of that web-based email application.
- Trisha is a 4th grader with low vision. When she goes to different classrooms in her school, she carries a USB stick with her accessibility settings so she and her teachers do not have to use class-time customizing her computer.
- Related Resources for Success Criterion 2.7.5:
Summary: It's recommended that users can add, remove and configure the position of graphical user agent controls (2.8.1) and restore them to their default settings (2.8.2).
2.8.1 Configure Toolbars:
The user can add, remove, reorder, show, and hide any toolbars and similar containers, and the items within them. (Level AA)
- Intent of Success Criterion 2.8.1:
This success criterion is about giving the user control over which user interface elements are visible and usable, where they are visually located on the screen, and where they fall in the navigation order. In some cases adjusting whether an element is visible and usable may involve installing/uninstalling a component, or merely showing/hiding it, depending on the user agent and the specific component.
This can reduce keystrokes, bring buttons into view that are hidden by default, or otherwise allow the user to interact with the user agent in a more efficient fashion. Users with dexterity or mobility impairments may have problems making the large movements required to move between non-adjacent controls that they need to use frequently. Similarly, users with low vision may have to excessively move their magnified viewport to see frequently used controls. Enabling these controls to be situated together removes much of the strain faced by these users, and increases productivity as task completion times decrease.
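A toolbar supporting these operations can be modeled as an ordered list of items (the order determining both visual position and navigation order) plus a hidden set. A minimal sketch; the class and item names are illustrative:

```python
class Toolbar:
    def __init__(self, items):
        self.items = list(items)  # order = visual and navigation order
        self.hidden = set()

    def move(self, item, index):
        """Reorder: place the item at a new position."""
        self.items.remove(item)
        self.items.insert(index, item)

    def remove(self, item):
        self.items.remove(item)

    def hide(self, item):
        self.hidden.add(item)

    def show(self, item):
        self.hidden.discard(item)

    def visible(self):
        """Items the user actually sees and tabs through."""
        return [i for i in self.items if i not in self.hidden]
```

In Martin's example below, moving the Print button to the front (or hiding everything else) cuts his six arrow-key presses down to one.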
- Examples of Success Criterion 2.8.1:
- Martin accesses the computer by pressing keys with a stick held in his mouth, known as a mouthstick, and gets around the user agent with taps on the tab and arrow keys. The designers of a user agent have decided to place a button for printing a web page as the last button on a toolbar. This button requires six presses of a right arrow key for Martin to reach, and is the only button he uses on the toolbar. Using a preferences dialog, Martin is able to configure this toolbar to show only the Print button, reducing the number of presses he must issue with his mouthstick to one.
- Laura has one hand, the left. When she holds her mobile phone, she must use her thumb to press the controls. She configures her mobile apps so that the toolbars are at the left side or the bottom, so she can reach them with her thumb.
- Related Resources for Success Criterion 2.8.1:
2.8.2 Reset Toolbar Configuration:
The user can restore all toolbars and similar containers to their default configuration. (Level AAA)
- Intent of Success Criterion 2.8.2:
Mistakes happen. If a user has modified the toolbar incorrectly it can often be difficult to return to a stable state so that these errors can be corrected. There are additional pressures in this regard for people with learning difficulties who make more use of toolbars than they do of textual menus. Building an easily selectable mechanism to restore these defaults saves user time and reduces stress.
- Examples of Success Criterion 2.8.2:
- Jack is an 80 year old web surfer who is intellectually very sharp but experiences tremors in his hands when required to make fine movements with the mouse. To help himself he is setting up the toolbar so that 'spacers' are placed between each component to prevent accidental selection of adjacent buttons. Today his tremors are particularly bad and Jack makes a number of mistakes becoming increasingly frustrated that while trying to correct these errors he begins to make many more. Jack just wants to start over. Luckily the browser manufacturer has included an easy mechanism to restore the default toolbar configuration. Jack chooses this option and successfully starts over the next day.
- Related Resources for Success Criterion 2.8.2:
Summary: Users can extend the time limit for user input when such limits are controllable by the user agent (2.9.1); by default, the user agent shows the progress of content in the process of downloading (2.9.2).
2.9.1 Adjustable Timing:
Where time limits for user input are recognized and controllable by the user agent, the user can extend the time limits. (Level A)
- Intent of Success Criterion 2.9.1:
Users of assistive technology, such as screen readers, and those who may require more time to read, understand, and act upon content (e.g. individuals with reading disabilities or non-native readers of the presented language) should be able to extend or override any author-imposed presentation or interaction time limits.
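Where the user agent controls a recognized time limit, extending it can be as simple as applying a user-chosen multiplier before deciding whether the limit has expired. A minimal sketch; the class and the multiplier mechanism are illustrative assumptions:

```python
class SessionTimer:
    """Illustrative model of a user-extendable time limit (in seconds)."""

    def __init__(self, limit, multiplier=1):
        self.limit = limit
        self.multiplier = multiplier  # user-chosen extension factor

    def extend(self, multiplier):
        """Let the user lengthen the limit, e.g. 10x, without disabling it."""
        self.multiplier = multiplier

    def expired(self, elapsed):
        return elapsed >= self.limit * self.multiplier
```

A "freeze" option, as in the news-alert example below, corresponds to suspending the timer entirely rather than multiplying it.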
- Examples of Success Criterion 2.9.1:
- News Alerts: A news organization's website has a region of the home page which presents featured stories, cycled every 3 seconds. A user with low vision, using a screen magnifier, requires more than three seconds to read the news item and select it. The user agent provides the user with a global option to freeze all timed events using a keyboard command. Another keyboard command resumes the timed presentation.
- Session Inactivity Timeouts: A screen reader user is logged into a financial services website and is reading the site's detailed privacy policy. Because of its security policy, the site terminates the session of any user who has been inactive for 5 minutes, after displaying a prompt warning of the impending logoff. This user is able to select an option in her non-visual user agent that automatically responds to such prompts while the user agent is reading content.
- Related Resources for Success Criterion 2.9.1:
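One way a user agent might honor a user-configured time extension, sketched in JavaScript (the function names and the multiplier preference are illustrative assumptions, not part of UAAG):

```javascript
// Sketch: scaling recognized, controllable time limits by a user
// preference before they are applied. A real user agent would hook
// this into its timer handling for recognized timeouts.
function makeTimeLimitExtender(userMultiplier = 1) {
  return function extend(recognizedTimeoutMs) {
    // Never shorten a limit below the author's original value.
    const factor = Math.max(1, userMultiplier);
    return recognizedTimeoutMs * factor;
  };
}

// E.g. a 5-minute inactivity logoff prompt stretched to 15 minutes.
const extend = makeTimeLimitExtender(3); // user asked for 3x more time
console.log(extend(5 * 60 * 1000)); // 900000
```

The clamp to a minimum factor of 1 reflects the success criterion's direction: users extend limits, they are never forced below the author-imposed value.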
2.9.2 Retrieval Progress:
By default, the user agent shows the progress of content retrieval. (Level A)
- Intent of Success Criterion 2.9.2:
Users need to know that their actions are producing results even if
there is a time delay. Users who cannot see visual indications need to
have feedback indicating a time delay and have an idea of where they are
in the retrieval process. This reduces errors and unnecessary duplicate
actions.
- Examples of Success Criterion 2.9.2:
- The user has clicked on a link that is downloading a large file. The
user agent displays a programmatically available progress bar. If the
progress stops, the user agent displays a message that it has timed out.
- The user has entered data in a form and is waiting for a response from
the server. If the response hasn't been received in 5 seconds, the user
agent displays a programmatically available message that it is waiting
for a response. If the process times out, the user agent displays a
message that it has timed out.
- Related Resources for Success Criterion 2.9.2:
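A minimal sketch of how a user agent might derive a programmatically available progress value for a download (the shape of the return object is an illustrative assumption):

```javascript
// Sketch: computing retrieval progress for display. When the total
// size is unknown (e.g. no Content-Length), an indeterminate state is
// reported so the UI and assistive technologies can still say
// "downloading" rather than showing nothing.
function retrievalProgress(bytesReceived, totalBytes) {
  if (!Number.isFinite(totalBytes) || totalBytes <= 0) {
    return { state: "indeterminate", percent: null };
  }
  const percent = Math.min(100, Math.round((bytesReceived / totalBytes) * 100));
  return { state: percent < 100 ? "downloading" : "complete", percent };
}
```

Separately tracking an elapsed-time counter against a timeout threshold would cover the "displays a message that it has timed out" behavior in the examples above.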
Implementing Guideline 2.10 (formerly 2.4) - Help users avoid flashing that could cause seizures.
[Return to Guideline]
Summary: To help users avoid seizures, the default configuration prevents the browser user interface and rendered content from flashing more than three times a second above a luminance or color threshold (2.10.1), or from flashing at all (2.10.2).
2.10.1
Three Flashes or Below Threshold:
In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period, unless the flash is below the general flash and red flash thresholds. (Level A)
- Intent of Success Criterion 2.10.1:
The intent of this Success Criterion is to guard against inducing seizures due to photosensitivity, which can occur when there is a rapid series of general flashing, or red flash. A potentially harmful flash occurs when there is a pair of significantly opposing changes in luminance, or irrespective of luminance, a transition to or from a saturated red occurs.
- Examples of Success Criterion 2.10.1:
- A single, double, or triple flash -- as long as it does not include changes to or from a saturated red -- may be used to attract a user's attention, or as part of an interface animation.
- An error condition is indicated by flashing that continues until acknowledged by the user. In order to avoid triggering seizures, the flashing is limited to fewer than three times per second, and, to be extra cautious, it is not red.
- Related Resources for Success Criterion 2.10.1:
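The counting part of this check can be sketched as follows. This only covers the "more than three times in any one-second period" condition; the general and red flash thresholds themselves involve luminance and screen-area calculations (defined in WCAG) that are not modeled here:

```javascript
// Sketch: given the timestamps (ms) of detected flash transitions,
// report whether any rolling one-second window contains more than
// three flashes (i.e. four or more within 1000 ms).
function exceedsThreeFlashes(flashTimesMs) {
  const times = [...flashTimesMs].sort((a, b) => a - b);
  for (let i = 0; i + 3 < times.length; i++) {
    // times[i]..times[i+3] is a run of four flashes; if they all fit
    // inside one second, the content fails the default configuration.
    if (times[i + 3] - times[i] < 1000) return true;
  }
  return false;
}
```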
2.10.2
Three Flashes:
In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period (regardless of whether or not the flash is below the general flash and red flash thresholds). (Level AAA)
- Intent of Success Criterion 2.10.2:
The intent of this Success Criterion is to guard against inducing seizures due to photosensitivity, which can occur when there is a rapid series of general flashing, or red flash. A potentially harmful flash occurs when there is a pair of significantly opposing changes in luminance, or, irrespective of luminance, a transition to or from a saturated red. 2.10.2 has the same effect as 2.10.1, but goes further to ensure that more sensitive users can traverse the Web without potentially harmful effects.
- Examples of Success Criterion 2.10.2:
- Related Resources for Success Criterion 2.10.2:
Summary: The user can control background images (2.11.1); present placeholders for time-based media (2.11.2) and executable regions (2.11.3), or block all executable content (2.11.4); adjust playback (2.11.5), stop/pause/resume (2.11.6), navigate (2.11.7), navigate by semantic structure (2.11.8), and select tracks for prerecorded time-based media (2.11.9); and adjust contrast and brightness of visual time-based media (2.11.10).
Applicability Notes:
Guideline 2.11 and its success criteria only apply to images, animations, video, audio, etc. that
the user agent can recognize.
2.11.1 Background Image
Toggle:
The user can have all
recognized background images shown or hidden.
(Level A)
- Intent of Success Criterion 2.11.1:
It can be difficult for some people to read text or identify images
when the background is complex or doesn't contrast well with the
foreground. Allowing users to disable the display of background images
helps ensure that foreground content remains easy to read. This can
also help remove purely decorative distractions, which is important
for some users.
This success criterion does not address issues of multi-layered renderings
and does not require the user agent to change background rendering for
multi-layer renderings (refer, for example, to the z-index property in
Cascading Style Sheets, level 2 ([CSS2], section 9.9.1)).
Note: Because background images occasionally convey important
information, when their display is turned off the user agent should
give users access to any alternative content associated with them. (At
the time of this writing, HTML does not support alternative content
for background images, but this may be supported in other technologies
or future versions.)
- Examples of Success Criterion 2.11.1:
- James has a reading disability and needs text to be clear of
distractions that are unrelated to the text. He configures his user
agent not to load background images and navigates to a web page. James
then gets only the text from the web page, without any images
interfering with what he is reading.
- Sasha requires high contrast to be able to discriminate the shape of letters. She always sets a preference in her browser to turn off background images, so that she can see the text clearly without the variations in the background.
- Related Resources for Success Criterion 2.11.1:
2.11.2 Time-Based Media
Load-Only:
The user can override the play on
load of recognized time-based media content such that the content is not played
until explicit user request. (Level A)
- Intent of Success Criterion 2.11.2:
Users who
need to avoid signals that may trigger seizures, users who are easily
distracted, and users who have difficulty interacting with the
controls provided for playing media need to be able to load media in a paused state by default. The user agent provides a global control that
sets a state equivalent to "paused waiting for user interaction" for
all recognized media when a page loads. As an opt-in user setting, autoplay is disabled and media stays paused until the user activates 'play'. The user agent provides a visual
or auditory (as appropriate) indicator that the video is in paused
state and needs user interaction to start. This prevents media from
playing without explicit request from the user.
There may be times when media doesn't have a native control in the page.
That is, the media is not in the actual document, but rather has
simply been created with document.createElement('audio'). Here, the
user agent does not recognize that the media exists. The user agent
cannot give a visual indication by default, as it wouldn't be clear
where that indication should appear in the page. In that case, it is
up to the author to provide the controls. See WCAG in the resources
below.
- Examples of Success Criterion 2.11.2:
- Jill browses the web using a screen reader to listen to the text of web pages. She navigates to her favorite shopping site and is greeted with trumpets blaring and an announcer shouting "Sale, sale, sale!" The audio is so loud that she can no longer hear the web page content. Jill closes her browser, changes a setting titled Play Audio on Request to yes, and visits her shopping site again. This time she can read the content, and when she is ready she plays the audio and smiles, thinking of the deals she is about to find.
- Jamie has epilepsy that's triggered by certain types of audio. She sets her browser so that content does not play automatically, so she can avoid audio that could trigger her epilepsy.
- Kendra has photo-epilepsy. She sets her browser so that content does not play automatically so she can avoid flashing content that could trigger her photo epilepsy.
- Related Resources for Success Criterion 2.11.2:
2.11.3 Execution
Placeholder:
The user can
render a placeholder instead of executable
content that would normally be contained within an on-screen area (e.g.
Applet, Flash), until explicit user request to
execute. (Level A)
- Intent of Success Criterion 2.11.3:
Documents that do things automatically when loaded can delay, distract, or interfere with user's ability to continue with a task. In the case of embedded objects, applets and media, replacing the executable content with a placeholder tells the user what has been blocked and provides a mechanism (e.g. a play button) for unblocking when the user is ready.
- Note: It is generally recommended that the placeholder take up the same space as the object it is replacing, so that the presentation does not need to be reflowed when execution starts. However, users on mobile devices, users of screen enlargers, or users who have difficulty with scroll commands may benefit from the option of a smaller placeholder.
- Examples of Success Criterion 2.11.3:
- Jane has difficulty concentrating. In order to concentrate on the text of a document she wants to hide any multimedia content, and only trigger execution of that content by clicking on the placeholder when she feels it's appropriate.
- Evan is blind. He sets an option in his browser so that when a web page loads it does not automatically run executable objects, so that any music or speech they play won't interfere with his ability to hear his screen reader. When he is ready to start one playing, he navigates to its placeholder and presses the Enter key to activate it.
- Related Resources for Success Criterion 2.11.3:
2.11.4 Execution Toggle:
The user can turn on/off the execution of executable content that would not normally be contained within a particular area (e.g. JavaScript). (Level A)
- Intent of Success Criterion 2.11.4:
Documents that do things automatically when loaded can delay, distract, or interfere with user's ability to continue with a task. The user needs to be able to specify that executable content (e.g. scripts) be blocked when a document loads, be told which content has been blocked, and be able to selectively execute the content at a later time.
- Note: Some web applications and documents may be essentially empty until scripts are run. However, it is still important for users to have this level of control.
- Examples of Success Criterion 2.11.4:
- Jane has difficulty concentrating. In order to concentrate on the text of a document she wants to prevent any animations, media, or dynamic content from executing until she is ready. An icon on the status bar tells her that scripts have been blocked, and by clicking it she can select which scripts to run.
- Evan is blind. He sets the option in his browser so that when a web page loads it does not automatically start running scripts that might play sounds that would interfere with his ability to hear his screen reader. An icon on the status bar tells him that scripts have been blocked, and by activating it he can select which scripts to run.
- Related Resources for Success Criterion 2.11.4:
2.11.5 Playback Rate Adjustment for Prerecorded Content:
The user can adjust the playback rate of prerecorded time-based media content, such that all of the following are true: (Level A)
- The user can adjust the playback rate of the time-based media tracks to between 50% and 250% of real time.
- Speech whose playback rate has been adjusted by the user maintains pitch in order to limit degradation of the speech quality.
- Audio and video tracks remain synchronized across this required range of playback rates.
- The user agent provides a function that resets the playback rate to normal (100%).
- Intent of Success Criterion 2.11.5:
Users with sensory and cognitive impairments may have difficulty following or understanding spoken audio presented at the normal playback rate. By slowing down the audio presentation of speech while maintaining its pitch, or frequency characteristics, users are better able to follow the spoken content. For users with visual impairments familiar with speeding up speech using screen readers or digital audio book players, the ability to speed up the audio while maintaining pitch allows them to skim spoken audio without loss of understandability. Users with learning disabilities may be distracted or otherwise unable to follow complex animations or instructional video. By allowing the presentation to be slowed, the user has a better opportunity to observe the visual events of the animation. Additionally, a person may want to slow down the media while taking notes, particularly if language or dexterity impairments make note-taking slow.
- Examples of Success Criterion 2.11.5:
- Timo experienced a traumatic brain injury and has difficulty in comprehending speech. When listening to episodes of his favorite podcast on the Web, he slows down the audio by 50% and is able to understand the interviewer's and guest's question and answer session.
- Anu is a blind university student who has grown up with digital talking book players, and regularly listens to spoken audio at 200% of normal speaking rate. In studying for exams, she reviews the online lecture videos from her History course, adjusting the presentation rate to 2x on the Web video player, in order to quickly review the material, and slowing the presentation down to normal rate when she encounters material she needs to review carefully.
- Perttu has a learning disability and requires a longer time to follow instructions. He likes to cook and is watching a cooking demonstration on the Web. The instructions go by too quickly, and Perttu slows the video player to half speed in order to make it easier to follow the recipe being prepared.
- Related Resources for Success Criterion 2.11.5:
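The numeric requirements of this success criterion can be sketched as a small rate control. In a browser this would map onto HTMLMediaElement.playbackRate (with pitch preservation enabled, where supported); the constant names below are illustrative:

```javascript
// Sketch: playback-rate control satisfying the 50%-250% range with a
// reset to normal speed.
const MIN_RATE = 0.5;   // 50% of real time
const MAX_RATE = 2.5;   // 250% of real time
const NORMAL_RATE = 1.0;

// Clamp a requested rate into the required range.
function clampRate(requested) {
  return Math.min(MAX_RATE, Math.max(MIN_RATE, requested));
}

// The required "reset to normal (100%)" function.
function resetRate() {
  return NORMAL_RATE;
}
```

Pitch preservation and audio/video synchronization across this range (the second and third conditions) are properties of the media pipeline rather than of this control logic.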
2.11.6 Stop/Pause/Resume
Time-Based Media:
The user can stop, pause, and resume rendered audio and
animation content (including video,
animated images, and changing text) that last three or more seconds at their default playback
rate. (Level A)
- Intent of Success Criterion 2.11.6:
Users with sensory, attentional, or cognitive impairments may have difficulty following or understanding multimedia content. By allowing time-based media to be stopped, paused, and resumed, users are able to control the presentation rate, giving them time to understand or act upon presented content before continuing, or to stop potentially distracting or harmful content.
- Examples of Success Criterion 2.11.6:
- Adam reads more slowly than average because of his dyslexia. He's watching a video of a lecture, and when the video shows slides, he presses the space bar to pause the video so that he can read the text at his own speed. When he's ready to continue, he presses the space bar again to resume the video.
- Angelica uses a website to watch and listen to user-contributed podcasts, but when one starts playing she realizes that the level of white noise in the soundtrack is likely to trigger her audio-induced seizures. She quickly clicks on the player's STOP button (or presses the equivalent keyboard command) and the noise is instantly discontinued.
- Allesandro finds it impossible to ignore visual changes, and so finds unnecessary animations make it very difficult for him to read or interact with other content on the screen. When he's trying to read an article on a newspaper's website and finds an animated advertisement or moving text of a news ticker continually distracting him, he presses the Esc key (or chooses the appropriate command from the menu bar) to tell his web browser to stop all animations.
- Amaryllis is blind and is listening to streaming audio on a web page. When she wants to respond to an incoming email message she needs to pause the audio, which would otherwise interfere with her ability to hear her screen reader.
- Related Resources for Success Criterion 2.11.6:
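The stop/pause/resume distinction above can be sketched as a small state machine (the object shape is an illustrative assumption): pause preserves the playback position so the user can resume, while stop discards it.

```javascript
// Sketch: minimal transport states for rendered time-based media.
function mediaTransport() {
  let state = "stopped";
  let position = 0; // ms into the media
  return {
    play() { state = "playing"; },
    pause(atMs) {
      if (state === "playing") { state = "paused"; position = atMs; }
    },
    resume() { if (state === "paused") state = "playing"; },
    stop() { state = "stopped"; position = 0; },
    get state() { return state; },
    get position() { return position; },
  };
}
```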
2.11.7 Navigate Time-Based Media:
The user can navigate along the timebase using a continuous scale, and by relative time units within rendered audio and animations (including video and animated images) that last three or more seconds at their default playback rate. (Level A)
- Intent of Success Criterion 2.11.7:
Users with sensory, cognitive or attentional impairments may find it difficult to understand or follow time-based media. This success criterion allows users to position themselves within the timebase to review content or to skip content that may be distracting.
- Examples of Success Criterion 2.11.7:
- Jared has a print disability which makes it laborious to read text. He is watching a technical training video which will display section objectives or summary questions as text. When the text flashes by too quickly for him to read, he presses a key command to skip back an increment so he can read the text, or pause the video if more time is required.
- Debbie has difficulty with bright or flashing video. When she encounters a flashing transition in a video, she quickly presses a key command to forward the video past the flashing, then carefully uses the slider to adjust the video back to the start of the next section avoiding the flashing material.
- Related Resources for Success Criterion 2.11.7:
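Navigation by relative time units reduces to adding a signed increment to the current position and clamping to the media's extent, as in this sketch:

```javascript
// Sketch: relative-time navigation along the media timebase.
// deltaMs is negative to skip back, positive to skip forward; the
// result is clamped to [0, durationMs].
function seekRelative(currentMs, deltaMs, durationMs) {
  return Math.max(0, Math.min(durationMs, currentMs + deltaMs));
}
```

The "continuous scale" requirement corresponds to a slider bound to the same clamped position value.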
2.11.8 Semantic Navigation of Time-Based Media:
The user can navigate by semantic structure within the time-based media, such as by chapters or scenes present in the media (Level AA).
- Intent of Success Criterion 2.11.8:
Users need to be able to navigate time-based media in ways that are more meaningful than arbitrary time increments.
- Examples of Success Criterion 2.11.8:
- Marka is blind and is listening to a video of an hour-long lecture. The section she is in has some complex material that builds on material from an earlier section. While a sighted user could pause the video and move the slider back until she recognized visually distinct content from the section she wanted, Marka uses a control to skip back section by section until she hears the section title name she wants to review. When she is finished, Marka uses the control to move forward section by section until she hears the title of topic she was originally in.
- Wes has repetitive stress injury that limits the length of his computer sessions. He stops playback of a training video when he is tired and after resting, he can restart it and navigate to the scene where he left off.
- Related Resources for Success Criterion 2.11.8:
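Given a list of chapter (or scene) start times, skipping by semantic structure as Marka does can be sketched as follows (function names and the millisecond representation are illustrative assumptions):

```javascript
// Sketch: chapter navigation over start times in ms, sorted ascending.

// Start of the chapter before the one containing currentMs
// (0 if already in the first chapter).
function previousChapterStart(chapterStarts, currentMs) {
  const starts = chapterStarts.filter((s) => s < currentMs);
  return starts.length > 1 ? starts[starts.length - 2] : 0;
}

// Start of the chapter after the one containing currentMs
// (null if already in the last chapter).
function nextChapterStart(chapterStarts, currentMs) {
  const next = chapterStarts.find((s) => s > currentMs);
  return next !== undefined ? next : null;
}
```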
2.11.9 Track Enable/Disable of Time-Based Media:
During time-based media playback, the user can determine which tracks are available and select or deselect tracks, overriding global default settings for captions, audio descriptions, etc. (Level AA)
- Intent of Success Criterion 2.11.9:
Some users with disabilities need to choose different languages or audio tracks (e.g. descriptive video). Users need the ability to choose the tracks that best meet their accessibility needs (e.g. the caption track in their own language) when authors have provided many alternatives.
- Examples of Success Criterion 2.11.9:
- Marka is listening to a video of a lecture. The professor is demonstrating a chemistry experiment and is not speaking during a key part. Since she cannot see what the professor is demonstrating, Marka brings up a menu of the available tracks and discovers that there is an audio description track available. Marka selects the description track, rewinds a few minutes, and listens to the description of the experiment.
- Gorges is deaf, enjoys current movies, and subscribes to a web service that streams major popular movies. While he speaks English, a certain movie uses slang that he doesn't understand. He pauses the movie, opens a menu of caption tracks, and finds a Spanish caption track. He then watches the rest of the movie with Spanish captioning.
- Related Resources for Success Criterion 2.11.9:
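Track selection as described above amounts to enabling one alternative within a kind while leaving other kinds untouched. A sketch, where the track objects ({kind, lang, enabled}) are an illustrative assumption rather than a real media API:

```javascript
// Sketch: enable the track of a given kind (e.g. "captions") matching
// the user's chosen language, disabling its siblings of the same kind
// and leaving other kinds (e.g. audio descriptions) as they were.
function selectTrack(tracks, kind, lang) {
  return tracks.map((t) => ({
    ...t,
    enabled: t.kind === kind ? t.lang === lang : t.enabled,
  }));
}
```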
2.11.10 Video Contrast and Brightness [was 2.11.12]:
Users can adjust the contrast and brightness of visual time-based media. (Level AAA)
- Intent of Success Criterion 2.11.10:
Users with low vision may need higher contrast or brightness to discern the content of a video, while users with photosensitivity may need reduced brightness and contrast to watch video safely. Allowing the user to adjust the contrast and brightness of visual time-based media lets both groups perceive the content.
- Examples of Success Criterion 2.11.10:
- Frank has low vision that requires a higher contrast to discern a video image. When Frank is watching an instructional video, he selects a menu item that allows him to increase the contrast of the video, to make it easier for him to see the important content.
- Kelly has photo-epilepsy and is watching an amateur video taken on a sunny day near the water. Concerned that the video may contain flashing that could trigger a seizure, Kelly selects a menu item of video controls that allow her to reduce the brightness and contrast of the video. While some of the detail is lost, Kelly can safely watch the video.
- Related Resources for Success Criterion 2.11.10:
Summary: For all input devices supported by the platform, the user agent should let the user perform all functions other than text entry (2.12.2), and enter text using any platform-provided text input feature (2.12.1). Where possible, user agents are also encouraged to let the user enter text even if the platform does not provide such a feature (2.12.3).
2.12.1 Support Platform Text Input Devices:
If the platform
supports text input using an input device, the user agent is compatible with this functionality. (Level A)
- Intent of Success Criterion 2.12.1:
Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device. It is not the intention of these guidelines to require every user agent to implement its own on-screen keyboard on systems that do not include them, but on systems where one is included it is vitally important that the user agent support this utility.
- Examples of Success Criterion 2.12.1:
- Ruth has extremely limited motor control and slurred speech, so she operates her computer using a head pointer. Her desktop operating system includes a built-in on-screen keyboard utility, and even though the percentage of desktop users who use it is very small, she counts on new applications (including user agents) to be tested for compatibility with it so that she can enter text. When active, the on-screen keyboard reserves the bottom portion of the screen for its own use, so the user agent respects this and does not cover that area even in modes that would normally take up the full screen. It also avoids communicating with the keyboard through low-level system APIs that would miss simulated keyboard input.
- Related Resources for Success Criterion 2.12.1:
2.12.2 Operation With Any Device:
If an input device is supported by the platform, all user agent functionality other than text input can be operated using that device. (Level AA)
- Intent of Success Criterion 2.12.2:
Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device, and only fall back on a physical or on-screen keyboard as infrequently as possible. If the platform provides the ability to enter arbitrary text using a device (such as large vocabulary speech recognition or an on-screen keyboard utility), the user agent is required to support it per 2.12.1 Support Platform Text Input Devices. If the platform does not provide such a feature, the browser is encouraged to provide its own, but because that is generally more difficult and resource intensive than command and control, it is not required.
- Examples of Success Criterion 2.12.2:
- Ruth has extremely limited motor control and slurred speech, so operates her computer using a head pointer. The mouse pointer moves in response to the orientation of her head, and she clicks, double clicks, or drags using a sip-and-puff switch. It is much easier for her to point and click on a button or menu item than it is for her to simulate keyboard shortcuts using her on-screen keyboard. In fact, she prefers to customize her applications to make most functions available through toolbar buttons or menu items, even those that are by default available only through keyboard shortcuts.
- Randall has a web browser on his smart phone that allows him to perform most operations using speech commands. Unfortunately, a few features are only available through the touchscreen, which he can only operate by taking off his protective gloves. In the next version of the browser, the remaining features are given keyboard commands, and Randall finds the product safer and more convenient to use.
- Related Resources for Success Criterion 2.12.2:
2.12.3 Text Input With Any Device:
If an input device is supported by the platform, all user agent functionality including text input can be operated using that device. (Level AAA)
- Intent of Success Criterion 2.12.3:
Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device, and only fall back on a physical or on-screen keyboard as infrequently as possible. If the platform provides the ability to enter arbitrary text using a device (such as large vocabulary speech recognition or an on-screen keyboard utility), the user agent is required to support it per 2.12.1 Support Platform Text Input Devices. If the platform does not provide such a feature, the browser is encouraged to provide its own.
- Examples of Success Criterion 2.12.3:
- Ruth has extremely limited motor control and slurred speech, so operates her computer using a head pointer. The mouse pointer moves in response to the orientation of her head, and she clicks, double clicks, or drags using a sip-and-puff switch. The operating system does not provide an on-screen keyboard, but in order to be maximally accessible, a small on-screen keyboard is available as an add-on for her browser.
- Randall has a web browser on his smart phone that allows him to perform most operations using speech commands. By offloading the speech recognition to an Internet server, it is able to perform large vocabulary speech recognition, so Randall can use his voice to compose email and fill in forms, as well as controlling the browser itself.
- Related Resources for Success Criterion 2.12.3:
PRINCIPLE 3: Ensure that the user interface is
understandable
Implementing Guideline 3.1 - Help users avoid
unnecessary messages.
[Return to Guideline]
Summary: Users can turn off non-essential messages from the author or user agent.
3.1.2 Reduce Interruptions:
The user can avoid or defer recognized non-essential or low priority messages and updating/changing information in the user agent user interface and rendered content. (Level AA)
- Intent of Success Criterion 3.1.2:
It's important that users be able to avoid unnecessary messages. Messages designed to inform the user can be a burden to users for whom pressing keys is time-consuming, tiring, or painful. Similarly, users with some cognitive impairments find it difficult to deal with constant distractions caused by updating visual or audio information, and would prefer to hide unnecessary updates.
- Examples of Success Criterion 3.1.2:
- The browser has an update ready. The user should have the option to be informed of an update or, instead, only get update information when the user actively requests it.
- A web page has a stock market ticker that is marked as having a low priority level using the WAI-ARIA aria-live="polite" value. Shirley has a cognitive disability and is distracted by the page flicker, so she changes the browser's preference setting to indicate that regions with a low priority level should not be automatically updated.
- Related Resources for Success Criterion 3.1.2:
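The avoid-or-defer behavior above can be sketched as a small notification queue (the priority labels and function names are illustrative assumptions): essential messages are presented immediately, everything else is held until the user explicitly asks for it.

```javascript
// Sketch: deferring non-essential messages until explicit request.
function makeNotifier() {
  const deferred = [];
  return {
    // Returns the message to present now, or null if it was deferred.
    notify(message, priority) {
      if (priority === "essential") return message;
      deferred.push(message);
      return null;
    },
    // Called when the user actively requests pending messages;
    // returns and clears the deferred queue.
    reviewDeferred() { return deferred.splice(0); },
  };
}
```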
Summary: Users can have form submissions require confirmation (3.2.1), go back after navigating (3.2.2), and have their text checked for spelling errors (3.2.3).
3.2.1 Form Submission:
The user can specify whether or not recognized form submissions must be confirmed. (Level AA)
- Intent of Success Criterion 3.2.1:
Users need to be protected against accidentally submitting a form. Some
assistive technologies use the Enter key to advance to the next field.
If the form is designed to submit on Enter, the user can unknowingly
submit the form. Those users need to be able to disable the ability to
submit on Enter.
- Examples of Success Criterion 3.2.1:
- Upon installation of a web browser, a screen reader user selects an
option to disable form submission on Enter. This is a preference option
that can be easily discovered and changed by the user in the future.
This allows the user to complete forms on the banking website knowing
that the submit button must be selected in order to submit the form.
- Ryan uses a mouse but his hand tremor makes him more likely to accidentally miss his intended target. Cancel and Submit (or OK) buttons are often next to each other, and Ryan has found that submitting a form inappropriately can cause serious complications, so he sets a browser option to always request confirmation of form submissions. As a result, when he clicks on a submit button the browser presents a message box asking "Are you sure you want to submit this form?" with larger, well-spaced buttons for Submit and Cancel.
- Related Resources for Success Criterion 3.2.1:
3.2.2 Back Button: The user can reverse recognized navigation between web addresses (e.g. standard "back button" functionality). (Level AA)
- Intent of Success Criterion 3.2.2:
Being able to retrace a navigation step is important for users with cognitive issues that involve memory and attention. It's also important for users whose means of input is not 100% accurate, such as speech input users or users with fine motor challenges. And it's beneficial for users for whom navigation is time consuming, tiring, or painful, because it allows them to avoid having to re-enter long URLs.
- Examples of Success Criterion 3.2.2:
- Joe is using speech input in a relatively noisy room. The program hears an especially loud word from across the room and interprets it as Joe saying "Enter" to click a selected link. Joe says "Go Back" to go back to the page he was on.
- Mike's head injury leaves him easily distracted. He is in the middle of a search when the phone rings. When he comes back he says "Go Back" to go back to the original search page so he can reorient himself.
- Ellen had a head injury that affects her short-term memory. She clicks on a link, and is interrupted by a colleague. When she goes back to her task she has forgotten what she was originally doing. It's important that she be able to retrace her steps to reorient herself.
- Etta has low vision and frequently clicks on the wrong link, and when she does so, she simply clicks the Back button in order to get back to where she had been.
- Bessie enters text with difficulty, so when she wants to navigate to a page she's already been to, it's much easier for her to click on its title in her History menu than it would be to enter its URL or navigate to it through a long series of links.
- Related Resources for Success Criterion 3.2.2:
3.2.3 Provide spell checking functionality:
User agents provide spell checking functionality for text created inside the user agent. (Level AA)
- Intent of Success Criterion 3.2.3:
Users with various disabilities benefit from the features found in spell checkers when composing text. It is commonplace to enter significant amounts of content inside a user agent for tasks such as email, social networking and web-based productivity applications such as word processors. The intent of this success criterion is to ensure that users with disabilities can easily correct spelling errors when composing text. This is particularly important for users with dyslexia and other disabilities that significantly increase their chance of misspelling words.
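Content rendered by the user agent can opt editable regions into this spell checking through the standard HTML `spellcheck` attribute; a minimal illustrative fragment:

```html
<!-- The global spellcheck attribute requests the user agent's
     built-in spell checking for editable regions. -->
<textarea spellcheck="true">Compose your message here</textarea>
<div contenteditable="true" spellcheck="true">Editable rich text</div>
```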
- Examples of Success Criterion 3.2.3:
- Amanda is dyslexic and frequently spells words incorrectly. She is able to correct words when alerted to the errors. She navigates to her web-based email application and composes a new message. The user agent alerts her to spelling errors as she is typing and she quickly corrects the mistakes and sends an error free message.
- Related Resources for Success Criterion 3.2.3:
3.2.4 Text Entry Undo:
The user can reverse recognized text entry actions prior to submission. (Level A)
Note: Submission can be triggered in many different ways, such as clicking a submit button, typing a key in a control with an onkeypress event, or by a script responding to a timer.
- Intent of Success Criterion 3.2.4:
Users who are blind or have some visual impairments or cognitive disabilities have difficulty determining the location of the keyboard focus, and therefore are at risk of entering text in an undesired window or location. Users with mobility problems may have difficulty selecting the correct form field and may not realize it until they have entered text information. These users need to be able to reverse a text entry ("Undo") prior to submission.
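The behavior required here can be sketched as a simple undo stack that records each recognized text entry action prior to submission. The names below are hypothetical; a real user agent would integrate this with its editing commands and the platform's undo service.

```javascript
// Minimal sketch of a pre-submission undo buffer for a text field.
function createUndoBuffer(initial = "") {
  const history = [initial];      // one entry per recognized text entry action
  let index = 0;                  // current position in the history
  return {
    get value() { return history[index]; },
    edit(newValue) {              // record a recognized text entry action
      history.splice(index + 1);  // discard any redo states
      history.push(newValue);
      index++;
    },
    undo() { if (index > 0) index--; return history[index]; },
    redo() { if (index < history.length - 1) index++; return history[index]; }
  };
}
```

Keeping discarded redo states out of the history mirrors the common editor convention: a new edit after an undo starts a fresh branch.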
- Examples of Success Criterion 3.2.4:
- Billie is a paraplegic who uses speech input. When she is working from the main office, she is in a noisy location, which can interfere with her speech input. She is writing a blog entry and is almost finished when her speech input software incorrectly interprets some background noise as a "select all" command, causing her to overwrite her entire blog entry with a small phrase. She uses the "undo" command to reverse the text entry and restore her blog entry.
- George is blind and uses a screen reader. He is entering financial information into his banking bill-paying account. He types the first few letters of the payee to select it from a long list. When George reviews the selection prior to pressing Submit, he hears that he has selected the wrong payee. George uses the "undo" command and selects the correct payee.
- Related Resources for Success Criterion 3.2.4:
3.2.5 Settings Change Confirmation:
If the user agent provides mechanisms for changing its user interface settings, it either allows the user to reverse the setting changes, or allows the user to require confirmation before setting changes proceed. (Level A)
- Intent of Success Criterion 3.2.5:
The description of some user interface settings can be confusing to less-technical users; settings changes can have unintended consequences; and some disabilities make it more likely that a user will make an unintended selection on a preference screen. Users need to be able to reverse changes to the user interface.
- Examples of Success Criterion 3.2.5:
- Davy has moderately low vision. He is adjusting the contrast of the background on his mobile phone when he accidentally selects a white background with the previously selected white text, so all the labels of the icons disappear. He can see a highlighted rectangle on the screen that usually contains the word Undo when he makes a change on his phone. He selects that box and the dark background returns, so he can now read the text. He carefully changes the background to a color with sufficient contrast for comfortable reading.
- Related Resources for Success Criterion 3.2.5:
Summary: User documentation is available in an accessible format (3.3.1), it includes accessibility features (3.3.2), delineates differences between versions (3.3.3), provides a centralized view of UAAG 2.0 conformance (3.3.4), and is available as context-sensitive help in the user agent (3.3.5).
3.3.1 Accessible documentation:
The product documentation is available in a format that meets success criteria of WCAG 2.0 Level "A" or greater. (Level A)
- Intent of Success Criterion 3.3.1:
User agents will provide documentation in a format that is accessible.
If provided as Web content, the documentation must conform to WCAG 2.0
Level "A"; if not provided as Web content, it must conform to a
published accessibility benchmark that is identified in any conformance
claim for the user agent. This benefits all users who utilize
assistive technology or accessible formats.
- Examples of Success Criterion 3.3.1:
- A user agent installs user documentation in HTML format conforming to
WCAG 2.0 Level "A". This documentation is viewed within the user agent
and is accessible in accordance with the conformance of the user agent
to UAAG 2.0.
- A user agent provides documentation in HTML format conforming to WCAG
2.0 Level "AA" and is available online. In addition, the user agent
provides user documentation in a locally installed digital talking
book content format in conformance with a recognized, published
format.
- Related Resources for Success Criterion 3.3.1:
3.3.2 Document Accessibility
Features:
All features of the user agent that meet User Agent Accessibility Guidelines 2.0 success criteria are documented. (Level A)
- Intent of Success Criterion 3.3.2:
Users with disabilities who need to use the accessibility features of the user agent user interface, including with assistive technology, can easily find descriptions of how to use and configure those features. These descriptions can be provided in the documentation or user interface of the user agent, or by the underlying platform if the feature is in fact a service of that platform. It is strongly recommended that the documentation be easily available to end users, and without charge.
- Examples of Success Criterion 3.3.2:
- In a section entitled "Browser Features Supporting Accessibility", a
vendor provides a detailed description of the user agent features that
provide accessibility, describing how they function and listing any
third-party assistive technologies that are supported or required.
- A user is exploring the menus of a user agent and finds a feature named "Use My Style Sheet". Activating help, the user quickly learns that this feature allows custom CSS style sheets to be created to help make web content more accessible.
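A user style sheet of the kind such a feature loads might, for illustration, enforce high-contrast, enlarged text (the selectors and color values here are examples only):

```css
/* Example user style sheet: force high-contrast, larger text.
   !important lets user rules take precedence over author rules. */
body, p, li, td {
  color: #ffffff !important;
  background-color: #000000 !important;
  font-size: 150% !important;
  line-height: 1.5 !important;
}
a:link    { color: #ffff00 !important; }
a:visited { color: #00ffcc !important; }
```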
- Related Resources for Success Criterion 3.3.2:
3.3.3 Changes Between
Versions:
Changes to features that meet UAAG 2.0 success criteria since the previous user agent release are documented. (Level AA)
- Intent of Success Criterion 3.3.3:
As accessibility features are implemented in new versions, it is important for users to be informed about these new features and how to operate them. The user should not have to discover for themselves which new features were implemented in the new version.
- Examples of Success Criterion 3.3.3:
- Martha goes to an app store on her computer and notices that an update for the web browser she uses is available. When she installs it she finds a welcome page talking about the new features in this release, and one of the links on that page says "What's New For Accessibility". Following this link Martha reads about the accessibility improvements added and discovers this update had added a feature allowing her to have tooltips displayed for elements when she is using caret browsing. The text also informs Martha that this feature is off by default and that she should go to accessibility settings to turn it on.
- Related Resources for Success Criterion 3.3.3:
3.3.4 Centralized View:
There is a dedicated section of the documentation that presents a view of all features of the user agent necessary to meet the requirements of User Agent Accessibility Guidelines 2.0. (Level AAA)
- Intent of Success Criterion 3.3.4:
Users need to know which accessibility features exist and how to operate them. The user should not have to hunt for where the accessibility features are documented in context (although that documentation too is very useful). A specific section devoted only to accessibility features (e.g. keyboard shortcuts, how to zoom the viewport, where to find accessibility configuration settings) makes it easier for users to become productive with the user agent more quickly. This is on a per-product basis. Nested user agents or add-ons may provide separate centralized documentation.
- Examples of Success Criterion 3.3.4:
- Bob downloads a new web browser on his mobile phone. He's never used this software before and also uses a screen reader that is part of his phone's operating system. The browser's online help includes a section on accessibility that points him to pages discussing non-visual access, such as interaction with screen readers, as well as helpful hints such as an explanation of the screen layout and a list of supported touch gestures.
- A specific section in the documentation (local or online) detailing accessibility features of the user agent.
- Related Resources for Success Criterion 3.3.4:
Implementing Guideline 3.4 - The user agent must behave in a predictable fashion.
[Return to Guideline]
Summary: Users can prevent non-requested focus changes (3.4.1).
3.4.1 Avoid unpredictable focus [formerly 3.4.2, before that 5.4.2, and 1.9.20, broadened]:
The user can prevent focus changes that are not a result of explicit user request. (Level A)
- Intent of Success Criterion 3.4.1:
Users need to know that navigation in a web page is going to start in a predictable location and move in a predictable fashion. If a page moves the initial focus to somewhere other than the beginning of the page, the user may not realize they have skipped over some content, especially if they can only see a small portion of the page at a time. Similar problems can occur if the content or user agent automatically moves the focus while the user is reading or entering data on a page. If the focus moves without the user recognizing it, they can easily end up entering data in an incorrect field or taking other unintended actions.
Users can also become confused or disoriented when a window scrolls without their having requested it. This is particularly problematic for users who can only see a small portion of the document, and who thus have to use more effort to determine their new context. Such users are also more likely to continue typing, not immediately realizing that the context has changed. Users for whom navigation is time consuming, tiring, or painful (including those using screen readers or with impaired dexterity) may also need more steps to return to the area where they want to work.
While having the page set focus to a specific link or field when it loads may improve accessibility for some users on some pages, it can also be detrimental for others, and therefore users need to be in control of this behavior.
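Content commonly triggers such focus moves with the standard HTML `autofocus` attribute or a script call to `focus()`; a user agent satisfying this success criterion would let the user suppress both. The markup below is illustrative:

```html
<!-- Either of these moves focus on page load unless the user agent
     lets the user prevent non-requested focus changes. -->
<input type="search" name="q" autofocus>
<script>
  // Script-driven focus move, equivalent in effect to autofocus.
  document.querySelector('input[name="q"]').focus();
</script>
```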
- Examples of Success Criterion 3.4.1:
- Jerome has loaded a page that sets its default focus to a search box. Because he wants to read the content of the page rather than starting by entering data, he has to scroll back to reach the content that precedes the search box. To prevent this, he adjusts his browser's settings to disable the automatic focus change on this page.
- Jessica uses a screen enlarger, and loads a page that contains instructions followed by a form. If the page automatically moves the keyboard focus to the form, she may not realize there were instructions. To avoid this problem, she sets an option to prevent default focus changes.
- James uses speech recognition. He speaks his credit card number by saying several digits and, if needed, Tab keys, in a single phrase. He needs to know ahead of time whether it is necessary to include the Tab command in the phrase.
- Joey is filling in a Web form that asks for her phone number using three separate fields. When she types the three digits of her area code into the first field, the browser automatically moves the focus to the second field. This can be a problem for two reasons, first because if Joey is not looking at the screen she does not realize that the focus has moved for her, and so she presses the Tab key to move it manually, not realizing that this now puts the focus on the third field rather than the second. It can also pose a problem if Joey realizes that she typed one digit incorrectly in the area code field, because when she presses Shift+Tab to return and edit that field, the browser or content script checks the number of digits that have been entered, and seeing that it is three, automatically moves the focus once again, preventing her from editing the number. To avoid these problems, Joey goes to her browser's Preferences dialog box and checks the option that prevents focus changes that she has not explicitly requested.
- Justine uses a keyboard macro to execute a multistep command at a specific location. The focus changes without her control, so the command fails or executes with unpredictable results.
- Related Resources for Success Criterion 3.4.1:
PRINCIPLE 4: Facilitate programmatic access
Implementing Guideline 4.1 - Facilitate
programmatic access to assistive technology
[Return to Guideline]
Summary: Be compatible with assistive technologies by supporting platform standards (4.1.1), including providing information about all menus, buttons, dialogs, etc. (4.1.2, 4.1.6), access to DOMs (4.1.4), and access to structural relationships and meanings, such as what text or image labels a control or serves as a heading (4.1.5). Where something can't be made accessible, provide an accessible alternative version, such as a standard window in place of a customized window (4.1.3). Make sure that programmatic exchanges are quick and responsive (4.1.7).
4.1.1 Platform Accessibility Services:
The user agent supports relevant
platform accessibility services. (Level A)
- Intent of Success Criterion 4.1.1:
The intent of this success criterion is to make user agent user interfaces more accessible to users who rely on assistive technologies, such as screen readers and speech recognition software. Most major operating environments provide platform accessibility services that allow applications (including user agents) and assistive technologies to work together more effectively, but these must be supported by the software on both sides. The requirement is stated generally because the specifics of what constitutes a platform accessibility service will differ on each platform, but basic features common to these services are addressed by other success criteria under Guideline 4.1.
Assistive technologies often use a combination of methods to get information about and manipulate a user agent's user interface and the content it's rendering; some of these are addressed in other success criteria. However, platform accessibility services are particularly important because they provide common functionality across all the well-behaved applications running on the platform, reducing the amount of special-casing the assistive technology has to implement for each of the hundreds of applications it may need to support.
Most web-based user agents will support this requirement automatically because they run inside host user agents, and the host is responsible for exposing all content, including nested user agents, via platform accessibility services. As long as the nested user agent's user interface is entirely web-based and complies with Web Content Accessibility Guidelines (e.g. providing alternative text and supporting WAI-ARIA where needed) the host will understand it well enough to provide a bridge between it and platform accessibility services.
- Examples of Success Criterion 4.1.1 :
- A browser vendor is developing a new type of button bar for their Microsoft Windows product. When coding the new component the developer includes support for Microsoft Active Accessibility (MSAA) so that assistive technologies can recognize it as representing a toolbar, and can identify, navigate, and activate the bar and its buttons on the user's behalf.
- Related Resources for Success Criterion 4.1.1:
4.1.2 Name, Role, State, Value,
Description:
For all user interface components, including user interface, rendered content, generated content, and alternative content, the user agent makes available the name, role, state, value,
and description via platform accessibility services. (Level A)
- Intent of Success Criterion 4.1.2:
The information that assistive technology requires is the:
- Name (component name)
- Role (purpose, such as alert, button, checkbox, etc.)
- State (current status, such as busy, disabled, hidden, etc.)
- Value (information associated with the component, such as the data in a text box, the position number of a slider, the date in a calendar widget)
- Description (user instructions about the component)
For every component developed for the user agent, pass this information to the appropriate accessibility platform architecture or application programming interface (API). Embedded user agents, like media players, can pass Name, Role, State, Value and Description via WAI-ARIA techniques.
- Examples of Success Criterion 4.1.2 :
- A media player implements a slider to control the sound volume. The developer codes the component to pass the following information to the accessibility API:
- Name: "Volume control"
- Role: slider
- States & Values: aria-valuenow (the slider's current value), aria-valuemin (the minimum of the value range), aria-valuemax (the maximum of the value range)
- Description: aria-describedby = 'Use the right or left arrow key to change the sound volume.'
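For a web-based (embedded) component, the same information could be exposed in markup with standard WAI-ARIA attributes; a minimal sketch:

```html
<!-- Custom volume slider exposing name, role, state/value, and
     description through WAI-ARIA attributes. -->
<p id="volume-help">Use the right or left arrow key to change the sound volume.</p>
<div role="slider"
     aria-label="Volume control"
     aria-valuemin="0"
     aria-valuemax="100"
     aria-valuenow="35"
     aria-describedby="volume-help"
     tabindex="0">
</div>
```

A host user agent maps these attributes to the platform accessibility service, so an assistive technology sees the same name, role, state, value, and description it would for a native control.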
- Related Resources for Success Criterion 4.1.2:
-
http://msdn.microsoft.com/en-us/library/ms697187
4.1.3 Accessible
Alternative:
If a component of the user agent user interface cannot be exposed through the platform accessibility services, then the user agent provides an equivalent alternative that is exposed through the platform accessibility service. (Level A)
- Intent of Success Criterion 4.1.3:
Users who rely on assistive technology need to be able to carry out all tasks provided by the user agent, just like everyone else. When a particular user interface component cannot be exposed through the platform accessibility service, and thus can't be made compatible with assistive technology, the user agent should let the user achieve the same goal using another component that is fully accessible.
- Examples of Success Criterion 4.1.3:
- The user agent provides a single, complex control for 3-dimensional manipulation of a virtual object. This custom control cannot be represented in the platform accessibility service, so the user agent provides the user the option to achieve the same functionality through an alternate user interface, such as a panel with several basic controls that adjust the yaw, spin, and roll independently.
- Related Resources for Success Criterion 4.1.3:
4.1.4 Programmatic Availability of
DOMs:
If the user agent implements one or more DOMs, they must be
made programmatically available to assistive technologies. (Level A)
- Intent of Success Criterion 4.1.4:
User agents (and other applications) and assistive technologies use a combination of DOMs, accessibility APIs, native platform APIs, and hard-coded heuristics to provide an accessible user interface and accessible content (http://accessibility.linuxfoundation.org/a11yspecs/atspi/adoc/a11y-dom-apis.html). It is the user agent's responsibility to expose all relevant content to the platform accessibility API. Alternatively, the user agent must respond to requests for information from APIs.
- Examples of Success Criterion 4.1.4 :
- In user agents today, an author may inject content into a web page using CSS (generated content). This content is written to the screen and the CSS DOM. The user agent does not expose this generated content from the CSS DOM (as per the CSS recommendation) to the platform accessibility API or to the HTML DOM, so the generated content is non-existent to an assistive technology user. The user agent should expose all information from all DOMs to the platform accessibility API.
-
A web page is a compound document containing HTML, MathML, and SVG. Each has a separate DOM. As the user moves through the document, they are moving through multiple DOMs. The transition between DOMs is seamless and transparent to the user and their assistive technology. All of the content is read and all of the interaction is available from the keyboard regardless of the underlying source code or the respective DOM.
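The generated-content case above is easy to reproduce: the text inserted by a rule like the one below exists only in the CSS object model, not in the HTML DOM, so the user agent itself must expose it to the accessibility API.

```css
/* The "Required: " prefix is rendered on screen but never appears
   in the HTML DOM; only the user agent can expose it to the
   platform accessibility API. */
label.required::before {
  content: "Required: ";
}
```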
- Related Resources for Success Criterion 4.1.4:
4.1.5 Write Access:
If the user can modify the state or value of a piece of content through the user interface (e.g., by checking a box or editing a text area), the same degree of write access is available programmatically.
(Level A)
- Intent of Success Criterion 4.1.5:
If the user can affect the user interface using any form of input, the same effect can be achieved through programmatic access. It is often more reliable for assistive technology to use the programmatic method of access rather than attempting to simulate mouse or keyboard input.
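The difference can be sketched with a slider-like control: instead of simulating arrow presses until the value looks right, the assistive technology writes the value directly through a setter that the control validates. The object and method names below are hypothetical stand-ins for the "set value" operation a platform accessibility service exposes on the real control.

```javascript
// Sketch of programmatic write access to a slider-like control.
function makeSlider(min, max, value) {
  return {
    min, max, value,
    setValue(v) {                       // direct write access: no simulated
      if (v < this.min || v > this.max) // mouse clicks or arrow presses
        throw new RangeError("value out of range");
      this.value = v;
      return this.value;
    }
  };
}
```

A speech input utility handling "Volume 35%" would call `setValue(35)` once, rather than stepping the control and re-reading it until it reaches 35.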
- Examples of Success Criterion 4.1.5:
- When the user says the phrase 'Volume 35%' their speech input utility can programmatically set the value of the volume slider to 35%, rather than having to use trial and error by simulating mouse clicks or arrow presses to try to find the 35% point.
- Francois directs his third-party macro utility to set the value of a tri-state check box to "mixed". Even though the control would normally need to be cycled through its states of "on", "off", and "mixed", the macro utility can set the control directly to the desired state.
- Related Resources for Success Criterion 4.1.5:
4.1.6 Expose Accessible Properties:
If any of
the following properties are supported by the platform accessibility services, make the properties available to the accessibility platform
architecture: (Level A)
- the bounding dimensions and coordinates of onscreen elements
- font family of text
- font size of text
- foreground color of text
- background color of text
- change state/value notifications
- selection
- highlighting
- input device focus
- direct keyboard commands
- underline of menu items (keyboard command/shortcuts)
- Intent of Success Criterion 4.1.6:
These properties are used by assistive technology to create alternative views of the user agent user interface and rendered content as well as providing alternative means for the user to interact with these items. This applies to both user agent user interface (e.g. menus and dialog boxes) and to recognized aspects of the user content (e.g. HTML script using ARIA to indicate focus on custom controls).
- Examples of Success Criterion 4.1.6:
- Kiara loads a new version of a popular web browser for the first time. She puts her screen reader into an "explore mode" that lets her review what is appearing on the screen. Her screen reader uses the bounding rectangle of each element to tell her that items from the menu bar all appear on the same horizontal line, which is below the window's title bar.
- Kiara is using a screen reader at a telephone call center. The Web application displays caller names in different colors depending on their banking status. Kiara needs to know this information to appropriately respond to each customer immediately, without taking the time to look up their status through other means.
-
Max uses a screen magnifier that only shows him a small amount of the screen at one time. He gives it commands to pan through different portions of a Web page, but then can give it additional commands to quickly pan back to positions of interest, such as the text matched by the recent Search operation, text that he previously selected by dragging the mouse, or the text caret, rather than having to manually pan through the document searching for them.
- Related Resources for Success Criterion 4.1.6:
4.1.7 Timely Communication:
For APIs implemented to satisfy the requirements of UAAG 2.0, ensure that programmatic exchanges proceed at a rate such that users do not perceive a delay. (Level A)
- Intent of Success Criterion 4.1.7:
Conveying information for accessibility can often involve extensive communication between a user agent, an accessibility API, document object model, assistive technology and end user interaction. The objective is to ensure that the end user does not perceive a delay when interacting with the user agent.
- Examples of Success Criterion 4.1.7:
- Bonita accesses her web browser with a speech input program. She navigates to a web page and speaks the name of a link she wants to click. The link is activated with the same speed as it would be if a mouse had been used to click the link.
- Arthur is browsing a web page with a screen reader. As he tabs from link to link, the text of each link instantly appears on his braille display.
- Related Resources for Success Criterion 4.1.7:
PRINCIPLE 5: Comply with applicable
specifications and conventions
Implementing Guideline 5.1 - Comply with applicable specifications and conventions.
[Return to Guideline]
Summary: When the browser's menus, buttons, dialogs, etc. are authored in HTML or similar standards, they need to meet W3C's Web Content Accessibility Guidelines.
5.1.1 WCAG Compliant [was 5.2.1]:
Web-based user agent user interfaces meet the WCAG 2.0 success criteria: Level A to meet WCAG 2.0 Level A success criteria; Level AA to meet WCAG 2.0 Level A and AA success criteria; Level AAA to meet all WCAG 2.0 success criteria. (Level AAA)
Note: This success criterion does not apply to non-Web-based user agent user interfaces,
but does include any parts of non-Web-based user agents that are
Web-based (e.g. help systems).
- Intent of Success Criterion 5.1.1:
Web-based applications which are intended to replace or enhance a desktop user agent or its functionality, but are in fact built and rendered using standard Web technologies, are becoming increasingly common. These Web applications, windowless browsers, rich internet applications, HTML5 canvas, etc., perform similar functions to their desktop cousins and so must also conform to the accessibility requirements placed on a desktop user agent.
- Examples of Success Criterion 5.1.1:
- Success criterion 2.7.1 requires that a user agent enable a user to change settings that impact accessibility. In this case we would expect that a web-based user agent should also enable a user to change accessibility settings specific to its functionality, which may in some cases enhance or override that of the platform on which it is executing: window-less browser, native operating system, etc.
- Related Resources for Success Criterion 5.1.1:
- WAI-ARIA 1.0 User Agent Implementation Guide
- W3C Web Design and Applications Activity
5.1.2 Implement accessibility features of content specs [was 5.3.1]:
Implement and cite in the conformance claim the accessibility
features of content
specifications. Accessibility features are those that are either (Level A):
- identified as such in the specification or
- allow authors to satisfy a requirement of WCAG.
- Intent of Success Criterion 5.1.2:
- Most content specifications include features important to users with disabilities, and users may find it difficult or impossible to use a product that fails to support those features. Users should be able to easily discover detailed information about the user agent's adherence to accessibility standards, including those related to content such as HTML and WAI-ARIA, and should be able to do so without installing and testing the accessibility features. This will allow them to make informed decisions about whether they will be able to use, and therefore should install, a new product or version of that product.
- Examples of Success Criterion 5.1.2:
- Jordy uses a website which uses WAI-ARIA to identify the functions of custom controls. If he used a web browser that didn't support this aspect of WAI-ARIA and expose that information to assistive technology, the website would be unusable with his web browser. Therefore Jordy needs to choose a web browser that he knows fully supports WAI-ARIA, and he determines this by reading product documentation and UAAG conformance claims posted on the Web.
- Related Resources for Success Criterion 5.1.2:
- WCAG
- HTML
- CSS
- WAI-ARIA
5.1.3 Implement Accessibility Features of platform:
If the user agent contains non-web-based user interfaces, then those user interfaces follow user interface accessibility guidelines for the platform. (Level A)
- Intent of Success Criterion 5.1.3:
- The intent of this success criterion is to ensure that user agent user interfaces that are not web applications are more accessible to users with disabilities. Existing platform accessibility guidelines are referenced because accessibility guidelines already exist for many platforms, and this wording allows developers the flexibility to conform with accessibility legislation in their markets.
- Most operating systems have conventions and expectations that aid accessibility, such as keyboard behavior, support of an accessibility API, user interface design, and other standards related to accessibility. The intent of this success criterion is to ensure that user agents comply with the basic accessibility requirements of the platform in use.
The user should be able to easily discover detailed information about the user agent's adherence to accessibility standards, platform standards such as MSAA or JAA, and third-party standards such as ISO 9241-171, and should be able to do so without installing and testing the accessibility features.
- Note: Developers should see the documents listed in the "Related Resources for Success Criterion 5.1.3" section. Unless special circumstances exist (e.g., a document has been superseded, the platform has undergone major architectural changes), the listed resources should be assumed to be relevant to the platforms listed.
- Examples of Success Criterion 5.1.3:
- If you are developing for the Gnome platform, consult the Gnome Accessibility Developers Guide. For example, the Keyboard Focus section states: "Show current input focus clearly at all times. Remember that in controls that include a scrolling element, it is not always sufficient to highlight just the selected element inside that scrolling area, as it may not be visible." If your program controls focus, make sure you conform to this accessibility guideline for focus.
- Mobile browser: The developer of a browser app for the iPhone platform follows the guidance provided in the "Accessibility Programming Guide for iPhone OS".
In the conformance claim, list the requirements you fully comply with, list the requirements you partially comply with and explain, and list the requirements you do not comply with and explain. Where applicable, these explanations can be general and cover several sections at once.
- Related Resources for Success Criterion 5.1.3:
- [APPLE-ACCESS]
- "Introduction to Accessibility Overview," Apple Computer Inc.
- [CARBON-ACCESS]
- "Introduction to Accessibility Programming Guidelines for Carbon," Apple Corporation.
- [COCOA-ACCESS]
- "Introduction to Accessibility Programming Guidelines for Cocoa," Apple Corporation.
- [EITAAC]
- "EITAAC Desktop Software standards," Electronic Information Technology Access Advisory (EITAAC) Committee.
- [GNOME-ACCESS]
- "GNOME Accessibility for Developers," C. Benson, B. Cameron, B. Haneman, S. Snider, P. O'Briain, The GNOME Accessibility Project.
- [GNOME-API]
- "Gnome Accessibility Toolkit API"
- [GNOME-KDE-KEYS]
- "Gnome/KDE Keyboard Shortcuts," Novell Corporation.
- [IBM-ACCESS]
- "Software Accessibility," IBM Special Needs Systems.
- [IEC-4WD]
- IEC/4WD 61966-2-1: Colour Measurement and Management in Multimedia Systems and Equipment - Part 2.1: Default Colour Space - sRGB. May 5, 1998.
- [ISO-TS-16071]
- "Ergonomics of human-system interaction -- Guidance on accessibility for human-computer interfaces". International Organization for Standardization.
- [JAVA-ACCESS]
- "IBM Guidelines for Writing Accessible Applications Using 100% Pure Java," R. Schwerdtfeger, IBM Special Needs Systems.
- [JAVA-API]
- " Java Accessibility Package"
- [JAVA-CHECKLIST]
- "Java Accessibility Guidelines and Checklist," IBM Special Needs Systems.
- [MACOSX-KEYS]
- "Mac OS X keyboard shortcuts," Apple Corporation.
- [MS-ENABLE]
- "Designing Accessible Applications," Microsoft Corporation.
- [MS-WIN7-ACCESS]
- "Engineering Software For Accessibility", Microsoft Corporation.
- [MS-KEYS]
- "Keyboard shortcuts for Windows," Microsoft Corporation.
- [NOTES-ACCESS]
- "Lotus Notes application accessibility," IBM Corporation.
- [SUN-DESIGN]
- "Designing for Accessibility," Eric Bergman and Earl Johnson. This paper discusses specific disabilities including those related to hearing, vision, and cognitive function.
- [Editors' Note: Resource links from Jim - compare and expand]
- http://www.microsoft.com/windowsxp/using/accessibility/default.mspx
- http://www.apple.com/accessibility/
- http://www.linux.org/docs/ldp/howto/Accessibility-HOWTO/linuxos.html
- http://www.linuxfoundation.org/collaborate/workgroups/accessibility/iaccessible2
- http://developer.apple.com/ue/accessibility/
- http://msdn.microsoft.com/en-us/library/dd373592%28VS.85%29.aspx
- http://msdn.microsoft.com/en-us/windows/ee815673.aspx
5.1.4 Handle Unrendered Technologies:
If the user agent does not render a technology, the user can choose a way to handle content in that technology (e.g. by launching another application or by saving it to disk). (Level A)
- Intent of Success Criterion 5.1.4:
Users who have disabilities may have fewer options in terms of how they access the information. Information is made available in a variety of ways on the Internet, and at times a specific format may be the only way in which information is available. If the user agent cannot render that format, it should let the user access that content through alternate means, such as invoking a third-party renderer or saving the file to the user's hard drive.
- Examples of Success Criterion 5.1.4:
- Tracy has low vision and finds it much more convenient to access her bank statement electronically than on paper, even though the electronic version is in a TIFF image, a format that her browser cannot render. In this case, the browser lets her save the image to her hard drive so she can open it in another program.
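The fallback behavior this success criterion describes can be sketched as follows; the renderer set and the choice mechanism are hypothetical simplifications:

```python
# Sketch of the fallback path for content a user agent cannot render.
# The renderable set and the choice mechanism are hypothetical.
RENDERABLE = {"text/html", "image/png", "image/jpeg"}

def offer_choices(media_type, choices):
    # A real user agent would prompt the user here; to keep the sketch
    # self-contained we simply take the first offered option.
    return choices[0]

def handle_content(media_type: str) -> str:
    """Return the action taken for content of the given media type."""
    if media_type in RENDERABLE:
        return "render"
    # Cannot render: let the user choose an alternative, e.g. saving
    # the resource to disk or opening it in another application.
    return offer_choices(media_type, ["save-to-disk", "open-with"])
```

In Tracy's example, a TIFF statement would fall through to the user's choice, such as saving the image to disk for another program to open.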
- Related Resources for Success Criterion 5.1.4:
5.1.5 Alternative content handlers:
The user can select content elements and have them rendered in alternative viewers. (Level AA)
- Intent of Success Criterion 5.1.5:
When accessing media or specialized content (e.g. MathML) on the Web, users with disabilities sometimes find they have a richer or more accessible experience using a third-party application, plug-in, or extension than using the browser's built-in facilities. In these cases they want to be able to navigate to content in their browser and then enable or activate a browser plug-in or extension to interact with the content. Alternately, they may elect to save that content to disk and launch it in a third-party application. In the case of streaming video that cannot be saved to disk, the browser launches the external viewer, passing it the URL of the online video.
- Examples of Success Criterion 5.1.5:
- A browser supports the VIDEO tag and adds its own play and pause controls, but George prefers to view the video content in a third-party application that provides much more sophisticated navigation controls such as bookmarks, skip forward and backward, and the ability to speed playback without increasing the pitch of the audio track. In the browser, he right-clicks on the video to display a context menu, and from that chooses "Open in…", and then chooses his preferred video player. The browser saves the video to a temporary location on the user's disk (or uses a copy already in its cache folder), then launches the player to show that file.
- Jukka is visually impaired and a scientist whose work involves mathematical models for speech recognition. Many of the journals he reads online are beginning to include MathML to display equations. Jukka finds the native support for MathML accessibility in his Web browser to be generally compatible with his screen reader, but it can become unreliable for extremely complex equations. In those cases, Jukka selects an alternate rendering plugin via a context menu to make the MathML understandable to his screen reader.
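The "open in an alternative viewer" flow in these examples can be sketched as follows. Whether the external viewer receives a file path or a URL depends on whether the content can be saved to disk; function and variable names are illustrative:

```python
# Sketch of "Open in..." dispatch to an external viewer. Streaming
# content that cannot be saved is handed over by URL; other content is
# written to a temporary file first. Names are illustrative.
import os
import tempfile
from typing import Optional

def open_in_external_viewer(url: str, cached_bytes: Optional[bytes]) -> str:
    """Return the argument that would be passed to the external viewer."""
    if cached_bytes is None:
        # Streaming media: pass the URL straight to the viewer.
        return url
    # Otherwise write a temporary copy and pass its path.
    fd, path = tempfile.mkstemp(suffix=".media")
    with os.fdopen(fd, "wb") as f:
        f.write(cached_bytes)
    return path
```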
- Related Resources for Success Criterion 5.1.5:
5.1.6 Enable Reporting of User Agent Accessibility Faults:
The user agent provides a mechanism for users to report user agent accessibility issues. (Level AAA)
- Intent of Success Criterion 5.1.6:
People who use assistive technologies such as screen readers may find that a technology isn't fully compatible with a Web browser, or that the browser doesn't accessibly render content authored in compliance with WCAG. This causes information loss and inconvenience. When this happens, users with disabilities will benefit from being able to easily file a report with the user agent vendor about the incompatibility, similar to the way users can file bug reports or provide feedback. A contact link on a website is not sufficient to satisfy this success criterion.
- Examples of Success Criterion 5.1.6:
- Alice is a visually impaired college student who frequently uses a refreshable braille display with her Web browser. Occasionally she experiences difficulty with Web content containing elements such as drop-down list boxes or complex menus, where the text is only partially rendered on the braille display. Alice notices this incompatibility and navigates to the feedback section of her browser. After providing some basic information (such as AT software and computer hardware used), Alice is able to describe the problem she encountered and submit the report to the browser vendor.
- Fred has low vision and uses screen magnification software to enlarge displayed information and alter color contrast. While reading content on a website in compliance with WCAG, Fred discovers that the scroll bars are not visible and he cannot scroll further down the page. Unable to read the information he needs, Fred selects the Feedback option from his browser's Tools menu, and is presented with a dialog where he can select which information to transmit to the browser vendor. He chooses to include the current URL of the page he is trying to view, system information including OS and screen magnifier version, and enters a description of the problem he is having. Though not required, Fred chooses to provide his email address so that the vendor may contact him for further details or to provide a workaround.
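The reporting dialogs in these examples share a common shape: a mandatory problem description, plus details the user explicitly opts in to sharing. A minimal illustrative sketch, with field names invented for the purpose (none are part of UAAG 2.0):

```python
# Illustrative shape of an accessibility fault report: one mandatory
# description, plus details the user explicitly opts in to sharing.
# Field names are invented for illustration.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AccessibilityFaultReport:
    description: str                     # mandatory free-text field
    page_url: Optional[str] = None       # included only if the user opts in
    at_software: Optional[str] = None    # e.g. screen magnifier and version
    os_version: Optional[str] = None
    contact_email: Optional[str] = None  # optional, for follow-up only

def serialize(report: AccessibilityFaultReport) -> dict:
    """Drop fields the user declined to share before transmission."""
    return {k: v for k, v in asdict(report).items() if v is not None}
```

Making every field except the description optional reflects the examples above: Alice and Fred each decide which system details to transmit, and contact information is never required.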
- Related Resources for Success Criterion 5.1.6:
Applicability Note:
When a rendering requirement of another specification contradicts a
requirement of UAAG 2.0, the user agent may disregard the rendering
requirement of the other specification and still satisfy this guideline.
Conformance
This section is normative.
Conformance means that the user agent satisfies the success criteria
defined in the guidelines section. This conformance section describes
conformance and lists the conformance requirements.
Conformance Requirements
In order for a user agent to conform to UAAG 2.0, one of the following levels of conformance must be met in full.
- Level A: For Level A conformance (the minimum level of conformance), the
user agent satisfies all the Level A Success Criteria.
- Level AA: For Level AA conformance, the user agent satisfies all the
Level A and Level AA Success Criteria.
- Level AAA: For Level AAA conformance, the user agent satisfies all the
Level A, Level AA and Level AAA Success Criteria.
Note 1: Although conformance can only be achieved at the stated levels,
developers are encouraged to report (in their claim) any progress toward
meeting success criteria from all levels beyond the achieved level of
conformance.
Conformance Claims (Optional)
If a conformance claim is made, the conformance claim must meet the
following conditions and include the following information (user agents
can conform to UAAG 2.0 without making a claim):
Conditions on Conformance Claims
- At least one version of the conformance claim must be published on the
web as a document meeting level "A" of WCAG 2.0. A suggested metadata
description for this document is "UAAG 2.0 Conformance Claim".
- Whenever the claimed conformance level is published (e.g. product
information website), the URI for the on-line published version of the
conformance claim must be included.
- The existence of a conformance claim does not imply that the W3C has
reviewed the claim or assured its validity.
- Claimants may be anyone (e.g. user agent developers, journalists, other
third parties).
- Claimants are solely responsible for the accuracy of their claims
(including claims that include products for which they are not
responsible) and keeping claims up to date.
- Claimants are encouraged to claim conformance to the most recent version
of the User Agent Accessibility Guidelines Recommendation.
Required Components of an UAAG 2.0 Conformance Claim
- Claimant name and affiliation.
- Date of the claim.
- Conformance level satisfied.
- User agent information: The name of the user agent and sufficient
additional information to specify the version (e.g. vendor name,
version number (or version range), required patches or updates, human
language of the user interface or documentation).
Note: If the user agent is a collection of software components (e.g. a
browser and extensions or plugins), then the name and version information must be provided
separately for each component, although the conformance claim will treat
them as a whole. As stated above, the Claimant has sole responsibility
for the conformance claim, not the developer of any of the software
components.
- Included Technologies: A list of the web content technologies
(including version numbers) rendered by the user agent that the Claimant
is including in the conformance claim. By including a web content
technology, the Claimant is claiming that the user agent meets the
requirements of UAAG 2.0 during the rendering of web content using that
web content technology.
Note 1: Web content technologies may be a combination of constituent web
content technologies. For example, an image technology (e.g. PNG) might
be listed together with a markup technology (e.g. HTML) since web
content in the markup technology is used to make web content in the image
technology accessible (e.g. a PNG graph is made accessible using an
HTML table).
- Excluded Technologies: A list of any web content technologies produced
by the user agent that the Claimant is excluding from the
conformance claim. The user agent is not required to meet the
requirements of UAAG 2.0 during the production of the web content
technologies on this list.
- Declarations: For each success criterion, either:
  - a declaration of whether or not the success criterion has been satisfied; or
  - a declaration that the success criterion is not applicable and a rationale for why not.
- Platform(s): The platform(s) upon which the user agent was evaluated:
  - For user agent platform(s) (used to evaluate web-based user agent user interfaces): provide the name and version information of the user agent(s).
  - For platforms that are not user agents (used to evaluate non-web-based user agent user interfaces): provide the name and version information of the platform(s) (e.g. operating system) and the name and version of the platform accessibility service(s) employed.
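A claimant might check a draft claim against the required components listed above. The sketch below is illustrative only; the field names are not mandated by UAAG 2.0:

```python
# Sketch: checking a draft claim against the required components listed
# above. Field names are illustrative, not mandated by UAAG 2.0.
REQUIRED = ("claimant", "date", "conformance_level", "user_agent",
            "included_technologies", "declarations", "platforms")

def missing_components(claim: dict) -> list:
    """Return the names of required components absent from the claim."""
    return [key for key in REQUIRED if not claim.get(key)]

example_claim = {
    "claimant": "Example Org",
    "date": "2012-10-04",
    "conformance_level": "AA",
    "user_agent": {"name": "ExampleBrowser", "version": "1.2"},
    "included_technologies": ["HTML 4.01", "PNG"],
    "declarations": {"5.1.4": "satisfied"},
    "platforms": ["ExampleOS 7 with ExampleAccessibilityAPI 2"],
}
```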
Optional Components of an UAAG 2.0 Conformance Claim
A description of how the UAAG 2.0 success criteria were met where this
may not be obvious.
"Progress Towards Conformance" Statement
Developers of user agents that do not yet conform fully to a particular
UAAG 2.0 conformance level are encouraged to publish a statement on
progress towards conformance. The progress statement is the same as a
conformance claim, except that it identifies the UAAG 2.0 conformance
level being progressed towards, rather than one already satisfied, and
reports progress on success criteria not yet met. Authors of "Progress
Towards Conformance" statements are solely responsible for the accuracy
of their statements. Developers are encouraged to provide expected
timelines for meeting outstanding success criteria within the statement.
Disclaimer
Neither W3C, WAI, nor UAWG take any responsibility for any aspect or
result of any UAAG 2.0 conformance claim that has not been published
under the authority of the W3C, WAI, or UAWG.
This glossary is normative.
- accelerator key
- see keyboard command
- activate
- To carry out the behaviors associated
with an enabled element in the rendered
content or a component of the user agent user
interface.
- active input focus
- see focus
- active selection
- see focus
- alternative content
- Content that can be used in place of default content that may not be universally accessible. Alternative content fulfills the same purpose as the original content. Examples include text alternatives for non-text content, captions for audio, audio descriptions for video, sign language for audio, media alternatives for time-based media. See WCAG for more information.
- alternative content
stack
- A set of alternative content items. The items may be mutually exclusive (e.g.
regular contrast graphic vs. high contrast graphic) or non-exclusive
(e.g. caption track that can play at the same time as a sound
track).
- animation
- Graphical content rendered to automatically change over time, giving the user a visual perception of movement. Examples include video, animated images, scrolling text, programmatic animation (e.g. moving or replacing rendered objects).
- application
programming interface (API), (conventional input/output/device
API)
- An application programming interface (API) defines how
communication may take place between applications.
- assistive technology
- An assistive
technology:
- relies on services (such as retrieving Web
resources and parsing markup) provided by one or more other
"host" user agents. Assistive technologies communicate data and
messages with host user agents by using and monitoring APIs.
- provides services beyond those offered by the host user agents to
meet the requirements of users with disabilities. Additional
services include alternative renderings (e.g. as synthesized
speech or magnified content), alternative input methods (e.g.
voice), additional navigation or orientation mechanisms, and
content transformations (e.g. to make tables more accessible).
Examples of assistive technologies that are important in the context
of UAAG 2.0 include the following:
- screen magnifiers, which are used by people with visual
disabilities to enlarge and change colors on the screen to improve
the visual readability of rendered text and images.
- screen readers, which are used by people who are blind or have
reading disabilities to read textual information through
synthesized speech or braille displays.
- voice recognition software, which is used by some people who have
physical disabilities to simulate the keyboard and mouse.
- alternative keyboards, which are used by some people with
physical disabilities to simulate the keyboard and mouse.
- alternative pointing devices, which are used by some people with
physical disabilities to simulate mouse pointing and button
activations.
- Beyond UAAG 2.0, assistive technologies consist
of software or hardware that has been specifically designed to assist
people with disabilities in carrying out daily activities. These
technologies include wheelchairs, reading machines, devices for
grasping, text telephones, and vibrating pagers. For example, the
following very general definition of "assistive technology device"
comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:
Any item, piece of equipment, or product system, whether acquired
commercially, modified, or customized, that is used to increase,
maintain, or improve functional capabilities of individuals with
disabilities.
- audio
- The technology of sound reproduction. Audio can be created synthetically (including speech synthesis), streamed from a live source (such as a radio broadcast), or recorded from real world sounds.
- audio description - (described video, video description or descriptive narration)
- An equivalent alternative that takes the form of narration added to
the audio to describe important visual details
that cannot be understood from the main soundtrack alone. Audio
description of video provides information about actions, characters,
scene changes, on-screen text, and other visual content. In standard
audio description, narration is added during existing pauses in
dialogue. In extended audio
description, the video is paused so that there is time to add
additional description.
- authors
- The people who have worked either alone or collaboratively to create
the content (e.g. content authors, designers, programmers,
publishers, testers).
- author styles
- See Style properties
- background images
- Images that are rendered on the base background.
- base
background
- The background of the content as a whole, such that
no content may be layered behind it. In graphics applications the base
background is often referred to as the canvas.
- blinking
text
- Text whose visual rendering alternates between visible and invisible
at any rate of change.
- captions (caption)
- An equivalent alternative that takes the form of text presented and synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some
countries, the term "subtitle" is used to refer to dialogue only and
"captions" is used as the term for dialogue plus sounds and speaker
identification. In other countries, "subtitle" (or its translation) is
used to refer to both. Open captions are captions that are
always rendered with a visual track; they cannot be turned off.
Closed captions are captions that may be turned on and off.
The captions requirements of UAAG 2.0 assume that the user agent
can recognize the captions as such.
Note: Other terms that include the word "caption" may
have different meanings in UAAG 2.0. For instance, a "table
caption" is a title for the table, often positioned graphically above
or below the table. In UAAG 2.0, the intended meaning of "caption"
will be clear from context.
- collated text
transcript
- A collated text transcript is a text equivalent of a movie or
other animation. It is the combination of the text transcript of the audio track and the text equivalent
of the visual track. For example, a
collated text transcript typically includes segments of spoken dialogue
interspersed with text descriptions of the key visual elements of a
presentation (actions, body language, graphics, and scene changes). See
also the definitions of text
transcript and audio description. Collated
text transcripts are essential for people who are deaf-blind.
- command, direct command, direct navigation command, direct activation command, linear navigation command, spatial (directional) command, structural navigation command
- direct navigation commands move focus to a specified item regardless of which item currently has the focus
- direct activation commands activate a specified item regardless of which item currently has the focus; they may move the focus to the item before immediately activating it
- linear navigation commands (sometimes called logical or sequential navigation commands) move forwards and backwards through a list of items
- structural navigation commands move forwards, backwards, up and down a hierarchy
- spatial commands (sometimes called directional commands), require the user to be cognizant of the spatial arrangement of items on the screen:
- spatial navigation commands move from one item to another based on direction on the screen
- spatial manipulation commands resize or reposition an item on the screen
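The distinction between linear and structural navigation commands can be illustrated with a toy item tree; the tree shape and function names are invented for illustration, and direct and spatial commands are omitted for brevity:

```python
# Toy sketch contrasting linear and structural navigation commands.
# The tree shape and function names are invented for illustration.
tree = {"root": ["heading", "list"], "list": ["item1", "item2"]}
linear_order = ["root", "heading", "list", "item1", "item2"]  # depth-first

def linear_next(current: str) -> str:
    """Linear navigation: move forward through the flattened item list."""
    i = linear_order.index(current)
    return linear_order[min(i + 1, len(linear_order) - 1)]

def structural_down(current: str) -> str:
    """Structural navigation: descend to the first child, if any."""
    children = tree.get(current, [])
    return children[0] if children else current
```

Linear navigation walks every item in document order, while structural navigation lets the user skip across or into whole subtrees, which is why it is far more efficient for long lists and deep hierarchies.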
- content (web content), empty content, reflowable content
- Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions [adapted from WCAG 2.0]
empty content (which may be alternative content) is either a null value or an empty string (e.g. one that is zero characters long). For instance, in HTML, alt="" sets the value of the alt attribute to the empty string. In some markup languages, an element may have empty content (e.g. the HR element in HTML).
reflowable content is content that can be arbitrarily wrapped over multiple lines. The primary exceptions to reflowable content are graphics and video.
- continuous scale
- When interacting with a time-based media presentation, a continuous scale allows user (or programmatic) action to set the active playback position to any time point on the presentation timeline. The granularity of the positioning is determined by the smallest resolvable time unit in the media timebase.
- cursor
- see focus
- default
- see properties
- direct command, direct navigation command, direct activation command, linear navigation command, spatial (directional) command, structural navigation command
- see command
- dimensions
- A viewport has spatial dimensions, and may also
have temporal dimensions, for instance when audio, speech, animations,
and movies are rendered. When the dimensions (spatial or temporal) of
rendered content exceed the dimensions of the viewport, the user agent
provides mechanisms such as scroll bars and advance and rewind controls
so that the user can access the rendered content "outside" the
viewport. Examples include: when the user can only view a portion of a
large document through a small graphical viewport, or when audio
content has already been played.
- document character set
- The internal representation of data in the source content by a user agent.
- document object, (Document Object Model, DOM)
- The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. An overview of DOM-related materials is available at http://www.w3.org/DOM/#what.
- document source, (text source)
- Text the user agent renders upon user request to view the source of specific viewport content (e.g. selected content, frame, page).
- documentation
- Any information that supports the use of a user agent. This information may be found, for example, in manuals, installation instructions, the help system, and tutorials. Documentation may be distributed (e.g. as files installed as part of the installation, some parts may be delivered on CD-ROM, others on the Web). See guideline 5.3 for information about
documentation.
- element, element type
- UAAG 2.0 uses the terms "element" and "element
type" primarily in the sense employed by the XML 1.0 specification
([XML], section 3): an element
type is a syntactic construct of a document type definition (DTD) for
its application. This sense is also relevant to structures defined by
XML schemas. UAAG 2.0 also uses the term "element" more generally
to mean a type of content (such as video or sound) or a logical
construct (such as a header or list).
- empty content
- see content
- enabled element, disabled element
- An element with associated behaviors that can be activated through the user interface or through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of elements defined by implemented markup languages. A disabled element is a potentially enabled element that is not currently available for activation (e.g. a "grayed out" menu item).
- equivalent alternative
- An acceptable substitute for content that a user may not be able to access. An equivalent alternative fulfills essentially the same function or purpose as the original content upon presentation:
- text alternative: text that is available via the operating environment that is used in place of non-text content (e.g. text equivalents for images, text transcripts for audio tracks, or collated text transcripts for a movie). [from WCAG 2.0]
- full text alternative for synchronized media including any interaction: document including correctly sequenced text descriptions of all visual settings, actions, speakers, and non-speech sounds, and transcript of all dialogue combined with a means of achieving any outcomes that are achieved using interaction (if any) during the synchronized media. [from WCAG 2.0]
- synchronized alternatives: present essential audio information visually (i.e. captions) and essential video information in an auditory manner (i.e. audio descriptions).
[from ATAG 2.0]
- events and
scripting, event handler, event type
- User agents often perform a task when an event
having a particular "event type" occurs, including a user interface
event, a change to content, loading of content, or a request from the
operating environment.
Some markup languages allow authors to specify that a script, called an
event
handler, be executed when an event of a given type occurs. An
event handler is explicitly associated with an
element through scripting, markup or the DOM.
- explicit user request
- An interaction by the user through the user
agent user interface, the focus, or the selection. User requests are made, for example, through user
agent user interface controls and keyboard commands. Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device. Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." This type of error is considered an explicit user request.
- focus (active input focus, active selection, cursor, focus cursor, focusable element, highlight, inactive input focus, inactive selection, input focus, keyboard focus, pointer, pointing device focus, selection, split focus, text cursor)
Hierarchical Summary of some focus terms
- Input Focus (active/inactive)
- Keyboard Focus (active/inactive)
- Cursor (active/inactive)
- Focus cursor (active/inactive)
- Text cursor (active/inactive)
- Pointing device focus (active/inactive)
- active input focus
- The input focus location in the active viewport. The active input focus is in the active viewport, while the inactive input focus is in an inactive viewport. The active input focus is usually visibly indicated. In UAAG 2.0 "active input focus" generally refers to the active keyboard input focus.
- active selection
- The selection that will currently be affected by a user command, as opposed to selections in other viewports, called inactive selections, which would not currently be affected by a user command.
- conform
- see support
- cursor
- Visual indicator showing where keyboard input will occur. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field, also called a 'caret'). Cursors are active when in the active viewport, and inactive when in an inactive viewport.
- focus cursor
- Indicator that highlights a user interface element to show that it has keyboard focus, e.g. a dotted line around a button, or a brightened title bar on a window. See also text cursor.
- focusable element
- Any element capable of having input focus, e.g. link, text box, or menu item. In order to be accessible and fully usable, every focusable element should take keyboard focus, and ideally would also take pointer focus.
- highlight, highlighted, highlighting
- Emphasis indicated through the user interface. For example, user agents highlight content that is selected, focused, or matched by a search operation. Graphical highlight mechanisms include dotted boxes, changed colors or fonts, underlining, adjacent icons, magnification, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume ("speech prosody"). User interface items may also be highlighted, for example a specific set of foreground and background colors for the title bar of the active window. Content that is highlighted may or may not be a selection.
- inactive input focus
- An input focus location in an inactive viewport such as a background window or pane. The inactive input focus location will become the active input focus location when input focus returns to that viewport. An inactive input focus may or may not be visibly indicated.
- inactive selection
- A selection that does not have the input focus and thus does not take input events.
- input focus
- The place where input will occur if a viewport is active. Examples include keyboard focus and pointing device focus. Input focus can also be active (in the active viewport) or inactive (in an inactive viewport).
- keyboard focus
- The screen location where keyboard input will occur if a viewport is active. Keyboard focus can be active (in the active viewport) or inactive (in an inactive viewport). See keyboard interface definition for types of keyboards included and what constitutes a keyboard.
- keyboard interface
- Keyboard interfaces are programmatic services provided by many platforms that allow operation in a device independent manner. A keyboard interface can allow keystroke input even if particular devices do not contain a hardware keyboard (e.g., a touchscreen-controlled device can have a keyboard interface built into its operating system to support onscreen keyboards as well as external keyboards that may be connected).
Note: Keyboard-operated mouse emulators, such as MouseKeys, do not qualify as operation through a keyboard interface because these emulators use pointing device interfaces, not keyboard interfaces. [from ATAG 2.0]
- pointer
- Visual indicator showing where pointing device input will occur. The indicator can be moved with a pointing device or emulator such as a mouse, pen tablet, keyboard-based mouse emulator, speech-based mouse commands, or 3-D wand. A pointing device click typically moves the input focus to the pointer location. The indicator may change to reflect different states. When touch screens are used, the "pointing device" is a combination of the touch screen and the user's finger or stylus. On most systems there is no pointer (on-screen visual indication) associated with this type of pointing device.
- pointing device focus
- The screen location where pointer input will occur if a viewport is active. There can be multiple pointing device foci; for example, when using a screen sharing utility there is typically one for the user's physical mouse and one for the remote mouse.
- selection
- A user agent mechanism for identifying a (possibly empty) range of content that will be the implicit source or target for subsequent operations. The selection may be used for a variety of purposes, including for cut-and-paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of the point of regard
(e.g. the matched results of a search may be automatically selected). The selection should be highlighted in a distinctive manner. On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. When rendered using synthesized speech, the selection may be highlighted through changes in pitch, speed, or prosody.
- split focus
- A state in which the user could be confused because the input focus is separated from something it is usually linked to: for example, the focus is at a different place than the selection or similar highlighting, or has been scrolled outside the visible portion of the viewport.
- text cursor
- Indicator showing where keyboard input will occur in text (e.g. the flashing vertical bar in a text field, also called a caret).
- globally, global configuration
- graphical
- Information (e.g. text, colors, graphics, images, and animations)
rendered for visual consumption.
- highlight, highlighted, highlighting
- see focus
- image
- Pictorial content that is static (i.e. not moving or changing). See also the definition of animation.
- implement
- see support
- important elements
- This specification intentionally does not identify
which "important elements" must be navigable because this will vary by
specification. What constitutes "efficient navigation" may depend on a
number of factors as well, including the "shape" of content (e.g.
sequential navigation of long lists is not efficient) and desired
granularity (e.g. among tables, then among the cells of a given
table). Refer to the Implementing document [Implementing UAAG 2.0] for information
about identifying and navigating important elements.
- inactive input focus
- see focus
- inactive selection
- see focus
- informative (non-normative)
- see normative
- input configuration
- The set of bindings
between user agent functionalities and user
interface input mechanisms (e.g. menus, buttons, keyboard keys,
and voice commands). The default input configuration is the set of
bindings the user finds after installation of the software. Input
configurations may be affected by author-specified bindings (e.g.
through the
accesskey
attribute of HTML 4 [HTML4]).
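The binding behavior described above can be sketched as a small table of command-to-key bindings in which user-chosen bindings prevail over the defaults found after installation. The commands and keys below are illustrative assumptions, not values taken from UAAG 2.0.

```javascript
// Default input configuration: the bindings present after installation.
// Command names and keys are hypothetical examples.
const defaultBindings = {
  "save-document": "Ctrl+S",
  "open-help": "F1",
};

// User-specified bindings prevail over the defaults (see "override").
function resolveBindings(defaults, userBindings) {
  return { ...defaults, ...userBindings };
}

const active = resolveBindings(defaultBindings, { "open-help": "Ctrl+H" });
console.log(active["open-help"]);     // "Ctrl+H" – the user's binding wins
console.log(active["save-document"]); // "Ctrl+S" – the default is retained
```

A real user agent would also merge author-specified bindings (such as accesskey values) into this table, with user preferences still taking precedence.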
- input focus
- see focus
- keyboard (keyboard emulator, keyboard interface)
- The letter, symbol, and command keys or key indicators that allow a user to control a computing device. Assistive technologies have traditionally relied on the keyboard interface as a universal, or modality-independent, interface. In this document, references to keyboard include keyboard emulators and keyboard interfaces that make use of the keyboard's role as a modality-independent interface (see Modality Independence Principle). Keyboard emulators and interfaces may be used on devices that do not have a physical keyboard, such as mobile devices based on touchscreen input.
- keyboard command (keyboard binding, keyboard shortcut, or accelerator key)
- A key or set of keys that are tied to a particular UI control or application function, allowing the user to navigate to or activate the control or function without traversing any intervening controls (e.g. CTRL+"S" to save a document). It is sometimes useful to distinguish keyboard commands that are associated with controls that are rendered in the current context (e.g. ALT+"D" to move focus to the address bar) from those that may be able to activate program functionality that is not associated with any currently rendered controls (e.g. "F1" to open the Help system). Keyboard commands can be triggered using a physical keyboard or keyboard emulator (e.g. on-screen keyboard or speech recognition). (See Modality Independence Principle).
- keyboard focus
- see focus
- natural language
- Natural language is spoken, written, or signed human
language such as French, Japanese, and American Sign Language. On the
Web, the natural language of content may be
specified by markup or HTTP headers. Some examples include the
lang
attribute in HTML 4 ([HTML4] section 8.1), the xml:lang
attribute in XML 1.0 ([XML], section 2.12), the hreflang
attribute for links in HTML 4 ([HTML4],
section 12.1.5), the HTTP Content-Language header ([RFC2616], section 14.12)
and the Accept-Language request header ([RFC2616], section 14.4).
See also the definition of script.
- navigation
- see command
- non-text content (non-text element, non-text equivalent)
- see text
- normative, informative (non-normative) [WCAG 2.0, ATAG
2.0]
- What is identified as "normative" is required for conformance (noting that one may conform in a
variety of well-defined ways to UAAG 2.0). What is identified as
"informative" (or, "non-normative") is never required for
conformance.
- notify
- To make the user aware of events or status changes. Notifications can occur within the user agent user interface (e.g. a status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g. a confirmation dialog).
- operating environment
- The term "operating environment" refers to the
environment that governs the user agent's operation, whether it is an
operating system or a programming language environment such as
Java.
- override
- In UAAG 2.0, the term "override" means that one
configuration or behavior preference prevails over another. Generally,
the requirements of UAAG 2.0 involve user preferences prevailing
over author preferences and user agent default settings and behaviors.
Preferences may be multi-valued in general (e.g. the user prefers blue
over red or yellow), and include the special case of two values (e.g.
turn on or off blinking text content).
- placeholder
- A placeholder is content generated by the user agent
to replace author-supplied content. A placeholder may be generated as
the result of a user preference (e.g. to not render images) or as repair content (e.g. when an
image cannot be found). A placeholder can be any type of content,
including text, images, and audio cues. A placeholder should identify
the technology of the replaced object.
Placeholders appear in the alternative content stack.
- platform
-
The software environment within which the user agent operates. Platforms provide a consistent operational environment on top of lower-level software platforms or hardware. For web-based user interfaces, the platform will be a user agent (e.g., a browser). For non-web-based user interfaces, the range of platforms includes, but is not limited to, desktop operating systems (e.g. Linux, MacOS, Windows), mobile operating systems (e.g. Android, Blackberry, iOS, Windows Phone), and cross-OS environments (e.g. Java).
Note 1: Many platforms mediate communication between applications operating on the platform and assistive technology via a platform accessibility service.
Note 2: Accessibility guidelines for developers exist for many platforms.
- platform accessibility
service
- A programmatic interface that is engineered to enhance communication between mainstream software applications and assistive technologies (e.g. MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for Mac OS X applications, the Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications). On some platforms it may be conventional to enhance communication further by implementing a DOM.
- plug-in [ATAG 2.0]
- A plug-in is a program that runs as part of the user
agent and that is not part of content. Users
generally choose to include or exclude plug-ins from their user
agents.
- point of regard
- The point of regard is the position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard may vary: it may be a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), a range of text (e.g. focused text), or a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport). The point of regard is almost always within the viewport, but it may exceed the spatial or temporal dimensions of the viewport (see the definition of rendered content for more information about viewport dimensions). The point of regard may also refer to a particular moment in time for content that changes over time (e.g. an audio-only presentation). User agents may determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection. The stability of the point of regard is addressed by success criterion 1.8.7.
- pointer
- see focus
- pointing device focus
- see focus
- profile
- A profile is a named and persistent representation
of user preferences that may be used to configure a user agent.
Preferences include input configurations, style preferences, and
natural language preferences. In operating environments
with distinct user accounts, profiles enable users to reconfigure
software quickly when they log on. Users may share their profiles with
one another. Platform-independent profiles are useful for those who use the same user agent on different devices.
- prompt [ATAG
2.0]
- Any user-agent-initiated request for a decision or piece of
information from a user.
- properties, values, and
defaults
- A user agent renders a document by applying
formatting algorithms and style information to the document's elements.
Formatting depends on a number of factors, including where the document
is rendered (e.g. on screen, on paper, through loudspeakers, on a braille
display, on a mobile device). Style information (e.g. fonts, colors,
synthesized speech prosody) may come from the elements themselves
(e.g. certain font and phrase elements in HTML), from style sheets, or
from user agent settings. For the purposes of these guidelines, each
formatting or style option is governed by a property and each property
may take one value from a set of legal values. Generally in UAAG 2.0, the term "property"
has the meaning defined in CSS 2 ([CSS2], section 3). A
reference to "styles" in UAAG 2.0 means a set of style-related
properties. The value given to a property by a user agent at
installation is the property's default value.
- recognize
- Authors encode information in many ways, including
in markup languages, style sheet languages, scripting languages, and
protocols. When the information is encoded in a manner that allows the
user agent to process it with certainty, the user agent can "recognize"
the information. For instance, HTML allows authors to specify a heading
with the
H1
element, so a user agent that implements HTML
can recognize that content as a heading. If the author creates a
heading using a visual effect alone (e.g. just by increasing the font
size), then the author has encoded the heading in a manner that does
not allow the user agent to recognize it as a heading. Some requirements of UAAG 2.0 depend on content roles, content
relationships, timing relationships, and other information supplied by
the author. These requirements only apply when the author has encoded
that information in a manner that the user agent can recognize. See the
section on conformance for more information
about applicability. User agents will rely heavily on information that the
author has encoded in a markup language or style sheet language. Behaviors, style, and meaning encoded in a script, or markup in an unfamiliar XML
namespace, may not be recognized by the user agent as easily, or at all.
- relative time units
- Relative time units define time intervals for navigating media relative to the current point (e.g. move forward 30 seconds). When interacting with a time-based media presentation, a user may find it beneficial to move forward or backward via a time interval relative to their current position. For example, a user may find a concept unclear in a video lecture and elect to skip back 30 seconds from the current position to review what had been described. Relative time units may be preset by the user agent, configurable by the user, and/or automatically calculated based upon media duration (e.g. jump 5 seconds in a 30-second clip, or 5 minutes in a 60-minute clip). Relative time units are distinct from absolute time values such as the 2 minute mark, the half-way point, or the end.
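As a sketch, a user agent might derive the jump interval from the media duration along the lines below; the tier boundaries are assumptions for illustration, not values from UAAG 2.0.

```javascript
// Pick a relative jump interval (in seconds) scaled to media duration.
// The duration tiers are illustrative assumptions.
function relativeJumpSeconds(durationSeconds) {
  if (durationSeconds <= 60) return 5;    // short clip: 5-second jumps
  if (durationSeconds <= 600) return 30;  // medium media: 30-second jumps
  return 300;                             // long media: 5-minute jumps
}

// Seek relative to the current position, clamped to the media's extent.
function seekRelative(currentSeconds, durationSeconds, direction) {
  const next =
    currentSeconds + direction * relativeJumpSeconds(durationSeconds);
  return Math.min(Math.max(next, 0), durationSeconds);
}

console.log(relativeJumpSeconds(30));    // 5 – a 5-second jump in a 30-second clip
console.log(seekRelative(120, 600, -1)); // 90 – skip back 30 seconds to review
```

Note how the result is always relative to the current position, in contrast to absolute targets such as the 2 minute mark or the end.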
- rendered content, rendered
text
- Rendered content is the part of content that the user agent makes
available to the user's senses of sight and hearing (and only those
senses for the purposes of UAAG 2.0). Any content that causes an
effect that may be perceived through these senses constitutes rendered
content. This includes text characters, images, style sheets, scripts,
and any other content that, once processed, may be perceived
through sight and hearing.
- The term "rendered text" refers to text
content that is rendered in a way that communicates information about
the characters themselves, whether visually or as synthesized
speech.
- In the context of UAAG 2.0, invisible
content is content that is not rendered but that may
influence the graphical rendering (i.e. layout) of other content.
Similarly, silent content is content that
is not rendered but that may influence the audio rendering of other
content. Neither invisible nor silent content is considered rendered
content.
- repair content, repair text
- Content generated by the user agent to correct an error
condition. "Repair text" refers to the text portion of repair
content. Error conditions that may lead to the generation of
repair content include:
- Erroneous or incomplete content (e.g. ill-formed markup, invalid markup, or missing alternative content that is required by the format specification);
- Missing resources for handling or rendering content (e.g. the
user agent lacks a font family to display some characters, or the
user agent does not implement a particular scripting language).
UAAG 2.0 does not require user agents to include repair content
in the document object. Repair content
inserted in the document object should conform to the Web Content
Accessibility Guidelines 1.0 [WCAG10]. For more
information about repair techniques for Web content and software, refer
to "Techniques for Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].
- script
- In UAAG 2.0, the term "script" almost always
refers to a scripting (programming) language used to create dynamic Web
content. However, in guidelines referring to the written (natural)
language of content, the term "script" is used as in Unicode [UNICODE] to mean "A
collection of symbols used to represent textual information in one or
more writing systems."
- Information encoded in (programming) scripts may be
difficult for a user agent to recognize. For
instance, a user agent is not expected to recognize that, when
executed, a script will calculate a factorial. The user agent will be
able to recognize some information in a script by virtue of
implementing the scripting language or a known program library (e.g.
the user agent is expected to recognize when a script will open a
viewport or retrieve a resource from the Web).
- selection, current
selection
- see focus
- serial access, sequential navigation
- One-dimensional access to
rendered content. Some examples of serial access include listening to
an audio stream or watching a video (both of which involve one temporal
dimension), or reading a series of lines of braille one line at a time
(one spatial dimension). Many users with blindness have serial access
to content rendered as audio, synthesized speech, or lines of braille.
The expression "sequential navigation" refers to navigation through
an ordered set of items (e.g. the enabled
elements in a document, a sequence of lines or pages, or a sequence
of menu options). Sequential navigation implies that the user cannot
skip directly from one member of the set to another, in contrast to
direct or structured navigation. Users with blindness or some users
with a physical disability may navigate content sequentially (e.g. by
navigating through links, one by one, in a graphical viewport with or
without the aid of an assistive technology). Sequential navigation is
important to users who cannot scan rendered content visually for
context and also benefits users unfamiliar with content. The increments
of sequential navigation may be determined by a number of factors,
including element type (e.g. links only), content structure (e.g.
navigation from heading to heading), and the current navigation context
(e.g. having navigated to a table, allow navigation among the table
cells).
Users with serial access to content or who navigate sequentially may
require more time to access content than users who use direct or
structured navigation.
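The contrast between sequential and direct navigation can be sketched with an ordered set of enabled elements; the element names here are hypothetical.

```javascript
// An ordered set of enabled elements, as a sequential navigator sees it.
const enabledElements = ["link-home", "search-input", "submit-button", "link-contact"];

// Sequential navigation: move one item at a time, wrapping at the end;
// intervening items cannot be skipped.
function nextSequential(items, currentIndex) {
  return (currentIndex + 1) % items.length;
}

// Direct navigation: jump straight to a named target (e.g. via a
// structured-navigation command), skipping everything in between.
function navigateDirect(items, name) {
  return items.indexOf(name);
}

console.log(nextSequential(enabledElements, 3)); // 0 – wraps to the first item
console.log(navigateDirect(enabledElements, "submit-button")); // 2
```

Reaching the third element sequentially costs three steps from the start, while direct navigation reaches it in one, which is why sequential-only access can be slower for the users described above.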
- style properties, user agent default styles, author styles, user styles
- Properties whose values determine the presentation (e.g., font, color, size, location, padding, volume, synthesized speech prosody) of content elements as they are rendered (e.g. onscreen, via loudspeaker, via braille display) by user agents. Style properties can have several origins:
- user agent default styles: The default style property values applied in the absence of any author or user styles. Some web content technologies specify a default rendering; others do not.
- author styles: Style property values that are set by the author as part of the content (e.g., in-line styles, author style sheets).
- user styles: Style property values that are set by the user (e.g., via user agent interface settings, user style sheets).
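A minimal sketch of how the three origins combine, assuming the UAAG convention that user styles prevail over author styles, which prevail over user agent defaults (CSS's `!important` interactions are ignored here for brevity):

```javascript
// Resolve one style property across the three origins.
// User styles win over author styles, which win over UA defaults.
function resolveStyle(property, userStyles, authorStyles, uaDefaults) {
  if (property in userStyles) return userStyles[property];
  if (property in authorStyles) return authorStyles[property];
  return uaDefaults[property];
}

const uaDefaults = { "font-size": "16px", "color": "black" };
const authorStyles = { "font-size": "12px" };  // set in the content
const userStyles = { "font-size": "20px" };    // e.g. a user style sheet

console.log(resolveStyle("font-size", userStyles, authorStyles, uaDefaults)); // "20px"
console.log(resolveStyle("color", userStyles, authorStyles, uaDefaults));     // "black"
```
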
- style sheet, user style sheet, author style sheet
- A mechanism for communicating style property settings for web content, in which the style property settings are separable from other content resources. This separation is what allows author style sheets to be toggled or substituted, and user style sheets to be defined that apply to more than one resource. Style sheet web content technologies include Cascading Style Sheets (CSS) and Extensible Stylesheet Language (XSL). User style sheet: a style sheet specified by the user, resulting in user styles. Author style sheet: a style sheet specified by the author, resulting in author styles.
- support, implement, conform
- Support, implement,
and conform all refer to what a developer has designed a user agent
to do, but they represent different degrees of specificity. A user
agent "supports" general classes of objects, such as "images" or
"Japanese." A user agent "implements" a specification (e.g. the PNG
and SVG image format specifications or a particular scripting
language), or an API
(e.g. the DOM API) when it has been programmed to follow all or part
of a specification. A user agent "conforms to" a specification when it
implements the specification and satisfies its conformance
criteria.
- synchronize
- The act
of time-coordinating two or more presentation components (e.g. a visual track with captions, or
several tracks in a multimedia presentation). For Web content
developers, the requirement to synchronize means to provide the data
that will permit sensible time-coordinated rendering by a user agent.
For example, Web content developers can ensure that the segments of
caption text are neither too long nor too short, and that they map to
segments of the visual track that are appropriate in length. For user
agent developers, the requirement to synchronize means to present the
content in a sensible time-coordinated fashion under a wide range of
circumstances including technology constraints (e.g. small text-only
displays), user limitations (e.g. slow reading speeds, large font sizes,
high need for review or repeat functions), and content that is
sub-optimal in terms of accessibility.
- technology (web content technology) [WCAG 2.0, ATAG
2.0]
- A mechanism for encoding instructions to be rendered, played or
executed by user agents. Web Content
technologies may include markup languages, data formats, or programming
languages that authors may use alone or in
combination to create end-user experiences that range from static Web
pages to multimedia presentations to dynamic Web applications. Some
common examples of Web content technologies include HTML, CSS, SVG,
PNG, PDF, Flash, and JavaScript.
- text (text content, non-text
content,
text element, non-text
element, text
equivalent, non-text equivalent )
- Text used by itself
refers to a sequence of characters from a markup language's document character set.
Refer to the "Character Model for the World Wide Web" [CHARMOD] for more
information about text and characters. Note: UAAG 2.0 makes use of other terms that include the word "text" that
have highly specialized meanings: collated text
transcript, non-text content, text content, non-text element, text element, text equivalent, and text transcript.
A text element adds text characters to either content or the user interface. Both in the Web Content Accessibility Guidelines 2.0 [WCAG20] and in UAAG 2.0, text elements are presumed to produce text that can be understood when rendered visually, as synthesized speech, or as braille. Such text elements benefit at least these three groups of users:
- visually-displayed text benefits users who are deaf and adept in
reading visually-displayed text;
- synthesized speech benefits users who are blind and adept in use
of synthesized speech;
- braille benefits users who are blind, and possibly deaf-blind,
and adept at reading braille.
A text element may consist of both text and non-text data. For
instance, a text element may contain markup for style (e.g. font size
or color), structure (e.g. heading levels), and other semantics. The
essential function of the text element should be retained even if style
information happens to be lost in rendering. A user agent may have to process a text element in order to have
access to the text characters. For instance, a text element may consist
of markup, it may be encrypted or compressed, or it may include
embedded text in a binary format (e.g. JPEG).
Text content is content that is composed of one or more text
elements. A text
equivalent (whether in content or the user
interface) is an equivalent composed of
one or more text elements. Authors generally provide text equivalents
for content by using the alternative content
mechanisms of a specification.
A non-text
element is an element (in content or the user
interface) that does not have the qualities of a text element.
Non-text
content is composed of one or more non-text elements. A
non-text equivalent (whether in content or the user interface) is an
equivalent composed of
one or more non-text elements.
- text decoration
- Any
stylistic effect that the user agent may apply to visually rendered text that does not
affect the layout of the document (i.e. does not require reformatting
when applied or removed). Text decoration mechanisms include underline,
overline, and strike-through.
- text format
- Any media object given an Internet media type of
"text" (e.g. "text/plain", "text/html", or "text/*") as defined in RFC
2046 [RFC2046], section 4.1, or
any media object identified by Internet media type to be an XML
document (as defined in [XML], section 2) or SGML
application. Refer, for example, to Internet media types defined in
"XML Media Types" [RFC3023].
- text transcript
- A text equivalent of audio
information (e.g. an audio-only presentation
or the audio track of a movie or other
animation). A text transcript provides text for both spoken words and non-spoken
sounds such as sound effects. Text transcripts make audio information
accessible to people who have hearing disabilities and to people who
cannot play the audio. Text transcripts are usually created by hand but
may be generated on the fly (e.g. by voice-to-text converters). See
also the definitions of captions and collated text
transcripts.
- time-based
- Defining a common time scale for all components of a time-based media presentation. For example, a media player will expose a single timebase for a presentation composed of individual video and audio tracks, allowing users or technology to query or alter the playback rate and position.
- track (audio track or
visual track)
- Content rendered as sound through an audio viewport. The audio track may be all or part of the audio portion of a presentation (e.g. each instrument may have a track, or each stereo channel may have a track). See also the definition of visual track.
- toolbar @@ 735
- A collection of commonly used controls presented in a region that can be configured or navigated separately from other regions. Such containers may be docked or free-floating, permanent or transient, integral to the application or add-ons. Variations are often called toolbars, palettes, panels, or inspectors.
- user agent
- A user agent is any software that retrieves, renders
and facilitates end user interaction with Web content.
- user agent default styles
- User agent default styles are style property
values applied in the absence of any author or user styles. Some
markup languages specify a default rendering for content in that markup
language; others do not. For example, XML 1.0
[XML]
does not specify default styles for XML documents.
HTML 4 [HTML4] does not specify
default styles for HTML documents, but the CSS 2 [CSS2]
specification suggests a sample
default style sheet for HTML 4 based on current practice.
- user interface, user interface
control
- For the purposes of UAAG 2.0, user interface
includes both:
- the user agent user
interface, i.e. the controls (e.g. menus, buttons,
prompts, and other components for input and output) and mechanisms
(e.g. selection and focus) provided by the user agent ("out of the
box") that are not created by content.
- the "content user interface," i.e. the enabled elements that are
part of content, such as form controls, links, and applets.
The document distinguishes them only where required for clarity. For
more information, see the section on requirements for content, for user
agent features, or both @@.
The term "user interface control" refers to a component of the user
agent user interface or the content user interface, distinguished where
necessary.
- user styles
- User styles are style property
values that come from user interface settings, user style sheets,
or other user interactions.
- values
- see properties
- view
-
A user interface function that lets users interact with web content. UAAG 2.0 recognizes a variety of approaches to presenting the content in a view, such as:
- rendered view: Views in which content is presented such that it is rendered, played or executed. There are several sub-types:
- In conventionally rendered views the content is rendered, played or executed according to the web content technology specification. This is the default view of most user agents.
- In unconventionally rendered views the content is rendered quite differently than specified in the technology specification (e.g., rendering an audio file as a graphical waveform); or
- source view: Views in which the web content is presented without being rendered, played or executed. The source view may be plain text (i.e., "View Source") or it may include some other organization (e.g., presenting the markup in a tree).
- outline view: Views in which only a subset of the rendered content is presented, usually composed of labels or placeholders for important structural elements. The important structural elements will depend on the web content technology, but may include headings, table captions, and content sections.
Note: Views can be visual, audio, or tactile.
Top-level viewports are viewports that are not contained within other user agent viewports.
- viewport
- The part of an onscreen view that the user agent is currently presenting to the user, such that the user can attend to any part of it without further action (e.g. scrolling). There may be multiple viewports onto the same view (e.g. when a split screen is used to present the top and bottom of a document simultaneously) and viewports may be nested (e.g. a scrolling frame located within a larger document). When the viewport is smaller in extent than the content it is presenting, user agents typically provide mechanisms to bring the occluded content into the viewport (e.g., scrollbars).
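The "bring occluded content into the viewport" behavior can be sketched in one dimension; the arithmetic below is an illustration, not a required algorithm.

```javascript
// Return the scroll offset that makes an item fully visible in a viewport.
// All values are in the same one-dimensional coordinate space.
function scrollIntoViewport(scrollTop, viewportHeight, itemTop, itemHeight) {
  if (itemTop < scrollTop) {
    return itemTop; // item is above the viewport: scroll up to its top
  }
  const itemBottom = itemTop + itemHeight;
  if (itemBottom > scrollTop + viewportHeight) {
    return itemBottom - viewportHeight; // below: scroll down just enough
  }
  return scrollTop; // already fully visible: no scrolling needed
}

console.log(scrollIntoViewport(100, 400, 50, 20));  // 50 – scrolls up
console.log(scrollIntoViewport(100, 400, 600, 20)); // 220 – scrolls down
console.log(scrollIntoViewport(100, 400, 300, 20)); // 100 – unchanged
```

A user agent that scrolls like this when moving the focus avoids the split-focus condition defined earlier in this glossary.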
- visual-only
- A visual-only presentation is content consisting
exclusively of one or more visual
tracks presented concurrently or in series. A silent movie is an
example of a visual-only presentation.
- visual track
- A visual object is content rendered through a
graphical viewport. Visual objects include
graphics, text, and visual portions of movies and other animations. A
visual track is a visual object that is intended as a whole or partial
presentation. A visual track does not necessarily correspond to a
single physical object or software object.
- voice browser
- From "Introduction and Overview of W3C Speech
Interface Framework" [VOICEBROWSER]: "A
voice browser is a device (hardware and software) that interprets voice
markup languages to generate voice output, interpret voice input, and
possibly accept and produce other modalities of input and output."
- web resource
- Anything that can be identified by a Uniform Resource Identifier
(URI).
Appendix B: How to refer to
UAAG 2.0 from other documents
This section is informative.
There are two recommended ways to refer to the "User Agent Accessibility
Guidelines 2.0" (and to W3C documents in general):
- References to a specific version of "User Agent Accessibility
Guidelines 2.0." For example, use the "this version" URI to
refer to the current document:
http://www.w3.org/TR/2010/WD-UAAG20-20100617/
- References to the latest version of "User Agent Accessibility
Guidelines 2.0." Use the "latest version" URI to refer to
the most recently published document in the series:
http://www.w3.org/TR/UAAG20/.
In almost all cases, references (either by name or by link) should be to
a specific version of the document. W3C will make every effort to make UAAG 2.0 indefinitely available at its original address in its original form.
The top of UAAG 2.0 includes the relevant catalog metadata for specific
references (including title, publication date, "this version" URI,
editors' names, and copyright information).
An XHTML 1.0 paragraph including a reference to this specific document
might be written:
<p>
<cite><a href="http://www.w3.org/TR/2010/WD-UAAG20-20100617/">
"User Agent Accessibility Guidelines 2.0,"</a></cite>
J. Allan, K. Ford, J. Spellman, eds.,
W3C Working Draft, http://www.w3.org/TR/2010/WD-UAAG20-20100617/.
The <a href="http://www.w3.org/TR/UAAG20/">latest version</a> of this document is available at http://www.w3.org/TR/UAAG20/.</p>
For very general references to this document (where stability of content
and anchors is not required), it may be appropriate to refer to the latest
version of this document. Other sections of this document explain how to build a conformance
claim.
Appendix C: References
This section is informative.
For the latest version of any W3C specification please
consult the list of W3C Technical Reports at
http://www.w3.org/TR/. Some documents listed below may have been superseded
since the publication of UAAG 2.0.
Note: In UAAG 2.0, bracketed labels such as
"[WCAG20]" link to the corresponding entries in this section. These labels
are also identified as references through markup.
- [CSS1]
- "Cascading Style
Sheets (CSS1) Level 1 Specification," B. Bos, H. Wium Lie,
eds., 17 December 1996, revised 11 January 1999. This W3C
Recommendation is http://www.w3.org/TR/1999/REC-CSS1-19990111.
- [CSS2]
- "Cascading Style
Sheets, level 2 (CSS2) Specification," B. Bos, H. Wium Lie,
C. Lilley, and I. Jacobs, eds., 12 May 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-CSS2-19980512/.
- [DOM2CORE]
- "Document
Object Model (DOM) Level 2 Core Specification," A. Le Hors,
P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion, S. Byrne,
eds., 13 November 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/.
- [DOM2STYLE]
- "Document
Object Model (DOM) Level 2 Style Specification," V. Apparao,
P. Le Hégaret, C. Wilson, eds., 13 November 2000. This W3C
Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113/.
- [INFOSET]
- "XML
Information Set," J. Cowan and R. Tobin, eds., 24 October
2001. This W3C Recommendation is
http://www.w3.org/TR/2001/REC-xml-infoset-20011024/.
- [RFC2046]
- "Multipurpose
Internet Mail Extensions (MIME) Part Two: Media Types," N.
Freed, N. Borenstein, November 1996.
- [WCAG10]
- "Web Content
Accessibility Guidelines 1.0," W. Chisholm, G. Vanderheiden,
and I. Jacobs, eds., 5 May 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/.
- [XML]
- "Extensible Markup
Language (XML) 1.0 (Second Edition)," T. Bray, J. Paoli,
C.M. Sperberg-McQueen, eds., 6 October 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-xml-20001006.
- [AT1998]
- The Assistive Technology
Act of 1998.
- [ATAG10]
- "Authoring Tool
Accessibility Guidelines 1.0," J. Treviranus, C.
McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This
W3C Recommendation is
http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
- [ATAG10-TECHS]
- "Techniques
for Authoring Tool Accessibility Guidelines 1.0," J.
Treviranus, C. McCathieNevile, J. Richards, eds., 29 Oct 2002. This W3C
Note is http://www.w3.org/TR/2002/NOTE-ATAG10-TECHS-20021029/.
- [CHARMOD]
- "Character Model
for the World Wide Web," M. Dürst and F. Yergeau, eds., 30
April 2002. This W3C Working Draft is
http://www.w3.org/TR/2002/WD-charmod-20020430/. The latest version is available at
http://www.w3.org/TR/charmod/.
- [DOM2HTML]
- "Document
Object Model (DOM) Level 2 HTML Specification," J. Stenback,
P. Le Hégaret, A. Le Hors, eds., 8 November 2002. This W3C Proposed
Recommendation is
http://www.w3.org/TR/2002/PR-DOM-Level-2-HTML-20021108/. The latest version is
available at http://www.w3.org/TR/DOM-Level-2-HTML/.
- [HTML4]
- "HTML
4.01 Recommendation," D. Raggett, A. Le Hors, and I. Jacobs,
eds., 24 December 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/REC-html401-19991224/.
- [RFC2616]
- "Hypertext
Transfer Protocol — HTTP/1.1," J. Gettys, J. Mogul, H.
Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
- [RFC3023]
- "XML Media
Types," M. Murata, S. St. Laurent, D. Kohn, January
2001.
- [SMIL]
- "Synchronized
Multimedia Integration Language (SMIL) 1.0 Specification,"
P. Hoschka, ed., 15 June 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-smil-19980615/.
- [SMIL20]
- "Synchronized
Multimedia Integration Language (SMIL 2.0) Specification,"
J. Ayars, et al., eds., 7 August 2001. This W3C Recommendation is
http://www.w3.org/TR/2001/REC-smil20-20010807/.
- [SVG]
- "Scalable
Vector Graphics (SVG) 1.0 Specification," J. Ferraiolo, ed.,
4 September 2001. This W3C Recommendation is
http://www.w3.org/TR/2001/REC-SVG-20010904/.
- [UAAG10]
- "User Agent
Accessibility Guidelines 1.0," I. Jacobs, J. Gunderson, E. Hansen,
eds., 17 December 2002. This W3C Recommendation is available at
http://www.w3.org/TR/2002/REC-UAAG10-20021217/.
- [UAAG10-CHECKLIST]
- An appendix to UAAG 1.0 lists all of the checkpoints, sorted by
priority. The checklist is available in either tabular
form or list
form.
- [UAAG10-ICONS]
- Information about UAAG 1.0 conformance
icons and their usage is available at
http://www.w3.org/WAI/UAAG10-Conformance.
- [UAAG10-SUMMARY]
- An appendix to UAAG 1.0 provides a summary of the goals and structure of User Agent
Accessibility Guidelines 1.0.
- [UAAG10-TECHS]
- "Techniques for
User Agent Accessibility Guidelines 1.0," I. Jacobs, J.
Gunderson, E. Hansen, eds. The latest draft of the techniques document
is available at http://www.w3.org/TR/UAAG10-TECHS/.
- [UNICODE]
- The Unicode Consortium. The Unicode Standard, Version 6.1.0
(Mountain View, CA: The Unicode Consortium, 2012. ISBN 978-1-936213-02-3).
Available at http://www.unicode.org/versions/Unicode6.1.0/.
- [VOICEBROWSER]
- "Introduction
and Overview of W3C Speech Interface Framework," J. Larson,
4 December 2000. This W3C Working Draft is
http://www.w3.org/TR/2000/WD-voice-intro-20001204/. The latest version is
available at http://www.w3.org/TR/voice-intro/. UAAG 2.0 includes
references to additional W3C specifications about voice browser
technology.
- [W3CPROCESS]
- "World
Wide Web Consortium Process Document," I. Jacobs ed. The 19
July 2001 version of the Process Document is
http://www.w3.org/Consortium/Process-20010719/. The latest version is
available at http://www.w3.org/Consortium/Process/.
- [WCAG20]
- "Web Content Accessibility Guidelines (WCAG) 2.0," B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 11 December 2008. This W3C Recommendation is
http://www.w3.org/TR/2008/REC-WCAG20-20081211/. The latest version is
available at http://www.w3.org/TR/WCAG20/. Additional
format-specific techniques documents are available from this Recommendation.
- [WCAG20-TECHS]
- "Techniques for
Web Content Accessibility Guidelines 2.0," B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 14 October 2010. This W3C Note is
http://www.w3.org/TR/2010/NOTE-WCAG20-TECHS-20101014/. The latest version is
available at http://www.w3.org/TR/WCAG20-TECHS/. Additional
format-specific techniques documents are available from this Note.
- [WEBCHAR]
- "Web
Characterization Terminology and Definitions Sheet," B.
Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft
that defines some terms to establish a common understanding about key
Web concepts. This W3C Working Draft is
http://www.w3.org/1999/05/WCA-terms/01.
- [XAG10]
- "XML
Accessibility Guidelines 1.0," D. Dardailler, S. Palmer, C.
McCathieNevile, eds., 3 October 2001. This W3C Working Draft is
http://www.w3.org/TR/2002/WD-xag-20021003. The latest version is available at
http://www.w3.org/TR/xag.
- [XHTML10]
- "XHTML™ 1.0:
The Extensible HyperText Markup Language," S. Pemberton, et
al., 26 January 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
- [XMLDSIG]
- "XML-Signature
Syntax and Processing," D. Eastlake, J. Reagle, D. Solo,
eds., 12 February 2002. This W3C Recommendation is
http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/.
- [XMLENC]
- "XML
Encryption Syntax and Processing," D. Eastlake, J. Reagle,
eds., 10 December 2002. This W3C Recommendation is
http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/.
Appendix D:
Acknowledgments
Participants
active in the UAWG prior to publication:
- Jim Allan (Co-Chair, Texas School for the Blind and Visually
Impaired)
- Wayne Dick (Invited Expert)
- Kelly Ford (Co-Chair, Microsoft)
- Mark Hakkinen (Invited Expert)
- Simon Harper (University of Manchester)
- Greg Lowney (Invited Expert)
- Kimberly Patch (Invited Expert)
- Jan Richards (Adaptive Technology Resource Centre, University of
Toronto)
- Jeanne Spellman (W3C Staff Contact)
Other
previously active UAWG participants and contributors to UAAG 2.0:
- Judy Brewer (W3C)
- Alan Cantor (Invited Expert)
- Bim Egan (Royal National Institute of Blind People)
- Sean Hayes (Microsoft)
- Dean Hudson (Apple)
- Patrick Lauke (Opera Software)
- Cathy Laws (IBM)
- Peter Parente (IBM)
- David Poehlman (Invited Expert)
- Simon Pieters (Opera Software)
- Henny Swan (Opera Software)
- Gregory Rosmaita (Invited Expert)
- David Tseng (Apple)
UAAG 2.0 would not have been possible without the work of those who
contributed to UAAG 1.0.
This publication has been funded in part with Federal funds from the U.S.
Department of Education, National Institute on Disability and Rehabilitation
Research (NIDRR) under contract number ED-OSE-10-C-0067. The content of this
publication does not necessarily reflect the views or policies of the U.S.
Department of Education, nor does mention of trade names, commercial
products, or organizations imply endorsement by the U.S. Government.
Appendix E: Checklist
Appendix F:
Comparison of UAAG 1.0 guidelines to UAAG 2.0
Appendix G: Alternative Content
These are the elements and attributes that present 'alternative
content' relevant to Guideline 3.