

Implementing UAAG 2.0

A guide to understanding and implementing User Agent Accessibility Guidelines 2.0

W3C Working Draft 23 May 2013

This version:
Latest version:
Previous version:
Editors:
James Allan, Texas School for the Blind and Visually Impaired
Kelly Ford, Microsoft
Kim Patch, Redstart Systems
Jeanne Spellman, W3C/Web Accessibility Initiative


This document provides supporting information for the User Agent Accessibility Guidelines (UAAG) 2.0 for designing user agents that lower barriers to Web accessibility for people with disabilities. User agents include browsers and other types of software that retrieve and render Web content. A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies). Furthermore, all users, not just users with disabilities, should find conforming user agents to be more usable. In addition to helping developers of browsers and media players, this document will also benefit developers of assistive technologies because it explains what types of information and control an assistive technology may expect from a conforming user agent.

This document provides explanation of the intent of UAAG 2.0 success criteria, examples of implementation of the guidelines, best practice recommendations and additional resources for the guideline.

The "User Agent Accessibility Guidelines 2.0" (UAAG 2.0) is part of a series of accessibility guidelines published by the W3C Web Accessibility Initiative (WAI).

Status of this document

May be Superseded

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

Working Draft of Implementing UAAG 2.0

This is the W3C Working Draft of 23 May 2013. This working draft reflects a review of all success criteria for applicability to mobile, and extensive examples of mobile accessibility needs and solutions.

The Working Group seeks feedback on this draft.

Comments on this draft should be sent to public-uaag2-comments@w3.org (Public Archive) by 21 June 2013.

The User Agent Working Group (UAWG) expects to advance UAAG 2.0 through the W3C Recommendation track with Implementing UAAG as a supporting Note. Until that time User Agent Accessibility Guidelines 1.0 (UAAG 1.0) [UAAG10] is the stable, referenceable version. This Working Draft does not supersede UAAG 1.0.

Web Accessibility Initiative

This document has been produced as part of the W3C Web Accessibility Initiative (WAI). The goals of the User Agent Working Group (UAWG) are discussed in the Working Group charter. The UAWG is part of the WAI Technical Activity.

No Endorsement

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.


This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Definition of User Agent

A user agent is any software that retrieves, renders and facilitates end-user interaction with Web content.

The classic user agent is a browser. A media player, which only performs these functions for time-based media, is also a user agent. Web applications and some mobile apps that render web content are also user agents.

For specific advice in determining if software is a user agent, see What Qualifies as a User Agent (Implementing UAAG 2.0). User agents may also include authoring tool features: see Relationship to the Authoring Tool Accessibility Guidelines (ATAG) 2.0. For information on the difference between web applications and content see Relationship to the Web Content Accessibility Guidelines.

What qualifies as a User Agent?

These guidelines employ the following tests to determine if software qualifies as a user agent. UAAG 2.0 divides potential user agents into three categories: platform-based applications, extensions or plug-ins, and web-based applications.

If the following three conditions are met, then it is a platform-based application:

  1. It is a standalone application, and
  2. It interprets any W3C-specified language, and
  3. It provides a user interface or interprets a procedural or declarative language that may be used to provide a user interface

If the following two conditions are met, then it is an extension or plug-in:

  1. It is launched by, or extends the functionality of, a platform-based application, and
  2. Post-launch user interaction is included in, or is within the bounds of, the platform-based application

If the following three conditions are met, then it is a web-based application:

  1. The user interface is generated by a procedural or declarative language, and
  2. The user interface is embedded in an application that renders web content, and
  3. User interaction is controlled by a procedural or declarative language, or user interaction does not modify the Document Object Model of its containing document.

The Role of User Agents in Web Authoring

The following is a list of several ways in which user agents are commonly involved in web content authoring and the relationship between UAAG 2.0 and ATAG 2.0.

  1. Preview tool: Authors often preview their work in user agents to test how the content will appear and operate. ATAG 2.0 includes a special exception when previews are implemented with pre-existing user agents, so there are no additional requirements on user agent developers in this case.
  2. Checking tool: Authors often make use of user agent error panels (e.g. HTML validity, JavaScript errors) during authoring. ATAG 2.0 Part A applies, but may not include additional accessibility requirements beyond the UAAG 2.0 success criteria. If a user agent includes an "accessibility checker", the developer should consult checker implementation guidance in ATAG 2.0 Part B.
  3. Edit modes: Some user agents include a mode where the user can edit and save changes to the web content, modifying the experience of other users. In this mode, the user agent is acting as an authoring tool and all of ATAG 2.0 applies.
  4. Automatic content changes: Some user agents or plug-ins may automatically change retrieved web content before it's rendered. This functionality is not considered an authoring tool because changes are made to the user's own experience, not the experience of other users.
  5. Providing a platform for web-based authoring tools: Many web applications serve as authoring tools and make use of user agent features to deliver functionality (e.g., undo text entry, adjust font size of the authoring tool user interface etc.) User agent developers should consult ATAG 2.0 to understand the ways in which web-based authoring tools can depend on user agent features.

UAAG 2.0 Guidelines

The success criteria and applicability notes in this section are normative. Guideline summaries are informative.

PRINCIPLE 1 - Ensure that the user interface and rendered content are perceivable

Implementing Guideline 1.1 - Provide access to alternative content.

Summary: The user can choose to render any type of alternative content available (1.1.1). The user can also choose at least one alternative, such as alt text, to always be displayed (1.1.3), but it's recommended that users also be able to specify a cascade (1.1.5), such as alt text if it's there, otherwise longdesc, otherwise filename, etc. It's recommended that the user can configure the caption text and that text or sign language alternatives do not obscure the video or the controls (1.1.4). The user can configure the size and position of media alternatives (1.1.6).

1.1.1 Render Alternative Content:

For any content element, the user can choose to render any types of recognized alternative content that are present. (Level A)

Note: It is recommended that the user agent allow the user to choose whether the alternative content replaces or supplements the original content element.

1.1.2 Replace Non-Text Content:

The user can have all recognized non-text content replaced by alternative content, placeholders, or both. (Level A)

Note: At Level A, the user agent can specify whether alternative content or a placeholder replaces the non-text content. At Level AA, success criterion 1.1.3 requires that the user can specify one format or placeholder to be used. At Level AAA, success criterion 1.1.5 requires that the user can specify a cascade order of types of alternative content to be used.

1.1.3 Configurable Alternative Content Defaults:

For each type of non-text content, the user can specify a type of alternative content that, if present, will be rendered by default. (Level AA)

1.1.4 Display of Alternative Content for Time-Based Media:

For recognized on-screen alternative content for time-based media (e.g. captions, sign language video), the following are all true: (Level AA)

  1. Don't obscure primary media: The user can specify that displaying time-based media alternatives doesn't obscure the primary time-based media; and
  2. Don't obscure controls: The user can specify that displaying time-based media alternatives doesn't obscure recognized controls for the primary time-based media; and
  3. Use configurable text: The user can configure recognized text within time-based media alternatives (e.g. captions) in conformance with 1.4.1.

Note: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size to meet this requirement.


1.1.5 Default Rendering of Alternative Content (Enhanced):

For each type of non-text content, the user can specify the cascade order in which to render different types of alternative content when preferred types are not present. (Level AAA)
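A cascade like the one this success criterion describes can be sketched as a first-match lookup over a user-ordered list of alternative types. This is an illustrative sketch only; the type names and the `pickAlternative` helper are assumptions, not part of UAAG 2.0 or any browser API.

```javascript
// Sketch of a cascade for alternative content (SC 1.1.5).
// The cascade order and type names are hypothetical user settings.
const userCascade = ["captions", "transcript", "description", "filename"];

function pickAlternative(recognized, cascade = userCascade) {
  // recognized: map of alternative type -> content present on the element
  for (const type of cascade) {
    if (type in recognized) {
      return { type, content: recognized[type] };
    }
  }
  return null; // nothing to render; fall back to a placeholder
}
```

A user agent would apply this per content type, so images and videos can each have their own preferred cascade.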

1.1.6 Size and Position of Time-Based Media Alternatives:

The user can configure recognized alternative content for time-based media (e.g. captions, sign language video) as follows: (Level AAA)

  1. The user can resize alternative content for time-based media up to the size of the user agent's viewport.
  2. The user can reposition alternative content for time-based media to at least above, below, to the right, to the left, and overlapping the primary time-based media.

Note 1: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size or hidden to meet this requirement.

Note 2: Implementation may involve displaying alternative content for time-based media in a separate viewport, but this is not required.

Implementing Guideline 1.2 - Repair missing content.

Summary: The user can request useful alternative content when the author fails to provide it. For example, showing metadata in place of missing or empty alt text (1.2.1). The user can ask the browser to predict missing structural information, such as field labels, table headings or section headings (1.2.2).

1.2.1 Support Repair by Assistive Technologies:

If text alternatives for non-text content are missing or empty then both of the following are true: (Level AA)

  1. the user agent does not attempt to repair the text alternatives with text values that are also available to assistive technologies.
  2. the user agent makes metadata related to the non-text content available programmatically (and not via fields reserved for text alternatives).

1.2.2 Repair Missing Structure:

The user can specify whether or not the user agent should attempt to insert the following types of structural markup on the basis of author-specified presentation attributes (e.g. position and appearance): (Level AAA)

  1. Labels
  2. Headers (e.g. heading markup, table headers)
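One way a user agent might attempt such repair is a heuristic over presentation attributes. A minimal sketch, assuming a simplified node representation with a `fontPx` size and `bold` flag; the thresholds are invented for illustration and are not specified by UAAG 2.0:

```javascript
// Illustrative heuristic for SC 1.2.2: guess heading markup from
// presentation attributes. Field names and thresholds are assumptions.
function inferHeading(node, bodyFontPx = 16) {
  const ratio = node.fontPx / bodyFontPx;
  if (ratio >= 2.0) return "h1";
  if (ratio >= 1.5) return "h2";
  if (ratio >= 1.2 && node.bold) return "h3";
  return null; // leave the node as-is
}
```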

Implementing Guideline 1.3 - Provide highlighting for selection, keyboard focus, enabled elements, visited links.

Summary: The user can visually distinguish selected, focused, and enabled items, and recently visited links (1.3.1), with a choice of highlighting options that at least include foreground and background colors, and border color and thickness (1.3.2).

1.3.1 Highlighted Items:

The user can specify that the following classes be highlighted so that each is uniquely distinguished: (Level A)

  1. selection
  2. active keyboard focus (indicated by focus cursors and/or text cursors)
  3. recognized enabled input elements (distinguished from disabled elements)
  4. elements with alternative content
  5. recently visited links

1.3.2 Highlighting Options:

When highlighting classes specified by 1.3.1 Highlighted Items, the user can specify highlighting options that include at least: (Level AA)

  1. foreground colors,
  2. background colors, and
  3. borders (configurable color, style, and thickness)

Implementing Guideline 1.4 - Provide text configuration.

Summary: The user can control text font, color, and size (1.4.1), including whether all text should be shown at the same size (1.4.2).

1.4.1 Configure Rendered Text:

The user can globally set any or all of the following characteristics of visually rendered text content, overriding any specified by the author or user agent defaults: (Level A)

  1. text scale (the general size of text)
  2. font family
  3. text color (foreground and background)
  4. line spacing
  5. character spacing
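One plausible implementation strategy is to synthesize a user stylesheet from the user's text preferences and apply it with `!important`, so it takes precedence over author styles under the CSS cascade. A sketch, with an assumed preference object:

```javascript
// Sketch for SC 1.4.1: turn user text preferences into a user
// stylesheet. The preference names are illustrative assumptions.
function buildUserTextStylesheet(prefs) {
  const rules = [];
  if (prefs.scalePercent) rules.push(`font-size: ${prefs.scalePercent}% !important;`);
  if (prefs.fontFamily) rules.push(`font-family: ${prefs.fontFamily} !important;`);
  if (prefs.color) rules.push(`color: ${prefs.color} !important;`);
  if (prefs.background) rules.push(`background-color: ${prefs.background} !important;`);
  if (prefs.lineSpacing) rules.push(`line-height: ${prefs.lineSpacing} !important;`);
  if (prefs.charSpacing) rules.push(`letter-spacing: ${prefs.charSpacing} !important;`);
  return `* {\n  ${rules.join("\n  ")}\n}`;
}
```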

1.4.2 Preserving Size Distinctions:

The user can specify whether or not distinctions in the size of rendered text are preserved when that text is rescaled (e.g. headers continue to be larger than body text). (Level A)

Implementing Guideline 1.5 - Provide volume configuration.

Summary: The user can adjust the volume of each audio track relative to the global volume level (1.5.1).

1.5.1 Global Volume:

The user can independently adjust the volume of all audio tracks, relative to the global volume level set through operating environment mechanisms. (Level A)

Implementing Guideline 1.6 - Provide synthesized speech configuration.

Summary: If synthesized speech is produced, the user can specify speech rate and volume (1.6.1), pitch and pitch range (1.6.2), and synthesizer speech characteristics like emphasis (1.6.3) and features like spelling (1.6.4).

1.6.1 Speech Rate, Volume, and Voice:

If synthesized speech is produced, the user can specify the following: (Level A)

  1. speech rate,
  2. speech volume (independently of other sources of audio), and
  3. voice, when more than one voice option is available

1.6.2 Speech Pitch and Range:

If synthesized speech is produced, the user can specify the following if offered by the speech synthesizer: (Level AA)

  1. pitch (average frequency of the speaking voice), and
  2. pitch range (variation in average frequency)

Note: Because the technical implementations of text to speech engines vary (e.g., formant-based synthesis or concatenative synthesis), a specific engine may not support varying pitch or pitch range. A user agent will expose the availability of pitch and pitch range control if the currently selected or installed text to speech engine offers this capability.

1.6.3 Advanced Speech Characteristics:

The user can adjust all of the speech characteristics offered by the speech synthesizer. (Level AAA)

1.6.4 Synthesized Speech Features:

If synthesized speech is produced, the following features are provided: (Level AA)

  1. user-defined extensions to the synthesized speech dictionary,
  2. "spell-out", where text is spelled one character at a time, or according to language-dependent pronunciation rules,
  3. at least two ways of speaking numerals: spoken as individual digits and punctuation (e.g. "one two zero three point five" for 1203.5 or "one comma two zero three point five" for 1,203.5), and spoken as full numbers are spoken (e.g. "one thousand, two hundred and three point five" for 1203.5),
  4. at least two ways of speaking punctuation: spoken literally, and with punctuation understood from natural pauses.
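The digit-by-digit numeral mode in item 3 can be sketched as a simple character-to-word mapping (the word table and function name are illustrative, not part of any speech API):

```javascript
// Sketch of the "individual digits and punctuation" numeral mode in
// SC 1.6.4. The word table is an assumption for English.
const DIGIT_WORDS = {
  "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
  "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
  ".": "point", ",": "comma"
};

function speakAsDigits(numeral) {
  return [...numeral].map(ch => DIGIT_WORDS[ch] ?? ch).join(" ");
}
```

The full-number mode ("one thousand, two hundred and three point five") requires language-dependent number-to-words rules and is not shown here.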

Implementing Guideline 1.7 - Enable Configuration of User Stylesheets.

Summary: The user agent shall support user stylesheets (1.7.1) and the user can choose which if any user-supplied (1.7.2) and author-supplied (1.7.3) stylesheets to use. The user agent will allow users to save user stylesheets (1.7.4).

1.7.1 Support User Stylesheets:

If the user agent supports a mechanism for authors to supply stylesheets, the user agent also provides a mechanism for users to supply stylesheets. (Level A)

1.7.2 Apply User Stylesheets:

If user style sheets are supported, then the user can enable or disable user stylesheets for: (Level A)

  1. all pages on specified websites, or
  2. all pages
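The per-site switching could be modeled as a host lookup that falls back to the all-pages setting. A sketch under an assumed settings shape:

```javascript
// Sketch for SC 1.7.2: decide whether the user stylesheet applies on
// a given host. The settings object shape is an assumption.
function userSheetEnabled(host, settings) {
  if (host in settings.perSite) return settings.perSite[host];
  return settings.global; // fall back to the all-pages setting
}
```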

1.7.3 Author Style Sheets:

If the user agent supports a mechanism for authors to supply stylesheets, the user can disable the use of author style sheets on the current page. (Level A)

1.7.4 Save Copies of Stylesheets:

The user can save copies of the stylesheets referenced by the current page, in order to edit and load the copies as user stylesheets. (Level AA)

Implementing Guideline 1.8 - Help users to use and orient within windows and viewports.

Summary: The user agent provides programmatic and visual cues to keep the user oriented. These include highlighting the viewport (1.8.1), keeping the focus within the viewport (1.8.2 & 1.8.7), resizing the viewport (1.8.3), providing scrollbars that identify when content is outside the visible region (1.8.4) and which portion is visible (1.8.5), changing the size of graphical content with zoom (1.8.6 & 1.8.12), and restoring the focus and point of regard when the user returns to a previously viewed page (1.8.8). Users can set preferences for whether new windows or tabs open automatically (1.8.9) and whether they automatically get focus (1.8.10), and can specify that all viewports have the same user interface elements (1.8.11). The user can mark items in a webpage and use shortcuts to navigate back to marked items (1.8.13).

1.8.1 Highlight Viewport:

The viewport with the input focus is highlighted and the user can customize attributes of the highlighting mechanism (e.g. shape, size, stroke width, color, blink rate). The viewport can include nested viewports and containers. (Level A)

1.8.2 Move Viewport to Selection and Focus:

When a viewport's selection or input focus changes, the viewport's content moves as necessary to ensure that the new selection or input focus location is at least partially in the visible portion of the viewport. (Level A)

1.8.3 Resize Viewport:

The user can resize graphical viewports within the limits of the display, overriding any values specified by the author. (Level A)

1.8.4 Viewport Scrollbars:

When the rendered content extends beyond the viewport dimensions, users can have graphical viewports include scrollbars, overriding any values specified by the author. (Level A)

1.8.5 Indicate Viewport Position:

The user can determine the viewport's position relative to the full extent of the rendered content. (Level A)

1.8.6 Zoom:

The user can rescale content within graphical viewports as follows: (Level A)

  1. Zoom in: to at least 500% of the default size; and
  2. Zoom out: to at least 10% of the default size, so the content fits within the height or width of the viewport.
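The minimum zoom range can be expressed as a clamp on the requested scale factor (1.0 = default size); the function name is illustrative:

```javascript
// Sketch for SC 1.8.6: constrain the zoom factor to at least the
// required 10%..500% range. A conforming user agent may allow more.
function clampZoom(requested) {
  return Math.min(5.0, Math.max(0.1, requested));
}
```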

1.8.7 Maintain Point of Regard:

To the extent possible, the point of regard remains visible and at the same location within the viewport when the viewport is resized, when content is zoomed or scaled, or when content formatting is changed. (Level A)

1.8.8 Viewport History:

For user agents that implement a viewport history mechanism (e.g. "back" button), the user can return to any state in the viewport history that is allowed by the content, including a restored point of regard, input focus and selection. (Level AA)

1.8.9 Open on Request:

The user can specify whether author content can open new top-level viewports (e.g. windows or tabs). (Level AA)

1.8.10 Do Not Take Focus:

If new top-level viewports (e.g. windows or tabs) are configured to open without explicit user request, the user can specify whether or not top-level viewports take the active keyboard focus when they open. (Level AA)

1.8.11 Same UI:

The user can specify that all top-level viewports (e.g. windows or tabs) follow the defined user interface configuration. (Level AA)

1.8.12 Reflowing Zoom:

The user can request that when reflowable content in a graphical viewport is rescaled, it is reflowed so that one dimension of the content fits within the height or width of the viewport. (Level AA)

Note: User agents are encouraged to allow users to override author instructions not to wrap content (e.g., nowrap).

1.8.13 Webpage Bookmarks:

The user can mark items in a webpage, then use shortcuts to navigate back to marked items. The user can specify whether a navigation mark disappears after a session, or is persistent across sessions. (Level AAA)

Implementing Guideline 1.9 - Provide alternative views.

Summary: The user can view the source of content (1.9.2), or an outline view of important elements (1.9.1).

1.9.1 Outline View:

Users can view a navigable outline of rendered content composed of labels for important elements, and can move focus efficiently to these elements in the main viewport. (Level AA)

Note: The important elements depend on the web content technology, but may include headings, table captions, and content sections.

1.9.2 Source View:

The user can view all source text that is available to the user agent. (Level AAA)

Implementing Guideline 1.10 - Provide element information.

Summary: The user agent presents information about content relationships (e.g. form labels, table headers) (1.10.1), and the path of elements from the root of the hierarchy to the currently focused or selected element (1.10.2).

1.10.1 Access Relationships:

The user can access explicitly-defined relationships based on the user's position in content (e.g. show the label of a form control, show the headers of a table cell). (Level AA)

1.10.2 Access to Element Hierarchy:

The user can determine the path of element nodes going from the root element of the element hierarchy to the currently focused or selected element. (Level AAA)
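Computing such a path is a walk from the focused node up to the root. A sketch, assuming nodes expose `name` and `parent` properties (an assumption about the internal representation, not a DOM API):

```javascript
// Sketch for SC 1.10.2: the path of element nodes from the root of the
// hierarchy to the currently focused or selected element.
function pathToRoot(node) {
  const path = [];
  for (let n = node; n; n = n.parent) path.unshift(n.name);
  return path;
}
```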

PRINCIPLE 2. Ensure that the user interface is operable

Implementing Guideline 2.1 - Ensure full keyboard access. [Return to Guideline]

Summary: Users can operate all functions (2.1.1) and move focus (2.1.2) using just the keyboard. Users can activate important or common features with shortcut keys (2.1.6), escape keyboard traps (2.1.3), specify that selecting an item in a dropdown list or menu not activate that item or move to a new web page (2.1.4), and use standard keys for that platform (2.1.5).

2.1.1 Keyboard Operation:

All functionality can be operated via the keyboard using sequential or direct keyboard commands that do not require specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g. free hand drawing). This does not forbid and should not discourage providing other input methods in addition to keyboard operation including mouse, touch, gesture and speech. (Level A)

2.1.2 Keyboard Focus:

Every viewport has an active or inactive keyboard focus at all times. (Level A)

2.1.3 No Keyboard Trap:

If keyboard focus can be moved to a component using a keyboard interface (including nested user agents), then focus can be moved away from that component using only a keyboard interface. If this requires more than unmodified arrow or tab keys (or other standard exit methods), users are advised of the method for moving focus away. (Level A)

2.1.4 Separate Selection from Activation:

The user can specify that focus and selection can be moved without causing further changes in focus, selection, or the state of controls, by either the user agent or author content. (Level A)

2.1.5 Follow Text Keyboard Conventions:

The user agent follows keyboard conventions for the operating environment. (Level A)

2.1.6 Efficient Keyboard Access:

The user agent user interface includes mechanisms to make keyboard access more efficient than sequential keyboard access. (Level A)

Implementing Guideline 2.2 - Provide sequential navigation [Return to Guideline]

Summary: Users can use the keyboard to navigate sequentially (2.2.3) to all the operable elements (2.2.1) in the viewport as well as between viewports (2.2.2). Users can optionally disable wrapping or request a signal when wrapping occurs (2.2.4).

2.2.1 Sequential Navigation Between Elements:

The user can move the keyboard focus backwards and forwards through all recognized enabled elements in the current viewport. (Level A)

2.2.2 Sequential Navigation Between Viewports:

The user can move the keyboard focus backwards and forwards between viewports, without having to sequentially navigate all the elements in a viewport. (Level A)

2.2.3 Default Navigation Order:

If the author has not specified a navigation order, the default sequential navigation order is the document order. (Level A)

2.2.4 Options for Wrapping in Navigation:

The user can prevent sequential navigation from wrapping the focus at the beginning or end of a document, and can request notification when such wrapping occurs. (Level AA)
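The wrap option can be sketched as a focus-advance function that either stops at the last element or wraps and fires a notification. The options object and the `onWrap` callback are assumptions, standing in for whatever notification the user agent presents (e.g. an earcon or status message):

```javascript
// Sketch for SC 2.2.4: advance sequential focus with optional wrapping
// and a notification hook when a wrap occurs.
function nextFocus(index, count, { wrap = true, onWrap = () => {} } = {}) {
  if (index + 1 < count) return index + 1;
  if (!wrap) return index; // stay on the last element
  onWrap();                // notify the user that navigation wrapped
  return 0;
}
```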

Implementing Guideline 2.3 - Provide direct navigation and activation [Return to Guideline]

Summary: Users can navigate directly (e.g. keyboard shortcuts) to important elements (2.3.1), with the option of immediate activation of operable elements (2.3.3). The user can have direct commands displayed with their associated elements, making the commands easier to discover (2.3.2 & 2.3.4). The user can remap and save direct commands (2.3.5).

2.3.1 Direct Navigation to Important Elements:

The user can navigate directly to any important elements (e.g. structural or operable) in rendered content. (Level AA)

2.3.2 Present Direct Commands from Rendered Content (enhanced):

The user can have any recognized direct commands in rendered content (e.g. accesskey, landmark) be presented with their associated elements (e.g. Alt+R to reply to a web email). (Level AA)

2.3.3 Direct activation of Enabled Elements:

The user can move directly to and activate any enabled element in rendered content. (Level A)

2.3.4 Present Direct Commands in User Interface:

The user can have any direct commands in the user agent user interface (e.g. keyboard shortcuts) be presented with their associated user interface controls (e.g. "Ctrl+S" displayed on the "Save" menu item and toolbar button). (Level AA)

2.3.5 Customize Keyboard Commands:

The user can override any keyboard shortcut including recognized author supplied shortcuts (e.g. accesskeys) and user agent user interface controls, except for conventional bindings for the operating environment (e.g. arrow keys for navigating within menus). The rebinding options must include single-key and key-plus-modifier keys if available in the operating environment. The user must be able to save these settings beyond the current session. (Level AA)

Implementing Guideline 2.4 - Provide text search. [Return to Guideline]

Summary: Users can search rendered content (2.4.1) forward or backward (2.4.2) and can have the matched content highlighted in the viewport (2.4.3). The user is notified if there is no match (2.4.4). Users can also search by case and for text within alternative content (2.4.5).

2.4.1 Text Search:

The user can perform a search within rendered content (e.g. not hidden with a style), including rendered text alternatives and rendered generated content, for any sequence of printing characters from the document character set. (Level A)

2.4.2 Find Direction:

The user can search forward or backward in rendered content. (Level A)

2.4.3 Match Found:

When a search operation produces a match, the matched content is highlighted, the viewport is scrolled if necessary so that the matched content is within its visible area, and the user can search from the location of the match. (Level A)

2.4.4 Alert on Wrap or No Match:

The user can be notified when there is no match to a search operation. The user can be notified when the search continues from the beginning or end of content. (Level A)

2.4.5 Search alternative content:

The user can perform text searches within textual alternative content (e.g. text alternatives for non-text content, captions) even when the textual alternative content is not rendered onscreen. (Level AA)
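Searching unrendered alternatives means matching the query against every textual field associated with an element, not just its rendered text. A sketch with assumed field names (`text`, `alt`, `captions`):

```javascript
// Sketch for SC 2.4.5: include textual alternative content in search
// even when it is not rendered onscreen. The element shape is assumed.
function matches(element, query) {
  const q = query.toLowerCase();
  const haystacks = [element.text, element.alt, element.captions].filter(Boolean);
  return haystacks.some(t => t.toLowerCase().includes(q));
}
```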

Implementing Guideline 2.5 - Provide structural navigation. [Return to Guideline]

Summary: Users can view their location in the content hierarchy (2.5.1), navigate by structural elements (2.5.2), and configure which elements are used for structural navigation (2.5.3).

2.5.1 Location in Hierarchy:

When the user agent is presenting hierarchical information, but the hierarchy is not reflected in a standardized fashion in the DOM or platform accessibility services, the user can view the path of nodes leading from the root of the hierarchy to a specified element. (Level AA)

2.5.2 Navigate by structural element:

The user agent provides at least the following types of structural navigation, where the structure types exist: (Level AA)

  1. by heading
  2. within tables

2.5.3 Configure Elements for Structural Navigation:

The user can configure sets of important elements (including element types) for structured navigation and hierarchical/outline view. (Level AAA)

Implementing Guideline 2.6 - Provide access to event handlers [Return to Guideline]

Summary: Users can interact with web content by mouse, keyboard, voice input, gesture, or a combination of input methods. Users can discover which event handlers (e.g. onmouseover) are available for an element and activate an element's events individually (2.6.1).

2.6.1 Access to input methods:

The user can discover recognized input methods explicitly associated with an element, and activate those methods in a modality independent manner. (Level AA)
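A sketch of modality-independent access, assuming the user agent keeps a per-element map of recognized handlers (the `handlers` map is an assumption about the internal representation, not a DOM API):

```javascript
// Sketch for SC 2.6.1: expose an element's recognized input methods so
// any of them can be activated regardless of the original input modality.
function listInputMethods(element) {
  return Object.keys(element.handlers ?? {});
}

function activate(element, method) {
  const handler = (element.handlers ?? {})[method];
  if (!handler) throw new Error(`no recognized handler: ${method}`);
  handler(); // invoked directly, independent of mouse/keyboard/touch
}
```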

Implementing Guideline 2.7 - Configure and store preference settings [Return to Guideline]

Summary: Users can restore preference settings to default (2.7.2), and accessibility settings persist between sessions (2.7.1). Users can manage multiple sets of preference settings (2.7.3), and adjust preference settings outside the user interface so the current user interface does not prevent access (2.7.4). It's also recommended that groups of settings can be transported to compatible systems (2.7.5).

2.7.1 Persistent Accessibility Settings:

User agent accessibility preference settings persist between sessions. (Level A)

2.7.2 Restore all to default:

The user can restore all preference settings to default values. (Level A)

2.7.3 Multiple Sets of Preference Settings:

The user can save and retrieve multiple sets of user agent preference settings. (Level AA)

2.7.4 Change preference settings outside the user interface:

The user can adjust any preference settings required to meet the User Agent Accessibility Guidelines (UAAG) 2.0 from outside the user agent user interface. (Level AA)

2.7.5 Portable Preference Settings:

The user can transfer all compatible user agent preference settings between computers. (Level AAA)

Implementing Guideline 2.8 - Customize display of GUI controls [Return to Guideline]

Summary: It's recommended that users can add, remove and configure the position of graphical user agent controls and restore them to their default settings (2.8.1).

2.8.1 Customize display of controls representing user interface commands, functions, and extensions:

The user can customize which user agent commands, functions, and extensions are displayed within the user agent's user interface as follows: (Level AA)

  1. Show: The user can choose to display any controls available within the user agent user interface, including user-installed extensions. It is acceptable to limit the total number of controls that are displayed onscreen.
  2. Simplify: The user can simplify the default user interface by choosing to display only commands essential for basic operation (e.g. by hiding some controls).
  3. Reposition: The user can choose to reposition individual controls within containers (e.g. toolbars or tool palettes), as well as reposition the containers themselves to facilitate physical access (e.g. to minimize hand travel on touch screens, or to facilitate preferred hand access on handheld mobile devices).
  4. Assign Activation Keystrokes or Gestures: The user can choose to view, assign or change default keystrokes or gestures used to activate controls.
  5. Reset: The user has the option to reset the containers and controls to their original configuration.

2.8.2 Reset Toolbar Configuration:


Implementing Guideline 2.9 - Allow time-independent interaction. [Return to Guideline]

Summary: Users can extend the time limit for user input when such limits are controllable by the user agent (2.9.1); by default, the user agent shows the progress of content in the process of downloading (2.9.2).

2.9.1 Adjustable Timing:

Where time limits for user input are recognized and controllable by the user agent, the user can extend the time limits. (Level A)
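Where the user agent controls a time limit, 2.9.1 amounts to tracking the deadline and pushing it back on user request instead of timing out. The `AdjustableTimeout` class below is a hypothetical sketch of that bookkeeping; real user agents would wire it to actual timers and a UI prompt.

```javascript
// Sketch of a user-extensible time limit (2.9.1).
// Times are plain millisecond values so the logic is testable without timers.
class AdjustableTimeout {
  constructor(limitMs, now = 0) {
    this.deadline = now + limitMs;
  }
  // Called on explicit user request, e.g. from a "Need more time?" prompt.
  extend(extraMs) {
    this.deadline += extraMs;
    return this.deadline;
  }
  expired(now) {
    return now >= this.deadline;
  }
}
```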

2.9.2 Retrieval Progress:

By default, the user agent shows the progress of content retrieval. (Level A)

Implementing Guideline 2.10 - Help users avoid flashing that could cause seizures. [Return to Guideline]

Summary: To help users avoid seizures, the default configuration prevents the browser user interface and rendered content from flashing more than three times a second above a luminance or color threshold (2.10.1), or from flashing more than three times a second regardless of threshold (2.10.2).

2.10.1 Three Flashes or Below Threshold:

In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period, unless the flash is below the general flash and red flash thresholds. (Level A)
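The counting part of this check can be sketched as a sliding one-second window over flash events. The function below is an assumption-laden illustration: it takes timestamps (in seconds) of flashes already determined to exceed the general or red flash thresholds; the luminance and color analysis itself is out of scope here.

```javascript
// Rough check for the "more than three flashes in any one-second period" limit.
// flashTimes: timestamps in seconds of threshold-exceeding flashes.
function violatesFlashLimit(flashTimes, maxFlashes = 3) {
  const t = [...flashTimes].sort((a, b) => a - b);
  for (let i = 0; i < t.length; i++) {
    // Count flashes in the one-second window starting at t[i].
    let count = 0;
    for (let j = i; j < t.length && t[j] < t[i] + 1.0; j++) count++;
    if (count > maxFlashes) return true;
  }
  return false;
}
```

Note that exactly three flashes in a second is permitted; only a fourth within the same window violates the limit.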

2.10.2 Three Flashes:

In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period (regardless of whether or not the flash is below the general flash and red flash thresholds). (Level AAA)

Implementing Guideline 2.11 - Provide control of content that may reduce accessibility. [Return to Guideline]

Summary: The user can present placeholders for time-based media (2.11.1) and executable regions (2.11.2), or block all executable content (2.11.3); adjust playback (2.11.4), stop/pause/resume (2.11.5), navigate by time (2.11.6) or semantic structure (2.11.7), and specify tracks for prerecorded time-based media (2.11.8); and adjust contrast and brightness of visual time-based media (2.11.9).

Applicability Notes:

Guideline 2.11 and its success criteria only apply to images, animations, video, audio, etc. that the user agent can recognize.

2.11.1 Time-Based Media Load-Only:

The user can override the play on load of recognized time-based media content such that the content is not played until explicit user request. (Level A)
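A play-on-load override boils down to a load-time policy decision plus an explicit-play path. The sketch below uses a plain object with `autoplay`/`playing` fields as a stand-in for a real media element; the function names are hypothetical.

```javascript
// Play-on-load override sketch (2.11.1): with the override enabled,
// media marked to autoplay is loaded paused.
function applyLoadPolicy(media, overrideAutoplay) {
  media.playing = media.autoplay && !overrideAutoplay;
  return media.playing;
}

// Playback only starts on explicit user request.
function userPlay(media) {
  media.playing = true;
  return media.playing;
}
```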

2.11.2 Execution Placeholder:

The user can render a placeholder instead of executable content that would normally be contained within an on-screen area (e.g. Applet, Flash), until explicit user request to execute. (Level A)

2.11.3 Execution Toggle:

The user can turn on/off the execution of executable content that would not normally be contained within a particular area (e.g. JavaScript). (Level A)

2.11.4 Playback Rate Adjustment for Prerecorded Content:

The user can adjust the playback rate of prerecorded time-based media content, such that all of the following are true: (Level A)

  1. The user can adjust the playback rate of the time-based media tracks to between 50% and 250% of real time.
  2. Speech whose playback rate has been adjusted by the user maintains pitch in order to limit degradation of the speech quality.
  3. Audio and video tracks remain synchronized across this required range of playback rates.
  4. The user agent provides a function that resets the playback rate to normal (100%).
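The rate requirements above can be sketched as clamping logic on a stand-in player object. Real user agents would drive an actual media pipeline and preserve pitch in the audio stack (browsers expose this as the HTMLMediaElement `preservesPitch` flag); the `player` object and function names here are illustrative assumptions.

```javascript
// Sketch of the 2.11.4 playback-rate range and reset requirements.
const MIN_RATE = 0.5; // 50% of real time
const MAX_RATE = 2.5; // 250% of real time

function setPlaybackRate(player, rate) {
  // Clamp the requested rate into the required range.
  player.rate = Math.min(MAX_RATE, Math.max(MIN_RATE, rate));
  player.preservesPitch = true; // keep speech intelligible at altered rates
  return player.rate;
}

function resetPlaybackRate(player) {
  player.rate = 1.0; // back to normal (100%)
  return player.rate;
}
```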

2.11.5 Stop/Pause/Resume Time-Based Media:

The user can stop, pause, and resume rendered audio and animation content (including video, animated images, and changing text) that lasts three or more seconds at its default playback rate. (Level A)

2.11.6 Navigate Time-Based Media:

The user can navigate along the timebase using a continuous scale, and by relative time units within rendered audio and animations (including video and animated images) that last three or more seconds at their default playback rate. (Level A)
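Relative-time navigation along the timebase reduces to a clamped seek. The `media` object below (with `currentTime`/`duration` fields, as on HTMLMediaElement) is a plain stand-in, and `seekRelative` is a hypothetical helper, not a platform API.

```javascript
// Relative-time navigation (2.11.6): jump by a signed offset in seconds,
// clamped to the valid range [0, duration].
function seekRelative(media, offsetSeconds) {
  const target = media.currentTime + offsetSeconds;
  media.currentTime = Math.min(media.duration, Math.max(0, target));
  return media.currentTime;
}
```

Binding such a helper to keys (e.g. jump back 5 seconds) gives users a modality-independent alternative to dragging a seek bar.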

2.11.7 Semantic Navigation of Time-Based Media:

The user can navigate by semantic structure within the time-based media, such as by chapters or scenes present in the media (Level AA).

2.11.8 Track Enable/Disable of Time-Based Media:

During time-based media playback, the user can determine which tracks are available and select or deselect tracks, overriding global default settings, such as captions or audio descriptions. (Level AA)

2.11.9 Video Contrast and Brightness:

Users can adjust the contrast and brightness of visual time-based media. (Level AAA)

Implementing Guideline 2.12 - Other Input Devices [Return to Guideline]

Summary: For all input devices supported by the platform, the user agent should be compatible with platform text input features (2.12.1) and let the user perform all functions other than text input (2.12.2). It is also encouraged, where possible, to let the user operate all functionality, including text input, with any supported input device (2.12.3).

2.12.1 Support Platform Text Input Devices:

If the platform supports text input using an input device, the user agent is compatible with this functionality. (Level A)

2.12.2 Operation With Any Device:

If an input device is supported by the platform, all user agent functionality other than text input can be operated using that device. (Level AA)

2.12.3 Text Input With Any Device:

If an input device is supported by the platform, all user agent functionality including text input can be operated using that device. (Level AAA)

PRINCIPLE 3: Ensure that the user interface is understandable

Implementing Guideline 3.1 - Help users avoid unnecessary messages. [Return to Guideline]

Summary: Users can turn off non-essential messages from the author or user agent.

3.1.1 Reduce Interruptions:

The user can avoid or defer recognized non-essential or low priority messages and updating/changing information in the user agent user interface and rendered content. (Level AA)

Implementing Guideline 3.2 - Help users avoid and correct mistakes. [Return to Guideline]

Summary: Users can have form submissions require confirmation (3.2.1), go back after navigating (3.2.2), and have their text checked for spelling errors (3.2.3).

3.2.1 Form Submission:

The user can specify whether or not recognized form submissions must be confirmed. (Level AA)

3.2.2 Back Button:

The user can reverse recognized navigation between web addresses (e.g. standard "back button" functionality). (Level AA)

3.2.3 Provide spell checking functionality:

User agents provide spell checking functionality for text created inside the user agent. (Level AA)

3.2.4 Text Entry Undo:

The user can reverse recognized text entry actions prior to submission. (Level A)

Note: Submission can be triggered in many different ways, such as clicking a submit button, typing a key in a control with an onkeypress event, or by a script responding to a timer.
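Text entry undo (3.2.4) can be sketched as a history stack that is sealed at submission. `UndoableField` is a hypothetical illustration of the bookkeeping, not a real widget API.

```javascript
// Sketch of reversible text entry prior to submission (3.2.4).
class UndoableField {
  constructor() {
    this.value = "";
    this.history = [];   // previous values, most recent last
    this.submitted = false;
  }
  type(text) {
    if (this.submitted) throw new Error("cannot edit after submission");
    this.history.push(this.value); // record state before the edit
    this.value += text;
  }
  undo() {
    // Undo is only available before submission.
    if (this.submitted || this.history.length === 0) return false;
    this.value = this.history.pop();
    return true;
  }
  submit() {
    this.submitted = true;
    return this.value;
  }
}
```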

3.2.5 Settings Change Confirmation:

If the user agent provides mechanisms for changing its user interface settings, it either allows the user to reverse the setting changes, or the user can require user confirmation to proceed. (Level A)

Implementing Guideline 3.3 - Document the user agent user interface including accessibility features. [Return to Guideline]

Summary: User documentation is available in an accessible format (3.3.1), it documents accessibility features (3.3.2), documents changes between versions (3.3.3), provides a centralized view of UAAG 2.0 conformance (3.3.4), and is available as context-sensitive help in the user agent (3.3.5).

3.3.1 Accessible documentation:

The product documentation is available in a format that meets success criteria of WCAG 2.0 Level "A" or greater. (Level A)

3.3.2 Document Accessibility Features:

All features of the user agent that meet User Agent Accessibility Guidelines 2.0 success criteria are documented. (Level A)

3.3.3 Changes Between Versions:

Changes to features that meet UAAG 2.0 success criteria since the previous user agent release are documented. (Level AA)

3.3.4 Centralized View:

There is a dedicated section of the documentation that presents a view of all features of the user agent necessary to meet the requirements of User Agent Accessibility Guidelines 2.0. (Level AAA)

Implementing Guideline 3.4 - The user agent must behave in a predictable fashion. [Return to Guideline]

Summary: Users can prevent non-requested focus changes (3.4.1).

3.4.1 Avoid unpredictable focus

The user can prevent focus changes that are not a result of explicit user request. (Level A)

PRINCIPLE 4: Facilitate programmatic access

Implementing Guideline 4.1 - Facilitate programmatic access to assistive technology [Return to Guideline]

Summary: Be compatible with assistive technologies by supporting platform standards (4.1.1), including providing information about all menus, buttons, dialogs, etc. (4.1.2, 4.1.6), access to DOMs (4.1.4), and access to structural relationships and meanings, such as what text or image labels a control or serves as a heading (4.1.5). Where something can't be made accessible, provide an accessible alternative version, such as a standard window in place of a customized window (4.1.3). Make sure that programmatic exchanges are quick and responsive (4.1.7).

4.1.1 Platform Accessibility Services:

The user agent supports relevant platform accessibility services. (Level A)

4.1.2 Name, Role, State, Value, Description:

For all user interface components including user interface, rendered content, generated content, and alternative content, the user agent makes available the name, role, state, value, and description via platform accessibility services. (Level A)

4.1.3 Accessible Alternative:

If a component of the user agent user interface cannot be exposed through platform accessibility services, then the user agent provides an equivalent alternative that is exposed through the platform accessibility service. (Level A)

4.1.4 Programmatic Availability of DOMs:

If the user agent implements one or more DOMs, they must be made programmatically available to assistive technologies. (Level A)

4.1.5 Write Access:

If the user can modify the state or value of a piece of content through the user interface (e.g., by checking a box or editing a text area), the same degree of write access is available programmatically. (Level A)
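The usual way to satisfy write-access parity is to route UI events and programmatic (assistive technology) requests through the same state-mutation path. `AccessibleCheckbox` below is a hypothetical component sketch of that design, not a platform accessibility API.

```javascript
// Sketch of matching write access (4.1.5): anything the UI can change,
// the accessibility layer can change through the same setter.
class AccessibleCheckbox {
  constructor() {
    this.checked = false;
  }
  // Single mutation path shared by both access routes.
  setChecked(value) {
    this.checked = Boolean(value);
    return this.checked;
  }
  // UI path (e.g. a pointer click) funnels into the setter...
  handleClick() {
    return this.setChecked(!this.checked);
  }
  // ...and assistive technologies call the setter directly.
  programmaticSet(value) {
    return this.setChecked(value);
  }
}
```

Because both routes converge on one setter, any validation or change notification fires identically for UI and programmatic writes.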

4.1.6 Expose Accessible Properties:

If any of the following properties are supported by the platform accessibility services, make the properties available to the accessibility platform architecture: (Level A)

  1. the bounding dimensions and coordinates of onscreen elements
  2. font family of text
  3. font size of text
  4. foreground color of text
  5. background color of text
  6. change state/value notifications
  7. selection
  8. highlighting
  9. input device focus
  10. direct keyboard commands
  11. underline of menu items (keyboard command/shortcuts)

4.1.7 Timely Communication:

For APIs implemented to satisfy the requirements of UAAG 2.0, ensure that programmatic exchanges proceed at a rate such that users do not perceive a delay. (Level A)

PRINCIPLE 5: Comply with applicable specifications and conventions

Implementing Guideline 5.1 - Comply with applicable specifications and conventions. [Return to Guideline]

Summary: When the browser's menus, buttons, dialogs, etc. are authored in HTML or similar standards, they need to meet W3C's Web Content Accessibility Guidelines.

5.1.1 WCAG Compliant:

Web-based user agent user interfaces meet the WCAG 2.0 success criteria: Level A to meet WCAG 2.0 Level A success criteria; Level AA to meet WCAG 2.0 Level A and AA success criteria; Level AAA to meet all WCAG 2.0 success criteria. (Level AAA)

Note: This success criterion does not apply to non-Web-based user agent user interfaces, but does include any parts of non-Web-based user agents that are Web-based (e.g. help systems).

5.1.2 Implement accessibility features of content specs:

Implement and cite in the conformance claim the accessibility features of content specifications. Accessibility features are those that are either (Level A):

  1. Identified as such in the specification or
  2. Allow authors to satisfy a requirement of WCAG

5.1.3 Implement Accessibility Features of platform:

If the user agent contains non-web-based user interfaces, then those user interfaces follow user interface accessibility guidelines for the platform. (Level A)

5.1.4 Handle Unrendered Technologies:

If the user agent does not render a technology, the user can choose a way to handle content in that technology (e.g. by launching another application or by saving it to disk). (Level A)

5.1.5 Alternative content handlers:

The user can select content elements and have them rendered in alternative viewers. (Level AA)

5.1.6 Enable Reporting of User Agent Accessibility Faults:

The user agent provides a mechanism for users to report user agent accessibility issues. (Level AAA)

Applicability Note:

When a rendering requirement of another specification contradicts a requirement of UAAG 2.0, the user agent may disregard the rendering requirement of the other specification and still satisfy this guideline.


This section is normative.

Conformance means that the user agent satisfies the success criteria defined in the guidelines section. This section lists requirements for conformance and conformance claims.

Conformance Requirements

In order for a user agent to conform to UAAG 2.0, one of the following levels of conformance is met in full.

Note: Although conformance can only be achieved at the stated levels, developers are encouraged to report (in their claim) any progress toward meeting success criteria from all levels beyond the achieved level of conformance.

Conformance Claims

User agents can conform to UAAG 2.0 without making a claim. If a conformance claim is made, the conformance claim must meet the following conditions and include the following information:

Conditions on Conformance Claims

If a conformance claim is made, the conformance claim must meet the following conditions:

Components of UAAG 2.0 Conformance Claims

  1. Claimant name and affiliation
  2. Claimant contact information
  3. Date of the claim
  4. Conformance level satisfied
  5. User agent information:
    1. Name and manufacturer
    2. Version number or version range
    3. Required patches or updates, human language of the user interface and documentation
    4. Configuration changes to the user agent that are needed to meet the success criteria (e.g. ignore author foreground/background color, turn on Caret Browsing)
    5. Plugins or extensions (including version numbers) needed to meet the success criteria (e.g. mouseless browsing)
  6. Platform: Provide relevant information about the software and/or hardware platform(s) that the user agent relies on for conformance. This information may include:
    1. Name and manufacturer
    2. Version of key software components (e.g., operating system, other software environment)
    3. Hardware requirements (e.g. audio output enabled, minimum screen size: 2", bluetooth keyboard attached)
    4. Operating system(s) (e.g. Windows, Android, iOS, GNOME)
    5. Other software environment (Java, Eclipse)
    6. Host web browser when the conforming user agent is web-based (e.g. JW Player on Firefox)
    7. Configuration changes to the platform that are needed to meet the success criteria (e.g. turn on Sticky Keys, use High Contrast Mode)
  7. Platform Limitations: If the platform (hardware or operating system) does not support a capability necessary for a given UAAG 2.0 success criterion, list the success criterion and the feature (e.g. a mobile operating system does not support platform accessibility services, therefore the user agent cannot meet success criterion 4.1.2). For these listed technologies, the user agent can claim that the success criteria do not apply.
  8. Web Content Technologies: List the web content technologies rendered by the user agent that are included in the claim. If there are any web content technologies rendered by the user agent that are excluded from the conformance claim, list these separately. Examples of web content technologies include web markup languages such as HTML, XML, CSS, SVG, and MathML, image formats such as PNG, JPG and GIF, scripting languages such as JavaScript/EcmaScript, specific video codecs, and proprietary document formats.
  9. Declarations: For each success criterion, provide one of the following:
    1. a declaration of whether or not the success criterion has been satisfied; or
    2. a declaration that the success criterion is not applicable, with a rationale for why not

Limited Conformance for Extensions

This option may be used for a user agent extension or plug-in with limited functionality that wishes to claim UAAG 2.0 conformance. An extension or plug-in can claim conformance for a specific success criterion or a narrow range of success criteria as stated in the claim. All other success criteria may be denoted as Not Applicable. The extension or plug-in must not cause the combined user agent (hosting user agent plus installed extension or plug-in) to fail any success criteria that the hosting user agent would otherwise pass.

Optional Components of a UAAG 2.0 Conformance Claim

A description of how the UAAG 2.0 success criteria were met where this may not be obvious.


Neither W3C, WAI, nor UAWG takes any responsibility for any aspect or result of any UAAG 2.0 conformance claim that has not been published under the authority of the W3C, WAI, or UAWG.

Appendix A: Glossary

This glossary is normative.

a · b · c · d · e · f · g · h · i · j · k · l · m · n · o · p · q · r · s · t · u · v · w · x · y · z

accelerator key
see keyboard command
activate
To carry out the behaviors associated with an enabled element in the rendered content or a component of the user agent user interface.
active input focus
see focus
active selection
see focus
alternative content
Content or a placeholder that can be used in place of default content that may not be universally accessible. Alternative content fulfills the same purpose as the original content. Examples include text alternatives for non-text content, captions for audio, audio descriptions for video, sign language for audio, media alternatives for time-based media. See WCAG for more information.
alternative content stack
A set of alternative content items. The items may be mutually exclusive (e.g. regular contrast graphic vs. high contrast graphic) or non-exclusive (e.g. caption track that can play at the same time as a sound track).
animation
Graphical content rendered to automatically change over time, giving the user a visual perception of movement. Examples include video, animated images, scrolling text, programmatic animation (e.g. moving or replacing rendered objects).
application programming interface (API), (conventional input/output/device API)
An application programming interface (API) defines how communication may take place between applications.
assistive technology
An assistive technology:
  1. relies on services (such as retrieving Web resources and parsing markup) provided by one or more other "host" user agents. Assistive technologies communicate data and messages with host user agents by using and monitoring APIs.
  2. provides services beyond those offered by the host user agents to meet the requirements of users with disabilities. Additional services include alternative renderings (e.g. as synthesized speech or magnified content), alternative input methods (e.g. voice), additional navigation or orientation mechanisms, and content transformations (e.g. to make tables more accessible).

Examples of assistive technologies that are important in the context of UAAG 2.0 include the following:

Beyond UAAG 2.0, assistive technologies consist of software or hardware that has been specifically designed to assist people with disabilities in carrying out daily activities. These technologies include wheelchairs, reading machines, devices for grasping, text telephones, and vibrating pagers. For example, the following very general definition of "assistive technology device" comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:

Any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities.

audio
The technology of sound reproduction. Audio can be created synthetically (including speech synthesis), streamed from a live source (such as a radio broadcast), or recorded from real world sounds.
audio description - (described video, video description or descriptive narration)
An equivalent alternative that takes the form of narration added to the audio to describe important visual details that cannot be understood from the main soundtrack alone. Audio description of video provides information about actions, characters, scene changes, on-screen text, and other visual content. In standard audio description, narration is added during existing pauses in dialogue. In extended audio description, the video is paused so that there is time to add additional description.
author
The people who have worked either alone or collaboratively to create the content (e.g. content authors, designers, programmers, publishers, testers).
author styles
See Style properties
background images
Images that are rendered on the base background.
base background
The background of the content as a whole, such that no content may be layered behind it. In graphics applications the base background is often referred to as the canvas.
blinking text
Text whose visual rendering alternates between visible and invisible at any rate of change.
captions (caption)
An equivalent alternative that takes the form of text presented and synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some countries, the term "subtitle" is used to refer to dialogue only and "captions" is used as the term for dialogue plus sounds and speaker identification. In other countries, "subtitle" (or its translation) is used to refer to both. Open captions are captions that are always rendered with a visual track; they cannot be turned off. Closed captions are captions that may be turned on and off. The captions requirements of UAAG 2.0 assume that the user agent can recognize the captions as such.
Note: Other terms that include the word "caption" may have different meanings in UAAG 2.0. For instance, a "table caption" is a title for the table, often positioned graphically above or below the table. In UAAG 2.0, the intended meaning of "caption" will be clear from context.
collated text transcript
A collated text transcript is a text equivalent of a movie or other animation. It is the combination of the text transcript of the audio track and the text equivalent of the visual track. For example, a collated text transcript typically includes segments of spoken dialogue interspersed with text descriptions of the key visual elements of a presentation (actions, body language, graphics, and scene changes). See also the definitions of text transcript and audio description. Collated text transcripts are essential for people who are deaf-blind.
command, direct command, direct navigation command, direct activation command, sequential navigation command, spatial (directional) command, structural navigation command
direct commands apply to a specified item (e.g., button) or action (e.g. save function), regardless of the current focus location
direct navigation commands move focus to a specified item
direct activation commands activate the specified item (and may also move focus to it) or action
sequential navigation commands (sometimes called logical or linear navigation commands) move focus forwards and backwards through a list of items. The element list being navigated may be the list of all elements or just a subset (e.g. the list of headers, the list of links, etc.)
structural navigation commands move forwards, backwards, up and down a hierarchy
spatial commands (sometimes called directional commands) require the user to be cognizant of the spatial arrangement of items on the screen.
content (web content), empty content, reflowable content
Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions [adapted from WCAG 2.0]

empty content (which may be alternative content) is either a null value or an empty string (e.g. one that is zero characters long). For instance, in HTML, alt="" sets the value of the alt attribute to the empty string. In some markup languages, an element may have empty content (e.g. the HR element in HTML).

reflowable content is content that can be arbitrarily wrapped over multiple lines. The primary exceptions to reflowable content are graphics and video.
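The empty-vs-missing distinction above matters in practice: an explicit empty string (e.g. alt="" in HTML) marks content as intentionally decorative, while an absent attribute is simply unannotated. A minimal sketch of that classification, using plain objects as stand-ins for elements:

```javascript
// Distinguish missing, empty, and present alternative content,
// modeled on the HTML "alt" attribute. Elements are plain objects here.
function classifyAlt(element) {
  if (!("alt" in element)) return "missing"; // author gave no alternative
  if (element.alt === "") return "empty";    // explicit empty content (decorative)
  return "present";                          // usable text alternative
}
```

A user agent (or assistive technology) can use this distinction to skip decorative images while flagging genuinely unannotated ones.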

continuous scale
When interacting with a time-based media presentation, a continuous scale allows user (or programmatic) action to set the active playback position to any time point on the presentation timeline. The granularity of the positioning is determined by the smallest resolvable time unit in the media timebase.
cursor
see focus
see properties
direct command, direct navigation command, direct activation command, sequential navigation command, spatial (directional) command, structural navigation command
see command
A viewport may also have temporal dimensions, for instance when audio, speech, animations, and movies are rendered. When the dimensions (spatial or temporal) of rendered content exceed the dimensions of the viewport, the user agent provides mechanisms such as scroll bars and advance and rewind controls so that the user can access the rendered content "outside" the viewport. Examples include: when the user can only view a portion of a large document through a small graphical viewport, or when audio content has already been played.
document character set
The internal representation of data in the source content by a user agent.
document object, (Document Object Model, DOM)
The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. This is an overview of DOM-related materials here at W3C and around the web: http://www.w3.org/DOM/#what.
document source, (text source)
Text the user agent renders upon user request to view the source of specific viewport content (e.g. selected content, frame, page).
documentation
Any information that supports the use of a user agent. This information may be found, for example, in manuals, installation instructions, the help system, and tutorials. Documentation may be distributed (e.g. some files installed as part of the installation, some parts delivered on CD-ROM, others on the Web). See guideline 3.3 for information about documentation.
element, element type
UAAG 2.0 uses the terms "element" and "element type" primarily in the sense employed by the XML 1.0 specification ([XML], section 3): an element type is a syntactic construct of a document type definition (DTD) for its application. This sense is also relevant to structures defined by XML schemas. UAAG 2.0 also uses the term "element" more generally to mean a type of content (such as video or sound) or a logical construct (such as a header or list).
empty content
see content
enabled element, disabled element
An element with associated behaviors that can be activated through the user interface or through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of elements defined by implemented markup languages. A disabled element is a potentially enabled element that is not currently available for activation (e.g. a "grayed out" menu item).
equivalent alternative
An acceptable substitute for content that a user may not be able to access. An equivalent alternative fulfills essentially the same function or purpose as the original content upon presentation.
events and scripting, event handler, event type
User agents often perform a task when an event having a particular "event type" occurs, including a user interface event, a change to content, loading of content, or a request from the operating environment. Some markup languages allow authors to specify that a script, called an event handler, be executed when an event of a given type occurs. An event handler is explicitly associated with an element through scripting, markup or the DOM.
explicit user request
An interaction by the user through the user agent user interface, the focus, or the selection. User requests are made, for example, through user agent user interface controls and keyboard commands. Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device. Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." This type of error is considered an explicit user request.
focus (active input focus, active selection, cursor, focus cursor, focusable element, highlight, inactive input focus, inactive selection, input focus, keyboard focus, pointer, pointing device focus, selection, split focus, text cursor)

Hierarchical Summary of some focus terms

active input focus
The input focus location in the active viewport. The active input focus is in the active viewport, while the inactive input focus is in an inactive viewport. The active input focus is usually visibly indicated. In UAAG 2.0 "active input focus" generally refers to the active keyboard input focus.
active selection
The selection that will currently be affected by a user command, as opposed to selections in other viewports, called inactive selections, which would not currently be affected by a user command.
see support
cursor
Visual indicator showing where keyboard input will occur. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field, also called a 'caret'). Cursors are active when in the active viewport, and inactive when in an inactive viewport.
focus cursor
Indicator that highlights a user interface element to show that it has keyboard focus, e.g. a dotted line around a button, or brightened title bar on a window. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field).
focusable element
Any element capable of having input focus, e.g. link, text box, or menu item. In order to be accessible and fully usable, every focusable element should take keyboard focus, and ideally would also take pointer focus.
highlight, highlighted, highlighting
Emphasis indicated through the user interface. For example, user agents highlight content that is selected, focused, or matched by a search operation. Graphical highlight mechanisms include dotted boxes, changed colors or fonts, underlining, adjacent icons, magnification, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume ("speech prosody"). User interface items may also be highlighted, for example a specific set of foreground and background colors for the title bar of the active window. Content that is highlighted may or may not be a selection.
inactive input focus
An input focus location in an inactive viewport such as a background window or pane. The inactive input focus location will become the active input focus location when input focus returns to that viewport. An inactive input focus may or may not be visibly indicated.
inactive selection
A selection that does not have the input focus and thus does not take input events.
input focus
The place where input will occur if a viewport is active. Examples include keyboard focus and pointing device focus. Input focus can also be active (in the active viewport) or inactive (in an inactive viewport).
keyboard focus
The screen location where keyboard input will occur if a viewport is active. Keyboard focus can be active (in the active viewport) or inactive (in an inactive viewport). See keyboard interface definition for types of keyboards included and what constitutes a keyboard.
keyboard interface
Keyboard interfaces are programmatic services provided by many platforms that allow operation in a device independent manner. A keyboard interface can allow keystroke input even if particular devices do not contain a hardware keyboard (e.g., a touchscreen-controlled device can have a keyboard interface built into its operating system to support onscreen keyboards as well as external keyboards that may be connected).
Note: Keyboard-operated mouse emulators, such as MouseKeys, do not qualify as operation through a keyboard interface because these emulators use pointing device interfaces, not keyboard interfaces. [from ATAG 2.0]
pointer
Visual indicator showing where pointing device input will occur. The indicator can be moved with a pointing device or emulator such as a mouse, pen tablet, keyboard-based mouse emulator, speech-based mouse commands, or 3-D wand. A pointing device click typically moves the input focus to the pointer location. The indicator may change to reflect different states. When touch screens are used, the "pointing device" is a combination of the touch screen and the user's finger or stylus. On most systems there is no pointer (on-screen visual indication) associated with this type of pointing device.
pointing device focus
The screen location where pointer input will occur if a viewport is active. There can be multiple pointing device foci; for example, when using a screen sharing utility there is typically one for the user's physical mouse and one for the remote mouse.
selection
A user agent mechanism for identifying a (possibly empty) range of content that will be the implicit source or target for subsequent operations. The selection may be used for a variety of purposes, including for cut-and-paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of point of regard (e.g. the matched results of a search may be automatically selected). The selection should be highlighted in a distinctive manner. On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. When rendered using synthesized speech, the selection may be highlighted through changes in pitch, speed, or prosody.
split focus
A state when the user could be confused because the input focus is separated from something it is usually linked to, such as being at a different place than the selection or similar highlighting, or has been scrolled outside of the visible portion of the viewport.
text cursor
Indicator showing where keyboard input will occur in text (e.g. the flashing vertical bar in a text field, also called a caret).
globally, global configuration
A global setting is one that applies to the entire user agent or all content being rendered by it, rather than to a specific feature within the user agent or a specific document being viewed.
graphical
Information (e.g. text, colors, graphics, images, and animations) rendered for visual consumption.
highlight, highlighted, highlighting
see focus
image
Pictorial content that is static (i.e. not moving or changing). See also the definition of animation.
implement
see support
important elements
This specification intentionally does not identify which "important elements" must be navigable because this will vary by specification. What constitutes "efficient navigation" may depend on a number of factors as well, including the "shape" of content (e.g. sequential navigation of long lists is not efficient) and desired granularity (e.g. among tables, then among the cells of a given table). Refer to the Implementing document [Implementing UAAG 2.0] for information about identifying and navigating important elements.
inactive input focus
see focus
inactive selection
see focus
informative (non-normative)
see normative
input configuration
The set of bindings between user agent functionalities and user interface input mechanisms (e.g. menus, buttons, keyboard keys, and voice commands). The default input configuration is the set of bindings the user finds after installation of the software. Input configurations may be affected by author-specified bindings (e.g. through the accesskey attribute of HTML 4 [HTML4]).
input focus
see focus
keyboard (keyboard emulator, keyboard interface)
The letter, symbol and command keys or key indicators that allow a user to control a computing device. Assistive technologies have traditionally relied on the keyboard interface as a universal, or modality independent interface. In this document references to keyboard include keyboard emulators and keyboard interfaces that make use of the keyboard's role as a modality independent interface (see Modality Independence Principle). Keyboard emulators and interfaces may be used on devices which do not have a physical keyboard, such as mobile devices based on touchscreen input.
keyboard command (keyboard binding, keyboard shortcuts or accelerator keys)
A key or set of keys that are tied to a particular UI control or application function, allowing the user to navigate to or activate the control or function without traversing any intervening controls (e.g. CTRL+"S" to save a document). It is sometimes useful to distinguish keyboard commands that are associated with controls that are rendered in the current context (e.g. ALT+"D" to move focus to the address bar) from those that may be able to activate program functionality that is not associated with any currently rendered controls (e.g. "F1" to open the Help system). Keyboard commands can be triggered using a physical keyboard or keyboard emulator (e.g. on-screen keyboard or speech recognition). (See Modality Independence Principle).
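For illustration, the binding of keyboard commands to application functions described above might be sketched as follows (a minimal sketch; the function names are illustrative and not part of UAAG):

```javascript
// Minimal sketch of an input configuration: a table of keyboard
// commands bound to user agent functions. All names are illustrative.
function createKeymap() {
  const bindings = new Map();
  return {
    // bind("Ctrl+S", fn) associates a key combination with a function
    bind(combo, fn) { bindings.set(combo.toLowerCase(), fn); },
    // dispatch returns true if a binding handled the combination
    dispatch(combo) {
      const fn = bindings.get(combo.toLowerCase());
      if (fn) { fn(); return true; }
      return false;
    },
  };
}

// Usage: bind a "save" command, then simulate the user pressing Ctrl+S.
const keymap = createKeymap();
let saved = false;
keymap.bind("Ctrl+S", () => { saved = true; });
keymap.dispatch("ctrl+s"); // case-insensitive lookup
```

Because the table is data rather than hard-wired handlers, a user agent built this way can let users rebind commands, which is the basis of a configurable input configuration.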
keyboard focus
see focus
natural language
Natural language is spoken, written, or signed human language such as French, Japanese, and American Sign Language. On the Web, the natural language of content may be specified by markup or HTTP headers. Some examples include the lang attribute in HTML 4 ([HTML4] section 8.1), the xml:lang attribute in XML 1.0 ([XML], section 2.12), the hreflang attribute for links in HTML 4 ([HTML4], section 12.1.5), the HTTP Content-Language header ([RFC2616], section 14.12) and the Accept-Language request header ([RFC2616], section 14.4). See also the definition of script.
see command
non-text content (non-text element, non-text equivalent)
see text
normative, informative (non-normative) [WCAG 2.0, ATAG 2.0]
What is identified as "normative" is required for conformance (noting that one may conform in a variety of well-defined ways to UAAG 2.0). What is identified as "informative" (or, "non-normative") is never required for conformance.
notify
To make the user aware of events or status changes. Notifications can occur within the user agent user interface (e.g. a status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g. a confirmation dialog).
operating environment
The term "operating environment" refers to the environment that governs the user agent's operation, whether it is an operating system or a programming language environment such as Java.
operating system (OS)
Software that supports a computer's basic functions, such as scheduling tasks, executing applications, and managing hardware and peripherals.
Note: Many operating systems mediate communication between executing applications and assistive technology via a platform accessibility service.
override
In UAAG 2.0, the term "override" means that one configuration or behavior preference prevails over another. Generally, the requirements of UAAG 2.0 involve user preferences prevailing over author preferences and user agent default settings and behaviors. Preferences may be multi-valued in general (e.g. the user prefers blue over red or yellow), and include the special case of two values (e.g. turn on or off blinking text content).
placeholder
A placeholder is content generated by the user agent to replace author-supplied content. A placeholder may be generated as the result of a user preference (e.g. to not render images) or as repair content (e.g. when an image cannot be found). A placeholder can be any type of content, including text, images, and audio cues. A placeholder should identify the technology of the replaced object. Placeholders appear in the alternative content stack.
platform
The software and hardware environment(s) within which the user agent operates. Platforms provide a consistent operational environment. There may be layers of software in a hardware architecture, and each layer may be considered a platform. Non-web-based platforms include desktop operating systems (e.g. Linux, MacOS, Windows), mobile operating systems (e.g. Android, Blackberry, iOS, Windows Phone), and cross-OS environments (e.g. Java). Web-based platforms are other user agents. User agents may employ server-based processing, such as web content transformations, text-to-speech production, etc.
Note 1: A user agent may include functionality hosted on multiple platforms (e.g. a browser running on the desktop may include server-based pre-processing and web-based documentation).
Note 2: Accessibility guidelines for developers exist for many platforms.
platform accessibility service
A programmatic interface that is engineered to enhance communication between mainstream software applications and assistive technologies (e.g. MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for MacOSX applications, Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications). On some platforms it may be conventional to enhance communication further via implementing a DOM.
plug-in [ATAG 2.0]
A plug-in is a program that runs as part of the user agent and that is not part of content. Users generally choose to include or exclude plug-ins from their user agents.
point of regard
The point of regard is the position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard may vary. For example, it may be a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport), or a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), or a range of text (e.g. focused text). The point of regard is almost always within the viewport, but it may exceed the spatial or temporal dimensions of the viewport (see the definition of rendered content for more information about viewport dimensions). The point of regard may also refer to a particular moment in time for content that changes over time (e.g. an audio-only presentation). User agents may determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection. The stability of the point of regard is addressed by success criterion 1.8.7
pointer
see focus
pointing device focus
see focus
profile
A profile is a named and persistent representation of user preferences that may be used to configure a user agent. Preferences include input configurations, style preferences, and natural language preferences. In operating environments with distinct user accounts, profiles enable users to reconfigure software quickly when they log on. Users may share their profiles with one another. Platform-independent profiles are useful for those who use the same user agent on different devices.
prompt [ATAG 2.0]
Any user-agent-initiated request for a decision or piece of information from a user.
properties, values, and defaults
A user agent renders a document by applying formatting algorithms and style information to the document's elements. Formatting depends on a number of factors, including where the document is rendered (e.g. on screen, on paper, through loudspeakers, on a braille display, on a mobile device). Style information (e.g. fonts, colors, synthesized speech prosody) may come from the elements themselves (e.g. certain font and phrase elements in HTML), from style sheets, or from user agent settings. For the purposes of these guidelines, each formatting or style option is governed by a property and each property may take one value from a set of legal values. Generally in UAAG 2.0, the term "property" has the meaning defined in CSS 2 ([CSS2], section 3). A reference to "styles" in UAAG 2.0 means a set of style-related properties. The value given to a property by a user agent at installation is the property's default value.
recognize
Authors encode information in many ways, including in markup languages, style sheet languages, scripting languages, and protocols. When the information is encoded in a manner that allows the user agent to process it with certainty, the user agent can "recognize" the information. For instance, HTML allows authors to specify a heading with the H1 element, so a user agent that implements HTML can recognize that content as a heading. If the author creates a heading using a visual effect alone (e.g. just by increasing the font size), then the author has encoded the heading in a manner that does not allow the user agent to recognize it as a heading. Some requirements of UAAG 2.0 depend on content roles, content relationships, timing relationships, and other information supplied by the author. These requirements only apply when the author has encoded that information in a manner that the user agent can recognize. See the section on conformance for more information about applicability. User agents will rely heavily on information that the author has encoded in a markup language or style sheet language. Behaviors, style, meaning encoded in a script, and markup in an unfamiliar XML namespace may not be recognized by the user agent as easily or at all.
relative time units
Relative time units define time intervals for navigating media relative to the current point (e.g. move forward 30 seconds). When interacting with a time-based media presentation, a user may find it beneficial to move forward or backward via a time interval relative to their current position. For example, a user may find a concept unclear in a video lecture and elect to skip back 30 seconds from the current position to review what had been described. Relative time units may be preset by the user agent, configurable by the user, and/or automatically calculated based upon media duration (e.g. jump 5 seconds in a 30-second clip, or 5 minutes in a 60-minute clip). Relative time units are distinct from absolute time values such as the 2 minute mark, the half-way point, or the end.
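The interval calculation described above, including the automatic scaling by media duration and the clamping of a jump to the media's time range, might be sketched as follows (the specific ratios and function names are illustrative, not mandated by UAAG):

```javascript
// Sketch: a relative seek whose interval is derived from media duration
// (the ratios below are illustrative, not mandated by UAAG 2.0).
function relativeSeekInterval(durationSeconds) {
  // e.g. 5 s for a 30-second clip, 5 min for a 60-minute lecture
  if (durationSeconds <= 60) return 5;
  if (durationSeconds <= 600) return 30;
  return Math.round(durationSeconds / 12);
}

// Clamp the result so a jump never leaves the media's time range.
function seekRelative(position, durationSeconds, direction) {
  const step = relativeSeekInterval(durationSeconds) * direction;
  return Math.min(durationSeconds, Math.max(0, position + step));
}

// Skip back in a 10-minute video from the 20-second mark: clamps to 0.
seekRelative(20, 600, -1);
```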
rendered content, rendered text
Rendered content is the part of content that the user agent makes available to the user's senses of sight and hearing (and only those senses for the purposes of UAAG 2.0). Any content that causes an effect that may be perceived through these senses constitutes rendered content. This includes text characters, images, style sheets, scripts, and any other content that, once processed, may be perceived through sight and hearing.
The term "rendered text" refers to text content that is rendered in a way that communicates information about the characters themselves, whether visually or as synthesized speech.
In the context of UAAG 2.0, invisible content is content that is not rendered but that may influence the graphical rendering (i.e. layout) of other content. Similarly, silent content is content that is not rendered but that may influence the audio rendering of other content. Neither invisible nor silent content is considered rendered content.
repair content, repair text
Content generated by the user agent to correct an error condition. "Repair text" refers to the text portion of repair content. Error conditions that may lead to the generation of repair content include:

UAAG 2.0 does not require user agents to include repair content in the document object. Repair content inserted in the document object should conform to the Web Content Accessibility Guidelines 1.0 [WCAG10]. For more information about repair techniques for Web content and software, refer to "Techniques for Authoring Tool Accessibility Guidelines 1.0" [ATAG10-TECHS].

script
In UAAG 2.0, the term "script" almost always refers to a scripting (programming) language used to create dynamic Web content. However, in guidelines referring to the written (natural) language of content, the term "script" is used as in Unicode [UNICODE] to mean "A collection of symbols used to represent textual information in one or more writing systems."
Information encoded in (programming) scripts may be difficult for a user agent to recognize. For instance, a user agent is not expected to recognize that, when executed, a script will calculate a factorial. The user agent will be able to recognize some information in a script by virtue of implementing the scripting language or a known program library (e.g. the user agent is expected to recognize when a script will open a viewport or retrieve a resource from the Web).
selection, current selection
see focus
style properties, user agent default styles, author styles, user styles
Properties whose values determine the presentation (e.g., font, color, size, location, padding, volume, synthesized speech prosody) of content elements as they are rendered (e.g. onscreen, via loudspeaker, via braille display) by user agents. Style properties can have several origins:
user agent default styles: The default style property values applied in the absence of any author or user styles. Some web content technologies specify a default rendering; others do not.
author styles: Style property values that are set by the author as part of the content (e.g., in-line styles, author style sheets).
user styles: Style property values that are set by the user (e.g., via user agent interface settings, user style sheets).
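The precedence among these three origins that UAAG 2.0 generally requires, with user styles prevailing over author styles, which in turn prevail over user agent defaults, might be sketched as follows (a deliberately simplified model; real CSS cascading also weighs specificity and !important):

```javascript
// Sketch of origin precedence: user styles override author styles,
// which override user agent defaults. (Real CSS cascading also weighs
// selector specificity and !important; this sketch ignores those.)
function resolveStyle(property, { userAgentDefaults = {}, authorStyles = {}, userStyles = {} }) {
  if (property in userStyles) return userStyles[property];
  if (property in authorStyles) return authorStyles[property];
  return userAgentDefaults[property];
}

// The user's preferred font size prevails over the author's.
resolveStyle("font-size", {
  userAgentDefaults: { "font-size": "16px", color: "black" },
  authorStyles: { "font-size": "11px" },
  userStyles: { "font-size": "20px" },
});
```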
style sheet, user style sheet, author style sheet
A mechanism for communicating style property settings for web content, in which the style property settings are separable from other content resources. This separation is what allows author style sheets to be toggled or substituted, and user style sheets defined to apply to more than one resource. Style sheet web content technologies include Cascading Style Sheets (CSS) and Extensible Stylesheet Language (XSL). User style sheet: Style sheets specified by the user, resulting in user styles. Author style sheet: Style sheets specified by the author, resulting in author styles.
support, implement, conform
Support, implement, and conform all refer to what a developer has designed a user agent to do, but they represent different degrees of specificity. A user agent "supports" general classes of objects, such as "images" or "Japanese." A user agent "implements" a specification (e.g. the PNG and SVG image format specifications or a particular scripting language), or an API (e.g. the DOM API) when it has been programmed to follow all or part of a specification. A user agent "conforms to" a specification when it implements the specification and satisfies its conformance criteria.
synchronize
The act of time-coordinating two or more presentation components (e.g. a visual track with captions, or several tracks in a multimedia presentation). For Web content developers, the requirement to synchronize means to provide the data that will permit sensible time-coordinated rendering by a user agent. For example, Web content developers can ensure that the segments of caption text are neither too long nor too short, and that they map to segments of the visual track that are appropriate in length. For user agent developers, the requirement to synchronize means to present the content in a sensible time-coordinated fashion under a wide range of circumstances including technology constraints (e.g. small text-only displays), user limitations (e.g. slow reading speeds, large font sizes, high need for review or repeat functions), and content that is sub-optimal in terms of accessibility.
technology (web content technology) [WCAG 2.0, ATAG 2.0]
A mechanism for encoding instructions to be rendered, played or executed by user agents. Web Content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences that range from static Web pages to multimedia presentations to dynamic Web applications. Some common examples of Web content technologies include HTML, CSS, SVG, PNG, PDF, Flash, and JavaScript.
text (text content, non-text content, text element, non-text element, text equivalent, non-text equivalent)
Text used by itself refers to a sequence of characters from a markup language's document character set. Refer to the "Character Model for the World Wide Web" [CHARMOD] for more information about text and characters. Note: UAAG 2.0 makes use of other terms that include the word "text" that have highly specialized meanings: collated text transcript, non-text content, text content, non-text element, text element, text equivalent, and text transcript.

A text element adds text characters to either content or the user interface. Both in the Web Content Accessibility Guidelines 2.0 [WCAG20] and in UAAG 2.0, text elements are presumed to produce text that can be understood when rendered visually, as synthesized speech, or as Braille. Such text elements benefit at least these three groups of users:

  1. visually-displayed text benefits users who are deaf and adept in reading visually-displayed text;
  2. synthesized speech benefits users who are blind and adept in use of synthesized speech;
  3. braille benefits users who are blind, and possibly deaf-blind, and adept at reading braille.

A text element may consist of both text and non-text data. For instance, a text element may contain markup for style (e.g. font size or color), structure (e.g. heading levels), and other semantics. The essential function of the text element should be retained even if style information happens to be lost in rendering. A user agent may have to process a text element in order to have access to the text characters. For instance, a text element may consist of markup, it may be encrypted or compressed, or it may include embedded text in a binary format (e.g. JPEG).

Text content is content that is composed of one or more text elements. A text equivalent (whether in content or the user interface) is an equivalent composed of one or more text elements. Authors generally provide text equivalents for content by using the alternative content mechanisms of a specification.

A non-text element is an element (in content or the user interface) that does not have the qualities of a text element. Non-text content is composed of one or more non-text elements. A non-text equivalent (whether in content or the user interface) is an equivalent composed of one or more non-text elements.

text decoration
Any stylistic effect that the user agent may apply to visually rendered text that does not affect the layout of the document (i.e. does not require reformatting when applied or removed). Text decoration mechanisms include underline, overline, and strike-through.
text format
Any media object given an Internet media type of "text" (e.g. "text/plain", "text/html", or "text/*") as defined in RFC 2046 [RFC2046], section 4.1, or any media object identified by Internet media type to be an XML document (as defined in [XML], section 2) or SGML application. Refer, for example, to Internet media types defined in "XML Media Types" [RFC3023].
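The classification above might be sketched as a media type check (a simplified sketch: it covers "text/*" types, "application/xml" and "text/xml", and RFC 3023's "+xml" suffix convention, but not full SGML application detection):

```javascript
// Sketch: classify an Internet media type as a "text format" in the
// sense used here: any "text/*" type, or a type identified as an XML
// document ("application/xml", "text/xml", or RFC 3023's "+xml" suffix).
function isTextFormat(mediaType) {
  const type = mediaType.split(";")[0].trim().toLowerCase();
  return (
    type.startsWith("text/") ||
    type === "application/xml" ||
    type.endsWith("+xml")
  );
}

isTextFormat("text/plain; charset=utf-8"); // a text/* type, parameters ignored
isTextFormat("image/svg+xml");             // an XML document type
isTextFormat("image/png");                 // not a text format
```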
text transcript
A text equivalent of audio information (e.g. an audio-only presentation or the audio track of a movie or other animation). A text transcript provides text for both spoken words and non-spoken sounds such as sound effects. Text transcripts make audio information accessible to people who have hearing disabilities and to people who cannot play the audio. Text transcripts are usually created by hand but may be generated on the fly (e.g. by voice-to-text converters). See also the definitions of captions and collated text transcripts.
timebase
Defining a common time scale for all components of a time-based media presentation. For example, a media player will expose a single timebase for a presentation composed of individual video and audio tracks, allowing users or technology to query or alter the playback rate and position.
track (audio track or visual track)
Content rendered as sound through an audio viewport. The audio track may be all or part of the audio portion of a presentation (e.g. each instrument may have a track, or each stereo channel may have a track). Also see the definition of visual track.
A collection of commonly used controls presented in a region that can be configured or navigated separately from other regions. Such containers may be docked or free-floating, permanent or transient, integral to the application or addons. Variations are often called toolbars, palettes, panels, or inspectors.
user agent
@@ ED Note: this definition is still under discussion. A user agent is any software that retrieves, renders and facilitates end user interaction with Web content. If the software only performs these functions for time-based media, it is typically referred to as a *media player*; otherwise, the more general designation *browser* is used. UAAG 2.0 identifies several user agent architectures:

stand-alone, non-web-based browser: These user agents run on non-web platforms (operating systems and cross-OS platforms, such as Java) and perform content retrieval, rendering, and end-user interaction facilitation themselves (e.g. Firefox, IE, Chrome, Opera).

embedded user agent: These user agents "plug in" to stand-alone user agents in order to render, and facilitate end-user interaction with, content types (e.g. multimedia) that the stand-alone user agent cannot handle itself (e.g. QuickTime, Acrobat Reader, Shockwave). Embedded user agents establish direct connections with the platform (e.g. communication via platform accessibility services).

web-based user agent: These user agents operate by (a) transforming the web content into a technology that the stand-alone (or embedded) user agent can render and (b) injecting the user agent's own user interface functionality into the content to be rendered. (e.g. Gmail, Facebook, Skype)

web view component, mobile app: These user agents are used to package a constrained set of web content into non-web-based applications, especially on mobile platforms. If the finished application retrieves, renders and facilitates end-user interaction with web content, then the application is a user agent. If the finished application only renders non-web content, then the application is not a user agent for the purposes of UAAG 2.0 conformance.

user agent extension (add-in)
Software installed into a user agent that adds one or more additional features that modify the behavior of the user agent. Two common capabilities for user agent extensions are the ability to *modify the content* before the user agent renders it (e.g. to add highlights if certain types of alternative content are present) and to *modify the user agent's own user interface* (e.g. add a headings view).
user agent default styles
User agent default styles are style property values applied in the absence of any author or user styles. Some markup languages specify a default rendering for content in that markup language; others do not. For example, XML 1.0 [XML] does not specify default styles for XML documents. HTML 4 [HTML4] does not specify default styles for HTML documents, but the CSS 2 [CSS2] specification suggests a sample default style sheet for HTML 4 based on current practice.
user interface, user interface control
For the purposes of UAAG 2.0, user interface includes both:
  1. the user agent user interface, i.e. the controls (e.g. menus, buttons, prompts, and other components for input and output) and mechanisms (e.g. selection and focus) provided by the user agent ("out of the box") that are not created by content.
  2. the "content user interface," i.e. the enabled elements that are part of content, such as form controls, links, and applets.
The document distinguishes them only where required for clarity.

The term "user interface control" refers to a component of the user agent user interface or the content user interface, distinguished where necessary.

user styles
User styles are style property values that come from user interface settings, user style sheets, or other user interactions.
values
see properties
view
A user interface function that lets users interact with web content. UAAG 2.0 recognizes a variety of approaches to presenting the content in a view.
viewport
The part of a view that the user agent is currently presenting onscreen to the user, such that the user can attend to any part of it without further action (e.g. scrolling). There may be multiple viewports on to the same view (e.g. when a split-screen is used to present the top and bottom of a document simultaneously) and viewports may be nested (e.g. a scrolling frame located within a larger document). When the viewport is smaller in extent than the content it is presenting, user agents typically provide mechanisms to bring the occluded content into the viewport (e.g. scrollbars).
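The mechanism for bringing occluded content into the viewport can be sketched as a simple geometric test along one axis (an illustrative sketch; the function name and single-axis simplification are not from UAAG):

```javascript
// Sketch: decide whether a point of interest is inside the viewport,
// and if not, how far to scroll to reveal it (one axis, for brevity).
function scrollDeltaToReveal(pointY, viewportTop, viewportHeight) {
  if (pointY < viewportTop) return pointY - viewportTop; // negative: scroll up
  const bottom = viewportTop + viewportHeight;
  if (pointY > bottom) return pointY - bottom;           // positive: scroll down
  return 0;                                              // already visible
}

// A focus moved to y=1200 while the viewport shows 0..800:
scrollDeltaToReveal(1200, 0, 800); // needs a 400px downward scroll
```

A user agent that keeps the keyboard focus visible in this way also avoids the "split focus" condition defined earlier, where the focus has scrolled outside the visible portion of the viewport.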
visual-only presentation
A visual-only presentation is content consisting exclusively of one or more visual tracks presented concurrently or in series. A silent movie is an example of a visual-only presentation.
visual track
A visual object is content rendered through a graphical viewport. Visual objects include graphics, text, and visual portions of movies and other animations. A visual track is a visual object that is intended as a whole or partial presentation. A visual track does not necessarily correspond to a single physical object or software object.
voice browser
From "Introduction and Overview of W3C Speech Interface Framework" [VOICEBROWSER]: "A voice browser is a device (hardware and software) that interprets voice markup languages to generate voice output, interpret voice input, and possibly accept and produce other modalities of input and output."
web resource
Anything that can be identified by a Uniform Resource Identifier (URI).

Appendix B: How to refer to UAAG 2.0 from other documents

This section is informative.

There are two recommended ways to refer to the "User Agent Accessibility Guidelines 2.0" (and to W3C documents in general):

  1. References to a specific version of "User Agent Accessibility Guidelines 2.0." For example, use the "this version" URI to refer to the current document.
  2. References to the latest version of "User Agent Accessibility Guidelines 2.0." Use the "latest version" URI to refer to the most recently published document in the series.

In almost all cases, references (either by name or by link) should be to a specific version of the document. W3C will make every effort to make UAAG 2.0 indefinitely available at its original address in its original form. The top of UAAG 2.0 includes the relevant catalog metadata for specific references (including title, publication date, "this version" URI, editors' names, and copyright information).

An XHTML 1.0 paragraph including a reference to this specific document might be written:

<p><cite><a href="http://www.w3.org/TR/2010/WD-UAAG20-20100617/">
"User Agent Accessibility Guidelines 2.0,"</a></cite>
J. Allan, K. Ford, K. Patch, J. Spellman, eds.,
W3C Working Draft, http://www.w3.org/TR/UAAG20/.
The <a href="http://www.w3.org/TR/UAAG20/">latest version</a> of this document is available at http://www.w3.org/TR/UAAG20/.</p>

For very general references to this document (where stability of content and anchors is not required), it may be appropriate to refer to the latest version of this document. Other sections of this document explain how to build a conformance claim.

Appendix C: References

This section is informative.

For the latest version of any W3C specification please consult the list of W3C Technical Reports at http://www.w3.org/TR/. Some documents listed below may have been superseded since the publication of UAAG 2.0.

Note: In UAAG 2.0, bracketed labels such as "[WCAG20]" link to the corresponding entries in this section. These labels are also identified as references through markup.

"Cascading Style Sheets (CSS1) Level 1 Specification," B. Bos, H. Wium Lie, eds., 17 December 1996, revised 11 January 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-CSS1-19990111.
"Cascading Style Sheets, level 2 (CSS2) Specification," B. Bos, H. Wium Lie, C. Lilley, and I. Jacobs, eds., 12 May 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-CSS2-19980512/.
"Document Object Model (DOM) Level 2 Core Specification," A. Le Hors, P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion, S. Byrne, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/.
"Document Object Model (DOM) Level 2 Style Specification," V. Apparao, P. Le Hégaret, C. Wilson, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113/.
"XML Information Set," J. Cowan and R. Tobin, eds., 24 October 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-xml-infoset-20011024/.
"Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types," N. Freed, N. Borenstein, November 1996.
"Web Content Accessibility Guidelines 1.0," W. Chisholm, G. Vanderheiden, and I. Jacobs, eds., 5 May 1999. This W3C Recommendation is http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/.
"Extensible Markup Language (XML) 1.0 (Second Edition)," T. Bray, J. Paoli, C.M. Sperberg-McQueen, eds., 6 October 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xml-20001006.
The Assistive Technology Act of 1998.
"Authoring Tool Accessibility Guidelines 1.0," J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
"Techniques for Authoring Tool Accessibility Guidelines 1.0," J. Treviranus, C. McCathieNevile, J. Richards, eds., 29 Oct 2002. This W3C Note is http://www.w3.org/TR/2002/NOTE-ATAG10-TECHS-20021029/.
"Character Model for the World Wide Web," M. Dürst and F. Yergeau, eds., 30 April 2002. This W3C Working Draft is http://www.w3.org/TR/2002/WD-charmod-20020430/. The latest version is available at http://www.w3.org/TR/charmod/.
"Document Object Model (DOM) Level 2 HTML Specification," J. Stenback, P. Le Hégaret, A. Le Hors, eds., 8 November 2002. This W3C Proposed Recommendation is http://www.w3.org/TR/2002/PR-DOM-Level-2-HTML-20021108/. The latest version is available at http://www.w3.org/TR/DOM-Level-2-HTML/.
"HTML 4.01 Recommendation," D. Raggett, A. Le Hors, and I. Jacobs, eds., 24 December 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-html401-19991224/.
"Hypertext Transfer Protocol — HTTP/1.1," J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
"XML Media Types," M. Murata, S. St. Laurent, D. Kohn, January 2001.
"Synchronized Multimedia Integration Language (SMIL) 1.0 Specification," P. Hoschka, ed., 15 June 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-smil-19980615/.
"Synchronized Multimedia Integration Language (SMIL 2.0) Specification," J. Ayars, et al., eds., 7 August 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-smil20-20010807/.
"Scalable Vector Graphics (SVG) 1.0 Specification," J. Ferraiolo, ed., 4 September 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-SVG-20010904/.
"User Agent Accessibility Guidelines 1.0," I. Jacobs, J. Gunderson, E. Hansen, eds.17 December 2002. This W3C Recommendation is available at http://www.w3.org/TR/2002/REC-UAAG10-20021217/.
An appendix to UAAG 2.0 lists all of the checkpoints, sorted by priority. The checklist is available in either tabular form or list form.
Information about UAAG 1.0 conformance icons and their usage is available at http://www.w3.org/WAI/UAAG10-Conformance.
An appendix to UAAG 2.0 provides a summary of the goals and structure of User Agent Accessibility Guidelines 1.0.
"Techniques for User Agent Accessibility Guidelines 1.0," I. Jacobs, J. Gunderson, E. Hansen, eds. The latest draft of the techniques document is available at http://www.w3.org/TR/UAAG10-TECHS/.
The Unicode Consortium. The Unicode Standard, Version 6.1.0, (Mountain View, CA: The Unicode Consortium, 2012. ISBN 978-1-936213-02-3)
"Introduction and Overview of W3C Speech Interface Framework," J. Larson, 4 December 2000. This W3C Working Draft is http://www.w3.org/TR/2000/WD-voice-intro-20001204/. The latest version is available at http://www.w3.org/TR/voice-intro/. UAAG 2.0 includes references to additional W3C specifications about voice browser technology.
"World Wide Web Consortium Process Document," I. Jacobs ed. The 19 July 2001 version of the Process Document is http://www.w3.org/Consortium/Process-20010719/. The latest version is available at http://www.w3.org/Consortium/Process/.
"Web Content Accessibility Guidelines (WCAG) 2.0" B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 8 December 2008. This W3C Recommendation is http://www.w3.org/TR/2008/REC-WCAG20-20081211/. The latest version is available at http://www.w3.org/TR/WCAG20/. Additional format-specific techniques documents are available from this Recommendation.
"Techniques for Web Content Accessibility Guidelines 2.0," B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 8 December 2008. This W3C Note is http://www.w3.org/TR/2010/NOTE-WCAG20-TECHS-20101014/. The latest version is available at http://www.w3.org/TR/WCAG20-TECHS/. Additional format-specific techniques documents are available from this Note.
"Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0" E. Velleman, S. Abou-Zahra, eds., 26 February 2013. This is an informative draft of a Working Group Note. The latest version is available at http://www.w3.org/TR/WCAG-EM/
"Web Characterization Terminology and Definitions Sheet," B. Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft that defines some terms to establish a common understanding about key Web concepts. This W3C Working Draft is http://www.w3.org/1999/05/WCA-terms/01.
"XML Accessibility Guidelines 1.0," D. Dardailler, S. Palmer, C. McCathieNevile, eds., 3 October 2001. This W3C Working Draft is http://www.w3.org/TR/2002/WD-xag-20021003. The latest version is available at http://www.w3.org/TR/xag.
"XHTML[tm] 1.0: The Extensible HyperText Markup Language," S. Pemberton, et al., 26 January 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
"XML-Signature Syntax and Processing," D. Eastlake, J. Reagle, D. Solo, eds., 12 February 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/.
"XML Encryption Syntax and Processing," D. Eastlake, J. Reagle, eds., 10 December 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/.

Appendix D: Acknowledgments

Participants active in the UAWG prior to publication:

Previous Editor:
Jan Richards, Inclusive Design Institute, OCAD University

Other previously active UAWG participants and other contributors to UAAG 2.0:

UAAG 2.0 would not have been possible without the work of those who contributed to UAAG 1.0.

This publication has been funded in part with Federal funds from the U.S. Department of Education, National Institute on Disability and Rehabilitation Research (NIDRR) under contract number ED-OSE-10-C-0067. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.

Appendix E: Alternative Content

These are the elements and attributes that present 'alternative content'.
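As an illustration, HTML 4.01 provides several such mechanisms. The file names below are placeholders; the attributes and fallback behavior shown are those defined by the HTML specification:

```html
<!-- Illustrative HTML 4.01 markup; file names are placeholders. -->

<!-- The alt attribute supplies a text alternative for the image;
     longdesc links to a longer description of it. -->
<img src="chart.gif" alt="2012 sales by quarter"
     longdesc="chart-description.html">

<!-- Content inside an object element is rendered when the object
     itself cannot be; such fallbacks can be nested. -->
<object data="movie.mpeg" type="video/mpeg">
  <img src="movie-still.gif" alt="Still image from the movie">
</object>

<!-- noscript content is rendered when scripting is off or unsupported. -->
<noscript><p>Text version of the scripted content.</p></noscript>
```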