
W3C

User Agent Accessibility Guidelines (UAAG) 2.0

W3C Working Draft 23 July 2009 — Review Version

This version:
http://www.w3.org/TR/2009/WD-UAAG20-20090723/
Latest version:
http://www.w3.org/TR/UAAG20/
Previous version:
http://www.w3.org/TR/2009/WD-UAAG20-20090311/
Editors:
James Allan, Texas School for the Blind and Visually Impaired
Kelly Ford, Microsoft
Jan Richards, Adaptive Technology Resource Centre, University of Toronto
Jeanne Spellman, W3C/Web Accessibility Initiative
Previous Editors:
NA

This document is also available in non-normative formats:


Abstract

This document provides guidelines for designing user agents that lower barriers to Web accessibility for people with disabilities. User agents include browsers and other types of software that retrieve and render Web content . A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies ). Furthermore, all users, not just users with disabilities, should find conforming user agents to be more usable.

In addition to helping developers of browsers and media players, this document will also benefit developers of assistive technologies because it explains what types of information and control an assistive technology may expect from a conforming user agent. Technologies not addressed directly by this document (e.g., technologies for braille rendering) will be essential to ensuring Web access for some users with disabilities.

The "User "User Agent Accessibility Guidelines 2.0" 2.0" ( UAAG 2.0) is part of a series of accessibility guidelines published by the W3C Web Accessibility Initiative ( WAI ).

Status of this document

May be Superseded

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

Working Draft of UAAG 2.0

This is the W3C Working Draft of 23 July 2009. This draft integrates:

Substantial changes include:

The Working Group seeks feedback on the following points for this draft:

Comments on this draft should be sent to public-uaag2-comments@w3.org (Public Archive) by 9 September 2009.

UAAG 2.0 is currently informative only. After the User Agent Working Group (UAWG) is rechartered to produce W3C Recommendations under the W3C Patent Policy, the group expects to advance UAAG 2.0 through the Recommendation track. Until that time User Agent Accessibility Guidelines 1.0 (UAAG 1.0) [UAAG10] is the stable, referenceable version. This Working Draft does not supersede UAAG 1.0.

Web Accessibility Initiative

This document has been produced as part of the W3C Web Accessibility Initiative (WAI). The goals of the User Agent Working Group (UAWG) are discussed in the Working Group charter . The UAWG is part of the WAI Technical Activity .

No Endorsement

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Patents

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .


Table of Contents


Introduction

This section is informative .

A user agent is any software that retrieves and presents Web content for end users. Examples include Web browsers, media players, plug-ins, and other programs, including assistive technologies, that help in retrieving, rendering, and interacting with Web content. This document specifies requirements that, if satisfied by user agent developers, will lower barriers to accessibility.

Overview

Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, neurological disabilities, and disabilities related to ageing. This document emphasizes the goal of ensuring that users, including users with disabilities, have control over their environment for accessing the Web. Key methods for achieving that goal include:

Some users may have more than one disability, and the needs of different disabilities may contradict. Thus, many of the requirements in this document involve configuration as one way to ensure that functionality designed to improve accessibility for one user does not interfere with accessibility for another. A default user agent setting may be useful for one user but interfere with accessibility for another; therefore this document prefers configuration requirements rather than requirements for default settings. For some content, a feature required by this document may be ineffective or cause content to be less accessible, making it imperative that the user be able to turn off the feature. To avoid overwhelming users with an abundance of configuration options, this document includes requirements that promote ease of configuration and documentation of accessibility features.

This document also acknowledges the importance of author preferences; however, requirements are included to override certain author preferences when the user would not otherwise be able to access that content.

Some of the requirements of this document may have security implications, such as communication through APIs, and allowing programmatic read and write access to content and user interface control . This document assumes that features required by this document will be built on top of an underlying security architecture. Consequently, unless permitted explicitly in a success criterion, this document grants no conformance exemptions based on security issues.

The UAWG expects that software which satisfies the requirements of this document will be more flexible, manageable, extensible, and beneficial to all users.

UAAG 2.0 Layers of Guidance

In order to meet the varying needs of the different audiences using UAAG, several layers of guidance are provided, including overall principles, general guidelines, testable success criteria, and a rich collection of sufficient techniques and resource links.

All of these layers of guidance (principles, guidelines, success criteria, and sufficient and advisory techniques) work together to provide guidance on how to make user agents more accessible. Developers are encouraged to view and apply all layers that they are able to, including the advisory techniques, in order to best address the needs of the widest possible range of users.

Note that even user agents that conform at the highest level (AAA) will not be accessible to individuals with all types, degrees, or combinations of disability, particularly in the cognitive, language, and learning areas. Developers are encouraged to consider the full range of techniques, including the advisory techniques, as well as to seek relevant advice about current best practice to ensure that their user agent is accessible, as far as possible, to this community.

UAAG 2.0 Supporting Documents

A separate document, entitled "Techniques for User Agent Accessibility Guidelines 2.0" (the "Techniques document" from here on), will be produced at a later date. It will provide suggestions and examples of how each success criterion might be satisfied. It will also include references to other accessibility resources (such as platform-specific software accessibility guidelines) that provide additional information on how a user agent may satisfy each success criterion. The techniques in the Techniques document are informative examples only, and other strategies may be used or required to satisfy the success criteria. The UAWG expects to update the Techniques document more frequently than the current guidelines. Developers, W3C Working Groups, users, and others are encouraged to contribute techniques.

Components of Web Accessibility

Web accessibility depends not only on accessible user agents, but also on the availability of accessible content, a factor that is greatly influenced by the accessibility of authoring tools. For an overview of how these components of Web development and interaction work together, see:

Levels of Conformance

User Agents may claim conformance to UAAG 2.0 at one of three conformance levels. The level achieved depends on the level of the success criteria that have been satisfied. The conformance levels are:

  1. UAAG 2.0 Conformance at Level "A"
    The user agent satisfies all of the Level A success criteria.
  2. UAAG 2.0 Conformance at Level "Double-A"
    The user agent satisfies all of the Level A and Level AA success criteria.
  3. UAAG 2.0 Conformance at Level "Triple-A"
    The user agent satisfies all of the success criteria.

Definition of User Agent

A user agent is any software that retrieves, renders and facilitates end user interaction with Web content.


UAAG 2.0 Guidelines

Principle 1: Comply with applicable specifications and conventions

Guideline 1.1 Ensure that non-Web-based functionality is accessible.

1.1.1 Non-Web-Based Accessible (Level A): Non-Web-based user agent user interfaces comply with and cite the "Level A" requirements of standards and/or operating environment conventions that benefit accessibility. The "Level A" requirements are those that are functionally equivalent to WCAG Level A success criteria. (Level A)

1.1.2 Non-Web-Based Accessible (Level AA): Non-Web-based user agent user interfaces comply with and cite the "Level AA" requirements of standards and/or operating environment conventions that benefit accessibility. The "Level AA" requirements are those that are functionally equivalent to WCAG Level AA success criteria. (Level AA)

1.1.3 Non-Web-Based Accessible (Level AAA): Non-Web-based user agent user interfaces comply with and cite the "Level AAA" requirements of standards and/or operating environment conventions that benefit accessibility. The "Level AAA" requirements are those that are functionally equivalent to WCAG Level AAA success criteria. (Level AAA)

Applicability Notes:

This guideline does not apply to Web-based user agent user interfaces, but does include any parts of Web-based user agents that are non-Web-based (e.g., client-side file uploaders).

Guideline 1.2 Ensure that Web-based functionality is accessible.

1.2.1 Web-Based Accessible (Level A): Web-based user agent user interfaces conform to WCAG Level "A". (Level A)

1.2.2 Web-Based Accessible (Level AA): Web-based user agent user interfaces conform to WCAG Level "AA". (Level AA)

1.2.3 Web-Based Accessible (Level AAA): Web-based user agent user interfaces conform to WCAG Level "AAA". (Level AAA)

Applicability Notes:

This guideline does not apply to non-Web-based user agent user interfaces, but does include any parts of non-Web-based user agents that are Web-based (e.g., help systems).

Guideline 1.3 Support accessibility features of technologies.

1.3.1 Accessibility Features: Implement and cite in the conformance claim the accessibility features of a technology specification. Accessibility features are those that are either (Level A):

Guideline 1.4 Render content according to specification.

1.4.1 Follow Specifications: Render content according to the technology specification. This includes any accessibility features of the technology ( see Guideline 1.3 ). (Level A)

1.4.2 Handle Unrendered Technologies: If the user agent does not render a technology, it allows the user to choose a way to handle content in that technology (e.g., by launching another application or by saving it to disk). (Level A)

Applicability Note:

When a rendering requirement of another specification contradicts a requirement of UAAG 2.0, the user agent may disregard the rendering requirement of the other specification and still satisfy this guideline.

PRINCIPLE 2. Facilitate programmatic access

Guideline 2.1 Facilitate programmatic access

2.1.1 Platform Accessibility Architecture: Support a platform accessibility architecture relevant to the operating environment. (Level A)

2.1.2 Name, Role, State, Value, Description: For all user interface components including the user interface and rendered content, make available the name, role, state, value, and description via an accessibility architecture. (Level A)
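
For illustration only (normative techniques belong in the separate Techniques document): the sketch below shows one simplified way a user agent might assemble the name, role, states, value, and description of a rendered element before exposing them through a platform accessibility architecture such as MSAA/IAccessible2 or ATK. The AccessibleProps interface and mapAccessibleProps function are hypothetical names, and a real user agent would work from its internal object model and platform APIs rather than page-level DOM calls.

    // Hypothetical sketch: deriving accessibility properties for one element.
    interface AccessibleProps {
      name: string;         // e.g., from aria-label, an associated label, or alt text
      role: string;         // e.g., from the role attribute or native semantics
      states: string[];     // e.g., "disabled", "checked", "focused"
      value: string | null;
      description: string;  // e.g., from the title attribute
    }

    function mapAccessibleProps(el: HTMLElement): AccessibleProps {
      const states: string[] = [];
      if (el.hasAttribute("disabled")) states.push("disabled");
      if ((el as HTMLInputElement).checked) states.push("checked");
      if (document.activeElement === el) states.push("focused");
      return {
        name: el.getAttribute("aria-label") ?? el.getAttribute("alt") ?? el.textContent?.trim() ?? "",
        role: el.getAttribute("role") ?? el.tagName.toLowerCase(),
        states,
        value: (el as HTMLInputElement).value ?? null,
        description: el.getAttribute("title") ?? "",
      };
    }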

2.1.3 Accessible Alternative: If a feature is not supported by the accessibility architecture(s), provide an equivalent feature that does support the accessibility architecture(s). Document the equivalent feature in the conformance claim. (Level A)

2.1.4 Programmatic Availability of DOMs: If the user agent implements one or more DOMs, they must be made programmatically available to assistive technologies. (Level A)

2.1.5 Write Access: If the user can modify the state or value of a piece of content through the user interface (e.g., by checking a box or editing a text area), the same degree of write access is available programmatically. (Level A)

2.1.6 Properties: If any of the following properties are supported by the accessibility platform architecture, make the properties available to the accessibility platform architecture: (Level A)

2.1.7 Timely Communication: For APIs implemented to satisfy the requirements of this document, ensure that programmatic exchanges proceed at a rate such that users do not perceive a delay. (Level A).

Applicability Note:

Non-Web-based user agent interfaces only.

PRINCIPLE 3: Perceivable - The user interface and rendered content must be presented to users in ways they can perceive

Guideline 3.1 Provide access to alternative content.

3.1.1 Notification of Alternative Content: Provide a global option for the user to be notified of alternatives to rendered content (e.g., short text alternatives, long descriptions, captions).

3.1.2 Configurable Default Rendering: Provide the user with the global option to set which type of alternative to render by default. If the alternative content has a different height and/or width, then the user agent will reflow the viewport. (Level A)

3.1.3 Browse and Render: The user can browse the alternatives and render them according to the following (Level A):

3.1.4 Available Programmatically: If an alternative is plain text (e.g., short text alternative), then it is available programmatically, even when not rendered. (Level A)
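
As a minimal, non-normative sketch of what "available programmatically" means for short text alternatives: because the alternative is part of the DOM, it stays reachable even when the image itself is not rendered (e.g., image loading is turned off). The helper name below is hypothetical.

    // Collect short text alternatives whether or not the images are rendered.
    function shortTextAlternatives(doc: Document): string[] {
      return Array.from(doc.querySelectorAll<HTMLImageElement>("img[alt]"))
        .map((img) => img.alt);
    }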

3.1.5 Rendering Alternative (Enhanced) : Provide the user with the global option to configure a cascade of types of alternatives to render by default, in case a preferred type is unavailable. If the alternative content has a different height and/or width, then the user agent will reflow the viewport. (Level AA)

3.1.6 Unavailable Content: If a resource is unavailable, render the next item on the alternative content stack, if any. Otherwise render a placeholder. (Level A)

3.1.7 Retrieval Progress: Show the progress of content retrieval. (Level A)

Editors' Note: Success Criteria from 3.2 have been moved to 4.9

Guideline 3.3 Provide access to relationship information.

3.3.1 Access Relationships: Provide access to explicitly-defined relationships based on the user's position in content (e.g., show form control's label, show label's form control, show a cell's table headers, etc.). (Level A)
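
The sketch below shows, using only standard DOM associations, how two explicitly-defined relationships can be resolved from the user's position in content: a form control's labels (HTMLInputElement.labels) and a table cell's headers (the headers attribute). A conforming user agent would normally expose such relationships through its accessibility architecture; the function names here are illustrative.

    // Labels explicitly associated with a form control (label[for] or wrapping labels).
    function relatedLabels(control: HTMLInputElement): HTMLLabelElement[] {
      return control.labels ? Array.from(control.labels) : [];
    }

    // Header cells listed in a data cell's headers attribute.
    function relatedHeaders(cell: HTMLTableCellElement): HTMLTableCellElement[] {
      return cell.headers
        .split(/\s+/)
        .filter((id) => id.length > 0)
        .map((id) => document.getElementById(id))
        .filter((el): el is HTMLTableCellElement => el instanceof HTMLTableCellElement);
    }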

3.3.2 Unavailable Content: If a resource is unavailable, render the next item on the alternative content stack, if any. Otherwise render a placeholder. (Level A)

3.3.3 Retrieval Progress: Show the progress of content retrieval. (Level A)

3.3.4 Location in Hierarchy: For content in a hierarchy (e.g., tree node, nested frame), the user can view the path of nodes leading from the root to the content. (Level AA)

Guideline 3.4 Repair missing content.

3.4.1 Repair Missing Alternatives: The user has the option of receiving generated repair text when the user agent recognizes that the author has not provided alternative content required by the technology specification (e.g., short text alternative for an image). (Level A)

3.4.2 Repair Empty Alternatives: The user has the option of receiving generated repair text when the user agent recognizes that the author has provided empty alternative content for an enabled element . (Level AA)

Guideline 3.5 Provide highlighting for selection, content focus, enabled elements, visited links.

3.5.1 Highlighted items: The user has the option to highlight the following classes of information (Level A): @@10.2 in UAAG10@@

3.5.2 Highlighting options: The highlighting options (with the same configurable range as the operating environment's conventional selection utilities) include at least (Level A):

Guideline 3.6 Provide text configuration.

3.6.1 Configure Text: The user can globally set the following characteristics of visually rendered text content, overriding any specified by the author or user agent defaults (Level A):

3.6.2 Preserve Distinctions: When rendered text is rescaled, distinctions in the size of rendered text are preserved (e.g., headers continue to be larger than body text). (Level A)

3.6.3 Option Range: The range of options for each text characteristic includes at least (Level A):

3.6.4 Maintain contrast: The user has the option to constrain the configuration of the default text foreground color, background color and highlighting colors, so that text contrast is maintained between them. (Level AAA)
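
One way to constrain user-configured text and background colors so that contrast is maintained is to test candidate combinations against the WCAG relative-luminance and contrast-ratio formulas, sketched below. The threshold a user agent enforces is a policy choice and is not specified here; channel values are assumed to be 0-255 sRGB.

    // WCAG relative luminance of an sRGB color.
    function relativeLuminance(r: number, g: number, b: number): number {
      const lin = (c: number) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      };
      return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
    }

    // Contrast ratio between foreground and background; black on white yields 21.
    function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
      const l1 = relativeLuminance(...fg);
      const l2 = relativeLuminance(...bg);
      const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
      return (lighter + 0.05) / (darker + 0.05);
    }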

Guideline 3.7 Provide volume configuration.

3.7.1 Global Volume: The user can globally set the volume of all rendered audio tracks (including a "mute" setting) through available operating environment mechanisms. (Level A)

3.7.2 Speech Volume: If speech and non-speech audio tracks can be recognized , then the user can set the volume of these two types of audio tracks independently. (Level A)

Guideline 3.8 Provide synthesized speech configuration.

3.8.1 Speech Characteristics: Rate and Volume: The user can set both of the following synthesized speech characteristics, overriding any values specified by the author (Level A):

3.8.2 Speech Characteristics: Pitch and Range: The user can set all of the following synthesized speech characteristics, overriding any values specified by the author (Level AA):

3.8.3 Advanced Speech Characteristics: The user can set all of the speech characteristics offered by the speech synthesizer, according to the full range of values available, overriding any values specified by the author. (Level AAA)

3.8.4 Speech Features: The following speech features are provided (Level AA):

3.8.5 Speech Stress: The user can set the speech stress (the height of "local peaks" in the intonation contour of the voice), overriding any values specified by the author. (Level AAA)
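
As an illustrative, non-normative sketch: where synthesized speech is produced through the Web Speech API, rate, pitch, and volume are properties of SpeechSynthesisUtterance (rate 0.1-10, pitch 0-2, volume 0-1), so user-configured values can simply override any author-supplied ones before speaking. The userPrefs object is a hypothetical stand-in for the user agent's stored settings.

    const userPrefs = { rate: 1.5, pitch: 0.8, volume: 1.0 };

    function speak(text: string): void {
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = userPrefs.rate;     // user setting wins over any author value
      utterance.pitch = userPrefs.pitch;
      utterance.volume = userPrefs.volume;
      window.speechSynthesis.speak(utterance);
    }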

Guideline 3.9 Provide style sheets configuration.

3.9.1 Author Style Sheets: If the author has supplied one or more style sheets , the user has the following options (Level A):

3.9.2 User Style Sheets: If the user has supplied one or more style sheets , the user has the following options (Level A):
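
A minimal sketch of the options above for a user agent that works at the DOM level: author style sheets can be switched off through StyleSheet.disabled, and a user style sheet can be injected from stored preferences. Real user agents typically apply user style sheets directly in the cascade rather than by inserting elements; this is illustration only.

    // Turn off all author style sheets in a document.
    function disableAuthorStyles(doc: Document): void {
      for (const sheet of Array.from(doc.styleSheets)) {
        sheet.disabled = true;
      }
    }

    // Apply a user style sheet supplied from user agent preferences.
    function applyUserStyleSheet(doc: Document, cssText: string): void {
      const style = doc.createElement("style");
      style.textContent = cssText; // e.g., "body { font-size: 150%; line-height: 1.5; }"
      doc.head.appendChild(style);
    }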

Guideline 3.10 Help user to use and orient within viewports.

3.10.1 Highlight Viewport: The viewport with the current focus is highlighted (including any frame that takes current focus) using a highlight mechanism that does not rely on rendered text foreground and background colors alone (e.g., a thick outline). (Level A)

3.10.2 Move Viewport to Selection: When a viewport's selection changes, the viewport moves as necessary to ensure that the new selection is at least partially in the viewport . (Level A)

3.10.3 Move Viewport to Focus: When a viewport's content focus changes, the viewport moves as necessary to ensure that the new content focus is at least partially in the viewport . (Level A)

3.10.4 Resizable: The user has the option to make graphical viewports resizable, within the limits of the display, overriding any values specified by the author . (Level A)

3.10.5 Scrollbars: Graphical viewports include scrollbars if the rendered content (including after user preferences have been applied) extends beyond the viewport dimensions, overriding any values specified by the author. (Level A)

3.10.6 Viewport History: If the user agent maintains a viewport history mechanism (e.g., via the "back button") that stores previous "viable" states (i.e., that have not been negated by the content, user agent settings or user agent extensions), it maintains information about the point of regard and it restores the saved values when the user returns to a state in the history. (Level A)

3.10.7 Open on Request: The user has the option of having "top-level" viewports (e.g., windows) only open on explicit user request. In this mode, instead of opening a viewport automatically, notify the user and allow the user to open it with an explicit request (e.g., by confirming a prompt or following a link generated by the user agent). (Level AA)

3.10.8 Do Not Take Focus: When configured to allow "top-level" viewports to open without explicit user request, the user has the option that if a "top-level" viewport opens, neither its content focus nor its user interface focus automatically becomes the current focus. (Level AA)

3.10.9 Stay on Top: The user has the option of having the viewport with the current focus remain "on top" of all other viewports with which it overlaps. (Level AA)

3.10.10 Close Viewport: The user can close any "top-level" viewport. (Level AA)

3.10.11 Same UI: The user has the option of having all "top-level" viewports follow the same user interface configuration as the current or spawning viewport. (Level AA)

3.10.12 Indicate Viewport Position: Indicate the viewport's position relative to rendered content (e.g., the proportion along an audio or video timeline, the proportion of a Web page before the current position ). (Level AAA)

Guideline 3.11 Provide an effective focus mechanism.

3.11.1 Content Focus: At least one content focus is provided for each viewport (including frames), where enabled elements are part of the rendered content . (Level A)

3.11.2 Current Focus: The user can make the content focus of each viewport the current focus . (Level A)

3.11.3 User Interface Focus: A user interface focus is provided. (Level A)

3.11.4 Extensions Focusable: The user interface focus can navigate within extensions to the user interface "chrome". (Level A)

3.11.5 Hand-Off Focus: The user agent programmatically notifies any nested user agent(s) (e.g., plug-ins) when focus moves to them. (Level A)

3.11.6 Retrieve Focus: At any time, the user agent is able to retrieve focus from a nested viewport (including nested viewports that are user agents). (Level A)

3.11.7 Return Focus: Embedded user agents are responsible for notifying the embedding user agent that focus should move back to it. (Level A)

3.11.8 Bi-Directional: The user can move the content focus forward or backward to any enabled element in the viewport . (Level A)

3.11.9 Sequential Navigation: If the author has not specified a navigation order, the default is sequential navigation , in document order. (Level A)
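
The sketch below illustrates bi-directional sequential navigation over focusable elements in document order (see also 3.11.8). The FOCUSABLE selector is a simplified assumption made for illustration; a real user agent determines focusability from its own rules rather than from a CSS selector.

    const FOCUSABLE = "a[href], button:not([disabled]), input:not([disabled]), " +
      "select:not([disabled]), textarea:not([disabled]), [tabindex]:not([tabindex='-1'])";

    function moveFocus(forward: boolean): void {
      const candidates = Array.from(document.querySelectorAll<HTMLElement>(FOCUSABLE));
      if (candidates.length === 0) return;
      const current = candidates.indexOf(document.activeElement as HTMLElement);
      const next = current === -1
        ? (forward ? 0 : candidates.length - 1)
        : (current + (forward ? 1 : -1) + candidates.length) % candidates.length;
      candidates[next].focus();
    }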

3.11.10 Only on User Request: The user has the option of having the content focus of a viewport only change on explicit user request . (Level A)

3.11.11 On Focus: The user has the option of ensuring that moving the content focus to or from an enabled element does not cause the user agent to take any further action. (Level A)

Guideline 3.12 Provide alternative views.

3.12.1 Text View: For content authored in text formats , a view of the text source is provided. (Level A)

3.12.2 Outline View: An "outline" view of rendered content is provided, composed of labels for important structural elements (e.g., heading text, table titles, form titles, and other labels that are part of the content). (Level AA)

Note: What constitutes a label is defined by each markup language specification. For example, in HTML, a heading (H1-H6) is a label for the section that follows it, a CAPTION is a label for a table, and the title attribute is a label for its element.

3.12.3 Configure Set of Important Elements: The user has the option to configure the set of important elements for the "outline" view, including by element type (e.g., headers). (Level AAA)

Guideline 3.13 Provide link information.

3.13.1 Basic Link Information: The following information is provided for each link (Level A):

3.13.2 Extended Link Information: The following information is provided for each link (Level AAA):

PRINCIPLE 4. Ensure that the user interface is operable

Guideline 4.1 Ensure full keyboard access.

4.1.1 Keyboard Operation : All functionality can be operated via the keyboard using sequential and/or direct keyboard commands that do not require specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g., free hand drawing). This does not forbid and should not discourage providing mouse input or other input methods in addition to keyboard operation. (Level A)

4.1.2 Keystroke Precedence : The user has the option to specify that keystrokes be processed in the following order: user agent user interface, user agent extensions, content keystroke operations administered by the user agent (e.g., access keys), and executable content (e.g., key press events in scripts, etc.). (Level A)

4.1.3 No Keyboard Trap (Minimum) : The user agent prevents keyboard traps as follows (Level A):

4.1.4 Separate Selection from Activation: The user has the option to have selection separate from activation (e.g., navigating through a set of radio buttons without changing which is the active/selected option). (Level A)

4.1.5 Discovery of Keyboard Commands : The user has the option to have any recognized direct keyboard commands displayed with their associated controls. (Level A)

4.1.6 Standard Text Area Navigation Conventions : Views that render text support the standard text area conventions for the operating environment, including, but not necessarily limited to: character keys, backspace/delete, insert, "arrow" key navigation (e.g., "caret" browsing), page up/page down, navigate to start/end, navigate by paragraph, shift-to-select mechanism, etc. (Level A)

4.1.7 Keyboard Navigation: The user can use the keyboard to navigate from group to group of focusable items and to traverse forwards and backwards all of the focusable items within each group. Groups include, but are not limited to, toolbars, panels, and user agent extensions. (Level AA)

4.1.8 Important Command Functions : Important command functions (e.g., related to navigation, display, content, information management, etc.) are available using a single keystroke or sequence of keystrokes or key combinations. (Level AA)

4.1.9 Override of UI Keyboard Commands : The user can override any keyboard shortcut binding for the user agent user interface except for conventional bindings for the operating environment (e.g., for access to help). The rebinding options must include single-key and key-plus-modifier keys if available in the operating environment. (Level AA)

4.1.10 Specify preferred keystrokes : The user can override any keyboard shortcut including recognized author supplied shortcuts (e.g., accesskeys) and user interface controls, except for conventional bindings for the operating environment (e.g., for access to help). (Level AA)

4.1.11 User Override of Accesskeys : The user can override any recognized author supplied content keybinding (i.e., access key). The user must have an option to save the override of user interface keyboard shortcuts so that the rebinding persists beyond the current session. (Level AA)

Guideline 4.2 Provide access to event handlers.

4.2.1 All Available: The user can activate , through keyboard input alone, all input device event handlers (including those for pointing devices, voice, etc.) that are explicitly associated with the element designated by the content focus . (Level A)

4.2.2 Show All: For the element with content focus , the list of input device event types for which there are event handlers explicitly associated with the element is provided. (Level A)

4.2.3 Activate All: The user can activate, as a group, all event handlers of the same input device event type, for the same control. (Level A)
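
For illustration, one way to let keyboard users activate pointer-device event handlers associated with the focused element is to dispatch synthetic events of the corresponding types. Page script cannot enumerate an element's handlers, so a real user agent would do this against its own internal handler registry; the wiring below is a sketch only.

    function activatePointerHandlers(target: HTMLElement): void {
      for (const type of ["mousedown", "mouseup", "click"]) {
        target.dispatchEvent(new MouseEvent(type, { bubbles: true, cancelable: true }));
      }
    }

    document.addEventListener("keydown", (event) => {
      if (event.key === "Enter" && document.activeElement instanceof HTMLElement) {
        activatePointerHandlers(document.activeElement);
      }
    });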

Guideline 4.3 Allow time-independent interaction.

4.3.1 Timing Adjustable : Where time limits for user input are recognized and controllable by the user agent, an option is provided to extend the time limit. (Level A)

Guideline 4.4 Help users avoid flashing that could cause seizures.

4.4.1 Below Threshold: The user interface "chrome" never violates the general flash or red flash thresholds. (Level A)

4.4.2 Three Flashes: No part of the user interface "chrome" ever flashes more than three times in any one second period. (Level AAA) [ WCAG 2.0 ]

Guideline 4.5 Store preference settings.

4.5.1 Change Preference Settings: The user has the option to change settings that impact accessibility. (Level A)

4.5.2 Persistent Accessibility Settings : User agent accessibility preference settings persist between sessions. (Level A)

4.5.3 Multiple Sets of Preference Settings: The user can save and retrieve multiple sets of user agent preference settings. (Level AA)

4.5.4 Portable Preference Settings: The user can transfer preference settings across locations onto a compatible system. (Level AAA)

4.5.5 Preferences Wizard: A wizard helps the user to configure (at least) the accessibility-related user agent preferences. (Level AAA)
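
A minimal sketch of persistent, multiple, and portable preference settings (4.5.2 - 4.5.4), assuming a hypothetical Preferences shape and using localStorage as a stand-in for the user agent's own settings store. A portable set is simply the serialized form, which can be transferred to a compatible system and re-imported.

    interface Preferences {
      fontScale: number;
      highContrast: boolean;
      speechRate: number;
    }

    const KEY = "ua-accessibility-preferences";

    function savePreferences(name: string, prefs: Preferences): void {
      localStorage.setItem(`${KEY}:${name}`, JSON.stringify(prefs)); // multiple named sets
    }

    function loadPreferences(name: string): Preferences | null {
      const raw = localStorage.getItem(`${KEY}:${name}`);
      return raw ? (JSON.parse(raw) as Preferences) : null;
    }

    // Export/import the serialized form to move settings to a compatible system.
    const exported = JSON.stringify(loadPreferences("default"));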

Guideline 4.6 Provide text search.

4.6.1 Search Rendered Content: The user can perform a search within rendered (e.g., not hidden with a style) content for text and text alternatives for a sequence of characters from the document character set. (Level AA)

4.6.2 Search Forward and Backward: The user has the option of searching forward or backward from any selected or focused location in content. (Level AA)

4.6.3 Match Found: When there is a match, both of the following are true (Level AA):

4.6.4 Alert on No Match: The user is notified when there is no match or after the last match in content (i.e., prior to starting the search over from the beginning of content). (Level AA)

4.6.5 Case Insensitive: There is a case-insensitive search option. (Level AA)
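
The sketch below shows a case-insensitive forward search over rendered text that skips text hidden with styles; a backward search would walk the same list of text nodes in reverse. It uses only standard DOM APIs (TreeWalker, getComputedStyle) and is illustrative, not a required technique.

    function searchRenderedText(query: string): Text | null {
      const lower = query.toLowerCase();
      const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
      let node: Node | null;
      while ((node = walker.nextNode())) {
        const text = node as Text;
        const parent = text.parentElement;
        if (!parent) continue;
        const style = window.getComputedStyle(parent);
        if (style.display === "none" || style.visibility === "hidden") continue; // not rendered
        if (text.data.toLowerCase().includes(lower)) return text;
      }
      return null;
    }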

Guideline 4.7 Provide structured navigation.

4.7.1 Structured Navigation: Forward and backward sequential navigation over important (structural) elements in rendered content is provided. (Level A)

4.7.2 Configure Set of Important Elements: The user has the option to configure the set of important elements for structured navigation, including by element type (e.g., headers). (Level AAA)

Note: For example, allow the user to navigate only paragraphs, or only headings and paragraphs, or to suppress and restore navigation bars, or to navigate within and among tables and table cells
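
As a sketch of structured navigation over a configurable set of important elements, the function below moves forward or backward among headings; other element types could be added to the selector to reflect the user's configuration. Setting tabindex to -1 allows the target to receive programmatic focus. This is one possible approach, not a normative technique.

    function navigateHeadings(forward: boolean, from: Element | null): void {
      const headings = Array.from(document.querySelectorAll<HTMLElement>("h1,h2,h3,h4,h5,h6"));
      if (headings.length === 0) return;
      const current = from ? headings.indexOf(from as HTMLElement) : -1;
      const next = forward
        ? Math.min(current + 1, headings.length - 1)
        : Math.max(current - 1, 0);
      const target = headings[next];
      target.tabIndex = -1;  // make the heading programmatically focusable
      target.focus();
      target.scrollIntoView();
    }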

Guideline 4.8 Provide toolbar configuration.

4.8.1 Configure Position: For graphical user agent user interfaces with toolbars, the user can add, remove and configure the position of user agent user interface controls on those toolbars from a pre-defined set of controls. (Level AAA)

4.8.2 Restore Default Toolbars: The user can restore the default toolbar configuration. (Level AAA)

Guideline 4.9 Provide control of content that may reduce accessibility.

Editors' Note: These success criteria are being revised. They include success criteria moved from section 3.3.

4.9.A Change Rate of Time-Based Media:

4.9.B Track Enable/Disable of Time-Based Media:

4.9.C Visual Media Scaling

4.9.D Text Scaling

4.9.E Visual Media Brightness/Contrast

4.9.F Paused Time-Based Media

4.9.1 Background Image Toggle: The user has the global option to hide/show background images . (Level A)

4.9.3 Time-Based Media Load-Only: The user has the option to load time-based media content @@DEFINE@@ such that the first frame is displayed (if video), but the content is not played until explicit user request . (Level A)
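
A minimal sketch of the load-only behavior for video, using standard HTMLMediaElement properties: enough of the resource is fetched to display the first frame, but playback waits for an explicit user request. The function names are illustrative only.

    function loadWithoutPlaying(video: HTMLVideoElement): void {
      video.autoplay = false;
      video.preload = "metadata"; // fetch enough to show the first frame and duration
      video.pause();              // nothing plays until the user asks
    }

    function onExplicitUserRequest(video: HTMLVideoElement): void {
      void video.play();          // play() returns a promise; errors are ignored here
    }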

4.9.4 Execution Placeholder: The user has the option to render a placeholder instead of executable content that would not normally be contained within an on-screen area (e.g., Applet, Flash), until explicit user request to execute. (Level A)

4.9.5 Execution Toggle: The user has the option to turn on/off the execution of executable content that would not normally be contained within a particular area (e.g., Javascript). (Level A)

4.9.6 Slow Multimedia: The user can slow the presentation rate of recognized prerecorded audio and animation content, such that all of the following are true (Level A):

4.9.7 Stop/Pause/Resume Multimedia: The user can stop, pause, and resume rendered audio and animation content (including video and animated images) that last three or more seconds at their default playback rate. (Level A)

4.9.8 Navigate Multimedia: The user can navigate efficiently within rendered audio and animations (including video and animated images) that last three or more seconds at their default playback rate. (Level A)

Applicability Notes:

The guideline only applies to images, animations, video, audio, etc. that the user agent can recognize .

Principle 5: Ensure that user interface is understandable

Guideline 5.1 Help users avoid unnecessary messages.

5.1.1 Option to Ignore: The user has the option to turn off rendering of non-essential or low priority text messages, based on priority properties defined by the author (e.g., ignoring messages marked "polite" using WAI-ARIA). (Level AA)

Guideline 5.2 Help users avoid and correct mistakes.

5.2.1 Form Submission: The user has the option to confirm (or cancel) any form submission made while content focus is not on the submitting control (e.g., forms that submit when Enter is pressed). (Level AA)
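
For illustration, the sketch below intercepts form submissions and asks for confirmation when focus was not on the submitting control (for example, Enter pressed inside a text field). It relies on SubmitEvent.submitter, which is available in current browsers; a user agent implementing this success criterion natively would not need page script.

    document.addEventListener("submit", (event) => {
      const submitter = (event as SubmitEvent).submitter;
      const focusWasOnSubmitControl = submitter !== null && submitter === document.activeElement;
      if (!focusWasOnSubmitControl && !window.confirm("Submit this form?")) {
        event.preventDefault(); // the user chose to cancel the submission
      }
    });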

Guideline 5.3 Document the user agent user interface including all accessibility features.

5.3.1 Accessible Format: At least one version of the documentation is either (Level A):

5.3.2 Document Accessibility Features : All user agent features that benefit accessibility @@DEFINE - as specified in the conformance claim@@ are documented. (Level A)

5.3.3 Changes Between Versions : Changes to features that benefit accessibility since the previous version of the user agent are documented. (Level AA)

5.3.4 Centralized View : There is a centralized view of all features of the user agent that benefit accessibility, in a dedicated section of the documentation. (Level AA)

5.3.5 Context Sensitive Help : There is context-sensitive help on all user agent features that benefit accessibility. (Level AAA)

Conformance

@@Ed. This section is still under development@@

Appendix A: Glossary

This glossary is normative .

a · b · c · d · e · f · g · h · i · j · k · l · m · n · o · p · q · r · s · t · u · v · w · x · y · z

activate
To execute or carry out the behaviors associated with an enabled element in the rendered content or component of the user agent user interface .
accessibility platform architecture
A programmatic interface that is specifically engineered to enhance communication between mainstream software applications and assistive technologies (e.g., MSAA and IAccessible2 for Windows applications, Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications). On some platforms it may be conventional to enhance communication further via implementing a DOM.
alert
To make the user aware of some event, without requiring acknowledgement. For example, the user agent may alert the user that new content is available on the server by displaying a text message in the user agent's status bar.
alternative content
Content that is used in place of other content that a person may not be able to access. Alternative content fulfills essentially the same function or purpose as the original content. Examples include text alternatives for non-text content, captions for audio, audio descriptions for video, sign language for audio, and media alternatives for time-based media. See WCAG for more information.
alternative content stack:
The set of alternative content items for a given position in content. The items may be mutually exclusive (e.g., regular contrast graphic vs. high contrast graphic) or non-exclusive (e.g., caption track that can play at the same time as a sound track).
animation
Content that is rendered such that it can change over time, potentially giving the visual perception of movement. Examples include video, animated images, scrolling text, and programmatic animation (e.g., moving or replacing rendered objects).
applet  
A program (generally written in the Java programming language) that is part of content and that the user agent executes.
application programming interface (API) , conventional input/output/device API
An application programming interface ( API ) defines how communication may take place between applications. Implementing APIs that are independent of a particular operating environment (as are the W3C DOM Level 2 specifications) may reduce implementation costs for multi-platform user agents and promote the development of multi-platform assistive technologies. Implementing conventional APIs for a particular operating environment may reduce implementation costs for assistive technology developers who wish to interoperate with more than one piece of software running on that operating environment. A "device API " defines how communication may take place with an input or output device such as a keyboard, mouse, or video card. In this document, an "input/output API " defines how applications or devices communicate with a user agent. As used in this document, input and output APIs include, but are not limited to, device APIs. Input and output APIs also include more abstract communication interfaces than those specified by device APIs. A "conventional input/output API" is one that is expected to be implemented by software running on a particular operating environment. For example, the conventional input APIs of the user agent are for the mouse and keyboard. For touch screen devices or mobile devices, conventional input APIs may include stylus, buttons, and voice. The graphical display and sound card are considered conventional output devices for a graphical desktop computer environment, and each has an associated API .
assistive technology
An assistive technology:
  1. relies on services (such as retrieving Web resources and parsing markup) provided by one or more other "host" user agents. Assistive technologies communicate data and messages with host user agents by using and monitoring APIs.
  2. provides services beyond those offered by the host user agents to meet the requirements of users with disabilities. Additional services include alternative renderings (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, and content transformations (e.g., to make tables more accessible).

Examples of assistive technologies that are important in the context of this document include the following:

Beyond this document, assistive technologies consist of software or hardware that has been specifically designed to assist people with disabilities in carrying out daily activities. These technologies include wheelchairs, reading machines, devices for grasping, text telephones, and vibrating pagers. For example, the following very general definition of "assistive technology device" comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:

Any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities.

audio
The technology of sound reproduction. Audio can be created synthetically (including speech synthesis), streamed from a live source (such as a radio broadcast), or recorded from real world sounds.
 
audio description - also called described video, video description and descriptive narration
An equivalent alternative that takes the form of narration added to the audio to describe important visual details that cannot be understood from the main soundtrack alone. Audio description of video provides information about actions, characters, scene changes, on-screen text, and other visual content. In standard audio description, narration is added during existing pauses in dialogue. In extended audio description , the video is paused so that there is time to add additional description.
authors
The people who have worked either alone or collaboratively to create the content (includes content authors, designers, programmers, publishers, testers, etc.).
author styles
Style property values that are set by the author as part of the content.
background images
Images that are rendered on the base background .
base background
The background of the content as a whole, such that no content may be layered behind it. In graphics applications, the base background is often referred to as the canvas.
blinking text
Text whose visual rendering alternates between visible and invisible at any rate of change.
captions
An equivalent alternative that takes the form of text synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some countries, the term "subtitle" is used to refer to dialogue only and "captions" is used as the term for dialogue plus sounds and speaker identification. In other countries, "subtitle" (or its translation) is used to refer to both. Open captions are captions that are always rendered with a visual track; they cannot be turned off. Closed captions are captions that may be turned on and off. The captions requirements of this document assume that the user agent can recognize the captions as such.
Note: Other terms that include the word "caption" may have different meanings in this document. For instance, a "table caption" is a title for the table, often positioned graphically above or below the table. In this document, the intended meaning of "caption" will be clear from context.
character encoding
A mapping from a character set definition to the actual code units used to represent the data. Refer to the Unicode specification [UNICODE] for more information about character encodings. Refer to "Character Model for the World Wide Web" [CHARMOD] for additional information about characters and character encodings.
collated text transcript
A collated text transcript is a text equivalent of a movie or other animation. More specifically, it is the combination of the text transcript of the audio track and the text equivalent of the visual track . For example, a collated text transcript typically includes segments of spoken dialogue interspersed with text descriptions of the key visual elements of a presentation (actions, body language, graphics, and scene changes). See also the definitions of text transcript and audio description . Collated text transcripts are essential for individuals who are deaf-blind.
configure , control , user option
In the context of this document, the verbs "to control" and "to configure" share in common the idea of governance such as a user may exercise over interface layout, user agent behavior, rendering style, and other parameters required by this document. Generally, the difference in the terms centers on the idea of persistence . When a user makes a change by "controlling" a setting, that change usually does not persist beyond that user session. On the other hand, when a user "configures" a setting, that setting typically persists into later user sessions. Furthermore, the term "control" typically means that the change can be made easily (such as through a keyboard shortcut) and that the results of the change occur immediately. The term "configure" typically means that making the change requires more time and effort (such as making the change via a series of menus leading to a dialog box, or via style sheets or scripts). The results of "configuration" might not take effect immediately (e.g., due to time spent reinitializing the system, initiating a new session, or rebooting the system).

In order to be able to configure and control the user agent, the user needs to be able to "write" as well as "read" values for these parameters. Configuration settings may be stored in a profile . The range and granularity of the changes that can be controlled or configured by the user may depend on limitations of the operating environment or hardware.

Both configuration and control can apply at different "levels": across Web resources (i.e., at the user agent level, or inherited from the operating environment ), to the entirety of a Web resource, or to components of a Web resource (e.g., on a per-element basis).

A global configuration is one that applies across elements of the same Web resource, as well as across Web resources. User agents may allow users to choose configurations based on various parameters, such as hardware capabilities or natural language preferences. Note: In this document, the noun "control" refers to a user interface control .
content (Web content)
Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions. [adapted from WCAG 2.0]

empty content (which may be alternative content) is either a null value or an empty string (i.e., one that is zero characters long). For instance, in HTML, alt="" sets the value of the alt attribute to the empty string. In some markup languages, an element may have empty content (e.g., the HR element in HTML).

device-independence
In this document, device-independence refers to the desirable property that operation of a user agent feature is not bound to only one input or output device.
document object, Document Object Model (DOM)
The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. An overview of DOM-related materials at W3C and around the web is available at http://www.w3.org/DOM/#what .
document source , text source
In this document, the term "document source" refers to the data that Text the user agent receives as the direct result of a renders upon user request for a Web resource (e.g., as the result of an HTTP/1.1 [RFC2616] "GET", or as the result of viewing a resource on the local file system). The document source generally refers to view the "payload" of the user agent's request, and does not generally include information exchanged as part of the transfer protocol. The document source is data that is prior to any repair by the user agent (e.g., prior to repairing invalid markup). "Text source" refers to the text portion of the document source. specific viewport content (i.e. selected content, frame, page).
documentation
Any information that supports the use of a user agent. This information may be found, for example, in manuals, installation instructions, the help system, and tutorials. Documentation may be distributed (e.g., as files installed as part of the installation, on CD-ROM, or on the Web). See guideline 5.3 for information about documentation requirements.
element , element type
This document uses the terms "element" and "element type" primarily in the sense employed by the XML 1.0 specification ([XML], section 3): an element type is a syntactic construct of a document type definition (DTD) for its application. This sense is also relevant to structures defined by XML schemas. The document also uses the term "element" more generally to mean a type of content (such as video or sound) or a logical construct (such as a header or list).
enabled element , disabled element
An enabled element is a piece of content with associated behaviors that can be activated through the user interface or through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of interactive elements defined by implemented markup languages. Some elements may only be enabled elements for part of a user session. For instance, an element may be disabled by a script as the result of user interaction. Or, an element may only be enabled during a given time period (e.g., during part of a SMIL 1.0 [SMIL] presentation). Or, the user may be viewing content in "read-only" mode, which may disable some elements. A disabled element is a piece of content that is potentially an enabled element, but is not enabled in the current session. One example of a disabled element is a menu item that is unavailable in the current session; it might be "grayed out" to show that it is disabled. Generally, disabled elements will be interactive elements that are not enabled in the current session. This document distinguishes disabled elements (not currently enabled) from non-interactive elements (never enabled). For the requirements of this document, user selection does not constitute user interaction with enabled elements. See the definition of content focus. Note: Enabled and disabled elements come from content; they are not part of the user agent user interface. Note: The term "active element" is not used in this document since it may suggest several different concepts, including: interactive element, enabled element, and an element available for activation (e.g., a "grayed out" menu item).
equivalent alternative
The term "equivalent" is used in this document as it is used in the Web Content Accessibility Guidelines 1.0 [WCAG10] : Content that is "equivalent" to an acceptable substitute for other content when both fulfill essentially the same function or purpose upon presentation that a person may not be able to the user. In the context of this document, the access. An equivalent must fulfill alternative fulfills essentially the same function for the person with a disability (at least insofar as is feasible, given the nature of the disability and the state of technology), or purpose as the primary original content does for the person without any disability. Equivalents include upon presentation:
events and scripting, event handler, event type
User agents often perform a task when an event having a particular "event type" occurs, including user interface events, changes to content, loading of content, and requests from the operating environment . Some markup languages allow authors to specify that a script, called an event handler , be executed when an event of a given type occurs. An event handler is explicitly associated with an element when the event handler is associated with that element through scripting, markup, or the DOM . The term " event bubbling " describes a programming style where a single event handler dispatches events to more than one element. In this case, the event handlers are not explicitly associated with the elements receiving the events (except for the single element that dispatches the events). Note: The combination of HTML, style sheets, the DOM, and scripting is commonly referred to as "Dynamic HTML" or DHTML. However, as there is no W3C specification that formally defines DHTML, this document only refers to event handlers and scripts.
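For example, an author might associate event handlers with elements through markup and through the DOM; the element ids, file name, and function name below are illustrative:

  <a href="help.html" onclick="return showHelp();">Help</a>
  <button id="save-button" type="button">Save</button>

  <script type="text/javascript">
    function showHelp() {
      // Illustrative handler body
      return false;
    }
    // Handler associated through the DOM (DOM Level 2 Events)
    document.getElementById("save-button").addEventListener("click", showHelp, false);
  </script>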
explicit user request
In this document, the term "explicit user request" refers to any Any user interaction by the user through the user agent user interface (not through rendered content ), , the focus , or the selection . User requests are made, for example, through user agent user interface controls and keyboard bindings.

Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device.

Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." In this document, this type of error is still considered an explicit user request.

focus , content focus , user interface focus , current focus
In this document, the term "content focus" "content focus" refers to a user agent mechanism that has all of the following properties:
  1. It designates zero or one element in content that is either enabled or disabled . In general, the focus should only designate enabled elements, but it may also designate disabled elements.
  2. It has state, i.e., it may be "set" on an enabled element, programmatically or through the user interface. Some content specifications (e.g., HTML, CSS) allow authors to associate behavior with focus set and unset events (see the markup sketch after this list).
  3. Once it has been set, it may be used to trigger other behaviors associated with the enabled element (e.g., the user may activate a link or change the state of a form control). These behaviors may be triggered programmatically or through the user interface (e.g., through keyboard events).
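A minimal markup sketch of these properties (the attribute values and file name are illustrative): the link can receive content focus through the keyboard or programmatically, and author-specified behaviors run when focus is set and unset:

  <a href="details.html"
     onfocus="this.style.backgroundColor='yellow';"
     onblur="this.style.backgroundColor='';">Product details</a>

  <script type="text/javascript">
    // Content focus may also be set programmatically (DOM Level 2 HTML).
    document.links[0].focus();
  </script>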

User interface mechanisms may resemble content focus, but do not satisfy all of the properties. For example, designers of word processing software often implement a "caret" that indicates the current location of text input or editing. The caret may have state and may respond to input device events, but it does not enable users to activate the behaviors associated with enabled elements.

The user interface focus shares the properties of the content focus except that, rather than designating pieces of content, it designates zero or one control of the user agent user interface that has associated behaviors (e.g., a radio button, text box, or menu).

On the screen, the user agent may highlight the content focus in a variety of ways, including through colors, fonts, graphics, and magnification. The user agent may also highlight the content focus when rendered as synthesized speech, for example through changes in speech prosody. The dimensions of the rendered content focus may exceed those of the viewport.

In this document, each viewport is expected to have at most one content focus and at most one user interface focus. This document includes requirements for content focus only, for user interface focus only, and for both. When a requirement refers to both, the term "focus" is used.

When several viewports coexist, at most one viewport's content focus or user interface focus responds to input events; this is called the current focus.

graphical
In this document, the term "graphical" refers to information Information (including text, colors, graphics, images, and animations) rendered for visual consumption.
highlight
In this document, "to highlight" means to To emphasize through the user interface. For example, user agents highlight which content is selected or focused. Graphical highlight mechanisms include dotted boxes, underlining, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume ("speech prosody"). ("speech prosody").
image
This document uses the term "image" to refer (as is commonly the case) to pictorial Pictorial content . However, in this document, term image that is limited to static (i.e., unmoving) visual information. (i.e.not moving or changing). See also the definition of animation .
important elements
This specification intentionally does not identify which "important elements" must be navigable as this will vary by specification. What constitutes "efficient navigation" may depend on a number of factors as well, including the "shape" of content (e.g., sequential navigation of long lists is not efficient) and desired granularity (e.g., among tables, then among the cells of a given table). Refer to the Techniques document [UAAG10-TECHS] for information about identifying and navigating important elements.
input configuration
The set of "bindings" between user agent functionalities and user interface input mechanisms (e.g., menus, buttons, keyboard keys, and voice commands). The default input configuration is the set of bindings the user finds after installation of the software. Input configurations may be affected by author-specified bindings (e.g., through the accesskey attribute of HTML 4 [HTML4]).
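For example, an author-specified binding in HTML 4 (the key choice and control name are illustrative); the user agent typically combines the access key with a platform-specific modifier:

  <label for="search">Search: </label>
  <input type="text" id="search" name="q" accesskey="s">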
 
interactive element , non-interactive element
An interactive element is a piece of content that, by specification or by programmatic enablement, may have associated behaviors to be executed or carried out as a result of user or programmatic interaction. For instance, the interactive elements of HTML 4 [HTML4] include: links, image maps, form elements, elements with a value for the longdesc attribute, and elements with event handlers explicitly associated with them (e.g., through the various "on" attributes). The role of an element as an interactive element is subject to applicability. A non-interactive element is an element that, by format specification, does not have associated behaviors. The expectation of this document is that interactive elements become enabled elements in some sessions, and non-interactive elements never become enabled elements.
keyboard command
Direct commands (also called keyboard shortcuts or accelerator keys) are commands tied to particular user interface controls or application functions, allowing the user to navigate to or activate them without traversing any intervening controls (e.g., "ctrl"+"S" to save a document). It is sometimes useful to distinguish direct commands that are associated with controls rendered in the current context (e.g., "alt"+"D" to move focus to the address bar) from those that can activate program functionality not associated with any currently rendered controls (e.g., "F1" to open the Help system). Direct commands help users accelerate their selections.
natural language
Natural language is spoken, written, or signed human language such as French, Japanese, and American Sign Language. On the Web, the natural language of content may be specified by markup or HTTP headers. Some examples include the lang attribute in HTML 4 ( [HTML4] section 8.1), the xml:lang attribute in XML 1.0 ( [XML] , section 2.12), the hreflang attribute for links in HTML 4 ( [HTML4] , section 12.1.5), the HTTP Content-Language header ( [RFC2616] , section 14.12) and the Accept-Language request header ( [RFC2616] , section 14.4). See also the definition of script .
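For example, an author can declare the natural language of content and of a link target in HTML 4; the language codes and file name below are illustrative:

  <p lang="en">The annual report is also available in French:
    <a href="rapport.html" hreflang="fr" lang="fr">Rapport annuel</a>.
  </p>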
normative , informative [ WCAG 2.0 , ATAG 2.0]
What is identified as "normative" is required for conformance (noting that one may conform in a variety of well-defined ways to this document). What is identified as "informative" (sometimes, "non-normative") is never required for conformance.
notify
To make the user aware of events or status changes. Notifications can occur within the user agent user interface (e.g., status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g., a confirmation dialog).
operating environment
The term "operating environment" "operating environment" refers to the environment that governs the user agent's operation, whether it is an operating system or a programming language environment such as Java.
override
In this document, the term "override" "override" means that one configuration or behavior preference prevails over another. Generally, the requirements of this document involve user preferences prevailing over author preferences and user agent default settings and behaviors. Preferences may be multi-valued in general (e.g., the user prefers blue over red or yellow), and include the special case of two values (e.g., turn on or off blinking text content).
placeholder
A placeholder is content generated by the user agent to replace author-supplied content. A placeholder may be generated as the result of a user preference (e.g., to not render images) or as repair content (e.g., when an image cannot be found). Placeholders can be any type of content, including text, images, and audio cues. A placeholder should identify the technology of the object whose place it holds. Placeholders will appear in the alternative content stack.
platform accessibility architecture
A programmatic interface that is specifically engineered to enhance communication between mainstream software applications and assistive technologies (e.g., MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for MacOSX applications, Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications, etc.). On some platforms it may be conventional to enhance communication further via implementing a DOM.
plug-in [ATAG 2.0]
A plug-in is a program that runs as part of the user agent and that is not part of content . Users generally choose to include or exclude plug-ins from their user agent.
point of regard
The point of regard is a position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard may vary. For example, it may be a point (e.g., a moment during an audio rendering or a cursor position in a graphical rendering), or a range of text (e.g., focused text), or a two-dimensional area (e.g., content rendered through a two-dimensional graphical viewport). The point of regard is almost always within the viewport, but it may exceed the spatial or temporal dimensions of the viewport (see the definition of rendered content for more information about viewport dimensions). The point of regard may also refer to a particular moment in time for content that changes over time (e.g., an audio-only presentation ). User agents may determine the point of regard in a number of ways, including based on viewport position in content, content focus , and selection . The stability of the point of regard is addressed by @@.
profile
A profile is a named and persistent representation of user preferences that may be used to configure a user agent. Preferences include input configurations, style preferences, and natural language preferences. In operating environments with distinct user accounts, profiles enable users to reconfigure software quickly when they log on. Users may share their profiles with one another. Platform-independent profiles are useful for those who use the same user agent on different platforms.
prompt [ATAG 2.0]
Any user agent initiated request for a decision or piece of information from users.
properties, values, and defaults
A user agent renders a document by applying formatting algorithms and style information to the document's elements. Formatting depends on a number of factors, including where the document is rendered: on screen, on paper, through loudspeakers, on a braille display, or on a mobile device. Style information (e.g., fonts, colors, and synthesized speech prosody) may come from the elements themselves (e.g., certain font and phrase elements in HTML), from style sheets, or from user agent settings. For the purposes of these guidelines, each formatting or style option is governed by a property and each property may take one value from a set of legal values. Generally in this document, the term "property" has the meaning defined in CSS 2 ([CSS2], section 3). A reference to "styles" in this document means a set of style-related properties. The value given to a property by a user agent at installation is called the property's default value .
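For example, each declaration in a CSS rule assigns a value to a property; the selector and values below are illustrative:

  <style type="text/css">
    /* The 'font-size' and 'color' properties are given values that
       override the user agent's default values for H1 elements. */
    h1 { font-size: 150%; color: navy; }
  </style>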
recognize
Authors encode information in many ways, including in markup languages, style sheet languages, scripting languages, and protocols. When the information is encoded in a manner that allows the user agent to process it with certainty, the user agent can "recognize" the information. For instance, HTML allows authors to specify a heading with the H1 element, so a user agent that implements HTML can recognize that content as a heading. If the author creates a heading using a visual effect alone (e.g., just by increasing the font size), then the author has encoded the heading in a manner that does not allow the user agent to recognize it as a heading.
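A brief sketch of this contrast (the heading text is illustrative):

  <!-- Recognizable: the heading is encoded with the H1 element -->
  <h1>Chapter 1</h1>

  <!-- Not recognizable as a heading: only a visual effect -->
  <p style="font-size: 200%">Chapter 1</p>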

Some requirements of this document depend on content roles, content relationships, timing relationships, and other information supplied by the author. These requirements only apply when the author has encoded that information in a manner that the user agent can recognize. See the section on conformance for more information about applicability.

In practice, user agents will rely heavily on information that the author has encoded in a markup language or style sheet language. On the other hand, behaviors, style, and meaning encoded in a script , and markup in an unfamiliar XML namespace, may not be recognized by the user agent as easily or at all. The Techniques document [UAAG10-TECHS] lists some markup known to affect accessibility that user agents can recognize.

rendered content , rendered text
Rendered content is the part of content that the user agent makes available to the user's senses of sight and hearing (and only those senses for the purposes of this document). Any content that causes an effect that may be perceived through these senses constitutes rendered content. This includes text characters, images, style sheets, scripts, and anything else in content that, once processed, may be perceived through sight and hearing.
The term "rendered text" "rendered text" refers to text content that is rendered in a way that communicates information about the characters themselves, whether visually or as synthesized speech.
In the context of this document, invisible content is content that is not rendered but that may influence the graphical rendering (e.g., layout) of other content. Similarly, silent content is content that is not rendered but that may influence the audio rendering of other content. Neither invisible nor silent content is considered rendered content.
repair content , repair text
In this document, the term "repair content" "repair content" refers to content generated by the user agent in order to correct an error condition. "Repair text" "Repair text" refers to the text portion of repair content. Some error conditions that may lead to the generation of repair content include:

This document does not require user agents to include repair content in the document object . Repair content inserted in the document object should conform to the Web Content Accessibility Guidelines 1.0 [WCAG10] . For more information about repair techniques for Web content and software, refer to "Techniques for Authoring Tool Accessibility Guidelines 1.0" [ATAG10-TECHS] .

script
In this document, the term "script" "script" almost always refers to a scripting (programming) language used to create dynamic Web content. However, in guidelines referring to the written (natural) language of content, the term "script" "script" is used as in Unicode [UNICODE] to mean "A "A collection of symbols used to represent textual information in one or more writing systems." systems."
Information encoded in (programming) scripts may be difficult for a user agent to recognize . For instance, a user agent is not expected to recognize that, when executed, a script will calculate a factorial. The user agent will be able to recognize some information in a script by virtue of implementing the scripting language or a known program library (e.g., the user agent is expected to recognize when a script will open a viewport or retrieve a resource from the Web).
selection , current selection
In this document, the term "selection" "selection" refers to a user agent mechanism for identifying a (possibly empty) range of content . Generally, user agents limit the type of content that may be selected to text content (e.g., one or more fragments of text). In some user agents, the value of the selection is constrained by the structure of the document tree.

On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. The selection may also be highlighted when rendered as synthesized speech, for example through changes in speech prosody. The dimensions of the rendered selection may exceed those of the viewport.

The selection may be used for a variety of purposes, including for cut and paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of point of regard .

The selection has state, i.e., it may be "set," programmatically or through the user interface.

In this document, each viewport is expected to have at most one selection. When several viewports coexist, at most one viewport's selection responds to input events; this is called the current selection.

Note: Some user agents may also implement a selection for designating a range of information in the user agent user interface . The current document only includes requirements for a content selection mechanism.

serial access , sequential navigation
In this document, the expression "serial access" "serial access" refers to one-dimensional access to rendered content. Some examples of serial access include listening to an audio stream or watching a video (both of which involve one temporal dimension), or reading a series of lines of braille one line at a time (one spatial dimension). Many users with blindness have serial access to content rendered as audio, synthesized speech, or lines of braille.

The expression "sequential navigation" "sequential navigation" refers to navigation through an ordered set of items (e.g., the enabled elements in a document, a sequence of lines or pages, or a sequence of menu options). Sequential navigation implies that the user cannot skip directly from one member of the set to another, in contrast to direct or structured navigation. Users with blindness or some users with a physical disability may navigate content sequentially (e.g., by navigating through links, one by one, in a graphical viewport with or without the aid of an assistive technology). Sequential navigation is important to users who cannot scan rendered content visually for context and also benefits users unfamiliar with content. The increments of sequential navigation may be determined by a number of factors, including element type (e.g., links only), content structure (e.g., navigation from heading to heading), and the current navigation context (e.g., having navigated to a table, allow navigation among the table cells).

Users with serial access to content or who navigate sequentially may require more time to access content than users who use direct or structured navigation.

support , implement , conform
In this document, the terms "support," "implement," and "conform" all refer to what a developer has designed a user agent to do, but they represent different degrees of specificity. A user agent "supports" general classes of objects, such as "images" or "Japanese." A user agent "implements" a specification (e.g., the PNG and SVG image format specifications or a particular scripting language), or an API (e.g., the DOM API) when it has been programmed to follow all or part of a specification. A user agent "conforms to" a specification when it implements the specification and satisfies its conformance criteria.
synchronize
In this document, "to synchronize" "to synchronize" refers to the act of time-coordinating two or more presentation components (e.g., a visual track with captions, or several tracks in a multimedia presentation). For Web content developers, the requirement to synchronize means to provide the data that will permit sensible time-coordinated rendering by a user agent. For example, Web content developers can ensure that the segments of caption text are neither too long nor too short, and that they map to segments of the visual track that are appropriate in length. For user agent developers, the requirement to synchronize means to present the content in a sensible time-coordinated fashion under a wide range of circumstances including technology constraints (e.g., small text-only displays), user limitations (slow reading speeds, large font sizes, high need for review or repeat functions), and content that is sub-optimal in terms of accessibility.
technology (Web content) - or shortened to technology [ WCAG 2.0 , ATAG 2.0]
A mechanism for encoding instructions to be rendered, played or executed by user agents . Web Content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences that range from static Web pages to multimedia presentations to dynamic Web applications. Some common examples of Web content technologies include HTML, CSS, SVG, PNG, PDF, Flash, and JavaScript.
text
In this document, the term "text" "text" used by itself refers to a sequence of characters from a markup language's document character set . set. Refer to the "Character "Character Model for the World Wide Web" Web" [CHARMOD] for more information about text and characters. Note: This document makes use of other terms that include the word "text" "text" that have highly specialized meanings: collated text transcript , non-text content , text content , non-text element , text element , text equivalent , and text transcript .
text content , non-text content , text element , non-text element , text equivalent , non-text equivalent
As used in this document a "text element" "text element" adds text characters to either content or the user interface . Both in the Web Content Accessibility Guidelines 1.0 [WCAG10] and in this document, text elements are presumed to produce text that can be understood when rendered visually, as synthesized speech, or as Braille. Such text elements benefit at least these three groups of users:
  1. visually-displayed text benefits users who are deaf and adept in reading visually-displayed text;
  2. synthesized speech benefits users who are blind and adept in use of synthesized speech;
  3. braille benefits users who are blind, and possibly deaf-blind, and adept at reading braille.

A text element may consist of both text and non-text data. For instance, a text element may contain markup for style (e.g., font size or color), structure (e.g., heading levels), and other semantics. The essential function of the text element should be retained even if style information happens to be lost in rendering.

A user agent may have to process a text element in order to have access to the text characters. For instance, a text element may consist of markup, it may be encrypted or compressed, or it may include embedded text in a binary format (e.g., JPEG ).

"Text content" "Text content" is content that is composed of one or more text elements. A "text equivalent" "text equivalent" (whether in content or the user interface) is an equivalent composed of one or more text elements. Authors generally provide text equivalents for content by using the alternative content mechanisms of a specification.

A "non-text element" "non-text element" is an element (in content or the user interface) that does not have the qualities of a text element. "Non-text content" "Non-text content" is composed of one or more non-text elements. A "non-text equivalent" "non-text equivalent" (whether in content or the user interface) is an equivalent composed of one or more non-text elements.

text decoration
In this document, a "text decoration" "text decoration" is any stylistic effect that the user agent may apply to visually rendered text that does not affect the layout of the document (i.e., does not require reformatting when applied or removed). Text decoration mechanisms include underline, overline, and strike-through.
text format
Any media object given an Internet media type of "text" (e.g., "text/plain", "text/html", or "text/*") as defined in RFC 2046 [RFC2046] , section 4.1, or any media object identified by Internet media type to be an XML document (as defined in [XML] , section 2) or SGML application. Refer, for example, to Internet media types defined in "XML Media Types" [RFC3023] .
text transcript
A text transcript is a text equivalent of audio information (e.g., an audio-only presentation or the audio track of a movie or other animation). It provides text for both spoken words and non-spoken sounds such as sound effects. Text transcripts make audio information accessible to people who have hearing disabilities and to people who cannot play the audio. Text transcripts are usually created by hand but may be generated on the fly (e.g., by voice-to-text converters). See also the definitions of captions and collated text transcripts .
track ( audio track or visual track )
Content rendered as sound through an audio viewport . An audio track may be all or part of the audio portion of a presentation (e.g., each instrument may have a track, or each stereo channel may have a track). See also the definition of visual track .
user agent
A user agent is any software that retrieves, renders, and facilitates end-user interaction with Web content.
user agent default styles
User agent default styles are style property values applied in the absence of any author or user styles. Some markup languages specify a default rendering for content in that markup language; others do not. For example, XML 1.0 [XML] does not specify default styles for XML documents. HTML 4 [HTML4] does not specify default styles for HTML documents, but the CSS 2 [CSS2] specification suggests a sample default style sheet for HTML 4 based on current practice.
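For illustration, default rules of the kind a user agent might apply in the absence of author or user styles, expressed here as CSS declarations (paraphrased, not quoted from any specification):

  em, i { font-style: italic }
  h1    { font-size: 2em; font-weight: bold }
  pre   { white-space: pre; font-family: monospace }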
user interface , user interface control
For the purposes of this document, user interface includes both:
  1. the user agent user interface , i.e., the controls (e.g., menus, buttons, prompts, and other components for input and output) and mechanisms (e.g., selection and focus) provided by the user agent ("out of the box") that are not created by content .
  2. the "content user interface," i.e., the enabled elements that are part of content, such as form controls, links, and applets .
The document distinguishes them only where required for clarity. For more information, see the section on requirements for content, for user agent features, or both @@.

The term "user "user interface control" control" refers to a component of the user agent user interface or the content user interface, distinguished where necessary.

user styles
User styles are style property values that come from user interface settings, user style sheets, or other user interactions.
view , viewport
The user agent renders content through one or more viewports. Viewports include windows, frames, pieces of paper, loudspeakers, and virtual magnifying glasses. A viewport may contain another viewport (e.g., nested frames). User agent user interface controls such as prompts, menus, and alerts are not viewports.

Graphical and tactile viewports have two spatial dimensions . A viewport may also have temporal dimensions, for instance when audio, speech, animations, and movies are rendered. When the dimensions (spatial or temporal) of rendered content exceed the dimensions of the viewport, the user agent provides mechanisms such as scroll bars and advance and rewind controls so that the user can access the rendered content "outside" the viewport. Examples include: when the user can only view a portion of a large document through a small graphical viewport, or when audio content has already been played.

When several viewports coexist, only one has the current focus at a given moment. This viewport is highlighted to make it stand out.

User agents may render the same content in a variety of ways; each rendering is called a view . For instance, a user agent may allow users to view an entire document or just a list of the document's headers. These are two different views of the document.

"top-level" "top-level" viewports are viewports that are not contained within other user agent viewports.

visual-only presentation
A visual-only presentation is content consisting exclusively of one or more visual tracks presented concurrently or in series. A silent movie is an example of a visual-only presentation.
visual track
A visual object is content rendered through a graphical viewport . Visual objects include graphics, text, and visual portions of movies and other animations. A visual track is a visual object that is intended as a whole or partial presentation. A visual track does not necessarily correspond to a single physical object or software object.
voice browser
From "Introduction "Introduction and Overview of W3C Speech Interface Framework" Framework" [VOICEBROWSER] : "A "A voice browser is a device (hardware and software) that interprets voice markup languages to generate voice output, interpret voice input, and possibly accept and produce other modalities of input and output." output."
web resource
Anything that can be identified by a Uniform Resource Identifier ( URI ).

Appendix B: How to refer to UAAG 2.0 from other documents

@@Ed. This section is still under development@@


Appendix C: References

This section is informative .

For the latest version of any W3C specification please consult the list of W3C Technical Reports at http://www.w3.org/TR/. Some documents listed below may have been superseded since the publication of this document.

Note: In this document, bracketed labels such as "[WCAG20]" link to the corresponding entries in this section. These labels are also identified as references through markup.

[CSS1]
"Cascading "Cascading Style Sheets (CSS1) Level 1 Specification," Specification," B. Bos, H. Wium Lie, eds., 17 December 1996, revised 11 January 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-CSS1-19990111.
[CSS2]
"Cascading "Cascading Style Sheets, level 2 (CSS2) Specification," Specification," B. Bos, H. Wium Lie, C. Lilley, and I. Jacobs, eds., 12 May 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-CSS2-19980512/.
[DOM2CORE]
"Document "Document Object Model (DOM) Level 2 Core Specification," Specification," A. Le Hors, P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion, S. Byrne, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/.
[DOM2STYLE]
"Document "Document Object Model (DOM) Level 2 Style Specification," Specification," V. Apparao, P. Le Hégaret, C. Wilson, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113/.
[INFOSET]
"XML "XML Information Set," Set," J. Cowan and R. Tobin, eds., 24 October 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-xml-infoset-20011024/.
[RFC2046]
"Multipurpose "Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types," Types," N. Freed, N. Borenstein, November 1996.
[WCAG10]
"Web "Web Content Accessibility Guidelines 1.0," 1.0," W. Chisholm, G. Vanderheiden, and I. Jacobs, eds., 5 May 1999. This W3C Recommendation is http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/.
[XML]
"Extensible "Extensible Markup Language (XML) 1.0 (Second Edition)," Edition)," T. Bray, J. Paoli, C.M. Sperberg-McQueen, eds., 6 October 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xml-20001006.
[AT1998]
The Assistive Technology Act of 1998 .
[ATAG10]
"Authoring "Authoring Tool Accessibility Guidelines 1.0," 1.0," J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
[ATAG10-TECHS]
"Techniques "Techniques for Authoring Tool Accessibility Guidelines 1.0," 1.0," J. Treviranus, C. McCathieNevile, J. Richards, eds., 29 Oct 2002. This W3C Note is http://www.w3.org/TR/2002/NOTE-ATAG10-TECHS-20021029/.
[CHARMOD]
"Character "Character Model for the World Wide Web," Web," M. Dürst and F. Yergeau, eds., 30 April 2002. This W3C Working Draft is http://www.w3.org/TR/2002/WD-charmod-20020430/. The latest version is available at http://www.w3.org/TR/charmod/.
[DOM2HTML]
"Document "Document Object Model (DOM) Level 2 HTML Specification," Specification," J. Stenback, P. Le Hégaret, A. Le Hors, eds., 8 November 2002. This W3C Proposed Recommendation is http://www.w3.org/TR/2002/PR-DOM-Level-2-HTML-20021108/. The latest version is available at http://www.w3.org/TR/DOM-Level-2-HTML/.
[HTML4]
"HTML "HTML 4.01 Recommendation," Recommendation," D. Raggett, A. Le Hors, and I. Jacobs, eds., 24 December 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-html401-19991224/.
[RFC2616]
"Hypertext "Hypertext Transfer Protocol — HTTP/1.1," — HTTP/1.1," J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
[RFC3023]
"XML "XML Media Types," Types," M. Murata, S. St. Laurent, D. Kohn, January 2001.
[SMIL]
"Synchronized "Synchronized Multimedia Integration Language (SMIL) 1.0 Specification," Specification," P. Hoschka, ed., 15 June 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-smil-19980615/.
[SMIL20]
"Synchronized "Synchronized Multimedia Integration Language (SMIL 2.0) Specification," Specification," J. Ayars, et al., eds., 7 August 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-smil20-20010807/.
[SVG]
"Scalable "Scalable Vector Graphics (SVG) 1.0 Specification," Specification," J. Ferraiolo, ed., 4 September 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-SVG-20010904/.
[UAAG10]
" " User Agent Accessibility Guidelines 1.0 ," ," I. Jacobs, J. Gunderson, E. Hansen, eds.17 December 2002. This W3C Recommendation is available at http://www.w3.org/TR/2002/REC-UAAG10-20021217/.
[UAAG10-CHECKLIST]
An appendix to this document lists all of the checkpoints, sorted by priority. The checklist is available in either tabular form or list form .
[UAAG10-ICONS]
Information about UAAG 1.0 conformance icons and their usage is available at http://www.w3.org/WAI/UAAG10-Conformance.
[UAAG10-SUMMARY]
An appendix to this document provides a summary of the goals and structure of User Agent Accessibility Guidelines 1.0.
[UAAG10-TECHS]
"Techniques "Techniques for User Agent Accessibility Guidelines 1.0," 1.0," I. Jacobs, J. Gunderson, E. Hansen, eds. The latest draft of the techniques document is available at http://www.w3.org/TR/UAAG10-TECHS/.
[UNICODE]
"The "The Unicode Standard, Version 3.2." 3.2." This technical report of the Unicode Consortium is available at http://www.unicode.org/unicode/reports/tr28/. This is a revision of "The "The Unicode Standard, Version 3.0," 3.0," The Unicode Consortium, Addison-Wesley Developers Press, 2000. ISBN 0-201-61633-5. Refer also to http://www.unicode.org/unicode/standard/versions/ . For information about character encodings , encodings, refer to Unicode Technical Report #17 "Character "Character Encoding Model" Model" .
[VOICEBROWSER]
"Introduction "Introduction and Overview of W3C Speech Interface Framework," Framework," J. Larson, 4 December 2000. This W3C Working Draft is http://www.w3.org/TR/2000/WD-voice-intro-20001204/. The latest version is available at http://www.w3.org/TR/voice-intro/. This document includes references to additional W3C specifications about voice browser technology.
[W3CPROCESS]
"World "World Wide Web Consortium Process Document," Document," I. Jacobs ed. The 19 July 2001 version of the Process Document is http://www.w3.org/Consortium/Process-20010719/. The latest version is available at http://www.w3.org/Consortium/Process/.
[WCAG10-TECHS]
"Techniques "Techniques for Web Content Accessibility Guidelines 1.0," 1.0," W. Chisholm, G. Vanderheiden, and I. Jacobs, eds., 6 November 2000. This W3C Note is http://www.w3.org/TR/2000/NOTE-WCAG10-TECHS-20001106/. The latest version is available at http://www.w3.org/TR/WCAG10-TECHS/. Additional format-specific techniques documents are available from this Note.
[WEBCHAR]
"Web "Web Characterization Terminology and Definitions Sheet," Sheet," B. Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft that defines some terms to establish a common understanding about key Web concepts. This W3C Working Draft is http://www.w3.org/1999/05/WCA-terms/01.
[XAG10]
"XML "XML Accessibility Guidelines 1.0," 1.0," D. Dardailler, S. Palmer, C. McCathieNevile, eds., 3 October 2001. This W3C Working Draft is http://www.w3.org/TR/2002/WD-xag-20021003. The latest version is available at http://www.w3.org/TR/xag.
[XHTML10]
"XHTML[tm] "XHTML[tm] 1.0: The Extensible HyperText Markup Language," Language," S. Pemberton, et al., 26 January 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xhtml1-2000 http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
[XMLDSIG]
"XML-Signature Syntax and Processing," D. Eastlake, J. Reagle, D. Solo, eds., 12 February 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/.
[XMLENC]
"XML Encryption Syntax and Processing," D. Eastlake, J. Reagle, eds., 10 December 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/.

Appendix D: Acknowledgments

Participants active in the UAWG prior to publication:

Other previously active UAWG participants and other contributors to UAAG 2.0:

This document would not have been possible without the work of those who contributed to UAAG 1.0.

This publication has been funded in part with Federal funds from the U.S. Department of Education, National Institute on Disability and Rehabilitation Research (NIDRR) under contract number ED05CO0039. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.


Appendix E: Checklist

@@Ed. This section is still under development@@


Appendix F: Comparison of UAAG 1.0 guidelines to UAAG 2.0

@@Ed. This section is still under development@@