W3C

Techniques for User Agent Accessibility Guidelines 1.0

W3C Working Draft 31 March 2001

This version:
http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010331/
(Formats: plain text, gzip PostScript, gzip PDF, gzip tar file of HTML, zip archive of HTML)
Latest version:
http://www.w3.org/WAI/UA/UAAG10-TECHS/
Previous version:
http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010323
Editors:
Ian Jacobs, W3C
Jon Gunderson, University of Illinois at Urbana-Champaign
Eric Hansen, Educational Testing Service
Authors and Contributors:
See acknowledgements.

Abstract

This document provides techniques for satisfying the checkpoints defined in "User Agent Accessibility Guidelines 1.0" [UAAG10]. These techniques cover the accessibility of user interfaces, content rendering, application programming interfaces (APIs), and languages such as the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and the Synchronized Multimedia Integration Language (SMIL).

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this document series is maintained at the W3C.

This is the 31 March 2001 Working Draft of Techniques for User Agent Accessibility Guidelines 1.0, for review by W3C Members and other interested parties. It is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress". This is work in progress and does not imply endorsement by, or the consensus of, either W3C or participants in the User Agent Accessibility Guidelines Working Group (UAWG).

While User Agent Accessibility Guidelines 1.0 strives to be a stable document (as a W3C Recommendation), the current document is expected to evolve as technologies change and as developers discover more effective techniques for designing accessible user agents.

Please send comments about this document, including suggestions for additional techniques, to the public mailing list w3c-wai-ua@w3.org; public archives are available.

This document is part of a series of accessibility documents published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C). WAI Accessibility Guidelines are produced as part of the WAI Technical Activity. The goals of the User Agent Accessibility Guidelines Working Group are described in the charter.

A list of current W3C Recommendations and other technical documents can be found at the W3C Web site.

Table of contents

Note: With a user agent that implements HTML 4 [HTML4] access keys, readers may navigate directly to the table of contents via the "c" character. Users may have to use additional keyboard strokes depending on their operating environment.

Related resources

"Techniques for User Agent Accessibility Guidelines 1.0" and the "User Agent Accessibility Guidelines 1.0" [UAAG10] are part of a series of accessibility guidelines published by the Web Accessibility Initiative (WAI). These documents explain the responsibilities of user agent developers in making the Web accessible to users with disabilities. The series also includes the "Web Content Accessibility Guidelines 1.0" [WCAG10] (and techniques [WCAG10-TECHS]), which explain the responsibilities of authors, and the "Authoring Tool Accessibility Guidelines 1.0" [ATAG10] (and techniques [ATAG10-TECHS]), which explain the responsibilities of authoring tool developers.


1 Introduction

This document suggests some techniques for satisfying the requirements of the "User Agent Accessibility Guidelines 1.0" [UAAG10]. The techniques listed in this document are not required for conformance to the Guidelines. These techniques are not necessarily the only way to satisfy a checkpoint, nor are they a definitive set of requirements for doing so.

2 The user agent accessibility guidelines

This section lists each checkpoint of "User Agent Accessibility Guidelines 1.0" [UAAG10] along with some possible techniques for satisfying it. Each checkpoint includes a link to its definition in "User Agent Accessibility Guidelines 1.0". Each checkpoint definition is followed by a list of techniques, information about related resources, and references to the accessibility topics in section 3. The accessibility topics of section 3 apply to more than one checkpoint.

Note: Most of the techniques in this document are designed for mainstream (graphical) browsers and multimedia players. However, some of them also make sense for assistive technologies and other user agents. In particular, techniques about communication between user agents will benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies for access to the document object model.

Priorities

Each checkpoint in this document is assigned a priority that indicates its importance for users with disabilities.

[Priority 1]
This checkpoint must be satisfied by user agents; otherwise, one or more groups of users with disabilities will find it impossible to access the Web. Satisfying this checkpoint is a basic requirement for enabling some people to access the Web.
[Priority 2]
This checkpoint should be satisfied by user agents; otherwise, one or more groups of users with disabilities will find it difficult to access the Web. Satisfying this checkpoint will remove significant barriers to Web access for some people.
[Priority 3]
This checkpoint may be satisfied by user agents to make it easier for one or more groups of users with disabilities to access information. Satisfying this checkpoint will improve access to the Web for some people.

Note: This information about checkpoint priorities is included for convenience only. For detailed information about conformance to "User Agent Accessibility Guidelines 1.0" [UAAG10], please refer to that document.

Guideline 1. Support input and output device-independence.

Checkpoints

1.1 Ensure that the user can operate the user agent fully through keyboard input alone. [Priority 1] Both content and user agent. (Checkpoint 1.1)
Note: For example, ensure that the user can interact with enabled elements, select content, navigate viewports, configure the user agent, access documentation, install the user agent, operate controls of the user interface, etc., all entirely through keyboard input. It is also possible to claim conformance to User Agent Accessibility Guidelines 1.0 [UAAG10] for full support through pointing device input and voice input. See the section on input modality labels in UAAG 1.0.

Techniques:

Since the subject of a claim may be one or more software components, one could, for example, claim conformance for the following software used together:

Functionalities addressed by this checkpoint include the following:

For activation of enabled elements:


1.2 For the element with content focus, allow the user to activate any explicitly associated input device event handlers through keyboard input alone. [Priority 1] Content only. (Checkpoint 1.2)
Note: The requirements for this checkpoint refer to any explicitly associated input device event handlers associated with an element, independent of the input modalities for which the user agent conforms. For example, suppose that an element has an explicitly associated handler for pointing device events. Even when the user agent only conforms for keyboard input (and does not conform for the pointing device, for example), this checkpoint requires the user agent to allow the user to activate that handler with the keyboard. This checkpoint is an important special case of checkpoint 1.1. Please refer to the checkpoints of guideline 9 for more information about focus requirements.

Techniques:
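
The following sketch (TypeScript against the standard DOM, with hypothetical function names and keystroke choices) illustrates one way a user agent or assistive technology could expose explicitly associated pointing device event handlers to keyboard users: enumerate the handler attributes on the element with content focus and dispatch the corresponding event on request.

  // Handler attributes that HTML allows authors to associate explicitly with
  // an element for pointing device input.
  const POINTER_HANDLER_ATTRIBUTES = [
    "onclick", "ondblclick", "onmousedown", "onmouseup", "onmouseover", "onmouseout",
  ];

  // List the pointing device handlers explicitly associated with an element.
  function listExplicitHandlers(element: Element): string[] {
    return POINTER_HANDLER_ATTRIBUTES.filter((name) => element.hasAttribute(name));
  }

  // Activate one of those handlers by dispatching the corresponding DOM event.
  function activateHandler(element: Element, handlerName: string): void {
    const eventType = handlerName.replace(/^on/, ""); // "onclick" -> "click"
    element.dispatchEvent(new MouseEvent(eventType, { bubbles: true, cancelable: true }));
  }

  // Hypothetical binding: Enter on the focused element fires its "onclick"
  // handler even when no pointing device is available.
  document.addEventListener("keydown", (event) => {
    const focused = document.activeElement;
    if (event.key === "Enter" && focused && listExplicitHandlers(focused).includes("onclick")) {
      activateHandler(focused, "onclick");
    }
  });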


1.3 Ensure that every message (e.g., prompt, alert, notification, etc.) that is a non-text element and is part of the user agent user interface has a text equivalent. [Priority 1] User agent only. (Checkpoint 1.3)
Note: For example, if the user is alerted of an event by an audio cue, a visually-rendered text equivalent in the status bar would satisfy this checkpoint. Per checkpoint 6.4, a text equivalent for each such message must be available through a standard API. See also checkpoint 6.5 for requirements for programmatic alert of changes to the user interface.

Techniques:


Guideline 2. Ensure user access to all content.

Checkpoints

2.1 For all format specifications that the user agent implements, make content available through the rendering processes described by those specifications. [Priority 1] Content only. (Checkpoint 2.1)
Note: This includes format-defined interactions between author preferences and user preferences/capabilities (e.g., when to render the "alt" attribute in HTML [HTML4], the rendering order of nested OBJECT elements in HTML, test attributes in SMIL [SMIL], and the cascade in CSS2 [CSS2]). If a conforming user agent does not render a content type, it should allow the user to choose a way to handle that content (e.g., by launching another application, by saving it to disk, etc.). This checkpoint does not require that all content be available through each viewport.

Techniques:


2.2 For all text formats that the user agent implements, provide a view of the text source. Text formats include at least the following: (1) all media objects given an Internet media type of "text" (e.g., text/plain, text/html, or text/*), and (2) all SGML and XML applications, regardless of Internet media type (e.g., HTML 4.01, XHTML 1.1, SMIL, SVG, etc.). [Priority 1] Content only. (Checkpoint 2.2)
Note: Refer to [RFC2046], section 4.1 for information about the "text" Internet media type. A user agent would also satisfy this checkpoint by providing a source view for any text format, not just implemented text formats.

Techniques:


2.3 Allow global configuration so that, for each piece of unrendered conditional content "C", the user agent alerts the user to the existence of the content and provides access to it. Provide access to this content according to format specifications or, where unspecified, as follows. If C has a close relationship (e.g., C is a summary, title, alternative, description, expansion, etc.) with another piece of rendered content D, do at least one of the following: (1a) render C in place of D, (2a) render C in addition to D, (3a) provide access to C by querying D, or (4a) allow the user to follow a link to C from the context of D. If C does not have a close relationship to other content (i.e., a relationship other than just a document tree relationship), do at least one of the following: (1b) render a placeholder for C, (2b) provide access to C by query (e.g., allow the user to query an element for its attributes), or (3b) allow the user to follow a link in context to C. [Priority 1] Content only. (Checkpoint 2.3)
Note: The configuration requirement of this checkpoint is global; the user agent is only required to provide one switch that turns on or off these alert and access mechanisms. To satisfy this checkpoint, the user agent may provide access on an element-by-element basis (e.g., by allowing the user to query individual elements) or for all elements (e.g., by offering a configuration to render conditional content all the time). For instance, an HTML user agent might allow users to query each element for access to conditional content supplied for the "alt", "title", and "longdesc" attributes. Or, the user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism).

Techniques:

How the user selects preferred natural language for captions in Real Player

This image shows how users select a natural language preference in the Real Player. This setting, in conjunction with language markup in the presentation, determines what content is rendered.
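
As an illustration of element-by-element access, the following TypeScript sketch (function names are ours) gathers the conditional content that HTML 4 associates with an element through the "alt", "title", and "longdesc" attributes, so that a user agent could alert the user to its existence and expose it on query.

  interface ConditionalContent {
    attribute: string;
    value: string;
  }

  // Collect the conditional content explicitly associated with an element.
  function conditionalContentFor(element: Element): ConditionalContent[] {
    return ["alt", "title", "longdesc"]
      .filter((name) => element.hasAttribute(name))
      .map((name) => ({ attribute: name, value: element.getAttribute(name) ?? "" }));
  }

  // When the global configuration is on, mark images that carry unrendered
  // conditional content so that the user can be alerted and can query them.
  for (const img of Array.from(document.getElementsByTagName("img"))) {
    if (conditionalContentFor(img).length > 0) {
      img.setAttribute("data-has-conditional-content", "true"); // hypothetical cue
    }
  }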


2.4 For content where user input is only possible within a finite time interval controlled by the user agent, allow configuration to make the time interval "infinite". Do this by pausing automatically at the end of each time interval where user input is possible, and resuming automatically after the user has explicitly completed input. In this configuration, alert the user that the session has been paused and indicate which enabled elements are time-sensitive. [Priority 1] Content only. (Checkpoint 2.4)
Note: In this configuration, the user agent may have to pause the presentation more than once if there is more than one opportunity for time-sensitive input. In SMIL 1.0 [SMIL], for example, the "begin", "end", and "dur" attributes synchronize presentation components. The user may explicitly complete input in many different ways (e.g., by following a link that replaces the current time-sensitive resource with a different resource). This checkpoint does not apply when the user agent cannot recognize the time interval in the presentation format, or when the user agent cannot control the timing (e.g., because it is controlled by the server).

Techniques:


2.5 Allow configuration or control so that text transcripts, collated text transcripts, captions, and auditory descriptions are rendered at the same time as the associated audio tracks and visual tracks. [Priority 1] Content only. (Checkpoint 2.5)
Note: This checkpoint is an important special case of checkpoint 2.1.

Techniques:


2.6 Respect synchronization cues during rendering. [Priority 1] Content only. (Checkpoint 2.6)
Note: This checkpoint is an important special case of checkpoint 2.1.

Techniques:


2.7 Allow configuration to generate repair text when the user agent recognizes that the author has failed to provide conditional content that was required by the format specification. The user agent may satisfy this checkpoint by basing the repair text on any of the following available sources of information: URI reference, content type, or element type. [Priority 2] Content only. (Checkpoint 2.7)
Note: Some markup languages (such as HTML 4 [HTML4] and SMIL 1.0 [SMIL]) require the author to provide conditional content for some elements (e.g., the "alt" attribute on the IMG element). Repair text based on URI reference, content type, or element type is sufficient to satisfy the checkpoint, but may not result in the most effective repair. Information that may be recognized as relevant to repair might not be "near" the missing conditional content in the document object. For instance, instead of generating repair text from a simple URI reference, the user agent might look for helpful information near a different instance of the URI reference in the same document object, or might retrieve useful information (e.g., a title) from the resource designated by the URI reference.

Techniques:
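
A minimal TypeScript sketch of repair text generation, assuming HTML content and using the sources the checkpoint allows (URI reference, falling back to element type); the function name is ours:

  // Derive repair text for an IMG element whose required "alt" attribute is
  // missing, based on the last path segment of its URI reference.
  function repairTextFor(img: HTMLImageElement): string {
    const src = img.getAttribute("src");
    if (src) {
      const segments = src.split("/").filter((s) => s.length > 0);
      if (segments.length > 0) {
        return segments[segments.length - 1]; // e.g. "logo.png"
      }
    }
    return "[IMG]"; // fall back to the element type
  }

  // Apply the repair only when the configuration described above is enabled.
  for (const img of Array.from(document.images)) {
    if (!img.hasAttribute("alt")) {
      img.setAttribute("alt", repairTextFor(img));
    }
  }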


2.8 Allow configuration so that when the user agent recognizes that conditional content required by the format specification is present but empty (e.g., the empty string), the user agent either (1) generates no repair text, or (2) generates repair text as described in checkpoint 2.7. [Priority 3] Content only. (Checkpoint 2.8)
Note: In some authoring scenarios, an empty string of text (e.g., "alt=''") may be considered to be an appropriate text equivalent (for instance, when some non-text content has no other function than pure decoration, or an image is part of a "mosaic" of several images and doesn't make sense out of the mosaic). Please refer to the Web Content Accessibility Guidelines 1.0 [WCAG10] for more information about text equivalents.

Techniques:


2.9 Allow configuration to render all conditional content automatically. Provide access to this content according to format specifications or where unspecified, by applying one of the following techniques described in checkpoint 2.3: 1a, 2a, or 1b. [Priority 3] Content only. (Checkpoint 2.9)
Note: The user agent satisfies this checkpoint if it satisfies checkpoint 2.3 by applying techniques 1a, 2a, or 1b. For instance, an HTML user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism).

Techniques:

None.

2.10 Allow configuration not to render content in unsupported natural languages. Indicate to the user in context that author-supplied content has not been rendered. [Priority 3] Content only. (Checkpoint 2.10)
Note: For example, use a text substitute or accessible graphical icon to indicate that content in a particular language has not been rendered. This checkpoint does not require the user agent to allow different configurations for different natural languages.

Techniques:


Guideline 3. Allow configuration not to render some content that may reduce accessibility.

In addition to the techniques below, refer also to the section on user control of style.

Checkpoints

3.1 Allow configuration not to render background images. In this configuration, provide an option to alert the user when a background image is available (but has not been rendered). [Priority 1] Content only. (Checkpoint 3.1)
Note: This checkpoint only requires control of background images for "two-layered renderings", i.e., one rendered background image with all other content rendered "above it". When background images are not rendered, user agents should render a solid background color instead (see checkpoint 4.3). In this configuration, the user agent is not required to retrieve background images from the Web.

Techniques:


3.2 Allow configuration not to render audio, video, or animated images except on explicit request from the user. In this configuration, provide an option to render a placeholder in context for each unrendered source of audio, video, or animated image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 1] Content only. (Checkpoint 3.2)
Note: This checkpoint requires configuration for content rendered without any user interaction (including content rendered on load or as the result of a script), as well as content rendered as the result of user interaction that is not an explicit request (e.g., when the user activates a link). When configured not to render content except on explicit user request, the user agent is not required to retrieve the audio, video, or animated image from the Web until requested by the user. See also checkpoint 3.8, checkpoint 4.5, checkpoint 4.9, and checkpoint 4.10.

Techniques:
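
The sketch below (TypeScript, using the modern HTMLMediaElement API rather than a 2001-era plug-in interface) illustrates the placeholder idea for video: the source is not retrieved until the user explicitly requests it, and the placeholder gives access to the author-supplied content.

  // Replace a video with a placeholder control; retrieve and play the media
  // only on explicit user request.
  function installPlaceholder(video: HTMLVideoElement): void {
    const source = video.getAttribute("src") ?? "";
    video.removeAttribute("src");  // do not retrieve the resource yet
    video.preload = "none";

    const placeholder = document.createElement("button");
    placeholder.textContent = "Play video: " + (video.getAttribute("title") ?? source);
    placeholder.addEventListener("click", () => {
      video.setAttribute("src", source); // explicit request: restore and play
      void video.play();
      placeholder.remove();
    });
    video.parentNode?.insertBefore(placeholder, video);
  }

  for (const video of Array.from(document.querySelectorAll("video"))) {
    installPlaceholder(video);
  }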


3.3 Allow configuration to render animated or blinking text as motionless, unblinking text. [Priority 1] Content only. (Checkpoint 3.3)
Note: A "stock quote ticker" is an example of animated text. This checkpoint does not apply for blinking and animation effects that are caused by mechanisms that the user agent cannot recognize. This checkpoint requires configuration because blinking effects may be disorienting to some users but useful to others, for example users who are deaf or hard of hearing.

Techniques:
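
One possible approach in a CSS-capable user agent is a user-level style rule that suppresses animation; the TypeScript sketch below injects such a rule. This is a modern approximation: a 2001-era user agent would instead ignore 'text-decoration: blink' and stop its own animation timers directly.

  // Configuration handler: render animated or blinking text as motionless text.
  function disableTextAnimation(doc: Document): void {
    const style = doc.createElement("style");
    // Suppress CSS-driven animation; the user agent itself would additionally
    // ignore the 'blink' value of 'text-decoration' and stop scripted movement
    // it can recognize.
    style.textContent = "* { animation: none !important; transition: none !important; }";
    doc.head.appendChild(style);
  }

  disableTextAnimation(document);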


3.4 Allow configuration not to execute any executable content (e.g., scripts and applets). In this configuration, provide an option to alert the user when executable content is available (but has not been executed). [Priority 1] Content only. (Checkpoint 3.4)
Note: Scripts and applets may provide very useful functionality, not all of which causes accessibility problems. Developers should not consider that the user's ability to turn off scripts is an effective way to improve content accessibility; turning off scripts means losing the benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off scripts as a last resort.

Techniques:


3.5 Allow configuration so that client-side content refreshes (i.e., those initiated by the user agent, not the server) do not change content except on explicit user request. Allow the user to request the new content on demand (e.g., by following a link or confirming a prompt). Alert the user, according to the schedule specified by the author, whenever fresh content is available (to be obtained on explicit user request). [Priority 1] Content only. (Checkpoint 3.5)

Techniques:


3.6 Allow configuration so that a "client-side redirect" (i.e., one initiated by the user agent, not the server) does not change content except on explicit user request. Allow the user to access the new content on demand (e.g., by following a link or confirming a prompt). The user agent is not required to provide these functionalities for client-side redirects that occur instantaneously (i.e., when there is no delay before the new content is retrieved). [Priority 2] Content only. (Checkpoint 3.6)

Techniques:


3.7 Allow configuration not to render images. In this configuration, provide an option to render a placeholder in context for each unrendered image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 2] Content only. (Checkpoint 3.7)
Note: See also checkpoint 3.8.

Techniques:


3.8 Once the user has viewed the original author-supplied content associated with a placeholder, allow the user to turn off the rendering of the author-supplied content. [Priority 3] Content only. (Checkpoint 3.8)
Note: For example, if the user agent substitutes the author-supplied content for the placeholder in context, allow the user to "toggle" between the placeholder and the associated content. Or, if the user agent renders the author-supplied content in a separate viewport, allow the user to close that viewport. See checkpoint 3.2 and checkpoint 3.7.

Techniques:

None.

Guideline 4. Ensure user control of rendering.

In addition to the techniques below, refer also to the section on user control of style.

Checkpoints for visually rendered text

4.1 Allow global configuration and control over the reference size of rendered text, with an option to override reference sizes specified by the author or user agent defaults. Allow the user to choose from among the full range of font sizes supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.1)
Note: The reference size of rendered text corresponds to the default value of the CSS2 'font-size' property, which is 'medium' (refer to CSS2 [CSS2], section 15.2.4). For example, in HTML, this might be paragraph text. The default reference size of rendered text may vary among user agents. User agents may offer different mechanisms to allow control of the size of rendered text (e.g., font size control, zoom, magnification, etc.). Refer, for example, to the Scalable Vector Graphics specification [SVG] for information about scalable rendering.

Techniques:


4.2 Allow global configuration of the font family of all rendered text, with an option to override font families specified by the author or by user agent defaults. Allow the user to choose from among the full range of font families supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.2)
Note: For example, allow the user to specify that all text is to be rendered in a particular sans-serif font family. For text that cannot be rendered properly using the user's preferred font family, the user agent may substitute an alternative font family.

Techniques:


4.3 Allow global configuration of the foreground and background color of all rendered text, with an option to override foreground and background colors specified by the author or user agent defaults. Allow the user to choose from among the full range of colors supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.3)
Note: User configuration of foreground and background colors may inadvertently lead to the inability to distinguish ordinary text from selected text, focused text, etc. See checkpoint 10.3 for more information about highlight styles.

Techniques:


Checkpoints for multimedia presentations and other presentations that change continuously over time

4.4 Allow the user to slow the presentation rate of audio and animations (including video and animated images). For a visual track, provide at least one setting between 40% and 60% of the original speed. For a prerecorded audio track including audio-only presentations, provide at least one setting between 75% and 80% of the original speed. When the user agent allows the user to slow the visual track of a synchronized multimedia presentation to between 100% and 80% of its original speed, synchronize the visual and audio tracks. Below 80%, the user agent is not required to render the audio track. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. [Priority 1] Content only. (Checkpoint 4.4)
Note: Purely stylistic effects include background sounds, decorative animated images, and effects caused by style sheets. The style exception of this checkpoint is based on the assumption that authors have satisfied the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10] not to convey information through style alone (e.g., through color alone or style sheets alone). See checkpoint 2.6 and checkpoint 4.7.

Techniques:
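
For formats played through the modern HTMLMediaElement API, playback rate control maps directly onto this requirement; the TypeScript sketch below (function name ours) slows a visual track and stops rendering the audio track below 80%, as the checkpoint permits.

  // Let the user slow a media element to a chosen fraction of its original speed.
  function setPlaybackSpeed(media: HTMLMediaElement, fraction: number): void {
    media.playbackRate = fraction; // e.g. 0.5 = 50% of the original speed
    media.muted = fraction < 0.8;  // audio rendering is not required below 80%
  }

  // Example: one setting between 40% and 60% for a visual track.
  const video = document.querySelector("video");
  if (video) {
    setPlaybackSpeed(video, 0.5);
  }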


4.5 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) that last three or more seconds at their default playback rate. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. The user agent is not required to play synchronized audio during fast advance or reverse of animations (though doing so may help orient the user). [Priority 1] Content only. (Checkpoint 4.5)
Note: See checkpoint 4.4 for more information about the exception for purely stylistic effects. This checkpoint applies to content that is either rendered automatically or on request from the user. The requirement of this checkpoint is for control of each source of audio and animation that is recognized as distinct. Respect synchronization cues per checkpoint 2.6.

Techniques:


4.6 For graphical viewports, allow the user to position text transcripts, collated text transcripts, and captions in the viewport. Allow the user to choose from among at least the range of positions available to the author (e.g., the range of positions allowed by the markup or style language). [Priority 1] Content only. (Checkpoint 4.6)

Techniques:


4.7 Allow the user to slow the presentation rate of audio and animations (including video and animated images) not covered by checkpoint 4.4. The same speed percentage requirements of checkpoint 4.4 apply. [Priority 2] Content only. (Checkpoint 4.7)
Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.4 for all audio and animations.

Techniques:

See the techniques for checkpoint 4.4.

4.8 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) not covered by checkpoint 4.5. [Priority 2] Content only. (Checkpoint 4.8)
Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.5 for all audio and animations.

Techniques:

See the techniques for checkpoint 4.5.

Checkpoints for audio volume control

4.9 Allow global configuration and control of the volume of all audio, with an option to override audio volumes specified by the author or user agent defaults. The user must be able to choose zero volume (i.e., silent). [Priority 1] Content only. (Checkpoint 4.9)
Note: User agents should allow configuration and control of volume through available operating environment controls.

Techniques:


4.10 Allow independent control of the volumes of distinct audio sources synchronized to play simultaneously. [Priority 1] Content only. (Checkpoint 4.10)
Note: Sounds that play at different times are distinguishable and therefore independent control of their volumes is not required by this checkpoint (since volume control required by checkpoint 4.9 suffices). The user agent may satisfy this checkpoint by allowing the user to control independently the volumes of all distinct audio sources. The user control required by this checkpoint includes the ability to override author-specified volumes for the relevant sources of audio. See also checkpoint 4.12.

Techniques:


Checkpoints for synthesized speech

See also techniques for synthesized speech.

4.11 Allow configuration and control of the synthesized speech rate, according to the full range offered by the speech synthesizer. [Priority 1] Content only. (Checkpoint 4.11)
Note: The range of speech rates offered by the speech synthesizer may depend on natural language.

Techniques:
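
Where the speech synthesizer is reached through the modern Web Speech API (not available when this document was written), rate control looks like the TypeScript sketch below; the function name is ours, and 0.1-10 is the range defined for SpeechSynthesisUtterance.rate.

  // Speak text at the rate chosen by the user, clamped to the synthesizer's range.
  function speakAtUserRate(text: string, rate: number): void {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = Math.min(10, Math.max(0.1, rate)); // 1 is the default rate
    window.speechSynthesis.speak(utterance);
  }

  speakAtUserRate("Checkpoint 4.11 example", 2); // twice the default speaking rate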


4.12 Allow control of the synthesized speech volume, independent of other sources of audio. [Priority 1] Content only. (Checkpoint 4.12)
Note: The user control required by this checkpoint includes the ability to override author-specified speech volume. See also checkpoint 4.10.

Techniques:


4.13 Allow configuration of speech characteristics according to the full range of values offered by the speech synthesizer. [Priority 1] Content only. (Checkpoint 4.13)
Note: Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options that group several characteristics. Some typical options one might encounter include: "adult male voice", "female child voice", "robot voice", "pitch", "stress", etc. Ranges for values may vary among speech synthesizers.

Techniques:

ViaVoice control panel for configuration of voice characteristics

This image shows how ViaVoice [VIAVOICE] allows users to configure voice characteristics of the speech synthesizer.


4.14 Allow configuration of the following speech characteristics: pitch, pitch range, stress, richness. Pitch refers to the average frequency of the speaking voice. Pitch range specifies a variation in average frequency. Stress refers to the height of "local peaks" in the intonation contour of the voice. Richness refers to the richness or brightness of the voice. [Priority 2] Content only. (Checkpoint 4.14)
Note: This checkpoint is more specific than checkpoint 4.13: it requires support for the voice characteristics listed. Definitions for these characteristics are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions. Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options distinguished by "gender", "age", "accent", etc. Ranges of values may vary among speech synthesizers.
4.15 Provide support for user-defined extensions to the speech dictionary, as well as the following functionalities: spell-out (spell text one character at a time or according to language-dependent pronunciation rules), speak-numeral (speak a numeral as individual digits or as a full number), and speak-punctuation (speak punctuation literally or render as natural pauses). [Priority 2] Content only. (Checkpoint 4.15)
Note: Definitions for the functionalities listed are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions.

Techniques:

ViaVoice control panel for editing the user dictionary

This image shows how ViaVoice [VIAVOICE] allows users to add entries to the user's personal dictionary.


Checkpoints related to style sheets

4.16 For user agents that support style sheets, allow the user to choose from (and apply) available author and user style sheets or to ignore them. [Priority 1] Both content and user agent. (Checkpoint 4.16)
Note: By definition, the user agent's default style sheet is always present, but may be overridden by author or user styles. Developers should not consider that the user's ability to turn off author and user style sheets is an effective way to improve content accessibility; turning off style sheet support means losing the many benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off author and user style sheets as a last resort.

Techniques:
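
The DOM StyleSheet interface already carries a "disabled" flag, so a minimal sketch of the "ignore author style sheets" configuration (TypeScript, function name ours) is:

  // Ignore or re-apply the author style sheets associated with a document.
  // User style sheets and the user agent default style sheet are managed
  // separately by the user agent.
  function setAuthorStylesEnabled(doc: Document, enabled: boolean): void {
    for (const sheet of Array.from(doc.styleSheets)) {
      sheet.disabled = !enabled;
    }
  }

  setAuthorStylesEnabled(document, false); // "ignore author style sheets"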


Guideline 5. Ensure user control of user interface behavior.

Checkpoints

5.1 Allow configuration so that the current focus does not move automatically to viewports that open without explicit user request. Configuration is not required if the current focus can only ever be moved by explicit user request. [Priority 2] Both content and user agent. (Checkpoint 5.1)
Note: For example, allow configuration so that neither the current focus nor the pointing device jump automatically to a viewport that opens without explicit user request.

Techniques:


5.2 For graphical user interfaces, allow configuration so that the viewport with the current focus remains "on top" of all other viewports with which it overlaps. [Priority 2] Both content and user agent. (Checkpoint 5.2)

Techniques:


5.3 Allow configuration so that viewports only open on explicit user request. In this configuration, instead of opening a viewport automatically, alert the user and allow the user to open it on demand (e.g., by following a link or confirming a prompt). Allow the user to close viewports. If a viewport (e.g., a frame set) contains other viewports, these requirements only apply to the outermost container viewport. [Priority 2] Both content and user agent. (Checkpoint 5.3)
Note: User creation of a new viewport (e.g., empty or with a new resource loaded) through the user agent's user interface constitutes an explicit user request. See also checkpoint 5.1 (for control over changes of focus when a viewport opens) and checkpoint 6.5 (for programmatic alert of changes to the user interface).

Techniques:


5.4 Allow configuration to prompt the user to confirm (or cancel) any form submission that is not caused by an explicit user request to activate a form submit control. [Priority 2] Content only. (Checkpoint 5.4)
Note: For example, do not submit a form automatically when a menu option is selected, when all fields of a form have been filled out, or when a "mouseover" or "change" event occurs. The user agent may satisfy this checkpoint by prompting the user to confirm all form submissions.

Techniques:


5.5 Allow configuration to prompt the user to confirm (or cancel) any payment that results from activation of a fee link. [Priority 2] Content only. (Checkpoint 5.5)

Techniques:


5.6 Allow configuration to prompt the user to confirm (or cancel) closing any viewport that starts to close without explicit user request. [Priority 3] Both content and user agent. (Checkpoint 5.6)

Techniques:


Guideline 6. Implement standard application programming interfaces.

Checkpoints

6.1 Provide programmatic read access to HTML and XML content by conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the interfaces they define: (1) the Core module for HTML; (2) the Core and XML modules for XML. [Priority 1] Content only. (Checkpoint 6.1)
Note: Please refer to the "Document Object Model (DOM) Level 2 Core Specification" [DOM2CORE] for information about HTML and XML versions covered.

Techniques:


6.2 If the user can modify HTML and XML content through the user interface, provide the same functionality programmatically by conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the interfaces they define: (1) the Core module for HTML; (2) the Core and XML modules for XML. [Priority 1] Content only. (Checkpoint 6.2)
Note: For example, if the user interface allows users to complete HTML forms, this must also be possible through the required DOM APIs. Please refer to the "Document Object Model (DOM) Level 2 Core Specification" [DOM2CORE] for information about HTML and XML versions covered.

Techniques:

Giving assistive technologies write access through the DOM allows them to:

The ability to write to the DOM can improve performance for the assistive technology. For example, if an assistive technology has already traversed a portion of the document object and knows that a section (e.g., a style element) could not be rendered, it can mark this section "to be skipped".

Another benefit is to add information necessary for audio rendering but that would not be stored directly in the DOM during parsing. Consider an ordered list. The Internet Explorer 5.5 [IE-WIN] document object model for HTML tells you that list elements are part of an ordered list but does not tell you each list element's number. The assistive technology can add the list element number to each list entry in its attribute list, for audio rendering. Furthermore, the assistive technology component that added the numeric information can mark that section as having been traversed and updated to prevent having to recompute and store the numeric information on the next pass through by the user.
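
A minimal TypeScript sketch of that technique, with hypothetical attribute names: an assistive technology with write access annotates each item of an ordered list with its computed number and marks the list as already processed so that the work is not repeated on a later pass.

  function annotateOrderedList(list: HTMLOListElement): void {
    if (list.getAttribute("data-at-processed") === "true") {
      return; // already traversed and updated
    }
    const items = Array.from(list.children).filter((el) => el.tagName === "LI");
    items.forEach((item, index) => {
      item.setAttribute("data-at-item-number", String(index + 1)); // "1", "2", ...
    });
    list.setAttribute("data-at-processed", "true");
  }

  for (const list of Array.from(document.getElementsByTagName("ol"))) {
    annotateOrderedList(list);
  }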

See also techniques for checkpoint 6.1.


6.3 For markup languages other than HTML and XML, provide programmatic access to content using standard APIs (e.g., platform-independent APIs and standard APIs for the operating environment). If standard APIs do not exist, provide programmatic access through publicly documented APIs. [Priority 1] Content only. (Checkpoint 6.3)
Note: This checkpoint addresses content not covered by checkpoint 6.1 and checkpoint 6.2.

Techniques:


6.4 Provide programmatic read and write access to user agent user interface controls using standard APIs. If standard APIs do not exist, provide programmatic access through publicly documented APIs. [Priority 1] User agent only. (Checkpoint 6.4)
Note: Per checkpoint 6.6, provide programmatic access through standard APIs (e.g., platform-independent APIs such as the W3C DOM; standard APIs defined for a specific operating system; and conventions for programming languages, plug-ins, virtual machine environments, etc.). This checkpoint requires user agents to provide programmatic access even in the absence of a standard API for doing so.

Techniques:


6.5 Using standard APIs, provide programmatic alert of changes to content, user interface controls, selection, content focus, and user interface focus. If standard APIs do not exist, provide programmatic alert through publicly documented APIs. [Priority 1] Both content and user agent. (Checkpoint 6.5)
Note: For instance, when user interaction in one frame causes automatic changes to content in another, provide programmatic alert through standard APIs. Use the standard APIs required by the checkpoints of guideline 6.

Techniques:


6.6 Implement standard accessibility APIs (e.g., of the operating environment). Where these APIs do not enable the user agent to satisfy the requirements of this document, use the standard input and output APIs of the operating environment. [Priority 1] Both content and user agent. (Checkpoint 6.6)
Note: Accessibility APIs enable assistive technologies to monitor input and output events. As part of satisfying this checkpoint, the user agent needs to ensure that text content is available as text through these APIs (and not, for example, as a series of strokes drawn on the screen).

Techniques:


6.7 Implement the operating environment's standard APIs for the keyboard. If standard APIs for the keyboard do not exist, implement publicly documented APIs for the keyboard. [Priority 1] User agent only. (Checkpoint 6.7)
Note: An operating environment may define more than one standard API for the keyboard. For instance, for Japanese and Chinese, input may be processed in two stages, with an API for each.

Techniques:


6.8 For an API implemented to satisfy requirements of this document, support the character encodings required for that API. [Priority 1] Both content and user agent. (Checkpoint 6.8)
Note: Support for character encodings is important so that text is not "broken" when communicated to assistive technologies. For example, the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5 requires that the DOMString type be encoded using UTF-16. This checkpoint is an important special case of the other API requirements of this document.

Techniques:


6.9 For user agents that implement Cascading Style Sheets (CSS), provide programmatic access to those style sheets by conforming to the CSS module of the W3C Document Object Model (DOM) Level 2 Style Specification [DOM2STYLE] and exporting the interfaces it defines. [Priority 2] Content only. (Checkpoint 6.9)
Note: As of the publication of this document, Cascading Style Sheets (CSS) are defined by CSS Level 1 [CSS1] and CSS Level 2 [CSS2]. Please refer to the "Document Object Model (DOM) Level 2 Style Specification" [DOM2STYLE] for information about CSS versions covered.

Techniques:
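
The TypeScript sketch below reads style information through the interfaces this checkpoint requires (styleSheets, cssRules, CSSStyleRule); the function name and the choice of the 'color' property are illustrative.

  // Walk the document's style sheets and report every rule that sets 'color'.
  function listColorRules(doc: Document): void {
    for (const sheet of Array.from(doc.styleSheets)) {
      let rules: CSSRuleList;
      try {
        rules = sheet.cssRules; // may throw for style sheets from another origin
      } catch {
        continue;
      }
      for (const rule of Array.from(rules)) {
        if (rule instanceof CSSStyleRule && rule.style.getPropertyValue("color")) {
          console.log(rule.selectorText, rule.style.getPropertyValue("color"));
        }
      }
    }
  }

  listColorRules(document);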


6.10 Ensure that programmatic exchanges proceed in a timely manner. [Priority 2] Both content and user agent. (Checkpoint 6.10)
Note: For example, the programmatic exchange of information required by other checkpoints in this document should be efficient enough to prevent information loss, a risk when changes to content or user interface occur more quickly than the communication of those changes. The techniques for this checkpoint explain how developers can reduce communication delays. This will help ensure that assistive technologies have timely access to the document object model and other information that is important for providing access.

Techniques:


Guideline 7. Observe operating environment conventions.

Checkpoints

7.1 Follow operating environment conventions that benefit accessibility when implementing the selection, content focus, and user interface focus. [Priority 1] User agent only. (Checkpoint 7.1)
Note: This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 9.1.

Techniques:


7.2 Ensure that default input configurations do not interfere with operating environment accessibility conventions. [Priority 1] User agent only. (Checkpoint 7.2)
Note: In particular, default configurations should not interfere with operating environment conventions for keyboard accessibility. See also checkpoint 11.5.

Techniques:


7.3 Follow operating environment conventions that benefit accessibility. In particular, follow conventions that benefit accessibility for user interface design, keyboard configuration, product installation, and documentation. [Priority 2] User agent only. (Checkpoint 7.3)
Note: Operating environment conventions that benefit accessibility are those described in this document and in platform-specific accessibility guidelines.

Techniques:


7.4 Follow operating environment conventions to indicate the input configuration. [Priority 2] User agent only. (Checkpoint 7.4)
Note: For example, in some operating environments, developers may specify which command sequence will activate a functionality so that the standard user interface components display that binding. For instance, if a functionality is available from a menu, the letter of the activating key may be underlined in the menu. This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 11.5.

Techniques:


Guideline 8. Implement specifications that benefit accessibility.

Checkpoints

8.1 Implement the accessibility features of all implemented specifications (markup languages, style sheet languages, metadata languages, graphics formats, etc.). The accessibility features of a specification are those identified as such and those that satisfy all of the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10]. [Priority 1] Content only. (Checkpoint 8.1)
Note: This checkpoint applies to both W3C-developed and non-W3C specifications.

Techniques:


8.2 Use and conform to either (1) W3C Recommendations when they are available and appropriate for a task, or (2) non-W3C specifications that enable the creation of content that conforms to the Web Content Accessibility Guidelines 1.0 [WCAG10] at any conformance level. [Priority 2] Content only. (Checkpoint 8.2)
Note: For instance, for markup, the user agent may conform to HTML 4 [HTML4], XHTML 1.0 [XHTML10], or XML 1.0 [XML]. For style sheets, the user agent may conform to CSS ([CSS1], [CSS2]). For mathematics, the user agent may conform to MathML 2.0 [MATHML20]. For synchronized multimedia, the user agent may conform to SMIL 1.0 [SMIL]. A specification is considered "available" if it is published (e.g., as a W3C Recommendation) in time for integration into a user agent's development cycle.

Techniques:


Guideline 9. Provide navigation mechanisms.

Checkpoints

9.1 Allow the user to make the selection and focus of each viewport (including frames) the current selection and current focus, respectively. [Priority 1] User agent only. (Checkpoint 9.1)
Note: For example, when all frames of a frameset are displayed side-by-side, allow the user to move the focus among them with the keyboard.

Techniques:


9.2 Allow the user to move the content focus to any enabled element in the viewport. If the author has not specified a navigation order, allow at least forward sequential navigation to each element, in document order. The user agent may also include disabled elements in the navigation order. [Priority 1] Content only. (Checkpoint 9.2)
Note: In addition to forward sequential navigation, the user agent should also allow reverse sequential navigation. This checkpoint is an important special case of checkpoint 9.8.

Techniques:

Sequential navigation techniques

JAWS for Windows Links List view

This image shows how JAWS for Windows [JFW] allows users to navigate to links in a document and activate them independently. Users may also configure the user agent to navigate visited links, unvisited links, or both. Users may also change the sequential navigation order, sorting links alphabetically or leaving them in the logical tabbing order. The focus in the links view follows the focus in the main view.
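
A minimal TypeScript sketch of forward sequential navigation, assuming an HTML document and ignoring any author-specified navigation order (the selector list is a simplification):

  const ENABLED_SELECTOR =
    "a[href], button:not([disabled]), input:not([disabled]), " +
    "select:not([disabled]), textarea:not([disabled]), [tabindex]";

  // Find the next enabled element after the current one, in document order.
  function nextEnabledElement(current: Element | null): HTMLElement | null {
    const elements = Array.from(document.querySelectorAll(ENABLED_SELECTOR)) as HTMLElement[];
    if (elements.length === 0) {
      return null;
    }
    const index = current ? elements.indexOf(current as HTMLElement) : -1;
    return elements[(index + 1) % elements.length]; // wrap at the end of content
  }

  // Move the content focus forward by one enabled element.
  function moveContentFocusForward(): void {
    nextEnabledElement(document.activeElement)?.focus();
  }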

Direct navigation techniques


9.3 For each state in a viewport's browsing history, maintain information about the point of regard, content focus, user interface focus, and selection. When the user returns to any state in the viewport history, restore the saved values for all four of these state variables. [Priority 1] User agent only. (Checkpoint 9.3)
Note: For example, when the user uses the "back" functionality, restore the four state variables.

Techniques:


9.4 For the element with content focus, make available the list of input device event handlers explicitly associated with the element. [Priority 2] Content only. (Checkpoint 9.4)
Note: For example, allow the user to query the element with content focus for the list of input device event handlers, or add them directly to the serial navigation order. See checkpoint 1.2 for information about activation of event handlers associated with the element with focus.

Techniques:


9.5 Allow configuration so that moving the content focus to an enabled element does not automatically activate any explicitly associated input device event handlers. [Priority 2] Content only. (Checkpoint 9.5)
Note: In this configuration, user agents should still apply any stylistic changes (e.g., highlighting) that may occur when there is a change in content focus.

Techniques:

None.

9.6 Allow the user to move the content focus to any enabled element in the viewport. If the author has not specified a navigation order, allow at least forward and reverse sequential navigation to each element, in document order. The user agent must not include disabled elements in the navigation order. [Priority 2] Content only. (Checkpoint 9.6)
Note: This checkpoint is a special case of checkpoint 9.2.

Techniques:


9.7 Allow the user to search within rendered text content for a sequence of characters from the document character set. Allow the user to start a forward search (in document order) from any selected or focused location in content. When there is a match (1) move the viewport so that the matched text content is within it, and (2) allow the user to search for the next instance of the text from the location of the match. Alert the user when there is no match, when the search reaches the end of content, and prior to any wrapping. Provide a case-insensitive search option for text in scripts (i.e., writing systems) where case is significant. [Priority 2] Content only. (Checkpoint 9.7)
Note: If the user has not indicated a start position for the search, the search should start from the beginning of content. Use operating environment conventions for indicating the result of a search (e.g., selection or content focus). A wrapping search is one that restarts automatically at the beginning of content once the end of content has been reached.

Techniques:
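
A minimal TypeScript sketch of the search itself, using a DOM TreeWalker; wrapping, searching from the current selection or focus, matches that span multiple text nodes, and the "no match" alert are omitted for brevity.

  // Case-insensitive forward search through rendered text content.
  function findText(doc: Document, query: string): Range | null {
    const walker = doc.createTreeWalker(doc.body, NodeFilter.SHOW_TEXT);
    const needle = query.toLowerCase();
    let node: Node | null;
    while ((node = walker.nextNode())) {
      const text = node.textContent ?? "";
      const start = text.toLowerCase().indexOf(needle);
      if (start >= 0) {
        const range = doc.createRange();
        range.setStart(node, start);
        range.setEnd(node, start + query.length);
        node.parentElement?.scrollIntoView(); // move the viewport to the match
        return range; // caller highlights the match and remembers the position
      }
    }
    return null; // caller alerts the user that there is no match
  }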


9.8 Allow the user to navigate efficiently to and among important structural elements. Allow forward and backward sequential navigation to important structural elements. [Priority 2] Content only. (Checkpoint 9.8)
Note: This specification intentionally does not identify which "important elements" must be navigable as this will vary according to markup language. What constitutes "efficient navigation" may depend on a number of factors as well, including the "shape" of content (e.g., serial navigation of long lists is not efficient) and desired granularity (e.g., among tables, then among the cells of a given table).

Techniques:

User agents should construct the navigation view with the goal of breaking content into sensible pieces according to the author's design. In most cases, user agents should not break down content into individual elements for navigation; element-by-element navigation of the document object does not meet the goal of facilitating navigation to important pieces of content. (The navigation view may also be an expanding/contracting outline view; see checkpoint 10.5.)

Instead, user agents are expected to construct the navigation view based on markup. For those languages with known (e.g., by specification, schema, metadata, etc.) conventions for identifying important components, user agents should construct the navigation tree from those components, allowing users to navigate up and down the document tree, and forward and backward among siblings. At the same time, allow users to shrink and expand portions of the document tree. For instance, if a subtree consists of a long series of links, this will pose problems for users with serial access to content. At any level in the document tree (for forward and backward navigation of siblings), limit the number of siblings to between five and ten. Break longer lists down into structured pieces so that users can access content efficiently, decide whether they want to explore it in detail, or skip it and move on.

Tables and forms illustrate the utility of a recursive navigation mechanism. The user should be able to navigate to tables, then change "scope" and navigate within the cells of that table. Nested tables (a table within the cell of another table) fit nicely within this scheme. However, the headers of a nested table may provide important context for the cells of the same row(s) or column(s) containing the nested table. The same ideas apply to forms: users should be able to navigate to a form, then among the controls within that form.

User agents should allow users to:

  1. Navigate to a piece of content that the author has identified as important according to the markup language specification and conventional usage. In HTML, for example, this includes headings, forms, tables, navigation mechanisms, and lists.
  2. Navigate past that piece of content (i.e., avoid the details of that component).
  3. Navigate into that piece of content (i.e., choose to view the details of that component).
  4. Change the navigation view as they go, expanding and contracting portions of content that they wish to examine or ignore. This will speed up navigation and facilitate orientation at the same time.
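
A minimal TypeScript sketch of forward and backward sequential navigation among important structural elements, using the HTML examples from item 1 above; the set of element types would be configurable per checkpoint 9.9.

  const IMPORTANT_SELECTOR = "h1, h2, h3, h4, h5, h6, table, form, ul, ol, dl";

  function importantElements(doc: Document): HTMLElement[] {
    return Array.from(doc.querySelectorAll(IMPORTANT_SELECTOR)) as HTMLElement[];
  }

  // Move to the next (or previous) important element and bring it into the viewport.
  function navigateStructure(current: HTMLElement | null, forward: boolean): HTMLElement | null {
    const elements = importantElements(document);
    if (elements.length === 0) {
      return null;
    }
    const index = current ? elements.indexOf(current) : -1;
    const nextIndex = forward
      ? Math.min(index + 1, elements.length - 1)
      : Math.max(index - 1, 0);
    const target = elements[nextIndex];
    target.scrollIntoView();
    return target;
  }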

9.9 Allow configuration and control of the set of important elements required by checkpoint 9.8 and checkpoint 10.5. Allow the user to include and exclude element types in the set of elements. [Priority 3] Content only. (Checkpoint 9.9)
Note: For example, allow the user to navigate only paragraphs, or only headings and paragraphs, etc. See also checkpoint 6.4.

Techniques:


Guideline 10. Orient the user.

Checkpoints

10.1 Make available to the user the purpose of each table and the relationships among the table cells and headers. [Priority 1] Content only. (Checkpoint 10.1)
Note: This checkpoint refers only to table information that the user can recognize. Depending on the table, some techniques may be more efficient than others for conveying data relationships. For many tables, user agents rendering in two dimensions may satisfy this checkpoint by rendering a table as a grid and by ensuring that users can find headers associated with cells. However, for large tables or small viewports, allowing the user to query cells for information about related headers may improve access. This checkpoint is an important special case of checkpoint 2.1.

Techniques:

Internet Explorer context menu item to display table cell header information

This image shows how Internet Explorer [IE-WIN] provides cell header information through the context menu.
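
The TypeScript sketch below shows one way to find the header cells associated with a given data cell, honoring an explicit "headers" attribute and otherwise falling back to same-row and same-column TH cells; it is a simplification of the full HTML 4 header-association algorithm.

  function headersForCell(cell: HTMLTableCellElement): HTMLTableCellElement[] {
    // Explicit association: the "headers" attribute lists the ids of header cells.
    const explicit = cell.getAttribute("headers");
    if (explicit) {
      return explicit
        .split(/\s+/)
        .map((id) => cell.ownerDocument?.getElementById(id) ?? null)
        .filter((el): el is HTMLTableCellElement =>
          el !== null && (el.tagName === "TH" || el.tagName === "TD"));
    }
    // Otherwise, fall back to TH cells in the same row and the same column.
    const table = cell.closest("table");
    const row = cell.closest("tr");
    if (!table || !row) {
      return [];
    }
    const sameRow = Array.from(row.querySelectorAll("th"));
    const sameColumn = Array.from(table.querySelectorAll("tr"))
      .map((r) => r.cells.item(cell.cellIndex))
      .filter((c): c is HTMLTableCellElement => c !== null && c.tagName === "TH");
    return [...sameRow, ...sameColumn];
  }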


10.2 Provide a mechanism for highlighting the selection and content focus. Allow the user to configure the highlight styles. The highlight mechanism must not rely on color alone. For graphical viewports, if the highlight mechanism involves colors or text decorations, allow the user to choose from among the full range of colors or text decorations supported by the operating environment. [Priority 1] Content only. (Checkpoint 10.2)
Note: Examples of highlight mechanisms include foreground and background color variations, underlining, distinctive voice pitches, rectangular boxes, etc. Because the selection and focus change frequently, user agents should not highlight them using mechanisms (e.g., font size variations) that cause content to reflow as this may disorient the user. See also checkpoint 7.1.

Techniques:


10.3 Ensure that all of the default highlight styles for the selection, content focus, enabled elements, recently visited links, and fee links (1) do not rely on color alone, and (2) differ from each other, and not by color alone. [Priority 1] Content only. (Checkpoint 10.3)
Note: For instance, by default a graphical user agent may present the selection using color and a dotted outline, the focus using a solid outline, enabled elements with a blue underline, recently visited links with a dotted purple underline, and fee links using a special icon or flag to draw the user's attention.

Techniques:


10.4 Provide a mechanism for highlighting all enabled elements, recently visited links, and fee links. Allow the user to configure the highlight styles. The highlight mechanism must not rely on color alone. For graphical viewports, if the highlight mechanism involves colors, fonts, or text decorations, allow the user to choose from among the full range of colors, fonts, or text decorations supported by the operating environment. For an image map, the user agent must highlight the image map as a whole and should allow configuration to highlight each enabled region. [Priority 2] Content only. (Checkpoint 10.4)
Note: Examples of highlight mechanisms include foreground and background color variations, font variations, underlining, distinctive voice pitches, rectangular boxes, etc.

Techniques:

The Opera dialog box for configuring the rendering of links

This image shows how Opera [OPERA] allows the user to configure link rendering, including the identification of visited links.


10.5 Make available to the user an "outline" view of content, composed of labels for important structural elements (e.g., heading text, table titles, form titles, etc.). [Priority 2] Content only. (Checkpoint 10.5)
Note: This checkpoint is meant to provide the user with a simplified view of content (e.g., a table of contents). What constitutes a label is defined by each markup language specification. For example, in HTML, a heading (H1-H6) is a label for the section that follows it, a CAPTION is a label for a table, the "title" attribute is a label for its element, etc. A label is not required to be text only. For important elements that do not have associated labels, user agents may generate labels for the outline view. For information about what constitutes the set of important structural elements, please see the Note following checkpoint 9.8. By making the outline view navigable, it is possible to satisfy this checkpoint and checkpoint 9.8 together: allow users to navigate among the important elements of the outline view, and to navigate from a position in the outline view to the corresponding position in a full view of content. See also checkpoint 9.9.

Techniques:

Amaya table of contents view

This image shows the table of contents view provided by Amaya [AMAYA]. This view is coordinated with the main view so that users may navigate in one viewport and the focus follows in the other. An entry in the table of contents with a target icon means that the heading in the document has an associated anchor.


10.6 To help the user decide whether to traverse a link, make available the following information about it: link element content, link title, whether the link is internal to the resource (e.g., the link is to a target in the same Web page), whether the user has traversed the link recently, whether traversing it may involve a fee, and information about the type, size, and natural language of linked Web resources. The user agent is not required to compute or make available information that requires retrieval of linked Web resources. [Priority 3] Content only. (Checkpoint 10.6)

Techniques:
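For HTML links, much of this information can be gathered from the document object model without retrieving the linked Web resource. The following JavaScript sketch is illustrative only (the describeLink name and the shape of the returned object are assumptions); it collects the link text, advisory title, whether the link is internal to the page, and the advisory content type and language of the target when the author has supplied them. Information that requires user agent history (e.g., whether the link was traversed recently) is not shown here.

    // Sketch: gather metadata about an HTML link (an A element) that can help
    // the user decide whether to traverse it. Only information available
    // without retrieving the linked resource is collected.
    function describeLink(anchor) {
      var href = anchor.getAttribute("href") || "";
      return {
        text:     anchor.textContent,              // link element content
        title:    anchor.getAttribute("title"),    // advisory title, if any
        internal: href.charAt(0) === "#",          // target in the same page?
        type:     anchor.getAttribute("type"),     // advisory content type
        language: anchor.getAttribute("hreflang")  // language of the target
      };
    }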


Checkpoints for the user interface

10.7 Provide a mechanism for highlighting the viewport with the current focus. For graphical viewports, the default highlight mechanism must not rely on color alone. [Priority 1] User agent only. (Checkpoint 10.7)
Note: This includes highlighting and identifying frames. This checkpoint is an important special case of checkpoint 1.1. See also checkpoint 7.3.

Techniques:

Example of a solid line border used to indicate the content focus in Opera 3.60

This image shows how Opera [OPERA] uses a solid line border to indicate content focus.

Example of system highlight colors used to indicate the content focus by the accessible browser project

This image shows how the Accessible Web Browser [AWB] uses the operating environment highlight colors to indicate content focus.


10.8 Ensure that when a viewport's selection or content focus changes, it is in the viewport after the change. [Priority 2] User agent only. (Checkpoint 10.8)
Note: For example, if users navigating links move to a portion of the document outside a graphical viewport, the viewport should scroll to include the new location of the focus. Or, for users of audio viewports, allow configuration to render the selection or focus immediately after the change.

Techniques:


10.9 Indicate the relative position of the viewport in rendered content (e.g., the proportion of an audio or video clip that has been played, the proportion of a Web page that has been viewed, etc.). [Priority 3] User agent only. (Checkpoint 10.9)
Note: The user agent may calculate the relative position according to content focus position, selection position, or viewport position, depending on how the user has been browsing. The user agent may indicate the proportion of content viewed in a number of ways, including as a percentage, as a relative size in bytes, etc. For two-dimensional renderings, relative position includes both vertical and horizontal positions.

Techniques:


Guideline 11. Allow configuration and customization.

Checkpoints

11.1 Provide information to the user about current user preferences for input configurations. [Priority 1] User agent only. (Checkpoint 11.1)
Note: To satisfy this checkpoint, the user agent may make available binding information in a centralized fashion (e.g., a list of bindings) or a distributed fashion (e.g., by listing keyboard shortcuts in user interface menus).

Techniques:


11.2 Provide a centralized view of the current author-specified input configuration bindings. [Priority 2] Content only. (Checkpoint 11.2)
Note: For example, for HTML documents, provide a view of keyboard bindings specified by the author through the "accesskey" attribute. The intent of this checkpoint is to centralize information about author-specified bindings so that the user does not have to read the entire content first to find out what bindings are available. The user agent may satisfy this checkpoint by providing different views for different input modalities (keyboard, pointing device, voice, etc.).

Techniques:
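As one possible technique, a user agent could build the centralized view by walking the document for elements that carry the "accesskey" attribute. The following JavaScript sketch is an illustration, not a required implementation; the listAccessKeys name and the shape of the returned entries are assumptions.

    // Sketch: collect all author-specified "accesskey" bindings in a document
    // and return them as entries that a user agent could present in a single
    // view (a menu, a dialog, etc.).
    function listAccessKeys(doc) {
      var bindings = [];
      var elements = doc.getElementsByTagName("*"); // every element
      for (var i = 0; i < elements.length; i++) {
        var el = elements[i];
        var key = el.getAttribute("accesskey");
        if (key) {
          bindings.push({
            key: key,
            element: el.tagName,
            // Prefer an explicit title; otherwise fall back to the element's text.
            label: el.getAttribute("title") || el.textContent
          });
        }
      }
      return bindings;
    }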


11.3 Allow the user to override any binding that is part of the user agent default input configuration. The user agent is not required to allow the user to override standard bindings for the operating environment (e.g., for access to help). [Priority 2] User agent only. (Checkpoint 11.3)
Note: The override requirement only applies to bindings for the same input modality (e.g., the user must be able to override a keyboard binding with another keyboard binding). See also checkpoint 11.5, checkpoint 11.7, and checkpoint 12.3.

Techniques:


11.4 Allow the user to override any binding in the default keyboard configuration with a binding to either a key plus modifier keys or to a single key. For each functionality in the set required by checkpoint 11.5, allow the user to configure a single-key binding (i.e., one key press performs the task, with zero modifier keys). If the number of physical keys on the keyboard is less than the number of functionalities required by checkpoint 11.5, allow single-key bindings for as many of those functionalities as possible. The user agent is not required to allow the user to override standard bindings for the operating environment (e.g., for access to help). [Priority 2] User agent only. (Checkpoint 11.4)
Note: In this checkpoint, "key" refers to a physical key of the keyboard (rather than, say, a character of the document character set). Because single-key access is so important to some users with physical disabilities, user agents should ensure that (1) most keys of the physical keyboard may be configured for single-key bindings, and (2) most functionalities of the user agent may be configured for single-key bindings. This checkpoint does not require single physical key bindings for character input, only for the activation of user agent functionalities. For information about access to user agent functionality through a keyboard API, see checkpoint 6.7.

Techniques:


11.5 Ensure that the default input configuration includes bindings for the following functionalities required by other checkpoints in this document: move focus to next enabled element; move focus to previous enabled element; activate focused link; search for text; search again for same text; increase size of rendered text; decrease size of rendered text; increase global volume; decrease global volume; (each of) stop, pause, resume, fast advance, and fast reverse selected audio and animations (including video and animated images). If the user agent supports the following functionalities, the default input configuration must also include bindings for them: next history state (forward); previous history state (back); enter URI for new resource; add to favorites (i.e., bookmarked resources); view favorites; stop loading resource; reload resource; refresh rendering; forward one viewport; back one viewport; next line; previous line. [Priority 2] User agent only. (Checkpoint 11.5)
Note: This checkpoint does not make any requirements about the ease of use of default input configurations, though clearly the default configuration should include single-key bindings and allow easy operation. Ease of use is ensured by the configuration requirements of checkpoint 11.3.

Techniques:


11.6 For the configuration requirements of this document, allow the user to save user preferences in at least one user profile. Allow users to choose from among available profiles or no profile (i.e., the user agent default settings). [Priority 2] User agent only. (Checkpoint 11.6)
Note: The configuration requirements of the checkpoints in this document involve user preferences for styles, presentation rates, input configurations, navigation, viewport behavior, and user agent prompts and alerts.

Techniques:


11.7 For graphical user interfaces, allow the user to configure the position of controls on tool bars of the user agent user interface, to add or remove controls for the user interface from a predefined set, and to restore the default user interface. [Priority 3] User agent only. (Checkpoint 11.7)
Note: This checkpoint is a special case of checkpoint 11.3.

Techniques:


Guideline 12. Provide accessible product documentation and help.

Checkpoints

12.1 Ensure that at least one version of the product documentation conforms to at least Level Double-A of the Web Content Accessibility Guidelines 1.0 [WCAG10]. [Priority 1] User agent only. (Checkpoint 12.1)

Techniques:


12.2 Document all user agent features that benefit accessibility. [Priority 1] User agent only. (Checkpoint 12.2)
Note: For example, review the documentation or help system to ensure that it includes information about the functions and capabilities of the user agent that are required by WAI Accessibility Guidelines, platform-specific accessibility guidelines, etc. The documentation of accessibility features should be integrated into the documentation as a whole.

Techniques:


12.3 Document the default input configuration (e.g., the default keyboard bindings). [Priority 1] User agent only. (Checkpoint 12.3)
Note: If the default input configuration is inconsistent with conventions of the operating environment, the documentation should alert the user.

Techniques:

The following table shows how one might document keyboard bindings. It shows the default keyboard configuration for versions of Netscape Navigator [NAVIGATOR] running on the Macintosh, Unix, and Windows operating systems. If a function exists in the browser but does not have a binding, its corresponding cell is marked with an asterisk. If the function does not exist, it is left blank. Note: This table lists some, but not all, functionalities and keyboard bindings of Navigator. It is meant to illustrate, not serve as definitive documentation for Netscape Navigator.

Some entries contain links to special notes. The number in parentheses following the link is the number of the relevant note.

Note: To make this table accessible, a linear version of Navigator Keyboard Bindings is available.

Navigator Keyboard Bindings
Function Macintosh (v 4.61) Unix (v 4.51) Windows (v 4.7)
Move within a document
Scroll to next page Page Down Page Down Page Down
Scroll to previous page Page Up Page Up Page Up
Scroll to top * * Control-Home
Scroll to bottom * * Control-End
Move between documents
Open a new document Command+L Alt+O Control+O
Stop loading a document Command+. Esc Esc
Refresh a document Command+R Alt+R Control+R
Load previous document Command+[ or Command+Left Arrow Alt+Left Arrow Alt+Left Arrow
Load next document Command+] or Command+Right Arrow Alt+Right Arrow Alt+Right Arrow
Navigate elements within a document
Move focus to next frame * * *
Move focus to previous frame * * *
Move focus to next enabled element (1) Tab Tab Tab
Move focus to previous enabled element (1) Shift+Tab Shift+Tab Shift+Tab
Find word in page Command+F Alt+F Control+F
Act on HTML elements
Select a link * * Enter
Toggle a check box * * Shift or Enter
Activate radio button * * Shift
Move focus to next item in an option box * * Down Arrow or Right Arrow
Move focus to previous item in an option box * * Up Arrow or Left Arrow
Select item in an option box * * Enter
Press a button (2) Return, Space Enter, Space Enter, Space
Navigate menus
Activate menu * * Alt+ the underlined letter in the menu title
Deactivate menu * Esc Esc
Move focus to next menu item * * (3) Down Arrow
Move focus to previous menu item * * (3) Up Arrow
Select menu item * underlined letter in the menu item Enter
Move focus to submenu * * (3) Right Arrow
Move focus to main menu * * (3) Left Arrow
Navigate bookmarks
View bookmarks menu * (4) * Alt+C+B
Move focus to next item in bookmarks menu Down Arrow (4) * Down Arrow
Move focus to previous item in bookmarks menu Up Arrow (4) * Up Arrow
Select item in bookmarks menu Return (4) * Enter
Add bookmark Command+D Alt+K Control+D
Edit bookmarks Command+B Alt+B Control+B
Delete current bookmark (5) Delete Alt+D Delete
Navigate history list
View history list Command+H Alt+H Control+H
Move focus to next item in history list * * Down Arrow
Move focus to previous item in history list * * Up Arrow
Move focus to first item in history list * * Left Arrow
Select item in history list * * Enter (6)
Close history list Command+W Alt+W Control+W
Define view
Increase font size (7) Shift+Command+] Alt+] Control+]
Decrease font size (7) Shift+Command+[ Alt+[ Control+[
Change font color * * *
Change background color * * *
Turn off author-defined style sheets * * *
Turn on user-defined style sheets (8) ? ? ?
Apply next user-defined style sheet ? ? ?
Apply previous user-defined style sheet ? ? ?
Other functionalities
Access to documentation * * *

Notes.

  1. In Windows, enabled elements of the user interface include links, text entry boxes, buttons, checkboxes, radio buttons, etc. In Unix and Macintosh, Tab cycles through text entry boxes only.
  2. In Windows, this works for any button, since any button can gain the user interface focus using keyboard commands. In Unix and Macintosh, this only applies to the "Submit" button following a text entry.
  3. In Unix, the menus cannot be opened with binding keys. However, once a menu is opened it stays opened until it is explicitly closed, which means that the menus can still be used with shortcut keys to some extent. Sometimes left and right arrows move between menus and up and down arrows move within menus, but this does not seem to work consistently, even within a single session.
  4. In Macintosh, you cannot explicitly view the bookmarks menu. However, if you choose "Edit Bookmarks", which does have a keyboard binding, you can then navigate through the bookmarks and open bookmarked documents in the viewport.
  5. To delete a bookmark you must first choose "Edit Bookmarks" and then move the focus to the bookmark you want to delete.
  6. In Windows, when you open a link from the history menu using Enter, the document opens in a new window.
  7. All three systems have menu items (and corresponding shortcut keys) meant to allow the user to change the font size. However, the menu items are consistently disabled in both Macintosh and Unix. The user seems to be able to actually change the font sizes only in Windows.
  8. It is important to allow users to set their own Cascading Style Sheets. Although Navigator does (currently) allow the user to override the author's choice of foreground color, background color, font, and font size, it does not allow some of the advanced capabilities that make CSS so powerful. For example, a blind user may want to save a series of style sheets which show only headings, only links, etc., and then view the same page using some or all of these style sheets in order to orient himself to the organization of the page before reading the page.

12.4 In a dedicated section of the documentation, describe all features of the user agent that benefit accessibility. [Priority 2] User agent only. (Checkpoint 12.4)
Note: This is a more specific requirement than checkpoint 12.2.

Techniques:


12.5 In each software release, document all changes that affect accessibility. [Priority 2] User agent only. (Checkpoint 12.5)
Note: Features that affect accessibility are those required by WAI Accessibility Guidelines, platform-specific accessibility guidelines, etc.

Techniques:


3 Accessibility topics

This section presents general accessibility techniques that may apply to more than one checkpoint.

3.1 Access to content

User agents need to ensure that users have access to content, either rendered through the user interface or made available to assistive technologies through an API. While providing serial access to a stream of content would satisfy this requirement, this would be analogous to offering recorded music on a cassette: other technologies exist (e.g., CD-ROMs) that allow direct access to music. It is just as important for user agents to allow users to access Web content efficiently, whether the content is being rendered as a two-dimensional graphical layout, an audio stream, or a line-by-line braille stream. Providing efficient access to content involves:

These topics are addressed below.

3.1.1 Preserve and provide structure

When used properly, markup languages structure content in ways that allow user agents to communicate that structure across different renderings. A table describes relationships among cells and headers. Graphically, user agents generally render tables as a two-dimensional grid. However, serial renderings (e.g., speech and braille) also need to make those relationships apparent, otherwise users may not understand the purpose of the table and the relationships among its cells (see the section on table techniques). User agents need to render content in ways that allow users to understand the underlying document structure, which may consist of headings, lists, tables, synchronized multimedia, link relationships, etc. Providing alternative renderings (e.g., an outline view) will also help users understand document structure.

Note: Even though the structure of a language like HTML is defined by a Document Type Definition (DTD), user agents may convey structure according to a "more intelligent" document model when that model is well-known. For instance, in the HTML DTD, heading elements (H1 - H6) do not nest, but presenting the document as nested headings may convey the document's structure more effectively than as a flat list of headers.
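To illustrate, a user agent could derive such an outline for HTML by collecting the document's headings and recording their levels, then rendering the result as nested (indented) entries. The sketch below is a simplified assumption about how this might be done; it considers only H1-H6 headings and ignores other labels such as CAPTION.

    // Sketch: build an outline from the sequence of H1-H6 headings in an
    // HTML document. Each entry records the heading level and its text so
    // the outline can be rendered as nested, indented items.
    function buildOutline(doc) {
      var outline = [];
      var headingNames = ["H1", "H2", "H3", "H4", "H5", "H6"];
      var all = doc.getElementsByTagName("*");
      for (var i = 0; i < all.length; i++) {
        var level = headingNames.indexOf(all[i].tagName.toUpperCase());
        if (level !== -1) {
          outline.push({ level: level + 1, label: all[i].textContent });
        }
      }
      return outline;
    }

    // Trivial text rendering: one line per heading, indented two spaces
    // per nesting level.
    function renderOutline(outline) {
      var lines = [];
      for (var i = 0; i < outline.length; i++) {
        lines.push(new Array(outline[i].level).join("  ") + outline[i].label);
      }
      return lines.join("\n");
    }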

3.1.2 Allow access to selected content

The guidelines emphasize the importance of navigation as a way to provide efficient access to content. Navigation allows users to access content more efficiently and, when used in conjunction with selection and focus mechanisms, allows users to query content for metadata. For instance, blind users often navigate a document by skipping from link to link, deciding whether to follow each link based on metadata about the link. User agents can help them decide whether to follow a link by allowing them to query each focused link for the link text, title information, information about whether the link has been visited, whether the link involves a fee, etc. While much of this information may be rendered, it must also be available to assistive technologies.

For example, the Amaya browser/editor [AMAYA] makes available all attributes and their values to the user through a context menu. The user selects an element (e.g., with the mouse) and opens an attribute menu that shows which attributes are available for the element and which are set. The user may read or write values to attributes (since Amaya is an editor). Information about attributes is also available through Amaya's structured view, which renders the document tree as structured text.

The selection may be widened (moved to the nearest node one level up the document tree) by pressing the Escape key; this is a form of structured navigation based on the underlying document object model.
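A minimal sketch of this kind of widening, using the standard DOM parentNode property (the widenSelection name and the notion of a "current node" tracked by the user agent are assumptions):

    // Sketch: widen the current selection by moving it one level up the
    // document tree, as Amaya does when the user presses Escape. The user
    // agent is assumed to track the node that anchors the current selection.
    function widenSelection(currentNode) {
      var parent = currentNode.parentNode;
      // Stop at the document node; there is nothing wider to select.
      if (parent && parent.nodeType !== 9 /* DOCUMENT_NODE */) {
        return parent;
      }
      return currentNode;
    }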

Users may want to select content based on structure alone (as offered by Amaya) but also based on how the content has been rendered. For instance, most user agents allow users to select ranges of rendered text that may cross "element boundaries".

3.1.3 Access to conditional content

Authors often use the conditional content mechanisms of a specification to satisfy the requirements of the Web Content Accessibility Guidelines 1.0 [WCAG10] for providing equivalents to content so that users may understand the function of a page or part of a page even though they may not be able to make use of a particular content type. For example, authors need to provide text equivalents for non-text content (e.g., images, video, audio-only presentations, etc.) because text may be rendered as speech or braille and may be used by users with visual disabilities, hearing disabilities, or both. User agents need to ensure that this conditional content is available to users, through the user interface and through APIs.

How authors specify conditional content depends on the markup language used.

In HTML 4 [HTML4], conditional content mechanisms include the following:

Techniques for providing access to conditional content include the following:

3.1.4 Context

Authors and user agents provide context to users through content, structure, navigation mechanisms, and query mechanisms. Titles, dimensions, dates, relationships, the number of elements, and other metadata all help orient the user, particularly when available as text. For instance, user agents can help orient users by allowing them to request that document headings and lists be numbered. See also the section on table techniques, which explains how user agents can offer table navigation and the ability to query a table cell for information about the cell's row and column position, associated header information, etc.

3.2 User control of style

To ensure accessibility, users need to be able to configure the style of rendered content and the user interface. Author-specified styles, while important, may make content inaccessible to some users. User agents need to allow users to increase the size of rendered text (e.g., with a zoom mechanism or font size control), to change colors and color combinations, to slow down multimedia presentations, etc.

To give authors design flexibility and allow users to control important aspects of content style, user agents should implement CSS ([CSS1], [CSS2]) and allow users to create and apply user style sheets. CSS includes mechanisms for tailoring rendering for a particular output medium, including audio, braille, screen, and print.

In JavaScript, the following may be used to change style information:

    document.all.myElement.style.color = "red";
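The document.all collection is specific to Internet Explorer. The same change can be made through the standard DOM, for example (assuming an element whose "id" is myElement):

    // Standards-based equivalent: look the element up by its id and set an
    // inline style property on it.
    var element = document.getElementById("myElement");
    if (element) {
      element.style.color = "red";
    }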

3.3 Link techniques

User agents make links accessible by providing navigation to links, helping users decide whether to follow them, and allowing interaction in a device-independent manner. Link techniques include the following:

JAWS for Windows HTML Options menu, which allows configuration of a number of link rendering options

As shown in the following image, JAWS for Windows [JFW] offers a view for configuring a number of rendering features, notably some concerning link types, text link verbosity, image map link verbosity, graphical link verbosity, and internal links.

3.4 List techniques

User agents can make lists accessible by ensuring that list structure – and in particular, embedded list structure – is available through navigation and rendering.

3.5 Table techniques

The HTML TABLE element was designed to represent relationships among data ("data" tables). Even when authored well and used according to specification, tables may pose problems for users with disabilities for a number of reasons:

For these situations, user agents may assist these users by providing table navigation mechanisms and supplying context that is present in a two-dimensional rendering (e.g., the cells surrounding a given cell).

To complicate matters, many authors use tables to lay out Web content ("layout" tables). Not only are table structures used to lay out objects on the screen, but table elements such as TH (table header) in HTML are used for font styling rather than to indicate a true table header. These practices make it difficult for assistive technologies to rely on markup to convey document structure. Consequently, assistive technologies often resort to interpreting the rendered content, even though the rendered content has "lost" information encoded in the markup. For instance, when an assistive technology "reads" a table from its graphical rendering, the contents of multiline cells may become intermingled. For example, consider the following table:

This is the top left cell    This is the top right cell 
of the table.                of the table.

This is the bottom left      This is the bottom right 
cell of the table.           cell of the table.

Screen readers that read rendered content line by line would read the table cells incorrectly as "This is the top left cell This is the top right cell". So that assistive technologies are not required to gather incomplete information from renderings, these guidelines require that user agents provide access to document source through an API (see checkpoint 6.3).
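For instance, an assistive technology with access to the document object model can read the example table above cell by cell rather than line by line. The following JavaScript sketch uses the standard HTML DOM table interfaces; the readTableByCell name and the way cell contents are labeled are illustrative assumptions.

    // Sketch: traverse an HTML TABLE element through the DOM so that each
    // cell's contents stay together instead of being intermingled across
    // rendered lines.
    function readTableByCell(table) {
      var result = [];
      for (var r = 0; r < table.rows.length; r++) {
        var row = table.rows[r];
        for (var c = 0; c < row.cells.length; c++) {
          result.push("Row " + (r + 1) + ", column " + (c + 1) + ": " +
                      row.cells[c].textContent);
        }
      }
      return result;
    }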

The following sections discuss techniques for providing improved access to tables.

3.5.1 Table metadata

Users of screen readers or other serial access devices cannot gather information "at a glance" about a two-dimensional table. User agents can make tables more accessible by providing the user with table metadata such as the following:

When navigating, quick access to table metadata will allow users to decide whether to navigate within the table or skip over it. Other techniques:

3.5.2 Linear rendering of tables

A linear rendering of tables -- cells presented one at a time, row by row or column by column -- may be useful, but generally only for simple tables. For more complex tables, user agents need to convey more information about relationships among cells and their headers. A linear rendering of a table may be useful as an equivalent for a multi-dimensional table.

Note: The following techniques apply to columns as well as rows. The elements listed in this section are HTML 4.01 table elements ([HTML4], section 11).

3.5.3 Cell rendering

The most important aspect of rendering a table cell is that the cell's contents be rendered faithfully and be identifiable as the contents of a single cell. However, user agents may provide additional information to help orient the user:

3.5.4 Cell header algorithm

Properly constructed data tables distinguish header cells from data cells. How headers are associated with table cells depends on the markup language. The following algorithm is based on the HTML 4.01 algorithm to calculate header information ([HTML4], section 11.4.3). For the sake of brevity, it assumes a left-to-right ordering, but will work for right-to-left tables as well (refer to the "dir" attribute of HTML 4 [HTML4], section 8.2). For a given cell:
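The individual steps are given in the HTML 4.01 specification. The following JavaScript sketch implements only a much-simplified version of the idea, as an illustration: it scans left along the cell's row and up along its column for TH cells, and it ignores the "headers" and "scope" attributes and cells that span rows or columns.

    // Simplified sketch: find candidate header cells for a given data cell
    // by scanning left in its row and up in its column for TH elements.
    // The "headers", "scope", "colspan", and "rowspan" attributes are
    // ignored for brevity.
    function findHeaders(table, rowIndex, cellIndex) {
      var headers = [];
      var row = table.rows[rowIndex];

      // Scan left along the row for row headers.
      for (var c = cellIndex - 1; c >= 0; c--) {
        if (row.cells[c].tagName === "TH") {
          headers.push(row.cells[c].textContent);
        }
      }

      // Scan up the column for column headers.
      for (var r = rowIndex - 1; r >= 0; r--) {
        var cell = table.rows[r].cells[cellIndex];
        if (cell && cell.tagName === "TH") {
          headers.push(cell.textContent);
        }
      }
      return headers;
    }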

3.5.5 Cell header repair strategies

Not all data tables include proper header markup, which the user agent may be able to detect. Some repair strategies for finding header information include the following:

Other repair issues to consider:

3.5.6 Table navigation

To permit efficient access to tables, user agents should allow users to navigate to tables and within tables, to select individual cells, and to query them for information about the cell and the table as a whole.

3.6 Image map techniques

One way to make an image map accessible is to render the links it contains as text links. This allows assistive technologies to render the links as speech or braille, and benefits users with slow access to the Web and users of small Web devices that do not support images but can support hypertext. User agents may allow users to toggle back and forth between a graphical mode for image maps and a text mode.

To construct a text version of an image map in HTML:

Furthermore, user agents that render a text image map instead of an image may preface the text image map with inline metadata such as:

Allow users to suppress, shrink, and expand text versions of image maps so that they may quickly navigate to an image map (which may be, for example, a navigation tool bar) and decide whether to "expand" it and follow the links of the map. The metadata listed above will allow users to decide whether to expand the map. Ensure that the user can expand and shrink the map and navigate its links using the keyboard and other input devices.
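One way to derive the text view is directly from the MAP element's AREA children in the document object model. The following sketch is one possible approach (the buildTextImageMap name and the generated list structure are assumptions, and AREA elements without an "href" are skipped):

    // Sketch: build a plain list of text links from a client-side image map.
    // Each AREA contributes its "alt" text (or its href as a fallback) and
    // its link target, so the map can be rendered and navigated as ordinary
    // text links.
    function buildTextImageMap(doc, map) {
      var list = doc.createElement("UL");
      for (var i = 0; i < map.areas.length; i++) {
        var area = map.areas[i];
        var href = area.getAttribute("href");
        if (!href) {
          continue; // skip "nohref" regions
        }
        var link = doc.createElement("A");
        link.setAttribute("href", href);
        link.appendChild(doc.createTextNode(area.getAttribute("alt") || href));
        var item = doc.createElement("LI");
        item.appendChild(link);
        list.appendChild(item);
      }
      return list;
    }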

3.7 Frame techniques

Frames were originally designed so that authors could divide up graphic real estate and allow the pieces to change independently (e.g., selecting an entry in a table of contents in one frame changes the contents of a second frame). While frames are not inherently inaccessible, they raise some accessibility issues:

To name a frame in HTML, use the following algorithm (a code sketch follows the list):

  1. Use the "title" attribute on FRAME, or if not present,
  2. Use the "name" attribute on FRAME, or if not present,
  3. Use title information of the referenced frame source (e.g., the TITLE element of the source HTML document), or
  4. Use title information of the referenced long description (e.g., what "longdesc" refers to in HTML), or
  5. Use frame context (e.g., "Frame 2.1.3" to indicate the path to this frame in nested framesets).
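A minimal JavaScript sketch of the algorithm above (the frameLabel name is an assumption, and the context string for step 5 is supplied by the caller; step 4 is omitted for brevity):

    // Sketch: choose a human-readable label for a FRAME element by trying
    // each source of the naming algorithm in order.
    function frameLabel(frame, contextLabel) {
      // 1. "title" attribute on FRAME.
      if (frame.getAttribute("title")) {
        return frame.getAttribute("title");
      }
      // 2. "name" attribute on FRAME.
      if (frame.getAttribute("name")) {
        return frame.getAttribute("name");
      }
      // 3. Title of the referenced frame source, if it is available.
      try {
        var doc = frame.contentDocument;
        if (doc && doc.title) {
          return doc.title;
        }
      } catch (e) {
        // Access to the frame's document may be denied; fall through.
      }
      // 4. Title information of the referenced long description ("longdesc")
      //    would be consulted here; retrieving it is omitted from this sketch.
      // 5. Frame context supplied by the caller (e.g., "Frame 2.1.3").
      return contextLabel;
    }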

To make frames accessible, user agents should do the following:

Consider renderings of the following document:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN">
<HTML lang="en">
<HEAD>
  <META http-equiv="Content-Type" 
           content="text/html; charset=iso-8859-1">
  <TITLE>Time Value of Money</TITLE>
</HEAD>

<FRAMESET COLS="*, 388">
  <FRAMESET ROWS="51, *">
    <FRAME src="sizebtn" marginheight="5" marginwidth="1" 
       name="Size buttons" title="Size buttons">
    <FRAME src="outlinec" marginheight="4" marginwidth="4" 
       name="Presentation Outline" 
           title="Presentation Outline">
  </FRAMESET>

  <FRAMESET ROWS="51, 280, *">
    <FRAME src="navbtn" marginheight="5" marginwidth="1" 
       name="Navigation buttons" 
       title="Navigation buttons">
    <FRAME src="slide001" marginheight="0" marginwidth="0" 
      name="Slide Image" title="Slide Image">
    <FRAME src="note001" name="Notes" title="Notes">
  </FRAMESET>
<NOFRAMES> 
<P>List of Presentation Slides</P>
<OL>
<LI><A HREF="slide001">Time Value of Money</A>
<LI><A HREF="slide002">Topic Overview</A>
<LI><A HREF="slide003">Terms and Short Hand</A>
<LI><A HREF="slide004">Future Value of a Single CF</A>
<LI><A HREF="slide005">Example 1: FV example:The
NBA’s new Larry Bird exception</A>
<LI><A HREF="slide006">FV Example: NBA’s Larry
Bird Exception (cont.)</A>
<LI><A HREF="slide007">SuperStar’s Contract
Breakdown</A>
<LI><A HREF="slide008">Present Value of a Single
Cash Flow</A>
<LI><A HREF="slide009">Example 2: Paying Jr, and
A-Rod</A>
<LI><A HREF="slide010">Example 3: Finding Rate of
Return or Interest Rate</A>
<LI><A HREF="slide011">Annuities</A>
<LI><A HREF="slide012">FV of Annuities</A>
<LI><A HREF="slide013">PV of Annuities</A>
<LI><A HREF="slide014">Example 4: Invest Early in
an IRA</A>
<LI><A HREF="slide015">Example 4 Solution</A>
<LI><A HREF="slide016">Example 5: Lotto Fever
</A>
<LI><A HREF="slide017">Uneven Cash Flows: Example
6:Fun with the CF function</A>
<LI><A HREF="slide018">Example 6 CF worksheet inputs</A>
<LI><A HREF="slide019">CF inputs continued</A>
<LI><A HREF="slide020">Non-Annual Interest
Compounding</A>
<LI><A HREF="slide021">Example 7: What rate are
you really paying?</A>
<LI><A HREF="slide022">Nominal to EAR Calculator</A>
<LI><A HREF="slide023">Continuous Interest Compounding</A>
<LI><A HREF="slide024">FV and PV with non-annual
interest compounding</A>
<LI><A HREF="slide025">Non-annual annuities</A>
<LI><A HREF="slide026">Example 8: Finding Monthly
Mortgage Payment</A>
<LI><A HREF="slide027">solution to Example 8</A>
</OL>
</NOFRAMES>
</FRAMESET>
</HTML>

The following examples show how some user agents handle this frameset.

Example frameset with five frame panes rendered in Microsoft Internet Explorer 5.0

Rendering of a frameset by Internet Explorer [IE-WIN].

Rendering by Lynx [LYNX]:

                                        Time Value of Money

   FRAME: Size buttons
   FRAME: Presentation Outline
   FRAME: Navigation buttons
   FRAME: Slide Image
   FRAME: Notes

   List of Presentation Slides
    1. Time Value of Money 
    2. Topic Overview 
    3. Terms and Short Hand 
    4. Future Value of a Single CF 
    5. Example 1: FV example:The NBA's new Larry Bird exception 
    6. FV Example: NBA's Larry Bird Exception (cont.) 
    7. SuperStar's Contract Breakdown 
    8. Present Value of a Single Cash Flow 
    9. Example 2: Paying Jr, and A-Rod 
   10. Example 3: Finding Rate of Return or Interest Rate 
   11. Annuities 
   12. FV of Annuities 
   13. PV of Annuities 
   14. Example 4: Invest Early in an IRA 
   15. Example 4 Solution 
   16. Example 5: Lotto Fever 
   17. Uneven Cash Flows: Example 6:Fun with the CF function 
   18. Example 6 CF worksheet inputs 
   19. CF inputs continued 
   20. Non-Annual Interest Compounding 
   21. Example 7: What rate are you really paying? 
   22. Nominal to EAR Calculator 
   23. Continuous Interest Compounding 
   24. FV and PV with non-annual interest compounding 
   25. Non-annual annuities 
   26. Example 8: Finding Monthly Mortgage Payment 
   27. solution to Example 8 

Example frameset with five links for each of the frame elements in IBM home page reader

Rendering of a frameset by Home Page Reader [HPR].

User agents may also indicate the number of frames in a document and which frame has the current focus via the menu bar or popup menus. Users can configure the user agent to include a FRAMES menu item in their menu bar. The menu bar makes the information highly visible to all users and is very accessible to assistive technologies.

A pull down menu indicating the number of frames in a document, the labels associated with each frame, and a check mark to indicate the frame with the current focus

In this image of the Accessible Web Browser [AWB], the menu bar indicates the number of frames and uses a check mark next to the name of the frame with the current focus.

3.8 Form techniques

To make forms accessible, user agents need to ensure that users may interact with them in a device-independent manner, that users can navigate to the various form controls, and that information about the form and its controls is available on demand.

3.8.1 Form navigation techniques

3.8.2 Form orientation techniques

Provide the following information about forms on demand:

3.8.3 Form control orientation techniques

Provide the following information about the controls in a form on demand (e.g., for the control with focus):

3.8.4 Form submission techniques

Some users do not want forms to be submitted without their consent. The following techniques address user control of form submissions:

3.9 Generated content techniques

User agents may help orient users by generating additional content that "announces" a context change. This may be done through CSS 2 [CSS2] style sheets using a combination of selectors (including the ':before' and ':after' pseudo-elements described in section 12.1) and the 'content' property (section 12.2).

For instance, the user might choose to hear "language:German" when the natural language changes to German and "language:default" when it changes back. This may be implemented in CSS 2 with the ':before' and ':after' pseudo-elements ([CSS2], section 5.12.3).

For example, with the following definition in the stylesheet:

    [lang|=es]:before { content: "start Spanish "; }
    [lang|=es]:after  { content: " end Spanish"; }

the following HTML example:

<P lang="es" class="Spanish">
 <A href="foo_esp.html" 
    hreflang="es">Esta pagina en español</A></P>

might be spoken "start Spanish _Esta pagina en espanol_ end Spanish". Refer also to information on matching attributes and attribute values useful for language matching in CSS 2 ([CSS2], section 5.8.1).

The following example uses style sheets to distinguish visited from unvisited links with color and a text prefix.

The phrase "Visited link:" is inserted before every visited link:

    A:link           { color: red }     /* For unvisited links */
    A:visited        { color: green }   /* For visited links */
    A:visited:before { content: "Visited link: "; }

To hide content, use the CSS 'display' or 'visibility' properties ([CSS2], sections 9.2.5 and 11.2, respectively). The 'display' property (with the value 'none') suppresses rendering of an entire subtree. The 'visibility' property (with the value 'hidden') causes the user agent to generate the rendering structure, but the content is rendered invisibly and still occupies space in the layout.

The following XSLT style sheet (excerpted from the XSLT Recommendation [XSLT], Section 7.7) shows how one might number H4 elements in HTML with a three-part label.

Example.

<xsl:template match="H4">
   <fo:block>
     <xsl:number level="any" from="H1" count="H2"/>
     <xsl:text>.</xsl:text>
     <xsl:number level="any" from="H2" count="H3"/>
     <xsl:text>.</xsl:text>
     <xsl:number level="any" from="H3" count="H4"/>
     <xsl:text> </xsl:text>
     <xsl:apply-templates/>
   </fo:block>
  </xsl:template>

End example.

3.10 Content repair techniques

When generating repair content, user agent developers should consider the following issues:

See also the section on table cell header repair strategies. Refer also to the W3C document "Techniques for Authoring Tool Accessibility Guidelines 1.0" [ATAG10-TECHS].

3.11 Script and applet techniques

User agents need to make dynamic content accessible to users who may be disoriented by changes in content, who may have a physical disability that prevents them from interacting with a document within a time interval specified by the author, or whose user agent does not support scripts or applets. Not only do user agents need to make equivalents to dynamic content available, they also have to allow users to turn off scripts, stop animations, adjust timing parameters, etc.

3.11.1 Script techniques

Certain elements of a markup language may have associated event handlers that are activated when certain events occur. User agents need to be able to identify those elements with event handlers statically associated (i.e., associated in the document source, not in a script). In HTML 4 ([HTML4], section 18.2.3), intrinsic events are specified by the attributes beginning with the prefix "on": "onblur", "onchange", "onclick", "ondblclick", "onkeydown", "onkeypress", "onkeyup", "onload", "onmousedown", "onmousemove", "onmouseout", "onmouseover", "onmouseup", "onreset", "onselect", "onsubmit", and "onunload".

Techniques for providing access to scripts include the following:

3.11.2 Applet techniques

When a user agent loads an applet, it should support the Java system conventions for loading an assistive technology (see the appendix on loading assistive technologies for DOM access). If the user is accessing the applet through an assistive technology, the assistive technology should alert the user when the applet receives content focus as this will likely result in the launch of an associated plug-in or browser-specific Java Virtual Machine. The user agent then needs to turn control of the applet over to the assistive technology. User agents need to make conditional content available to the assistive technology. Applets generally include an application frame that provides title information.

3.12 Input configuration techniques

Many people benefit from direct access to important user agent functionalities (e.g., via a single key stroke or short voice command): users with poor physical control (who might mistakenly repeat a key stroke), users who fatigue easily (for whom key combinations involve significant effort), users who cannot remember key combinations, and any user who wants to operate the user agent efficiently.

User agents that allow users to modify default input configurations need to account for configuration information from several sources: user agent defaults, user preferences, author-specified configurations, and operating environment conventions. In HTML, the author may specify keyboard bindings with the "accesskey" attribute ([HTML4], section 17.11.2). Users generally specify their preferences through the user interface but may also do so programmatically or through a profile. The user agent may also consider user preferences set at the operating environment level.

To the user, the most important information is the final configuration once all sources have been cascaded (combined) and all conflicts resolved. Knowing the default configuration is also important; checkpoint 12.3 requires that the default configuration be documented. The user may also want to know how the current configuration differs from the default configuration and what configuration in the viewport with the current focus comes from the author. This information may also be useful to technical support personnel who may be assisting users.
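One way to combine these sources is to merge them in order of increasing precedence, so that (in general) user preferences win; section 3.12.1 discusses when that default resolution may need to be overridden. The sketch below is an assumption about how a user agent might represent the sources internally, and the binding names are illustrative only.

    // Sketch: combine input configurations from several sources. Later
    // sources take precedence, so user preferences override author-specified
    // bindings, which in turn override the user agent defaults.
    function cascadeConfigurations(uaDefaults, authorBindings, userPreferences) {
      var sources = [uaDefaults, authorBindings, userPreferences];
      var result = {};
      for (var i = 0; i < sources.length; i++) {
        for (var key in sources[i]) {
          if (sources[i].hasOwnProperty(key)) {
            result[key] = sources[i][key];
          }
        }
      }
      return result;
    }

    // Example: the author's "accesskey" binding for "s" is overridden by the
    // user's own choice for that key.
    var finalConfig = cascadeConfigurations(
      { "Control+f": "searchForText" },      // user agent default
      { "s": "moveFocusToSearchField" },     // author-specified ("accesskey")
      { "s": "stopLoadingResource" }         // user preference
    );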

3.12.1 Resolution of input configuration conflicts

In general, user preferences should override other configurations; however, this may not always be desirable. For example, users should be prevented from configuring the user agent in a way that would interfere with important functionalities such as quitting the user agent or reconfiguring it.

Some possible options user agents may make available to the user to resolve conflicts include:

3.12.2 Invocation through the input configuration

Users may want to use a keyboard or voice binding to shift focus without actually activating associated event handlers (see parallel behavior described for navigation of enabled elements in the section on sequential navigation techniques). First-time users may want to access additional information before deciding whether to activate a control. More experienced users or those familiar with a page may want to focus and activate in one step. Therefore, the user agent may provide the user with the following options:

  1. On invocation of the input binding, move focus to the associated enabled element, but do not activate it.
  2. On invocation of the input binding, move focus to the associated enabled element and prompt the user with information that will allow the user to decide whether to activate the element (e.g., link title or text). Allow the user to suppress future prompts for this particular input binding.
  3. On invocation of the input binding, move focus to the associated enabled element and activate it.

3.13 Speech techniques

The following techniques apply to user agents that render content as synthesized speech. For information about speech recognition and accessibility, refer to "Speak to Write" [SPEAK2WRITE]. For more information about voice browser technology developed at W3C, refer to "Voice Browsers: An introduction and glossary for the requirements drafts" [VOICEBROWSER].

3.14 Techniques for reducing dependency on spatial interactions

Some interactions with content may require spatial information, often provided by users without disabilities through a pointing device such as a mouse. User agents should not require users to "move through space" to interact with content (or "time", for that matter; see checkpoint 2.4). Thus, for users without a pointing device, the user agent's first approximation of access, say through the keyboard, would be to simulate the mouse with keystrokes. However, such a technique usually requires a significant amount of visual feedback as well as physical dexterity, both of which may not be possible for users with some disabilities. A better alternative is to allow users to enter coordinates where a click should occur. While this is "direct access" to the coordinate, this requires extensive knowledge of the geometry in question. A still better alternative is to allow the user to interact with "objects" in content at a more abstract level than geometry. For example, most HTML authors can use "client-side" image maps rather than "server-side" since what is important is not the actual coordinates but rather that the user has selected one region instead of another. The user agent should convey information about the regions, using descriptions provided by the author. Instead of having users choose a state of the United States by its precise longitude and latitude, it is possible to allow them to choose state by name.

4 Appendix: Accessibility features of some operating systems

Several operating systems now include built-in accessibility features designed to assist individuals with varying abilities. Despite operating system differences, the built-in accessibility features use a similar naming convention and offer similar functionalities, within the limits imposed by each operating system (or particular hardware platform). The following is a list of built-in accessibility features from several platforms:

StickyKeys
StickyKeys allows users who have difficulty pressing several keys simultaneously to press and release each key of a key combination sequentially.
MouseKeys
These allow users to move the mouse cursor and activate the mouse button(s) from the keyboard.
RepeatKeys
RepeatKeys allows users to set how fast a key repeats ("repeat rate") when the key is held pressed. It also allows users to control how quickly the key starts to repeat after the key has been pressed ("delay until repeat"). Users can also turn off key repeating.
SlowKeys
SlowKeys instructs the computer not to accept a key as pressed until it has been pressed and held down for more than a user-configurable length of time.
BounceKeys
BounceKeys prevents extra characters from being typed if the user bounces (e.g., due to a tremor) on the same key when pressing or releasing it.
ToggleKeys
ToggleKeys provides an audible indication for the status of keys that have a toggled state (keys that maintain status after being released). The most common toggling keys include Caps Lock, Num Lock, and Scroll Lock.
SoundSentry
SoundSentry monitors the operating system and applications for sounds in order to provide a graphical indication when a sound is being played. Older versions of SoundSentry may have flashed the entire display screen, for example, while newer versions provide the user with a selection of options, such as flashing the viewport that has the current focus or flashing the active window caption bar.

The next three built-in accessibility features are not as commonly available as the above group of features, but are included here for definition, completeness, and future compatibility.

ShowSounds
ShowSounds are user settings or software switches that cause audio information to be presented both audibly and graphically. Applications may use these switches as the basis of user preferences.
HighContrast
HighContrast sets fonts and colors designed to make the screen easier to read.
TimeOut
TimeOut turns off built-in accessibility features automatically if the computer remains idle for a user-configurable length of time. This is useful for computers in public settings such as a library. TimeOut might also be referred to as "reset" or "automatic reset".

The next accessibility feature listed here is not considered to be a built-in accessibility feature (since it only provides an alternative input channel) and is presented here only for definition, completeness, and future compatibility.

SerialKeys
SerialKeys allows a user to perform all keyboard and mouse functions from an external assistive device (such as a communication aid) communicating with the computer via a serial character stream (e.g., serial port, infra-red port, etc.) rather than, or in conjunction with, the keyboard, mouse, and other standard input devices/methods.

Microsoft Windows 95, Windows 98, and Windows NT 4.0

To find out about built-in accessibility features on Windows platforms, ask the system via the "SystemParametersInfo" function. Please refer to "Software accessibility guidelines for Windows applications" [MS-ENABLE] for more information.

For information about Microsoft keyboard configurations (Internet Explorer, Windows 95, Windows 98, and more), refer to documentation on keyboard assistance for Internet Explorer and MS Windows [MS-KEYBOARD].

The following accessibility features can be adjusted from the Accessibility Options Control Panel:

Additional accessibility features available in Windows 98:

Magnifier
Magnifier is a windowed, screen enlargement and enhancement program used by people with low vision to magnify an area of the graphical display (e.g., by tracking the text cursor, current focus, etc.). Magnifier can also invert the colors used by the system within the magnification window.
Accessibility Wizard
The Accessibility Wizard is a setup tool to assist users with the configuration of system accessibility features.

Apple Macintosh operating system

The following accessibility features can be adjusted from the Easy Access Control panel. Note: The Apple naming convention for accessibility features is to put spaces between the terms (e.g., "Sticky Keys" instead of "StickyKeys").

The following accessibility features can be adjusted from the Keyboard Control Panel.

The following accessibility feature can be adjusted from the Sound or Monitors and Sound Control Panel (depending on system version).

Additional accessibility features available for the Macintosh OS:

CloseView
CloseView is a full screen, screen enlargement and enhancement program used by people with low vision to magnify the information on the graphical display, and it can also change the colors used by the system.
SerialKeys
SerialKeys is available as freeware from Apple and several other Web sites.

AccessX, X Keyboard Extension (XKB), and the X Window System

The following accessibility features can be adjusted from the AccessX graphical user interface X client on some DEC, SUN, and SGI operating systems. Other systems supporting XKB may require the user to manipulate the features via a command line parameter(s).

Note: AccessX became a supported part of the X Window System X Server with the release of the X Keyboard Extension in version X11R6.1.

DOS (Disk Operating System)

The following accessibility features are available from a freeware program called AccessDOS, which is available from several Internet Web sites including IBM, Microsoft, and the Trace Center, for either PC-DOS or MS-DOS versions 3.3 or higher.

5 Appendix: Loading assistive technologies for access to the document object model

Many of the checkpoints in the guidelines require a "host" user agent to communicate information about content and the user interface to assistive technologies. This appendix explains how developers can ensure the timely exchange of this information (see checkpoint 6.10). The techniques described here include:

  1. Loading the entire assistive technology in the address space of the host user agent;
  2. Loading part of the assistive technology in the address space of the host user agent (e.g., a piece of stub code, a dynamically linked library (DLL), a browser helper object, etc.);
  3. Out-of-process access to the document object model.

The first two techniques are similar, differing in the amount of, or capability of, the assistive technology loaded in the same process or address space as the host user agent. These techniques are likely to provide faster access to the document object model since they will not be subject to inter-process communication overhead.

Note: This appendix does not address specialized user agents.

Loading assistive technologies for direct navigation of the document object model

First, the host user agent needs to know which assistive technology to load. One technique for this is to store a reference to an assistive technology in a system registry file or, in the case of Java, a properties file. Registry files are common among many operating system platforms:

Here is an example entry for Java:

    assistive_technologies=com.ibm.sns.svk.AccessEngine

In Microsoft Windows, a similar technique could be followed by storing the name of a Dynamic Link Library (DLL) for an assistive technology under a designated registry key as a key name/assistive technology pair.

Here is an example entry for Windows:

    HKEY_LOCAL_MACHINE\Software\Accessibility\DOM 
           "ScreenReader, VoiceNavigation"

Attaching the assistive technologies to the document object model

Once the assistive technology has been registered, any other user agent can determine whether it needs to be loaded and then load it. Once loaded, the assistive technology can monitor the document object model as needed.

On a non-Java platform, a technique to do this would be to create a separate thread with a reference to the document object model using a DLL. This new thread will load the DLL and call a specified DLL entry name with a pointer to the document object model interface. The assistive technology process will then run as long as required.

The assistive technology has the option to either:

In the future, it will be necessary to provide a more comprehensive reference to the application that not only provides direct navigation to its client area document object model, but also multiple document object models that it is processing and an event model for monitoring them.

Java's direct access

Java can facilitate timely access to accessibility components. In this example, an assistive technology running on a separate thread monitors user interface events such as focus changes. When focus changes, the assistive technology is alerted of which component object has focus. The assistive technology can communicate directly with all components in the application by walking the parent/child hierarchy and connecting to each component's methods and monitor events directly. In this case, an assistive technology has direct access to component specific methods as well as those provided for by the Java Accessibility API. There is no reason that a document object model interface to user agent components could not be provided via Java.

In Java 1.1.x, Sun's Java access utilities load an assistive technology by monitoring the Java awt.properties file for the presence of assistive technologies and loading them as shown in the following code example:

import java.awt.*;
import java.util.*;
      
String atNames = Toolkit.getProperty("AWT.assistive_technologies",null);
if (atNames != null) {
    StringTokenizer parser = new StringTokenizer(atNames," ,");
    String atName;
    while (parser.hasMoreTokens()) {
       atName = parser.nextToken();
       try {
          Class.forName(atName).newInstance();
       } 
       catch (ClassNotFoundException e) {
          throw new AWTError("Assistive Technology not found: " + atName);
       } 
       catch (InstantiationException e) {
          throw new AWTError("Could not instantiate Assistive" + 
                             " Technology: " + atName);
       } 
       catch (IllegalAccessException e) {
          throw new AWTError("Could not access Assistive" + 
                             " Technology: " + atName);
       } catch (Exception e) {
          throw new AWTError("Error trying to install Assistive" + 
                             " Technology: " + atName + " " + e);
       }
    }
}

In the above code example, the function Class.forName(atName).newInstance() creates a new instance of the assistive technology. The constructor for the assistive technology will then be responsible for monitoring application component objects by monitoring system events.

In the following code example, the constructor for the assistive technology, AccessEngine, adds a focus change listener using Java accessibility utilities. When the assistive technology is alerted that an object has received focus, it has direct access to that object. If the Object, o, has implemented a document object model interface, the assistive technology will have direct access to the document object model in the same process space as the application.

   import java.awt.*;
   import javax.accessibility.*;
   import com.sun.java.accessibility.util.*;
   import java.awt.event.FocusListener;

   class AccessEngine implements FocusListener {
      public AccessEngine() {
         //Add the AccessEngine as a focus change listener
         SwingEventMonitor.addFocusListener((FocusListener)this);
      }
      
      public void focusGained(FocusEvent theEvent) {
         // get the component object source
         Object o = theEvent.getSource();
         // check to see if this is a document object model component
         if (o instanceof DOM) {
            ...
         }
      }
      public void focusLost(FocusEvent theEvent) {
         // Do Nothing
      }
   }

In this example, the assistive technology has the option of running stand-alone or acting as a cache for a bridge that communicates with a main assistive technology running outside the Java virtual machine.

Loading part of the assistive technologies for direct access to the document object model

In order to attach to a running instance of Internet Explorer 4.0, you can use a Browser Helper Object ([BHO]), which is a DLL that will attach itself to every new instance of Internet Explorer 4.0 [IE-WIN] (only if you explicitly run iexplore.exe). You can use this feature to gain access to the object model of Internet Explorer and to monitor events. This can be tremendously helpful when many method calls need to be made to IE, as each call will be executed much more quickly than in the out-of-process case.

There are some requirements when creating a Browser Helper Object:

Java access bridge

To provide native Microsoft Windows assistive technologies access to Java applications without creating a Java native solution, Sun Microsystems provides the "Java Access Bridge." This bridge is loaded as an assistive technology as described in the section on loading assistive technologies for direct navigation of the document object model. The bridge uses a Java Native Interface (JNI) to Dynamic Link Library (DLL) communication and caching mechanism that allows a native assistive technology to gather and monitor accessibility information in the Java environment. In this environment, the assistive technology determines that a Java application or applet is running and communicates with the Java Access Bridge DLL to process accessibility information about the application/applet running in the Java Virtual Machine.

Loading assistive technologies for indirect access to the document object model

Access to application-specific data across process boundaries or address spaces might be costly in terms of performance. However, there are other considerations that might lead a developer to access the document object model from the assistive technology's own process or memory address space. One obvious protection this method provides is that, if the user agent fails, it does not disable the user's assistive technology as well. Another consideration is legacy systems, where the user relies on the assistive technology for access to software other than the user agent, and therefore has that assistive technology loaded at all times.

There are several ways to gain access to the user agent's document object model. Most user agents support some kind of external interface, or act as a mini-server to other applications running on the desktop. Internet Explorer [IE-WIN] is a good example of this, as IE can behave as a component object model (COM) server to other applications. Mozilla [MOZILLA], the open source release of Navigator, also supports cross-platform COM (XPCOM).

The following example illustrates the use of COM to access the IE object model. It shows how to use COM to get a pointer to the WebBrowser2 control (the IWebBrowser2 interface), which in turn provides access to an interface pointer to the document object, i.e., the IE document object model for the content.

   /* first, get a pointer to the WebBrowser2 control */
   if (m_pIE == NULL) {
      hr = CoCreateInstance(CLSID_InternetExplorer, 
           NULL, CLSCTX_LOCAL_SERVER, IID_IWebBrowser2, 
           (void**)&m_pIE);
   }

   /* next, get an interface/pointer to the document in view;
      this is an interface to the document object model (DOM) */

   void CHelpdbDlg::Digest_Document() {
      HRESULT hr;
      if (m_pIE != NULL) {
         /* get_Document returns an IDispatch pointer to the document */
         IDispatch* lDisp;
         hr = m_pIE->get_Document(&lDisp);
         if (SUCCEEDED(hr)) {

            IHTMLDocument2* pHTMLDocument2;
            hr = lDisp->QueryInterface(IID_IHTMLDocument2,
                        (void**) &pHTMLDocument2);
            if (SUCCEEDED(hr)) {

               /* with this interface/pointer, IHTMLDocument2*,
                  you can then work on the document */
               IHTMLElementCollection* pColl;
               hr = pHTMLDocument2->get_all(&pColl);
               if (SUCCEEDED(hr)) {

                  LONG c_elem;
                  hr = pColl->get_length(&c_elem);
                  if (SUCCEEDED(hr)) {
                     FindElements(c_elem, pColl);
                  }
                  pColl->Release();
               }
               pHTMLDocument2->Release();
            }
            lDisp->Release();
         }
      }
   }

For a working example of this method, refer to HelpDB [HELPDB].

6 Glossary

Note: In this document, glossary terms generally link to the corresponding entries in this section. These terms are also highlighted through style sheets and identified as glossary terms through markup.

Activate
In this document, the verb "to activate" means (depending on context) either:

The effect of activation depends on the type of enabled element or user interface control. For instance, when a link is activated, the user agent generally retrieves the linked Web resource. When a form control is activated, it may change state (e.g., check boxes) or may take user input (e.g., a text entry field).

Alert
In this document, "to alert" means to make the user aware of some event, without requiring acknowledgement. For example, the user agent may alert the user that new content is available on the server by displaying a text message in the user agent's status bar. See checkpoint 1.3 for requirements about alerts.
Animation
In this document, the term "animation" refers to any visual movement effect created automatically (i.e., without manual user interaction). This definition of animation includes video and animated images. Animation techniques include:
Application Programming Interface (API), standard input/output/device API
An application programming interface (API) defines how communication may take place between applications.

As part of encouraging interoperability, this document recommends using standard APIs where possible, although this document does not define in all cases how those APIs are standardized (i.e., whether they are defined by specifications such as W3C Recommendations, defined by an operating environment vendor, de facto standards, etc.). Implementing APIs that are independent of a particular operating environment (as are the W3C DOM Level 2 specifications) may reduce implementation costs for multi-platform user agents and promote the development of multi-platform assistive technologies. Implementing standard APIs defined for a particular operating environment may reduce implementation costs for assistive technology developers who wish to interoperate with more than one piece of software running on that operating environment.

A "device API" defines how communication may take place with an input or output device such as a keyboard, mouse, video card, etc. A "standard device API" is one that is considered standard for that particular device on a given operating or windowing system.

In this document, an "input/output API" defines how applications or devices communicate with a user agent. As used in this document, input and output APIs include, but are not limited to, device APIs. Input and output APIs also include more abstract communication interfaces than those specified by device APIs. A "standard input/output API" is one that is expected to be implemented by software running on a particular operating environment. Standard input/output APIs may vary from system to system. For example, on desktop computers today, the standard input APIs are for the mouse and keyboard. For touch screen devices or mobile devices, standard input APIs may include stylus, buttons, voice, etc. The graphical display and sound card are considered standard output devices for a graphical desktop computer environment, and each has a standard API.

Assistive technology
In the context of this document, an assistive technology is a user agent that:
  1. relies on services (such as retrieving Web resources, parsing markup, etc.) provided by one or more other "host" user agents. Assistive technologies communicate data and messages with host user agents by using and monitoring APIs.
  2. provides services beyond those offered by the host user agents to meet the requirements of users with disabilities. Additional services include alternative renderings (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, content transformations (e.g., to make tables more accessible), etc.

For example, screen reader software is an assistive technology because it relies on browsers or other software to enable Web access, particularly for people with visual and learning disabilities.

Examples of assistive technologies that are important in the context of this document include the following:

Beyond this document, assistive technologies consist of software or hardware that has been specifically designed to assist people with disabilities in carrying out daily activities, e.g., wheelchairs, reading machines, devices for grasping, text telephones, vibrating pagers, etc. For example, the following very general definition of "assistive technology device" comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:

Any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities.

Attribute
This document uses the term "attribute" in the XML sense: an element may have a set of attribute specifications (refer to the XML 1.0 specification [XML] section 3).
Audio-only presentation
An audio-only presentation is content consisting exclusively of one or more audio tracks presented concurrently or in series. Examples of an audio-only presentation include a musical performance, a radio-style news broadcast, and a book reading.
Audio track
An audio object is content rendered as sound through an audio viewport. An audio track is an audio object that is intended as a whole or partial presentation. An audio track may, but is not required to, correspond to a single audio channel (left or right audio channel).
Auditory description
An auditory description (sometimes, "audio description") is either a prerecorded human voice or a synthesized voice (recorded or generated dynamically) describing the key visual elements of a movie or other animation. The auditory description is synchronized with the audio track of the presentation, usually during natural pauses in the audio track. Auditory descriptions include information about actions, body language, graphics, and scene changes.
Author styles
Author styles are style property values that come from a document, from its associated style sheets, or that are generated by the server.
Captions
Captions (sometimes, "closed captions") are text transcripts that are synchronized with other audio tracks or visual tracks. Captions convey information about spoken words and non-spoken sounds such as sound effects. They benefit people who are deaf or hard-of-hearing, and anyone who cannot hear the audio (e.g., someone in a noisy environment). Captions are generally rendered graphically above, below, or superimposed over video. Note: Other terms that include the word "caption" may have different meanings in this document. For instance, a "table caption" is a title for the table, often positioned graphically above or below the table. In this document, the intended meaning of "caption" will be clear from context.
Character encoding
A "character encoding" is a mapping from a character set definition to the actual code units used to represent the data. Please refer to the Unicode specification [UNICODE] for more information about character encodings. Refer to "Character Model for the World Wide Web" [CHARMOD] for additional information about characters and character encodings.
Collated text transcript
A collated text transcript is a text equivalent of a movie or other animation. More specifically, it is the combination of the text transcript of the audio track and the text equivalent of the visual track. For example, a collated text transcript typically includes segments of spoken dialogue interspersed with text descriptions of the key visual elements of a presentation (actions, body language, graphics, and scene changes). See also the definitions of text transcript and auditory description. Collated text transcripts are essential for individuals who are deaf-blind.
Conditional content
Conditional content is content that, by specification, should be made available to users through the user interface, generally under certain conditions (e.g., user preferences or operating environment limitations). Some examples of conditional content mechanisms include:

Specifications vary in how completely they define how and when to render conditional content. For instance, the HTML 4 specification includes the rendering conditions for the "alt" attribute, but not for the "title" attribute. The HTML 4 specification does indicate that the "title" attribute should be available to users through the user interface ("Values of the title attribute may be rendered by user agents in a variety of ways...").

Note: The Web Content Accessibility Guidelines 1.0 requires that authors provide text equivalents for non-text content. This is generally done by using the conditional content mechanisms of a markup language. Since conditional content may not be rendered by default, the current document requires the user agent to provide access to unrendered conditional content (checkpoint 2.3 and checkpoint 2.9) as it may have been provided to promote accessibility.
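
As an illustration (a sketch added for this discussion, not part of the original techniques; the class and method names are hypothetical), a user agent could expose unrendered conditional content by collecting "alt" and "title" attribute values through the W3C DOM Level 2 Core Java bindings and presenting them on request:

   import org.w3c.dom.Document;
   import org.w3c.dom.Element;
   import org.w3c.dom.NodeList;

   public class ConditionalContentLister {
      // Collect "alt" and "title" values for every IMG element so that
      // the user agent can present this conditional content on request.
      public static void listImageConditionalContent(Document doc) {
         NodeList images = doc.getElementsByTagName("IMG");
         for (int i = 0; i < images.getLength(); i++) {
            Element img = (Element) images.item(i);
            String alt = img.getAttribute("alt");     // empty string if absent
            String title = img.getAttribute("title"); // empty string if absent
            System.out.println("IMG #" + i + " alt=\"" + alt +
                               "\" title=\"" + title + "\"");
         }
      }
   }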

Configure, Control
In the context of this document, the verbs "to control" and "to configure" share in common the idea of governance such as a user may exercise over interface layout, user agent behavior, rendering style, and other parameters required by this document. Generally, the difference in the terms centers on the idea of persistence. When a user makes a change by "controlling" a setting, that change usually does not persist beyond that user session. On the other hand, when a user "configures" a setting, that setting typically persists into later user sessions. Furthermore, the term "control" typically means that the change can be made easily (such as through a keyboard shortcut) and that the results of the change occur immediately, whereas the term "configure" typically means that making the change requires more time and effort (such as making the change via a series of menus leading to a dialog box, via style sheets or scripts, etc.) and that the results of the change may not take effect immediately (e.g., due to time spent reinitializing the system, initiating a new session, rebooting the system). In order to be able to configure and control the user agent, the user needs to be able to "read" as well as "write" values for these parameters. Configuration settings may be stored in a profile. The range and granularity of the changes that can be controlled or configured by the user may depend on limitations of the operating environment or hardware.

Both configuration and control may apply at different "levels": across Web resources (i.e., at the user agent level, or inherited from the operating environment), to the entirety of a Web resource, or to components of a Web resource (e.g., on a per-element basis). In this document, the term global configuration is used to emphasize when a configuration applies across Web resources. For example, users may configure the user agent to apply the same font family across Web resources, so that all text is displayed by default using that font family. On the other hand, the user may wish to configure the rendering of a particular element type, which may be done through style sheets. Or, the user may wish to control the text size dynamically (zooming in and out) for a given document, without having to reconfigure the user agent. Or, the user may wish to control the text size dynamically for a given element, e.g., by navigating to the element and zooming in on it.

User agents may allow users to choose configurations based on various parameters, such as hardware capabilities, natural language, etc.

Note: In this document, the noun "control" means "user interface component" or "form component".

Content
In this specification, the noun "content" is used in three ways:
  1. It is used to mean the document object as a whole or in parts.
  2. It is used to mean the content of an HTML or XML element, in the sense employed by the XML 1.0 specification ([XML], section 3.1): "The text between the start-tag and end-tag is called the element's content." Context should indicate that the term content is being used in this sense.
  3. It is used in the context of the phrases non-text content and text content.
Device-independence
Device-independence refers to the ability to make use of software with any supported input or output device.
Document Object, Document Object Model
In general usage, the term "document object" refers to the user agent's representation of data (e.g., a document). This data generally comes from the document source, but may also be generated (from style sheets, scripts, transformations, etc.), produced as a result of preferences set within the user agent, added as the result of a repair performed automatically by the user agent, etc. Some data that is part of the document object is routinely rendered (e.g., in HTML, what appears between the start and end tags of elements and the values of attributes such as "alt", "title", and "summary"). Other parts of the document object are generally processed by the user agent without user awareness, such as DTD-defined names of element types and attributes, and other attribute values such as "href", "id", etc. These guidelines require that users have access to both types of data through the user interface. Most of the requirements of this document apply to the document object after its construction. However, a few checkpoints (e.g., checkpoint 2.7 and checkpoint 2.10) may affect the construction of the document object.

A "document object model" is the abstraction that governs the construction of the user agent's document object. The document object model employed by different user agents may vary in implementation and sometimes in scope. This specification requires that user agents implement the APIs defined in Document Object Model (DOM) Level 2 Specifications ([DOM2CORE] and [DOM2STYLE]) for access to HTML, XML, and CSS content. These DOM APIs allow authors to access and modify the content via a scripting language (e.g., JavaScript) in a consistent manner across different scripting languages. As a standard interface, the DOM APIs make it easier not just for authors, but for assistive technology developers to extract information and render it in ways most suited to the needs of particular users.

Document character set
A document character set (a concept taken from SGML) is a sequence of abstract characters that may appear in Web content represented in a particular format (such as HTML, XML, etc.). For instance, the character set required by the HTML 4 specification [HTML4] is defined in the Unicode specification [UNICODE]. Refer to "Character Model for the World Wide Web" [CHARMOD] for more information about document character sets.
Document source, Document source view
In this document, the term "document source" refers to the data that the user agent receives as the direct result of a request for a Web resource (e.g., as the result of an HTTP/1.1 [RFC2616] "GET", or as the result of viewing a resource on the local file system). The document source is generally a subset of the document object (e.g., since the document object may include repair content).
Documentation
Documentation refers to information that supports the use of a product. This information may be found in product manuals, installation instructions, the help system, tutorials, etc. Documentation may be distributed (e.g., some parts may be delivered on CD-ROM, others on the Web). Refer to guideline 12 for information about documentation requirements.
Element
This document uses the term "element" both in the XML sense (an element is a syntactic construct as described in the XML 1.0 specification [XML], section 3) and more generally to mean a type of content (such as video or sound) or a logical construct (such as a header or list).
Enabled element, disabled element
An enabled element is a piece of content that is subject to user activation through the user interface or indirectly through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of interactive elements defined by implemented markup languages.

A disabled element is a piece of content that is not an enabled element.

Some elements may only be enabled elements for part of a user session. For instance, an element may be disabled by a script as the result of user interaction. Or, an element may only be enabled during a given time period (e.g., during part of a SMIL 1.0 [SMIL] presentation). Or, the user may be viewing content in "read-only" mode, which may disable some elements.

For the requirements of this document, user selection does not constitute user interaction with enabled elements. See the definition of content focus.

Note: Enabled and disabled elements come from content; they are not part of the user agent user interface.

Note: The term "active element" is not used in this document since it may suggest several different concepts, including: interactive element, enabled element, an element "in the process of being activated" (which is the meaning of ':active' in CSS2 [CSS2], for example).

Equivalent (for content)
The term "equivalent" is used in this document as it is used in the Web Content Accessibility Guidelines 1.0 [WCAG10]:

Content is "equivalent" to other content when both fulfill essentially the same function or purpose upon presentation to the user. In the context of this document, the equivalent must fulfill essentially the same function for the person with a disability (at least insofar as is feasible, given the nature of the disability and the state of technology), as the primary content does for the person without any disability.

Equivalents include text equivalents (e.g., text equivalents for images; text transcripts for audio tracks; collated text transcripts for multimedia presentations and animations) and non-text equivalents (e.g., a prerecorded auditory description of a visual track of a movie, or a sign language video rendition of a written text, etc.).

Each markup language defines its own mechanisms for specifying conditional content, and these mechanisms may be used by authors to provide text equivalents. For instance, in HTML 4 [HTML4] or SMIL 1.0 [SMIL], authors may use the "alt" attribute to specify a text equivalent for some elements. In HTML 4, authors may provide equivalents (or portions of equivalents) in attribute values (e.g., the "summary" attribute for the TABLE element), in element content (e.g., OBJECT for external content it specifies, NOFRAMES for frame equivalents, and NOSCRIPT for script equivalents), and in prose. Please consult the Web Content Accessibility Guidelines 1.0 [WCAG10] and its associated Techniques document [WCAG10-TECHS] for more information about equivalents.

Events and scripting, event handler
User agents often perform a task when an event occurs that is due to user interaction (e.g., document loading, mouse motion or a key press), a request from the operating environment, etc. Some markup languages allow authors to specify that a script, called an event handler, be executed when the event occurs. An event handler is "explicitly associated with an element" when the event handler is associated with that element through markup or the DOM. The term "event bubbling" describes a programming style where a single event handler dispatches events to more than one element. In this case, the event handlers are not explicitly associated with the elements receiving the events (except for the single element that dispatches the events).

Note: The combination of HTML, style sheets, the Document Object Model (DOM) and scripting is commonly referred to as "Dynamic HTML" or DHTML. However, as there is no W3C specification that formally defines DHTML, this document only refers to event handlers and scripts.
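
As an illustration (a sketch based on the DOM Level 2 Events Java bindings [DOM2EVENTS], not taken from the original text), the following code explicitly associates an event listener with an element through the DOM; because DOM events bubble, the same listener registered on an ancestor element would also receive events dispatched to its descendants:

   import org.w3c.dom.Element;
   import org.w3c.dom.events.Event;
   import org.w3c.dom.events.EventListener;
   import org.w3c.dom.events.EventTarget;

   public class ActivationListener implements EventListener {
      // Called whenever a "click" event reaches the element this
      // listener was registered on (directly or by bubbling).
      public void handleEvent(Event evt) {
         System.out.println("Event '" + evt.getType() +
                            "' reached " + evt.getCurrentTarget());
      }

      // Explicitly associate this listener with an element through the DOM.
      // The third argument (false) registers for the bubbling phase.
      public static void register(Element element) {
         ((EventTarget) element).addEventListener("click",
                                                  new ActivationListener(),
                                                  false);
      }
   }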

Explicit user request
In several checkpoints in this document, the term "explicit user request" is used to mean any user interaction recognized with certainty to be for a specific purpose. For instance, when the user selects "New viewport" in the user agent's user interface, this is an explicit user request for a new viewport. On the other hand, it is not an explicit request when the user activates a link and that link has been marked up by the author to open a new viewport (since the user may not know that a new viewport will open). Nor is it an explicit user request even if the link text states "will open a new viewport". Some other examples of explicit user requests include "yes" responses to prompts from the user agent, configuration through the user agent's user interface, activation of known form submit controls, and link activation (which should not be assumed to mean more than "get this linked resource", even if the link text or title or role indicates more). Some examples of behaviors that happen without explicit user request include changes due to scripts.

Note: Users do make mistakes. For example, a user may submit a form inadvertently by activating a known form submit control. In this document, this type of mistake is still considered an explicit user request.

Fee link
For the purpose of this document, the term "fee link" refers to a link that when activated, debits the user's electronic "wallet" (generally, a "micropayment"). The link's role as a fee link is identified through markup (in a manner that the user agent can recognize). This definition of fee link excludes payment mechanisms (e.g., some form-based credit card transactions) that cannot be recognized by the user agent as causing payments. For more information about fee links, refer to "Common Markup for micropayment per-fee-links" [MICROPAYMENT].
Focus, content focus, user interface focus, current focus
Focus is a user interface mechanism that has the following properties:

User agents generally implement two types of focus:

In this document, the unmodified term "focus" means both "content focus" and "user interface focus".

When several viewports coexist, at most one viewport's content focus or user interface focus receives input events; this is called the current focus.

Graphical
In this document, the term "graphical" refers to information (text, colors, graphics, images, animations, etc.) rendered for visual consumption.
Highlight
In this document, "to highlight" means to emphasize through the user interface. For example, user agents highlight which content is selected or focused. Graphical highlight mechanisms include dotted boxes, underlining, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume.
Input configuration
An input configuration is the mapping of user agent functionalities to some user interface input mechanisms (e.g., menus, buttons, keyboard keys, voice commands, etc.). The default input configuration is the mapping the user finds after installation of the software; it must be documented (per checkpoint 12.3). Input configurations may be affected by author-specified bindings (e.g., through the "accesskey" attribute of HTML 4 [HTML4]).
Interactive element
An interactive element is a piece of content that is intended, by specification, to be an enabled element in some user sessions. For instance, the interactive elements of HTML 4 [HTML4] include: links, image maps, form controls, elements with a value for the "longdesc" attribute, and elements with event handlers explicitly associated with them (e.g., through the various "on" attributes). The role of an element as an interactive element is subject to applicability.
Natural language
Natural language is spoken, written, or signed human language such as French, Japanese, and American Sign Language. On the Web, the natural language of content may be specified by markup or HTTP headers. Some examples include the "lang" attribute in HTML 4 ([HTML4] section 8.1), the "xml:lang" attribute in XML 1.0 ([XML], section 2.12), the HTML 4 "hreflang" attribute for links in HTML 4 ([HTML4], section 12.1.5), the HTTP Content-Language header ([RFC2616], section 14.12) and the Accept-Language request header ([RFC2616], section 14.4). See also the definition of script.
Normative, informative
As used in this document, the term "normative" refers to "that on which the requirements of this document depend for their most precise statement." What is normative is required for conformance (though the conformance scheme of this document allows claimants to exempt certain normative provisions as long as the claim discloses the exemption). What is identified as "informative" (sometimes, "non-normative") is never required for conformance.
Operating environment
The term "operating environment" refers to the environment that governs the user agent's operation, whether it is an operating system or a programming language environment such as Java.
Placeholder
A placeholder is content generated by the user agent to replace author-supplied content. A placeholder may be generated as the result of a user preference (e.g., to not render images) or as repair content (e.g., when an image cannot be found). Placeholders can be any type of content, including text and images.

This document includes requirements that the user be able to view the original author-supplied content associated with a placeholder. To satisfy these requirements, the user agent might render the content in place of the placeholder or in a separate viewport (leaving the placeholder as is). A request to view the original content associated with a placeholder is considered an explicit user request to render that content.

This document does not require user agents to include placeholders in the document object. A placeholder that is inserted in the document object should conform to the Web Content Accessibility Guidelines 1.0 [WCAG10]. If a placeholder is not part of the document object, it is part of the user interface only (and subject, for example, to checkpoint 1.3).

Point of regard
The point of regard is a position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard may vary. For example, it may be a point (e.g., a moment in an audio rendering or a cursor in a graphical rendering), or a range of text (e.g., focused text), or a two-dimensional area (e.g., content rendered through a two-dimensional graphical viewport). The point of regard is almost always within a viewport (though the dimensions of the point of regard could exceed those of the viewport). The point of regard may also refer to a particular moment in time for content that changes over time (e.g., an audio-only presentation). User agents may determine the point of regard in a number of ways, including based on viewport position in content, content focus, selection, etc. A user agent should not change the point of regard unexpectedly as this may disorient the user.
Profile
A profile is a named and persistent representation of user preferences that may be used to configure a user agent. Preferences include input configurations, style preferences, natural language preferences, etc. In operating environments with distinct user accounts, profiles enable users to reconfigure software quickly when they log on, and profiles may be shared by several users. Platform-independent profiles are useful for those who use the same user agent on different platforms.
Prompt
In this document, "to prompt" means to require input from the user. The user agent should allow users to configure how they wish to be prompted. For instance, for a user agent functionality X, configurations might include: always do X without prompting me, never do X without prompting me, never do X but tell me when you could have, never do X and never tell me that you could have, etc.
Properties, values, and defaults
A user agent renders a document by applying formatting algorithms and style information to the document's elements. Formatting depends on a number of factors, including where the document is rendered: on screen, on paper, through loudspeakers, on a braille display, on a mobile device, etc. Style information (e.g., fonts, colors, speech prosody, etc.) may come from the elements themselves (e.g., certain font and phrase elements in HTML), from style sheets, or from user agent settings. For the purposes of these guidelines, each formatting or style option is governed by a property and each property may take one value from a set of legal values. Generally in this document, the term "property" has the meaning defined in CSS 2 ([CSS2], section 3). A reference to "styles" in this document means a set of style-related properties.
The value given to a property by a user agent when it is installed is called the property's default value.
Recognize
Authors encode information in markup languages, style sheet languages, scripting languages, protocols, etc. When the information is encoded in a manner that allows the user agent to process it with certainty, the user agent can "recognize" the information. For instance, HTML allows authors to specify a heading with the H1 element, so a user agent that implements HTML can recognize that content as a heading. If the author creates headings using a visual effect alone (e.g., by increasing the font size), then the author has encoded the heading in a manner that does not allow the user agent to recognize it as a heading.

Some requirements of this document depend on content roles, content relationships, timing relationships, and other information supplied by the author. These requirements only apply when the author has encoded that information in a manner that the user agent can recognize. See the section on conformance in User Agent Accessibility Guidelines 1.0 [UAAG10] for more information about applicability.

In practice, user agents will rely heavily on information that the author has encoded in a markup language or style sheet language. On the other hand, behaviors, style, and meaning encoded in a script may not be recognized by the user agent as easily. For instance, a user agent is not expected to recognize that, when executed, a script will calculate a factorial. The user agent will be able to recognize some information in a script by virtue of implementing the scripting language or a known program library (e.g., the user agent is expected to recognize when a script will open a viewport or retrieve a resource from the Web).
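
For example (a hypothetical sketch added for illustration, not part of the original text), a user agent that implements HTML can recognize headings encoded as H1 elements through the document object model, whereas text that merely looks like a heading (e.g., through a larger font size alone) cannot be recognized this way:

   import org.w3c.dom.Document;
   import org.w3c.dom.NodeList;

   public class HeadingCounter {
      // Headings encoded as H1 elements are recognizable by the user agent;
      // text styled to look like a heading is not.
      public static int countTopLevelHeadings(Document doc) {
         NodeList headings = doc.getElementsByTagName("H1");
         return headings.getLength();
      }
   }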

Rendered content, rendered text
Rendered content is the part of content capable of being perceived by a user through a given viewport (whether visual, auditory, or tactile). Some rendered content may lie "outside" of a viewport at some times (e.g., when the user can only view a portion of a large document through a small graphical viewport, when audio content has already been played, etc.). By changing the viewport's position, the user can view the remaining rendered content.
Note: In the context of this document, "invisible content" is content that influences graphical rendering of other content but is not rendered itself. Similarly, "silent content" is content that influences audio rendering of other content but is not rendered itself. Neither invisible nor silent content is considered rendered content.
Repair content, repair text
In this document, the term "repair content" refers to content generated by the user agent in order to correct an error condition. "Repair text" means repair content consisting only of text. Some error conditions that may lead to the generation of repair content include:

This document does not require user agents to include repair content in the document object. Repair content inserted in the document object should conform to the Web Content Accessibility Guidelines 1.0 [WCAG10]. For more information about repair techniques for Web content and software, refer to "Techniques for Authoring Tool Accessibility Guidelines 1.0" [ATAG10-TECHS].

Script
In this document, the term "script" almost always refers to a scripting (programming) language used to create dynamic Web content. However, in checkpoints referring to the written (natural) language of content, the term "script" is used as in Unicode [UNICODE] to mean "A collection of symbols used to represent textual information in one or more writing systems."
Selection, current selection
The selection generally identifies a range of content (e.g., text, images, etc.) in a document. This range may be empty. The selection may be structured (based on the document tree) or unstructured (e.g., text-based). Content may be selected through user interaction, scripts, etc. The selection may be used for a variety of purposes: for cut and paste operations, to designate a specific element in a document for the purposes of a query, to identify what a screen reader should read, etc. The selection may be set by the user (e.g., by a pointing device or the keyboard) or through an application programming interface (API).

A viewport has at most one selection (though the selection may be rendered graphically as discontinuous text fragments). When several viewports coexist, at most one viewport's selection receives input events; this is called the current selection.

On the screen, the selection may be highlighted using colors, fonts, graphics, magnification, etc. The selection may also be rendered through changes in speech prosody, for example.
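
A structured selection might be represented internally with a DOM Level 2 Range ([DOM2RANGE]); the following is a minimal sketch (assumed for illustration, not from the original document) that selects the contents of a given element and retrieves the selected text:

   import org.w3c.dom.Document;
   import org.w3c.dom.Element;
   import org.w3c.dom.ranges.DocumentRange;
   import org.w3c.dom.ranges.Range;

   public class SelectionExample {
      // Create a range that spans the contents of one element and
      // return the selected text (e.g., for cut and paste, or for
      // a screen reader to announce).
      public static String selectElementContents(Document doc, Element element) {
         Range range = ((DocumentRange) doc).createRange();
         range.selectNodeContents(element);
         String selectedText = range.toString();
         range.detach();
         return selectedText;
      }
   }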

Support, implement, conform
In this document, the terms "support", "implement", and "conform" all refer to what a developer has designed a user agent to do, but they represent different degrees of specificity. A user agent "supports" general classes of objects, such as "images" or "Japanese". A user agent "implements" a specification (e.g., the PNG and SVG image format specifications, a particular scripting language, etc.) or an API (e.g., the DOM API) when it has been programmed to follow all or part of a specification. A user agent "conforms to" a specification when it implements the specification and satisfies its conformance criteria. This document includes some explicit conformance requirements (e.g., to a particular level of the "Web Content Accessibility Guidelines 1.0" [WCAG10]).
Synchronize
In this document, "to synchronize" refers to the time-coordination of two or more presentation components (e.g., in a multimedia presentation, a visual track with captions). For Web content developers, the requirement to synchronize means to provide the data that will permit sensible time-coordinated rendering by a user agent. For example, Web content developers can ensure that the segments of caption text are neither too long nor too short, and that they map to segments of the visual track that are appropriate in length. For user agent developers, the requirement to synchronize means to present the content in a sensible time-coordinated fashion under a wide range of circumstances including technology constraints (e.g., small text-only displays), user limitations (slow reading speeds, large font sizes, high need for review or repeat functions), and content that is sub-optimal in terms of accessibility.
Text
In this document, the term "text" used by itself refers to a sequence of characters from a markup language's document character set. Refer to the "Character Model for the World Wide Web " [CHARMOD] for more information about text and characters. Note: This document makes use of other terms that include the word "text" that have highly specialized meanings: collated text transcript, non-text content, text content, non-text element, text element, text equivalent, and text transcript.
Text content, non-text content, text element, non-text element, text equivalent, non-text equivalent
As used in this document, a "text element" adds text characters to either content or the user interface. Both in the Web Content Accessibility Guidelines 1.0 [WCAG10] and in this document, text elements are presumed to produce text that can be understood when rendered visually, as speech, or as Braille. Such text elements benefit at least these three groups of users:
  1. visually-displayed text benefits users who are deaf and adept in reading visually-displayed text;
  2. synthesized speech benefits users who are blind and adept in use of synthesized speech;
  3. braille benefits users who are deaf-blind and adept at reading braille.

A text element may consist of both text and non-text data. For instance, a text element may contain markup for style (e.g., font size or color), structure (e.g., heading levels), and other semantics. The essential function of the text element should be retained even if style information happens to be lost in rendering.

A user agent may have to process a text element in order to have access to the text characters. For instance, a text element may consist of markup, it may be encrypted or compressed, or it may include embedded text in a binary format (e.g., JPEG).

"Text content" is content that is composed of one or more text elements. A "text equivalent" (whether in content or the user interface) is an equivalent composed of one or more text elements. Authors generally provide text equivalents for content by using the conditional content mechanisms of a specification.

A "non-text element" is an element (in content or the user interface) that does not have the qualities of a text element. "Non-text content" is composed of one or more non-text elements. A "non-text equivalent" (whether in content or the user interface) is an equivalent composed of one or more non-text elements.

Note that the terms "text element" and "non-text element" are defined by the characteristics of their output (e.g., rendering) rather than those of their input (e.g., information sources) or their internals (e.g., format). Both text elements and non-text elements should be understood as "pre-rendering" content in contrast to the "post-rendering" content that they produce.

Text decoration
In this document, a "text decoration" is any stylistic effect that the user agent may apply to visually rendered text that does not affect the layout of the document (i.e., does not require reformatting when applied or removed). Text decoration mechanisms include underline, overline, and strike-through.
Text transcript
A text transcript is a text equivalent of audio information (e.g., an audio-only presentation or the audio track of a movie or other animation). It provides text for both spoken words and non-spoken sounds such as sound effects. Text transcripts make audio information accessible to people who have hearing disabilities and to people who cannot play the audio. Text transcripts are usually pre-written but may be generated on the fly (e.g., by speech-to-text converters). See also the definitions of captions and collated text transcripts.
User agent
In this document, the term "user agent" is used in two ways:
  1. Any software that retrieves and renders Web content for users. This may include Web browsers, media players, plug-ins, and other programs -- including assistive technologies -- that help in retrieving and rendering Web content.
  2. The subject of a conformance claim to User Agent Accessibility Guidelines 1.0 [UAAG10]. This is the most common use of the term in this document and is the usage in the checkpoints.
User agent default styles
User agent default styles are style property values applied in the absence of any author or user styles. Some markup languages specify a default rendering for documents in that markup language. Other specifications may not specify default styles. For example, XML 1.0 [XML] does not specify default styles for XML documents. HTML 4 [HTML4] does not specify default styles for HTML documents, but the CSS 2 [CSS2] specification suggests a sample default style sheet for HTML 4 based on current practice.
User interface
For the purposes of this document, user interface includes both:
  1. the "user agent user interface", i.e., the controls and mechanisms offered by the user agent for user interaction, such as menus, buttons, keyboard access, etc.
  2. the "content user interface", i.e., the enabled elements that are part of content, such as form controls, links, applets, etc.
The document distinguishes them only where required for clarity.
User styles
User styles are style property values that come from user interface settings, user style sheets, or other user interactions.
Visual-only presentation
A visual-only presentation is content consisting exclusively of one or more visual tracks presented concurrently or in series. An example of a visual-only presentation is a silent movie.
Visual track
A visual object is content rendered through a graphical viewport. Visual objects include graphics, text, and visual portions of movies and other animations. A visual track is a visual object that is intended as a whole or partial presentation. A visual track does not necessarily correspond to a single physical object or software object. A visual track may be text-based or graphic. A visual track may be static or involve animation.
Views, viewports
User agents may handle different types of content: markup language, sound, video, etc. The user views rendered content through a viewport. Viewports include windows, frames, pieces of paper, loudspeakers, virtual magnifying glasses, etc. A viewport may contain another viewport (e.g., nested frames). User interface controls such as prompts, menus, alerts, etc. are not viewports. When the dimensions (spatial or temporal) of a viewport exceed the dimensions of rendered content, the viewport includes mechanisms such as scroll bars and advance and rewind functionalities to provide access to the content.

When several viewports coexist, only one has the current focus at a given moment. This viewport is highlighted to make it stand out.

User agents may render the same content in a variety of ways; each rendering is called a view. For instance, a user agent may allow users to view an entire document or just a list of the document's headers. These are two different views of the document.

Voice browser
From "Introduction and Overview of W3C Speech Interface Framework" [VOICEBROWSER]: "A voice browser is a device (hardware and software) that interprets voice markup languages to generate voice output, interpret voice input, and possibly accept and produce other modalities of input and output."
Web resource
The term "Web resource" is used in this document in accordance with Web Characterization Terminology and Definitions Sheet [WEBCHAR] to mean anything that can be identified by a Uniform Resource Identifier (URI); refer to RFC 2396 [RFC2396].

7 References

For the latest version of any W3C specification please consult the list of W3C Technical Reports at http://www.w3.org/TR/. Some documents listed below may have been superseded since the publication of this document.

Note: In this document, bracketed labels such as "[HTML4]" link to the corresponding entries in this section. These labels are also identified as references through markup.

7.1 How to refer to this document

There are two recommended ways to refer to the "Techniques for User Agent Accessibility Guidelines 1.0" (and to W3C documents in general):

  1. References to a specific version of "Techniques for User Agent Accessibility Guidelines 1.0". For example, use the "this version" URI to refer to the current document: http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010331.
  2. References to the latest version of "Techniques for User Agent Accessibility Guidelines 1.0". Use the "latest version" URI to refer to the most recently published document in the series: http://www.w3.org/WAI/UA/UAAG10-TECHS.

In almost all cases, references (either by name or by link) should be to a specific version of the document. W3C will make every effort to make this document indefinitely available at its original address in its original form. The top of this document includes the relevant catalog metadata for specific references (including title, publication date, "this version" URI, editors' names, and copyright information).

An XHTML 1.0 [XHTML10] paragraph including a reference to this specific document might be written:

<p>
<cite><a href="http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010331/">
"Techniques for User Agent Accessibility Guidelines 1.0"</a></cite>,
I. Jacobs, J. Gunderson, E. Hansen, eds.,
W3C Working Draft, 31 March 2001.
The <a href="http://www.w3.org/WAI/UA/UAAG10-TECHS/">latest
version</a> of this document is available at
http://www.w3.org/WAI/UA/UAAG10-TECHS/.</p>

For very general references to this document (where stability of content, anchors, etc., is not required), it may be appropriate to refer to the latest version of this document. In this case, please use the "latest version" URI at the top of this document.

7.2 Normative references

[DOM2CORE]
"Document Object Model (DOM) Level 2 Core Specification", A. Le Hors, P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion, S. Byrne, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/.
[DOM2STYLE]
"Document Object Model (DOM) Level 2 Style Specification", V. Apparao, P. Le Hégaret, C. Wilson, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113/.
[RFC2046]
"Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types", N. Freed, N. Borenstein, November 1996.
[UAAG10]
"User Agent Accessibility Guidelines 1.0", I. Jacobs, J. Gunderson, E. Hansen, eds. The latest draft of the guidelines is available at http://www.w3.org/WAI/UA/UAAG10/.
[WCAG10]
"Web Content Accessibility Guidelines 1.0", W. Chisholm, G. Vanderheiden, and I. Jacobs, eds., 5 May 1999. This W3C Recommendation is http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/.

7.3 Informative references

[AT1998]
The Assistive Technology Act of 1998, 13 November 1998, United States P.L. 105-394.
[ATAG10]
"Authoring Tool Accessibility Guidelines 1.0", J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
[ATAG10-TECHS]
"Techniques for Authoring Tool Accessibility Guidelines 1.0", J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 4 May 2000. This W3C Note is http://www.w3.org/TR/2000/NOTE-ATAG10-TECHS-20000504/.
[CHARMOD]
"Character Model for the World Wide Web", M. Dürst and F. Yergeau, eds., 29 November 1999. This W3C Working Draft is http://www.w3.org/TR/1999/WD-charmod-19991129/
[CSS-ACCESS]
"Accessibility Features of CSS", I. Jacobs, J. Brewer, 4 August 1999. This W3C Note is http://www.w3.org/1999/08/NOTE-CSS-access-19990804.
[CSS1]
"CSS, level 1 Recommendation", B. Bos, H. Wium Lie, eds., 17 December 1996, revised 11 January 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-CSS1-19990111.
[CSS2]
"CSS, level 2 Recommendation", B. Bos, H. Wium Lie, C. Lilley, and I. Jacobs, eds., 12 May 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-CSS2-19980512/.
[DOM2EVENTS]
"Document Object Model (DOM) Level 2 Events Specification", T. Pixley, ed., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Events-20001113/.
[DOM2RANGE]
"Document Object Model (DOM) Level 2 Traversal and Range Specification", J. Kesselman, J. Robie, M. Champion, P. Sharpe, V. Apparao, and L. Wood, eds., 13 November 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-DOM-Level-2-Traversal-Range-20001113/.
[HTML4]
"HTML 4.01 Recommendation", D. Raggett, A. Le Hors, and I. Jacobs, eds., 24 December 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-html401-19991224/.
[MATHML20]
"Mathematical Markup Language (MathML) Version 2.0", D. Carlisle, P. Ion, R. Miner, N. Poppelier, et al., 21 February 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-MathML2-20010221/.
[MICROPAYMENT]
"Common Markup for micropayment per-fee-links", T. Michel, ed., 25 August 1999. This W3C Working Draft is http://www.w3.org/TR/1999/WD-Micropayment-Markup-19990825/.
[PNG]
"PNG (Portable Network Graphics) Specification 1.0", T. Boutell, ed., 1 October 1996. This W3C Recommendation is http://www.w3.org/TR/REC-png.
[RFC2396]
"Uniform Resource Identifiers (URI): Generic Syntax", T. Berners-Lee, R. Fielding, L. Masinter, August 1998.
[RFC2616]
"Hypertext Transfer Protocol -- HTTP/1.1", J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
[SMIL]
"Synchronized Multimedia Integration Language (SMIL) 1.0 Specification", P. Hoschka, ed., 15 June 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-smil-19980615/.
[SMIL-ACCESS]
"Accessibility Features of SMIL", M-R. Koivunen, I. Jacobs, 21 September 1999. This W3C Note is http://www.w3.org/TR/1999/NOTE-SMIL-access-19990921/.
[SMIL20]
"Synchronized Multimedia Integration Language (SMIL 2.0) Specification", J. Ayars, et al., eds., 1 March 2001. This W3C Working Draft is http://www.w3.org/TR/2001/WD-smil20-20010301/. The latest version of SMIL 2.0 is available at http://www.w3.org/TR/smil20.
[SVG]
"Scalable Vector Graphics (SVG) 1.0 Specification", J. Ferraiolo, ed., 2 August 2000. This W3C Candidate Recommendation is http://www.w3.org/TR/2000/CR-SVG-20000802/.
[SVG-ACCESS]
"Accessibility Features of SVG", C. McCathieNevile and M.-R. Koivunen, 7 August 2000. This W3C Note is http://www.w3.org/TR/2000/NOTE-SVG-access-20000807/.
[UNICODE]
"The Unicode Standard, Version 3.0", The Unicode Consortium, Reading, MA, Addison-Wesley Developers Press, 2000. ISBN 0-201-61633-5. Refer also to http://www.unicode.org/unicode/standard/versions/. For information about character encodings, refer to Unicode Technical Report #17 "Character Encoding Model".
[VOICEBROWSER]
"Voice Browsers: An introduction and glossary for the requirements drafts", M. Robin, J. Larson, 23 December 1999. This document is http://www.w3.org/TR/1999/WD-voice-intro-19991223/. This document includes references to additional W3C specifications about voice browser technology.
[WCAG10-TECHS]
"Techniques for Web Content Accessibility Guidelines 1.0", W. Chisholm, G. Vanderheiden, and I. Jacobs, eds. This W3C Note is http://www.w3.org/TR/1999/WAI-WEBCONTENT-TECHS-19990505/.
[WEBCHAR]
"Web Characterization Terminology and Definitions Sheet", B. Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft that defines some terms to establish a common understanding about key Web concepts. This W3C Working Draft is http://www.w3.org/1999/05/WCA-terms/01.
[XHTML10]
"XHTML[tm] 1.0: The Extensible HyperText Markup Language", S. Pemberton, et al., 26 January 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
[XLINK]
"XML Linking Language (XLink) Version 1.0", S. DeRose, E. Maler, D. Orchard, B. Trafford, eds., 3 July 2000. This XML 1.0 Candidate Recommendation is http://www.w3.org/TR/2000/CR-xlink-20000703/.
[XML]
"Extensible Markup Language (XML) 1.0", T. Bray, J. Paoli, C.M. Sperberg-McQueen, eds., 10 February 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-xml-19980210.
[XMLSTYLE]
"Associating Style Sheets with XML documents Version 1.0", J. Clark, ed., 29 June 1999. This W3C Recommendation is http://www.w3.org/1999/06/REC-xml-stylesheet-19990629/
[XSLT]
"XSL Transformations (XSLT) Version 1.0", J. Clark, 16 November 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-xslt-19991116.

8 Resources

Note: W3C does not guarantee the stability of any of the following references outside of its control. These references are included for convenience. References to products are not endorsements of those products.

8.1 Operating system and programming guidelines

[APPLE-HI]
Refer to the following guidelines from Apple:
[BHO]
Browser Helper Objects: The Browser the Way You Want It, D. Esposito, January 1999. Refer also to http://support.microsoft.com/support/kb/articles/Q179/2/30.asp.
[ED-DEPT]
"Requirements for Accessible Software Design", US Department of Education, version 1.1 March 6, 1997.
[EITAAC]
"EITAAC Desktop Software standards", Electronic Information Technology Access Advisory (EITAAC) Committee.
[IBM-ACCESS]
"Software Accessibility", IBM Special Needs Systems.. Refer to the IBM guidelines for software accessibility, IBM guidelines for Java accessibility.
[ICCCM]
"The Inter-Client communication conventions manual". A protocol for communication between clients in the X Window system.
[ICE-RAP]
"An ICE Rendezvous Mechanism for X Window System Clients", W. Walker. A description of how to use the ICE and RAP protocols for X Window clients.
[JAVA-ACCESS]
"IBM Guidelines for Writing Accessible Applications Using 100% Pure Java", R. Schwerdtfeger, IBM Special Needs Systems.
[JAVA-CHECKLIST]
"Java Accessibility Guidelines and Checklist". IBM Special Needs Systems.
[JAVA-TUT]
"The Java Tutorial. Trail: Creating a GUI with JFC/Swing". An online tutorial that describes how to use the Swing Java Foundation Class to build an accessible user interface. Refer also to information on the Java Foundation Classes.
[JAVA13]
Refer to information about character encodings required by Java version 1.3.
[JAVAAPI]
Information on Java Accessibility API can be found at Java Accessibility Utilities.
[MOTIF]
The OSF/Motif Style Guide.
[MS-ENABLE]
Software accessibility guidelines for Windows applications. Refer also to Built-in accessibility features.
[MS-KEYBOARD]
Information on keyboard assistance for Internet Explorer and MS Windows.
[MS-SOFTWARE]
"The Microsoft Windows Guidelines for Accessible Software Design". Note: This page summarizes the guidelines and includes links to the full guidelines in various formats (including plain text).
[MSAA]
Information on Microsoft Active Accessibility can be found at the Microsoft Active Accessibility home page.
[NISO]
National Information Standards Organization. One activity pursued by this organization concerns Digital Talking Books. Refer to the "Digital Talking Book Features List" and "Digital Talking Book Standards Committee Document Navigation Features List" drafts for more information.
[NOTES-ACCESS]
"Lotus Notes Accessibility Guidelines" IBM Special Needs Systems.
[PHOTO-RDF]
"Describing and retrieving photos using RDF and HTTP", Y. Lafon and B. Bos. The 3 May 2000 version of the W3C Note is http://www.w3.org/TR/2000/NOTE-photo-rdf-20000503/.
[SAMI]
Information on Synchronized Accessible Multimedia Interchange (SAMI) accessibility.
[SUN-DESIGN]
Articles, Talks, and Papers from Sun Microsystems about accessibility.
[SUN-HCI]
"Towards Accessible Human-Computer Interaction", Eric Bergman, Earl Johnson, Sun Microsytems 1995. A substantial paper, with a valuable print bibliography.
[TRACE-EZ]
"EZ ACCESS(tm) for electronic devices V 2.0 implementation guide", C. M. Law, G. C. Vanderheiden, 23 February 2000. This guide, developed by the Trace Research and Development Center, describes a simple set of interface enhancements that can be applied to electronic devices so that they can be used by people with disabilities, or anyone who experiences difficulty using a device in the standard method of operation.
[TRACE-REF]
"Application Software Design Guidelines" compiled by G. Vanderheiden. A thorough reference work.
[WHAT-IS]
"What is Accessible Software", James W. Thatcher, Ph.D., IBM, 1997. This paper, available at the IBM Accessibility Center, gives a short example-based introduction to the difference between software that is accessible, and software that can be used by some assistive technologies.
[XGUIDELINES]
Information on accessibility guidelines for Unix and X Window applications. The Open Group has various guides that explain the Motif and Common Desktop Environment (CDE) with topics like how users interact with Motif/CDE applications and how to customize these environments. Note: In X, the terms client and server are used differently from their use when discussing the Web.

8.2 User agents and other tools

A list of alternative Web browsers (assistive technologies and other user agents designed for accessibility) is maintained at the WAI Web site.

[ADOBE]
access.adobe.com. Tools and information about Adobe PDF and accessibility.
[ALTIFIER]
The Altifier Tool generates "alt" text intelligently.
[AMAYA]
Amaya is W3C's test-bed browser and editor.
[AWB]
The Accessible Web Browser, a senior project at the University of Illinois at Urbana-Champaign.
[CSSVALIDATOR]
W3C's CSS Validator service.
[DIRECTDOM]
DirectDom technology, available from alphaWorks, allows a Java developer to manipulate the live Document Object Model of a browser or Scalable Vector Graphics plugin to build rich graphical user interfaces.
[G2]
The G2 player version 7 for Windows.
[HELPDB]
HelpDB is a test tool for Web table navigation.
[HPR]
Home Page Reader.
[IE-WIN]
Internet Explorer 5.0 for Windows 95, Windows 98, and Windows NT. Refer also to information on using COM with IE. Refer also to information about monitoring HTML events in the IE document object model.
[JFW]
JAWS for Windows.
[LYNX]
The Lynx Browser.
[MOZILLA]
The Mozilla browser.
[NAVIGATOR]
Netscape Navigator.
[OPERA]
The Opera Browser.
[QUICKTIME]
The QuickTime player.
[TABLENAV]
A table navigation script from the Trace Research Center.
[VALIDATOR]
W3C's HTML/XML Validator service.
[VIAVOICE]
ViaVoice speech recognition software.
[WINDOWEYES]
Window-Eyes.
[WINVISION]
Winvision.

8.3 Accessibility resources

[BRAILLEFORMATS]
"Braille Formats: Principles of Print to Braille Transcription 1997" .
[NBA]
The National Braille Association.
[NBP]
The National Braille Press.
[RFBD]
Recording for the Blind and Dyslexic.
[SAPI]
Microsoft's Speech Application Programming Interface.
[SPEAK2WRITE]
Speak to Write is a site about using speech recognition to promote accessibility.

8.4 Standards resources

[ISO639]
"Codes for the representation of names of languages", ISO 639:1988. For more information, consult http://www.iso.ch/cate/d4766.html. Refer also to http://www.oasis-open.org/cover/iso639a.html.

9 Acknowledgments

The active participants of the User Agent Accessibility Guidelines Working Group who authored this document were: James Allan, Denis Anson (College Misericordia), Harvey Bingham, Al Gilman, Jon Gunderson (Chair of the Working Group, University of Illinois at Urbana-Champaign), Eric Hansen (Educational Testing Service), Ian Jacobs (Team Contact, W3C), Tim Lacy (Microsoft), Charles McCathieNevile (W3C), David Poehlman, Mickey Quenzer, Gregory Rosmaita (Visually Impaired Computer Users Group of New York City), and Rich Schwerdtfeger (IBM).

Many thanks to the following people who have contributed through review and past participation in the Working Group: Paul Adelson, Kitch Barnicle, Olivier Borius, Judy Brewer, Dick Brown, Bryan Campbell, Kevin Carey, Tantek Çelik, Wendy Chisholm, David Clark, Chetz Colwell, Wilson Craig, Nir Dagan, Daniel Dardailler, B. K. Delong, Neal Ewers, Geoff Freed, John Gardner, Larry Goldberg, Glen Gordon, John Grotting, Markku Hakkinen, Earle Harrison, Chris Hasser, Kathy Hewitt, Philipp Hoschka, Masayasu Ishikawa, Phill Jenkins, Earl Johnson, Jan Kärrman (for help with html2ps), Leonard Kasday, George Kerscher, Marja-Riitta Koivunen, Peter Korn, Josh Krieger, Catherine Laws, Aaron Leventhal, Greg Lowney, Susan Lesch, Scott Luebking, William Loughborough, Napoleon Maou, Peter Meijer, Karen Moses, Masafumi Nakane, Mark Novak, Charles Oppermann, Mike Paciello, David Pawson, Michael Pederson, Helen Petrie, Michael Pieper, Jan Richards, Hans Riesebos, Joe Roeder, Lakespur L. Roca, Madeleine Rothberg, Lloyd Rutledge, Liam Quinn, T.V. Raman, Robert Savellis, Constantine Stephanidis, Jim Thatcher, Jutta Treviranus, Claus Thogersen, Steve Tyler, Gregg Vanderheiden, Jaap van Lelieveld, Jon S. von Tetzchner, Willie Walker, Ben Weiss, Evan Wies, Chris Wilson, Henk Wittingen, and Tom Wlodkowski.