Techniques for User Agent Accessibility Guidelines 1.0

1 The user agent accessibility guidelines



This section lists each checkpoint of "User Agent Accessibility Guidelines 1.0" [UAAG10] along with some possible techniques for satisfying it. Each checkpoint includes a link to its definition in "User Agent Accessibility Guidelines 1.0" and is followed by one or more of the following: notes and rationale, example techniques, suggestions for doing more, related techniques, and references.

Note: Most of the techniques in this document are designed for mainstream (graphical) browsers and multimedia players. However, some of them also make sense for assistive technologies and other user agents. In particular, techniques about communication between user agents will benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies for access to the document object model.

Priorities

Each checkpoint in this document is assigned a priority that indicates its importance for users with disabilities.

[Priority 1]
This checkpoint must be satisfied by user agents, otherwise one or more groups of users with disabilities will find it impossible to access the Web. Satisfying this checkpoint is a basic requirement for enabling some people to access the Web.
[Priority 2]
This checkpoint should be satisfied by user agents, otherwise one or more groups of users with disabilities will find it difficult to access the Web. Satisfying this checkpoint will remove significant barriers to Web access for some people.
[Priority 3]
This checkpoint may be satisfied by user agents to make it easier for one or more groups of users with disabilities to access information. Satisfying this checkpoint will improve access to the Web for some people.

Note: This information about checkpoint priorities is included for convenience only. For detailed information about conformance to "User Agent Accessibility Guidelines 1.0" [UAAG10], please refer to that document.

Guideline 1. Support input and output device-independence.

Checkpoints

1.1 Ensure that the user can operate the user agent fully through keyboard input alone. [Priority 1] Both content and user agent. ( Checkpoint 1.1)
Note: For example, ensure that the user can interact with enabled elements, select content, navigate viewports, configure the user agent, access documentation, install the user agent, operate controls of the user interface, etc., all entirely through keyboard input. It is also possible to claim conformance to User Agent Accessibility Guidelines 1.0 [UAAG10] for full support through pointing device input and voice input. See the section on input modality labels in UAAG 1.0.
Notes and rationale:
  1. For instance, the user must be able to do the following through the keyboard alone (or pointing device alone or voice alone):
    • Select content and operate on it. For example, if the user can select rendered text with the mouse and make it the content of a new link by pushing a button, they also need to be able to do so through the keyboard and other supported devices. Other operations include cut, copy, and paste.
    • Set the focus on viewports and on enabled elements.
    • Install, configure, uninstall, and update the user agent software.
    • Use the graphical user interface menus. Some users may wish to use the graphical user interface even if they cannot use or do not wish to use the pointing device.
    • Fill out forms.
    • Access documentation.
  2. Suppose a user agent does not allow complete operation through the keyboard alone. It is still possible to claim conformance for the user agent in conjunction with a special module designed to "fill in the gap".

1.2 For the element with content focus, allow the user to activate any explicitly associated input device event handlers through keyboard input alone. [Priority 1] Content only. ( Checkpoint 1.2)
Note: The requirements for this checkpoint refer to any input device event handlers explicitly associated with an element, independent of the input modalities for which the user agent conforms. For example, suppose that an element has an explicitly associated handler for pointing device events. Even when the user agent only conforms for keyboard input (and does not conform for the pointing device, for example), this checkpoint requires the user agent to allow the user to activate that handler with the keyboard. This checkpoint is an important special case of checkpoint 1.1. Please refer to the checkpoints of guideline 9 for more information about focus requirements.
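As an illustration (not drawn from the guidelines), a script-capable user agent or helper might let keyboard users trigger such handlers by dispatching a DOM Level 2 mouse event at the element with content focus. The function name and the Enter-key binding below are assumptions for this sketch only.

    // Minimal sketch: activate pointer-device handlers on the focused element
    // from the keyboard by dispatching a DOM Level 2 "click" event.
    function activateFocusedElement(doc) {
      var target = doc.activeElement;            // element with content focus
      if (!target) {
        return;
      }
      var event = doc.createEvent("MouseEvents");
      event.initMouseEvent("click", true, true, doc.defaultView,
                           1, 0, 0, 0, 0, false, false, false, false, 0, null);
      target.dispatchEvent(event);               // runs explicitly associated click handlers
    }

    document.addEventListener("keydown", function (e) {
      if (e.keyCode === 13) {                    // Enter key
        activateFocusedElement(document);
      }
    }, false);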
Notes and rationale:
  1. For example, users without a pointing device (such as some users who are blind or have physical disabilities) need to be able to activate form controls and links (including the links in a client-side image map).
Example techniques:
  1. For example, in HTML 4 [HTML4], input device event handlers are described in section 18.2.3. They are: onclick, ondblclick, onmousedown, onmouseup, onmouseover, onmousemove, onmouseout, onfocus, onblur, onkeypress, onkeydown, and onkeyup.
  2. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], focus and activation types are discussed in section 1.6.1. They are: DOMFocusIn, DOMFocusOut, and DOMActivate. These events are specified independent of a particular input device type.
  3. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], mouse event types are discussed in section 1.6.2. They are: click, mousedown, mouseup, mouseover, mousemove and mouseout.
  4. The DOM Level 2 Event specification does not provide a key event module.
  5. Sequential technique: Add each input device event handler to the serial navigation order (refer to checkpoint 9.2). Alert the user when the user has navigated to an event handler, and allow activation. For example, a link that also has onMouseOver and onMouseOut event handlers defined might generate three "stops" in the navigation order: one for the link and two for the event handlers. If this technique is used, allow configuration so that input device event handlers are not inserted in the navigation order.
  6. Query technique: Allow the user to query the element with content focus for a menu of input device event handlers.
  7. Descriptive information about handlers can allow assistive technologies to choose the most important functions for activation. This is possible in the Java Accessibility API [JAVAAPI], which provides an AccessibleAction Java interface. This interface provides a list of actions and descriptions that enable selective activation. See also checkpoint 6.3.
  8. Using MSAA [MSAA] on the Windows platform:
    • Retrieve the node in the document object that has current focus.
    • Call the IHTMLDocument4::fireEvent method on that node.
Related techniques:
  1. See image map techniques.
References:
  1. For example, section 16.5 of the SVG 1.0 Candidate Recommendation [SVG] specifies processing order for user interface events.

1.3 Ensure that every message (e.g., prompt, alert, notification, etc.) that is a non-text element and is part of the user agent user interface has a text equivalent. [Priority 1] User agent only. (Checkpoint 1.3)
Note: For example, if the user is alerted of an event by an audio cue, a visually-rendered text equivalent in the status bar would satisfy this checkpoint. Per checkpoint 6.4, a text equivalent for each such message must be available through a standard API. See also checkpoint 6.5 for requirements for programmatic alert of changes to the user interface.
Notes and rationale:
  1. User agents should use modality-specific messages in the user interface (e.g., graphical scroll bars, beeps, and flashes) as long as redundant mechanisms are available or possible. These redundant mechanisms will benefit all users, not just users with disabilities. For instance, mechanisms that are redundant to audio will benefit individuals who are deaf, hard of hearing, or operating the user agent in a noisy or silent environment where the use of sound is not practical.
Example techniques:
  1. Render text messages on the status bar of the graphical user interface. Allow users to query the viewport for this status information (in addition to having access through graphical rendering).
  2. Make information available in a manner that allows other software to present it according to the user's preferences. For instance, if the graphical user agent uses proportional scroll bars to indicate the position of the viewport in content, make this same information available in text form; this will allow other software to render the proportion of content viewed as speech or as braille.
Doing more:
  1. Allow configuration to render or not render status information (e.g., allow the user to hide the status bar).


Guideline 2. Ensure user access to all content.

Checkpoints

2.1 For all format specifications that the user agent implements, make content available through the rendering processes described by those specifications. [Priority 1] Content only. (Checkpoint 2.1)
Note: This includes format-defined interactions between author preferences and user preferences/capabilities (e.g., when to render the "alt" attribute in HTML [HTML4], the rendering order of nested OBJECT elements in HTML, test attributes in SMIL [SMIL], and the cascade in CSS2 [CSS2]). If a conforming user agent does not render a content type, it should allow the user to choose a way to handle that content (e.g., by launching another application, by saving it to disk, etc.). This checkpoint does not require that all content be available through each viewport.
Example techniques:
  1. Provide access to attribute values (one at a time, not as a group). For instance, allow the user to select an element and read values for all attributes set for that element. For many attributes, this type of inspection should be significantly more usable than a view of the text source.
  2. When content changes dynamically (e.g., due to embedded scripts or content refresh), users need to have access to the content before and after the change.
  3. Make information about abbreviation and acronym expansions available. For instance, in HTML, look for abbreviations specified by the ABBR and ACRONYM elements. The expansion may be given with the "title" attribute (refer to the Web Content Accessibility Guidelines 1.0 [WCAG10], checkpoint 4.2). To provide expansion information (a sketch follows this list), user agents may:
    • Allow the user to configure the user agent so that expansions are rendered in place of the abbreviations.
    • Provide a list of all abbreviations in the document, with their expansions (a generated glossary of sorts).
    • Generate a link from an abbreviation to its expansion.
    • Allow the user to query the expansion of a selected or input abbreviation.
    • If an acronym has no explicit expansion in one location, look for another occurrence in content with an explicit expansion. User agents may also look for possible expansions (e.g., in parentheses) in surrounding context, though that is a less reliable technique.
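The sketch referred to above collects ABBR and ACRONYM expansions from "title" attributes so that they can be presented as a generated glossary; the function name is an assumption for illustration.

    // Build a glossary of abbreviation and acronym expansions from "title" attributes.
    function collectExpansions(doc) {
      var names = ["abbr", "acronym"];
      var glossary = [];
      for (var i = 0; i < names.length; i++) {
        var elements = doc.getElementsByTagName(names[i]);
        for (var j = 0; j < elements.length; j++) {
          var expansion = elements[j].getAttribute("title");
          if (expansion) {
            glossary.push({ shortForm: elements[j].textContent, expansion: expansion });
          }
        }
      }
      return glossary;   // e.g., render as a list in a separate view
    }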
Related techniques:
  1. See the sections on access to content, link techniques, table techniques, frame techniques, and form techniques.
References:
  1. Sections 10.4 ("Client Error 4xx") and 10.5 ("Server Error 5xx") of the HTTP/1.1 specification [RFC2616] state that user agents should have the following behavior in case of these error conditions:

    Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user.


2.2 For all text formats that the user agent implements, provide a view of the text source. Text formats include at least the following: (1) all media objects given an Internet media type of "text" (e.g., text/plain, text/HTML, or text/*), and (2) all SGML and XML applications, regardless of Internet media type (e.g., HTML 4.01, XHTML 1.1, SMIL, SVG, etc.). [Priority 1] Content only. (Checkpoint 2.2)
Note: Refer to [RFC2046], section 4.1 for information about the "text" Internet media type. A user agent would also satisfy this checkpoint by providing a source view for any text format, not just implemented text formats.
Notes and rationale:
  1. In general, user agent developers should not rely on a "source view" to convey information to users, most of whom are not familiar with markup languages. A source view is still important as a "last resort" for some users, since the content might not otherwise be accessible at all.
Example techniques:
  1. Make the text view useful. For instance, enable links (i.e., URIs), allowing searching and other navigation within the view.
  2. A source view is an easily-implementable view that will help users inspect some types of content, such as style sheet fragments or scripts. This does not mean, however, that a source view of style sheets is the best user interface for reading or changing style sheets.
Doing more:
  1. Provide a source view for any text format, not just implemented text formats.

2.3 Allow global configuration so that, for each piece of unrendered conditional content "C", the user agent alerts the user to the existence of the content and provides access to it. Provide access to this content according to format specifications or where unspecified, as follows. If C has a close relationship (e.g., C is a summary, title, alternative, description, expansion, etc.) with another piece of rendered content D, do at least one of the following: (1a) render C in place of D, (2a) render C in addition to D, (3a) provide access to C by querying D, or (4a) allow the user to follow a link to C from the context of D. If C does not have a close relationship to other content (i.e., a relationship other than just a document tree relationship), do at least one of the following: (1b) render a placeholder for C, (2b) provide access to C by query (e.g., allow the user to query an element for its attributes), or (3b) allow the user to follow a link in context to C. [Priority 1] Content only. ( Checkpoint 2.3)
Note: The configuration requirement of this checkpoint is global; the user agent is only required to provide one switch that turns on or off these alert and access mechanisms. To satisfy this checkpoint, the user agent may provide access on an element-by-element basis (e.g., by allowing the user to query individual elements) or for all elements (e.g., by offering a configuration to render conditional content all the time). For instance, an HTML user agent might allow users to query each element for access to conditional content supplied for the "alt", "title", and "longdesc" attributes. Or, the user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism).
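A minimal sketch of the element-by-element query approach, assuming an HTML DOM; it gathers the "alt", "title", and "longdesc" values of a designated element so they can be shown on request (the function name is illustrative, not part of any specification).

    // Gather conditional content attached to an element for presentation on request
    // (e.g., from a context menu).
    function conditionalContentFor(element) {
      var names = ["alt", "title", "longdesc"];
      var result = {};
      for (var i = 0; i < names.length; i++) {
        var value = element.getAttribute(names[i]);
        if (value !== null && value !== "") {
          result[names[i]] = value;
        }
      }
      return result;   // e.g., { alt: "Sales chart", longdesc: "sales-desc.html" }
    }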
Notes and rationale:
  1. Allow users to choose more than one piece of conditional content at a given time. For instance, users with low vision may want to view images (even imperfectly) but require a text equivalent for the image; the text may be rendered with a large font or as speech.
Example techniques:
  1. In HTML 4 [HTML4], conditional content mechanisms include the "alt", "title", "longdesc", and "summary" attributes, and the content of the OBJECT, NOFRAMES, and NOSCRIPT elements.
  2. Allow the user to configure how the user agent renders a long description (e.g., "longdesc" in HTML 4 [HTML4]). Some possibilities include:
    1. Render the long description in a separate view.
    2. Render the long description in place of the associated element.
    3. Do not render the long description, but allow the user to query whether an element has an associated long description (e.g., with a context-sensitive menu) and provide access to it.
    4. Use an icon (with a text equivalent) to indicate the presence of a long description.
    5. Use an audio cue to indicate the presence of a long description when the user navigates to the element.
  3. For an object (e.g., an image) with an author-specified geometry that the user agent does not render, allow the user to configure how the conditional content should be rendered (e.g., within the specified geometry, or ignoring the specified geometry altogether).
  4. For multimedia presentations with several alternative tracks, ensure access to all tracks and allow the user to select individual tracks. The QuickTime player [QUICKTIME] allows users to turn on and off any number of tracks separately. For example, construct a list of all available tracks from short descriptions provided by the author (e.g., through the "title" attribute).
  5. For multimedia presentations with several alternative tracks, allow users to choose tracks based on natural language preferences. SMIL 1.0 [SMIL] allows users to specify captions in different natural languages. By setting language preferences in the SMIL player (e.g., the G2 player [G2]), users may access captions (or audio) in different languages. Allow users to specify different languages for different content types (e.g., English audio and Spanish captions).
  6. If a multimedia presentation has several captions (or subtitles) available, allow the user to choose from among them. Captions might differ in level of detail, reading levels, natural language, etc. Multilingual audiences may wish to have captions in different natural languages on the screen at the same time. Users may wish to use both captions and auditory descriptions concurrently as well.
  7. Make apparent through the user agent user interface which audio tracks are meant to be played separately.
Doing more:
  1. Make information available with different levels of detail. For example, for a voice browser, offer two options for HTML IMG elements:
    1. Speak only "alt" text by default, but allow the user to hear "longdesc" text on an image by image basis.
    2. Speak "alt" text and "longdesc" for all images.
  2. Allow the user to configure different natural language preferences for different types of conditional content (e.g., captions and auditory descriptions). Users with disabilities may need to choose the language they are most familiar with in order to understand a presentation for which supplementary tracks are not all available in all desired languages. In addition, some users may prefer to hear the program audio in its original language while reading captions in another, fulfilling the function of subtitles or to improve foreign language comprehension. In classrooms, teachers may wish to configure the language of various multimedia elements to achieve specific educational goals.

    [Image: How the user selects a preferred natural language for captions in the Real Player preferences. This setting, in conjunction with language markup in the presentation, determines which content is rendered.]

Related techniques:
  1. See the section on access to content.

2.4 For content where user input is only possible within a finite time interval controlled by the user agent, allow configuration to make the time interval "infinite". Do this by pausing automatically at the end of each time interval where user input is possible, and resuming automatically after the user has explicitly completed input. In this configuration, alert the user when the session has been paused and which enabled elements are time-sensitive. [Priority 1] Content only. (Checkpoint 2.4)
Note: In this configuration, the user agent may have to pause the presentation more than once if there is more than one opportunity for time-sensitive input. In SMIL 1.0 [SMIL], for example, the "begin", "end", and "dur" attributes synchronize presentation components. The user may explicitly complete input in many different ways (e.g., by following a link that replaces the current time-sensitive resource with a different resource). This checkpoint does not apply when the user agent cannot recognize the time interval in the presentation format, or when the user agent cannot control the timing (e.g., because it is controlled by the server).
Notes and rationale:
  1. The requirement to pause at the end (rather than at the beginning) of a time interval is to allow the user to review content that may change while that interval elapses.
  2. This checkpoint requires the user agent to pause a presentation automatically, whereas the pause requirement of checkpoint 4.5 is manual.
Example techniques:
  1. Some HTML user agents recognize time intervals specified through the META element, although this usage is not defined in HTML 4 [HTML4].
  2. Render time-dependent links as a static list that occupies the same screen real estate; authors may create such documents in SMIL 1.0 [SMIL]. Include temporal context in the list of links. For example, provide the time at which the link appeared along with a way to easily jump to that portion of the presentation.
Doing more:
  1. The checkpoint requires that the user agent make the time interval infinite, but one consequence of this is that the user needs to confirm manually the end of input. The user agent may provide additional configurations to lengthen time intervals so that manual confirmation at the end of input is not required. For instance, the user agent might include a configuration to allow the user three to five times the author's specified time interval for input. Or, the user agent might include a configuration to add additional time to each time interval (e.g., 10 extra seconds). A script-level sketch of lengthening time intervals follows this list.
  2. Allow users to view a list of all media elements or links of the presentations sorted by start or end time or alphabetically.
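The script-level sketch mentioned in item 1: it wraps window.setTimeout so that every author-specified delay is multiplied by a user-configured factor. A real user agent would expose this as a preference and apply it to all recognized time intervals; the factor of 3 is only an example.

    // Lengthen script-driven time limits by a user-configured factor (here 3).
    (function (factor) {
      var originalSetTimeout = window.setTimeout;
      window.setTimeout = function (handler, delay) {
        var extraArgs = Array.prototype.slice.call(arguments, 2);
        return originalSetTimeout.apply(
          window, [handler, (delay || 0) * factor].concat(extraArgs));
      };
    })(3);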
References:
  1. Refer to section 4.2.4 of SMIL 1.0 [SMIL] for information about the SMIL time model.

2.5 Allow configuration or control so that text transcripts, collated text transcripts, captions, and auditory descriptions are rendered at the same time as the associated audio tracks and visual tracks. [Priority 1] Content only. ( Checkpoint 2.5)
Note: This checkpoint is an important special case of checkpoint 2.1.
Example techniques:
  1. Allow users to turn on and off auditory descriptions and captions.
  2. For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute.
  3. SMIL user agents should allow users to configure whether they want to view captions, and this user interface switch should be bound to the 'system-captions' test attribute. Users should be able to indicate a preference for receiving available auditory descriptions, but SMIL 1.0 [SMIL] does not include a mechanism analogous to 'system-captions' for auditory descriptions, though [SMIL20] is expected to.
  4. Another SMIL 1.0 test attribute, 'system-overdub-or-captions', allows users to choose between subtitles and overdubs in multilingual presentations. User agents should not interpret a value of 'caption' for this test attribute as meaning that the user prefers accessibility captions; that is the purpose of the 'system-captions' test attribute. When subtitles and accessibility captions are both available, users who are deaf may prefer to view captions, as they generally contain information not in subtitles: information on music, sound effects, who is speaking, etc.
  5. User agents that play QuickTime movies should allow the user to turn on and off the different tracks embedded in the movie. Authors may use these alternative tracks to provide content for accessibility purposes. The Apple QuickTime player provides this feature through the menu item "Enable Tracks."
  6. User agents that play Microsoft Windows Media Object presentations should provide support for Synchronized Accessible Media Interchange (SAMI [SAMI], a protocol for creating and displaying captions) and should allow users to configure how captions are viewed. In addition, user agents that play Microsoft Windows Media Object presentations should allow users to turn on and off other conditional content, including auditory description and alternative visual tracks.
References:
  1. User agents that implement SMIL 1.0 [SMIL] should implement the "Accessibility Features of SMIL" [SMIL-ACCESS].

2.6 Respect synchronization cues during rendering. [Priority 1] Content only. (Checkpoint 2.6)
Note: This checkpoint is an important special case of checkpoint 2.1.
Notes and rationale:
  1. The term "synchronization cues" refers to pieces of information that may affect synchronization, such as the size and expected duration of tracks and their segments, the type of element and how much those elements can be sped up or slowed down (both from technological and intelligibility standpoints).
  2. Captions and auditory descriptions may not make sense unless rendered synchronously with related video or audio content. For instance, if someone with a hearing disability is watching a video presentation and reading associated captions, the captions should be synchronized with the audio so that the individual can use any residual hearing. For auditory descriptions, it is crucial that an audio track and an auditory description track be synchronized to avoid having them both play at once, which would reduce the clarity of the presentation.
Example techniques:
  1. The idea of "sensible time-coordination" of components in the definition of synchronize centers on simultaneity of presentation, but it also encompasses strategies for handling deviations from simultaneity resulting from a variety of causes. Consider how deviations might be handled for captions for a multimedia presentation such as a movie clip. Captions consist of a text equivalent of the audio track that is synchronized with the visual track. Typically, a segment of the captions appears visually near the video for several seconds while the person reads the text. As the visual track continues, a new segment of the captions is presented. However, a problem arises if the captions are longer than can fit in the display space. This can be particularly difficult if, due to a visual disability, the font size has been enlarged, thus reducing the amount of rendered caption text that can be presented. The user agent needs to respond sensibly to such problems, for example by ensuring that the user has the opportunity to navigate (e.g., scroll down or page down) through the caption segment before proceeding with the visual presentation and presenting the next segment.
  2. Developers of user agents need to determine how they will handle other synchronization challenges, such as:
    1. Under what circumstances will the presentation automatically pause? Some circumstances where this might occur include:
      • the segment of rendered caption text is more than can fit on the visual display
      • the user wishes more time to read captions or the collated text transcript
      • the auditory description is of longer duration than the natural pause in the audio.
    2. Once the presentation has paused, then under what circumstances will it resume (e.g., only when the user signals it to resume, or based on a predefined pause length)?
    3. If the user agent allows the user to jump to a location in a presentation by activating a link, then how will related tracks behave? Will they jump as well? Will the user be able to return to a previous location or undo the action?
  3. Developers of user agents need to anticipate many of the challenges that may arise in synchronization of diverse tracks.

2.7 Allow configuration to generate repair text when the user agent recognizes that the author has failed to provide conditional content that was required by the format specification. The user agent may satisfy this checkpoint by basing the repair text on any of the following available sources of information: URI reference, content type, or element type. [Priority 2] Content only. (Checkpoint 2.7)
Note: Some markup languages (such as HTML 4 [HTML4] and SMIL 1.0 [SMIL]) require the author to provide conditional content for some elements (e.g., the "alt" attribute on the IMG element). Repair text based on URI reference, content type, or element type is sufficient to satisfy the checkpoint, but may not result in the most effective repair. Information that may be recognized as relevant to repair might not be "near" the missing conditional content in the document object. For instance, instead of generating repair text from a simple URI reference, the user agent might look for helpful information near a different instance of the URI reference in the same document object, or might retrieve useful information (e.g., a title) from the resource designated by the URI reference.
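A minimal sketch of repair text generation based on the URI reference, falling back to the element type; the heuristics and the function name are assumptions for illustration and are not mandated by the checkpoint.

    // Derive repair text for an image whose required "alt" text is missing.
    function repairTextFor(img) {
      var src = img.getAttribute("src");
      if (src) {
        var fileName = src.split("/").pop().split("?")[0];   // last path segment
        var baseName = fileName.replace(/\.[^.]*$/, "");     // strip the extension
        if (baseName) {
          return baseName;   // e.g., "company-logo" for .../company-logo.png
        }
      }
      return img.tagName;    // last resort: the element type, e.g., "IMG"
    }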
Notes and rationale:
  1. Some examples of missing conditional content that is required by specification:
    • in HTML 4 [HTML4], "alt" is required for the IMG and AREA elements (for validation). In SMIL 1.0 [SMIL], on the other hand, "alt" is not required on media objects.
    • whatever the format, text equivalents for non-text content are required by the Web Content Accessibility Guidelines 1.0 [WCAG10].
  2. Conditional content may come from markup, inside images (e.g., refer to "Describing and retrieving photos using RDF and HTTP" [PHOTO-RDF]), etc.
Example techniques:
  1. When HTTP is used, HTTP headers provide information about the URI of the Web resource ("Content-Location") and its type ("Content-Type"). Refer to the HTTP/1.1 specification [RFC2616], sections 14.14 and 14.17, respectively. Refer to "Uniform Resource Identifiers (URI): Generic Syntax" ([RFC2396], section 4) for information about URI references, as well as the HTTP/1.1 specification [RFC2616], section 3.2.1.
Doing more:
  1. When configured to generate text, also inform the user (e.g., in the generated text itself) that this content was not provided by the author as a text equivalent.
Related techniques:
  1. See content repair techniques, and cell header repair strategies.
References:
  1. The "Altifier Tool" [ALTIFIER] illustrates smart techniques for generating text equivalents (for images, etc.) when the author has not specified any.

2.8 Allow configuration so that when the user agent recognizes that conditional content required by the format specification is present but empty (e.g., the empty string), the user agent either (1) generates no repair text, or (2) generates repair text as described in checkpoint 2.7. [Priority 3] Content only. (Checkpoint 2.8)
Note: In some authoring scenarios, an empty string of text (e.g., "alt=''") may be considered to be an appropriate text equivalent (for instance, when some non-text content has no other function than pure decoration, or an image is part of a "mosaic" of several images and doesn't make sense out of the mosaic). Please refer to the Web Content Accessibility Guidelines 1.0 [WCAG10] for more information about text equivalents.
Notes and rationale:
  1. User agents should render nothing in this case because the author may specify an empty text equivalent for content that has no function in the page other than as decoration.
Example techniques:
  1. The user agent should not render generic labels such as "[INLINE]" or "[GRAPHIC]" in the face of empty conditional content (unless configured to do so).
  2. If no captioning information is available and captioning is turned on, render "no captioning information available" in the captioning region of the viewport (unless configured not to generate repair content).
Doing more:
  1. Labels (e.g., "[INLINE]" or "[GRAPHIC]") may be useful in some situations, so the user agent may allow configuration to render "No author text" (or similar) instead of empty conditional content.

2.9 Allow configuration to render all conditional content automatically. Provide access to this content according to format specifications or where unspecified, by applying one of the following techniques described in checkpoint 2.3: 1a, 2a, or 1b. [Priority 3] Content only. ( Checkpoint 2.9)
Note: The user agent satisfies this checkpoint if it satisfies checkpoint 2.3 by applying techniques 1a, 2a, or 1b. For instance, an HTML user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism).
Example techniques:
  1. None.

2.10 Allow configuration not to render content in unsupported natural languages. Indicate to the user in context that author-supplied content has not been rendered. [Priority 3] Content only. ( Checkpoint 2.10)
Note: For example, use a text substitute or accessible graphical icon to indicate that content in a particular language has not been rendered. This checkpoint does not require the user agent to allow different configurations for different natural languages.
Notes and rationale:
  1. Rendering content in an unsupported language (e.g., as "garbage" characters) may confuse all users. However, this checkpoint is designed primarily to benefit users who access content serially as it allows them to skip portions of content that would be unusable as rendered.
  2. There may be cases when a conforming user agent supports a natural language but a speech synthesizer does not, or vice versa.
Example techniques:
  1. For instance, a user agent that doesn't support Korean (e.g., doesn't have the appropriate fonts or voice set) should allow configuration to announce the language change with the message "Unsupported language – unable to render" (e.g., when the language itself is not recognized) or "Korean not supported – unable to render" (e.g., when the language is recognized but the user agent doesn't have the resources to render it). The user should also be able to turn off alerts of language changes. Rendering could involve speaking in the designated natural language in the case of a voice browser or screen reader. If the natural language is not supported, the language change alert could be spoken in the default language by a screen reader or voice browser. A DOM-based sketch of this configuration follows this list.
  2. A user agent may not be able to render all characters in a document meaningfully, for instance, because the user agent lacks a suitable font, a character has a value that may not be expressed in the user agent's internal character encoding, etc. In this case, section 5.4 of HTML 4 [HTML4] recommends the following for undisplayable characters:
    1. Adopt a clearly visible (or audible), but unobtrusive mechanism to alert the user of missing resources.
    2. If missing characters are presented using their numeric representation, use the hexadecimal (not decimal) form since this is the form used in character set standards.
  3. When HTTP is used, HTTP headers provide information about content encoding ("Content-Encoding") and content language ("Content-Language"). Refer to the HTTP/1.1 specification [RFC2616], sections 14.11 and 14.12, respectively.
  4. CSS2's attribute selector may be used with the HTML "lang" or XML "xml:lang" attributes to control rendering based on recognized natural language information. Refer also to the ':lang' pseudo-class ([CSS2], section 5.11.4).
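The sketch referred to in item 1 above: a DOM-based approximation of this configuration that hides content whose "lang" value is not in a supported list and leaves a text substitute in context. The function name, marker text, and supported-language list are assumptions for illustration.

    // Suppress rendering of unsupported natural languages, alerting the user in context.
    function suppressUnsupportedLanguages(doc, supportedLangs) {
      // querySelectorAll returns a static list, so inserting markers below is safe.
      var elements = doc.querySelectorAll("[lang]");
      for (var i = 0; i < elements.length; i++) {
        var el = elements[i];
        if (el === doc.documentElement) {
          continue;   // skip the document's own language declaration
        }
        var lang = el.getAttribute("lang");
        var primary = lang.toLowerCase().split("-")[0];   // "ko-KR" -> "ko"
        if (supportedLangs.indexOf(primary) === -1) {
          var marker = doc.createElement("span");
          marker.appendChild(doc.createTextNode(
            "[Content in unsupported language \"" + lang + "\" not rendered] "));
          el.parentNode.insertBefore(marker, el);   // text substitute in context
          el.style.display = "none";                // configured not to render
        }
      }
    }

    suppressUnsupportedLanguages(document, ["en", "es"]);   // assumed supported languages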
Related techniques:
  1. See techniques for generated content, which may be used to insert text to indicate a language change.
  2. See content repair techniques and accessibility and internationalization techniques.
  3. See techniques for synthesized speech.
References:
  1. For information on language codes, refer to "Codes for the representation of names of languages" [ISO639].
  2. Refer to "Character Model for the World Wide Web" [CHARMOD]. It contains basic definitions and models, specifications to be used by other specifications or directly by implementations, and explanatory material. In particular, this document addresses early uniform normalization, string identity matching, string indexing, and conventions for URIs.


Guideline 3. Allow configuration not to render some content that may reduce accessibility.

In addition to the techniques below, refer also to the section on user control of style.

Checkpoints

3.1 Allow configuration not to render background images. In this configuration, provide an option to alert the user when a background image is available (but has not been rendered). [Priority 1] Content only. ( Checkpoint 3.1)
Note: This checkpoint only requires control of background images for "two-layered renderings", i.e., one rendered background image with all other content rendered "above it". When background images are not rendered, user agents should render a solid background color instead (see checkpoint 4.3). In this configuration, the user agent is not required to retrieve background images from the Web.
Notes and rationale:
  1. Background images may make it difficult or impossible to read superimposed text or understand other superimposed content.
  2. This checkpoint does not address issues of multi-layered renderings and does not require the user agent to change background rendering for multi-layer renderings (refer, for example, to the 'z-index' property in Cascading Style Sheets, level 2 ([CSS2], section 9.9.1).
Example techniques:
  1. If background images are turned off, make the associated conditional content available to the user. A style-injection sketch for this configuration follows this list.
  2. In CSS, background images may be turned on/off with the 'background' and 'background-image' properties ([CSS2], section 14.2.1).
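The sketch mentioned in item 1: a script-level approximation of this configuration that injects a user-level style rule turning off background images and substituting a solid background color. A conforming user agent would implement this as a rendering preference rather than a page script; the white background is only an example.

    // Turn off background images and fall back to a solid background color.
    function disableBackgroundImages(doc) {
      var style = doc.createElement("style");
      style.appendChild(doc.createTextNode(
        "* { background-image: none !important; }\n" +
        "html, body { background-color: #ffffff !important; }"));
      doc.getElementsByTagName("head")[0].appendChild(style);
    }

    disableBackgroundImages(document);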
Doing more:
  1. Allow control of image depth in multi-layer presentations.

3.2 Allow configuration not to render audio, video, or animated images except on explicit request from the user. In this configuration, provide an option to render a placeholder in context for each unrendered source of audio, video, or animated image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 1] Content only. ( Checkpoint 3.2)
Note: This checkpoint requires configuration for content rendered without any user interaction (including content rendered on load or as the result of a script), as well as content rendered as the result of user interaction that is not an explicit request (e.g., when the user activates a link). When configured not to render content except on explicit user request, the user agent is not required to retrieve the audio, video, or animated image from the Web until requested by the user. See also checkpoint 3.8, checkpoint 4.5, checkpoint 4.9, and checkpoint 4.10.
Example techniques:
  1. User agents may satisfy this checkpoint by treating content as invisible or silent (e.g., by implementing the 'visibility' property defined in section 11.2 of CSS 2 [CSS2]). However, this solution means that the content is processed, though not rendered, and processing may cause undesirable side effects such as firing events. Or, processing may interfere with the processing of other content (e.g., silent audio may interfere with other sources of sound such as the output of a speech synthesizer). This technique should be deployed with caution.
  2. As a placeholder for an animated image, render a motionless image built from the first frame of the animated image (a canvas-based sketch follows below).
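A hedged sketch of technique 2, using the (later) HTML canvas API: drawing an animated image onto a canvas captures a single, motionless frame, which can serve as the placeholder while a reference to the original is kept so that it can be restored on request (see checkpoint 3.8).

    // Replace an animated image with a motionless placeholder built from one frame.
    function freezeAnimatedImage(img) {
      var canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;
      canvas.getContext("2d").drawImage(img, 0, 0, img.width, img.height);
      canvas.frozenOriginal = img;                 // keep the original for later restoration
      img.parentNode.replaceChild(canvas, img);    // placeholder rendered in context
      return canvas;
    }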

3.3 Allow configuration to render animated or blinking text as motionless, unblinking text. [Priority 1] Content only. ( Checkpoint 3.3)
Note: A "stock quote ticker" is an example of animated text. This checkpoint does not apply for blinking and animation effects that are caused by mechanisms that the user agent cannot recognize. This checkpoint requires configuration because blinking effects may be disorienting to some users but useful to others, for example users who are deaf or hard of hearing.
Example techniques:
  1. The user agent may render the motionless text in a number of ways. Inline is preferred, but for extremely long text, it may be better to render the text in another viewport, easily reachable from the user's browsing context.
  2. Allow the user to turn off animated or blinking text through the user agent user interface (e.g., by pressing the Escape key to stop animations).
  3. Some sources of blinking and moving text are listed below; a sketch for stopping MARQUEE scrolling follows this list:
    • The BLINK element in HTML. Note: The BLINK element is not defined by a W3C specification.
    • The MARQUEE element in HTML. Note: The MARQUEE element is not defined by a W3C specification.
    • The 'blink' value of the 'text-decoration' property in CSS ([CSS2], section 16.3.1).
    • In JavaScript, to control the start and speed of scrolling for a MARQUEE element:
      • document.all.myBanner.start();
      • document.all.myBanner.scrollDelay = 100;
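The sketch referred to above stops MARQUEE scrolling so that the text is rendered motionless. The stop() method exists only where the non-standard MARQUEE element is implemented, so the call is guarded.

    // Render scrolling MARQUEE text as motionless text.
    function stopMarquees(doc) {
      var marquees = doc.getElementsByTagName("marquee");
      for (var i = 0; i < marquees.length; i++) {
        if (typeof marquees[i].stop === "function") {
          marquees[i].stop();   // the text stays in place, unmoving
        }
      }
    }

    stopMarquees(document);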

3.4 Allow configuration not to execute any executable content (e.g., scripts and applets). In this configuration, provide an option to alert the user when executable content is available (but has not been executed). [Priority 1] Content only. (Checkpoint 3.4)
Note: Scripts and applets may provide very useful functionality, not all of which causes accessibility problems. Developers should not consider that the user's ability to turn off scripts is an effective way to improve content accessibility; turning off scripts means losing the benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off scripts as a last resort.
Notes and rationale:
  1. Executable content includes scripts, applets, ActiveX controls, etc. This checkpoint does not apply to plug-ins that are not part of content.
  2. Executable content includes content that runs "on load" (e.g., when a document loads into a viewport) and content that runs when other events occur (e.g., user interface events).
  3. The alert that scripts are available but not executed is important, for instance, for helping users understand why some poorly authored pages without script alternatives produce no content when scripts are turned off.
  4. Control of scripts is particularly important when they can cause the screen to flicker, since people with photosensitive epilepsy can have seizures triggered by flickering or flashing, particularly in the 4 to 59 flashes per second (Hertz) range. Peak sensitivity to flickering or flashing occurs at 20 Hertz.
  5. Where possible, authors should encode knowledge in declarative formats rather than in scripts. Knowledge and behaviors embedded in scripts are difficult to extract, which means that user agents are less likely to be able to offer the user control over the scripts' effects.
Example techniques:
  1. Do not make the switch that turns scripts off available only in the "Security" part of the user interface, as people may not think to look there. For instance, include a "Scripts" entry in the documentation index so that people can find the switch more easily.
Doing more:
  1. While this checkpoint only requires a global on/off switch, user agents should allow finer control over executable content. For instance, in addition to the global switch, allow users to turn off just input device event handlers, or to turn on and off scripts in a given scripting language only.
Related techniques:
  1. See the section on script techniques.

3.5 Allow configuration so that client-side content refreshes (i.e., those initiated by the user agent, not the server) do not change content except on explicit user request. Allow the user to request the new content on demand (e.g., by following a link or confirming a prompt). Alert the user, according to the schedule specified by the author, whenever fresh content is available (to be obtained on explicit user request). [Priority 1] Content only. ( Checkpoint 3.5)
Notes and rationale:
  1. Some HTML authors create a refresh effect by using a META element with http-equiv="refresh" and the refresh rate specified in seconds by the "content" attribute. A sketch for recognizing this idiom follows.
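The recognition sketch mentioned above only detects the idiom and reports the schedule and target so that the user can be alerted; actually suppressing the automatic refresh must be done by the user agent itself. The simple parsing and the callback are assumptions for illustration.

    // Recognize the META refresh idiom and report it so the user can be alerted.
    function findMetaRefresh(doc, report) {
      var metas = doc.getElementsByTagName("meta");
      for (var i = 0; i < metas.length; i++) {
        var equiv = (metas[i].getAttribute("http-equiv") || "").toLowerCase();
        if (equiv === "refresh") {
          var content = metas[i].getAttribute("content") || "";   // e.g., "60;URL=next.html"
          var parts = content.split(";");
          var seconds = parseInt(parts[0], 10);
          var target = parts.length > 1 ? parts[1].replace(/^\s*url\s*=\s*/i, "") : null;
          report(seconds, target);
        }
      }
    }

    findMetaRefresh(document, function (seconds, target) {
      // e.g., alert the user: "New content is available every " + seconds + " seconds."
    });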
Example techniques:
  1. Alert the user of pages that refresh automatically and allow them to specify a refresh rate through the user agent user interface.
Doing more:
  1. Allow configuration for at least one very slow refresh rate (e.g., every 10 minutes).
  2. Retrieve new content without displaying it automatically. Allow the user to view the differences (e.g., by highlighting or filtering) between the currently rendered content and the new content (including no differences).

3.6 Allow configuration so that a "client-side redirect" (i.e., one initiated by the user agent, not the server) does not change content except on explicit user request. Allow the user to access the new content on demand (e.g., by following a link or confirming a prompt). The user agent is not required to provide these functionalities for client-side redirects that occur instantaneously (i.e., when there is no delay before the new content is retrieved). [Priority 2] Content only. ( Checkpoint 3.6)
Notes and rationale:
  1. This checkpoint is a Priority 2 checkpoint in part because the author's redirect implies that users aren't expected to use the content prior to the redirect.
Example techniques:
  1. Provide a configuration so that when the user navigates "back" through the user agent history to a page with a client-side redirect, the user agent does not re-execute the client-side redirect.
Doing more:
  1. Allow configuration to allow access on demand to new content even when the client-side redirect has been specified by the author to be instantaneous.
References:
  1. For Web content authors: refer to the HTTP/1.1 specification [RFC2616] for information about using server-side redirect mechanisms (instead of client-side redirects).

3.7 Allow configuration not to render images. In this configuration, provide an option to render a placeholder in context for each unrendered image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 2] Content only. (Checkpoint 3.7)
Note: See also checkpoint 3.8.
Related techniques:
  1. See techniques for checkpoint 3.1.

3.8 Once the user has viewed the original author-supplied content associated with a placeholder, allow the user to turn off the rendering of the author-supplied content. [Priority 3] Content only. ( Checkpoint 3.8)
Note: For example, if the user agent substitutes the author-supplied content for the placeholder in context, allow the user to "toggle" between placeholder and the associated content. Or, if the user agent renders the author-supplied content in a separate viewport, allow the user to close that viewport. See checkpoint 3.2 and checkpoint 3.7.
Example techniques:
  1. Allow the user to designate a placeholder and request to view the associated content in a separate viewport (e.g., through the context menu), leaving the placeholder in context. Per checkpoint 5.3, users are able to close the new viewport.


Guideline 4. Ensure user control of rendering.

In addition to the techniques below, refer also to the section on user control of style.

Checkpoints for visually rendered text

4.1 Allow global configuration and control over the reference size of rendered text, with an option to override reference sizes specified by the author or user agent defaults. Allow the user to choose from among the full range of font sizes supported by the operating environment. [Priority 1] Content only. ( Checkpoint 4.1)
Note: The reference size of rendered text corresponds to the default value of the CSS2 'font-size' property, which is 'medium' (refer to CSS2 [CSS2], section 15.2.4). For example, in HTML, this might be paragraph text. The default reference size of rendered text may vary among user agents. User agents may offer different mechanisms to allow control of the size of rendered text (e.g., font size control, zoom, magnification, etc.). Refer, for example to the Scalable Vector Graphics specification [SVG] for information about scalable rendering.
Notes and rationale:
  1. The choice of optimal techniques depends in part on which markup language is being used. For instance, HTML user agents may allow the user to change the font size of a particular piece of text (e.g., by using CSS user style sheets) independent of other content (e.g., images). Since the user agent can reflow the text after resizing the font, the rendered text will become more legible without, for example, distorting bitmap images. On the other hand, some languages, such as SVG, do not allow text reflow, which means that changes to font size may cause rendered text to overlap with other content, reducing accessibility. SVG is designed to scale, making a zoom functionality the more natural technique for SVG user agents satisfying this checkpoint.
Example techniques:
  1. Inherit text size information from user preferences specified for the operating environment.
  2. Use operating environment magnification features.
  3. When scaling text, maintain size relationships among text of different sizes.
  4. Implement the 'font-size' property in CSS ([CSS2], section 15.2.4).
Doing more:
  1. Allow the user to configure the text size on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations; a minimal style-injection sketch follows this list.
  2. Allow the user to configure the text size differently for different scripts (i.e., writing systems).
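The sketch mentioned in item 1: a minimal user-level style injection that changes the reference text size. A conforming user agent would expose this as a global preference; the rule below only changes the root reference size, and author-specified absolute sizes deeper in the document would need further handling.

    // Apply a user-chosen reference text size (here 18pt) via a user-level rule.
    function setReferenceTextSize(doc, sizeInPoints) {
      var style = doc.createElement("style");
      style.appendChild(doc.createTextNode(
        "html { font-size: " + sizeInPoints + "pt !important; }"));
      doc.getElementsByTagName("head")[0].appendChild(style);
    }

    setReferenceTextSize(document, 18);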

4.2 Allow global configuration of the font family of all rendered text, with an option to override font families specified by the author or by user agent defaults. Allow the user to choose from among the full range of font families supported by the operating environment. [Priority 1] Content only. ( Checkpoint 4.2)
Note: For example, allow the user to specify that all text is to be rendered in a particular sans-serif font family. For text that cannot be rendered properly using the user's preferred font family, the user agent may substitute an alternative font family.
Example techniques:
  1. Inherit font family information from user preferences specified for the operating environment.
  2. Implement the 'font-family' property in CSS ([CSS2], section 15.2.2).
  3. Allow the user to override author-specified font families with differing levels of detail. For instance, use font A in place of any sans-serif font and font B in place of any serif font.
Doing more:
  1. Allow the user to configure font families on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations.

4.3 Allow global configuration of the foreground and background color of all rendered text, with an option to override foreground and background colors specified by the author or user agent defaults. Allow the user to choose from among the full range of colors supported by the operating environment. [Priority 1] Content only. ( Checkpoint 4.3)
Note: User configuration of foreground and background colors may inadvertently lead to the inability to distinguish ordinary text from selected text, focused text, etc. See checkpoint 10.3 for more information about highlight styles.
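A minimal sketch of a global color override, expressed as an injected user-level rule with '!important' so that it takes precedence over author-specified colors; the particular colors and the function name are assumptions for illustration.

    // Global foreground and background color override.
    function setUserColors(doc, foreground, background) {
      var style = doc.createElement("style");
      style.appendChild(doc.createTextNode(
        "* { color: " + foreground + " !important; " +
        "background-color: " + background + " !important; }"));
      doc.getElementsByTagName("head")[0].appendChild(style);
    }

    setUserColors(document, "#000000", "#ffffe0");   // e.g., black text on pale yellow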
Example techniques:
  1. Inherit foreground and background color information from user preferences specified for the operating environment.
  2. Implement the 'color' and 'border-color' properties in CSS 2 ([CSS2], sections 14.1 and 8.5.2, respectively).
  3. Implement the 'background-color' property (and other background properties) in CSS 2 ([CSS2], section 14.2.1).
Doing more:
  1. Allow the user to specify minimal contrast between foreground and background colors, adjusting colors dynamically to meet those requirements.

Checkpoints for multimedia presentations and other presentations that change continuously over time

4.4 Allow the user to slow the presentation rate of audio and animations (including video and animated images). For a visual track, provide at least one setting between 40% and 60% of the original speed. For a prerecorded audio track including audio-only presentations, provide at least one setting between 75% and 80% of the original speed. When the user agent allows the user to slow the visual track of a synchronized multimedia presentation to between 100% and 80% of its original speed, synchronize the visual and audio tracks. Below 80%, the user agent is not required to render the audio track. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. [Priority 1] Content only. (Checkpoint 4.4)
Note: Purely stylistic effects include background sounds, decorative animated images, and effects caused by style sheets. The style exception of this checkpoint is based on the assumption that authors have satisfied the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10] not to convey information through style alone (e.g., through color alone or style sheets alone). See checkpoint 2.6 and checkpoint 4.7.
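A hedged sketch, using the HTML media element API (which postdates this document), of the kind of setting the checkpoint asks for; whether pitch is preserved when slowing playback varies by implementation.

    // Offer a slowed setting within the ranges named by the checkpoint.
    function slowPresentation(media, isAudioOnly) {
      media.playbackRate = isAudioOnly ? 0.78 : 0.5;   // ~75-80% for audio, 40-60% for visual
    }

    var video = document.getElementsByTagName("video")[0];
    if (video) {
      slowPresentation(video, false);
    }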
Notes and rationale:
  1. Allowing the user to slow the presentation of audio and animations will benefit individuals with specific learning disabilities, cognitive disabilities, or individuals with newly acquired sensory limitations (such as a person who is newly blind and learning to use a screen reader). The same feature will benefit individuals who have beginning familiarity with a natural language. Slowing one track (e.g., video) may make it harder for a user to understand another synchronized track (e.g., audio), but if the user can understand content after two passes, this is better than not being able to understand it at all.
  2. Some formats (e.g., streaming formats), might not enable the user agent to slow down playback and would thus be subject to applicability.
Example techniques:
  1. When changing the rate of audio, avoid pitch distortion.
  2. In HTML 4 [HTML4], background animations may be specified with the deprecated "background" attribute.
  3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback rate (as well as forward or reverse direction) of any animation.
  4. Authors sometimes specify background sounds with the proprietary BGSOUND element. Note: This element is not part of HTML 4 [HTML4].
Doing more:
  1. Allowing the user to speed up audio is also useful. For example, some users who access content serially benefit from the ability to speed up audio.
References:
  1. Refer to variable playback speed techniques used for Digital Talking Books [TALKINGBOOKS].

4.5 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) that last three or more seconds at their default playback rate. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. The user agent is not required to play synchronized audio during fast advance or reverse of animations (though doing so may help orient the user). [Priority 1] Content only. (Checkpoint 4.5)
Note: See checkpoint 4.4 for more information about the exception for purely stylistic effects. This checkpoint applies to content that is either rendered automatically or on request from the user. The requirement of this checkpoint is for control of each source of audio and animation that is recognized as distinct. Respect synchronization cues per checkpoint 2.6.
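A hedged sketch, again using the later HTML media element API, of the control set this checkpoint requires: stop, pause, resume, and fast advance or reverse in user-configurable increments.

    // Basic transport controls; incrementSeconds is a user-configurable setting.
    function mediaControls(media, incrementSeconds) {
      return {
        pause:   function () { media.pause(); },
        resume:  function () { media.play(); },
        stop:    function () { media.pause(); media.currentTime = 0; },
        advance: function () { media.currentTime += incrementSeconds; },
        rewind:  function () {
          media.currentTime = Math.max(0, media.currentTime - incrementSeconds);
        }
      };
    }

    // Example: five-second increments for the first video in the document.
    var firstVideo = document.getElementsByTagName("video")[0];
    var controls = firstVideo ? mediaControls(firstVideo, 5) : null;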
Notes and rationale:
  1. Some formats (e.g., streaming formats), might not enable the user agent to fast advance or fast reverse content and would thus be subject to applicability.
Example techniques:
  1. Allow the user to advance or rewind the presentation in increments. This is particularly valuable to users with physical disabilities who may not have fine control over advance and rewind functionalities. Allow users to configure the size of the increments.
  2. If buttons are used to control advance and rewind, make the advance/rewind distances proportional to the time the user activates the button. After a certain delay, accelerate the advance/rewind.
  3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback direction (forward or reverse) of any animation. See also the accelerate and decelerate attributes.
  4. Some content lends itself to different forward and reverse functionalities. For instance, compact disk players often let listeners fast forward and reverse, but also skip to the next or previous song.
Doing more:
  1. The user agent should display time codes or otherwise represent the position in content to orient the user.
  2. Apply techniques for changing audio speed without introducing distortion.
References:
  1. Refer to fast advance and fast reverse techniques used for Digital Talking Books [TALKINGBOOKS].
  2. Home Page Reader [HPR] lets users insert bookmarks in presentations.

4.6 For graphical viewports, allow the user to position text transcripts, collated text transcripts, and captions in the viewport. Allow the user to choose from among at least the range of positions available to the author (e.g., the range of positions allowed by the markup or style language). [Priority 1] Content only. (Checkpoint 4.6)
Notes and rationale:
  1. Some users need to be able to position captions, etc. so that they do not obscure other content or are not obscured by other content. Other users (e.g., users with screen magnifiers or who have other visual disabilities) require pieces of content to be in a particular relation to one another, even if this means that some content will obscure other content.
Example techniques:
  1. User agents should implement the positioning features of the employed markup or style sheet language. Even when a markup language does not explicitly allow positioning, when a user agent can recognize distinct text transcripts, collated text transcripts, or captions, the user agent should allow the user to reposition them. User agents are not required to allow repositioning when the captions, etc. cannot be separated from other media (e.g., the captions are part of the video track).
  2. For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute.
  3. Implement the CSS 2 'position' property ([CSS2], section 9.3.1); a sketch of script-based repositioning appears after this list.
  4. Allow the user to choose whether captions appear at the bottom or top of the video area or in other positions. Currently authors may place captions overlying the video or in a separate box. Captions placed over the video can prevent users from viewing other information in the video or on other parts of the screen, making it necessary to move the captions in order to view all content at once. In addition, some users will find captions easier to read if they can place them in a location best suited to their reading style.
  5. Allow users to configure a general preference for caption position and to be able to fine tune specific cases. For example, the user may want the captions to be in front of and below the rest of the presentation.
  6. Allow the user to drag and drop the captions to a place on the screen. To ensure device-independence, allow the user to enter the screen coordinates of one corner of the caption.
  7. Do not require users to edit the source code of the presentation to achieve the desired effect.
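The repositioning sketch referred to in technique 3 (illustrative only; the "captions" identifier is hypothetical and stands for whatever rendering area the user agent recognizes as holding the captions):

    // Reposition a recognized caption area using CSS 2 positioning
    // (section 9.3.1), driven by coordinates the user supplies so the
    // feature does not depend on drag-and-drop alone.
    function moveCaptions(captionArea, x, y) {
      captionArea.style.position = "absolute";
      captionArea.style.left = x + "px";
      captionArea.style.top = y + "px";
    }
    moveCaptions(document.getElementById("captions"), 20, 400);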
Doing more:
  1. Allow the user to position all parts of a presentation rather than trying to identify captions specifically (i.e., solving the problem generally may be easier than for captions alone).
  2. Allow the user to resize (graphically) the captions, etc.

4.7 Allow the user to slow the presentation rate of audio and animations (including video and animated images) not covered by checkpoint 4.4. The same speed percentage requirements of checkpoint 4.4 apply. [Priority 2] Content only. ( Checkpoint 4.7)
Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.4 for all audio and animations.
Related techniques:
  1. See the techniques for checkpoint 4.4.

4.8 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) not covered by checkpoint 4.5. [Priority 2] Content only. ( Checkpoint 4.8)
Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.5 for all audio and animations.
Related techniques:
  1. See the techniques for checkpoint 4.5.

Checkpoints for audio volume control

4.9 Allow global configuration and control of the volume of all audio, with an option to override audio volumes specified by the author or user agent defaults. The user must be able to choose zero volume (i.e., silent). [Priority 1] Content only. ( Checkpoint 4.9)
Note: User agents should allow configuration and control of volume through available operating environment controls.
Example techniques:
  1. Use audio control mechanisms provided by the operating environment. Control of volume mix is particularly important, and the user agent should provide easy access to those mechanisms provided by the operating environment.
  2. Implement the CSS 2 'volume' property ([CSS2], section 19.2); a sketch of a user override appears after this list.
  3. Implement the 'display', 'play-during', and 'speak' properties in CSS 2 ([CSS2], sections 9.2.5, 19.6, and 19.5, respectively).
  4. Authors sometimes specify background sounds with the "bgsound" attribute. Note: This attribute is not part of HTML 4 [HTML4].
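The user-override sketch referred to in technique 2 (illustrative only; it assumes support for the aural 'volume' property and the DOM Level 2 Style interfaces, and in a real user agent the rule would live in the user style sheet, where "!important" takes precedence over author styles):

    // Insert a rule that overrides author-specified volumes everywhere.
    // The value 'silent' would satisfy the zero-volume requirement.
    var sheet = document.styleSheets[0];   // assumes a style sheet is present
    sheet.insertRule("* { volume: 50 !important }", sheet.cssRules.length);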
References:
  1. Refer to guidelines for audio characteristics used for Digital Talking Books [TALKINGBOOKS].

4.10 Allow independent control of the volumes of distinct audio sources synchronized to play simultaneously. [Priority 1] Content only. ( Checkpoint 4.10)
Note: Sounds that play at different times are distinguishable and therefore independent control of their volumes is not required by this checkpoint (since volume control required by checkpoint 4.9 suffices). The user agent should satisfy this checkpoint by allowing the user to control independently the volumes of all distinct audio sources. The user control required by this checkpoint includes the ability to override author-specified volumes for the relevant sources of audio. See also checkpoint 4.12.
Notes and rationale:
  1. There are at least three good reasons for strongly recommending that all sounds be independently configurable, not just those synchronized to play simultaneously.
    1. sounds that are not synchronized may end up playing simultaneously;
    2. if the user cannot anticipate when a sound will play, the user cannot adjust the global volume control at appropriate times to affect this sound;
    3. it is extremely inconvenient to have to adjust the global volume frequently.
Related techniques:
  1. For each source of audio recognized as distinct, allow the user to control the volume using the same user interface used to satisfy the requirements of checkpoint 4.5.

Checkpoints for synthesized speech

See also techniques for synthesized speech.

4.11 Allow configuration and control of the synthesized speech rate, according to the full range offered by the speech synthesizer. [Priority 1] Content only. ( Checkpoint 4.11)
Note: The range of speech rates offered by the speech synthesizer may depend on natural language.
Example techniques:
  1. For example, many speech synthesizers offer a range for English speech of 120 - 500 words per minute or more. The user should be able to increase or decrease the speech rate in convenient increments (e.g., in large steps, then in small steps for finer control).
  2. User agents may allow different speech rate configurations for different natural languages. For example, this may be implemented with CSS 2 style sheets using the :lang pseudo-class ([CSS2], section 5.11.4); see the sketch after this list.
  3. Use synthesized speech mechanisms provided by the operating environment.
  4. Implement the CSS 2 'speech-rate' property ([CSS2], section 19.8).
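The sketch referred to in technique 2 (illustrative only; it assumes support for the aural 'speech-rate' property and the DOM Level 2 Style interfaces; the rates are example values):

    // Different default speech rates per natural language, expressed as
    // CSS 2 rules (section 19.8) using the :lang pseudo-class (5.11.4).
    var sheet = document.styleSheets[0];   // assumes a style sheet is present
    sheet.insertRule(":lang(en) { speech-rate: 180 }", sheet.cssRules.length);
    sheet.insertRule(":lang(ja) { speech-rate: 150 }", sheet.cssRules.length);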
Doing more:
  1. Content may include commands that are interpreted by a speech engine to change the speech rate (or control other speech parameters). This checkpoint does not require the user agent to allow the user to override author-specified speech rate changes (e.g., by transforming or otherwise stripping out these commands before passing on the content to the speech engine). Speech engines themselves may allow user override of author-specified speech rate changes. For such speech engines, the user agent should ensure access to this feature as part of satisfying this checkpoint.

4.12 Allow control of the synthesized speech volume, independent of other sources of audio. [Priority 1] Content only. ( Checkpoint 4.12)
Note: The user control required by this checkpoint includes the ability to override author-specified speech volume. See also checkpoint 4.10.
Example techniques:
  1. The user agent should allow the user to make synthesized speech louder and softer than other audio sources.
  2. Use synthesized speech mechanisms provided by the operating environment.
  3. Implement the CSS 2 'volume' property ([CSS2], section 19.2).

4.13 Allow configuration of speech characteristics according to the full range of values offered by the speech synthesizer. [Priority 1] Content only. ( Checkpoint 4.13)
Note: Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options that group several characteristics. Some typical options one might encounter include: "adult male voice", "female child voice", "robot voice", "pitch", "stress", etc. Ranges for values may vary among speech synthesizers.
Example techniques:
  1. Use synthesized speech mechanisms provided by the operating environment.
  2. One example of a speech API is Microsoft's Speech Application Programming Interface [SAPI].
  3. ViaVoice control panel for configuration of voice characteristics

    This image shows how ViaVoice [VIAVOICE] allows users to configure voice characteristics of the speech synthesizer.

References:
  1. For information about these speech characteristics, please refer to descriptions in section 19.8 of Cascading Style Sheets Level 2 [CSS2].

4.14 Allow configuration of the following speech characteristics: pitch, pitch range, stress, richness. Pitch refers to the average frequency of the speaking voice. Pitch range specifies a variation in average frequency. Stress refers to the height of "local peaks" in the intonation contour of the voice. Richness refers to the richness or brightness of the voice. [Priority 2] Content only. ( Checkpoint 4.14)
Note: This checkpoint is more specific than checkpoint 4.13: it requires support for the voice characteristics listed. Definitions for these characteristics are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions. Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options distinguished by "gender", "age", "accent", etc. Ranges of values may vary among speech synthesizers.
Related techniques:
  1. See checkpoint 4.13.

4.15 Provide support for user-defined extensions to the speech dictionary, as well as the following functionalities: spell-out (spell text one character at a time or according to language-dependent pronunciation rules), speak-numeral (speak a numeral as individual digits or as a full number), and speak-punctuation (speak punctuation literally or render as natural pauses). [Priority 2] Content only. ( Checkpoint 4.15)
Note: Definitions for the functionalities listed are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions.
Example techniques:
  1. ViaVoice control panel for editing the user dictionary

    This image shows how ViaVoice [VIAVOICE] allows users to add entries to the user's personal dictionary.

References:
  1. For information about these functionalities, please refer to descriptions in section 19.8 of Cascading Style Sheets Level 2 [CSS2].

Checkpoints related to style sheets

4.16 For user agents that support style sheets, allow the user to choose from (and apply) available author and user style sheets or to ignore them. [Priority 1] Both content and user agent. ( Checkpoint 4.16)
Note: By definition, the user agent's default style sheet is always present, but may be overridden by author or user styles. Developers should not consider that the user's ability to turn off author and user style sheets is an effective way to improve content accessibility; turning off style sheet support means losing the many benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off author and user style sheets as a last resort.
Example techniques:
  1. For HTML [HTML4], make available "class" and "id" information so that users can override styles.
  2. Implement user style sheets.
  3. Implement the "!important" semantics of CSS 2 ([CSS2], section 6.4.2).
References:
  1. For information about how alternative style sheets are specified in HTML 4 [HTML4], please refer to section 14.3.1.
  2. For information about how alternative style sheets are specified in XML 1.0 [XML], please refer to "Associating Style Sheets with XML documents Version 1.0" [XMLSTYLE].

[next guideline 5] [review guideline 4] [previous guideline 3] [contents]

Guideline 5. Ensure user control of user interface behavior.

Checkpoints

5.1 Allow configuration so that the current focus does not move automatically to viewports that open without explicit user request. Configuration is not required if the current focus can only ever be moved by explicit user request. [Priority 2] Both content and user agent. ( Checkpoint 5.1)
Note: For example, allow configuration so that neither the current focus nor the pointing device jump automatically to a viewport that opens without explicit user request.
Notes and rationale:
  1. Moving the focus automatically to a new viewport may disorient users with cognitive disabilities or users who are blind, and it may be difficult to restore the previous point of regard.
Example techniques:
  1. Allow the user to configure how the current focus changes when a new viewport opens. For instance, the user might choose between these two options:
    1. Do not change the focus when a viewport opens, but alert the user (e.g., with a beep, flash, and text message on the status bar). Allow the user to navigate directly to the new window upon demand.
    2. Change the focus when a window opens and use a subtle alert (e.g., a beep, flash, and text message on the status bar) to indicate that the focus has changed.
  2. If a new viewport or prompt appears but focus does not move to it, alert assistive technologies (per checkpoint 6.5) so that they may discreetly inform the user.
  3. When a viewport is duplicated, the focus in the new viewport should initially be the same as the focus in the original viewport. Duplicate viewports allow users to navigate content (e.g., in search of some information) in one viewport while allowing the user to return with little effort to the point of regard in the duplicate viewport. There are other techniques for accomplishing this (e.g., "registers" in Emacs).
  4. In JavaScript, the focus may be changed with myWindow.focus(); a configurable sketch follows this list.
  5. For user agents that implement CSS 2 [CSS2], the following rule will generate a message to the user at the beginning of link text for links that are meant to open new windows when followed:
       A[target=_blank]:before{content:"Open new window"}
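The configurable sketch referred to in technique 4 (illustrative only; it wraps window.open at the content level, uses a hypothetical user setting, and uses the status bar as the subtle alert described above; a real user agent would implement this configuration internally):

    // Honor a user preference that keeps focus in the current viewport
    // when content opens a new one.
    var moveFocusToNewViewports = false;   // hypothetical user setting
    var originalOpen = window.open;
    window.open = function (url, name, features) {
      var newWindow = originalOpen.call(window, url, name, features);
      if (newWindow && !moveFocusToNewViewports) {
        window.focus();                                       // keep focus here
        window.status = "New window opened: " + (name || url); // subtle alert
      }
      return newWindow;
    };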
    
Doing more:
  1. The user agent may also allow configuration about whether the pointing device moves automatically to windows that open without an explicit user request.

5.2 For graphical user interfaces, allow configuration so that the viewport with the current focus remains "on top" of all other viewports with which it overlaps. [Priority 2] Both content and user agent. (Checkpoint 5.2)
Notes and rationale:
  1. The alert is important to ensure that the user realizes a new viewport has opened; the new viewport may be hidden by the viewport configured to remain on top.
  2. In most operating environments, the viewport with focus is generally the viewport "on top". In some environments, it's possible to allow a viewport that is not on top to have focus.
Doing more:
  1. The user agent may also allow configuration about whether the viewport designated by the pointing device always remains on top.

5.3 Allow configuration so that viewports only open on explicit user request. In this configuration, instead of opening a viewport automatically, alert the user and allow the user to open it on demand (e.g., by following a link or confirming a prompt). Allow the user to close viewports. If a viewport (e.g., a frame set) contains other viewports, these requirements only apply to the outermost container viewport. [Priority 2] Both content and user agent. (Checkpoint 5.3)
Note: User creation of a new viewport (e.g., empty or with a new resource loaded) through the user agent's user interface constitutes an explicit user request. See also checkpoint 5.1 (for control over changes of focus when a viewport opens) and checkpoint 6.5 (for programmatic alert of changes to the user interface).
Notes and rationale:
  1. Navigation of multiple open viewports may be difficult for some users who navigate viewports serially (e.g., users with visual or physical disabilities) and for some users with cognitive disabilities (who may be disoriented).
Example techniques:
  1. For HTML [HTML4], allow the user to control the process of opening a document in a new "target" frame or a viewport created by a script. For example, for target="_blank", open the window according to the user's preference.
  2. For SMIL [SMIL], allow the user to control viewports created with the "new" value of the "show" attribute.
  3. In JavaScript, windows may be opened with the following (a confirmation sketch follows this list):
    • myWindow.open("http://example.com/", "myNewWindow");
    • myWindow.showHelp(URI);
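The confirmation sketch referred to in technique 3 (illustrative only; a real user agent would apply this at the user interface level rather than in content script):

    // Only open a new viewport after the user confirms.
    var originalOpen = window.open;
    window.open = function (url, name, features) {
      if (window.confirm("Open a new window for " + url + "?")) {
        return originalOpen.call(window, url, name, features);
      }
      return null;   // the user declined; no viewport is opened
    };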

5.4 Allow configuration to prompt the user to confirm (or cancel) any form submission that is not caused by an explicit user request to activate a form submit control. [Priority 2] Content only. (Checkpoint 5.4)
Note: For example, do not submit a form automatically when a menu option is selected, when all fields of a form have been filled out, or when a "mouseover" or "change" event occurs. The user agent may satisfy this checkpoint by prompting the user to confirm all form submissions.
Example techniques:
  1. In HTML 4 [HTML4], form submit controls are the INPUT element (section 17.4) with type="submit" and type="image", and the BUTTON element (section 17.5) with type="submit".
  2. Allow the user to configure script-based submission (e.g., form submission accomplished through an "onChange" event). For instance, allow these settings (a confirmation sketch appears at the end of these example techniques):
    1. Do not allow script-based submission.
    2. Allow script-based submission after confirmation from the user.
    3. Allow script-based submission without prompting the user (but not by default).
  3. Authors may write scripts that submit a form when particular events occur (e.g., "onchange" events). Be aware of this type of practice:
    <SELECT NAME="condition" onchange="switchpage(this)">
    
    
    As soon as the user attempts to navigate the menu, the "switchpage" function opens a document in a new viewport. Try to avoid orientation problems that may be caused by scripts bound to form controls.
  4. Be aware that users may inadvertently press the Return or Enter key and accidentally submit a form.
  5. In JavaScript, a form may be submitted with:
    • document.forms[0].submit();
    • document.all.mySubmitButton.click();
  6. Generate an explicit form submit button when the author has not provided one.
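The confirmation sketch referred to in technique 2 (illustrative only; a user agent would implement this internally rather than through content script):

    // Require confirmation for script-based submission. Calling submit()
    // from a script does not involve an explicit user request to activate
    // a submit control, so prompt before letting it proceed.
    var originalSubmit = HTMLFormElement.prototype.submit;
    HTMLFormElement.prototype.submit = function () {
      if (window.confirm("A script is submitting this form. Continue?")) {
        originalSubmit.call(this);
      }
    };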
Doing more:
  1. Users who navigate a document serially may think that the submit button in a form is the "last" control they need to complete before submitting the form. Therefore, for forms in which additional controls follow a submit button, if those controls have not been completed, inform the user and ask for confirmation (or completion) before submission.
  2. For forms, allow users to search for controls that need to be changed by the user before submitting the form.

5.5 Allow configuration to prompt the user to confirm (or cancel) any payment that results from activation of a fee link. [Priority 2] Content only. (Checkpoint 5.5)
Example techniques:
  1. Allow the user to configure the user agent to prompt for payments above a certain amount (including any payment).
  2. Warn the user that even in this configuration, the user agent may not be able to recognize some payment mechanisms.

5.6 Allow configuration to prompt the user to confirm (or cancel) closing any viewport that starts to close without explicit user request. [Priority 3] Both content and user agent. ( Checkpoint 5.6)
Example techniques:
  1. In JavaScript, windows may be closed with myWindow.close();

[next guideline 6] [review guideline 5] [previous guideline 4] [contents]

Guideline 6. Implement standard application programming interfaces.

Checkpoints

6.1 Provide programmatic read access to HTML and XML content by conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the interfaces they define: (1) the Core module for HTML; (2) the Core and XML modules for XML. [Priority 1] Content only. (Checkpoint 6.1)
Note: Please refer to the "Document Object Model (DOM) Level 2 Core Specification" [DOM2CORE] for information about HTML and XML versions covered.
Notes and rationale:
  1. The primary reason for requiring user agents to implement the DOM is that this gives assistive technologies access to the original structure of the document. For example, this means that assistive technologies that render content as speech are not required to construct the speech view by "reverse engineering" a graphical view. Direct access to the structure allows the assistive technologies to render content in a manner best suited to a particular output device. This does not mean that assistive technologies should be prevented from having access to the rendering of the conforming user agent; simply that they should not be required to depend entirely on it. In fact, speech user agents may wish to synchronize a graphical view with a speech view. A minimal sketch of this kind of read access appears after this list.
  2. Note that the W3C DOM is designed to be used on a server as well as a client and does not address some user interface-specific information.
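A minimal JavaScript sketch (an assistive technology would typically use the same DOM Core interfaces through a language binding such as COM or Java):

    // Walk the document tree through the DOM Core interfaces and collect
    // text content from the original structure, rather than reconstructing
    // it from a rendered view.
    function collectText(node, out) {
      if (node.nodeType === 3) {            // Node.TEXT_NODE
        out.push(node.data);
      }
      for (var child = node.firstChild; child; child = child.nextSibling) {
        collectText(child, out);
      }
      return out;
    }
    var allText = collectText(document.documentElement, []).join(" ");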
Example techniques:
  1. Refer to a listing of DOM implementations at the Open Directory Project [ODP-DOM].
Related techniques:
  1. See the appendix on loading assistive technologies for DOM access.
References:
  1. For information about rapid access to Internet Explorer's [IE-WIN] DOM through COM, refer to [BHO].
  2. Refer to the DirectDOM Java implementation of the DOM [DIRECTDOM].

6.2 If the user can modify HTML and XML content through the user interface, provide the same functionality programmatically by conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the interfaces they define: (1) the Core module for HTML; (2) the Core and XML modules for XML. [Priority 1] Content only. (Checkpoint 6.2)
Note: For example, if the user interface allows users to complete HTML forms, this must also be possible through the required DOM APIs. Please refer to the "Document Object Model (DOM) Level 2 Core Specification" [DOM2CORE] for information about HTML and XML versions covered.
Notes and rationale:
  1. Allowing assistive technologies write access through the DOM allows them to:
    • modify the attribute list of a document and thus add information into the document object that will not be rendered by the user agent.
    • add entire nodes to the document that are specific to the assistive technologies and that may not be rendered by a user agent unaware of their function.
  2. The ability to write to the DOM can improve performance for the assistive technology. For example, if an assistive technology has already traversed a portion of the document object and knows that a section (e.g., a style element) could not be rendered, it can mark this section "to be skipped".
  3. Another benefit is the ability to add information that is necessary for audio rendering but would not be stored directly in the DOM during parsing (e.g., numbers in an ordered list). An assistive technology component can add numeric information to the document object. The assistive technology can also mark a subtree as having been traversed and updated, to eliminate recalculating the information the next time the user visits the subtree (a sketch of this approach follows this list).
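Illustrative only; the attribute name is invented, and nested lists are ignored for brevity:

    // Add item numbers to ordered lists as attributes so that audio
    // rendering can announce them without recalculating on each visit.
    var lists = document.getElementsByTagName("ol");
    for (var i = 0; i < lists.length; i++) {
      var items = lists[i].getElementsByTagName("li");
      for (var j = 0; j < items.length; j++) {
        items[j].setAttribute("data-at-item-number", String(j + 1));
      }
    }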
Related techniques:
  1. See also techniques for checkpoint 6.1.

6.3 For markup languages other than HTML and XML, provide programmatic access to content using standard APIs (e.g., platform-independent APIs and standard APIs for the operating environment). If standard APIs do not exist, provide programmatic access through publicly documented APIs. [Priority 1] Content only. (Checkpoint 6.3)
Note: This checkpoint addresses content not covered by checkpoints 6.1 and 6.2.
Notes and rationale:
  1. Some examples of markup languages covered by this checkpoint include SGML applications other than HTML, as well as RTF and TeX.
Related techniques:
  1. See techniques for checkpoint 6.4.
References:
  1. Some public APIs that enable access include:
    • Microsoft Active Accessibility ([MSAA]) in Windows 95/98/NT versions.
    • Sun Microsystems Java Accessibility API ([JAVAAPI]) in Java JDK. If the user agent supports Java applets and provides a Java Virtual Machine to run them, the user agent should support the proper loading and operation of a Java native assistive technology. This assistive technology can provide access to the applet as defined by Java accessibility standards.

6.4 Provide programmatic read and write access to user agent user interface controls using standard APIs. If standard APIs do not exist, provide programmatic access through publicly documented APIs. [Priority 1] User agent only. (Checkpoint 6.4)
Note: Per checkpoint 6.6, provide programmatic access through standard APIs (e.g., platform-independent APIs such as the W3C DOM; standard APIs defined for a specific operating system; and conventions for programming languages, plug-ins, virtual machine environments, etc.). This checkpoint requires user agents to provide programmatic access even in the absence of a standard API for doing so.
Example techniques:
  1. Use standard operating environment APIs that support accessibility by providing a bridge between the standard user interface supported by the operating environment and alternative user interfaces developed by assistive technologies. User agents that implement these APIs are generally more compatible with assistive technologies and provide accessibility at no extra cost.
  2. Use standard user interface controls. Third-party assistive technology developers are more likely able to access standard controls than custom controls. If you use custom controls, review them for accessibility and compatibility with third-party assistive technology. Ensure that they provide accessibility information through an API as is done for the standard controls.
  3. Make use of operating environment-level features. See the appendix of accessibility features for some common operating systems.
References:
  1. Some public accessibility APIs include:
    • Microsoft Active Accessibility ([MSAA]) in Windows 95/98/NT versions.
    • Sun Microsystems Java Accessibility API ([JAVAAPI]) in Java JDK. If the user agent supports Java applets and provides a Java Virtual Machine to run them, the user agent should support the proper loading and operation of a Java native assistive technology. This assistive technology can provide access to the applet as defined by Java accessibility standards.
  2. For information about rapid access to Internet Explorer's [IE-WIN] DOM through COM, refer to Browser Helper Objects [BHO].

6.5 Using standard APIs, provide programmatic alert of changes to content, user interface controls, selection, content focus, and user interface focus. If standard APIs do not exist, provide programmatic alert through publicly documented APIs. [Priority 1] Both content and user agent. (Checkpoint 6.5)
Note: For instance, when user interaction in one frame causes automatic changes to content in another, provide programmatic alert through standard APIs. Use the standard APIs required by the checkpoints of guideline 6.
Example techniques:
  1. Write output to and take input from standard operating environment APIs rather than directly from hardware controls. This will enable the input/output to be redirected from or to assistive technology devices – for example, screen readers and braille displays often redirect output (or copy it) to a serial port, while many devices provide character input, or mimic mouse functionality. The use of generic APIs makes this feasible in a way that allows for interoperability of the assistive technology with a range of applications.
  2. Alert the user when an action in one frame causes the content of another frame to change. Allow the user to navigate with little effort to the frame(s) that changed.
References:
  1. Refer to "mutation events" in "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS]. This DOM Level 2 specification allows assistive technologies to be informed of changes to the document tree (see the sketch after this list).
  2. Refer also to information about monitoring HTML events through the document object model in Internet Explorer [IE-WIN].
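The sketch referred to in reference 1 (illustrative only; DOM Level 2 mutation events are unevenly supported and have since been superseded by newer mutation-observation APIs, and the notification helper is hypothetical):

    // Alert an assistive technology when nodes are added to or removed
    // from the document tree.
    function notifyAssistiveTechnology(kind, node) {
      // hypothetical helper: forward the change through whatever channel
      // the assistive technology uses
    }
    document.addEventListener("DOMNodeInserted", function (event) {
      notifyAssistiveTechnology("content-inserted", event.target);
    }, false);
    document.addEventListener("DOMNodeRemoved", function (event) {
      notifyAssistiveTechnology("content-removed", event.target);
    }, false);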

6.6 Implement standard accessibility APIs (e.g., of the operating environment). Where these APIs do not enable the user agent to satisfy the requirements of this document, use the standard input and output APIs of the operating environment. [Priority 1] Both content and user agent. ( Checkpoint 6.6)
Note: Accessibility APIs enable assistive technologies to monitor input and output events. As part of satisfying this checkpoint, the user agent needs to ensure that text content is available as text through these APIs (and not, for example, as a series of strokes drawn on the screen).
Example techniques:
  1. Operating system and application frameworks provide standard mechanisms for communication with input devices. In the case of Windows, OS/2, the X Window System, and Mac OS, the window manager provides Graphical User Interface (GUI) applications with this information through the messaging queue. In the case of non-GUI applications on desktop operating systems, the compiler run-time libraries provide standard mechanisms for receiving keyboard input. If you use an application framework such as the Microsoft Foundation Classes, the framework should support the same standard input mechanisms.
  2. Do not communicate directly with an input device; this may circumvent operating environment messaging. For instance, in Windows, do not open the keyboard device driver directly. It is often the case that the windowing system needs to change the form and method for processing standard input mechanisms for proper application coexistence within the user interface framework.
  3. Do not implement your own input device event queue mechanism; this may circumvent operating environment messaging. Some assistive technologies use standard system facilities for simulating keyboard and mouse events. From the application's perspective, these events are no different from those generated by the user's actions. The "Journal Playback Hooks" (in both OS/2 and Windows) are one example of a facility that feeds the standard event queues. For an example of a standard event queue mechanism, refer to the "Carbon Event Manager Preliminary API Reference" [APPLE-HI].
  4. Operating environments provide standard mechanisms for using standard output devices. In the case of common desktop operating systems such as Windows, OS/2, and Mac OS, standard APIs are provided for writing to the display and the multimedia subsystems.
  5. Avoid rendering text in the form of a bitmap before transferring it to the screen, since some screen readers rely on the user agent's offscreen model. An offscreen model is rendered content created by an assistive technology that is based on the rendered content of another user agent. Assistive technologies that rely on an offscreen model generally construct it by intercepting standard operating environment drawing calls. For example, in the case of display drivers, some screen readers are designed to monitor what is drawn on the screen by hooking drawing calls at different points in the drawing process. While knowing about the user agent's formatting may provide some useful information to assistive technologies, this document encourages assistive technologies to access content directly through published APIs (such as the DOM) rather than via a particular rendering.
  6. Common operating environment two-dimensional graphics engines and drawing libraries provide functions for drawing text to the screen. Examples of this are the Graphics Device Interface (GDI) for Windows, Graphics Programming Interface (GPI) for OS/2, and the X library (XLIB) for the X Window System or Motif.
  7. Do not communicate directly with an output device.
  8. Do not draw directly to the video frame buffer.
  9. Do not provide your own mechanism for generating pre-defined operating environment sounds.
  10. When writing textual information in a GUI operating environment, use standard operating environment APIs for drawing text.
  11. Use operating environment resources for rendering audio information. When doing so, do not take exclusive control of system audio resources. This could prevent an assistive technology such as a screen reader from speaking if they use software text-to-speech conversion. Also, in operating environments like Windows, a set of standard audio sound resources are provided to support standard sounds such as alerts. These preset sounds are used to trigger SoundSentry graphical cues when a problem occurs; this benefits users with hearing disabilities. These cues may be manifested by flashing the desktop, active caption bar, or current viewport. It is important to use the standard mechanisms to generate audio feedback so that operating environments or special assistive technologies can add additional functionality for users with hearing disabilities.
References:
  1. Microsoft Active Accessibility ([MSAA]) is the standard accessibility API for the Windows 95/98/NT operating systems.
  2. Sun Microsystems Java Accessibility API ([JAVAAPI]) in the Java JDK is the standard accessibility API for the Java environment.

6.7 Implement the operating environment's standard APIs for the keyboard. If standard APIs for the keyboard do not exist, implement publicly documented APIs for the keyboard. [Priority 1] User agent only. (Checkpoint 6.7)
Note: An operating environment may define more than one standard API for the keyboard. For instance, for Japanese and Chinese, input may be processed in two stages, with an API for each.
Example techniques:
  1. Account for author-specified keyboard bindings, such as those specified by the "accesskey" attribute in HTML 4 ([HTML4], section 17.11.2).
  2. Test that all user interface components may be operable by software or devices that emulate a keyboard. Use SerialKeys and/or voice recognition software to test keyboard event emulation.
Doing more:
  1. Where standard operating environment controls do not provide accessibility through standard keyboard input mechanisms, enhance them so that they do. For example, provide keyboard navigation to menus and dialog box controls in the Apple Macintosh operating system. Another example is the Java Foundation Classes, where internal frames do not provide a keyboard mechanism to give them focus. In this case, you will need to add keyboard activation through the standard keyboard activation facility for Abstract Window Toolkit components.
Related techniques:
  1. Apply the techniques for checkpoint 1.1 to the keyboard.

6.8 For an API implemented to satisfy requirements of this document, support the character encodings required for that API. [Priority 1] Both content and user agent. (Checkpoint 6.8)
Note: Support for character encodings is important so that text is not "broken" when communicated to assistive technologies. For example, the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5 requires that the DOMString type be encoded using UTF-16. This checkpoint is an important special case of the other API requirements of this document.
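As a small illustration of why the encoding matters (a JavaScript sketch; ECMAScript strings, like DOMString, are sequences of UTF-16 code units):

    // A character outside the Basic Multilingual Plane occupies two UTF-16
    // code units; an API that miscounts or re-encodes these units can hand
    // an assistive technology "broken" text.
    var s = "G\uD834\uDD1E";   // "G" followed by MUSICAL SYMBOL G CLEF (U+1D11E)
    s.length;                  // 3 code units, although only 2 characters
    s.charCodeAt(1);           // 0xD834, the high surrogate of the clef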
Example techniques:
  1. The list of character encodings that any conforming implementation of Java version 1.3 [JAVA13] must support is: US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, and UTF-16.
  2. MSAA [MSAA] relies on the COM interface, which in turn relies on Unicode [UNICODE], which means that for MSAA a user agent must support UTF-16. From Chapter 3 of the COM documentation, on interfaces, entitled "Interface Binary Standard":

    "Finally, and quite significantly, all strings passed through all COM interfaces (and, at least on Microsoft platforms, all COM APIs) are Unicode strings. There simply is no other reasonable way to get interoperable objects in the face of (i) location transparency, and (ii) a high-efficiency object architecture that doesn't in all cases intervene system-provided code between client and server. Further, this burden is in practice not large."


6.9 For user agents that implement Cascading Style Sheets (CSS), provide programmatic access to those style sheets by conforming to the CSS module of the W3C Document Object Model (DOM) Level 2 Style Specification [DOM2STYLE] and exporting the interfaces it defines. [Priority 2] Content only. (Checkpoint 6.9)
Note: As of the publication of this document, Cascading Style Sheets (CSS) are defined by CSS Level 1 [CSS1] and CSS Level 2 [CSS2]. Please refer to the "Document Object Model (DOM) Level 2 Style Specification" [DOM2STYLE] for information about CSS versions covered.
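The following JavaScript sketch shows the kind of access the DOM Level 2 Style interfaces provide (same-origin style sheets assumed):

    // Enumerate the style sheets and rules exposed through the DOM Level 2
    // Style interfaces.
    for (var i = 0; i < document.styleSheets.length; i++) {
      var sheet = document.styleSheets[i];      // CSSStyleSheet
      for (var j = 0; j < sheet.cssRules.length; j++) {
        var rule = sheet.cssRules[j];           // CSSRule
        // rule.cssText is the serialized rule, e.g. "P { color: red }"
      }
    }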
Related techniques:
  1. See techniques for checkpoint 6.1.

6.10 Ensure that programmatic exchanges proceed in a timely manner. [Priority 2] Both content and user agent. (Checkpoint 6.10)
Note: For example, the programmatic exchange of information required by other checkpoints in this document should be efficient enough to prevent information loss, a risk when changes to content or user interface occur more quickly than the communication of those changes. The techniques for this checkpoint explain how developers can reduce communication delays. This will help ensure that assistive technologies have timely access to the document object model and other information that is important for providing access.
Doing more:
  1. Alert the user when information may be lost due to communication delays.
Related techniques:
  1. Please see the appendix that explains how to load assistive technologies for DOM access.

[next guideline 7] [review guideline 6] [previous guideline 5] [contents]

Guideline 7. Observe operating environment conventions.

Checkpoints

7.1 Follow operating environment conventions that benefit accessibility when implementing the selection, content focus, and user interface focus. [Priority 1] User agent only. ( Checkpoint 7.1)
Note: This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 9.1.
Related techniques:
  1. See techniques for checkpoint 7.3.
References:
  1. Refer to Selection and Partial Selection of DOM Level 2 ([DOM2RANGE], section 2.2.2).
  2. For information about focus in the Motif environment (under X Windows), refer to the OSF/Motif Style Guide [MOTIF].

7.2 Ensure that default input configurations do not interfere with operating environment accessibility conventions. [Priority 1] User agent only. ( Checkpoint 7.2)
Note: In particular, default configurations should not interfere with operating environment conventions for keyboard accessibility. See also checkpoint 11.5.
Example techniques:
  1. The default configuration should not include "Alt-F4", "Control-Alt-Delete", or other combinations that have reserved meanings in a given operating environment.
  2. Clearly document any default configurations that depart from operating environment conventions.
Related techniques:
  1. Some reserved keyboard bindings are listed in the appendix on accessibility features of some operating systems.

7.3 Follow operating environment conventions that benefit accessibility. In particular, follow conventions that benefit accessibility for user interface design, keyboard configuration, product installation, and documentation. [Priority 2] User agent only. (Checkpoint 7.3)
Note: Operating environment conventions that benefit accessibility are those described in this document and in platform-specific accessibility guidelines.
Notes and rationale:
  1. Much of the rationale behind the content requirements of User Agent Accessibility Guidelines 1.0 also makes sense for the user agent user interface (e.g., allow the user to turn off any blinking or moving user interface components).
Example techniques:
  1. Follow operating environment conventions for loading assistive technologies. See the appendix on loading assistive technologies for DOM access for information about how an assistive technology developer can load its software into a Java Virtual Machine.
  2. Inherit operating environment settings related to accessibility (e.g., for fonts, colors, natural language preferences, input configurations, etc.).
  3. Ensure that any online services (e.g., automated update facilities, download-and-install functionalities, sniff-and-fill forms, etc.) observe relevant operating environment conventions concerning device independence and accessibility (as well as the Web Content Accessibility Guidelines 1.0 [WCAG10]).
  4. Evaluate the standard interface controls on the target platform against any built-in operating environment accessibility functions (see the appendix on accessibility features of some operating systems). Ensure that the user agent operates properly with all these functions. Here is a sample of features to consider:
    • Microsoft Windows offers an accessibility function called "High Contrast". Standard window classes and controls automatically support this setting. However, applications created with custom classes or controls need to use the "GetSysColor" API to ensure compatibility with High Contrast.
    • Apple Macintosh offers an accessibility function called "Sticky Keys". Sticky Keys operate with keys the operating environment recognizes as modifier keys, and therefore a custom control should not attempt to define a new modifier key.
    • Maintain consistency in the user interface between versions of the software. Consistency is less important than improved general accessibility and usability when implementing new features. However, developers should make changes conservatively to the layout of user interface controls, the behavior of existing functionalities, and the default keyboard configuration.
Related techniques:
  1. See techniques for checkpoint 6.6, checkpoint 6.4, and checkpoint 7.2.
References:
  1. Follow accessibility guidelines for specific platforms:
    • "Macintosh Human Interface Guidelines" [APPLE-HI]
    • "IBM Guidelines for Writing Accessible Applications Using 100% Pure Java" [JAVA-ACCESS].
    • "An Inter-client Exchange (ICE) Rendezvous Mechanism for X Window System Clients" [ICE-RAP].
    • "Information for Developers About Microsoft Active Accessibility" [MSAA].
    • "The Inter-Client Communication Conventions Manual" [ICCCM].
    • "Lotus Notes accessibility guidelines" [NOTES-ACCESS].
    • "Java accessibility guidelines and checklist" [JAVA-CHECKLIST].
    • "The Java Tutorial. Trail: Creating a GUI with JFC/Swing" [JAVA-TUT].
    • "The Microsoft Windows Guidelines for Accessible Software Design" [MS-SOFTWARE].
  2. Follow general guidelines for producing accessible software:
    • "Accessibility for applications designers" [MS-ENABLE].
    • "Application Software Design Guidelines" [TRACE-REF]. Refer also to "EZ ACCESS(tm) for electronic devices V 2.0 implementation guide" [TRACE-EZ] from the Trace Research and Development Center.
    • Articles and papers from Sun Microsystems about accessibility [SUN-DESIGN].
    • "EITAAC Desktop Software standards" [EITAAC].
    • "Requirements for Accessible Software Design" [ED-DEPT].
    • "Software Accessibility" [IBM-ACCESS].
    • "Towards Accessible Human-Computer Interaction" [SUN-HCI].
    • "What is Accessible Software" [WHAT-IS].
    • Accessibility guidelines for Unix and X Window applications [XGUIDELINES].

7.4 Follow operating environment conventions to indicate the input configuration. [Priority 2] User agent only. ( Checkpoint 7.4)
Note: For example, in some operating environments, developers may specify which command sequence will activate a functionality so that the standard user interface components display that binding. For example, if a functionality is available from a menu, the letter of the activating key may be underlined in the menu. This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 11.5.
Example techniques:
  1. Use operating environment conventions to indicate the current configuration (e.g., in menus, indicate what key strokes will activate the functionality, underline single keys that will work in conjunction with a key such as Alt, etc.). These are conventions used by the Sun Java Foundation Classes [JAVA-TUT] and the Microsoft Foundation Classes for Windows.
  2. Ensure that information about changes to the input configuration is available in a device-independent manner (e.g., through visual and audio cues, and through text).
  3. If the current configuration changes locally (e.g., a search prompt opens, changing the keyboard mapping for the duration of the prompt), alert the user.
  4. Named configurations are easier to remember. This is especially important for people with certain types of cognitive disabilities. For example, if the invocation of a search prompt changes the input configuration, the user may remember more easily which key strokes are meaningful in search mode if alerted that there is a "Search Mode". Context-sensitive help (if available) should reflect the change in mode, and a list of keybindings for the current mode should be readily available to the user.
Related techniques:
  1. See input configuration techniques.

[next guideline 8] [review guideline 7] [previous guideline 6] [contents]

Guideline 8. Implement specifications that benefit accessibility.

Checkpoints

8.1 Implement the accessibility features of all implemented specifications (markup languages, style sheet languages, metadata languages, graphics formats, etc.). The accessibility features of a specification are those identified as such and those that satisfy all of the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10]. [Priority 1] Content only. ( Checkpoint 8.1)
Note: This checkpoint applies to both W3C-developed and non-W3C specifications.
Example techniques:
  1. Make obvious to users features that are known to benefit accessibility. Make them easy to find in the user interface and in documentation.
  2. Some specifications include optional features (not required for conformance to the specification). If an optional feature is likely to cause accessibility problems, developers should either ensure that the user can turn off the feature or not implement the feature at all.
  3. Refer to the following list of accessibility features of HTML 4 [HTML4] (in addition to those described in techniques for checkpoint 2.1):
References:
  1. Refer to the "Accessibility Features of CSS" [CSS-ACCESS]. Note that CSS 2 includes properties for configuring synthesized speech styles.
  2. Refer to the "Accessibility Features of SMIL" [SMIL-ACCESS].
  3. Refer to the "Accessibility Features of SVG" [SVG-ACCESS].
  4. For information about the Sun Microsystems Java Accessibility API in Java JDK, refer to [JAVAAPI].
  5. For information about captioning for the Synchronized Accessible Multimedia Interchange (SAMI), refer to [SAMI].

8.2 Use and conform to either (1) W3C Recommendations when they are available and appropriate for a task, or (2) non-W3C specifications that enable the creation of content that conforms to the Web Content Accessibility Guidelines 1.0 [WCAG10] at any conformance level. [Priority 2] Content only. (Checkpoint 8.2)
Note: For instance, for markup, the user agent may conform to HTML 4 [HTML4], XHTML 1.0 [XHTML10], or XML 1.0 [XML]. For style sheets, the user agent may conform to CSS ([CSS1], [CSS2]). For mathematics, the user agent may conform to MathML 2.0 [MATHML20]. For synchronized multimedia, the user agent may conform to SMIL 1.0 [SMIL]. A specification is considered "available" if it is published (e.g., as a W3C Recommendation) in time for integration into a user agent's development cycle.
Notes and rationale:
  1. The requirement of this checkpoint is to conform to at least one W3C Recommendation that is available and appropriate for a particular task, or at least one non-W3C specification that allows the creation of content that conforms to WCAG 1.0 [WCAG10]. For example, user agents would satisfy this checkpoint by conforming to the Portable Network Graphics 1.0 specification [PNG] for raster images. In addition, user agents may implement other image formats such as JPEG, GIF, etc. Each specification defines what conformance means for that specification.
Example techniques:
  1. If more than one version or level of a specification is appropriate for a particular task, user agents are encouraged to conform to the latest version. However, developers should consider implementing the version that best supports accessibility, even if this is not the latest version.
  2. For reasons of backward compatibility, user agents should continue to implement deprecated features of specifications. Information about deprecated language features is generally part of the language's specification.
References:
  1. The list of current W3C Recommendations and other technical documents is available at http://www.w3.org/TR/.
  2. W3C makes available validation services (e.g., for HTML and CSS) to promote the proper usage and implementation of specifications.
  3. Information about PDF and accessibility is made available by Adobe [ADOBE].

[next guideline 9] [review guideline 8] [previous guideline 7] [contents]

Guideline 9. Provide navigation mechanisms.

Checkpoints

9.1 Allow the user to make the selection and focus of each viewport (including frames) the current selection and current focus, respectively. [Priority 1] User agent only. (Checkpoint 9.1)
Note: For example, when all frames of a frameset are displayed side-by-side, allow the user (via the keyboard) to move the focus among them.
Example techniques:
  1. Some operating environments provide a means to move the user interface focus among all open windows using multiple input devices (e.g., keyboard and mouse). This technique would suffice for switching among user agent viewports that are separate windows.

9.2 Allow the user to move the content focus to any enabled element in the viewport. If the author has not specified a navigation order, allow at least forward sequential navigation to each element, in document order. The user agent may also include disabled elements in the navigation order. [Priority 1] Content only. (Checkpoint 9.2)
Note: In addition to forward sequential navigation, the user agent should also allow reverse sequential navigation. This checkpoint is an important special case of checkpoint 9.8.
Example techniques:
  1. Allow the user to move the content focus to each enabled element by repeatedly pressing a single key. Many user agents today allow users to navigate sequentially by repeating a key combination – for example, using the Tab key for forward navigation and Shift-Tab for reverse navigation. Because the Tab key is typically on one side of the keyboard while arrow keys are located on the other, users should be allowed to configure the user agent so that sequential navigation is possible with keys that are physically closer to the arrow keys. See also checkpoint 11.3.
  2. Maintain a logical element navigation order. For instance, users may use the keyboard to navigate among elements or element groups using the arrow keys within a group of elements. One example of a group of elements is a set of radio buttons. Users should be able to navigate to the group of buttons, then be able to select each button in the group. Similarly, allow users to navigate from table to table, but also among the cells within a given table (up, down, left, right, etc.).
  3. Respect author-specified information about navigation order (e.g., the "tabindex" attribute in HTML 4 [HTML4], section 17.11.1). Allow users to override the author-specified navigation order (e.g., by offering an alphabetized view of links or other orderings). See the sketch after this list.
  4. The default sequential navigation order should respect the conventions of the natural language of the document. Thus, for most left-to-right languages, the usual navigation order is top-to-bottom and left-to-right. For right-to-left languages, the order would be top-to-bottom and right-to-left.
  5. Implement the ':hover', ':active', and ':focus' pseudo-classes of CSS 2 ([CSS2], section 5.11.3). This allows users to modify content focus presentation with user style sheets. Use them in conjunction with the CSS 2 ':before' pseudo-elements ([CSS2], section 5.12.3) to clearly indicate that something is a link (e.g., 'A:before { content : "LINK:" }').
  6. In Java, a component is part of the sequential navigation order when added to a panel and its isFocusTraversable method returns true. A component can be removed from the navigation order by extending the component, overloading this method, and returning false.
  7. JAWS for Windows Links List view

    This image shows how JAWS for Windows [JFW] allows users to navigate to links in a document and activate them independently. Users may also configure the user agent to navigate visited links, unvisited links, or both. Users may also change the sequential navigation order, sorting links alphabetically or leaving them in the logical tabbing order. The focus in the links view follows the focus in the main view.
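The sketch referred to in technique 3 (illustrative only; the set of focusable element types is simplified, and a stable sort is assumed so that elements with equal tabindex keep document order):

    // Build a forward sequential navigation order that respects the
    // author-specified "tabindex" attribute and falls back to document
    // order, skipping disabled elements.
    function navigationOrder() {
      var candidates = document.querySelectorAll(
        "a[href], input, select, textarea, button, [tabindex]");
      var ordered = [];
      for (var i = 0; i < candidates.length; i++) {
        if (!candidates[i].disabled && candidates[i].tabIndex >= 0) {
          ordered.push(candidates[i]);
        }
      }
      return ordered.sort(function (a, b) {
        var ta = a.tabIndex > 0 ? a.tabIndex : Number.MAX_VALUE;
        var tb = b.tabIndex > 0 ? b.tabIndex : Number.MAX_VALUE;
        return ta - tb;   // positive tabindex values first, in order
      });
    }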

Doing more:
  1. Provide other sequential navigation mechanisms for particular element types or semantic units, e.g., "Find the next table" or "Find the previous form." For more information about sequential navigation of form controls and form submission, see techniques for checkpoint 5.4.
  2. For graphical user agents (or any user agent offering a two-dimensional display), navigation based not on document order but on layout may also benefit the user. For example, allow the user to navigate up, down, left, and right to the nearest rendered enabled link. This type of navigation may be particularly useful when it is clear from the layout where the next navigation step will take the user (e.g., grid layouts where it is clear what the next link to the left or below will be).
  3. Excessive use of sequential navigation can reduce the usability of software for both disabled and non-disabled users. Some useful types of direct navigation include: navigation based on position (e.g., all links are numbered by the user agent), navigation based on element content (e.g., the first letter of text content), direct navigation to a table cell by its row/column position, and searching (e.g., based on form control text, associated labels, or form control names).

9.3 For each state in a viewport's browsing history, maintain information about the point of regard, content focus, user interface focus, and selection. When the user returns to any state in the viewport history, restore the saved values for all four of these state variables. [Priority 1] User agent only. ( Checkpoint 9.3)
Note: For example, when the user uses the "back" functionality, restore the four state variables.
Example techniques:
  1. If the user agent allows the user to browse multimedia or audio-only presentations, when the user leaves one presentation for another, pause the presentation. When the user returns to a previous presentation, allow the user to resume the presentation where it was paused (i.e., return the point of regard to the same place in space and time). Note: This may be done for a presentation that is available "completely" but not for a "live" stream or any part of a presentation that continues to run in the background.
  2. Allow the user to configure whether leaving a viewport pauses a multimedia presentation.
  3. If the user activates a broken link, leave the viewport where it is and alert the user (e.g., in the status bar and with a graphical or audio alert). Moving the viewport suggests that a link is not broken, which may disorient the user.
  4. In JavaScript, the following may be used to change the Web resource in the viewport and navigate the history (a sketch of saving and restoring viewport state follows this list):
    • myWindow.home();
    • myWindow.forward();
    • myWindow.back();
    • myWindow.navigate("http://example.com/");
    • myWindow.history.back();
    • myWindow.history.forward();
    • myWindow.history.go( -2 );
    • location.href = "http://example.com/";
    • location.reload();
    • location.replace("http://example.com/");
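
  As a non-normative illustration of maintaining the point of regard and content focus across history states, the following JavaScript sketch saves the scroll position and the focused element before the user leaves a state and restores them on return. The History API and modern DOM events are used purely for illustration; a user agent would track these state variables internally, and all names are placeholders (the focused element is assumed to have an "id").

     // Minimal sketch: save and restore the point of regard and content focus
     // for the current history state.
     function saveViewportState() {
       const state = {
         scrollX: window.scrollX,
         scrollY: window.scrollY,
         focusId: document.activeElement ? document.activeElement.id : ""
       };
       history.replaceState(state, "");   // attach the state to the current entry
     }

     function restoreViewportState(state) {
       if (!state) return;
       window.scrollTo(state.scrollX, state.scrollY);   // restore point of regard
       const el = state.focusId ? document.getElementById(state.focusId) : null;
       if (el) el.focus();                              // restore content focus
     }

     window.addEventListener("pagehide", saveViewportState);
     window.addEventListener("pageshow", () => restoreViewportState(history.state));
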
Doing more:
  1. Restore the four state variables after the user refreshes the same content.
References:
  1. Refer to the HTTP/1.1 specification for information about history mechanisms ([RFC2616], section 13.13).

9.4 For the element with content focus, make available the list of input device event handlers explicitly associated with the element. [Priority 2] Content only. (Checkpoint 9.4)
Note: For example, allow the user to query the element with content focus for the list of input device event handlers, or add them directly to the serial navigation order. See checkpoint 1.2 for information about activation of event handlers associated with the element with focus.
Example techniques:
  1. For HTML content, the left mouse button is generally the only mouse button that is used to activate event handlers associated with mouse clicks.
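
  The following non-normative sketch shows one way to compute such a list for HTML content, assuming the handlers are declared as attributes (e.g., "onclick", "onkeydown") as in HTML 4; handlers registered by other means (e.g., through the DOM Events interfaces) are not discoverable this way. JavaScript and the attribute list are used purely for illustration.

     // Illustrative sketch: names of input device event handlers explicitly
     // associated (as attributes) with the element that has content focus.
     const INPUT_HANDLER_ATTRIBUTES = [
       "onclick", "ondblclick", "onmousedown", "onmouseup", "onmouseover",
       "onmousemove", "onmouseout", "onkeypress", "onkeydown", "onkeyup"
     ];

     function listInputHandlers(element) {
       return INPUT_HANDLER_ATTRIBUTES.filter((name) => element.hasAttribute(name));
     }

     // Example: listInputHandlers(document.activeElement) might return
     // ["onclick", "onkeydown"] for a scripted link.
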
References:
  1. See checkpoint 1.2 for information about input device event handlers in HTML 4 [HTML4] and the Document Object Model (DOM) Level 2 Events Specification [DOM2EVENTS].

9.5 Allow configuration so that moving the content focus to an enabled element does not automatically activate any explicitly associated input device event handlers. [Priority 2] Content only. ( Checkpoint 9.5)
Note: In this configuration, user agents should still apply any stylistic changes (e.g., highlighting) that may occur when there is a change in content focus.
Notes and rationale:
  1. First-time users of a page may want access to link text before deciding whether to follow (activate) the link. More experienced users of a page might prefer to follow the link directly, without the intervening content focus step.
Example techniques:
  1. Allow the following configurations:
    1. On invocation of the input binding, move focus to the associated enabled element, but do not activate it.
    2. On invocation of the input binding, move focus to the associated enabled element and prompt the user with information that will allow the user to decide whether to activate the element (e.g., link title or text). Allow the user to suppress future prompts for this particular input binding.
    3. On invocation of the input binding, move focus to the associated enabled element and activate it.
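
  A non-normative sketch of the three configurations above, written in JavaScript purely for illustration. The configuration object, its values, and the function name are placeholders; a user agent would integrate this behavior with its own input handling rather than with page scripts.

     // Hypothetical setting: "focus" | "prompt" | "activate"
     const config = { onBindingInvocation: "prompt" };

     function invokeBinding(element) {
       element.focus();                   // always move the content focus
       if (config.onBindingInvocation === "activate") {
         element.click();                 // activate immediately
       } else if (config.onBindingInvocation === "prompt") {
         const label = element.title || element.textContent || "";
         if (window.confirm("Activate \"" + label.trim() + "\"?")) {
           element.click();
         }
       }
       // "focus": move focus only; stylistic changes still apply via :focus
     }
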

9.6 Allow the user to move the content focus to any enabled element in the viewport. If the author has not specified a navigation order, allow at least forward and reverse sequential navigation to each element, in document order. The user agent must not include disabled elements in the navigation order. [Priority 2] Content only. (Checkpoint 9.6)
Note: This checkpoint is a special case of checkpoint 9.2.
Related techniques:
  1. Apply the techniques of checkpoint 9.2 to enabled elements only.

9.7 Allow the user to search within rendered text content for a sequence of characters from the document character set. Allow the user to start a forward search (in document order) from any selected or focused location in content. When there is a match (1) move the viewport so that the matched text content is within it, and (2) allow the user to search for the next instance of the text from the location of the match. Alert the user when there is no match, when the search reaches the end of content, and prior to any wrapping. Provide a case-insensitive search option for text in scripts (i.e., writing systems) where case is significant. [Priority 2] Content only. (Checkpoint 9.7)
Note: If the user has not indicated a start position for the search, the search should start from the beginning of content. Use operating environment conventions for indicating the result of a search (e.g., selection or content focus). A wrapping search is one that restarts automatically at the beginning of content once the end of content has been reached.
Example techniques:
  1. Use the selection or focus to indicate found text. This will provide assistive technologies with access to the text.
  2. Allow users to search all views (e.g., including views of the text source).
  3. For extremely small viewports or extremely long matches, the entire matched text content may not fit within the viewport. In this case, developers may move the viewport to encompass the initial part of the matched content.
  4. The search string input method should follow operating environment conventions (e.g., for international character input).
  5. When the point of regard depends on time (e.g., for audio viewports), the user needs to be able to search through content that will be available through that viewport. This is analogous to content rendered graphically that is reachable by scrolling.
  6. For frames, allow users to search for content in all frames, without having to be in a particular frame.
  7. For multimedia presentations, allow users to search and examine time-dependent media elements and links in a time-independent manner. For example, present a static list of time-dependent links.
  8. Allow users to search the element content of form controls (where applicable) and any label text.
  9. When searching a document, the user agent should not search text whose properties prevent it from being visible (such as text for which the 'visibility' property is set to 'hidden'), or equivalent text for elements with such properties (such as "alt" text for an image whose 'visibility' is set to 'hidden').
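
  The following non-normative sketch outlines a forward, case-insensitive text search over rendered text that skips invisible text and reports when a wrap would be needed. It uses modern DOM interfaces (TreeWalker, Range, Selection) purely for illustration; a real user agent would search its own content model, and all names are placeholders.

     // Minimal sketch of a forward text search from a given start node.
     function findText(doc, query, startNode) {
       const walker = doc.createTreeWalker(doc.body, NodeFilter.SHOW_TEXT, {
         acceptNode(node) {
           const parent = node.parentElement;
           if (!parent) return NodeFilter.FILTER_REJECT;
           const style = doc.defaultView.getComputedStyle(parent);
           const invisible = style.visibility === "hidden" ||
                             parent.getClientRects().length === 0; // e.g., display: none
           return invisible ? NodeFilter.FILTER_REJECT : NodeFilter.FILTER_ACCEPT;
         }
       });
       // Resume after the last match; further matches inside that same text
       // node are skipped in this simplified sketch.
       if (startNode) walker.currentNode = startNode;
       const needle = query.toLowerCase();
       let node;
       while ((node = walker.nextNode())) {
         const index = node.data.toLowerCase().indexOf(needle);
         if (index !== -1) {
           const range = doc.createRange();              // indicate the match via the selection
           range.setStart(node, index);
           range.setEnd(node, index + query.length);
           const selection = doc.defaultView.getSelection();
           selection.removeAllRanges();
           selection.addRange(range);
           node.parentElement.scrollIntoView({ block: "nearest" }); // move the viewport
           return node;                                  // pass back as the next startNode
         }
       }
       return null;  // no match before end of content: alert the user before any wrap
     }
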
Doing more:
  1. If the number of matches is known, provide this information to orient the user.
  2. It may be confusing to allow users to search for text content that is not rendered (and thus that they have not viewed). If this type of search is possible, alert the user of this particular search mode.
  3. Allow the following additional search functionalities:
    1. Allow the user to start a search from the beginning of the document rather than from the current selection or focus.
    2. Provide distinct alerts for the situation where the user has searched through all content or where the user has simply reached the end of the document and needs to wrap to the beginning.
    3. Allow reverse search so that the user does not have to restart the search from the beginning of the document if the search goes too far.
    4. Allow the user to easily start a search from the beginning of the content currently rendered in the viewport.
    5. Provide the option of searching through conditional content that is associated with rendered content, and render the found conditional content (e.g., by showing its relation to the rendered content).
References:
  1. For information about when case is significant in a script, please refer to Section 4.1 of Unicode [UNICODE].

9.8 Allow the user to navigate efficiently to and among important structural elements. Allow forward and backward sequential navigation to important structural elements. [Priority 2] Content only. (Checkpoint 9.8)
Note: This specification intentionally does not identify which "important elements" must be navigable as this will vary according to markup language. What constitutes "efficient navigation" may depend on a number of factors as well, including the "shape" of content (e.g., serial navigation of long lists is not efficient) and desired granularity (e.g., among tables, then among the cells of a given table).
Notes and rationale:
  1. User agents should construct the navigation view with the goal of breaking content into sensible pieces according to the author's design. In most cases, user agents should not break down content into individual elements for navigation; element-by-element navigation of the document object does not meet the goal of facilitating navigation to important pieces of content. (The navigation view may also be an expanding/contracting outline view; see checkpoint 10.5.) Instead, user agents are expected to construct the navigation view based on markup.
Example techniques:
  1. In HTML 4 [HTML4], important elements include: A, ADDRESS, APPLET, BUTTON, FIELDSET, DD, DIV, DL, DT, FORM, FRAME, H1-H6, IFRAME, IMG, INPUT, LI, LINK (if rendered), MAP, OBJECT, OL, OPTGROUP, OPTION, P, TABLE, TEXTAREA, and UL. HTML also allows authors to specify keyboard configurations ("accesskey", "tabindex"), which can serve as hints about what the author considers important.
  2. Allow navigation based on commonly understood document models, even if they do not adhere strictly to a Document Type Definition (DTD). For instance, in HTML, although headings (H1-H6) are not containers, they may be treated as such for the purpose of navigation. Note that they should be properly nested.
  3. Use the DOM ([DOM2CORE]) as the basis of structured navigation (e.g., a depth-first, pre-order traversal, which corresponds to document order). However, for well-known markup languages such as HTML, structured navigation should take advantage of the structure of the source tree and what is rendered.
  4. Follow operating environment conventions for indicating navigation progress (e.g., selection or content focus).
  5. Allow the user to limit navigation to the cells of a table (notably left and right within a row and up and down within a column). Navigation techniques include keyboard navigation from cell to cell (e.g., using the arrow keys) and page up/down scrolling. See the section on table navigation.
  6. Alert the user when navigation has led to the beginning or end of a structure (e.g., end of a list, end of a form, table row or column end, etc.). See also checkpoint 1.3.
  7. For those languages with known (e.g., by specification, schema, metadata, etc.) conventions for identifying important components, user agents should construct the navigation tree from those components, allowing users to navigate up and down the document tree, and forward and backward among siblings. At the same time, allow users to shrink and expand portions of the document tree. For instance, if a subtree consists of a long series of links, this will pose problems for users with serial access to content. At any level in the document tree (for forward and backward navigation of siblings), limit the number of siblings to between five and ten. Break longer lists down into structured pieces so that users can access content efficiently, decide whether they want to explore it in detail, or skip it and move on.
  8. Tables and forms illustrate the utility of a recursive navigation mechanism. The user should be able to navigate to tables, then change "scope" and navigate within the cells of that table. Nested tables (a table within the cell of another table) fit nicely within this scheme. However, the headers of a nested table may provide important context for the cells of the same row(s) or column(s) containing the nested table. The same ideas apply to forms: users should be able to navigate to a form, then among the controls within that form.
  9. Navigation and orientation go together. The user agent should allow the user to navigate to a location in content, explore the context, navigate again, etc. In particular, user agents should allow users to:
    1. Navigate to a piece of content that the author has identified as important according to the markup language specification and conventional usage. In HTML, for example, this includes headings, forms, tables, navigation mechanisms, and lists.
    2. Navigate past that piece of content (i.e., avoid the details of that component).
    3. Navigate into that piece of content (i.e., choose to view the details of that component).
    4. Change the navigation view as they go, expanding and contracting portions of content that they wish to examine or ignore. This will speed up navigation and facilitate orientation at the same time.
  10. Provide context-sensitive navigation. For instance, when the user navigates to a list or table, provide locally useful navigation mechanisms (e.g., within a table, cell-by-cell navigation) using similar input commands.
  11. Allow users to skip author-specified navigation mechanisms such as navigation bars. For instance, navigation bars at the top of each page at a Web site may force users with screen readers or some physical disabilities to wade through many links before reaching the important information on the page. User agents may facilitate browsing for these users by allowing them to skip recognized navigation bars (e.g., through a configuration option). Some techniques for this include:
    1. Providing a functionality to jump to the first non-link content.
    2. If the number of elements of a particular type is known, provide this information to orient the user.
    3. In HTML, the MAP element may be used to mark up a navigation bar (even when there is no associated image). Thus, users might ask that MAP elements not be rendered in order to hide links inside the MAP element. User agents might allow users to hide MAP elements selectively. For example, hide any MAP element with a "title" attribute specified. Note: Starting in HTML 4, the MAP element allows block content, not just AREA elements.
  12. Allow depth-first as well as breadth-first navigation.
  13. Allow users to navigate synchronized multimedia presentations. See also checkpoint 4.5.
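
  The following non-normative sketch shows forward and backward sequential navigation among a set of important structural elements. The hard-coded HTML selector list and function name are placeholders used purely for illustration; a user agent would derive the set from the markup language and from the user's configuration (see checkpoint 9.9).

     // Minimal sketch: move to the next or previous important element.
     const IMPORTANT = "h1, h2, h3, h4, h5, h6, p, table, form, ul, ol, dl, map, img, object";

     function navigateStructure(doc, current, backward) {
       const elements = Array.from(doc.querySelectorAll(IMPORTANT)); // document order
       if (elements.length === 0) return null;
       let index = current ? elements.indexOf(current)
                           : (backward ? elements.length : -1);
       index += backward ? -1 : 1;
       if (index < 0 || index >= elements.length) {
         return null;    // beginning or end of structure reached: alert the user
       }
       const target = elements[index];
       target.scrollIntoView({ block: "nearest" });     // keep it within the viewport
       if (!target.matches("a[href], button, input, select, textarea")) {
         target.setAttribute("tabindex", "-1");         // make it programmatically focusable
       }
       target.focus();                                  // indicate navigation progress
       return target;
     }
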
Doing more:
  1. Allow the user to navigate characters, words, sentences, paragraphs, screenfuls, etc. according to conventions of the natural language. This benefits users of speech-based user agents and has been implemented by several screen readers, including Winvision [WINVISION], Window-Eyes [WINDOWEYES], and JAWS for Windows [JFW].
References:
  1. The following is a summary of ideas provided by the National Information Standards Organization with respect to Digital Talking Books [TALKINGBOOKS]:

    A talking book's "Navigation Control Center" (NCC) resembles a traditional table of contents, but it is more. It contains links to all headings at all levels in the book, links to all pages, and links to any items that the reader has chosen not to have read. For example, the reader may have turned off the automatic reading of footnotes. To allow the user to retrieve that information efficiently, the reference to the footnote is placed in the NCC and the reader can go to the reference, understand the context for the footnote, and then read the footnote.

    Once the reader is at a desired location and wishes to begin reading, the navigation process changes. Of course, the reader may elect to read sequentially, but often some navigation is required (e.g., frequently people navigate forward or backward one word or character at a time). Moving one sentence or paragraph at a time is also needed. This type of local navigation is different from the global navigation used to get to the location of what you want to read. It is frequently desirable to move from one block element to the next. For example, moving from a paragraph to the next block element, which may be a list, blockquote, or sidebar, is the normally expected mechanism for local navigation.


9.9 Allow configuration and control of the set of important elements required by checkpoint 9.8 and checkpoint 10.5. Allow the user to include and exclude element types in the set of elements. [Priority 3] Content only. ( Checkpoint 9.9)
Note: For example, allow the user to navigate only paragraphs, or only headings and paragraphs, etc. See also checkpoint 6.4.
Example techniques:
  1. Allow the user to navigate HTML elements that share the same "class" attribute.
  2. The CSS 'display' and 'visibility' properties ([CSS2], sections 9.2.5 and 11.2, respectively) allow the user to override the default rendering through user style sheets.

    Example.

    The following CSS 2 style sheet turns off the display of all HTML elements inside the BODY element except heading elements:

    <STYLE type="text/css">
       BODY * { display: none }
       H1, H2, H3, H4, H5, H6 { display: block }
    </STYLE>
    

    Another approach would be to use class selectors to identify those elements to hide or display.

    End example.

Doing more:
  1. Allow the user to navigate according to similar styles (which may be an approximation for similar element types).

[next guideline 10] [review guideline 9] [previous guideline 8] [contents]

Guideline 10. Orient the user.

Checkpoints

10.1 Make available to the user the purpose of each table and the relationships among the table cells and headers. [Priority 1] Content only. (Checkpoint 10.1)
Note: This checkpoint refers only to table information that the user can recognize. Depending on the table, some techniques may be more efficient than others for conveying data relationships. For many tables, user agents rendering in two dimensions may satisfy this checkpoint by rendering a table as a grid and by ensuring that users can find headers associated with cells. However, for large tables or small viewports, allowing the user to query cells for information about related headers may improve access. This checkpoint is an important special case of checkpoint 2.1.
Notes and rationale:
  1. The more complex the table, the more clues to table structure are needed. Make available information summarizing that structure, including any table head and foot rows, any grouping of rows into multiple table bodies, column groups, header cells and how they relate to data cells, the row and column grouping and spanning that qualify a cell's value, cell position information, table dimensions, etc.
Example techniques:
  1. Refer to the THEAD, TBODY, and TFOOT elements of HTML 4 ([HTML4], section 11.2.3). These elements may be "fixed" to the screen (or repeated on paper) with the 'fixed' value of the CSS2 'position' property ([CSS2], section 9.3.1). When these elements are used by authors, users can scroll through data while retaining headers and footers "in view".
  2. In HTML, beyond the TR, TH, and TD elements, the table attributes "summary", "abbr", "headers", "scope", and "axis" provide information about relationships among cells and headers. For more information, see the section on table techniques.
  3. When rendering a table serially, allow the user to specify how cell header information should be rendered before cell data information. Some possibilities are illustrated by the CSS2 'speak-header' property ([CSS2], section 17.7.1).
  4. Internet Explorer context menu item to display table cell header information

    This image shows how Internet Explorer [IE-WIN] provides cell header information through the context menu.
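
  The following non-normative sketch shows how header information might be computed for a focused data cell, using the HTML 4 "headers" attribute when the author supplied it and otherwise falling back to the column header in the first row. It is written in JavaScript against the DOM purely for illustration; row and column spans and the "scope" and "axis" attributes are ignored in this simplified version, and all names and example values are placeholders.

     // Minimal sketch: return the header text associated with a table cell.
     function headersForCell(cell) {
       const table = cell.closest("table");
       const headersAttr = cell.getAttribute("headers");
       if (headersAttr) {                       // author-specified associations
         return headersAttr.trim().split(/\s+/)
           .map((id) => table.querySelector("#" + CSS.escape(id)))
           .filter(Boolean)
           .map((th) => th.textContent.trim());
       }
       // Fallback: the TH in the same column of the first row, if any.
       const columnIndex = cell.cellIndex;
       const firstRow = table.rows[0];
       const columnHeader = firstRow && firstRow.cells[columnIndex];
       return columnHeader && columnHeader.tagName === "TH"
         ? [columnHeader.textContent.trim()]
         : [];
     }

     // Example: headersForCell(document.activeElement) for a focused TD might
     // return ["Travel expenses", "March"] (values shown here are hypothetical).
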


10.2 Provide a mechanism for highlighting the selection and content focus. Allow the user to configure the highlight styles. The highlight mechanism must not rely on color alone. For graphical viewports, if the highlight mechanism involves colors or text decorations, allow the user to choose from among the full range of colors or text decorations supported by the operating environment. [Priority 1] Content only. ( Checkpoint 10.2)
Note: Examples of highlight mechanisms include foreground and background color variations, underlining, distinctive voice pitches, rectangular boxes, etc. Because the selection and focus change frequently, user agents should not highlight them using mechanisms (e.g., font size variations) that cause content to reflow as this may disorient the user. See also checkpoint 7.1.
Notes and rationale:
  1. Two reasons why it is important not to rely on color alone as a distinguishing factor are that some users may not perceive colors and some devices may not render them.
Example techniques:
  1. Inherit selection and focus information from user's settings for the operating environment.
  2. A highlighted selection or focus may span text with different background colors, text foreground colors, font families, etc.
  3. For selection:
  4. For focus, implement the ':hover', ':active', and ':focus' pseudo-classes of CSS 2 ([CSS2], section 5.11.3) and dynamic outlines ([CSS2], section 18.4.1).

    Example.

    The following rule will cause links with focus to appear with a blue background and yellow text.

       A:focus { background: blue; color: yellow }
    

    The following rule will cause TEXTAREA elements with focus to appear with a particular focus outline:

       TEXTAREA:focus { outline: thick black solid }
    
Doing more:
  1. Test the user agent to ensure that individuals who have low vision and use screen magnification software are able to follow highlighted item(s).

10.3 Ensure that all of the default highlight styles for the selection, content focus, enabled elements, recently visited links, and fee links (1) do not rely on color alone, and (2) differ from each other, and not by color alone. [Priority 1] Content only. (Checkpoint 10.3)
Note: For instance, by default a graphical user agent may present the selection using color and a dotted outline, the focus using a solid outline, enabled elements as underlined in blue, recently visited links as dotted underlined in purple, and fee links using a special icon or flag to draw the user's attention.
Example techniques:
  1. If the user overrides the default styling for any one of these mechanisms, the new styling may interfere with the others. Therefore, the user agent should allow the user to configure them all at once or should alert the user to potential conflicts when changes are made. For instance, if the user configures both the selection and focus highlighting to use colors, there may be a conflict (especially if the colors are the same or similar).

10.4 Provide a mechanism for highlighting all enabled elements, recently visited links, and fee links. Allow the user to configure the highlight styles. The highlight mechanism must not rely on color alone. For graphical viewports, if the highlight mechanism involves colors, fonts, or text decorations, allow the user to choose from among the full range of colors, fonts, or text decorations supported by the operating environment. For an image map, the user agent must highlight the image map as a whole and should allow configuration to highlight each enabled region. [Priority 2] Content only. (Checkpoint 10.4)
Note: Examples of highlight mechanisms include foreground and background color variations, font variations, underlining, distinctive voice pitches, rectangular boxes, etc.
Notes and rationale:
  1. For example, most graphical user agents highlight all the links on a page so that users know at a glance where to interact.
Example techniques:
  1. Do not rely solely on fonts or colors to alert the user whether or not the link has previously been followed. Allow the user to configure how information will be presented (colors, sounds, status bar messages, some combination, etc.).
  2. Use CSS2 [CSS2] to add style to these different classes of elements. In particular, consider the 'text-decoration' property ([CSS2], section 16.3.1), aural cascading style sheets, font properties, and color properties.
  3. For enabled elements, implement CSS2 attribute selectors to match elements with associated scripts ([CSS2], section 5.8).
  4. For fee links:
    • The W3C specification "Common Markup for micropayment per-fee-links" [MICROPAYMENT] describes how authors may mark up micropayment information in an interoperable manner.
    • Use standard, accessible interface controls to present information about fees and to prompt the user to confirm payment.
    • For a link that has content focus, allow the user to query the link for fee information (e.g., by activating a menu or key stroke).
  5. The Opera dialog box for configuring the rendering of links

    This image shows how Opera [OPERA] allows the user to configure link rendering, including the identification of visited links.
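
  As a non-normative illustration, the following JavaScript sketch tags enabled elements (approximated here as links, form controls, and elements carrying input device event handler attributes) with a placeholder class so that a configured highlight that does not rely on color alone can be applied to all of them. The selector and the class name "uagent-enabled" are assumptions, not part of any specification.

     // Minimal sketch: mark enabled elements so a configured highlight style
     // can be applied to them.
     const ENABLED_SELECTOR =
       "a[href], area[href], button, input, select, textarea, " +
       "[onclick], [onkeydown], [onkeypress], [onmousedown]";

     function markEnabledElements(doc) {
       doc.querySelectorAll(ENABLED_SELECTOR).forEach((element) => {
         element.classList.add("uagent-enabled");
       });
     }

  A user style sheet could then select the placeholder class and add, for example, a border or underline in addition to any color change.
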

Doing more:
  1. Test the user agent to ensure that individuals who have low vision and use screen magnification software are able to follow highlighted item(s).
Related techniques:
  1. For links, see the section on link techniques, the visited links example in the section on generated content techniques, and techniques for checkpoint 9.2.

10.5 Make available to the user an "outline" view of content, composed of labels for important structural elements (e.g., heading text, table titles, form titles, etc.). [Priority 2] Content only. ( Checkpoint 10.5)
Note: This checkpoint is meant to provide the user with a simplified view of content (e.g., a table of contents). What constitutes a label is defined by each markup language specification. For example, in HTML, a heading (H1-H6) is a label for the section that follows it, a CAPTION is a label for a table, the "title" attribute is a label for its element, etc. A label is not required to be text only. For important elements that do not have associated labels, user agents may generate labels for the outline view. For information about what constitutes the set of important structural elements, please see the Note following checkpoint 9.8. By making the outline view navigable, it is possible to satisfy this checkpoint and checkpoint 9.8 together: Allow users to navigate among the important elements of the outline view, and to navigate from a position in the outline view to the corresponding position in a full view of content. See also checkpoint 9.9.
Example techniques:
  1. For instance, in HTML, labels include the following:
    • The CAPTION element is a label for a TABLE.
    • The "title" attribute is a label for many elements.
    • The H1-H6 elements are labels for the sections that follow them.
    • The LABEL element is a label for a form control.
    • The LEGEND element is a label for a set of form controls.
    • The TH element is a label for a row or column of table cells.
    • The TITLE element is a label for the document.
  2. Allow the user to expand or shrink portions of the outline view (configure detail level) for faster access to important parts of content.
  3. Hide portions of content by using the CSS 'display' and 'visibility' properties ([CSS2], sections 9.2.5 and 11.2, respectively).
  4. Provide a structured view of form controls (e.g., those grouped by LEGEND or OPTGROUP in HTML) along with their labels.
  5. Amaya table of contents view

    This image shows the table of contents view provided by Amaya [AMAYA]. This view is coordinated with the main view so that users may navigate in one viewport and the focus follows in the other. An entry in the table of contents with a target icon means that the heading in the document has an associated anchor.
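
  The following non-normative sketch builds a simple outline view from labels (the document title, headings, table captions, and form titles), generating a label when none is available. JavaScript and the DOM are used purely for illustration; the element list and the generated label text are placeholders, and the result is a flat list in document order that a user agent could nest by heading level.

     // Minimal sketch: an outline of { label, target } entries.
     function buildOutline(doc) {
       const outline = [{ label: doc.title || "(untitled document)", target: doc.body }];
       doc.querySelectorAll("h1, h2, h3, h4, h5, h6, table, form").forEach((el) => {
         let label;
         if (/^H[1-6]$/.test(el.tagName)) {
           label = el.textContent.trim();
         } else if (el.tagName === "TABLE") {
           const caption = el.querySelector("caption");
           label = caption ? caption.textContent.trim() : "(table without caption)";
         } else {
           label = el.getAttribute("title") || "(form)";   // generated label
         }
         outline.push({ label: label, target: el });
       });
       return outline;
     }

  Each entry's "target" lets the user navigate from the outline view to the corresponding position in the full view of content (see checkpoint 9.8).
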

Doing more:
  1. For documents that do not use structure properly, user agents may attempt to create an outline based on how elements are rendered and on heuristics about what those elements may indicate about document structure.
Related techniques:
  1. See structured navigation techniques for checkpoint 9.8.

10.6 To help the user decide whether to traverse a link, make available the following information about it: link element content, link title, whether the link is internal to the resource (e.g., the link is to a target in the same Web page), whether the user has traversed the link recently, whether traversing it may involve a fee, and information about the type, size, and natural language of linked Web resources. The user agent is not required to compute or make available information that requires retrieval of linked Web resources. [Priority 3] Content only. (Checkpoint 10.6)
Example techniques:
  1. Some markup languages allow authors to provide hints about the nature of linked content (e.g., in HTML 4 [HTML4], the "hreflang" and "type" attributes on the A element). Specifications should indicate when this type of information is a hint from the author and when these hints may be overridden by another mechanism (e.g., by HTTP headers in the case of HTML). User agent developers should make the author's hints available to the user (prior to retrieving a resource), but should provide definitive information once available.
  2. Links may be simple (e.g., HTML links) or more complex, such as those defined by the XML Linking Language (XLink) [XLINK].
  3. The scope of "recently followed link" depends on the user agent. The user agent may allow the user to configure this parameter, and should allow the user to reset all links as "not followed recently".
  4. User agents should cache information determined as the result of retrieving a Web resource and should make it available to the user. Refer to HTTP/1.1 caching mechanisms described in RFC 2616 [RFC2616], section 13.
  5. For a link that has content focus, allow the user to query the link for information (e.g., by activating a menu or key stroke).
  6. Do not mark all local links (to anchors in the same page) as visited when the page has been visited.
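
  The following non-normative sketch assembles the link information listed in this checkpoint that is available without retrieving the linked resource. JavaScript is used purely for illustration; the "internal" test is simplified, and the visited state is left to the user agent itself (it is known from the browsing history rather than from content).

     // Minimal sketch: information to present when the user queries a link.
     function describeLink(link) {
       const href = link.getAttribute("href") || "";
       return {
         content: link.textContent.trim(),
         title: link.getAttribute("title") || "",
         internal: href.startsWith("#"),               // target in the same page (simplified)
         type: link.getAttribute("type") || "",        // author hint only
         language: link.getAttribute("hreflang") || "" // author hint only
         // visited/fee status: known to the user agent, not to page scripts
       };
     }

     // Example: describeLink(document.activeElement) for a focused A element.
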
Doing more:
  1. User agents may provide information about any input bindings associated with a link. See checkpoint 11.2.
Related techniques:
  1. See the section on link techniques.
References:
  1. User agents may use HTTP HEAD rather than GET for information about size, language, etc. Refer to RFC 2616 [RFC2616], section 9.3
  2. For information about content size in HTTP/1.1, refer to RFC 2616 [RFC2616], section 14.13. User agents are not expected to compute content size recursively (i.e., by adding the sizes of resources referenced by URIs within another resource).
  3. For information about content language in HTTP/1.1, refer to RFC 2616 [RFC2616], section 14.12.
  4. For information about content type in HTTP/1.1, refer to RFC 2616 [RFC2616], section 14.17.

Checkpoints for the user interface

10.7 Provide a mechanism for highlighting the viewport with the current focus. For graphical viewports, the default highlight mechanism must not rely on color alone. [Priority 1] User agent only. (Checkpoint 10.7)
Note: This includes highlighting and identifying frames. This checkpoint is an important special case of checkpoint 1.1. See also checkpoint 7.3.
Example techniques:
  1. Provide a setting that causes a window that is the viewport with the current focus to be maximized automatically. For example, maximize the parent window of the browser when launched, and maximize each child window automatically when it receives focus. Maximizing does not necessarily mean occupying the whole screen or parent window; it means expanding the viewport so that users have to scroll horizontally or vertically as little as possible.
  2. If the viewport with the current focus is a frame or the user does not want windows to pop to the foreground, use colors, reverse videos, or other graphical clues to indicate the viewport with the current focus.
  3. For speech or braille output, use the frame or window title to identify the viewport with the current focus.
  4. Use operating environment conventions for specifying selection and content focus (e.g., schemes in Windows).
  5. Implement the ':hover', ':active', and ':focus' pseudo-classes of CSS 2 ([CSS2], section 5.11.3). This allows users to modify content focus rendering with user style sheets.
  6. Example of a solid line border used to indicate the content focus in Opera 3.60

    This image shows how Opera [OPERA] uses a solid line border to indicate content focus.

    Example of system highlight colors used to indicate the content focus by the accessible browser project

    This image shows how the Accessible Web Browser [AWB] uses the operating environment highlight colors to indicate content focus.

Related techniques:
  1. See the section on frame techniques.

10.8 Ensure that when a viewport's selection or content focus changes, it is in the viewport after the change. [Priority 2] User agent only. ( Checkpoint 10.8)
Note: For example, if users navigating links move to a portion of the document outside a graphical viewport, the viewport should scroll to include the new location of the focus. Or, for users of audio viewports, allow configuration to render the selection or focus immediately after the change.
Example techniques:
  1. There are times when the content focus changes (e.g., link navigation) and the viewport should move to track it. There are other times when the viewport changes position (e.g., scrolling) and the content focus is moved to follow it. In both cases, the focus (or selection) is in the viewport after the change.
  2. If a search causes the selection or focus to change, ensure that the found content is not hidden by the search prompt.
  3. When the content focus changes, register the newly focused element in the navigation sequence; sequential navigation should start from there.
  4. Unless viewports have been coordinated explicitly, changes to selection or focus in one viewport should not affect the selection or focus in another viewport.
  5. The persistence of the selection or focus in the viewport will vary according to the type of viewport. For any viewport with persistent rendering (e.g., a two-dimensional graphical or tactile viewport), the focus or selection should remain in the viewport after the change until the user changes the viewport. For any viewport without persistent rendering (e.g., an audio viewport), once the focus or selection has been rendered, it will no longer be "in" the viewport. In a pure audio environment, the whole persistent context is in the mind of the user. In a graphical viewport, there is a large shared buffer of dialog information in the display; in audio, there is no comparable persistent region of interaction maintained by the computer that the user can consult at will. The audio rendering of content requires the passage of time, which is a scarce resource. Consequently, the flow of content through the viewport has to be managed more carefully, notably when the content was designed primarily for graphical rendering.
  6. If the rendered selection or focus does not fit entirely within the limits of a graphical viewport:
    1. if the region actually displayed prior to the change was within the selection or focus, do not move the viewport.
    2. otherwise, if the region actually displayed prior to the change was not within the newly selected or focused content, move to display at least the initial fragment of such content.
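
  A non-normative sketch of the behavior described in the item above, written in JavaScript purely for illustration (a user agent would apply this to its own viewport model; the function name is a placeholder): the graphical viewport is moved only when the newly focused element lies entirely outside it, and then only far enough to display at least the initial fragment of the focused content.

     // Minimal sketch: keep the element with focus within the viewport.
     function ensureFocusInViewport(element) {
       const rect = element.getBoundingClientRect();
       const outside = rect.bottom < 0 || rect.top > window.innerHeight ||
                       rect.right < 0 || rect.left > window.innerWidth;
       if (outside) {
         // Display at least the initial fragment of the focused content.
         element.scrollIntoView({ block: "start", inline: "nearest" });
       }
     }

     document.addEventListener("focusin", (event) => ensureFocusInViewport(event.target));
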

10.9 Indicate the relative position of the viewport in rendered content (e.g., the proportion of an audio or video clip that has been played, the proportion of a Web page that has been viewed, etc.). [Priority 3] User agent only. (Checkpoint 10.9)
Note: The user agent may calculate the relative position according to content focus position, selection position, or viewport position, depending on how the user has been browsing. The user agent may indicate the proportion of content viewed in a number of ways, including as a percentage, as a relative size in bytes, etc. For two-dimensional renderings, relative position includes both vertical and horizontal positions.
Example techniques:
  1. Provide a scrollbar for the viewport. Some specifications address scrolling requirements or suggestions explicitly, such as for the THEAD and TBODY elements of HTML 4 ([HTML4], section 11.2.3) and the 'overflow' property of CSS 2 ([CSS2], section 11.1.1).
  2. Indicate the size of the document, so that users may decide whether to download for offline viewing. For example, the playing time of an audio file could be stated in terms of hours, minutes, and seconds. The size of a primarily text-based Web page might be stated in both kilobytes and screens, where a screen of information is calculated based on the current dimensions of the viewport.
  3. Indicate the number of screens of information, based on the current dimensions of the viewport (e.g., "screen 4 of 10").
  4. Use a variable pitch audio signal to indicate the viewport's different positions.
  5. Provide standard markers for specific percentages through the document.
  6. Provide markers for positions relative to a reference point (e.g., a user-selected point, the bottom of the document, the first H1).
  7. Put a marker on the scrollbar, or highlight the previous bottom of the page while scrolling (so that the user can see what was at the bottom before scrolling began).
  8. For images that render gradually (coarsely to finely), it is not necessary to show percentages for each rendering pass.
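
  The following non-normative sketch computes the relative vertical position of a graphical viewport, expressed both as a percentage and as a screen count based on the current viewport height. JavaScript is used purely for illustration and the function name is a placeholder; horizontal position could be reported in the same way.

     // Minimal sketch: relative position of the viewport in rendered content.
     function relativePosition() {
       const doc = document.documentElement;
       const viewportHeight = window.innerHeight;
       const scrollable = Math.max(doc.scrollHeight - viewportHeight, 1);
       const percent = Math.min(100, Math.round((window.scrollY / scrollable) * 100));
       const totalScreens = Math.ceil(doc.scrollHeight / viewportHeight);
       const currentScreen = Math.min(totalScreens,
                                      Math.floor(window.scrollY / viewportHeight) + 1);
       return percent + "% (screen " + currentScreen + " of " + totalScreens + ")";
     }

     // Example: render relativePosition() in the status bar on "scroll" events.
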
Doing more:
  1. Allow users to configure what status information they want rendered. Useful status information includes:
    • Document proportions (numbers of lines, pages, width, etc.);
    • Number of elements of a particular type (e.g., tables, forms, and headings);
    • Whether the viewport is at the beginning or end of the document;
    • Size of document in bytes;
    • The number of controls in a form and controls in a form control group (e.g., FIELDSET in HTML).

[next guideline 11] [review guideline 10] [previous guideline 9] [contents]

Guideline 11. Allow configuration and customization.

Checkpoints

11.1 Provide information to the user about current user preferences for input configurations. [Priority 1] User agent only. ( Checkpoint 11.1)
Note: To satisfy this checkpoint, the user agent may make available binding information in a centralized fashion (e.g., a list of bindings) or a distributed fashion (e.g., by listing keyboard shortcuts in user interface menus).
Related techniques:
  1. See input configuration techniques.

11.2 Provide a centralized view of the current author-specified input configuration bindings. [Priority 2] Content only. ( Checkpoint 11.2)
Note: For example, for HTML documents, provide a view of keyboard bindings specified by the author through the "accesskey" attribute. The intent of this checkpoint is to centralize information about author-specified bindings so that the user does not have to read the entire content first to find out what bindings are available. The user agent may satisfy this checkpoint by providing different views for different input modalities (keyboard, pointing device, voice, etc.).
Example techniques:
  1. If the user agent offers a special view that lists author-specified bindings, allow the user to navigate easily back and forth between the viewport with the current focus and the list of bindings.
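
  The following non-normative sketch collects author-specified keyboard bindings (the HTML "accesskey" attribute) into a single list suitable for a centralized view, so the user does not have to read the entire content to discover them. JavaScript is used purely for illustration; the label heuristics and the function name are placeholders.

     // Minimal sketch: gather author-specified "accesskey" bindings.
     function collectAccessKeys(doc) {
       return Array.from(doc.querySelectorAll("[accesskey]")).map((element) => ({
         key: element.getAttribute("accesskey"),
         label: element.getAttribute("title") || element.textContent.trim() ||
                element.getAttribute("name") || element.tagName.toLowerCase(),
         target: element                   // allows navigation back to the element
       }));
     }

     // Example: collectAccessKeys(document) might yield entries such as
     // { key: "s", label: "Search", ... } (values shown here are hypothetical).
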
Doing more:
  1. In addition to providing a centralized view of bindings, allow users to find out about bindings in content. For example, highlight enabled elements that have associated event handlers (e.g., by indicating bindings near the element).
Related techniques:
  1. See input configuration techniques.

11.3 Allow the user to override any binding that is part of the user agent default input configuration. The user agent is not required to allow the user to override standard bindings for the operating environment (e.g., for access to help). [Priority 2] User agent only. (Checkpoint 11.3)
Note: The override requirement only applies to bindings for the same input modality (e.g., the user must be able to override a keyboard binding with another keyboard binding). See also checkpoint 11.5, checkpoint 11.7, and checkpoint 12.3.
Notes and rationale:
  1. Many people benefit from direct access to important user agent functionalities (e.g., via a single key stroke or short voice command): users with poor physical control (who might mistakenly repeat a key stroke), users who fatigue easily (for whom key combinations involve significant effort), users who cannot remember key combinations, and any user who wants to operate the user agent efficiently.
Doing more:
  1. Allow users to choose from among pre-packaged configurations, to override some of the chosen configuration, and to save it as a profile. Not only will the user save time configuring the user agent, but this will reduce questions to technical support personnel.
  2. Allow users to restore easily the default input configuration.
  3. Allow users to create macros and bind them to key strokes or other input methods.
  4. Test the default keyboard configuration for usability. Ask users with different disabilities and combinations of disabilities to test configurations.
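
  A non-normative sketch of one way to layer user overrides over a default keyboard configuration and to restore the defaults. The data structures, key names, and functionality names are placeholders used purely for illustration, not a real user agent API.

     // Minimal sketch: user overrides take precedence within the same modality.
     const defaultKeyboardBindings = {
       "F3": "searchAgain",
       "Ctrl+F": "searchText",
       "Ctrl+R": "reloadResource"
     };

     const userKeyboardOverrides = {};    // persisted per profile (see checkpoint 11.6)

     function bindKey(key, functionality) {
       userKeyboardOverrides[key] = functionality;     // override a default binding
     }

     function resolveKey(key) {
       return userKeyboardOverrides[key] || defaultKeyboardBindings[key] || null;
     }

     function restoreDefaultBindings() {
       for (const key of Object.keys(userKeyboardOverrides)) {
         delete userKeyboardOverrides[key];
       }
     }
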
Related techniques:
  1. See input configuration techniques.

11.4 Allow the user to override any binding in the default keyboard configuration with a binding to either a key plus modifier keys or to a single key. For each functionality in the set required by checkpoint 11.5, allow the user to configure a single-key binding (i.e., one key press performs the task, with zero modifier keys). If the number of physical keys on the keyboard is less than the number of functionalities required by checkpoint 11.5, allow single-key bindings for as many of those functionalities as possible. The user agent is not required to allow the user to override standard bindings for the operating environment (e.g., for access to help). [Priority 2] User agent only. (Checkpoint 11.4)
Note: In this checkpoint, "key" refers to a physical key of the keyboard (rather than, say, a character of the document character set). Because single-key access is so important to some users with physical disabilities, user agents should ensure that (1) most keys of the physical keyboard may be configured for single-key bindings, and (2) most functionalities of the user agent may be configured for single-key bindings. This checkpoint does not require single physical key bindings for character input, only for the activation of user agent functionalities. For information about access to user agent functionality through a keyboard API, see checkpoint 6.7.
Notes and rationale:
  1. When using a physical keyboard, some users require single-key access, others require that keys activated in combination be physically close together, while others require that they be spaced physically far apart.
  2. In some modes of interaction (e.g., when the user is entering text), the number of available single keys will be significantly reduced.
Example techniques:
  1. Offer a single-key mode where, once the user has entered into that mode (e.g., by pressing a single key), most of the keys of the keyboard are configurable for single-key operation of the user agent. Allow the user to exit that mode by pressing a single key as well. For example, Opera [OPERA] includes a mode in which users can access important user agent functionalities with single strokes from the numeric keypad.
  2. Consider distance between keys and key alignment (e.g., "9/I/K", which align almost vertically on many keyboards) in the default configuration. For instance, if Enter is used to activate links, put other link navigation commands near it (e.g., page up/down, arrow keys, etc. on many keyboards). In configurations for users with reduced mobility, pair related functionalities on the keyboard (e.g., left and right arrows for forward and back navigation).
  3. Mouse Keys (available in some operating environments) allow users to simulate the mouse through the keyboard. They provide a usable command structure without interfering with the user interface for users who do not require keyboard-only and single-key access.
Doing more:
  1. Allow users to accomplish tasks through repeated key strokes (e.g., sequential navigation) since this means less physical repositioning for all users. However, repeated key strokes may not be efficient for some tasks. For instance, do not require the user to position the pointing device by pressing the "down arrow" key repeatedly.
  2. So that users do not mistakenly activate certain functionalities, make certain combinations "more difficult" to invoke (e.g., users are not likely to press Control-Alt-Delete accidentally).

11.5 Ensure that the default input configuration includes bindings for the following functionalities required by other checkpoints in this document: move focus to next enabled element; move focus to previous enabled element; activate focused link; search for text; search again for same text; increase size of rendered text; decrease size of rendered text; increase global volume; decrease global volume; (each of) stop, pause, resume, fast advance, and fast reverse selected audio and animations (including video and animated images). If the user agent supports the following functionalities, the default input configuration must also include bindings for them: next history state (forward); previous history state (back); enter URI for new resource; add to favorites (i.e., bookmarked resources); view favorites; stop loading resource; reload resource; refresh rendering; forward one viewport; back one viewport; next line; previous line. [Priority 2] User agent only. ( Checkpoint 11.5)
Note: This checkpoint does not make any requirements about the ease of use of default input configurations, though clearly the default configuration should include single-key bindings and allow easy operation. Ease of use is ensured by the configuration requirements of checkpoint 11.3.
Example techniques:
  1. Input configurations should allow quick and direct navigation that does not rely on graphical output. Do not require the user to navigate through a graphical user interface as the only way to activate a functionality.
Doing more:
  1. Provide different input configuration profiles (e.g., one keyboard profile with key combinations close together and another with key combinations far apart).
  2. Offer a mode that makes the input configuration compatible with other versions of the software (or with other software).
  3. Provide convenient bindings for controlling the user interface, such as showing, hiding, moving, and resizing graphical viewports.
  4. Allow the user to configure how much the viewport should move when scrolling the viewport backward or forward through content (e.g., for a graphical viewport, "page down" causes the viewport to move half the height of the viewport, or the full height, or twice the height, etc.).
Related techniques:
  1. See also checkpoint 7.4.

11.6 For the configuration requirements of this document, allow the user to save user preferences in at least one user profile. Allow users to choose from among available profiles or no profile (i.e., the user agent default settings). [Priority 2] User agent only. (Checkpoint 11.6)
Note: The configuration requirements of the checkpoints in this document involve user preferences for styles, presentation rates, input configurations, navigation, viewport behavior, and user agent prompts and alerts.
Example techniques:
  1. Follow applicable operating environment conventions for input configuration profiles.
  2. Allow users to choose a different profile, to switch rapidly between profiles, and to return to the default input configuration.
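
  A non-normative sketch of saving and restoring a named profile of user preferences. JavaScript and Web Storage are used purely for illustration (a user agent would follow its operating environment's conventions for storing profiles), and the setting names are placeholders.

     // Minimal sketch: save and load a named preference profile.
     function saveProfile(name, preferences) {
       localStorage.setItem("profile:" + name, JSON.stringify(preferences));
     }

     function loadProfile(name) {
       const stored = localStorage.getItem("profile:" + name);
       return stored ? JSON.parse(stored) : null;   // null means: use default settings
     }

     // Example:
     // saveProfile("large-print", { textScale: 2.0, focusHighlight: "thick-outline" });
     // const prefs = loadProfile("large-print") || {};
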

11.7 For graphical user interfaces, allow the user to configure the position of controls on tool bars of the user agent user interface, to add or remove controls for the user interface from a predefined set, and to restore the default user interface. [Priority 3] User agent only. (Checkpoint 11.7)
Note: This checkpoint is a special case of checkpoint 11.3.
Example techniques:
  1. Use standard operating environment controls for allowing configuration of font sizes, speech rates, and other style parameters.
  2. Allow the user to show and hide controls. This benefits users with cognitive disabilities and users who navigate user interface controls sequentially.
  3. Allow the user to choose icons and/or text.
  4. Allow the user to change the grouping of icons and the order of menu entries (e.g., for faster access to frequently used controls).
  5. Allow multiple icon sizes (big, small, other sizes). Ensure that these values are applied consistently across the user interface.
  6. Allow the user to change the position of control bars, icons, etc. Do not rely solely on drag-and-drop for reordering tool bars. Allow the user to configure the user agent user interface in a device-independent manner (e.g., through a text-based profile).

[next guideline 12] [review guideline 11] [previous guideline 10] [contents]

Guideline 12. Provide accessible product documentation and help.

Checkpoints

12.1 Ensure that at least one version of the product documentation conforms to at least Level Double-A of the Web Content Accessibility Guidelines 1.0 [WCAG10]. [Priority 1] User agent only. (Checkpoint 12.1)
Notes and rationale:
  1. User agents may provide documentation in many formats, but at least one must conform to at least Level Double-A of the Web Content Accessibility Guidelines 1.0 [WCAG10].
  2. Remember to keep documentation accessible as the product evolves (e.g., when bug fixes are published, etc.).
Example techniques:
  1. Distribute accessible documentation over the Web, on CD-ROM, or by telephone. Alternative hardcopy formats may also benefit some users.
  2. For example, for conformance to the Web Content Accessibility Guidelines 1.0 [WCAG10]:
    1. Provide text equivalents of all non-text content (e.g., graphics, audio-only presentations, etc.);
    2. Provide extended descriptions of screen-shots, flow charts, etc.;
    3. Provide a text equivalent for audio user agent tutorials. Tutorials that use speech to guide a user through the operation of the user agent should also be available at the same time as graphical representations.
    4. Use clear and consistent navigation and search mechanisms;
    5. Use the NOFRAMES element when the support/documentation is presented in a FRAMESET;
    6. See also checkpoint 12.3.
  3. Describe the user interface with device-independent terms. For example, use "select" instead of "click on".
  4. Provide documentation in small chunks (for rapid downloads) and also as a single source (for easy download and/or printing). A single source might be a single HTML file or a compressed archive of several HTML documents and included images.
  5. Ensure that run-time help and any Web-based help or support information is accessible and may be operated with a single, well-documented, input command (e.g., key stroke). Use operating environment conventions for input configurations related to run-time help.
  6. Ensure that product identification codes are accessible to users so they may install their software. Codes printed on product cases may not be accessible to people with visual disabilities.
Doing more:
  1. Provide accessible documentation for all audiences: end users, developers, etc. For instance, developers with disabilities may wish to add accessibility features to the user agent, and so require information on available APIs and other implementation details.
  2. Provide documentation in alternative formats such as braille (refer to "Braille Formats: Principles of Print to Braille Transcription 1997" [BRAILLEFORMATS]), large print, or audio tape. Agencies such as Recording for the Blind and Dyslexic [RFBD] and the National Braille Press [NBP] can create alternative formats.

12.2 Document all user agent features that benefit accessibility. [Priority 1] User agent only. ( Checkpoint 12.2)
Note: For example, review the documentation or help system to ensure that it includes information about the functions and capabilities of the user agent that are required by WAI Accessibility Guidelines, platform-specific accessibility guidelines, etc. The documentation of accessibility features should be integrated into the documentation as a whole.
Example techniques:
  1. Document any features that affect accessibility and that depart from system conventions.
  2. Provide a sensible index to accessibility features. For instance, users should be able to find "How to turn off blinking text" in the documentation (and the user interface). The user agent may support this feature by turning off scripts, but users should not have to guess (or know) that turning off scripts will turn off blinking text.
  3. Document configurable features in addition to defaults for those features.
  4. Document the features implemented to conform with these guidelines.
  5. Include references to accessibility features in both the table of contents and index of the documentation.
  6. In developer documentation, document the APIs that are required by this document. Please see the requirements of checkpoint 6.6, checkpoint 6.1, checkpoint 6.3, and checkpoint 6.4.

12.3 Document the default input configuration (e.g., the default keyboard bindings). [Priority 1] User agent only. ( Checkpoint 12.3)
Note: If the default input configuration is inconsistent with conventions of the operating environment, the documentation should alert the user.
References:
  1. As an example of online documentation of keyboard support, refer to the Mozilla Keyboard Planning FAQ and Cross Reference for the Mozilla browser [MOZILLA].

12.4 In a dedicated section of the documentation, describe all features of the user agent that benefit accessibility. [Priority 2] User agent only. ( Checkpoint 12.4)
Note: This is a more specific requirement than checkpoint 12.2.
Example techniques:
  1. Integrate information about accessibility features throughout the documentation. The dedicated section on accessibility should provide access to the documentation as a whole rather than standing alone as an independent section. For instance, in a hypertext-based help system, the section on accessibility may link to pertinent topics elsewhere in the documentation.
  2. Ensure that the section on accessibility features is easy to find.

12.5 In each software release, document all changes that affect accessibility. [Priority 2] User agent only. (Checkpoint 12.5)
Note: Features that affect accessibility are those required by WAI Accessibility Guidelines, platform-specific accessibility guidelines, etc.
Notes and rationale:
  1. In particular, document changes to the user interface.
Example techniques:
  1. Either describe the changes that affect accessibility in the section of the documentation dedicated to accessibility features (see checkpoint 12.4) or link to the changes from the dedicated section.
  2. Provide a text description of changes (e.g., in a README file).

[review guideline 12] [previous guideline 11] [contents]