W3C Techniques for User Agent Accessibility Guidelines 1.0 W3C Working Draft 4 April 2001 This version: http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010404/ (Formats: plain text, gzip PostScript, gzip PDF, gzip tar file of HTML, zip archive of HTML) Latest version: http://www.w3.org/WAI/UA/UAAG10-TECHS/ Previous version: http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20010331 Editors: Ian Jacobs, W3C Jon Gunderson, University of Illinois at Urbana-Champaign Eric Hansen, Educational Testing Service Authors and Contributors: See acknowledgements. Copyright ©1999 - 2001 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. _________________________________________________________________ Abstract This document provides techniques for satisfying the checkpoints defined in "User Agent Accessibility Guidelines 1.0" [UAAG10]. These techniques cover the accessibility of user interfaces, content rendering, application programming interfaces (APIs), and languages such as the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS) and the Synchronized Multimedia Integration Language (SMIL). Status of this document This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this document series is maintained at the W3C. This is the 4 April 2001 Working Draft of Techniques for User Agent Accessibility Guidelines 1.0, for review by W3C Members and other interested parties. It is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress". This is work in progress and does not imply endorsement by, or the consensus of, either W3C or participants in the User Agent Accessibility Guidelines Working Group (UAWG). While "User Agent Accessibility Guidelines 1.0" strives to be a stable document (as a W3C Recommendation), the current document is expected to evolve as technologies change and as user agent developers discover more effective techniques. Please send comments about this document, including suggestions for additional techniques, to the public mailing list w3c-wai-ua@w3.org; public archives are available. This document is part of a series of accessibility documents published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C). WAI Accessibility Guidelines are produced as part of the WAI Technical Activity. The goals of the User Agent Accessibility Guidelines Working Group are described in the charter. A list of current W3C Recommendations and other technical documents can be found at the W3C Web site. Table of contents * Abstract * Status of this document * 1 Introduction * 2 The user agent accessibility guidelines + 1. Support input and output device-independence. + 2. Ensure user access to all content. + 3. Allow configuration not to render some content that may reduce accessibility. + 4. Ensure user control of rendering. + 5. Ensure user control of user interface behavior. + 6. Implement standard application programming interfaces. + 7. Observe operating environment conventions.
+ 8. Implement specifications that benefit accessibility. + 9. Provide navigation mechanisms. + 10. Orient the user. + 11. Allow configuration and customization. + 12. Provide accessible product documentation and help. * 3 Accessibility topics + 3.1 Access to content + 3.2 User control of rendering and style + 3.3 Link techniques + 3.4 List techniques + 3.5 Table techniques + 3.6 Image map techniques + 3.7 Frame techniques + 3.8 Form techniques + 3.9 Generated content techniques + 3.10 Content repair techniques + 3.11 Script and applet techniques + 3.12 Input configuration techniques + 3.13 Speech techniques + 3.14 Techniques for reducing dependency on spatial interactions + 3.15 Accessibility and internationalization techniques * 4 Appendix: Accessibility features of some operating systems * 5 Appendix: Loading assistive technologies for access to the document object model * 6 Glossary * 7 References + 7.1 How to refer to this document + 7.2 Normative references + 7.3 Informative references * 8 Resources + 8.1 Operating system and programming guidelines + 8.2 User agents and other tools + 8.3 Accessibility resources + 8.4 Standards resources * 9 Acknowledgments Note: With a user agent that implements HTML 4 [HTML4] access keys, readers may navigate directly to the table of contents via the "c" character. Users may have to use additional keyboard strokes depending on their operating environment. Related resources "Techniques for User Agent Accessibility Guidelines 1.0" and the "User Agent Accessibility Guidelines 1.0" [UAAG10] are part of a series of accessibility guidelines published by the Web Accessibility Initiative (WAI). These documents explain the responsibilities of user agent developers in making the Web accessible to users with disabilities. The series also includes the "Web Content Accessibility Guidelines 1.0" [WCAG10] (and techniques [WCAG10-TECHS]), which explain the responsibilities of authors, and the "Authoring Tool Accessibility Guidelines 1.0" [ATAG10] (and techniques [ATAG10-TECHS]), which explain the responsibilities of authoring tool developers. _________________________________________________________________ 1 Introduction This document suggests some techniques for satisfying the requirements of the "User Agent Accessibility Guidelines 1.0" [UAAG10]. The techniques listed in this document are not required for conformance to the Guidelines. These techniques are not necessarily the only way of satisfying a checkpoint, nor are they a definitive set of requirements for doing so. 2 The user agent accessibility guidelines This section lists each checkpoint of "User Agent Accessibility Guidelines 1.0" [UAAG10] along with some possible techniques for satisfying it. Each checkpoint definition includes a link to the checkpoint definition in "User Agent Accessibility Guidelines 1.0". Each checkpoint definition is followed by one or more of the following: * Notes and rationale: Additional rationale and explanation of the checkpoint; * Example techniques: Some techniques to illustrate how a user agent might satisfy the requirements of the checkpoint; * Doing more: Techniques to achieve more than what is required by the checkpoint; * Related techniques: Links to other techniques in section 3. The accessibility topics of section 3 generally apply to more than one checkpoint. * References: References to other guidelines, specifications, or resources.
Note: Most of the techniques in this document are designed for mainstream (graphical) browsers and multimedia players. However, some of them also make sense for assistive technologies and other user agents. In particular, techniques about communication between user agents will benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies for access to the document object model. Priorities Each checkpoint in this document is assigned a priority that indicates its importance for users with disabilities. [Priority 1] This checkpoint must be satisfied by user agents, otherwise one or more groups of users with disabilities will find it impossible to access the Web. Satisfying this checkpoint is a basic requirement for enabling some people to access the Web. [Priority 2] This checkpoint should be satisfied by user agents, otherwise one or more groups of users with disabilities will find it difficult to access the Web. Satisfying this checkpoint will remove significant barriers to Web access for some people. [Priority 3] This checkpoint may be satisfied by user agents to make it easier for one or more groups of users with disabilities to access information. Satisfying this checkpoint will improve access to the Web for some people. Note: This information about checkpoint priorities is included for convenience only. For detailed information about conformance to "User Agent Accessibility Guidelines 1.0" [UAAG10], please refer to that document. Guideline 1. Support input and output device-independence. Checkpoints 1.1 Ensure that the user can operate the user agent fully through keyboard input alone. [Priority 1] Both content and user agent. (Checkpoint 1.1) Note: For example, ensure that the user can interact with enabled elements, select content, navigate viewports, configure the user agent, access documentation, install the user agent, operate controls of the user interface, etc., all entirely through keyboard input. It is also possible to claim conformance to User Agent Accessibility Guidelines 1.0 [UAAG10] for full support through pointing device input and voice input. See the section on input modality labels in UAAG 1.0. Notes and rationale: 1. For instance, the user must be able to do the following through the keyboard alone (or pointing device alone or voice alone): o Select content and operate on it. For example, if the user can select rendered text with the mouse and make it the content of a new link by pushing a button, they also need to be able to do so through the keyboard and other supported devices. Other operations include cut, copy, and paste. o Set the focus on viewports and on enabled elements. o Install, configure, uninstall, and update the user agent software. o Use the graphical user interface menus. Some users may wish to use the graphical user interface even if they cannot use or do not wish to use the pointing device. o Fill out forms. o Access documentation. 2. Suppose a user agent does not allow complete operation through the keyboard alone. It is still possible to claim conformance for the user agent in conjunction with a special module designed to "fill in the gap". __________________________________________________________ 1.2 For the element with content focus, allow the user to activate any explicitly associated input device event handlers through keyboard input alone. [Priority 1] Content only.
(Checkpoint 1.2) Note: The requirements for this checkpoint refer to any input device event handlers explicitly associated with an element, independent of the input modalities for which the user agent conforms. For example, suppose that an element has an explicitly associated handler for pointing device events. Even when the user agent only conforms for keyboard input (and does not conform for the pointing device, for example), this checkpoint requires the user agent to allow the user to activate that handler with the keyboard. This checkpoint is an important special case of checkpoint 1.1. Please refer to the checkpoints of guideline 9 for more information about focus requirements. Notes and rationale: 1. For example, users without a pointing device (such as some users who are blind or have physical disabilities) need to be able to activate form controls and links (including the links in a client-side image map). Example techniques: 1. For example, in HTML 4 [HTML4], input device event handlers are described in section 18.2.3. They are: onclick, ondblclick, onmousedown, onmouseover, onmouseout, onfocus, onblur, onkeypress, onkeydown, and onkeyup. 2. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], focus and activation types are discussed in section 1.6.1. They are: DOMFocusIn, DOMFocusOut, and DOMActivate. These events are specified independent of a particular input device type. 3. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], mouse event types are discussed in section 1.6.2. They are: click, mousedown, mouseup, mouseover, mousemove and mouseout. 4. The DOM Level 2 Event specification does not provide a key event module. 5. Sequential technique: Add each input device event handler to the serial navigation order (refer to checkpoint 9.2). Alert the user when the user has navigated to an event handler, and allow activation. For example, a link that also has onMouseOver and onMouseOut event handlers defined might generate three "stops" in the navigation order: one for the link and two for the event handlers. If this technique is used, allow configuration so that input device event handlers are not inserted in the navigation order. 6. Query technique: Allow the user to query the element with content focus for a menu of input device event handlers. 7. Descriptive information about handlers can allow assistive technologies to choose the most important functions for activation. This is possible in the Java Accessibility API [JAVAAPI], which provides an AccessibleAction Java interface. This interface provides a list of actions and descriptions that enable selective activation. See also checkpoint 6.3. 8. Using MSAA [MSAA] on the Windows platform: o Retrieve the node in the document object that has current focus. o Call the IHTMLDocument4::fireEvent method on that node. Related techniques: 1. See image map techniques. References: 1. For example, section 16.5 of the SVG 1.0 Candidate Recommendation [SVG] specifies processing order for user interface events. __________________________________________________________ 1.3 Ensure that every message (e.g., prompt, alert, notification, etc.) that is a non-text element and is part of the user agent user interface has a text equivalent. [Priority 1] User agent only. (Checkpoint 1.3) Note: For example, if the user is alerted of an event by an audio cue, a visually-rendered text equivalent in the status bar would satisfy this checkpoint.
Per checkpoint 6.4, a text equivalent for each such message must be available through a standard API. See also checkpoint 6.5 for requirements for programmatic alert of changes to the user interface. Notes and rationale: 1. User agents should use modality-specific messages in the user interface (e.g., graphical scroll bars, beeps, and flashes) as long as redundant mechanisms are available or possible. These redundant mechanisms will benefit all users, not just users with disabilities. For instance, mechanisms that are redundant to audio will benefit individuals who are deaf, hard of hearing, or operating the user agent in a noisy or silent environment where the use of sound is not practical. Example techniques: 1. Render text messages on the status bar of the graphical user interface. Allow users to query the viewport for this status information (in addition to having access through graphical rendering). 2. Make information available in a manner that allows other software to present it according to the user's preferences. For instance, if the graphical user agent uses proportional scroll bars to indicate the position of the viewport in content, make this same information available in text form. This will allow other software to render the proportion of content viewed as speech or as braille. Doing more: 1. Allow configuration to render or not render status information (e.g., allow the user to hide the status bar). __________________________________________________________ [next guideline 2] [review guideline 1] [contents] Guideline 2. Ensure user access to all content. Checkpoints 2.1 For all format specifications that the user agent implements, make content available through the rendering processes described by those specifications. [Priority 1] Content only. (Checkpoint 2.1) Note: This includes format-defined interactions between author preferences and user preferences/capabilities (e.g., when to render the "alt" attribute in HTML [HTML4], the rendering order of nested OBJECT elements in HTML, test attributes in SMIL [SMIL], and the cascade in CSS2 [CSS2]). If a conforming user agent does not render a content type, it should allow the user to choose a way to handle that content (e.g., by launching another application, by saving it to disk, etc.). This checkpoint does not require that all content be available through each viewport. Example techniques: 1. Provide access to attribute values (one at a time, not as a group). For instance, allow the user to select an element and read values for all attributes set for that element. For many attributes, this type of inspection should be significantly more usable than a view of the text source. 2. When content changes dynamically (e.g., due to embedded scripts or content refresh), users need to have access to the content before and after the change. 3. Make available information about abbreviation and acronym expansions. For instance, in HTML, look for abbreviations specified by the ABBR and ACRONYM elements. The expansion may be given with the "title" attribute (refer to the Web Content Accessibility Guidelines 1.0 [WCAG10], checkpoint 4.2). To provide expansion information, user agents may: o Allow the user to specify that the expansions be used in place of the abbreviations, o Provide a list of all abbreviations in the document, with their expansions (a generated glossary of sorts) o Generate a link from an abbreviation to its expansion. o Allow the user to query the expansion of a selected or input abbreviation.
o If an acronym has no explicit expansion in one location, look for another occurrence in content with an explicit expansion. User agents may also look for possible expansions (e.g., in parentheses) in surrounding context, though that is a less reliable technique. Related techniques: 1. See the sections on access to content, link techniques, table techniques, frame techniques, and form techniques. References: 1. Sections 10.4 ("Client Error 4xx") and 10.5 ("Server Error 5xx") of the HTTP/1.1 specification [RFC2616] state that user agents should have the following behavior in case of these error conditions: Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user. __________________________________________________________ 2.2 For all text formats that the user agent implements, provide a view of the text source. Text formats include at least the following: (1) all media objects given an Internet media type of "text" (e.g., text/plain, text/html, or text/*), and (2) all SGML and XML applications, regardless of Internet media type (e.g., HTML 4.01, XHTML 1.1, SMIL, SVG, etc.). [Priority 1] Content only. (Checkpoint 2.2) Note: A user agent would also satisfy this checkpoint by providing a source view for any text format, not just implemented text formats. Notes and rationale: 1. In general, user agent developers should not rely on a "source view" to convey information to users, most of whom are not familiar with markup languages. A source view is still important as a "last resort" for some users, as content might not otherwise be accessible at all. Example techniques: 1. Make the text view useful. For instance, enable links (i.e., URIs), and allow searching and other navigation within the view. 2. A source view is an easily-implementable view that will help users inspect some types of content, such as style sheet fragments or scripts. This does not mean, however, that a source view of style sheets is the best user interface for reading or changing style sheets. Doing more: 1. Provide a source view for any text format, not just implemented text formats. References: 1. Refer to [RFC2046], section 4.1 for information about the "text" Internet media type. __________________________________________________________ 2.3 Allow global configuration so that, for each piece of unrendered conditional content "C", the user agent alerts the user to the existence of the content and provides access to it. Provide access to this content according to format specifications or, where unspecified, as follows. If C has a close relationship (e.g., C is a summary, title, alternative, description, expansion, etc.) with another piece of rendered content D, do at least one of the following: (1a) render C in place of D, (2a) render C in addition to D, (3a) provide access to C by querying D, or (4a) allow the user to follow a link to C from the context of D. If C does not have a close relationship to other content (i.e., a relationship other than just a document tree relationship), do at least one of the following: (1b) render a placeholder for C, (2b) provide access to C by query (e.g., allow the user to query an element for its attributes), or (3b) allow the user to follow a link in context to C. [Priority 1] Content only.
(Checkpoint 2.3) Note: The configuration requirement of this checkpoint is global; the user agent is only required to provide one switch that turns on or off these alert and access mechanisms. To satisfy this checkpoint, the user agent may provide access on an element-by-element basis (e.g., by allowing the user to query individual elements) or for all elements (e.g., by offering a configuration to render conditional content all the time). For instance, an HTML user agent might allow users to query each element for access to conditional content supplied for the "alt", "title", and "longdesc" attributes. Or, the user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism). Notes and rationale: 1. Allow users to choose more than one piece of conditional content at a given time. For instance, users with low vision may want to view images (even imperfectly) but require a text equivalent for the image; the text may be rendered with a large font or as speech. Example techniques: 1. In HTML 4 [HTML4], conditional content mechanisms include the following: o For the IMG element (section 13.2): the "alt" (section 13.8), "title" (section 7.4.3), and "longdesc" (section 13.2) attributes. See the section on long descriptions. o For the OBJECT element (section 13.3): the content of the element and the "title" attribute. o For the deprecated APPLET element (section 13.4): the "alt" attribute and the content of the element. o For the AREA element (section 13.6.1): the "alt" attribute. o For the INPUT element (section 17.4): the "alt" attribute. o For the ACRONYM and ABBR elements (section 9.2.1): the "title" attribute (for acronym or abbreviation expansion). o For the TABLE element (section 11.2.1): the "summary" attribute. o For frames: the NOFRAMES element (section 16.4.1) and the "longdesc" attribute (section 16.2.2) on FRAME and IFRAME (section 16.5). o For scripts: the NOSCRIPT element (section 18.3.1). 2. Allow the user to configure how the user agent renders a long description (e.g., "longdesc" in HTML 4 [HTML4]). Some possibilities include: 1. Render the long description in a separate view. 2. Render the long description in place of the associated element. 3. Do not render the long description, but allow the user to query whether an element has an associated long description (e.g., with a context-sensitive menu) and provide access to it. 4. Use an icon (with a text equivalent) to indicate the presence of a long description. 5. Use an audio cue to indicate the presence of a long description when the user navigates to the element. 3. For an object (e.g., an image) with an author-specified geometry that the user agent does not render, allow the user to configure how the conditional content should be rendered. For example, render it within the specified geometry, ignore the specified geometry altogether, etc. 4. For multimedia presentations with several alternative tracks, ensure access to all tracks and allow the user to select individual tracks. The QuickTime player [QUICKTIME] allows users to turn on and off any number of tracks separately. For example, construct a list of all available tracks from short descriptions provided by the author (e.g., through the "title" attribute). 5. For multimedia presentations with several alternative tracks, allow users to choose tracks based on natural language preferences.
SMIL 1.0 [SMIL] allows users to specify captions in different natural languages. By setting language preferences in the SMIL player (e.g., the G2 player [G2]), users may access captions (or audio) in different languages. Allow users to specify different languages for different content types (e.g., English audio and Spanish captions). 6. If a multimedia presentation has several captions (or subtitles) available, allow the user to choose from among them. Captions might differ in level of detail, reading level, natural language, etc. Multilingual audiences may wish to have captions in different natural languages on the screen at the same time. Users may wish to use both captions and auditory descriptions concurrently as well. 7. Make apparent through the user agent user interface which audio tracks are meant to be played separately. Doing more: 1. Make information available with different levels of detail. For example, for a voice browser, offer two options for HTML IMG elements: 1. Speak only "alt" text by default, but allow the user to hear "longdesc" text on an image-by-image basis. 2. Speak "alt" text and "longdesc" for all images. 2. Allow the user to configure different natural language preferences for different types of conditional content (e.g., captions and auditory descriptions). Users with disabilities may need to choose the language they are most familiar with in order to understand a presentation for which supplementary tracks are not all available in all desired languages. In addition, some users may prefer to hear the program audio in its original language while reading captions in another language, fulfilling the function of subtitles or improving foreign language comprehension. In classrooms, teachers may wish to configure the language of various multimedia elements to achieve specific educational goals. [Image: how the user selects a preferred natural language for captions in the Real Player. This setting, in conjunction with language markup in the presentation, determines what content is rendered.] Related techniques: 1. See the section on access to content. __________________________________________________________ 2.4 For content where user input is only possible within a finite time interval controlled by the user agent, allow configuration to make the time interval "infinite". Do this by pausing automatically at the end of each time interval where user input is possible, and resuming automatically after the user has explicitly completed input. In this configuration, alert the user when the session has been paused and which enabled elements are time-sensitive. [Priority 1] Content only. (Checkpoint 2.4) Note: In this configuration, the user agent may have to pause the presentation more than once if there is more than one opportunity for time-sensitive input. In SMIL 1.0 [SMIL], for example, the "begin", "end", and "dur" attributes synchronize presentation components. The user may explicitly complete input in many different ways (e.g., by following a link that replaces the current time-sensitive resource with a different resource). This checkpoint does not apply when the user agent cannot recognize the time interval in the presentation format, or when the user agent cannot control the timing (e.g., because it is controlled by the server). Notes and rationale: 1.
The requirement to pause at the end (rather than at the beginning) of a time interval is to allow the user to review content that may change while the interval elapses. 2. This checkpoint requires the user agent to pause a presentation automatically, whereas the pause requirement of checkpoint 4.5 is manual. Example techniques: 1. Some HTML user agents recognize time intervals specified through the META element, although this usage is not defined in HTML 4 [HTML4]. 2. Render time-dependent links as a static list that occupies the same screen real estate; authors may create such documents in SMIL 1.0 [SMIL]. Include temporal context in the list of links. For example, provide the time at which the link appeared along with a way to easily jump to that portion of the presentation. Doing more: 1. The checkpoint requires that the user agent make the time interval infinite, but one consequence of this is that the user needs to confirm the end of input manually. The user agent may provide additional configurations to lengthen time intervals so that manual confirmation at the end of input is not required. For instance, the user agent might include a configuration to allow the user three to five times the author's specified time interval for input. Or, the user agent might include a configuration to add additional time to each time interval (e.g., 10 extra seconds). 2. Allow users to view a list of all media elements or links of the presentation, sorted by start or end time or alphabetically. References: 1. Refer to section 4.2.4 of SMIL 1.0 [SMIL] for information about the SMIL time model. __________________________________________________________ 2.5 Allow configuration or control so that text transcripts, collated text transcripts, captions, and auditory descriptions are rendered at the same time as the associated audio tracks and visual tracks. [Priority 1] Content only. (Checkpoint 2.5) Note: This checkpoint is an important special case of checkpoint 2.1. Example techniques: 1. Allow users to turn on and off auditory descriptions and captions. 2. For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute. 3. SMIL user agents should allow users to configure whether they want to view captions, and this user interface switch should be bound to the 'system-captions' test attribute. Users should be able to indicate a preference for receiving available auditory descriptions, but SMIL 1.0 [SMIL] does not include a mechanism analogous to 'system-captions' for auditory descriptions, though [SMIL20] is expected to. 4. Another SMIL 1.0 test attribute, 'system-overdub-or-captions', allows users to choose between subtitles and overdubs in multilingual presentations. User agents should not interpret a value of 'caption' for this test attribute as meaning that the user prefers accessibility captions; that is the purpose of the 'system-captions' test attribute. When subtitles and accessibility captions are both available, users who are deaf may prefer to view captions, as they generally contain information not in subtitles: information on music, sound effects, who is speaking, etc. 5. User agents that play QuickTime movies should allow the user to turn on and off the different tracks embedded in the movie. Authors may use these alternative tracks to provide content for accessibility purposes.
The Apple QuickTime player provides this feature through the menu item "Enable Tracks." 6. User agents that play Microsoft Windows Media Object presentations should provide support for Synchronized Accessible Media Interchange (SAMI [SAMI], a protocol for creating and displaying captions) and should allow users to configure how captions are viewed. In addition, user agents that play Microsoft Windows Media Object presentations should allow users to turn on and off other conditional content, including auditory description and alternative visual tracks. References: 1. User agents that implement SMIL 1.0 [SMIL] should implement the "Accessibility Features of SMIL" [SMIL-ACCESS]. __________________________________________________________ 2.6 Respect synchronization cues during rendering. [Priority 1] Content only. (Checkpoint 2.6) Note: This checkpoint is an important special case of checkpoint 2.1. Notes and rationale: 1. The term "synchronization cues" refers to pieces of information that may affect synchronization, such as the size and expected duration of tracks and their segments, the types of elements and how much those elements can be sped up or slowed down (from both technological and intelligibility standpoints). 2. Captions and auditory descriptions may not make sense unless rendered synchronously with related video or audio content. For instance, if someone with a hearing disability is watching a video presentation and reading associated captions, the captions should be synchronized with the audio so that the individual can use any residual hearing. For auditory descriptions, it is crucial that an audio track and an auditory description track be synchronized to avoid having them both play at once, which would reduce the clarity of the presentation. Example techniques: 1. The idea of "sensible time-coordination" of components in the definition of synchronize centers on simultaneity of presentation, but also encompasses strategies for handling deviations from simultaneity resulting from a variety of causes. Consider how deviations might be handled for captions for a multimedia presentation such as a movie clip. Captions consist of a text equivalent of the audio track that is synchronized with the visual track. Typically, a segment of the captions appears visually near the video for several seconds while the person reads the text. As the visual track continues, a new segment of the captions is presented. However, a problem arises if the captions are longer than can fit in the display space. This can be particularly difficult if, due to a visual disability, the font size has been enlarged, thus reducing the amount of rendered caption text that can be presented. The user agent needs to respond sensibly to such problems, for example by ensuring that the user has the opportunity to navigate (e.g., scroll down or page down) through the caption segment before proceeding with the visual presentation and presenting the next segment. 2. Developers of user agents need to determine how they will handle other synchronization challenges, such as: 1. Under what circumstances will the presentation automatically pause? Some circumstances where this might occur include: # the segment of rendered caption text is more than can fit on the visual display # the user wishes more time to read captions or the collated text transcript # the auditory description is of longer duration than the natural pause in the audio. 2.
Once the presentation has paused, then under what circumstances will it resume (e.g., only when the user signals it to resume, or based on a predefined pause length)? 3. If the user agent allows the user to jump to a location in a presentation by activating a link, then how will related tracks behave? Will they jump as well? Will the user be able to return to a previous location or undo the action? 3. Developers of user agents need to anticipate many of the challenges that may arise in synchronization of diverse tracks. __________________________________________________________ 2.7 Allow configuration to generate repair text when the user agent recognizes that the author has failed to provide conditional content that was required by the format specification. The user agent may satisfy this checkpoint by basing the repair text on any of the following available sources of information: URI reference, content type, or element type. [Priority 2] Content only. (Checkpoint 2.7) Note: Some markup languages (such as HTML 4 [HTML4] and SMIL 1.0 [SMIL]) require the author to provide conditional content for some elements (e.g., the "alt" attribute on the IMG element). Repair text based on URI reference, content type, or element type is sufficient to satisfy the checkpoint, but may not result in the most effective repair. Information that may be recognized as relevant to repair might not be "near" the missing conditional content in the document object. For instance, instead of generating repair text from a simple URI reference, the user agent might look for helpful information near a different instance of the URI reference in the same document object, or might retrieve useful information (e.g., a title) from the resource designated by the URI reference. Notes and rationale: 1. Some examples of missing conditional content that is required by specification: o in HTML 4 [HTML4], "alt" is required for the IMG and AREA elements (for validation). In SMIL 1.0 [SMIL], on the other hand, "alt" is not required on media objects. o whatever the format, text equivalents for non-text content are required by the Web Content Accessibility Guidelines 1.0 [WCAG10]. 2. Conditional content may come from markup, inside images (e.g., refer to "Describing and retrieving photos using RDF and HTTP" [PHOTO-RDF]), etc. Example techniques: 1. When HTTP is used, HTTP headers provide information about the URI of the Web resource ("Content-Location") and its type ("Content-Type"). Refer to the HTTP/1.1 specification [RFC2616], sections 14.14 and 14.17, respectively. Refer to "Uniform Resource Identifiers (URI): Generic Syntax" ([RFC2396], section 4) for information about URI references, as well as the HTTP/1.1 specification [RFC2616], section 3.2.1. Doing more: 1. When configured to generate text, also inform the user (e.g., in the generated text itself) that this content was not provided by the author as a text equivalent. Related techniques: 1. See content repair techniques and cell header repair strategies. References: 1. The "Altifier Tool" [ALTIFIER] illustrates smart techniques for generating text equivalents (for images, etc.) when the author has not specified any. __________________________________________________________ 2.8 Allow configuration so that when the user agent recognizes that conditional content required by the format specification is present but empty (e.g., the empty string), the user agent either (1) generates no repair text, or (2) generates repair text as described in checkpoint 2.7. [Priority 3] Content only.
(Checkpoint 2.8) Note: In some authoring scenarios, an empty string of text (e.g., "alt=''") may be considered to be an appropriate text equivalent (for instance, when some non-text content has no other function than pure decoration, or an image is part of a "mosaic" of several images and doesn't make sense out of the mosaic). Please refer to the Web Content Accessibility Guidelines 1.0 [WCAG10] for more information about text equivalents. Notes and rationale: 1. User agents should render nothing in this case because the author may specify an empty text equivalent for content that has no function in the page other than as decoration. Example techniques: 1. The user agent should not render generic labels such as "[INLINE]" or "[GRAPHIC]" in the face of empty conditional content (unless configured to do so). 2. If no captioning information is available and captioning is turned on, render "no captioning information available" in the captioning region of the viewport (unless configured not to generate repair content). Doing more: 1. Labels (e.g., "[INLINE]" or "[GRAPHIC]") may be useful in some situations, so the user agent may allow configuration to render "No author text" (or similar) instead of empty conditional content. __________________________________________________________ 2.9 Allow configuration to render all conditional content automatically. Provide access to this content according to format specifications or, where unspecified, by applying one of the following techniques described in checkpoint 2.3: 1a, 2a, or 1b. [Priority 3] Content only. (Checkpoint 2.9) Note: The user agent satisfies this checkpoint if it satisfies checkpoint 2.3 by applying techniques 1a, 2a, or 1b. For instance, an HTML user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism). Example techniques: 1. None. __________________________________________________________ 2.10 Allow configuration not to render content in unsupported natural languages. Indicate to the user in context that author-supplied content has not been rendered. [Priority 3] Content only. (Checkpoint 2.10) Note: For example, use a text substitute or accessible graphical icon to indicate that content in a particular language has not been rendered. This checkpoint does not require the user agent to allow different configurations for different natural languages. Notes and rationale: 1. Rendering content in an unsupported language (e.g., as "garbage" characters) may confuse all users. However, this checkpoint is designed primarily to benefit users who access content serially, as it allows them to skip portions of content that would be unusable as rendered. 2. There may be cases when a conforming user agent supports a natural language but a speech synthesizer does not, or vice versa. Example techniques: 1. For instance, a user agent that doesn't support Korean (e.g., doesn't have the appropriate fonts or voice set) should allow configuration to announce the language change with the message "Unsupported language - unable to render" (e.g., when the language itself is not recognized) or "Korean not supported - unable to render" (e.g., when the language is recognized but the user agent doesn't have resources to render it). The user should also be able to choose not to be alerted of language changes. Rendering could involve speaking in the designated natural language in the case of a voice browser or screen reader.
If the natural language is not supported, the language change alert could be spoken in the default language by a screen reader or voice browser. 2. A user agent may not be able to render all characters in a document meaningfully, for instance, because the user agent lacks a suitable font, a character has a value that may not be expressed in the user agent's internal character encoding, etc. In this case, section 5.4 of HTML 4 [HTML4] recommends the following for undisplayable characters: 1. Adopt a clearly visible (or audible) but unobtrusive mechanism to alert the user of missing resources. 2. If missing characters are presented using their numeric representation, use the hexadecimal (not decimal) form since this is the form used in character set standards. 3. When HTTP is used, HTTP headers provide information about content encoding ("Content-Encoding") and content language ("Content-Language"). Refer to the HTTP/1.1 specification [RFC2616], sections 14.11 and 14.12, respectively. 4. CSS2's attribute selector may be used with the HTML "lang" or XML "xml:lang" attributes to control rendering based on recognized natural language information. Refer also to the ':lang' pseudo-class ([CSS2], section 5.11.4). Related techniques: 1. See techniques for generated content, which may be used to insert text to indicate a language change. 2. See content repair techniques and accessibility and internationalization techniques. 3. See techniques for synthesized speech. References: 1. For information on language codes, refer to "Codes for the representation of names of languages" [ISO639]. 2. Refer to "Character Model for the World Wide Web" [CHARMOD]. It contains basic definitions and models, specifications to be used by other specifications or directly by implementations, and explanatory material. In particular, this document addresses early uniform normalization, string identity matching, string indexing, and conventions for URIs. __________________________________________________________ [next guideline 3] [review guideline 2] [previous guideline 1] [contents] Guideline 3. Allow configuration not to render some content that may reduce accessibility. In addition to the techniques below, refer also to the section on user control of style. Checkpoints 3.1 Allow configuration not to render background images. In this configuration, provide an option to alert the user when a background image is available (but has not been rendered). [Priority 1] Content only. (Checkpoint 3.1) Note: This checkpoint only requires control of background images for "two-layered renderings", i.e., one rendered background image with all other content rendered "above it". When background images are not rendered, user agents should render a solid background color instead (see checkpoint 4.3). In this configuration, the user agent is not required to retrieve background images from the Web. Notes and rationale: 1. Background images may make it difficult or impossible to read superimposed text or understand other superimposed content. 2. This checkpoint does not address issues of multi-layered renderings and does not require the user agent to change background rendering for multi-layer renderings (refer, for example, to the 'z-index' property in Cascading Style Sheets, level 2 ([CSS2], section 9.9.1)). Example techniques: 1. If background images are turned off, make any associated conditional content available to the user. 2.
In CSS, background images may be turned on/off with the 'background' and 'background-image' properties ([CSS2], section 14.2.1). Doing more: 1. Allow control of image depth in multi-layer presentations. __________________________________________________________ 3.2 Allow configuration not to render audio, video, or animated images except on explicit request from the user. In this configuration, provide an option to render a placeholder in context for each unrendered source of audio, video, or animated image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 1] Content only. (Checkpoint 3.2) Note: This checkpoint requires configuration for content rendered without any user interaction (including content rendered on load or as the result of a script), as well as content rendered as the result of user interaction that is not an explicit request (e.g., when the user activates a link). When configured not to render content except on explicit user request, the user agent is not required to retrieve the audio, video, or animated image from the Web until requested by the user. See also checkpoint 3.8, checkpoint 4.5, checkpoint 4.9, and checkpoint 4.10. Example techniques: 1. User agents may satisfy this checkpoint by treating content as invisible or silent (e.g., by implementing the 'visibility' property defined in section 11.2 of CSS 2 [CSS2]). However, this solution means that the content is processed, though not rendered, and processing may cause undesirable side effects such as firing events. Or, processing may interfere with the processing of other content (e.g., silent audio may interfere with other sources of sound such as the output of a speech synthesizer). This technique should be deployed with caution. 2. As a placeholder for an animated image, render a motionless image built from the first frame of the animated image. __________________________________________________________ 3.3 Allow configuration to render animated or blinking text as motionless, unblinking text. [Priority 1] Content only. (Checkpoint 3.3) Note: A "stock quote ticker" is an example of animated text. This checkpoint does not apply for blinking and animation effects that are caused by mechanisms that the user agent cannot recognize. This checkpoint requires configuration because blinking effects may be disorienting to some users but useful to others, for example users who are deaf or hard of hearing. Example techniques: 1. The user agent may render the motionless text in a number of ways. Inline is preferred, but for extremely long text, it may be better to render the text in another viewport, easily reachable from the user's browsing context. 2. Allow the user to turn off animated or blinking text through the user agent user interface (e.g., by pressing the Escape key to stop animations). 3. Some sources of blinking and moving text are: o The BLINK element in HTML. Note: The BLINK element is not defined by a W3C specification. o The MARQUEE element in HTML. Note: The MARQUEE element is not defined by a W3C specification. o The 'blink' value of the 'text-decoration' property in CSS ([CSS2], section 16.3.1). o In JavaScript, to control the start and speed of scrolling for a MARQUEE element: # document.all.myBanner.start(); # document.all.myBanner.scrollDelay = 100 __________________________________________________________ 3.4 Allow configuration not to execute any executable content (e.g., scripts and applets).
In this configuration, provide an option to alert the user when executable content is available (but has not been executed). [Priority 1] Content only. (Checkpoint 3.4) Note: Scripts and applets may provide very useful functionality, not all of which causes accessibility problems. Developers should not consider that the user's ability to turn off scripts is an effective way to improve content accessibility; turning off scripts means losing the benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off scripts as a last resort. Notes and rationale: 1. Executable content includes scripts, applets, ActiveX controls, etc. This checkpoint does not apply to plug-ins that are not part of content. 2. Executable content includes content that runs "on load" (e.g., when a document loads into a viewport) and when other events occur (e.g., user interface events). 3. The alert that scripts are available but not executed is important, for instance, for helping users understand why some poorly authored pages without script alternatives produce no content when scripts are turned off. 4. Control of scripts is particularly important when they can cause the screen to flicker, since people with photosensitive epilepsy can have seizures triggered by flickering or flashing, particularly in the 4 to 59 flashes per second (Hertz) range. Peak sensitivity to flickering or flashing occurs at 20 Hertz. 5. Where possible, authors should encode knowledge in declarative formats rather than in scripts. Knowledge and behaviors embedded in scripts are difficult to extract, which means that user agents are less likely to be able to offer control by the user over the script's effect. Example techniques: 1. None. Doing more: 1. While this checkpoint only requires a global on/off switch, user agents should allow finer control over executable content. For instance, in addition to the global switch, allow users to turn off just input device event handlers. Related techniques: 1. See the section on script techniques. __________________________________________________________ 3.5 Allow configuration so that client-side content refreshes (i.e., those initiated by the user agent, not the server) do not change content except on explicit user request. Allow the user to request the new content on demand (e.g., by following a link or confirming a prompt). Alert the user, according to the schedule specified by the author, whenever fresh content is available (to be obtained on explicit user request). [Priority 1] Content only. (Checkpoint 3.5) Notes and rationale: 1. Some HTML authors create a refresh effect by using a META element with http-equiv="refresh" and the refresh rate specified in seconds by the "content" attribute (a sketch of one way to handle such a refresh follows at the end of this checkpoint's techniques). Example techniques: 1. Alert the user to pages that refresh automatically and allow them to specify a refresh rate through the user agent user interface. 2. Allow configuration for at least one very slow refresh rate (e.g., every 10 minutes). Doing more: 1. Retrieve new content without displaying it automatically. Allow the user to view the differences (e.g., by highlighting or filtering) between the currently rendered content and the new content (including no differences).
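The following is a minimal sketch, in ECMAScript using only W3C DOM Level 1 calls, of how a user agent (or a configuration layer with access to the document object) might recognize a META-based refresh and convert it into an on-demand prompt, as this checkpoint requires. The function names (findMetaRefresh, scheduleManualRefresh) and the confirmation dialog are illustrative assumptions, not part of any specification; a real user agent would intercept the refresh before acting on it and would use its own alert and configuration mechanisms.
   // Sketch only: locate an author-specified refresh of the form
   // <META http-equiv="refresh" content="300;url=http://example.org/next">.
   function findMetaRefresh(doc) {
     var metas = doc.getElementsByTagName("META");
     for (var i = 0; i < metas.length; i++) {
       var httpEquiv = metas.item(i).getAttribute("http-equiv") || "";
       if (httpEquiv.toLowerCase() != "refresh") {
         continue;
       }
       var content = metas.item(i).getAttribute("content") || "";
       var parts = content.split(";");
       var refresh = { seconds: parseInt(parts[0], 10), url: null };
       for (var j = 1; j < parts.length; j++) {
         var part = parts[j].replace(/^\s+/, "");
         if (part.toLowerCase().indexOf("url=") == 0) {
           refresh.url = part.substring(4);
         }
       }
       return refresh;
     }
     return null;
   }
   // Alert the user on the author's schedule, but change content only on
   // explicit request (modeled here with a confirmation prompt).
   function scheduleManualRefresh(doc) {
     var refresh = findMetaRefresh(doc);
     if (refresh == null || isNaN(refresh.seconds)) {
       return;
     }
     window.setTimeout(function () {
       if (window.confirm("Fresh content is available. Load it now?")) {
         if (refresh.url) {
           window.location.href = refresh.url;
         } else {
           window.location.reload();
         }
       }
     }, refresh.seconds * 1000);
   }
A user agent built this way could also honor the "very slow refresh rate" configuration above simply by substituting the user's chosen interval for refresh.seconds before scheduling the alert.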
__________________________________________________________ 3.6 Allow configuration so that a "client-side redirect" (i.e., one initiated by the user agent, not the server) does not change content except on explicit user request. Allow the user to access the new content on demand (e.g., by following a link or confirming a prompt). The user agent is not required to provide these functionalities for client-side redirects that occur instantaneously (i.e., when there is no delay before the new content is retrieved). [Priority 2] Content only. (Checkpoint 3.6) Notes and rationale: 1. This checkpoint is a Priority 2 checkpoint in part because the author's redirect implies that users aren't expected to use the content prior to the redirect. Example techniques: 1. Provide a configuration so that when the user navigates "back" through the user agent history to a page with a client-side redirect, the user agent does not re-execute the client-side redirect. Doing more: 1. Allow configuration to allow access on demand to new content even when the client-side redirect has been specified by the author to be instantaneous. References: 1. For Web content authors: refer to the HTTP/1.1 specification [RFC2616] for information about using server-side redirect mechanisms (instead of client-side redirects). __________________________________________________________ 3.7 Allow configuration not to render images. In this configuration, provide an option to render a placeholder in context for each unrendered image. When placeholders are rendered, allow the user to view the original author-supplied content associated with each placeholder. [Priority 2] Content only. (Checkpoint 3.7) Note: See also checkpoint 3.8. Related techniques: 1. See techniques for checkpoint 3.1. __________________________________________________________ 3.8 Once the user has viewed the original author-supplied content associated with a placeholder, allow the user to turn off the rendering of the author-supplied content. [Priority 3] Content only. (Checkpoint 3.8) Note: For example, if the user agent substitutes the author-supplied content for the placeholder in context, allow the user to "toggle" between placeholder and the associated content. Or, if the user agent renders the author-supplied content in a separate viewport, allow the user to close that viewport. See checkpoint 3.2 and checkpoint 3.7. Example techniques: 1. None. __________________________________________________________ [next guideline 4] [review guideline 3] [previous guideline 2] [contents] Guideline 4. Ensure user control of rendering. In addition to the techniques below, refer also to the section on user control of style. Checkpoints for visually rendered text 4.1 Allow global configuration and control over the reference size of rendered text, with an option to override reference sizes specified by the author or user agent defaults. Allow the user to choose from among the full range of font sizes supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.1) Note: The reference size of rendered text corresponds to the default value of the CSS2 'font-size' property, which is 'medium' (refer to CSS2 [CSS2], section 15.2.4). For example, in HTML, this might be paragraph text. The default reference size of rendered text may vary among user agents. User agents may offer different mechanisms to allow control of the size of rendered text (e.g., font size control, zoom, magnification, etc.). 
Refer, for example, to the Scalable Vector Graphics specification [SVG] for information about scalable rendering. Notes and rationale: 1. The choice of optimal techniques depends in part on which markup language is being used. For instance, HTML user agents may allow the user to change the font size of a particular piece of text (e.g., by using CSS user style sheets) independent of other content (e.g., images). Since the user agent can reflow the text after resizing the font, the rendered text will become more legible without, for example, distorting bitmap images. On the other hand, some languages, such as SVG, do not allow text reflow, which means that changes to font size may cause rendered text to overlap with other content, reducing accessibility. SVG is designed to scale, making zoom functionality the more natural technique for SVG user agents satisfying this checkpoint. Example techniques: 1. Inherit text size information from user preferences specified for the operating environment. 2. Use operating environment magnification features. 3. When scaling text, maintain size relationships among text of different sizes. 4. Implement the 'font-size' property in CSS ([CSS2], section 15.2.4). Doing more: 1. Allow the user to configure the text size on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations. 2. Allow the user to configure the text size differently for different scripts (i.e., writing systems). __________________________________________________________ 4.2 Allow global configuration of the font family of all rendered text, with an option to override font families specified by the author or by user agent defaults. Allow the user to choose from among the full range of font families supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.2) Note: For example, allow the user to specify that all text is to be rendered in a particular sans-serif font family. For text that cannot be rendered properly using the user's preferred font family, the user agent may substitute an alternative font family. Example techniques: 1. Inherit font family information from user preferences specified for the operating environment. 2. Implement the 'font-family' property in CSS ([CSS2], section 15.2.2). 3. Allow the user to override author-specified font families with differing levels of detail. For instance, use font A in place of any sans-serif font and font B in place of any serif font. Doing more: 1. Allow the user to configure font families on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations. __________________________________________________________ 4.3 Allow global configuration of the foreground and background color of all rendered text, with an option to override foreground and background colors specified by the author or user agent defaults. Allow the user to choose from among the full range of colors supported by the operating environment. [Priority 1] Content only. (Checkpoint 4.3) Note: User configuration of foreground and background colors may inadvertently lead to the inability to distinguish ordinary text from selected text, focused text, etc. See checkpoint 10.3 for more information about highlight styles. Example techniques: 1. Inherit foreground and background color information from user preferences specified for the operating environment. 2.
Implement the 'color' and 'border-color' properties in CSS 2 ([CSS2], sections 14.1 and 8.5.2, respectively). 3. Implement the 'background-color' property (and other background properties) in CSS 2 ([CSS2], section 14.2.1). Doing more: 1. Allow the user to specify minimal contrast between foreground and background colors, adjusting colors dynamically to meet those requirements. __________________________________________________________ Checkpoints for multimedia presentations and other presentations that change continuously over time 4.4 Allow the user to slow the presentation rate of audio and animations (including video and animated images). For a visual track, provide at least one setting between 40% and 60% of the original speed. For a prerecorded audio track including audio-only presentations, provide at least one setting between 75% and 80% of the original speed. When the user agent allows the user to slow the visual track of a synchronized multimedia presentation to between 100% and 80% of its original speed, synchronize the visual and audio tracks. Below 80%, the user agent is not required to render the audio track. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. [Priority 1] Content only. (Checkpoint 4.4) Note: Purely stylistic effects include background sounds, decorative animated images, and effects caused by style sheets. The style exception of this checkpoint is based on the assumption that authors have satisfied the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10] not to convey information through style alone (e.g., through color alone or style sheets alone). See checkpoint 2.6 and checkpoint 4.7. Notes and rationale: 1. Allowing the user to slow the presentation of audio and animations will benefit individuals with specific learning disabilities or cognitive disabilities, as well as individuals with newly acquired sensory limitations (such as a person who is newly blind and learning to use a screen reader). The same feature will benefit individuals who have beginning familiarity with a natural language. Slowing one track (e.g., video) may make it harder for a user to understand another synchronized track (e.g., audio), but if the user can understand content after two passes, this is better than not being able to understand it at all. 2. Some formats (e.g., streaming formats) might not enable the user agent to slow down playback and would thus be subject to applicability. Example techniques: 1. When changing the rate of audio, avoid pitch distortion. 2. In HTML 4 [HTML4], background animations may be specified with the deprecated "background" attribute. 3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback rate (as well as forward or reverse direction) of any animation. 4. Authors sometimes specify background sounds with the "bgsound" attribute. Note: This attribute is not part of HTML 4 [HTML4]. Doing more: 1. Allowing the user to speed up audio is also useful. For example, some users who access content serially benefit from the ability to speed up audio. References: 1. Refer to variable playback speed techniques used for Digital Talking Books [TALKINGBOOKS].
__________________________________________________________ 4.5 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) that last three or more seconds at their default playback rate. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. The user agent is not required to play synchronized audio during fast advance or reverse of animations (though doing so may help orient the user). [Priority 1] Content only. (Checkpoint 4.5) Note: See checkpoint 4.4 for more information about the exception for purely stylistic effects. This checkpoint applies to content that is either rendered automatically or on request from the user. The requirement of this checkpoint is for control of each source of audio and animation that is recognized as distinct. Respect synchronization cues per checkpoint 2.6. Notes and rationale: 1. Some formats (e.g., streaming formats) might not enable the user agent to fast advance or fast reverse content and would thus be subject to applicability. Example techniques: 1. Allow the user to advance or rewind the presentation in increments. This is particularly valuable to users with physical disabilities who may not have fine control over advance and rewind functionalities. Allow users to configure the size of the increments. 2. If buttons are used to control advance and rewind, make the advance/rewind distances proportional to the length of time the user activates the button. After a certain delay, accelerate the advance/rewind. 3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback direction (forward or reverse) of any animation. See also the accelerate and decelerate attributes. 4. Some content lends itself to different forward and reverse functionalities. For instance, compact disk players often let listeners fast forward and reverse, but also skip to the next or previous song. Doing more: 1. The user agent should display time codes or otherwise represent the position in content to orient the user. 2. Apply techniques for changing audio speed without introducing distortion. References: 1. Refer to fast advance and fast reverse techniques used for Digital Talking Books [TALKINGBOOKS]. 2. Home Page Reader [HPR] lets users insert bookmarks in presentations. __________________________________________________________ 4.6 For graphical viewports, allow the user to position text transcripts, collated text transcripts, and captions in the viewport. Allow the user to choose from among at least the range of positions available to the author (e.g., the range of positions allowed by the markup or style language). [Priority 1] Content only. (Checkpoint 4.6) Notes and rationale: 1. Some users need to be able to position captions, etc. so that they do not obscure other content or are not obscured by other content. Other users (e.g., users of screen magnifiers or users who have other visual disabilities) require pieces of content to be in a particular relation to one another, even if this means that some content will obscure other content. Example techniques: 1. User agents should implement the positioning features of the employed markup or style sheet language.
Even when a markup language does not explicitly allow positioning, when a user agent can recognize distinct text transcripts, collated text transcripts, or captions, the user agent should allow the user to reposition them. User agents are not required to allow repositioning when the captions, etc. cannot be separated from other media (e.g., the captions are part of the video track). 2. For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute. 3. Implement the CSS 2 'position' property ([CSS2], section 9.3.1). 4. Allow the user to choose whether captions appear at the bottom or top of the video area or in other positions. Currently authors may place captions overlying the video or in a separate box. Captions that overlie the video may prevent users from viewing other information in the video or on other parts of the screen, making it necessary to move the captions in order to view all content at once. In addition, some users will find captions easier to read if they can place them in a location best suited to their reading style. 5. Allow users to configure a general preference for caption position and to fine-tune specific cases. For example, the user may want the captions to be in front of and below the rest of the presentation. 6. Allow the user to drag and drop the captions to a place on the screen. To ensure device-independence, also allow the user to enter the screen coordinates of one corner of the caption. 7. Do not require users to edit the source code of the presentation to achieve the desired effect. Doing more: 1. Allow the user to position all parts of a presentation rather than trying to identify captions specifically (i.e., solving the problem generally may be easier than for captions alone). 2. Allow the user to resize (graphically) the captions, etc. __________________________________________________________ 4.7 Allow the user to slow the presentation rate of audio and animations (including video and animated images) not covered by checkpoint 4.4. The same speed percentage requirements of checkpoint 4.4 apply. [Priority 2] Content only. (Checkpoint 4.7) Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.4 for all audio and animations. Related techniques: 1. See the techniques for checkpoint 4.4. __________________________________________________________ 4.8 Allow the user to stop, pause, resume, fast advance, and fast reverse audio and animations (including video and animated images) not covered by checkpoint 4.5. [Priority 2] Content only. (Checkpoint 4.8) Note: User agents automatically satisfy this checkpoint if they satisfy checkpoint 4.5 for all audio and animations. Related techniques: 1. See the techniques for checkpoint 4.5. __________________________________________________________ Checkpoints for audio volume control 4.9 Allow global configuration and control of the volume of all audio, with an option to override audio volumes specified by the author or user agent defaults. The user must be able to choose zero volume (i.e., silent). [Priority 1] Content only. (Checkpoint 4.9) Note: User agents should allow configuration and control of volume through available operating environment controls. Example techniques: 1. Use audio control mechanisms provided by the operating environment.
Control of volume mix is particularly important, and the user agent should provide easy access to those mechanisms provided by the operating environment. 2. Implement the CSS 2 'volume' property ([CSS2], section 19.2). 3. Implement the 'display', 'play-during', and 'speak' properties in CSS 2 ([CSS2], sections 9.2.5, 19.6, and 19.3, respectively). 4. Authors sometimes specify background sounds with the "bgsound" attribute. Note: This attribute is not part of HTML 4 [HTML4]. References: 1. Refer to guidelines for audio characteristics used for Digital Talking Books [TALKINGBOOKS]. __________________________________________________________ 4.10 Allow independent control of the volumes of distinct audio sources synchronized to play simultaneously. [Priority 1] Content only. (Checkpoint 4.10) Note: Sounds that play at different times are distinguishable and therefore independent control of their volumes is not required by this checkpoint (since the volume control required by checkpoint 4.9 suffices). The user agent may satisfy this checkpoint by allowing the user to control independently the volumes of all distinct audio sources. The user control required by this checkpoint includes the ability to override author-specified volumes for the relevant sources of audio. See also checkpoint 4.12. Related techniques: 1. For each source of audio recognized as distinct, allow the user to control the volume using the same user interface used to satisfy the requirements of checkpoint 4.5. __________________________________________________________ Checkpoints for synthesized speech See also techniques for synthesized speech. 4.11 Allow configuration and control of the synthesized speech rate, according to the full range offered by the speech synthesizer. [Priority 1] Content only. (Checkpoint 4.11) Note: The range of speech rates offered by the speech synthesizer may depend on natural language. Example techniques: 1. For example, many speech synthesizers offer a range for English speech of 120 - 500 words per minute or more. The user should be able to increase or decrease the speech rate in convenient increments (e.g., in large steps, then in small steps for finer control). 2. User agents may allow different speech rate configurations for different natural languages. For example, this may be implemented with CSS2 style sheets using the :lang pseudo-class ([CSS2], section 5.11.4). 3. Use synthesized speech mechanisms provided by the operating environment. 4. Implement the CSS 2 'speech-rate' property ([CSS2], section 19.8). Doing more: 1. Content may include commands that are interpreted by a speech engine to change the speech rate (or control other speech parameters). This checkpoint does not require the user agent to allow the user to override author-specified speech rate changes (e.g., by transforming or otherwise stripping out these commands before passing on the content to the speech engine). Speech engines themselves may allow user override of author-specified speech rate changes. For such speech engines, the user agent should ensure access to this feature as part of satisfying this checkpoint. __________________________________________________________ 4.12 Allow control of the synthesized speech volume, independent of other sources of audio. [Priority 1] Content only. (Checkpoint 4.12) Note: The user control required by this checkpoint includes the ability to override author-specified speech volume. See also checkpoint 4.10. Example techniques: 1.
The user agent should allow the user to make synthesized speech louder and softer than other audio sources. 2. Use synthesized speech mechanisms provided by the operating environment. 3. Implement the CSS 2 'volume' property ([CSS2], section 19.2). __________________________________________________________ 4.13 Allow configuration of speech characteristics according to the full range of values offered by the speech synthesizer. [Priority 1] Content only. (Checkpoint 4.13) Note: Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options that group several characteristics. Some typical options one might encounter include: "adult male voice", "female child voice", "robot voice", "pitch", "stress", etc. Ranges for values may vary among speech synthesizers. Example techniques: 1. Use synthesized speech mechanisms provided by the operating environment. 2. One example of a speech API is Microsoft's Speech Application Programming Interface [SAPI]. 3. [Image: ViaVoice control panel for configuration of voice characteristics] This image shows how ViaVoice [VIAVOICE] allows users to configure voice characteristics of the speech synthesizer. References: 1. For information about these speech characteristics, please refer to descriptions in section 19.8 of Cascading Style Sheets Level 2 [CSS2]. __________________________________________________________ 4.14 Allow configuration of the following speech characteristics: pitch, pitch range, stress, richness. Pitch refers to the average frequency of the speaking voice. Pitch range specifies a variation in average frequency. Stress refers to the height of "local peaks" in the intonation contour of the voice. Richness refers to the richness or brightness of the voice. [Priority 2] Content only. (Checkpoint 4.14) Note: This checkpoint is more specific than checkpoint 4.13: it requires support for the voice characteristics listed. Definitions for these characteristics are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions. Some speech synthesizers allow users to choose values for speech characteristics at a higher abstraction layer, i.e., by choosing from preset options distinguished by "gender", "age", "accent", etc. Ranges of values may vary among speech synthesizers. Related techniques: 1. See checkpoint 4.13. __________________________________________________________ 4.15 Provide support for user-defined extensions to the speech dictionary, as well as the following functionalities: spell-out (spell text one character at a time or according to language-dependent pronunciation rules), speak-numeral (speak a numeral as individual digits or as a full number), and speak-punctuation (speak punctuation literally or render as natural pauses). [Priority 2] Content only. (Checkpoint 4.15) Note: Definitions for the functionalities listed are taken from section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; please refer to that specification for additional informative descriptions. Example techniques: 1. [Image: ViaVoice control panel for editing the user dictionary] This image shows how ViaVoice [VIAVOICE] allows users to add entries to the user's personal dictionary. References: 1. For information about these functionalities, please refer to descriptions in section 19 of Cascading Style Sheets Level 2 [CSS2].
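Taken together, checkpoints 4.11 through 4.15 map closely onto the CSS 2 aural properties. As a minimal, non-normative sketch (the values shown are illustrative only, and the usable ranges depend on the speech synthesizer), a user agent that implements these properties could express a user's speech preferences as a user style sheet:

   /* Illustrative aural user style sheet (CSS 2, section 19):
      speech rate and volume (checkpoints 4.11, 4.12), voice
      characteristics (4.13, 4.14), and speech properties (4.15). */
   * { speech-rate: 180;          /* words per minute */
       volume: loud;
       pitch: medium;
       pitch-range: 60;
       stress: 50;
       richness: 60;
       speak-numeral: digits;
       speak-punctuation: code }
   abbr, acronym { speak: spell-out }   /* spell these out one character at a time */

Exposing such a style sheet for the user to edit or select is one possible approach; configuration through the speech synthesizer's own controls (as in the ViaVoice panels above) is equally valid.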
__________________________________________________________ Checkpoints related to style sheets 4.16 For user agents that support style sheets, allow the user to choose from (and apply) available author and user style sheets or to ignore them. [Priority 1] Both content and user agent. (Checkpoint 4.16) Note: By definition, the user agent's default style sheet is always present, but may be overridden by author or user styles. Developers should not consider that the user's ability to turn off author and user style sheets is an effective way to improve content accessibility; turning off style sheet support means losing the many benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off author and user style sheets as a last resort. Example techniques: 1. For HTML [HTML4], make available "class" and "id" information so that users can override styles. 2. Implement user style sheets. 3. Implement the "!important" semantics of CSS 2 ([CSS2], section 6.4.2). References: 1. For information about how alternative style sheets are specified in HTML 4 [HTML4], please refer to section 14.3.1. 2. For information about how alternative style sheets are specified in XML 1.0 [XML], please refer to "Associating Style Sheets with XML documents Version 1.0" [XMLSTYLE]. __________________________________________________________ [next guideline 5] [review guideline 4] [previous guideline 3] [contents] Guideline 5. Ensure user control of user interface behavior. Checkpoints 5.1 Allow configuration so that the current focus does not move automatically to viewports that open without explicit user request. Configuration is not required if the current focus can only ever be moved by explicit user request. [Priority 2] Both content and user agent. (Checkpoint 5.1) Note: For example, allow configuration so that neither the current focus nor the pointing device jumps automatically to a viewport that opens without explicit user request. Notes and rationale: 1. Moving the focus automatically to a new viewport may disorient users with cognitive disabilities or users who are blind, and it may be difficult to restore the previous point of regard. Example techniques: 1. Allow the user to configure how the current focus changes when a new viewport opens. For instance, the user might choose between these two options: 1. Do not change the focus when a viewport opens, but alert the user (e.g., with a beep, flash, and text message on the status bar). Allow the user to navigate directly to the new window upon demand. 2. Change the focus when a window opens and use a subtle alert (e.g., a beep, flash, and text message on the status bar) to indicate that the focus has changed. 2. If a new viewport or prompt appears but focus does not move to it, alert assistive technologies (per checkpoint 6.5) so that they may discreetly inform the user. 3. When a viewport is duplicated, the focus in the new viewport should initially be the same as the focus in the original viewport. Duplicate viewports allow users to navigate content (e.g., in search of some information) in one viewport while allowing the user to return with little effort to the point of regard in the duplicate viewport. There are other techniques for accomplishing this (e.g., "registers" in Emacs). 4. In JavaScript, the focus may be changed with myWindow.focus(); 5.
For user agents that implement CSS 2 [CSS2], the following rule will generate a message to the user at the beginning of link text for links that are meant to open new windows when followed: A[target=_blank]:before{content:"Open new window"} Doing more: 1. The user agent may also allow configuration about whether the pointing device moves automatically to windows that open without an explicit user request. __________________________________________________________ 5.2 For graphical user interfaces, allow configuration so that the viewport with the current focus remains "on top" of all other viewports with which it overlaps. [Priority 2] Both content and user agent. (Checkpoint 5.2) Notes and rationale: 1. The alert is important to ensure that the user realizes a new viewport has opened; the new viewport may be hidden by the viewport configured to remain on top. Example techniques: 1. None. Doing more: 1. The user agent may also allow configuration about whether the viewport designated by the pointing device always remains on top. __________________________________________________________ 5.3 Allow configuration so that viewports only open on explicit user request. In this configuration, instead of opening a viewport automatically, alert the user and allow the user to open it on demand (e.g., by following a link or confirming a prompt). Allow the user to close viewports. If a viewport (e.g., a frame set) contains other viewports, these requirements only apply to the outermost container viewport. [Priority 2] Both content and user agent. (Checkpoint 5.3) Note: User creation of a new viewport (e.g., empty or with a new resource loaded) through the user agent's user interface constitutes an explicit user request. See also checkpoint 5.1 (for control over changes of focus when a viewport opens) and checkpoint 6.5 (for programmatic alert of changes to the user interface). Notes and rationale: 1. Navigation of multiple open viewports may be difficult for some users who navigate viewports serially (e.g., users with visual or physical disabilities) and for some users with cognitive disabilities (who may be disoriented). Example techniques: 1. For HTML [HTML4], allow the user to control the process of opening a document in a new "target" frame or a viewport created by a script. For example, for target="_blank", open the window according to the user's preference. 2. For SMIL [SMIL], allow the user to control viewports created with the "new" value of the "show" attribute. 3. In JavaScript, windows may be opened with: o myWindow.open("example.com", "My New Window"); o myWindow.showHelp(URI); __________________________________________________________ 5.4 Allow configuration to prompt the user to confirm (or cancel) any form submission that is not caused by an explicit user request to activate a form submit control. [Priority 2] Content only. (Checkpoint 5.4) Note: For example, do not submit a form automatically when a menu option is selected, when all fields of a form have been filled out, or when a "mouseover" or "change" event occurs. The user agent may satisfy this checkpoint by prompting the user to confirm all form submissions. Example techniques: 1. In HTML 4 [HTML4], form submit controls are the INPUT element (section 17.4) with type="submit" and type="image", and the BUTTON element (section 17.5) with type="submit". 2. Allow the user to configure script-based submission (e.g., form submission accomplished through an "onChange" event). For instance, allow these settings: 1.
Do not allow script-based submission. 2. Allow script-based submission after confirmation from the user. 3. Allow script-based submission without prompting the user (but not by default). 3. Authors may write scripts that submit a form when particular events occur (e.g., "onchange" events). Be aware of this type of practice: