_________________________________________________________________ W3C Techniques for User Agent Accessibility Guidelines 1.0 W3C Working Draft 3 October 2002 This version: http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20021003/ Latest version: http://www.w3.org/WAI/UA/UAAG10-TECHS/ Previous version: http://www.w3.org/WAI/UA/WD-UAAG10-TECHS-20020807/ Editors: Ian Jacobs, W3C Jon Gunderson, University of Illinois at Urbana-Champaign Eric Hansen, Educational Testing Service Authors and Contributors: See acknowledgements. This document is also available in these non-normative formats: single HTML, plain text, gzip PostScript, Black/white gzip PostScript, gzip PDF, gzip tar file of HTML, and zip archive of HTML. Note: Some user agents unzip the gzipped files on the fly without changing the file suffix. If you encounter problems reading the gzipped files, remove the .gz suffix and try again. Copyright © 1999 - 2002 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. _________________________________________________________________ Abstract This document provides techniques for satisfying the checkpoints defined in "User Agent Accessibility Guidelines 1.0" [UAAG10]. These techniques address key aspects of the accessibility of user interfaces, content rendering, application programming interfaces (APIs), and languages such as the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and the Synchronized Multimedia Integration Language (SMIL). The techniques listed in this document are not required for conformance to the Guidelines. These techniques are not necessarily the only way of satisfying a checkpoint, nor are they a definitive set of requirements for satisfying a checkpoint. Status of this document This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this document series is maintained at the W3C. This is the 3 October 2002 Working Draft of "Techniques for User Agent Accessibility Guidelines 1.0". It is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress". This is work in progress and does not imply endorsement by, or the consensus of, either W3C or participants in the User Agent Accessibility Guidelines Working Group (UAWG). While User Agent Accessibility Guidelines 1.0 strives to be a stable document (as a W3C Recommendation), the current document is expected to evolve as technologies change and content developers discover more effective techniques for designing accessible Web sites and pages. A list of changes to this document is available. The latest information regarding patent disclosures related to this document is available on the Web. As of this publication, there are no disclosures. Please send comments about this document, including suggestions for additional techniques, to the public mailing list w3c-wai-ua@w3.org; public archives are available. This document is part of a series of accessibility documents published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C). WAI Accessibility Guidelines are produced as part of the WAI Technical Activity. 
The goals of the User Agent Accessibility Guidelines Working Group are described in the charter. A list of current W3C Recommendations and other technical documents can be found at the W3C Web site. Table of contents * Abstract * Status of this document * 1 Introduction * 2 The user agent accessibility guidelines + 1. Support input and output device-independence. + 2. Ensure user access to all content. + 3. Allow configuration not to render some content that may reduce accessibility. + 4. Ensure user control of rendering. + 5. Ensure user control of user interface behavior. + 6. Implement interoperable application programming interfaces. + 7. Observe operating environment conventions. + 8. Implement specifications that benefit accessibility. + 9. Provide navigation mechanisms. + 10. Orient the user. + 11. Allow configuration and customization. + 12. Provide accessible user agent documentation and help. * 3 Accessibility topics + 3.1 Access to content + 3.2 User control of rendering and style + 3.3 Link techniques + 3.4 List techniques + 3.5 Table techniques + 3.6 Image map techniques + 3.7 Frame techniques + 3.8 Form techniques + 3.9 Generated content techniques + 3.10 Content repair techniques + 3.11 Script and applet techniques + 3.12 Input configuration techniques + 3.13 Synthesized speech techniques + 3.14 Techniques for reducing dependency on spatial interactions + 3.15 Accessibility and internationalization techniques + 3.16 Appendix: Impact matrix + 3.17 Appendix: Accessibility features of some operating systems + 3.18 Appendix: Loading assistive technologies for access to the document object model * 4 Glossary * 5 References + 5.1 How to refer to this document + 5.2 Normative references + 5.3 Informative references * 6 Resources + 6.1 Operating system and programming guidelines + 6.2 User agents and other tools + 6.3 Accessibility resources + 6.4 Standards resources * 7 Acknowledgments Note: With a user agent that implements HTML 4 [HTML4] access keys, readers may navigate directly to the table of contents via the "c" character. Users may have to use additional keyboard strokes depending on their operating environment. _________________________________________________________________ 1 Introduction This document discusses implementation details that should be helpful to understanding how to satisfy the requirements of "User Agent Accessibility Guidelines 1.0" [UAAG10]. This document includes: * This section provides some context for using this document. * Section 2 lists each checkpoint of "User Agent Accessibility Guidelines 1.0" along with some possible techniques for satisfying it. * Section 3 discusses general topics related to the implementation of accessibility features in user agents. Differences from User Agent Accessibility Guidelines 1.0 In an effort to improve the readability of this document, some information from User Agent Accessibility Guidelines 1.0 has been copied here: * each checkpoint (including the Notes following the checkpoints); * the priority definitions; * the glossary; * the acknowledgments. 
In an effort to reduce the size of the current document, some information that is in User Agent Accessibility Guidelines 1.0 has not been copied here: * the introduction; * the descriptions of how the guidelines and checkpoints are structured and organized; * the prose of each guideline (i.e., the text after the guideline title and before the list of checkpoints); * the conformance section (since one does not conform to the current document, only to User Agent Accessibility Guidelines 1.0). The current document includes a list of references and resources that is not part of User Agent Accessibility Guidelines 1.0. Related resources "Techniques for User Agent Accessibility Guidelines 1.0" and the "User Agent Accessibility Guidelines 1.0" are part of a series of accessibility guidelines published by the Web Accessibility Initiative (WAI). These documents explain the responsibilities of user agent developers in making the Web more accessible to users with disabilities. The series also includes the "Web Content Accessibility Guidelines 1.0" [WCAG10] (and techniques [WCAG10-TECHS]), which explain the responsibilities of authors, and the "Authoring Tool Accessibility Guidelines 1.0" [ATAG10] (and techniques [ATAG10-TECHS]), which explain the responsibilities of authoring tool developers. 2 The user agent accessibility guidelines This section lists each checkpoint of "User Agent Accessibility Guidelines 1.0" [UAAG10] along with some possible techniques for satisfying it. Each checkpoint definition includes a link to the checkpoint definition in "User Agent Accessibility Guidelines 1.0". Each checkpoint definition is followed by one or more of the following: * Notes and rationale: Additional rationale and explanation of the checkpoint; * Who benefits: Which users with disabilities are expected to benefit from user agents that satisfy the checkpoint; * Example techniques: Some techniques to illustrate how a user agent might satisfy the requirements of the checkpoint. Screen shots and other information about deployed user agents have been included as sample techniques. References to products are not endorsements of those products by W3C; * Doing more: Techniques to achieve more than what is required by the checkpoint; * Related techniques: Links to other techniques in section 3. The accessibility topics of section 3 generally apply to more than one checkpoint. * References: References to other guidelines, specifications, or resources. Note: Most of the techniques in this document are designed for graphical browsers and multimedia players running on desktop computers. However, some of them also make sense for assistive technologies and other user agents. In particular, techniques about communication between user agents will benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies for access to the document object model. Priorities Each checkpoint in this document is assigned a priority that indicates its importance for users with disabilities. Priority 1 (P1) If the user agent does not satisfy this checkpoint, one or more groups of users with disabilities will find it impossible to access the Web. Satisfying this checkpoint is a basic requirement for enabling some people to access the Web. Priority 2 (P2) If the user agent does not satisfy this checkpoint, one or more groups of users with disabilities will find it difficult to access the Web. Satisfying this checkpoint will remove significant barriers to Web access for some people. 
Priority 3 (P3) If the user agent satisfies this checkpoint, one or more groups of users with disabilities will find it easier to access the Web. Note: This information about checkpoint priorities is included for convenience only. For detailed information about conformance to "User Agent Accessibility Guidelines 1.0" [UAAG10], please refer to that document. Guideline 1. Support input and output device-independence. Checkpoints: 1.1, 1.2, 1.3 Checkpoint definitions 1.1 Full keyboard access. (P1) Checkpoint 1.1 1. Ensure that the user can operate through keyboard input alone any user agent functionality available through the user interface. Normative inclusions and exclusions 1. This checkpoint excludes the requirements of checkpoint 1.2. 2. Conformance detail: For both content and user agent. Note: For example, ensure that the user can interact with enabled elements, select content, navigate viewports, configure the user agent, access documentation, install the user agent, and operate user interface controls, all entirely through keyboard input. User agents generally support at least three types of keyboard operation: 1. Direct (e.g., keyboard shortcuts such as "F1" to open the help menu; see checkpoint 11.4 for single-key access requirements), 2. Sequential (e.g., navigation through cascading menus), and 3. Spatial (e.g., when the keyboard is used to move the pointing device in two-dimensional visual space to manipulate a bitmap image). User agents should support direct or sequential keyboard operation for all functionalities. Furthermore, the user agent should satisfy this checkpoint by offering a combination of keyboard-operable user interface controls (e.g., keyboard-operable print menus and settings) and direct keyboard shortcuts (e.g., to print the current page). It is also possible to claim conformance to User Agent Accessibility Guidelines 1.0 [UAAG10] for full support through pointing device input and/or voice input. See the section on Input modality labels in UAAG 1.0. Notes and rationale 1. It is up to the user agent developer to decide which functionalities are best served by direct access, sequential access, and access through two-dimensional visual space. The UAAG 1.0 does not discourage a pointing device interface, but it does require redundancy through the keyboard. In most cases, developers can allow operation of the user agent without relying on motion through two-dimensional visual space; this includes text selection (a text caret may be used to establish the start and end of the selection), region selection (allow the user to describe the coordinates or position of the region, e.g., relative to the viewport), and drag-and-drop (allow the user to designate start and end points and then say "go"). 2. For instance, the user must be able to do the following through the keyboard alone (or pointing device alone or voice alone): + Select content and operate on it. For example, if the user can select rendered text with the mouse and make it the content of a new link by pushing a button, they also need to be able to do so through the keyboard and other supported devices. Other operations include cut, copy, and paste. + Set the focus on viewports and on enabled elements. + Install, configure, uninstall, and update the user agent software. + Use the graphical user interface menus. Some users may wish to use the graphical user interface even if they cannot use or do not wish to use the pointing device. + Fill out forms. + Access documentation. 3. 
Suppose a user agent such as a Web browser does not allow complete operation through the keyboard alone. It is still possible to claim conformance for the user agent in conjunction with another software component that "fills in the gap". Who benefits 1. Users with blindness are most likely to benefit from direct access through the keyboard, including navigation of user interface controls; this is a logical navigation, not navigation in two-dimensional visual space. 2. Users with physical disabilities are most likely to benefit from a combination of direct access and spatial access through the keyboard. For some users with physical disabilities, moving the pointing device using a physical mouse may be significantly more difficult than moving the pointing device with arrow keys, for example. 3. This checkpoint will also benefit users of many other alternative input devices (which make use of the keyboard API) and also anyone without a mouse. 4. While keyboard operation is expected to improve access for many users, operation by keyboard shortcuts alone may reduce accessibility (and usability) by requiring users to memorize a long list of shortcuts. Developers should provide mechanisms for contextual access to user agent functionalities (including, for example, keyboard-operable cascading menus, context-sensitive help, and keyboard operable configuration tabs) as well as direct access to those functionalities. See also checkpoint 11.5. _________________________________________________________________ 1.2 Activate event handlers. (P1) Checkpoint 1.2 1. Allow the user to activate, through keyboard input alone, all input device event handlers that are explicitly associated with the element designated by the content focus. 2. In order to satisfy provision one of this checkpoint, the user must be able to activate as a group all event handlers of the same input device event type. Normative inclusions and exclusions 1. Provision one of this checkpoint applies to handlers of any input device event type, including event types for keyboard, pointing device, and voice input. 2. The user agent is not required to allow activation of event handlers associated with a given device (e.g., the pointing device) in any order other than what the device itself allows (e.g., a mouse down event followed by a mouse drag event followed by a mouse up event). 3. The requirements for this checkpoint refer to any explicitly associated input device event handlers associated with an element, independent of the input modalities for which the user agent conforms. For example, suppose that an element has an explicitly associated handler for pointing device events. Even when the user agent only conforms for keyboard input (and does not conform for the pointing device, for example), this checkpoint requires the user agent to allow the user to activate that handler with the keyboard. 4. This checkpoint is mutually exclusive of checkpoint 1.1 since it may be excluded from a conformance profile, unlike other keyboard operation requirements. 5. Conformance profile labels: Events. Note: Refer to the checkpoints of guideline 9 for more information about focus requirements. Notes and rationale 1. For example, users without a pointing device need to be able to activate form controls and links (including the links in a client-side image map). 2. Events triggered by a particular device generally follow a set pattern, and often in pairs: start/end, down/up, in/out. 
One would not expect a "key down" event for a given key to be followed by another "key down" event without an intervening "key up" event. Who benefits 1. Users with blindness or some users with a physical disability, and anyone without a pointing device. Example techniques 1. When using the "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], activate an event handler as described in section 1.5: 1. Create an event of the given type by calling DocumentEvent.createEvent, which takes an event type as a parameter, then 2. Dispatch this event using EventTarget.dispatchEvent. (A brief sketch of this technique appears after the references below.) 2. To preserve the expected order of events, provide a dynamically changing menu of available handlers. For example, an initial menu of handlers might only allow the user to trigger a "mousedown" event. Once triggered, the menu would not allow "mousedown" but would allow "mouseup" and "mouseover". 3. In some markup languages, it is possible (though somewhat nonsensical) for two actions to be assigned to the same input event type for a given element (e.g., one through an explicit event handler and one "intrinsic" to the element). In this case, offer the user a choice of which action to take. 4. For example, in HTML 4 [HTML4], input device event handlers are described in section 18.2.3. They are: onclick, ondblclick, onmousedown, onmouseover, onmouseout, onfocus, onblur, onkeypress, onkeydown, and onkeyup. 5. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], focus and activation types are discussed in section 1.6.1. They are: DOMFocusIn, DOMFocusOut, and DOMActivate. These events are specified independent of a particular input device type. 6. In "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS], mouse event types are discussed in section 1.6.2. They are: click, mousedown, mouseup, mouseover, mousemove, and mouseout. 7. The DOM Level 2 Events specification does not provide a key event module. 8. Sequential navigation technique: Add each input device event handler to the navigation order (refer to checkpoint 9.3). Alert the user when the user has navigated to an event handler, and allow activation. For example, a link that also has onMouseOver and onMouseOut event handlers defined might generate three "stops" in the navigation order: one for the link and two for the event handlers. If this technique is used, allow configuration so that input device event handlers are not inserted in the navigation order. 9. Query technique: Allow the user to query the element with content focus for a menu of input device event handlers. 10. Descriptive information about handlers can allow assistive technologies to choose the most important functions for activation. This is possible in the Java Accessibility API [JAVAAPI], which provides an AccessibleAction Java interface. This interface provides a list of actions and descriptions that enable selective activation. See also checkpoint 6.3. 11. Using MSAA [MSAA] on the Windows platform: + Retrieve the node in the document object that has current focus. + Call the IHTMLDocument4::fireEvent method on that node. Related techniques 1. See image map techniques. References 1. For information on how to register event handlers through the DOM, and dispatch events properly, refer to Section 1.3 Event listener registration in "Document Object Model (DOM) Level 2 Events Specification" [DOM2EVENTS]. 2. For example, section 16.5 of the SVG 1.0 Candidate Recommendation [SVG] specifies processing order for user interface events. 
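The createEvent/dispatchEvent technique listed in example technique 1 above can be sketched in a few lines of TypeScript against a DOM Level 2 Events implementation. This is an illustrative sketch only, not part of the UAAG 1.0 techniques; the function name activateClickHandler and the calling context are assumptions for the example.

  // Sketch: synthesize a pointing-device event so that a keyboard-only user can
  // activate an element's "click" handlers (DOM Level 2 Events).
  function activateClickHandler(doc: Document, target: EventTarget): void {
    // 1. Create an event of the given type with DocumentEvent.createEvent.
    const event = doc.createEvent("MouseEvents") as MouseEvent;
    // initMouseEvent(type, canBubble, cancelable, view, detail, screenX, screenY,
    //                clientX, clientY, ctrlKey, altKey, shiftKey, metaKey,
    //                button, relatedTarget)
    event.initMouseEvent("click", true, true, doc.defaultView, 1,
                         0, 0, 0, 0, false, false, false, false, 0, null);
    // 2. Dispatch the event with EventTarget.dispatchEvent; any handlers
    //    registered for "click" on the target (or its ancestors) will run.
    target.dispatchEvent(event);
  }

A user agent might bind such a routine to a keyboard command, or invoke it from the dynamically changing menu of handlers described in example technique 2.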
_________________________________________________________________ 1.3 Provide text messages. (P1) Checkpoint 1.3 1. Ensure that every message (e.g., prompt, alert, or notification) that is a non-text element and is part of the user agent user interface has a text equivalent. Note: For example, if the user is alerted of an event by an audio cue, a visually-rendered text equivalent in the status bar could satisfy this checkpoint. Per checkpoint 6.5, a text equivalent for each such message must be available through an API. See also checkpoint 6.6 for requirements for programmatic notification of changes to the user interface. Notes and rationale 1. User agents should use modality-specific messages in the user interface (e.g., graphical scroll bars, beeps, and flashes) as long as redundant mechanisms are available or possible. These redundant mechanisms will benefit all users, not just users with disabilities. Who benefits 1. Users with blindness, deafness, or who are hard of hearing. Mechanisms that are redundant to audio will benefit individuals who are deaf, hard of hearing, or operating the user agent in a noisy or silent environment where the use of sound is not practical. Example techniques 1. Render text messages on the status bar of the graphical user interface. Allow users to query the viewport for this status information (in addition to having access through graphical rendering). 2. Make available information in a manner that allows other software to present it according to the user's preferences. For instance, if proportional scroll bars are used in the graphical interface to indicate the position of the viewport in content, make available this same information in text form. For instance, this will allow other software to render the proportion of content viewed as synthesized speech or as braille. Doing more 1. Allow configuration to render or not render status information (e.g., allow the user to hide the status bar). _________________________________________________________________ [next guideline: 2] [review guideline: 1] [contents] Guideline 2. Ensure user access to all content. Checkpoints: 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 2.10 Checkpoint definitions 2.1 Render content according to specification. (P1) Checkpoint 2.1 1. Render content according to format specification (e.g., for a markup language or style sheet language). Normative inclusions and exclusions 1. Rendering requirements include format-defined interactions between author preferences and user preferences/capabilities (e.g., when to render the "alt" attribute in HTML, the rendering order of nested OBJECT elements in HTML, test attributes in SMIL, and the cascade in CSS2). 2. When a rendering requirement of another specification contradicts a requirement of UAAG 1.0, the user agent may disregard the rendering requirement of the other specification and still satisfy this checkpoint; see the section on the relation of User Agent Accessibility Guidelines 1.0 to general software design guidelines and other specifications for more information. 3. The user agent is not required to satisfy this checkpoint for all implemented specifications; see the section on conformance profiles for more information. 4. This checkpoint excludes the requirements of checkpoint 2.6. Note: If a conforming user agent does not render a content type, it should allow the user to choose a way to handle that content (e.g., by launching another application or by saving it to disk). Notes and rationale 1. 
Provision two of the checkpoint only applies when the rendering requirement of another specification contradicts the requirements of the current document; no exemption is granted if the other specification is consistent with or silent about a requirement made by the current document. Who benefits 1. Users with disabilities when specifications include features that promote accessibility (e.g., scalable graphics benefit users with low vision, style sheets allow users to override author styles). Example techniques 1. Provide access to attribute values (one at a time, not as a group). For instance, allow the user to select an element and read values for all attributes set for that element. For many attributes, this type of inspection should be significantly more usable than a view of the text source. 2. When content changes dynamically (e.g., due to embedded scripts or automatic content retrieval), users need to have access to the content before and after the change. 3. Make available information about abbreviation and acronym expansions. For instance, in HTML, look for abbreviations specified by the ABBR and ACRONYM elements. The expansion may be given with the "title" attribute (refer to the Web Content Accessibility Guidelines 1.0 [WCAG10], checkpoint 4.2). To provide expansion information, user agents may: + Allow the user to configure that the expansions be used in place of the abbreviations, + Provide a list of all abbreviations in the document, with their expansions (a generated glossary of sorts), + Generate a link from an abbreviation to its expansion. + Allow the user to query the expansion of a selected or input abbreviation. + If an acronym has no expansion in one location, look for another occurrence in content that does. User agents may also look for possible expansions (e.g., in parentheses) in surrounding context, though that is a less reliable technique. Related techniques 1. See the sections on access to content, link techniques, table techniques, frame techniques, and form techniques. Doing more 1. If the requirements of the current document contradict the rendering requirements of another specification, the user agent may offer a configuration to allow conformance to one or the other specification. References 1. Sections 10.4 ("Client Error 4xx") and 10.5 ("Server Error 5xx") of the HTTP/1.1 specification [RFC2616] state that user agents should have the following behavior in case of these error conditions: Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user. _________________________________________________________________ 2.2 Provide text view. (P1) Checkpoint 2.2 1. For content authored in text formats, provide a view of the text source. Normative inclusions and exclusions 1. For the purposes of this checkpoint, a text format is: + any media object given an Internet media type of "text" (e.g., "text/plain", "text/html", or "text/*") as defined in RFC 2046 [RFC2046], section 4.1, or + any media object identified by Internet media type to be an XML document (as defined in [XML], section 2) or SGML application. Refer, for example, to Internet media types defined in "XML Media Types" [RFC3023]. 2. 
The user agent is only required to satisfy this checkpoint for text formats that are part of a conformance claim; see the section on conformance profiles for more information. However, user agents should provide a text view for all implemented text formats. Notes and rationale 1. In general, user agent developers should not rely on a "source view" to convey information to users, most of whom are not familiar with markup languages. A source view is still important as a "last resort" to some users as content might not otherwise be accessible at all. Who benefits 1. Users with blindness, low vision, deafness, hard of hearing, and any user who requires the text source to understand the content. Example techniques 1. Make the text view useful. For instance, enable links (i.e., URIs), allowing searching and other navigation within the view. 2. A source view is an easily-implementable view that will help users inspect some types of content, such as style sheet fragments or scripts. This does not mean, however, that a source view of style sheets is the best user interface for reading or changing style sheets. Doing more 1. In the absence of authoritative Internet media type information to the contrary, provide a text view of any media object identified as XML or SGML by a DOCTYPE indication conforming to the rules of those formats. User agents receive media objects in many ways. The HTTP protocol is somewhat reliable as regards providing Internet media type information for the data so delivered. The file: and ftp: URI schemes, on the other hand, do not provide this metadata. In cases like these, using the built-in markings of XML and SGML objects is appropriate and should be expected. _________________________________________________________________ 2.3 Render conditional content. (P1) Checkpoint 2.3 1. Allow configuration to provide access to each piece of unrendered conditional content "C". 2. When a specification does not explain how to provide access to this content, do so as follows: + If C is a summary, title, alternative, description, or expansion of another piece of content D, provide access through at least one of the following mechanisms: o (1a) render C in place of D; o (2a) render C in addition to D; o (3a) provide access to C by allowing the user to query D. In this case, the user agent must also alert the user, on a per-element basis, to the existence of C (so that the user knows to query D); o (4a) allow the user to follow a link to C from the context of D. + Otherwise, provide access to C through at least one of the following mechanisms: o (1b) render a placeholder for C, and allow the user to view the original author-supplied content associated with each placeholder; o (2b) provide access to C by query (e.g., allow the user to query an element for its attributes). In this case, the user agent must also alert the user, on a per-element basis, to the existence of C; o (3b) allow the user to follow a link in context to C. Sufficient techniques 1. To satisfy provision one of this checkpoint, the configuration may be a switch that, for all content, turns on or off the access mechanisms described in provision two. 2. To satisfy provision two of this checkpoint, the user agent may provide access on a per-element basis (e.g., by allowing the user to query individual elements) or for all elements (e.g., by offering a configuration to render conditional content all the time). Normative inclusions and exclusions 1. Conformance detail: For all content. 
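The per-element mechanisms listed in provision two of this checkpoint (for example, 1a "render C in place of D" and 2b "provide access to C by query") might be realized as follows in a DOM-based HTML user agent. This is a minimal, hedged sketch rather than a UAAG 1.0 requirement; the function names are hypothetical and the set of attributes examined is only an example.

  // Sketch of mechanism (1a): render the author-supplied "alt" text in place of
  // an IMG element. Assumes a DOM-based rendering pipeline; names are illustrative.
  function renderAltInPlace(img: HTMLImageElement): void {
    const alt = img.getAttribute("alt");
    if (alt !== null) {
      img.parentNode?.replaceChild(document.createTextNode(alt), img);
    }
  }

  // Sketch of mechanism (2b): gather attribute-based conditional content so the
  // user agent can alert the user to its existence and expose it on query.
  function conditionalContentOf(el: Element): Map<string, string> {
    const found = new Map<string, string>();
    for (const name of ["alt", "title", "longdesc", "summary"]) {
      const value = el.getAttribute(name);
      if (value !== null && value !== "") {
        found.set(name, value);
      }
    }
    return found;
  }

Because the alert requirement of provision two is per-element, a user agent relying on a query mechanism such as this would also need to mark, in the rendering, each element for which conditionalContentOf returns a non-empty result (for example, with an icon or border).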
Note: For instance, an HTML user agent might allow users to query each element for access to conditional content supplied for the "alt", "title", and "longdesc" attributes. Or, the user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism). Notes and rationale 1. There may be more than one piece of conditional content associated with another piece of content (e.g., multiple caption tracks associated with the visual track of a presentation). 2. Note that the alert requirement of this checkpoint is per-element. A single resource-level alert (e.g., "there is conditional content somewhere here") does not satisfy the checkpoint, but may be part of a solution for satisfying this checkpoint. For example, the user agent might indicate the presence of conditional content "somewhere" with a menu in the toolbar. The menu items could provide both per-element alert and access to the content (e.g., by opening a viewport with the conditional content rendered). Who benefits 1. Any user for whom the author has provided conditional content for accessibility purposes. This includes text equivalents for users with blindness or low vision, or users who are deaf-blind, and captions for users with deafness or who are hard of hearing. Example techniques 1. Allow users to choose more than one piece of conditional content at a given time. For instance, users with low vision may want to view images (even imperfectly) but require a text equivalent for the image; the text may be rendered with a large font or as synthesized speech. 2. In HTML 4 [HTML4], conditional content mechanisms include the following: + For the IMG element (section 13.2): the "alt" (section 13.8), "title" (section 7.4.3), and "longdesc" (section 13.2) attributes. See the section on long descriptions. + For the OBJECT element (section 13.3): the content of the element and the "title" attribute. + For the deprecated APPLET element (section 13.4): the "alt" attribute and the content of the element. + For the AREA element (section 13.6.1): the "alt" attribute. + For the INPUT element (section 17.4): the "alt" attribute. + For the ACRONYM and ABBR elements (section 9.2.1): the "title" attribute (for acronym or abbreviation expansion). + For the TABLE element (section 11.2.1): the "summary" attribute. + For frames: the NOFRAMES element (section 16.4.1) and the "longdesc" attribute (section 16.2.2) on FRAME and IFRAME (section 16.5). + For scripts: the NOSCRIPT element (section 18.3.1). 3. Allow the user to configure how the user agent renders a long description (e.g., "longdesc" in HTML 4 [HTML4]). Some possibilities include: 1. Render the long description in a separate view. 2. Render the long description in place of the associated element. 3. Do not render the long description, but allow the user to query whether an element has an associated long description (e.g., with a context-sensitive menu) and provide access to it. 4. Use an icon (with a text equivalent) to indicate the presence of a long description. 5. Use an audio cue to indicate the presence of a long description when the user navigates to the element. 4. For an object (e.g., an image) with an author-specified geometry that the user agent does not render, allow the user to configure how the conditional content should be rendered. For example, within the specified geometry, or by ignoring the specified geometry altogether. 5. 
For multimedia presentations with several alternative tracks, ensure access to all tracks and allow the user to select individual tracks. (As an example, the QuickTime player [QUICKTIME] allows users to turn on and off any number of tracks separately.) For example, construct a list of all available tracks from short descriptions provided by the author (e.g., through the "title" attribute). 6. For multimedia presentations with several alternative tracks, allow users to choose tracks based on natural language preferences. SMIL 1.0 [SMIL] allows users to specify captions in different natural languages. By setting language preferences in the SMIL player (e.g., the G2 player [G2]), users may access captions (or audio) in different languages. Allow users to specify different languages for different content types (e.g., English audio and Spanish captions). 7. If a multimedia presentation has several captions (or subtitles) available, allow the user to choose from among them. Captions might differ, for example, in level of detail, reading level or natural language. Multilingual audiences may wish to have captions in different natural languages on the screen at the same time. Users may wish to use both captions and audio descriptions concurrently as well. 8. Make apparent through the user agent user interface which audio tracks are meant to be played separately (e.g., by allowing the user to select each one independently from a menu). 9. Section 7.8.1 of SMIL 2.0 [SMIL20] defines the 'readIndex' attribute, which specifies the position of the current element in the order in which values of the longdesc, title, and alt attributes are to be read aloud. Related techniques 1. See the section on access to content. Doing more 1. If the user agent satisfies the checkpoint by implementing 1b (placeholders), allow the user to toggle back and forth between a placeholder and the original author-supplied content. Some users with a cognitive disability may find it difficult to access content after turning on rendering of too many images (even when those images were turned on one by one). Sample technique: allow the user to designate a placeholder and request to view the associated content in a separate viewport (e.g., through a context menu), leaving the placeholder in context. Allow the user to close the new viewport manually. 2. Make information available with different levels of detail. For example, for a voice browser, offer two options for HTML IMG elements: 1. Speak only "alt" text by default, but allow the user to hear "longdesc" text on an image by image basis. 2. Speak "alt" text and "longdesc" for all images. 3. Allow the user to configure different natural language preferences for different types of conditional content (e.g., captions and audio descriptions). Users with disabilities may need to choose the language they are most familiar with in order to understand a presentation for which supplementary tracks are not all available in all desired languages. In addition, some users may prefer to hear the program audio in its original language while reading captions in another, fulfilling the function of subtitles or to improve foreign language comprehension. In classrooms, teachers may wish to configure the language of various multimedia elements to achieve specific educational goals. How the user selects preferred natural language for captions in RealPlayer This image shows how users select a natural language preference in the RealPlayer. 
This setting, in conjunction with language markup in the presentation, determines what content is rendered. _________________________________________________________________ 2.4 Allow time-independent interaction. (P1) Checkpoint 2.4 1. For rendered content where user input is only possible within a finite time interval controlled by the user agent, allow configuration to provide a view where user interaction is time-independent. Sufficient techniques 1. The user agent may satisfy this checkpoint by pausing processing automatically to allow for user input, and resuming processing on explicit user request. When this technique is used, pause at the end of each time interval where user input is possible. In the paused state: + Alert the user that the rendered content has been paused (e.g., highlight the pause button in a multimedia player's control panel). + Highlight which enabled elements are time-sensitive. + Allow the user to interact with the enabled elements. + Allow the user to resume on explicit user request (e.g., by pressing the play button in a multimedia player's control panel; see also checkpoint 4.5). 2. The user agent may satisfy this checkpoint by generating a time-independent (or, "static") view, based on the original content, that offers the user the same opportunities for interaction. The static view should reflect the structure and flow of the original time-sensitive presentation; orientation cues will help users understand the context for various interaction opportunities. Normative inclusions and exclusions 1. When satisfying this checkpoint for a real-time presentation, the user agent may discard packets that continue to arrive after the construction of the time-independent view (e.g., when paused or after the construction of a static view). 2. This checkpoint does not apply when the user agent cannot recognize the time interval in the presentation format, or when the user agent cannot control the timing (e.g., because it is controlled by the server). Note: If the user agent satisfies this checkpoint by pausing automatically, it may be necessary to pause more than once when there are multiple opportunities for time-sensitive user interaction. When pausing, pause synchronized content as well (whether rendered in the same or different viewports) per checkpoint 2.6. In SMIL 1.0 [SMIL], for example, the "begin", "end", and "dur" attributes synchronize presentation components. See also checkpoint 3.5, which involves client-driven content retrieval. Notes and rationale 1. The user agent could satisfy this checkpoint by allowing the user to step through an entire presentation manually (as one might advance frame by frame through a movie). However, this is likely to be tedious and lead to information loss, so the user agent should preserve as much of the flow and order of the original presentation as possible. 2. The requirement to pause at the end (rather than at the beginning) of a time-interval is to allow the user to review content that may change during the elapse of this time. 3. The configuration option is important because techniques used to satisfy this checkpoint may lead to information loss for some types of content (e.g., highly interactive real-time presentations). 4. When different streams of time-sensitive content are not synchronized (and rendered in the same or different viewports), the user agent is not required to pause the pieces all at once. The assumption is that both streams of content will be available at another time. Who benefits 1. 
Some users with a physical disability who may not have the time to interact with the content, and users with serial access to content or who navigate sequentially. Example techniques 1. Some HTML user agents recognize time intervals specified through the META element, although this usage is not defined in HTML 4 [HTML4]. 2. Render time-dependent links as a static list that occupies the same screen real estate; authors may create such documents in SMIL 1.0 [SMIL]. Include temporal context in the list of links. For example, provide the time at which the link appeared along with a way to easily jump to that portion of the presentation. 3. For a presentation that is not "live", allow the user to choose from a menu of available time-sensitive links (essentially making them time-independent). Doing more 1. Provide a view where time intervals are lengthened, but not infinitely (e.g., allow the user to multiply time intervals by 3, 5, and 10). Or, allow the user to add extra time (e.g., 10 seconds) to each time interval. 2. Allow the user to view a list of all media elements or links of the presentation sorted by start or end time or alphabetically. 3. Alert the user whenever pausing the user agent may lead to packet loss. References 1. Refer to section 4.2.4 of SMIL 1.0 [SMIL] for information about the SMIL time model. _________________________________________________________________ 2.5 Make captions, transcripts, audio descriptions available. (P1) Checkpoint 2.5 1. Allow configuration or control to render text transcripts, collated text transcripts, captions, and audio descriptions in content at the same time as the associated audio tracks and visual tracks. Normative inclusions and exclusions 1. Conformance profile labels: Video, Audio. 2. Conformance detail: For all content. Notes and rationale 1. Users may wish to read a transcript at the same time as a related visual or audio track and pause the visual or audio track while reading; see checkpoint 4.5. Who benefits 1. Users with blindness or low vision (audio descriptions and text captions) and users with deafness or who are hard of hearing. Example techniques 1. Allow users to turn on and off audio descriptions and captions. 2. For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute. 3. SMIL user agents should allow users to configure whether they want to view captions, and this user interface switch should be bound to the 'system-captions' test attribute. Users should be able to indicate a preference for receiving available audio descriptions. Note: SMIL 1.0 [SMIL] does not include a mechanism analogous to 'system-captions' for audio descriptions, though [SMIL20] does, called 'systemAudioDesc'. 4. Another SMIL 1.0 test attribute, 'system-overdub-or-captions', allows users to choose between subtitles and overdubs in multilingual presentations. User agents should not interpret a value of 'caption' for this test attribute as meaning that the user prefers accessibility captions; that is the purpose of the 'system-captions' test attribute. When subtitles and accessibility captions are both available, users who are deaf may prefer to view captions, as they generally contain information not in subtitles, including information on music, sound effects, and who is speaking. 5. User agents that play QuickTime movies should allow the user to turn on and off the different tracks embedded in the movie. 
Authors may use these alternative tracks to provide content for accessibility purposes. The Apple QuickTime player provides this feature through the menu item "Enable Tracks." 6. User agents that play Microsoft Windows Media Object presentations should provide support for Synchronized Accessible Media Interchange (SAMI [SAMI], a protocol for creating and displaying captions) and should allow users to configure how captions are viewed. In addition, user agents that play Microsoft Windows Media Object presentations should allow users to turn on and off other conditional content, including audio description and alternative visual tracks. References 1. Developers implementing SMIL 1.0 [SMIL] should consult "Accessibility Features of SMIL" [SMIL-ACCESS]. _________________________________________________________________ 2.6 Respect synchronization cues. (P1) Checkpoint 2.6 1. Respect synchronization cues (e.g., in markup) during rendering. Normative inclusions and exclusions 1. This checkpoint is mutually exclusive of checkpoint 2.1 since it may be excluded from a conformance profile. 2. Conformance profile labels: Video, Audio. Notes and rationale 1. The term "synchronization cues" refers to pieces of information that may affect synchronization, such as the size and expected duration of tracks and their segments, the type of element and how much those elements can be sped up or slowed down (both from technological and intelligibility standpoints). 2. Captions and audio descriptions may not make sense unless rendered synchronously with related video or audio content. For instance, if someone with a hearing disability is watching a video presentation and reading associated captions, the captions should be synchronized with the audio so that the individual can use any residual hearing. For audio descriptions, it is crucial that an audio track and an audio description track be synchronized to avoid having them both play at once, which would reduce the clarity of the presentation. Who benefits 1. Users with deafness or who are hard of hearing (e.g., for audio descriptions and audio tracks), and some users with a cognitive disability. Example techniques 1. For synchronization in SMIL 2.0 [SMIL20], refer to section 10, the timing and synchronization module. 2. The idea of "sensible time-coordination" of components in the definition of synchronize centers on the idea of simultaneity of presentation, but also encompasses strategies for handling deviations from simultaneity resulting from a variety of causes. Consider how deviations might be handled for captions for a multimedia presentation such as a movie clip. Captions consist of a text equivalent of the audio track that is synchronized with the visual track. Typically, a segment of the captions appears visually near the video for several seconds while the person reads the text. As the visual track continues, a new segment of the captions is presented. However, a problem arises if the captions are longer than can fit in the display space. This can be particularly difficult if, due to a visual disability, the font size has been enlarged, thus reducing the amount of rendered caption text that can be presented. The user agent needs to respond sensibly to such problems, for example by ensuring that the user has the opportunity to navigate (e.g., scroll down or page down) through the caption segment before proceeding with the next segment of the visual track. 3. 
Developers of user agents need to determine how they will handle other synchronization challenges, such as: 1. Under what circumstances will the presentation automatically pause? Some circumstances where this might occur include: o the segment of rendered caption text is more than can fit on the visual display o the user wishes more time to read captions or the collated text transcript o the audio description is of longer duration than the natural pause in the audio. 2. Once the presentation has paused, then under what circumstances will it resume (e.g., only when the user signals it to resume, or based on a predefined pause length)? 3. If the user agent allows the user to jump to a location in a presentation by activating a link, then how will related tracks behave? Will they jump as well? Will the user be able to return to a previous location or undo the action? _________________________________________________________________ 2.7 Repair missing content. (P2) Checkpoint 2.7 1. Allow configuration to generate repair text when the user agent recognizes that the author has failed to provide conditional content that was required by the format specification. Sufficient techniques 1. The user agent may satisfy this checkpoint by basing the repair text on any of the following available sources of information: URI reference, content type, or element type. Note, however, that additional information that would enable more helpful repair might be available but not "near" the missing conditional content. For instance, instead of generating repair text on a simple URI reference, the user agent might look for helpful information near a different instance of the URI reference in the same document object, or might retrieve useful information (e.g., a title) from the resource designated by the URI reference. Normative inclusions and exclusions 1. Conformance detail: For all content. Note: Some markup languages (such as HTML 4 [HTML4] and SMIL 1.0 [SMIL]) require the author to provide conditional content for some elements (e.g., the "alt" attribute on the IMG element). Notes and rationale 1. Following are some examples of conditional content that is required by format specification: + In HTML 4 [HTML4], "alt" is required for the IMG and AREA elements (for validation). (In SMIL 1.0 [SMIL], on the other hand, "alt" is not required on media objects.) + Whatever the format, text equivalents for non-text content are required by the Web Content Accessibility Guidelines 1.0 [WCAG10]. 2. Conditional content may come from markup, or even inside images (e.g., refer to "Describing and retrieving photos using RDF and HTTP" [PHOTO-RDF]). Who benefits 1. Users with blindness or low vision. Example techniques 1. When HTTP is used, HTTP headers provide information about the URI of the Web resource ("Content-Location") and its type ("Content-Type"). Refer to the HTTP/1.1 specification [RFC2616], sections 14.14 and 14.17, respectively. Refer to "Uniform Resource Identifiers (URI): Generic Syntax" ([RFC2396], section 4) for information about URI references, as well as the HTTP/1.1 specification [RFC2616], section 3.2.1. 2. An image or another piece of content may appear several times in content. If one instance has associated conditional content but others do not, reuse what the author did provide. 3. Repair content may be part of another piece of content. For instance, some image formats allow authors to store metadata there; refer to "Describing and retrieving photos using RDF and HTTP" [PHOTO-RDF]. Related techniques 1. 
See content repair techniques and cell header repair strategies. Doing more 1. When configured per provision one of this checkpoint, inform the user (e.g., in the generated text itself) that this content was not provided by the author. 2. Use heuristics based on the specification format. For instance, if the alt attribute is missing on the IMG element in HTML, but the title attribute is present, base the repair content on the title. References 1. The "Altifier Tool" [ALTIFIER] illustrates smart techniques for generating text equivalents (e.g., for images) when the author has not specified any. 2. Additional repair techniques may be available from W3C's Evaluation and Repair Tools Working Group. _________________________________________________________________ 2.8 No repair text. (P3) Checkpoint 2.8 1. Allow at least two configurations for when the user agent recognizes that conditional content required by the format specification is present but empty: + generate no repair text, or + generate repair text as described in checkpoint 2.7. Normative inclusions and exclusions 1. Conformance detail: For all content. Note: In some authoring scenarios, empty content (e.g., alt="" in HTML) may make an appropriate text equivalent, such as when non-text content has no other function than pure decoration, or when an image is part of a "mosaic" of several images and does not make sense out of the mosaic. Refer to the Web Content Accessibility Guidelines 1.0 [WCAG10] for more information about text equivalents. Notes and rationale 1. User agents should render nothing in this case because the author may specify an empty text equivalent for content that has no function in the page other than as decoration. Who benefits 1. Users with blindness or low vision. Example techniques 1. The user agent should not render generic labels such as "[INLINE]" or "[GRAPHIC]" for empty conditional content (unless configured to do so). 2. If no captioning information is available and captioning is turned on, render "No captioning information available" in the captioning region of the viewport (unless configured not to generate repair content). Doing more 1. Labels (e.g., "[INLINE]" or "[GRAPHIC]") may be useful in some situations, so the user agent may allow configuration to render "No author text" (or similar) instead of empty conditional content. _________________________________________________________________ 2.9 Render conditional content automatically. (P3) Checkpoint 2.9 1. Allow configuration to render all conditional content automatically. 2. As part of satisfying provision one of this checkpoint, provide access according to specification, or where unspecified, by applying one of the techniques 1a, 2a, or 1b defined in checkpoint 2.3. Normative inclusions and exclusions 1. The user agent is not required to render all conditional content at the same time in a single viewport. 2. Conformance detail: For all content. Note: For instance, an HTML user agent might allow configuration so that the value of the "alt" attribute is rendered in place of all IMG elements (while other conditional content might be made available through another mechanism). The user agent may offer multiple configurations (e.g., a first configuration to render one type of conditional content automatically and a second to render another type). Who benefits 1. Users who have difficulties with navigation and manual access to content, including some users with a physical disability and users with blindness or low vision. Example techniques 1. 
Provide a "conditional content view", where all content that is not rendered by default is rendered in place of associated content. For example, Amaya [AMAYA] offers a "Show alternate" view that accomplishes this. Note, however, cases where an element has more than one piece of associated conditional content. For long conditional content, instead of rendering in place, link to the content. _________________________________________________________________ 2.10 Don't render text in unsupported writing systems. (P3) Checkpoint 2.10 1. For graphical user agents, allow configuration not to render text in unsupported scripts (i.e., writing systems) when that text would otherwise be rendered. 2. When configured per provision one of this checkpoint, indicate to the user in context that author-supplied content has not been rendered due to lack of support for a writing system. Normative inclusions and exclusions 1. This checkpoint does not require the user agent to allow different configurations for different writing systems. Note: This checkpoint is designed primarily to benefit users with serial access to content or who navigate sequentially, allowing them to skip portions of content that would be unusable if rendered graphically as "garbage". Notes and rationale 1. A script is a means of supporting the visual rendering of content in a particular natural language. So, for user agents that render content visually, a user agent might not recognize "the Cyrillic script", which would mean that it would not support the visual rendering of Russian, Ukrainian, and other languages that employ Cyrillic when written. 2. There may be cases when a conforming user agent supports a natural language but a speech synthesizer does not, or vice versa. Who benefits 1. Users with serial access to content or who navigate sequentially. Example techniques 1. Use a text substitute or accessible graphical icon to indicate that content in a particular language has not been rendered. For instance, a user agent that does not support Korean (e.g., does not have the appropriate fonts or voice set) should allow configuration to announce the language change with the message "Unsupported language - unable to render" (e.g., when the language itself is not recognized) or "Korean not supported - unable to render" (e.g., when the language is recognized by the user agent does not have resources to render it). The user should also be able to choose no alert of language changes. Rendering could involve speaking in the designated natural language in the case of a voice browser or screen reader. If the natural language is not supported, the language change alert could be spoken in the default language by a screen reader or voice browser. 2. A user agent may not be able to render all characters in a document meaningfully, for instance, because the user agent lacks a suitable font. For instance, section 5.4 of HTML 4 [HTML4] recommends the following for undisplayable characters: 1. Adopt a clearly visible (or audible), but unobtrusive mechanism to alert the user of missing resources. 2. If missing characters are presented using their numeric representation, use the hexadecimal (not decimal) form since this is the form used in character set standards. 3. When HTTP is used, HTTP headers provide information about content encoding content language ("Content-Language"). Refer to the HTTP/1.1 specification [RFC2616], section 14.12. 4. 
CSS2's attribute selector may be used with the HTML "lang" or XML "xml:lang" attributes to control rendering based on recognized natural language information. Refer also to the ':lang' pseudo-class ([CSS2], section 5.11.4). Related techniques 1. See techniques for generated content, which may be used to insert text to indicate a language change. 2. See content repair techniques and accessibility and internationalization techniques. 3. See techniques for synthesized speech. Doing more 1. The UAWG recognizes that the intent of this checkpoint -- to reduce confusion and save time for the user -- applies to text rendered as speech or braille as well, but those requirements are not part of UAAG 1.0, in part due to lack of implementation experience. References 1. For information on language codes, refer to "Codes for the representation of names of languages" [ISO639]. 2. Refer to "Character Model for the World Wide Web" [CHARMOD]. It contains basic definitions and models, specifications to be used by other specifications or directly by implementations, and explanatory material. In particular, this document addresses early uniform normalization, string identity matching, string indexing, and conventions for URIs. _________________________________________________________________ [next guideline: 3] [review guideline: 2] [previous guideline: 1] [contents] Guideline 3. Allow configuration not to render some content that may reduce accessibility. Checkpoints: 3.1, 3.2, 3.3, 3.4, 3.5, 3.6 In addition to the techniques below, refer also to the section on user control of style. Checkpoint definitions 3.1 Toggle background images. (P1) Checkpoint 3.1 1. Allow configuration not to render background image content. Sufficient techniques 1. The user agent may satisfy this checkpoint with a configuration to not render any images, including background images. However, user agents should satisfy this checkpoint by allowing users to turn off background images alone, independent of other types of images in content. Normative inclusions and exclusions 1. This checkpoint must be satisfied for all implemented image specifications; see the section on conformance profiles. 2. When configured not to render background images, the user agent is not required to retrieve them until the user requests them explicitly. When background images are not rendered, user agents should render a solid background color instead; see checkpoint 4.3 for information about text colors. 3. This checkpoint only requires control of background images for "two-layered renderings", i.e., one rendered background image with all other content rendered "above it". 4. Conformance profile labels: Image. Note: When background images are not rendered, they are considered conditional content. See checkpoint 2.3 for information about providing access to conditional content. Notes and rationale 1. This checkpoint does not address issues of multi-layered renderings and does not require the user agent to change background rendering for multi-layer renderings (refer, for example, to the 'z-index' property in Cascading Style Sheets, level 2 ([CSS2], section 9.9.1). Who benefits 1. Some users with a cognitive disability or color deficiencies who may find it difficult or impossible to read superimposed text or understand other superimposed content. Example techniques 1. If background images are turned off, make available to the user associated conditional content. 2. 
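One way a user agent might implement this configuration is by applying a user-level override rule; a minimal sketch, assuming CSS support, with the fallback color purely illustrative (see checkpoint 4.3 for text colors):

   /* Sketch: suppress background images and substitute a solid background color. */
   * { background-image: none ! important; background-color: white ! important; }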
In CSS, background images may be turned on/off with the 'background' and 'background-image' properties ([CSS2], section 14.2.1). Doing more 1. Allow control of image depth in multi-layer presentations. _________________________________________________________________ 3.2 Toggle audio, video, animated images. (P1) Checkpoint 3.2 1. Allow configuration not to render audio, video, or animated image content, except on explicit user request. Sufficient techniques 1. The user agent may satisfy this checkpoint by making video and animated images invisible and audio silent, but this technique is not recommended. Normative inclusions and exclusions 1. This configuration is required for content rendered without any user interaction (including content rendered on load or as the result of a script), as well as content rendered as the result of user interaction that is not an explicit user request (e.g., when the user activates a link). 2. This checkpoint must be satisfied for all implemented audio, video, and animated image specifications; see the section on conformance profiles. 3. When configured not to render audio, video, or animated images except on explicit user request, the user agent is not required to retrieve them until the user requests them explicitly. 4. Conformance profile labels: Animation, Video, Audio. Note: See guideline 4 for additional requirements related to the control of rendered audio, video, and animated images. When these content types are not rendered, they are considered conditional content. See checkpoint 2.3 for information about providing access to conditional content. Who benefits 1. Some users with a cognitive disability, for whom an excess of visual information (and in particular animated information) might make it impossible to understand parts of content. Also, audio rendered automatically on load may interfere with speech synthesizers. Example techniques: 1. For user agents that hand off content to different rendering engines, the configuration should cause the content not to be handed off, and instead a placeholder rendered. 2. The "silent" or "invisible" solution for satisfying this checkpoint (e.g., by implementing the 'visibility' property defined in section 11.2 of CSS 2 [CSS2]) is not recommended. This solution means that the content is processed, though not rendered, and processing may cause undesirable side effects such as firing events. Or, processing may interfere with the processing of other content (e.g., silent audio may interfere with other sources of sound such as the output of a speech synthesizer). This technique should be deployed with caution. 3. As a placeholder for an animated image, render a motionless image built from the first frame of the animated image. _________________________________________________________________ 3.3 Toggle animated or blinking text. (P1) Checkpoint 3.3 1. Allow configuration to render animated or blinking text content as motionless, unblinking text. Blinking text is text whose visual rendering alternates between visible and invisible, at any rate of change. Sufficient techniques 1. In this configuration, the user must still have access to the same text content, but the user agent may render it in a separate viewport (e.g., for large amounts of streaming text). 2. The user agent may satisfy this checkpoint by always rendering animated or blinking text as motionless, unblinking text. Normative inclusions and exclusions 1. 
This checkpoint must be satisfied for all implemented specifications that support blinking; see the section on conformance profiles. 2. This checkpoint does not apply for blinking and animation effects that are caused by mechanisms that the user agent cannot recognize. 3. User control of blinking effects caused by rapid color changes is addressed by checkpoint 4.3. 4. Conformance profile labels: VisualText. Note: Animation (a rendering effect) differs from streaming (a delivery mechanism). Streaming content might be rendered as an animation (e.g., an animated stock ticker or vertically scrolling text) or as static text (e.g., movie subtitles, which are rendered for a limited time, but do not give the impression of movement). Notes and rationale 1. The definition of blinking text is based on the CSS2 definition of the 'blink' value; refer to [CSS2], section 16.3.1. Who benefits 1. Users with photosensitive epilepsy (for whom flashing content may trigger seizures) and users with some cognitive disorders (for whom the distraction may make the content unusable). Blinking text can also affect screen reader users, since screen readers (in conjunction with speech synthesizers or braille displays) may re-render the text every time it blinks. 2. Configuration is preferred as some users may benefit from blinking effects (e.g., users who are deaf or hard of hearing). However, the priority of this checkpoint was assigned on the basis of requirements unrelated to this benefit. Example techniques 1. The user agent may render the motionless text in a number of ways. Inline is preferred, but for extremely long text, it may be better to render the text in another viewport, easily reachable from the user's browsing context. 2. Allow the user to turn off animated or blinking text through the user agent user interface (e.g., by pressing the Escape key to stop animations). 3. Some sources of blinking and moving text are: + The BLINK element in HTML. The BLINK element is not defined by a W3C specification. + The MARQUEE element in HTML. The MARQUEE element is not defined by a W3C specification. + The 'blink' value of the 'text-decoration' property in CSS ([CSS2], section 16.3.1). + In JavaScript, to control the start and speed of scrolling for a MARQUEE element: o document.all.myBanner.start(); o document.all.myBanner.scrollDelay = 100 _________________________________________________________________ 3.4 Toggle scripts. (P1) Checkpoint 3.4 1. Allow configuration not to execute any executable content (e.g., scripts and applets). Normative inclusions and exclusions 1. This checkpoint does not apply to plug-ins and other programs that are not part of content. Note: Scripts and applets may provide very useful functionality, not all of which causes accessibility problems. Developers should not consider that the user's ability to turn off scripts is an effective way to improve content accessibility; turning off scripts means losing the benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off scripts as a last resort. Notes and rationale 1. Executable content includes scripts, applets, and ActiveX controls. This checkpoint does not apply to plug-ins; they are not part of content. 2. Executable content includes those that run "on load" (e.g., when a document loads into a viewport) and when other events occur (e.g., user interface events). 3. 
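A simple style rule can often stand in for a small script, and a declarative rule remains under the control of the user agent (and the user) even when scripts are turned off. For instance, a hover highlight can be written in CSS rather than with "onmouseover" and "onmouseout" handlers; a minimal sketch, with the colors purely illustrative:

   /* Sketch: a declarative hover effect that requires no script
      ([CSS2], section 5.11.3, the :hover pseudo-class). */
   A:hover { background-color: yellow; color: black; }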
Where possible, authors should encode knowledge in a declarative manner (i.e., through static definitions and expressions) rather than in scripts. Knowledge and behaviors embedded in scripts can be difficult or impossible to extract, which means that user agents are less likely to be able to offer control by the user over the script's effect. For instance, with SVG animation (see chapter 19 of SVG 1.0 [SVG]), one can create animation effects in a declarative manner, using recognizable elements and attributes. Who benefits 1. Some users with photosensitive epilepsy; flickering or flashing, particularly in the 4 to 59 flashes per second (hertz) range, may trigger seizures. Peak sensitivity to flickering or flashing occurs at 20 hertz. Some executable content can cause the screen to flicker. Example techniques 1. Some user agents allow users to turn off scripts in the "Security" part of the user interface. Since some users seeking accessibility features may not think to look there, include the on/off switch in an accessibility part of the user interface as well. Also, include a "How to turn off scripts" entry in the documentation index. 2. Allow users to turn on and off, independently, support for different scripting languages. Note, however, that global configuration for all executable content is likely to be more convenient for some users. 3. Configuration of support for executable content should be decoupled from other user agent features. For instance, the user should not lose style sheet capabilities when executable content is turned off, or the inverse. Related techniques 1. See the section on script techniques. Doing more 1. When support for scripts is turned on, and when the user agent recognizes that there are script alternatives available (e.g., NOSCRIPT in HTML), alert the user to the presence of the alternative (and make it easily available). If a user cannot access the script content, the alert will raise the user's awareness of the alternative, which may be more accessible. 2. While this checkpoint only requires an on/off configuration switch, user agents should allow finer control over executable content. For instance, in addition to the switch, allow users to turn off just input device event handlers, or to turn on and off scripts in a given scripting language only. _________________________________________________________________ 3.5 Toggle automatic content retrieval. (P1) Checkpoint 3.5 1. Allow configuration so that the user agent only retrieves content on explicit user request. Normative inclusions and exclusions 1. When the user chooses not to retrieve (fresh) content, the user agent may ignore that content; buffering is not required. 2. This checkpoint only applies when the user agent (not the server) automatically initiates the request for fresh content. However, the user agent is not required to satisfy this checkpoint for "client-side redirects", i.e., author-specified instructions that a piece of content is temporary and intermediate, and is replaced by content that results from a second request. Note: For example, if the user agent supports automatic content retrieval (e.g., via the HTML meta element), allow configurations such as "Never retrieve content automatically" and "Require confirmation before content retrieval." Notes and rationale 1. Some HTML authors specify automatic content retrieval using a META element with http-equiv="refresh", with the frequency specified by the "content" attribute (seconds between retrievals). 2. 
Note to authors: Use server-side redirects rather than client-side redirects; in practice, server-side redirects do not result in intermediate and temporary content that may disorient the user. Who benefits 1. Some users with a cognitive disability, users with blindness or low vision, and any user who may be disoriented (or simply annoyed) by automatically changing content. Example techniques 1. Alert the user that suppressing the retrieval may lead to loss of information (e.g., packet loss). Doing more 1. When configured not to retrieve content automatically, alert the user of the frequency of retrievals specified in content, and allow the user to retrieve fresh content manually (e.g., by following a link or confirming a prompt). 2. Allow users to specify their own retrieval frequency. 3. Allow at least one configuration for low-frequency retrieval (e.g., every 10 minutes). 4. Retrieve new content without displaying it automatically. Allow the user to view the differences (e.g., by highlighting or filtering) between the currently rendered content and the new content (including no differences). 5. Allow configuration so that a client-side redirect only changes content on explicit user request. This configuration need not apply to client-side redirects specified to occur instantaneously (i.e., after no delay). Client-side redirects may disorient the user, but are less serious than automatic content retrieval since the intermediate state (just before the redirect) is generally not important content that the user might regret missing. Some HTML user agents support client-side redirects authored using a META element with http-equiv="refresh". This use of META is not a normative part of any W3C Recommendation and may pose interoperability problems. 6. Provide a configuration so that when the user navigates "back" through the user agent history to a page with a client-side redirect, the user agent does not re-execute the client-side redirect. References 1. For Web content authors: refer to the HTTP/1.1 specification [RFC2616] for information about using server-side redirect mechanisms (instead of client-side redirects). _________________________________________________________________ 3.6 Toggle images. (P2) Checkpoint 3.6 1. Allow configuration not to render image content. Sufficient techniques 1. The user agent may satisfy this checkpoint by making images invisible, but this technique is not recommended. Normative inclusions and exclusions 1. This checkpoint must be satisfied for all implemented image specifications; see the section on conformance profiles. 2. When configured not to render images, the user agent is not required to retrieve them until the user requests them explicitly. 3. Conformance profile labels: Image. Note: When images are not rendered, they are considered conditional content. See checkpoint 2.3 for information about providing access to conditional content. Notes and rationale 1. This priority of checkpoint 3.2 is higher than the priority of this checkpoint because an excess of moving visual information is likely to be more distracting to some users than an excess of still visual information. Who benefits 1. Some users with a cognitive disability, for whom an excess of visual information might make it difficult to understand parts of content. Related techniques 1. See techniques for checkpoint 3.1. _________________________________________________________________ [next guideline: 4] [review guideline: 3] [previous guideline: 2] [contents] Guideline 4. 
Ensure user control of rendering. Checkpoints: 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14 In addition to the techniques below, refer also to the section on user control of style. Checkpoint definitions for visually rendered text 4.1 Configure text scale. (P1) Checkpoint 4.1 1. Allow global configuration of the scale of visually rendered text content. Preserve distinctions in the size of rendered text as the user increases or decreases the scale. 2. As part of satisfying provision one of this checkpoint, provide a configuration option to override rendered text sizes specified by the author or user agent defaults. 3. As part of satisfying provision one of this checkpoint, offer a range of text sizes to the user that includes at least: + the range offered by the conventional utility available in the operating environment that allows users to choose the text size (e.g., the font size), or + if no such utility is available, the range of text sizes supported by the conventional APIs of the operating environment for drawing text. Sufficient techniques 1. The user agent may satisfy provision one of this checkpoint through a number of mechanisms, including zoom, magnification, and allowing the user to configure a reference size for rendered text (e.g., render text at 36 points unless otherwise specified). For example, for CSS2 [CSS2] user agents, the 'medium' value of the 'font-size' property corresponds to a reference size. Normative inclusions and exclusions 1. The word "scale" is used in this checkpoint to mean the general size of text. 2. The user agent is not required to satisfy this requirement through proportional scaling. What must hold is that if rendered text A is smaller than rendered text B at one value of this configuration setting, then text A will still be smaller than text B at another value of this configuration setting. 3. Conformance profile labels: VisualText. Notes and rationale 1. For example, allow the user to configure the user agent to apply the same font family across Web resources, so that all text is displayed by default using that font family. Or, allow the user to control the text scale dynamically for a given element, e.g., by navigating to the element and zooming in on it. 2. The choice of optimal techniques depends in part on which markup language is being used. For instance, HTML user agents may allow the user to change the font size of a particular piece of text (e.g., by using CSS user style sheets) independent of other content (e.g., images). Since the user agent can reflow the text after resizing the font, the rendered text will become more legible without, for example, distorting bitmap images. On the other hand, some languages, such as SVG, do not allow text reflow, which means that changes to font size may cause rendered text to overlap with other content, reducing accessibility. SVG is designed to scale, making a zoom functionality the more natural technique for SVG user agents satisfying this checkpoint. 3. The primary intention of this checkpoint is to allow users with low vision to increase the size of text. Full configurability includes the choice of very small text sizes that may be available, though this is not considered by the User Agent Accessibility Guidelines Working Group to be part of the priority 1 requirement. This checkpoint does not include a "lower bound" (above which text sizes would be required) because of how users' needs may vary across writing systems and hardware. Who benefits 1. 
Users with low vision, who benefit from the ability to increase the text scale. Note that some users may also benefit from the ability to choose small font sizes (e.g., users of screen readers who wish to have more content per screen so they have to scroll less frequently). People who use captions may need to change the text scale. Example techniques 1. Inherit text size information from user preferences specified for the operating environment. 2. Use operating environment magnification features. 3. The ratios of the sizes should be compressed at large text sizes, as the same number of different sizes must be packed into a smaller dynamic range. 4. Vectorial formats such as Scalable Vector Graphics specification [SVG] are designed to scale. For bitmap fonts, the user agent may need to round off font sizes when the user increases or decreases the scale. 5. Implement the 'font-size' property in CSS ([CSS2], section 15.2.4). 6. In Windows, the ChooseFont function in the Comdlg32 library creates the conventional utility that allows users to choose text (font) size. The DrawText API is the lower-level API for drawing text. Doing more 1. Allow the user to configure the text size on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations. 2. Allow the user to configure the text size differently for different scripts (i.e., writing systems). _________________________________________________________________ 4.2 Configure font family. (P1) Checkpoint 4.2 1. Allow global configuration of the font family of all visually rendered text content. 2. As part of satisfying provision one of this checkpoint, provide a configuration option to override font families specified by the author or by user agent defaults. 3. As part of satisfying provision one of this checkpoint, offer a range of font families to the user that includes at least: + the range offered by the conventional utility available in the operating environment that allows users to choose the font family, or + if no such utility is available, the range of font families supported by the conventional APIs of the operating environment for drawing text. Sufficient techniques 1. For text that cannot be rendered properly using the user's preferred font family, the user agent should substitute an alternative font family. Normative inclusions and exclusions 1. Conformance profile labels: VisualText. Note: For example, allow the user to specify that all text is to be rendered in a particular sans-serif font family. Who benefits 1. Users with low vision or some users with a cognitive disability or reading disorder. Some people require the ability to change the font family of text in order to read it. People who use captions may also need to change the font family. Example techniques 1. Inherit font family information from user preferences specified for the operating environment. 2. Implement the 'font-family' property in CSS ([CSS2], section 15.2.2). 3. Allow the user to override author-specified font families with differing levels of detail. For instance, use font A in place of any sans-serif font and font B in place of any serif font. 4. In Windows, the ChooseFont function in the Comdlg32 library creates the conventional utility that allows users to choose font families. Doing more 1. Allow the user to configure font families on an element level (i.e., more precisely than globally). User style sheets allow such detailed configurations. 2. 
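As an illustration, a user style sheet might assign different preferred font families to different natural languages; a minimal sketch, in which the language codes and family names are purely illustrative:

   /* Sketch: per-language font family preferences in a user style sheet. */
   :lang(ja) { font-family: "MS Gothic", sans-serif ! important; }
   :lang(ru) { font-family: Arial, sans-serif ! important; }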
The global imposition of a Latin-only font family, for example, might lead to problems rendering text in non-Latin writing systems. User agents should therefore allow the user to configure font families on a per script group basis (as many user agents, for example, allow users to specify default font families for Unicode ranges). _________________________________________________________________ 4.3 Configure text colors. (P1) Checkpoint 4.3 1. Allow global configuration of the foreground and background color of all visually rendered text content. 2. As part of satisfying provision one of this checkpoint, provide a configuration option to override foreground and background colors specified by the author or user agent defaults. 3. As part of satisfying provision one of this checkpoint, offer a range of colors to the user that includes at least: + the range offered by the conventional utility available in the operating environment that allows users to choose colors, or + if no such utility is available, the range of colors supported by the conventional APIs of the operating environment for specifying colors. Normative inclusions and exclusions 1. Color includes black, white, and grays. 2. Conformance profile labels: VisualText. Note: User configuration of foreground and background colors may inadvertently lead to the inability to distinguish ordinary text from selected text or focused text. See checkpoint 10.2 for more information about highlight styles. Who benefits 1. Users with color deficiencies and some users with a cognitive disability. People who use captions may need to change the text color. Example techniques 1. Inherit foreground and background color information from user preferences specified for the operating environment. 2. Implement the 'color' and 'border-color' properties in CSS 2 ([CSS2], sections 14.1 and 8.5.2, respectively). 3. Implement the 'background-color' property (and other background properties) in CSS 2 ([CSS2], section 14.2.1). 4. SMIL does not have a global property for "background color", but allows specification of background color by region (refer, for example, to the definition of the 'background-color' attribute defined in section 3.3.1 of SMIL 1.0 [SMIL]). In the case of SMIL, the user agent would satisfy this checkpoint by applying the user's preferred background color to all regions (and to all root-layout elements as well). SMIL 1.0 does not have a way to specify the foreground color of text, so that portion of the checkpoint would not apply. 5. In SVG 1.0 [SVG] the fill (section 11.3) and stroke (section 11.4) properties are used to paint foreground colors. 6. In Windows, the ChooseColor function in the Comdlg32 library creates the conventional utility that allows users to choose colors. Doing more 1. Allow the user to specify minimal contrast between foreground and background colors, adjusting colors dynamically to meet those requirements. _________________________________________________________________ Checkpoint definitions for multimedia presentations and other presentations that change continuously over time 4.4 Slow multimedia. (P1) Checkpoint 4.4 1. Allow the user to slow the presentation rate of rendered audio and animation content (including video and animated images). 2. As part of satisfying provision one of this checkpoint, for a visual track, provide at least one setting between 40% and 60% of the original speed. 3. 
As part of satisfying provision one of this checkpoint, for a prerecorded audio track including audio-only presentations, provide at least one setting between 75% and 80% of the original speed. 4. When the user agent allows the user to slow the visual track of a synchronized multimedia presentation to between 100% and 80% of its original speed, synchronize the visual and audio tracks (per checkpoint 2.6). Below 80%, the user agent is not required to render the audio track. Normative inclusions and exclusions 1. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect. Purely stylistic effects include background sounds, decorative animated images, and effects caused by style sheets. 2. Conformance profile labels: Animation, Audio. Note: The style exception of this checkpoint is based on the assumption that authors have satisfied the requirements of the "Web Content Accessibility Guidelines 1.0" [WCAG10] not to convey information through style alone (e.g., through color alone or style sheets alone). Notes and rationale 1. Slowing one track (e.g., video) may make it harder for a user to understand another synchronized track (e.g., audio), but if the user can understand content after two passes, this is better than not being able to understand it at all. 2. Some formats (e.g., streaming formats) might not enable the user agent to slow down playback and would thus be subject to applicability. Who benefits 1. Some users with a learning or cognitive disability, or some users with newly acquired sensory limitations (such as a person who is newly blind and learning to use a screen reader). Users who have beginning familiarity with a natural language may also benefit. Example techniques 1. When changing the rate of audio, avoid pitch distortion. 2. HTML 4 [HTML4], background animations may be specified with the deprecated background attribute. 3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback rate (as well as forward or reverse direction) of any animation. 4. Authors sometimes specify background sounds with the "bgsound" attribute. Note: This attribute is not part of HTML 4 [HTML4]. Doing more 1. Allowing the user to speed up audio is also useful. For example, some users with serial access to content or who navigate sequentially benefit from the ability to speed up audio. 2. Provide the same functionality for audio and animations whose recognized role is to create a purely stylistic effect. References 1. Refer to variable playback speed techniques used for Digital Talking Books [TALKINGBOOKS]. _________________________________________________________________ 4.5 Start, stop, pause, and navigate multimedia. (P1) Checkpoint 4.5 1. Allow the user to stop, pause, and resume rendered audio and animation content (including video and animated images) that last three or more seconds at their default playback rate. 2. Allow the user to navigate efficiently within audio and animations (including video and animated images) that last three or more seconds at their default playback rate. Sufficient techniques 1. The user agent may satisfy the navigation requirement of provision two of this checkpoint through forward and backward serial access techniques (e.g., advance five seconds), or direct access techniques (e.g., play starting at the 10-minute mark), or some combination. Normative inclusions and exclusions 1. 
When serial access techniques are used to satisfy provision two of this checkpoint, the user agent is not required to play back content during advance or rewind (though doing so may help orient the user). 2. When the user pauses a real-time audio or animation, the user agent may discard packets that continue to arrive during the pause. 3. This checkpoint applies to content that is either rendered automatically (e.g., on load) or on explicit request from the user. 4. The user agent is not required to satisfy this checkpoint for audio and animations whose recognized role is to create a purely stylistic effect; see checkpoint 4.4 for more information about what constitutes a stylistic effect. 5. Conformance profile labels: Animation, Audio. Note: The lower bound of three seconds is part of this checkpoint since control is not required for brief audio and animation content, such as short clips or beeps. Respect synchronization cues per checkpoint 2.6. Notes and rationale 1. Some formats (e.g., streaming formats) might not enable the user agent to fast forward or rewind content and would thus be subject to applicability. 2. For some streaming media formats, the user agent might not be able to offer some functionalities (e.g., fast forward) when the content is being delivered over the Web in real time. However, the user agent is expected to offer these functionalities for content (in the same format) that is fully available, for example on the user's computer. 3. Playback during serial advance or rewind is not required. For example, the user agent is not required to play an animation at double speed during a continuous fast forward. Similarly, when the user fast forwards or rewinds an animation, the user agent is not required to play back a synchronized audio track. Who benefits 1. Some users with a cognitive disability. Example techniques 1. Serial access and sequential navigation techniques include, for example, rewind in large or small time increments, and forward to the next audio track. Direct access techniques include, for example, access to visual track number 7, or to the 10-minute mark. The best choice of serial, sequential, or direct access techniques will depend on the nature of the content being rendered. 2. If buttons are used to control advance and rewind, make the advance/rewind distances proportional to the time the user activates the button. After a certain delay, accelerate the advance/rewind. 3. The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the speed attribute, which can be used to change the playback direction (forward or reverse) of any animation. See also the accelerate and decelerate attributes. 4. Some content lends itself to different forward and reverse functionalities. For instance, compact disk players often let listeners fast forward and rewind, but also skip to the next or previous song. Doing more 1. Allow fine control over advance and rewind functionalities. Some users with a physical disability will find it useful to be able to advance or rewind the presentation in (configurable) increments. 2. The user agent should display time codes or otherwise represent the position in content to orient the user. 3. Apply techniques for changing audio speed without introducing distortion. 4. Alert the user whenever pausing the user agent may lead to packet loss. 5. Provide the same functionality for audio and animations whose recognized role is to create a purely stylistic effect. 6.
Allow users to insert temporal bookmarks in a presentation; Home Page Reader [HPR] provides this feature. References 1. Refer to fast forward and rewind techniques used for Digital Talking Books [TALKINGBOOKS]. _________________________________________________________________ 4.6 Do not obscure captions. (P1) Checkpoint 4.6 1. For graphical viewports, allow configuration so that captions synchronized with a visual track in content are not obscured by it. Sufficient techniques 1. Render captions "on top" of the visual track and, as part of satisfying checkpoint 4.3, allow the user to configure the foreground and background color of the rendered caption text. 2. Render captions and video in separate viewports. Notes and rationale 1. Rendering captions in a separate viewport may make it easier for users with screen readers to access the captions. 2. Traditionally, caption text is rendered with a solid background color. Research shows that some users prefer white lettering on a black background. Who benefits 1. Some users with a cognitive disability or with color deficiencies, who may need to configure rendering to make captions more legible. Example techniques 1. For the purpose of applying this clause, SMIL 1.0 [SMIL] and SMIL 2.0 [SMIL20] user agents should recognize as captions any media object whose reference from SMIL is affected by the 'system-captions' test attribute. Doing more 1. Allow the user to turn off rendering of captions. 2. Allow users to position captions. Some users (e.g., users with low vision and a hearing disability, or users who are not fluent in the language of an audio track) may need captions and video to have a particular spatial relation to each other, even if this results in partially obscured content. Positioning techniques include the following: + User agents should implement the positioning features of the employed markup or style sheet language. Even when a markup language does not specify a positioning mechanism, when a user agent can recognize distinct text transcripts, collated text transcripts, or captions, the user agent should allow the user to reposition them. User agents are not expected to allow repositioning when the captions cannot be separated from other media (e.g., the captions are part of the visual track). + Implement the CSS 2 'position' property ([CSS2], section 9.3.1). + Allow the user to choose whether captions appear at the bottom or top of the video area or in other positions. Currently authors may place captions overlying the video or in a separate box. Captions may prevent users from viewing other information in the video or on other parts of the screen, making it necessary to move the captions in order to view all content at once. In addition, some users will find captions easier to read if they can place them in a location best suited to their reading style. + Allow users to configure a general preference for caption position and to be able to fine-tune specific cases. For example, the user may want the captions to be in front of and below the rest of the presentation. + Allow the user to drag and drop the captions to a place on the screen. To ensure device-independence, allow the user to enter the screen coordinates of one corner of the caption. + Do not require users to edit the source code of the presentation to achieve the desired effect.
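+ Where the user agent renders recognized captions in a distinct region, absolute positioning is one implementation path; a minimal sketch using the 'position' property ([CSS2], section 9.3.1), in which the region's class name and offsets are purely illustrative:
   /* Sketch: place a recognized caption region along the bottom of the video area. */
   .captions { position: absolute; bottom: 0; left: 0; width: 100%;
               background-color: black; color: white; }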
+ Allow the user to position all parts of a presentation rather than trying to identify captions specifically (i.e., solving the problem generally may be easier than for captions alone). _________________________________________________________________ Checkpoint definitions for audio volume control 4.7 Global volume control. (P1) Checkpoint 4.7 1. Allow global configuration of the volume of all rendered audio, with an option to override audio volumes specified by the author or user agent defaults. 2. As part of satisfying provision one of this checkpoint, allow the user to choose zero volume (i.e., silent). Normative inclusions and exclusions 1. This checkpoint must be satisfied for all implemented specifications that produce sound; see the section on conformance profiles. 2. Conformance profile labels: Audio. 3. Conformance detail: For both content and user agent. Note: User agents should allow configuration of volume through available operating environment mechanisms. Example techniques 1. Use audio control mechanisms provided by the operating environment. Control of volume mix is particularly important, and the user agent should provide easy access to those mechanisms provided by the operating environment. 2. Implement the CSS 2 'volume' property ([CSS2], section 19.2). 3. Implement the 'display', 'play-during', and 'speak' properties in CSS 2 ([CSS2], sections 9.2.5, 19.6, and 19.5, respectively). 4. Authors sometimes specify background sounds with the "bgsound" attribute. Note: This attribute is not part of HTML 4 [HTML4]. Who benefits 1. Users who are hard of hearing or who rely on audio and synthesized speech rendering. Users in a noisy environment will also benefit. References 1. Refer to guidelines for audio characteristics used for Digital Talking Books [TALKINGBOOKS]. _________________________________________________________________ 4.8 Independent volume control. (P1) Checkpoint 4.8 1. Allow independent control of the volumes of rendered audio content synchronized to play simultaneously. Normative inclusions and exclusions 1. The user control required by this checkpoint includes the ability to override author-specified volumes for the relevant sources of audio. 2. The user agent is not required to satisfy this checkpoint for audio whose recognized role is to create a purely stylistic effect; see checkpoint 4.4 for more information about what constitutes a stylistic effect. 3. Conformance profile labels: Audio. Note: The user agent should satisfy this checkpoint by allowing the user to control independently the volumes of all audio sources (e.g., by implementing a general audio mixer type of functionality). See checkpoint 4.10 for information about controlling the volume of synthesized speech. Notes and rationale 1. Sounds that play at different times are distinguishable and therefore independent control of their volumes is not required by this checkpoint (since volume control required by checkpoint 4.7 suffices). 2. There are at least three good reasons for strongly recommending that the volume of all audio sources be independently configurable, not just those synchronized to play simultaneously: 1. Sounds that are not synchronized may end up playing simultaneously. 2. If the user cannot anticipate when a sound will play, the user cannot adjust the global volume control at appropriate times to affect this sound. 3. It is extremely inconvenient to have to adjust the global volume frequently. 3. 
Sounds specified by the author to play "on document load" are likely to overlap with each other. If they continue to play, they are also likely to overlap with subsequent sounds played manually or automatically. Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. Related techniques 1. For each source of audio, allow the user to control the volume using the same user interface used to satisfy the requirements of checkpoint 4.5. Doing more 1. Provide the same functionality for audio whose recognized role is to create a purely stylistic effect. _________________________________________________________________ Checkpoint definitions for synthesized speech rendering See also techniques for synthesized speech rendering. 4.9 Configure synthesized speech rate. (P1) Checkpoint 4.9 1. Allow configuration of the synthesized speech rate, according to the full range offered by the speech synthesizer. Normative inclusions and exclusions 1. Conformance profile labels: Speech. Note: The range of synthesized speech rates offered by the speech synthesizer may depend on natural language. Example techniques 1. For example, many speech synthesizers offer a range for English speech of 120 - 500 words per minute or more. The user should be able to increase or decrease the rendering rate in convenient increments (e.g., in large steps, then in small steps for finer control). 2. User agents may allow different synthesized speech rate configurations for different natural languages. For example, this may be implemented with CSS2 style sheets using the :lang pseudo-class ([CSS2], section 5.11.4). 3. Use synthesized speech mechanisms provided by the operating environment. 4. Implement the CSS 2 'speech-rate' property ([CSS2], section 19.8). Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. Doing more 1. Content may include commands that are interpreted by a speech synthesizer to change the rate (or control other synthesized speech parameters). This checkpoint does not require the user agent to allow the user to override author-specified rate changes (e.g., by transforming or otherwise stripping out these commands before passing on the content to the speech synthesizer). Speech synthesizers themselves may allow user override of author-specified rate changes. For such synthesizers, the user agent should ensure access to this feature. _________________________________________________________________ 4.10 Configure synthesized speech volume. (P1) Checkpoint 4.10 1. Allow control of the synthesized speech volume, independent of other sources of audio. Normative inclusions and exclusions 1. The user control required by this checkpoint includes the ability to override author-specified synthesized speech volume. 2. Conformance profile labels: Speech. Note: See checkpoint 4.8 for information about independent volume control of different sources of audio. Example techniques 1. The user agent should allow the user to make synthesized speech louder and softer than other audio sources. 2. Use synthesized speech mechanisms provided by the operating environment. 3. Implement the CSS 2 'volume' property ([CSS2], section 19.2). Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. _________________________________________________________________ 4.11 Configure synthesized speech characteristics. (P1) Checkpoint 4.11 1.
Allow configuration of synthesized speech characteristics according to the full range of values offered by the speech synthesizer. Normative inclusions and exclusions 1. Conformance profile labels: Speech. Note: Some speech synthesizers allow users to choose values for synthesized speech characteristics at a higher abstraction layer, i.e., by choosing from preset options that group several characteristics. Some typical options one might encounter include: "voice" (e.g., "adult male voice", "female child voice", "robot voice"), "pitch", and "stress". Ranges for values may vary among speech synthesizers. Example techniques 1. Use synthesized speech mechanisms provided by the operating environment. 2. One example of a synthesized speech API is Microsoft's Speech Application Programming Interface [SAPI]. 3. ViaVoice control panel for configuration of voice characteristics This image shows how ViaVoice [VIAVOICE] allows users to configure voice characteristics of the speech synthesizer. Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. References 1. For information about these synthesized speech characteristics, refer to descriptions in section 19.8 of Cascading Style Sheets Level 2 [CSS2]. _________________________________________________________________ 4.12 Specific synthesized speech characteristics. (P2) Checkpoint 4.12 1. Allow configuration of synthesized speech pitch. Pitch refers to the average frequency of the speaking voice. 2. Allow configuration of synthesized speech pitch range. Pitch range specifies a variation in average frequency. 3. Allow configuration of synthesized speech stress. Stress refers to the height of "local peaks" in the intonation contour of the voice. 4. Allow configuration of synthesized speech richness. Richness refers to the richness or brightness of the voice. Normative inclusions and exclusions 1. Conformance profile labels: Speech. Note: This checkpoint is more specific than checkpoint 4.11. It requires support for the voice characteristics listed in the provisions of this checkpoint. Definitions for these characteristics are based on descriptions in section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; refer to that specification for additional informative descriptions. Some speech synthesizers allow users to choose values for synthesized speech characteristics at a higher abstraction layer, for example, by choosing from preset options distinguished by "gender", "age", or "accent". Ranges of values may vary among speech synthesizers. Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. Some users with a hearing disability may also require control over these parameters. Related techniques 1. See the techniques for checkpoint 4.11. _________________________________________________________________ 4.13 Configure synthesized speech features. (P2) Checkpoint 4.13 1. Provide support for user-defined extensions to the synthesized speech dictionary. 2. Provide support for spell-out: where text is spelled one character at a time, or according to language-dependent pronunciation rules. 3. Allow at least two configurations for speaking numerals: one where numerals are spoken as individual digits, and one where full numbers are spoken. 4. Allow at least two configurations for speaking punctuation: one where punctuation is spoken literally, and one where punctuation is rendered as natural pauses.
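For user agents that implement the CSS 2 aural properties ([CSS2], chapter 19), the last three provisions correspond roughly to the 'speak', 'speak-numeral', and 'speak-punctuation' properties; a minimal sketch of one such configuration expressed as a user style sheet, assuming aural CSS support:

   /* Sketch: spell out abbreviations, speak numerals as digits,
      and speak punctuation literally. */
   ABBR, ACRONYM { speak: spell-out; }
   * { speak-numeral: digits; speak-punctuation: code; }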
Normative inclusions and exclusions 1. Conformance profile labels: Speech. Note: Definitions for the functionalities listed in the provisions of this checkpoint are based on descriptions in section 19 of the Cascading Style Sheets Level 2 Recommendation [CSS2]; refer to that specification for additional informative descriptions. Example techniques 1. ViaVoice control panel for editing the user dictionary This image shows how ViaVoice [VIAVOICE] allows users to add entries to the user's personal dictionary. Who benefits 1. Users (e.g., with blindness or low vision) who rely on audio and synthesized speech rendering. References 1. For information about these functionalities, refer to descriptions in section 19.8 of Cascading Style Sheets Level 2 [CSS2]. _________________________________________________________________ Checkpoint definitions related to style sheets 4.14 Choose style sheets. (P1) Checkpoint 4.14 1. Allow the user to choose from and apply alternative author style sheets (such as linked style sheets). 2. Allow the user to choose from and apply at least one user style sheet. 3. Allow the user to turn off (i.e., ignore) author and user style sheets. Normative inclusions and exclusions 1. This checkpoint only applies to user agents that support style sheets. Note: By definition, the user agent's default style sheet is always present, but may be overridden by author or user styles. Developers should not consider that the user's ability to turn off author and user style sheets is an effective way to improve content accessibility; turning off style sheet support means losing the many benefits they offer. Instead, developers should provide users with finer control over user agent or content behavior known to raise accessibility barriers. The user should only have to turn off author and user style sheets as a last resort. Example techniques 1. For HTML [HTML4], make available "class" and "id" information so that users can override styles. 2. Implement user style sheets. 3. Implement the "!important" semantics of CSS 2 ([CSS2], section 6.4.2). Who benefits 1. Any user with a disability who needs to override the author's style sheets or user agent default style sheets in order to have control over style and presentation, or who needs to tailor the style of rendered content to meet their own needs. Doing more 1. Allowing the user to select more than one style sheet may be a useful way to implement other requirements of this document. Also, if the user agent offers several default style sheets, the user agent can also use these to satisfy some requirements. By making available alternative style sheets on the Web, people can thus improve the accessibility of deployed user agents. 2. Inform the user (e.g., through a discreet flag in the user interface) when alternate author style sheets are available. Allow the user to easily choose from among them. References 1. Chapter 7 of the CSS1 Recommendation [CSS1] recommends that the user be able to specify user style sheets, and that the user be able to turn off individual style sheets. 2. For information about how alternative style sheets are specified in HTML 4 [HTML4], refer to section 14.3.1. 3. For information about how alternative style sheets are specified in XML 1.0 [XML], refer to "Associating Style Sheets with XML documents Version 1.0" [XMLSTYLE]. _________________________________________________________________ [next guideline: 5] [review guideline: 4] [previous guideline: 3] [contents] Guideline 5. 
Ensure user control of user interface behavior. Checkpoints: 5.1, 5.2, 5.3, 5.4, 5.5 Checkpoint definitions 5.1 No automatic content focus change. (P2) Checkpoint 5.1 1. Allow configuration so that if a viewport opens without explicit user request, neither its content focus nor its user interface focus automatically becomes the current focus. Sufficient techniques 1. To satisfy provision one of this checkpoint, configuration is preferred, but is not required if the content focus can only ever be moved on explicit user request. Who benefits 1. Some users with a cognitive disability, blindness, or low vision, who may be disoriented if the focus moves automatically (and unexpectedly) to a new viewport. These users may also find it difficult to restore the previous point of regard. Example techniques 1. Allow the user to configure how the current focus changes when a new viewport opens. For instance, the user might choose between these two options: 1. Do not change the focus when a viewport opens, but alert the user (e.g., with a beep, flash, and text message on the status bar). Allow the user to navigate directly to the new window upon demand. 2. Change the focus when a window opens and use a subtle alert (e.g., a beep, flash, and text message on the status bar) to indicate that the focus has changed. 2. If a new viewport or prompt appears but focus does not move to it, alert assistive technologies (per checkpoint 6.6) so that they may discreetly inform the user. 3. When a viewport is duplicated, the focus in the new viewport should initially be the same as the focus in the original viewport. Duplicate viewports allow users to navigate content (e.g., in search of some information) in one viewport while allowing the user to return with little effort to the point of regard in the duplicate viewport. There are other techniques for accomplishing this (e.g., "registers" in Emacs). 4. In JavaScript, the focus may be changed with myWindow.focus(); 5. For user agents that implement CSS 2 [CSS2], the following rule will generate a message to the user at the beginning of link text for links that are meant to open new windows when followed: A[target=_blank]:before { content:"Open new window" } Doing more 1. The user agent may also allow configuration about whether the pointing device moves automatically to windows that open without an explicit user request. _________________________________________________________________ 5.2 Keep viewport on top. (P2) Checkpoint 5.2 1. For graphical user interfaces, allow configuration so that the viewport with the current focus remains "on top" of all other viewports with which it overlaps. Notes and rationale 1. In most operating environments, the viewport with focus is generally the viewport "on top". In some environments, it's possible to allow a viewport that is not on top to have focus. Who benefits 1. Some users with a cognitive disability, who may find it disorienting if the viewport being viewed unexpectedly changes. Doing more 1. The user agent may also allow configuration about whether the viewport designated by the pointing device always remains on top. 2. When configured to keep the viewport with current focus on top, discreetly alert the user when another viewport opens. _________________________________________________________________ 5.3 Manual viewport open only. (P2) Checkpoint 5.3 1. Allow configuration so that viewports only open on explicit user request. 2. 
When configured per provision one of this checkpoint, instead of opening a viewport automatically, alert the user and allow the user to open it with an explicit request (e.g., by confirming a prompt or following a link generated by the user agent). 3. Allow the user to close viewports. Sufficient techniques 1. To satisfy provision one of this checkpoint, configuration is preferred, but is not required if viewports can only ever open on explicit user request. Normative inclusions and exclusions 1. If a viewport (e.g., a frame set) contains other viewports, these requirements only apply to the outermost container viewport. 2. User creation of a new viewport (e.g., empty or with a new resource loaded) through the user agent's user interface constitutes an explicit user request. Note: Generally, viewports open automatically as the result of instructions in content. See also checkpoint 5.1 (for control over changes of focus when a viewport opens) and checkpoint 6.6 (for programmatic notification of changes to the user interface). Who benefits 1. Some users with serial access to content or who navigate sequentially, who may find navigation of multiple open viewports difficult. Also, some users with a cognitive disability may be disoriented by multiple open viewports. Example techniques 1. For HTML [HTML4], allow the user to control the process of opening a document in a new "target" frame or a viewport created by a script. For example, for target="_blank", open the window according to the user's preference. 2. For SMIL [SMIL], allow the user to control viewports created with the "new" value of the "show" attribute. 3. In JavaScript, windows may be opened with: + myWindow.open("example.com", "My New Window"); + myWindow.showHelp(URI); Doing more 1. Allow configuration to prompt the user to confirm (or cancel) closing any viewport that starts to close without explicit user request. For instance, in JavaScript, windows may be closed with myWindow.close();. Some users with a cognitive disability may find it disorienting if a viewport closes automatically. On the other hand, some users with a physical disability may wish these same viewports to close automatically (rather than being required to close them manually). _________________________________________________________________ 5.4 Selection and focus in viewport. (P2) Checkpoint 5.4 1. Ensure that when a viewport's selection or content focus changes, it is at least partially in the viewport after the change. Normative inclusions and exclusions 1. Conformance profile labels: Selection. Note: For example, if users navigating links move to a portion of the document outside a graphical viewport, the viewport should scroll to include the new location of the focus. Or, for users of audio viewports, allow configuration to render the selection or focus immediately after the change. Who benefits 1. Users who may be disoriented by a change in focus or selection that is not reflected in the viewport. This includes some users with blindness or low vision, and some users with a cognitive disability. Example techniques 1. There are times when the content focus changes (e.g., link navigation) and the viewport should track it. There are other times when the viewport changes position (e.g., scrolling) and the content focus should follow it. In either case, the focus (or selection) should be in the viewport after the change. 2. If a search causes the selection or focus to change, ensure that the found content is not hidden by the search prompt. 3. 
When the content focus changes, register the newly focused element in the navigation sequence; sequential navigation should start from there. 4. Unless viewports have been coordinated, changes to selection or focus in one viewport should not affect the selection or focus in another viewport. 5. The persistence of the selection or focus in the viewport will vary according to the type of viewport. For any viewport with persistent rendering (e.g., a two-dimensional graphical or tactile viewport), the focus or selection should remain in the viewport after the change until the user changes the viewport. For any viewport without persistent rendering (e.g., an audio viewport), once the focus or selection has been rendered, it will no longer be "in" the viewport. In a pure audio environment, the whole persistent context is in the mind of the user. In a graphical viewport, there is a large shared buffer of dialog information in the display. In audio, there is no comparable persistent record of the interaction that is maintained by the computer and accessed, at will, by the user. The audio rendering of content requires the passage of time, which is a scarce resource. Consequently, the flow of content through the viewport has to be managed more carefully, notably when the content was designed primarily for graphical rendering. 6. If the rendered selection or focus does not fit entirely within the limits of a graphical viewport, then: 1. if the region actually displayed prior to the change was within the selection or focus, do not move the viewport. 2. otherwise, if the region actually displayed prior to the change was not within the newly selected or focused content, move to display at least the initial fragment of such content. _________________________________________________________________ 5.5 Confirm form submission. (P2) Checkpoint 5.5 1. Allow configuration to prompt the user to confirm (or cancel) any form submission. Sufficient techniques 1. Configuration is preferred, but is not required if forms can only ever be submitted on explicit user request. Note: Examples of automatic form submission include: script-driven submission when the user changes the state of a particular form control associated with the form (e.g., via the pointing device), submission when all fields of a form have been filled out, and submission when a "mouseover" or "change" event occurs. Notes and rationale 1. Many user agents offer this configuration as a security feature. Example techniques 1. In HTML 4 [HTML4], form submit controls are the INPUT element (section 17.4) with type="submit" and type="image", and the BUTTON element (section 17.5) with type="submit". 2. XForms 1.0 [XFORMS10] relies on XML events. User agents can register event handlers for submit events. 3. Allow the user to configure script-based submission (e.g., form submission accomplished through an "onChange" event). For instance, allow these settings: 1. Do not allow script-based submission. 2. Allow script-based submission after confirmation from the user. 3. Allow script-based submission without prompting the user (but not by default). 4. Authors may write scripts that submit a form when particular events occur (e.g., "onchange" events). Be watchful for this type of code, which may disorient users: