W3C Working Draft 30-Oct-1998
This document provides guidelines to user agent manufacturers (producers of browsers, players, etc.) for making their products more accessible to people with disabilities and for increasing usability for all users.
This document is part of a series of accessibility documents published by the W3C Web Accessibility Initiative.
This is a W3C Working Draft for review by W3C Members and other interested parties. It is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress". This is work in progress and does not imply endorsement by, or the consensus of, either W3C or Members of the WAI User Agent (UA) Working Group.
This document has been produced as part of the W3C WAI Activity, and is intended as a draft of a Proposed Recommendation for how to improve user agent accessibility. User agents include browsers (graphic, text, voice, etc.), multimedia players, and assistive technologies such as screen readers, screen magnifiers, and voice input software.
The goals of the WAI-UA Working Group are discussed in the WAI UA charter. A list of the current Working Group members is available.
This document is available in the following formats:
In case of a discrepancy between the various formats of the specification, http://www.w3.org/WAI/UA/WD-WAI-USERAGENT-19981030/wai-useragent.html is considered the definitive version.
Please send comments about this document to the public mailing list: firstname.lastname@example.org.
The guidelines in this document are meant to guide user agent developers and vendors as they design their products. Each guideline is an abstract principle of accessibility, stated from the user's perspective. User agent developers are not expected to "implement" the guidelines; rather, they are meant to follow them. User agent developers are expected to implement the techniques described in detail in the accompanying techniques document. The techniques are also listed in this document, organized according to the principles that motivated them (the guidelines).
This document organizes the guidelines according to the following general principles of accessible user agent design:
All users must be able to access all functionalities of a user agent. Users access these functionalities through controls (menus, buttons, keyboard shortcuts, etc.). User interfaces must have redundant controls for the same functionality (e.g., menu, mouse, and keyboard equivalents) and must be operable without a pointing device. Redundancy includes offering visual as well as aural representations of a control. User agents must also allow users to customize and configure the controls to meet their needs. Controls should be arranged in a manner consistent with their importance.
In order to guide user agent developers more effectively, each guideline in this document is assigned a priority. Some guidelines are more important to follow than others, either because they help a wide spectrum of users or because they are critical to users with specific disabilities.
This section defines terms used in this document.
A document may be seen as a hierarchy of elements. Elements are defined by a language specification (e.g., HTML 4.0 or an XML application). Each element may have content, which generally contributes to the document's content. Elements may also have attributes that take values. Some attributes are integral to document accessibility (e.g., the "alt", "title", and "longdesc" attributes in HTML).
An element's rendered content is that which a user agent renders for the element. Often, this is what lies between the element's start and end tags. However, some elements cause external data to be rendered (e.g., the IMG element in HTML). At times, a browser may render the value of an attribute (e.g., "alt", "title" in HTML) in place of the element's content. Rendering is not limited to only visual displays, but can also include audio (speech and sound) and tactile displays (Braille and haptic displays).
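The fallback described above can be sketched as follows. This is an illustrative sketch, not an algorithm from this specification: the attribute names ("alt", "title") are real HTML, but the function and the data shapes are hypothetical, standing in for whatever internal representation a user agent uses.

```python
def text_for_image(attributes):
    """Return the text a non-graphical renderer might speak or display
    for an HTML IMG element: prefer "alt", fall back to "title", and
    only then to a generic placeholder."""
    for name in ("alt", "title"):
        value = attributes.get(name, "").strip()
        if value:
            return value
    return "[image]"  # generic placeholder when the author supplied no text

print(text_for_image({"src": "logo.png", "alt": "Company logo"}))  # Company logo
print(text_for_image({"src": "logo.png"}))                         # [image]
```

A real user agent would of course draw these values from its parsed document tree rather than a dictionary, but the order of preference is the point of the sketch.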
A user agent renders a document by applying formatting algorithms and style information to the document's elements. Formatting depends on a number of factors, including where the document is rendered: on screen, paper, through speakers, a braille device, a mobile device, etc. Style information (e.g., fonts, colors, voice inflection, etc.) may come from the elements themselves (e.g., certain style attributes in HTML), from style sheets, or from user agent settings. For the purposes of these guidelines, each formatting or style option is governed by a property and each property may take one value from a set of legal values.
The value given to a property by a user agent when it is started up is called the property's default value. User agents may allow users to change default values through a variety of mechanisms (e.g., the user interface, style sheets, initialization files, etc.).
Once the user agent is running, the value of a property for a given document or part of a document may be changed from the default value. The value of the property at a given moment is called its current value. Note that changes in the current value of a property do not change its default value.
Current values may come from documents, style sheets, scripts, or the user interface. Values that come from documents, their associated style sheets, or via a server are called author styles. Values that come from user interface settings, user style sheets, or other user interactions are called user styles.
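One way to picture the relationship between default, author, and user values is the following minimal sketch. It is an assumption for illustration only: it gives user styles precedence over author styles, which is the ability these guidelines call for, but the actual CSS cascade is considerably more involved (importance, specificity, and origin all interact).

```python
def current_value(prop, defaults, author_styles, user_styles):
    """Resolve the current value of a property: user settings win,
    then author styles, then the user agent's default value."""
    if prop in user_styles:
        return user_styles[prop]
    if prop in author_styles:
        return author_styles[prop]
    return defaults[prop]

defaults = {"font-size": "12pt", "color": "black"}  # user agent defaults
author = {"color": "gray"}                           # from the document/style sheet
user = {"font-size": "24pt"}                         # e.g., a low-vision user's preference

print(current_value("font-size", defaults, author, user))  # 24pt
print(current_value("color", defaults, author, user))      # gray
```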
User agents may handle different types of source information: documents, sound objects, video objects, etc. The user perceives the information through a viewport, which may be a window, a frame, a piece of paper, a speaker, a virtual magnifying glass, etc. A viewport may contain another viewport (e.g., nested frames, plug-ins, etc.).
User agents may render the same source information in a variety of ways; each rendering is called a view. For instance, a user agent may allow users to view a document in one window and a generated list of headers for the document in another.
The view is how source information is rendered and the viewport is where it is rendered.
Generally, viewports give users access to all rendered information, though not always at once. For example, a video player shows a certain number of frames per second, but allows the user to rewind and fast forward. A visual browser viewport generally features scrollbars or some other paging mechanism that allows the user to bring the rendered content into view.
Some of the guidelines below involve tracking the user's point of regard in the view. The point of regard describes where the user is expected to interact with the rendered information. As the guidelines below state, user agents should avoid displacing the viewport away from the user's point of regard as this can disorient users.
Identifying the point of regard depends on the viewport. For paper, for example, it is difficult to identify the point of regard any more precisely than on the entire page. For sound and audio players (and linear devices in general), the point of regard designates the information currently being rendered.
When the viewport gives the user access to information in more than one dimension (e.g., on the screen), there are several mechanisms generally offered by user agents that may be used to identify the point of regard:
The insertion point is generally rendered specially (e.g., on the screen, by a vertical bar or similar cursor).
A view has only one focus. When several views co-exist, each may have a focus, but only one is active, called the current focus. The current focus is generally highlighted.
The user selection is generally highlighted. On the screen, the selection may be highlighted using colors, fonts, graphics, or other mechanisms. Highlighted text is often used by third-party assistive technologies to indicate through speech or Braille output what the user wants to read. Most screen readers are sensitive to highlight colors. Third-party assistive technologies may provide alternative presentation of the selection through speech, enlargement, or dynamic Braille display.
The user selection may be used, for example, to identify the "current cell" of a table that the user is navigating.
Both the current focus and the current user selection must be in the same view, called the current view. The current view is generally highlighted when several views co-exist.
Which of the three mechanisms - insertion point, selection, and focus - is used to designate the point of regard depends on context. For example, for navigating among form controls, the focus determines which control has the point of regard. For navigating table cells, the selection determines which cell has the point of regard. When a technique involves the point of regard, it specifies which mechanism is used to designate it.
When certain events occur (document loading or unloading events, mouse press or hover events, keyboard events, etc.), user agents often perform some task (e.g., execute a script). For instance, in most user agents, when a mouse button is released over a link, the link is activated and the linked resource retrieved. User agents may also execute author-defined scripts when certain events occur. The script bound to a particular event is called an event handler. Note. The interaction of HTML, style sheets, the Document Object Model [DOM1] and scripting is commonly referred to as "Dynamic HTML" (DHTML). However, as there is no W3C specification that formally defines DHTML, this document will only refer to event handlers and scripts.
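The binding of scripts to events can be sketched as a simple registry. This toy model is an assumption for illustration: real user agents implement the DOM event model, not this class, and the event name "onmouseup" is borrowed from HTML merely as an example.

```python
class EventTarget:
    """A toy event target: handlers (scripts) are bound to named events
    and run when the user agent dispatches that event."""

    def __init__(self):
        self._handlers = {}

    def bind(self, event_name, handler):
        self._handlers.setdefault(event_name, []).append(handler)

    def dispatch(self, event_name):
        # Run every handler bound to this event, in binding order.
        return [handler() for handler in self._handlers.get(event_name, [])]

link = EventTarget()
link.bind("onmouseup", lambda: "follow link")
print(link.dispatch("onmouseup"))  # ['follow link']
```

A user agent that lets keyboard users emulate pointer events could simply dispatch the same named event from a key press, so the same handler runs regardless of input device.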
Certain types of content (e.g., images, audio, video, applets, etc.) may not be accessible to all users, so user agents must ensure that users have access to author-supplied alternative representations of content. The techniques document describes the different mechanisms authors may use to supply alternative representations of content.
An independent user agent only requires the source document (and associated style sheets, etc.) to render the document's content. It may use an accessibility API to retrieve information about a document.
A dependent user agent relies on the output of some other user agent in order to render a page. Dependent user agents include screen magnifiers and screen readers, both of which rely on the visual output of another user agent.
Ideally, able-bodied as well as disabled users would be aware of user agent features that improve accessibility and experienced peers could share their knowledge with new users. In reality, most able-bodied peers do not know about features that improve accessibility or do not find them useful for their own use and so ignore them. Consequently, users with disabilities must find help information on their own to optimize the user agent to their preferences. This information must be readily available and in accessible formats.
Otherwise, the user cannot even begin to use the software.
The organization and layout of accessibility feature controls has a major impact on helping users with disabilities find and learn about configuration options and the range of browsing options available to them. If the user agent interface does not offer coherent and consistent means for configuring the software, users with disabilities may not be able to optimize the user interface to their preferences.
Users should be able to turn on and off certain features that may interfere with accessibility.
Documentation must be available in accessible formats for people with print impairments. If visual impairments, learning disabilities, or movement impairments limit users' ability to use paper or on-line documentation, they may not know about user agent features or be able to use the user agent efficiently, or at all.
Keyboard access can be a boon to users with any of several disabilities. The more apparent keyboard commands are to all users, the more likely it is that new users with disabilities will find them. Readily available information about keyboard access is crucial to its effective use by users with visual impairments and some types of movement impairments; without it, a user with disabilities may not think a particular task can be accomplished, or may resort to a very inefficient technique with a pointing device, like a mouse.
Users must be able to emulate events when their user agents do not allow them to trigger events in the anticipated way. For instance, users who cannot use a pointing device must be able to trigger the same events that users with mice can trigger. Furthermore, some users may not realize when an event occurs because it has an effect they cannot perceive (e.g., a visual or auditory effect). The user agent must be able to inform users when events take place and, to a certain extent, what the effect of each event has been.
In order to access a document, some users may require that it be rendered in a manner vastly different from that which the author intended. Users who are color-blind may not be able to perceive certain color combinations. Users with low vision may require large fonts. Blind users may require audio or tactile rendering. Deaf users may require textual equivalents of audio files.
User agents must therefore allow the user to control:
The following guidelines address user control over rendering.
The user must be able to control the colors, fonts, and other stylistic aspects of a document, and to override author styles and user agent default styles. Otherwise, users who are blind, who have visual impairments or some types of learning disabilities, or any user who cannot or has chosen not to view the primary representation of information will not know the content of the page.
Techniques for text:
Techniques for the selection and focus:
Techniques for images, applets, and animations:
Techniques for video:
Techniques for audio:
Techniques for speech:
User agents must give users access to author-supplied alternative representations of content (descriptions of images, video clips, sounds, applets, etc.). Some users cannot perceive the primary content due to a disability or a technological limitation (e.g., a browser configured not to display images). Without access to alternative representations, users who are blind, who have visual impairments or some types of learning disabilities, or any user who cannot or has chosen not to view the primary representation of information will not understand the intention or content of the page.
Techniques for images:
Techniques for audio and video:
Techniques for other types of objects:
By offering several formatting solutions to users, user agents allow them to select the solution most adapted to their needs. For instance, users with screen readers will have difficulty understanding some tables whose cell contents exceed one line. In this case, if the independent user agent can linearize the table (i.e., render it one cell at a time), the screen reader's output will not cause confusion. Similarly, some users may require that a user agent present an outline view of a document so they may have an easier time navigating the document.
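Table linearization as described above can be sketched simply: render the table one cell at a time, row by row, announcing each cell's position. This is an illustrative assumption, not an algorithm from the techniques document; the data shapes are hypothetical.

```python
def linearize(table, caption=None):
    """Flatten a two-dimensional table into a sequence of lines that a
    speech or Braille user agent could present one cell at a time."""
    lines = []
    if caption:
        lines.append(caption)
    for r, row in enumerate(table, start=1):
        for c, cell in enumerate(row, start=1):
            lines.append(f"Row {r}, column {c}: {cell}")
    return lines

table = [["Name", "Phone"], ["Ada", "555-0100"]]
for line in linearize(table, caption="Contacts"):
    print(line)
```

Announcing row and column positions preserves the spatial relationships that a sighted user would otherwise infer from the visual layout.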
Orientation - the sense of where one is within a document or series of documents - is fundamental to a successful Web experience. Authors may contribute to orientation through site maps, navigation bars, visual associations between documents using frames, etc. User agents also facilitate orientation with proportional scrollbars on viewports. Not all users can use visual orientation cues, however, so user agents must complement them by:
If users cannot orient themselves among the types of elements in a document, users who are blind, who have visual impairments or some types of learning disabilities, or any user who cannot or has chosen not to view the author's representation of information will have incomplete knowledge of the document's content.
Navigation and searching are closely related to orientation. There are different ways a user may want to navigate while browsing the Web, including:
For all of the navigation techniques described below, the user agent must allow the user to navigate via the keyboard.
Users who view documents through linear channels of perception, such as speech (which is temporal in nature) or tactile displays (current tactile technology is limited in the amount of information that can be displayed), have difficulty maintaining a sense of their relative position in a document. The meaning of "relative position" depends on the situation: it may mean the distance from the beginning of the document to the point of regard (how much of the document has been read), the cell currently being examined in a table, or the position of the current document in a set of documents.
For people with visual impairments, it is important that the point of regard remain as stable as possible. For instance, when returning to a document previously viewed, the user's previous point of regard on the document should be restored. The user agent should not disturb the user's point of regard by shifting focus to a different frame or window when an event occurs without notifying the user of the change. Disturbing the user's point of regard may cause problems for users who have movement impairments, who are blind, who have visual impairments, who have certain types of learning disabilities, or for any user who cannot or has chosen not to view the author's representation of information.
Often, users wish to navigate a linear sequence of elements within a document. If sequential navigation is not available, users with some types of movement impairments, visual impairments, or learning disabilities may not be able to access the links and controls in a document.
Hierarchical navigation (through the document tree) is useful for efficiently navigating the major topics and sub-topics of a document. Topical navigation (section by section) conveys significant information about the organization of a document. If hierarchical navigation is not available, users with some types of movement impairments, visual impairments, or learning disabilities may not be able to access the links and controls on a page efficiently, nor understand the topics and topic relationships within a document.
Tabular navigation is required by people with visual impairments and some types of learning disabilities to determine the content of a particular cell and the spatial relationships between cells (which may convey information). If table navigation is not available, users with some types of visual impairments and learning disabilities may not be able to understand the purpose of a table or table cell.
For efficient browsing, users may want to navigate to a particular element of a certain type. For instance, in a document with many links, sequential navigation requires a great number of keystrokes to reach a desired link. In addition to being wearisome, this may be physically difficult for some users. Without direct access (e.g., by element identifier or element position), users with some types of movement impairments, visual impairments, or learning disabilities may not be able to access the links, controls, and other elements in a document efficiently.
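Navigation by element type can be sketched as a filter over the document's elements, so the user can jump straight to, say, the second link instead of tabbing past everything in between. The sketch below is an assumption for illustration; the element representation is hypothetical.

```python
def elements_of_type(document, element_type):
    """Collect all elements of a given type, in document order, so a
    user agent can offer direct navigation among them."""
    return [el for el in document if el["type"] == element_type]

document = [
    {"type": "heading", "text": "Introduction"},
    {"type": "link", "text": "Next page"},
    {"type": "text", "text": "Some body text."},
    {"type": "link", "text": "Glossary"},
]

links = elements_of_type(document, "link")
print(links[1]["text"])  # Glossary -- reached directly, not sequentially
```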
When a search matches, both the point of regard and the selection are moved to the matched location.
[Ed. In the following text searches, what happens when the search succeeds? Is the user selection updated? The focus? Is the viewport moved?]
A user agent that supports a language should implement the accessibility features for the language. The techniques document lists the accessibility features of the following languages, defined by W3C specifications:
Whether or not a user agent chooses to implement a particular feature, it must make pertinent information available to other software in an interoperable manner. Even though a user agent may implement a feature, specialized assistive software may furnish a better implementation and must be given access to all pertinent information through standard interfaces.
Many thanks to the following people who have contributed through review and comment: Paul Adelson, James Allen, Denis Anson, Kitch Barnicle, Harvey Bingham, Judy Brewer, Kevin Carey, Wendy Chisholm, Chetz Colwell, Daniel Dardailler, Neal Ewers, Geoff Freed, Larry Goldberg, Markku Hakkinen, Chris Hasser, Kathy Hewitt, Phill Jenkins, Leonard Kasday, George Kerscher, Marja-Riitta Koivunen, Josh Krieger, Greg Lowney, Scott Luebking, William Loughborough, Napoleon Maou, Charles McCathieNevile, Masafumi Nakane, Charles Opperman, Mike Paciello, David Pawson, Helen Petrie, David Poehlman, Michael Pieper, Jan Richards, Greg Rosmaita, Liam Quinn, T.V. Raman, Robert Savellis, Constantine Stephanidis, Jim Thatcher, Jutta Treviranus, Claus Thøgersen, Steve Tyler, Gregg Vanderheiden, Jaap van Lelieveld, Jon S. von Tetzchner, Ben Weiss, Evan Wies, Chris Wilson, Henk Wittingen, and Tom Wlodkowski.
If you have contributed to the UA guidelines and your name does not appear please contact the editors to add your name to the list.