This document provides guidelines for designing user agents that lower barriers to Web
accessibility for people with disabilities (visual, hearing, physical,
cognitive, and neurological). User agents include browsers and other types
of software that retrieve and render Web content. A user agent that
conforms to these guidelines will
promote accessibility through its own user interface and through other internal
facilities, including its ability to communicate with other technologies
(especially assistive technologies).
Furthermore, all users, not just users with disabilities, should find
conforming user agents to be more usable.
In addition to helping developers of browsers and media players, this
document will also benefit developers of assistive technologies because it
explains what types of information and control an assistive technology may
expect from a conforming user agent. Technologies not addressed directly by
this document (e.g., technologies for braille rendering) will be essential to
ensuring Web access for some users with disabilities.
The "User Agent Accessibility Guidelines 2.0" (UAAG 2.0)
is part of a series of accessibility guidelines published by the W3C Web
Accessibility Initiative (WAI).
May be Superseded
This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be
found in the W3C technical reports index at
http://www.w3.org/TR/.
Editor's Draft of UAAG 2.0
This is an internal Editor's Draft.
The Working Group (UAWG) intends
to publish UAAG 2.0 as a W3C Recommendation. Until that time User Agent Accessibility Guidelines 1.0 (UAAG 1.0) [UAAG10] is
the stable, referenceable version. This Working Draft does not supersede
UAAG 1.0.
Web Accessibility Initiative
This document has been produced as part of the W3C Web
Accessibility Initiative (WAI). The goals of the UAWG are discussed
in the Working Group charter.
The UAWG is part of the WAI
Technical Activity.
No Endorsement
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
Patents
This document was produced by a group operating under the 5
February 2004 W3C Patent Policy. W3C maintains a public
list of any patent disclosures made in connection with the deliverables
of the group; that page also includes instructions for disclosing a patent.
An individual who has actual knowledge of a patent which the individual
believes contains Essential
Claim(s) must disclose the information in accordance with section
6 of the W3C Patent Policy.
Editing Styles:
- changed-from-version1: Any text that has changed since version 1.0 except new changes.
- newly-approved-text: New changes to this draft
- proposed-text: Proposals that have not been accepted
- @@editor-notes@@: Notices from the editor(s).
This section is informative.
This document specifies requirements that, if satisfied by
user agent developers, will lower barriers
to accessibility. This document includes the following:
- This introduction provides context for understanding the requirements
listed in section 2.
- Section 2 explains 12 general
principles of accessible design, called "guidelines." Each guideline consists
of a list of requirements, called "checkpoints," which must be satisfied in
order to conform to this document. Note: Section 2 is numbered
differently than the other sections; it consists of a list of guidelines. In
section 2, "checkpoint 1.2" refers to the second checkpoint of the first
guideline. "Section 1.2" refers to a subsection of the Introduction.
- Section 3 explains how to make
claims that software components satisfy the requirements of section 2.
- An appendix offers a summary of this document's principal goals and
structure [UAAG10-SUMMARY].
- A second appendix lists all the checkpoints for convenient reference (e.g.,
as a tool for developers to evaluate software for conformance)
[UAAG10-CHECKLIST].
A separate document, entitled "Techniques for User Agent Accessibility
Guidelines 1.0" (the "Techniques document" from here on)
[UAAG10-TECHS], provides
suggestions and examples of how each checkpoint might be satisfied. It also
includes references to other accessibility resources (such as platform-specific
software accessibility guidelines) that provide additional information on how a
user agent may satisfy each checkpoint. The techniques in the Techniques
document are informative examples only,
and other strategies may be used or required to satisfy the checkpoints. The
UAWG expects to update the Techniques document more
frequently than the current guidelines. Developers, W3C Working Groups, users,
and others are encouraged to contribute techniques.
"User Agent Accessibility Guidelines 1.0" (UAAG 1.0) is
part of a series of accessibility guidelines published by the
Web Accessibility Initiative
(WAI). The documents in
this series reflect an accessibility model in which Web content authors, format
designers, and software developers have roles in ensuring that users with
disabilities have access to the Web. The accessibility-related interests of
these stakeholders intersect and complement each other as follows:
- Designers of formats (e.g., HTML, XHTML, XML, SVG, SMIL, MathML, and
XForms) and protocols (e.g., HTTP) create specifications that allow
communication on the Web. Format designers include features in these
specifications that authors should use to create accessible content and that
user agents should support through an accessible user interface. The "XML
Accessibility Guidelines (XAG)"
[XAG10] explains the
responsibilities of XML format designers; many XAG requirements make sense for
non-XML formats as well.
- Authors make use of the accessibility features of different format
specifications, use markup appropriately, write in clear and simple language,
and organize a Web site consistently. The "Web Content Accessibility Guidelines
1.0" [WCAG10] explains the
responsibilities of authors in meeting the needs of users with disabilities.
The "Web Content Accessibility Guidelines (WCAG) 1.0" is
considered within UAAG 1.0 to be the reference set of requirements to make Web
content accessible. The "Authoring Tool Accessibility Guidelines 1.0"
[ATAG10] explains the
responsibilities of authoring tool developers. An accessible authoring tool
facilitates the creation of accessible Web content and may be operated by users
with disabilities.
- User agent developers design software that meets the needs of users with
disabilities through conformance to other specifications, an accessible user
interface, accessible documentation, and communication with other software
(notably assistive
technologies).
The requirements of this document interact with those of the "Web Content
Accessibility Guidelines 1.0" [WCAG10] in a number of ways:
- UAAG 1.0 checkpoint 8.1 requires implementation of the accessibility
features of specifications. Features are those identified as such and those
that satisfy all of the requirements of WCAG 1.0
[WCAG10].
- UAAG 1.0 checkpoint 12.1 requires conformance to WCAG 1.0 for user
agent documentation.
- UAAG 1.0 also incorporates some terms and concepts from WCAG 1.0, a
consequence of the fact that the documents were designed to complement one
another.
Some requirements of this document take into account limitations of formats,
authors, and designers. For example, formats generally do not enable authors to
encode all of their knowledge in a way that a user agent can fully
recognize. A format may lack features
required for accessibility. An author may not make use of the accessibility
features of a format or may misuse a format (which can cause problems for user
agents). A user agent designer may not implement a format specification
correctly or completely.
Some of these limitations are taken into account as follows:
- UAAG 1.0 includes requirements to satisfy the expectations set by WCAG 1.0
"until user agent" clauses. These clauses make additional requirements of
authors in order to compensate for some limitations of deployed user
agents.
- UAAG 1.0 includes several repair requirements (e.g., checkpoint 2.7
and checkpoint 2.10) for cases where content does not conform to
WCAG 1.0. Furthermore, this document includes some requirements to address
certain widespread authoring practices that are discouraged because they may
cause accessibility or usability problems (e.g., some uses of HTML
frames).
- Except for the indicated repair checkpoints, UAAG 1.0 only requires user
agents to handle what may be recognized through protocols and formats.
For example, user agents are not expected to recognize that the author has used
"clear and simple" language to express ideas (WCAG 1.0, checkpoint 14.1). See
the section on checkpoint
applicability for more information about what the user agent is expected to
recognize.
The Web Accessibility Initiative
provides other resources and
educational materials to promote Web accessibility. Resources include
information about accessibility policies, links to translations of WAI
materials into languages other than English, information about specialized user
agents and other tools, accessibility training resources, and more.
This document was designed specifically to improve the accessibility of user
agents with multimedia capabilities running in the following type of
environment (typically that of a desktop computer):
- The operating environment includes a keyboard (or keyboard equivalent)
- Assistive technologies may be used in the operating environment and may
communicate with the conforming user agent
The target user agent is one designed for the general public to handle
general-purpose content in ordinary operating conditions.
This document does not forbid conformance by other types of user
agents, but some requirements (e.g., implementation of certain application
programming interfaces, or APIs) are not
likely to be satisfied in environments (e.g., handheld devices or kiosks) other
than the target environment. Future work by the UAWG may
address the accessibility of user agents running on handheld devices, for
example.
Technologies not addressed directly by this document (e.g., those for
braille rendering) will be essential to ensuring Web access for some users with
disabilities. Note that the ability of conforming user agents to communicate
well with assistive technologies will depend in part on the willingness of
assistive technology developers to follow the same standards and conventions
for communication.
In general, a conforming user
agent will consist of several coordinated components, such as a Web
browser, a multimedia player, several plug-ins, features or applications
provided by the operating environment, and documentation distributed with the
software or available on the Web. These components may run on the user's
computer or on a server. A conforming user agent may also include assistive
technologies and applications provided by the operating environment. The
current document places no restrictions on the type or number of components
used for conformance.
This does not mean that every component that one has chosen as part of the
user agent has to satisfy every single requirement; some requirements may not
be relevant for a particular component. For instance, if a component does not
have a user interface, the user interface requirements would not be relevant.
On the other hand, if a component has a user interface, the user interface
requirements are relevant. Conformance addresses the composite user agent
as a whole.
The UAWG encourages developers to satisfy the requirements of this document
by adopting operating environment
conventions and features that benefit accessibility. When an operating
environment feature (e.g., the operating system's audio control panel,
including its user interface) is adopted to satisfy the requirements of this
document, it is part of the user agent.
See additional information on conformance of user agents running in
multiple operating environments.
People with (or without) disabilities access the Web with widely varying
sets of capabilities, software, and hardware. Some users with disabilities:
- May not be able to see, hear, move, or speak.
- May not be able to perceive, read, or process some types of information
easily or at all.
- May not have or be able to use a keyboard or pointing device.
This document does not include requirements to meet all known accessibility
needs. Some known limitations of this document include the following:
- Input modalities
- This document only includes requirements for keyboard, pointing device, and
voice input modalities. This document includes several checkpoints related to
voice input as part of general input requirements (e.g., the checkpoints of
guideline 7 and
guideline 11) but
does not otherwise address voice-based navigation or control.
- Note: The UAWG intends to coordinate
further work on the topics of voice input and synthesized speech rendering with
groups in W3C's Voice Browser Activity
and Multimodal Interaction
Activity.
- Output modalities
- This document does not include requirements for braille rendering. Some
requirements are specific to graphical rendering and others specific to audio
output or synthesized speech output. Speech rendering requirements are made by
checkpoint
4.9 to checkpoint 4.13. Many of the requirements of this document
are generic enough to apply to a variety of output modalities, including
braille. User agents conform to this
document by supporting some combination of graphical and audio/speech rendering
output; see the section on
Content type labels for more
information.
- Size and color of non-text content
- This document includes some checkpoints to ensure that the user is able to
control the size and color of visually rendered text content (checkpoints
4.1 and
4.3).
This document does not in general address control of the size and color of
visually rendered non-text content (e.g.,
images).
- Note: A user agent may implement resizing functionalities
as part of conformance to other specifications (e.g., Scalable Vector Graphics
[SVG]).
- Background image interference
- The requirement of
checkpoint 3.1 to allow the user to turn off rendering of
background images does not extend to multi-layered rendering.
- User control of every user interface component
- This document distinguishes user interface features that are part of the
user agent user interface
and those that are part of content. Some checkpoints (e.g., those in
guideline 5)
require user control over rendering and behavior that is driven by
content only. This document does not always
explicitly require the same control over features of the user agent user
interface. Nevertheless, this document (see
checkpoint
7.3) does require user agents to follow software usability guidelines. The
UAWG expects such usability guidelines to include requirements for
user control over user interface behavior.
- Note: It is more difficult for users to distinguish
content from user interface when both are rendered as sound in one temporal
dimension than when both
are rendered visually in two spatial dimensions. Thus, the UAWG encourages
developers of user agents that include audio output or synthesized speech
output to apply the requirements of this document to both content and user
agent components.
- Time parameters
- This document includes requirements (see checkpoints
2.4,
4.4,
4.5, and
4.9)
for control of some time parameters. The requirements are for time parameters
that the user agent recognizes and controls. This document does not include
requirements for control of time parameters managed on the server.
- Digital rights management
- The User Agent Accessibility Guidelines Working Group recognizes that
further work is necessary in the area of digital rights management as it
relates to accessibility. Digital rights management refers to methods of
describing and perhaps enforcing intellectual property associated with Web
resources.
Note: The User Agent Accessibility Guidelines Working Group
may address these and other topics in a future version of the User Agent
Accessibility Guidelines. Even though UAAG 1.0 does not
address these topics, the UAWG encourages user agent developers to consider
them in their designs.
One of the goals of the authors of this document is to ensure that the
requirements are compatible with other good software design practices. However,
this document does not purport to be a complete guide to good software design.
For instance, the general topic of user interface design for computer software
exceeds the scope of this document, though some user interface requirements
have been included because of their importance to accessibility. The Techniques
document [UAAG10-TECHS] includes some
references to general software design guidelines and platform-specific
accessibility guidelines (in particular, see
checkpoint
7.3). Involving people with disabilities in the design and testing of
software will generally improve the accessibility of the software.
This document promotes conformance to other specifications as part of
accessible design. Conformance to specifications makes it easier to design
assistive technologies, and helps ensure the implementation of built-in
accessibility functions.
This document also includes some requirements to implement an accessibility
feature that may only be optional in another specification.
In rare cases, a requirement in UAAG 1.0 may conflict with a requirement in
another specification. UAAG 1.0 does not define a process for resolving such
conflicts. The authors of this document anticipate that developers will
consider accessibility implications in determining how to resolve them.
Installation is an important aspect of both accessibility and general
software usability. On platforms where a user can install a user agent, the
installation (and update) procedures need to be accessible. Furthermore, the
installation procedure should provide and install all components necessary to
satisfy the requirements of this document, since the risk of installation
failure increases with the number of components (e.g., plug-ins) to be installed.
This document does not include a checkpoint requiring that installation
procedures be accessible. Since this document considers installation to be part
of software usage, the different aspects of installation (e.g., user interface,
documentation, and operating environment
conventions) are already covered by the complete set of checkpoints.
Some of the requirements of this document may have security implications,
such as communication through APIs, and allowing programmatic read and write
access to content and user interface control. This
document assumes that features required by this document will be built on top
of an underlying security architecture. Consequently, unless permitted
explicitly in a checkpoint (as in checkpoint 6.5), this document grants no conformance
exemptions based on security issues.
Developers should design user agents that enable communication with trusted
assistive technologies. Sensitive information that the user agent can access
through the user agent's user interface should also be available to assistive
technologies through secure means. For instance, if the user types a password
in the user agent user interface, communicate the real password, properly
encrypted, through the API rather than substitute characters (such as
asterisks).
Note also that appropriate user agent behavior with respect to security may
depend on the user's context. For instance, hiding typed passwords with
asterisks is much less important for someone alone in a room than for someone
in a crowded room. Similarly, while unencrypted passwords rendered as
synthesized speech should not be broadcast in a crowded room, they may pose no
security risk if the user is wearing an earphone.
For information related to security, refer to "XML-Signature Syntax and
Processing" [XMLDSIG] and "XML Encryption
Syntax and Processing" [XMLENC].
This document emphasizes the goal of ensuring that users, including users
with disabilities, have control over their environment for accessing the Web.
Key methods for achieving that goal include: optional self-pacing,
configurability, device-independence, interoperability, direct support for both
graphical and auditory output, and adherence to published conventions.
Chapter 2 addresses these issues in
detail.
This document also acknowledges the importance of author preferences and the
proper implementation of specifications. However, this document includes
requirements to override certain author preferences when the user would not
otherwise be able to access that content.
Many of the requirements in this document give the user additional control
over behavior that would otherwise occur automatically. For instance, there is
a requirement to allow configuration to not open a viewport automatically
(checkpoint
5.3) and one that requires user confirmation before submitting a form
(checkpoint
5.5). This type of manual configuration option may be essential for some
users with disabilities, since automatic behavior may be disorienting or
interfere with navigation.
This document includes requirements for users with a variety of
disabilities, in part because some users may have more than one disability. In
some cases, it may appear that two requirements contradict each other. For
instance, a user with a physical disability may prefer that the user agent
offer more automatic behavior (to reduce demand for physical effort) than a
user with a cognitive disability (for whom automatic behavior may cause
confusion). Thus, many of the requirements in this document involve
configuration as one way to ensure that a functionality designed to improve
accessibility for one user does not interfere with accessibility for another.
Also, since a default user agent setting may be useful for one user but
interfere with accessibility for another, this document prefers configuration
requirements to requirements for default settings. Finally, there may be some
cases where, for some content, a feature required by this document is
ineffective or causes content to be less accessible, making it imperative that
the user be able to turn off the feature.
To avoid overwhelming users with an abundance of configuration options, this
document includes requirements that promote ease of configuration and
documentation of accessibility features (see
guideline
12).
Many requirements in this document promote different kinds of
independence:
- Input and output device independence. This document includes some
requirements to promote device-independence natively, as well as requirements
for interoperability with assistive technologies that provide complementary
input and output functionalities.
- Spatial independence. Some users may not navigate effectively in
two-dimensional visual space
(e.g., users who do not use a pointing device) or may be constrained to one
temporal dimension (e.g., users of audio-only output).
- Temporal independence. Some users (e.g., users with a physical or cognitive
disability) may not be able to interact with content that changes over time, or
with content that requires time-sensitive interaction.
In meeting the goals of users with disabilities, user agent developers will
also improve access to the Web for users in general. For example, users without
disabilities:
- may have a text-only screen, a small screen, or a slow Internet connection
(e.g., via a mobile phone browser). These users are likely to benefit from the
same features that provide access to people with low vision or blindness.
- may be in a situation where their eyes, ears, or hands are busy or
interfered with (e.g., driving to work or working in a noisy environment).
These users are likely to benefit from the same features that provide access to
people who, because of a visual, hearing, or physical disability, cannot see,
hear, or use a mouse or keyboard.
- may not understand fluently the natural language of spoken content. These
users are likely to benefit from the same visual rendering of
text equivalents that make spoken
language accessible to people with a hearing disability.
The UAWG expects that software which satisfies the requirements of this
document will be more flexible, manageable, extensible, and beneficial to all
users. For example, a user agent architecture that allows programmatic access
to content and the user interface will encourage software
modularity and reuse, and will enable operation by scripting tools and
automated test engines in addition to assistive technologies.
UAAG 2.0 Guidelines
PRINCIPLE 1. Follow applicable specifications and conventions
1.1 Observe operating environment conventions
Level A Success Criteria for Guideline 1.2
- A.1.3.1 Follow and Cite Conventions: Operating environment conventions are followed and the convention sources are cited for all of the following:
@@7.1@@
- (a) Input: Keyboard, mouse, etc. including non-interference with keyboard accessibility features of the platform (e.g., StickyKeys, SlowKeys, browser link navigation)@@7.2@@
- (b) Content Focus and User Interface Focus
- (c) Selection, and
- (d) Product installation.@@7.3@@
Level AA Success Criteria for Guideline 1.2
- A.1.3.2 Follow and Cite Conventions: Operating environment conventions are followed and the convention sources are cited for all of the following:
Level AAA Success Criteria for Guideline 1.2
- A: Implement the accessibility features listed in the technology accessibility features benchmark @@NEEDS DEFINITION@@ for all technologies listed in the conformance profile.@@8.1@@
- A: Render content according to technology specification
(e.g., for a markup language or style sheet language). Note: This includes any accessibility features of the technology (see Checkpoint 1.2). @@2.1@@
- AA: If the user agent does not render a technology, it should allow the user to choose a way to handle content in that technology (e.g., by
launching another application or by saving it to disk).@@NEW@@
Note: When a rendering requirement of another specification contradicts a
requirement of UAAG 2.0, the user agent may disregard the rendering requirement
of the other specification and still satisfy this checkpoint; see the section
on the relation of this document to general
software design guidelines and other specifications for more
information.
PRINCIPLE 2. Facilitate access
by assistive technologies
- Provide programmatic read access to XML content by making available all of the information items defined by the W3C XML Infoset [INFOSET].
- Provide programmatic read access to HTML content by making available all of the
following information items defined by the W3C XML Infoset [INFOSET]:
- Document Information item: children, document element, base URI,
charset
- Element Information items: element-type name, children, attributes,
parent
- Attribute Information items: attribute-type name, normalized value,
specified, attribute type, references, owner element
- Character Information items: character code, parent element
- Comment Information items: content, parent
- If the user can modify the state or value of a
piece of HTML or XML content through the user interface (e.g., by checking a
box or editing a text area), allow programmatic read access to the current
state or value, and allow the same degree of write access programmatically as
is available through the user interface.
- Provide access to the content required in checkpoint 6.1 by conforming
to the following modules of the W3C Document Object Model
(DOM) Level 2 Core
Specification [DOM2CORE] and exporting bindings
for the interfaces they define:
- for HTML: the Core module
- for XML: the Core and XML modules
- As part of satisfying provision one of this
checkpoint:
- In the Java and ECMAScript operating environments, export the normative
bindings specified in the DOM Level 2 Core Specification [DOM2CORE], or
- In other operating environments, the exported bindings (e.g., C++) must be
publicly documented.
- Refer to the "Document Object Model (DOM) Level 2 Core Specification" [DOM2CORE] for information about
which versions of HTML, XML, Java, and
ECMAScript are covered. Appendix
D contains the Java bindings and Appendix E contains the ECMAScript bindings.
- The user agent is not required to export the bindings outside of the user
agent process (though doing so may be useful to assistive technology
developers).
Note: This checkpoint stands apart from checkpoint 6.1 to emphasize
the distinction between what information is required and how to provide access
to that information. Furthermore, the DOM Level 2 Core Specification does not
provide access to current states and values referred to in provision three of checkpoint 6.1. For HTML
content, the interfaces defined in [DOM2HTML] do provide access to
current states and values.
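As an informative illustration of that read and write access, the following TypeScript sketch uses the DOM HTML interfaces (as in [DOM2HTML] and later DOM specifications) to read and update the current state of a checkbox. The dispatch of a synthetic "change" event is an assumption about how a user agent might keep author scripts informed; it is not required by this document.

    // Informative sketch only: reading and writing the *current* state of
    // an HTML checkbox through the DOM, as an assistive technology or test
    // harness might do via a user agent's exported document object.
    function readCheckboxState(doc: Document, id: string): boolean | null {
      const el = doc.getElementById(id);
      if (el instanceof HTMLInputElement && el.type === "checkbox") {
        return el.checked;           // current state, not the markup default
      }
      return null;
    }

    function writeCheckboxState(doc: Document, id: string, value: boolean): void {
      const el = doc.getElementById(id);
      if (el instanceof HTMLInputElement && el.type === "checkbox") {
        el.checked = value;          // same degree of write access as the user interface offers
        // Assumption: also notify author scripts, as a user action would.
        el.dispatchEvent(new Event("change", { bubbles: true }));
      }
    }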
- For content other than HTML and XML, provide structured programmatic read access to content.
- If the user can modify the state or value
of a piece of non-HTML/XML content through the user interface (e.g., by checking a
box or editing a text area), allow programmatic read access to the current
state or value, and allow the same degree of write access programmatically as
is available through the user interface.
- As part of satisfying provision one of this
checkpoint, implement at least one API according
to this API cascade:
- The API is defined by a W3C Recommendation, or the API is
publicly documented and designed to enable interoperability with assistive
technologies.
- If no such API is available, or if available APIs do not enable the user
agent to satisfy the requirements,
- "Structured programmatic access" means access through an API to recognized
information items of the content (such as the information items of the XML
Infoset [INFOSET]). Plain text has little
structure, so an API that provides access to it will be correspondingly less
complex than an API for XML content. For content more structured than plain
text, an API that only provides access to a stream of characters does not
satisfy the requirement of providing structured programmatic access. This
document does not otherwise define what is sufficiently structured access.
- An API is considered "available" if the specification of the API is
published (e.g., as a W3C Recommendation) in time for integration into a user
agent's development cycle.
Note: This checkpoint addresses content not covered by
checkpoints 6.1 and 6.2.
2.4 Programmatic access to
information about rendered content (P1) Techniques for checkpoint 6.4@@6.4@@
- For graphical user agents, make available
bounding dimensions and coordinates of rendered graphical objects. Coordinates
must be relative to the point of origin in the graphical environment (e.g.,
with respect to the desktop), not the viewport.
- For graphical user agents, provide access
to the following information about each piece of rendered text: font family,
font size, and foreground and background colors.
- As part of satisfying provisions one and
two of this checkpoint, implement at least one API according to the API cascade
described in provision two of checkpoint 6.3.
Note: User agents should provide programmatic access to
additional useful information about rendered content that is not available
through the APIs required by checkpoints 6.2 and 6.3, including the correspondence (in both directions)
between graphical objects and their source in the document object, and information
about the role of each graphical object.
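The following TypeScript sketch is a rough, informative approximation of the information this checkpoint asks for: desktop-relative bounding coordinates plus the font and color information of rendered text. The window-chrome offset used here is only an estimate available to page scripts; a conforming user agent would compute exact desktop coordinates internally.

    // Informative sketch: approximate desktop-relative coordinates and
    // text rendering information for an element. The chrome offset below
    // is a crude estimate; a real user agent knows the exact figures.
    function describeRenderedObject(el: Element) {
      const box = el.getBoundingClientRect();                        // viewport-relative
      const chromeX = window.screenX + (window.outerWidth - window.innerWidth);
      const chromeY = window.screenY + (window.outerHeight - window.innerHeight);
      const style = window.getComputedStyle(el);
      return {
        desktopX: chromeX + box.left,   // point of origin: desktop, not viewport
        desktopY: chromeY + box.top,
        width: box.width,
        height: box.height,
        fontFamily: style.fontFamily,
        fontSize: style.fontSize,
        foreground: style.color,
        background: style.backgroundColor,
      };
    }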
2.5 Programmatic operation of user agent
user interface (P1) Techniques for checkpoint 6.5@@6.5@@
- Provide programmatic read access to user agent user interface
controls, selection, content focus, and user interface focus.
- If the user can modify the state or value of a user agent user interface
control (e.g., by checking a box or editing a text area), allow
programmatic read access to the current state or value, and allow the same
degree of write access programmatically as is available through the user
interface.
- As part of satisfying provisions one and two of
this checkpoint, implement at least one API according to the API cascade
described in provision two of checkpoint 6.3.
Note: APIs used to satisfy the requirements of this
checkpoint may vary. For instance, they may be independent of a particular
operating environment (e.g., the W3C DOM), or the conventional APIs for a
particular operating environment, or the conventional APIs for programming
languages, plug-ins, or virtual machine
environments. User agent developers are encouraged to implement APIs that allow
assistive technologies to interoperate with multiple types of software in a
given operating environment (e.g., user agents, word processors, and
spreadsheet programs), as this reuse will benefit users and assistive
technology developers. User agents should always follow operating environment
conventions for the use of input and output APIs.
- Provide programmatic notification of changes
to content, states and values of content, user agent user interface controls, selection, content focus, and user interface focus.
- As part of satisfying provision one of this
checkpoint, implement at least one API according to the API cascade of
provision two of checkpoint
6.3.
Note: For instance, provide programmatic notification when
user interaction in one frame causes automatic changes to content in
another.
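One conventional way to surface such notifications, sketched informally in TypeScript below, combines a MutationObserver (for content changes) with focus and selection events. The notify() callback stands in for whatever accessibility API the user agent actually exports; it is a hypothetical hook, not a defined interface.

    // Informative sketch: forwarding change notifications. notify() is a
    // hypothetical hook to the user agent's exported accessibility API.
    type Notify = (what: string, detail: unknown) => void;

    function watchDocument(doc: Document, notify: Notify): void {
      const observer = new MutationObserver((records) => {
        for (const r of records) {
          notify("content-changed", { type: r.type, target: r.target.nodeName });
        }
      });
      observer.observe(doc, {
        subtree: true, childList: true, attributes: true, characterData: true,
      });
      doc.addEventListener("focusin", (e) => notify("content-focus", e.target), true);
      doc.addEventListener("selectionchange", () =>
        notify("selection", doc.getSelection()?.toString()));
    }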
- Implement APIs for the keyboard (@@better defn needed@@) as follows:
Note: An operating environment may define more than one
conventional API for the keyboard. For instance, for Japanese and Chinese,
input may be processed in two stages, with an API for each stage.
- For an API implemented to satisfy
requirements of this document, support the character encodings required for
that API.
Note: Support for character encodings is an important part
of ensuring that text is correctly communicated to assistive technologies. For
example, the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5
requires that the DOMString
type be encoded using UTF-16.
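The distinction matters because a DOMString counts UTF-16 code units, not characters: text outside the Basic Multilingual Plane occupies two code units (a surrogate pair). The short TypeScript example below, using an arbitrary musical symbol, shows how lengths and offsets differ depending on which unit an API counts.

    // "a" + MUSICAL SYMBOL G CLEF (U+1D11E, outside the BMP) + "b"
    const text = "a\u{1D11E}b";
    console.log(text.length);                        // 4  UTF-16 code units
    console.log([...text].length);                   // 3  Unicode code points
    console.log(text.codePointAt(1)?.toString(16));  // "1d11e"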
- For user agents that implement Cascading Style Sheets
(CSS), provide programmatic access to style sheets by
conforming to the CSS module of the W3C Document Object Model
(DOM) Level 2 Style
Specification [DOM2STYLE] and exporting
bindings for the interfaces it defines.
- As part of satisfying provision one of this
checkpoint:
- In the Java and ECMAScript operating environments, export the normative
bindings specified in the CSS module of the DOM Level 2
Style Specification [DOM2STYLE], or
- In other operating environments, the exported bindings (e.g., C++) must be
publicly documented.
- For the purposes of satisfying this checkpoint, Cascading Style Sheets
(CSS) are defined by either CSS Level 1 [CSS1] or CSS Level 2 [CSS2].
- Refer to the "Document Object Model (DOM) Level 2 Style Specification" [DOM2STYLE] for information
about which versions of Java and ECMAScript are covered. Appendix B contains the Java bindings and Appendix C contains the ECMAScript bindings.
- The user agent is not required to export the bindings outside of the user
agent process.
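For orientation, the following TypeScript sketch walks the ECMAScript binding of the DOM Style interfaces (document.styleSheets and CSSStyleSheet.cssRules) to list rule selectors. The cross-origin guard is an assumption about present-day user agent behavior rather than a requirement of [DOM2STYLE].

    // Informative sketch: enumerating style rules through the DOM Style
    // ECMAScript binding. Sheets that cannot be read are skipped.
    function listStyleRuleSelectors(doc: Document): string[] {
      const selectors: string[] = [];
      for (const sheet of Array.from(doc.styleSheets)) {
        try {
          for (const rule of Array.from(sheet.cssRules)) {
            if (rule instanceof CSSStyleRule) {
              selectors.push(rule.selectorText);
            }
          }
        } catch {
          // e.g., a cross-origin sheet whose rules are not readable; skip it.
        }
      }
      return selectors;
    }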
- For APIs implemented to satisfy the
requirements of this document, ensure that programmatic exchanges proceed in a
timely manner.
Note: For example, the programmatic exchange of information
required by other checkpoints in this document should be efficient enough to
prevent information loss, a risk when changes to content or user interface
occur more quickly than the communication of those changes. Timely exchange is
also important for the proper synchronization of alternative renderings. The
techniques for this checkpoint explain how developers can reduce communication
delays. This will help ensure that assistive technologies have timely access to
the document object model and other
information that is important for providing access.
PRINCIPLE 3: @@RENDERING (perceivable)@@
Level A Success Criteria for Guideline 3.1
Level AA Success Criteria for Guideline 3.1
- (No level AA success criteria for Guideline 3.1)
Level AAA Success Criteria for Guideline 3.1
- (No level AAA success criteria for Guideline 3.1)
Level A Success Criteria for Guideline 3.2
- 3.2.1 Alert to Non-Rendered: Users have the option to be alerted to the presence of non-rendered alternative content (e.g., short text alternatives, long descriptions, captions, audio descriptions) for any given piece of rendered content. Note: The rendered content and its non-rendered alternatives constitute the alternative content stack.
- 3.2.2 Browse and Render: The user can browse the alternative content stack and render items according to the following:
- (a) synchronized alternatives for synchronized media (e.g., captions, audio descriptions, sign language) can be rendered at the same time as their associated audio tracks and visual tracks, and
- (b) non-synchronized alternatives (e.g., short text alternatives, long descriptions) can be rendered as replacements for the original piece of content. If the dimensions of the new item differ, then a user option should control whether the dimensions of the original item are used or the dimensions of the new item are used, which will cause the document to reflow accordingly.
- 3.2.3 Available Programmatically: If an item in the alternative content stack is plain text (e.g., short text alternative) then it is available programmatically, even when not rendered.
Level AA Success Criteria for Guideline 3.2
- 3.2.4 Simultaneous Rendering: Users have the option to simultaneously render any and all items from the alternative content stack unless the user agent can recognize a mutual exclusion (e.g. conflicting soundtracks).
- 3.2.5 Configurable Default Rendering: The user can set preferences for which items in an alternative content stack are rendered by default.@@2.9@@
Level AAA Success Criteria for Guideline 3.2
- (No level AAA success criteria for Guideline 3.2)
New Technique 3.2.5: User agents should expose configuration choices in as highly visible a fashion as is practical, such as in a menu entry or settings dialog devoted to accessibility.
Note: Success criteria only apply to recognized images, animations, video, audio, etc.
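As an informative illustration of success criterion 3.2.3, the TypeScript sketch below retrieves an image's short text alternative and long description URI through the DOM even while the image itself is the rendered item. The choice of the img element and its alt/longdesc attributes is just one example of an alternative content stack.

    // Informative sketch: plain-text alternatives stay reachable through
    // the DOM even when they are not the rendered item.
    function textAlternativesFor(img: HTMLImageElement) {
      return {
        alt: img.hasAttribute("alt") ? img.alt : null,  // short text alternative (empty vs. absent)
        longdesc: img.getAttribute("longdesc"),         // URI of a long description, if any
      };
    }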
Level A Success Criteria for Guideline 2.X
- The user can slow the presentation rate
of recognized prerecorded audio and animation content (including video and
animated images), such that both of the following are true:@@4.4@@
- if only an audio track is present, provide at least one setting between 75% and 80% of the
original speed.
- if a visual track is present, provide at
least one setting between 40% and 60% of the original speed.
- when audio and video tracks are synchronized: above 75% of the original speed, maintain synchronization; below 75% the user agent is not required to render the audio track.
- A: Allow the user to stop, pause, and resume
rendered audio and animation content (including
video and animated images) that last three or more seconds at their default
playback rate.@@4.5@@
- A: Allow the user to navigate efficiently
within rendered audio and animations (including video and animated
images) that last three or more seconds at their default playback
rate.@@4.5@@
Level AA Success Criteria for Guideline 2.X
- (No level AA success criteria for Guideline 2.X)
Level AAA Success Criteria for Guideline 2.X
- (No level AAA success criteria for Guideline 2.X)
@@Tech=Provide the user with the ability to toggle whether the base user agent executes content that it is able to execute - if conditional content exists, reveal it (2.3)
@@Tech=Provide the user with the ability to toggle the loading of plugins that execute content that the base browser is unable to execute - if conditional content exists, reveal it (2.3)
3.4 Provide access to relationship information @@NEW 10.1@@
Level A Success Criteria for Guideline 2.X
- 3.3.1 Relationships Available Programmatically: Make explicitly-defined relationships in the content (e.g., labeled_by, table_header_for, etc.) available programmatically. @@NEW 10.1@@
- 3.3.2 Access Relationships: Allow the user to access information from explicitly-defined relationships in the content (e.g., what is form control's label?, what is label's form control?, what is cell's table header?, etc.). @@NEW 10.1@@
Level AA Success Criteria for Guideline 2.X
- 3.3.3 Location in Hierarchy: For content in a hierarchy (e.g., tree node, nested frame), allow the user to view the path of nodes leading from the root to the content.@@NEW 10.1@@
Level AAA Success Criteria for Guideline 2.X
- (No level AAA success criteria for Guideline 2.X)
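The following TypeScript sketch shows, informally, how two explicitly-defined HTML relationships can be resolved programmatically: a form control's label (via the label element's "for" attribute or an enclosing label) and a table cell's header cells (via the cell's "headers" attribute). Other formats and other relationship types would need analogous lookups.

    // Informative sketch: resolving explicitly-defined relationships.
    function labelForControl(doc: Document, control: HTMLElement): string | null {
      const byFor = control.id
        ? doc.querySelector(`label[for="${control.id}"]`)
        : null;
      const label = byFor ?? control.closest("label");   // label[for] or an enclosing label
      return label ? label.textContent : null;
    }

    function headersForCell(doc: Document, cell: HTMLTableCellElement): string[] {
      const ids = (cell.getAttribute("headers") ?? "").split(/\s+/).filter(Boolean);
      return ids
        .map((id) => doc.getElementById(id)?.textContent ?? "")
        .filter((t) => t.length > 0);
    }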
- A:
Users have the option of receiving generated
repair text when the user agent
recognizes that the author has not provided
alternative
content required by the format specification (e.g., short text alternative for image).@@2.7@@
- AAA: Users have the option of receiving generated repair text when the user agent recognizes that the author has provided empty alternative
content for an enabled element. @@2.8@@
3.6 Highlight selection, content
focus, enabled elements, visited links Techniques for checkpoint 10.2
- A. Highlighting options are provided for the following classes of information:
- (a) selection,
- (b) content focus,
- (c) "recognized" enabled elements, and
- (d) recently visited links.@@REM?@@
- A. The highlighting options (with the same configurable range as the platform's conventional selection utilities) include at least:
- (a) foreground colors
- (b) background colors, and
- (c) borders (with configurable color and width).
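One possible, purely informative way to realize such highlighting options is to apply user-priority style rules; the TypeScript sketch below injects them as a style element only for illustration, since a real user agent would apply user preferences through its own cascade rather than through the page. The shape of the prefs parameter is hypothetical.

    // Informative sketch: user-chosen highlight colors and borders,
    // expressed as user-priority style rules. The prefs shape is made up.
    function applyHighlightPreferences(doc: Document, prefs: {
      selectionFg: string; selectionBg: string; focusBorder: string;
    }): void {
      const style = doc.createElement("style");
      style.textContent = `
        ::selection { color: ${prefs.selectionFg} !important;
                      background: ${prefs.selectionBg} !important; }
        :focus      { outline: ${prefs.focusBorder} !important; }
        a:visited   { outline: 1px dotted currentColor; }
      `;
      doc.head.appendChild(style);
    }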
- User has the option so that the user agent only
retrieves content on
explicit user
request.
- This checkpoint only applies when the user agent (not the server)
automatically initiates the request for fresh content. However, the user agent
is not required to satisfy this checkpoint for "client-side redirects," i.e.,
author-specified instructions that a piece of content is temporary and
intermediate, and is replaced by content that results from a second
request.
Note: When the user chooses not to retrieve (fresh) content, the user agent may
ignore that content; buffering is not required.@@WEB2.0 content may completely break@@
Note: For example, if the user agent supports automatic
content retrieval, to ensure that the user does not become disoriented by
sudden automatic changes, allow configurations such as "Never retrieve content
automatically" and "Require confirmation before content retrieval."
- A: User has the option to globally set the following text characteristics, overriding any specified by
the author or user agent defaults:
- text scale (i.e., the general size of text) of visually rendered text content,
- font
family, and
- text color (i.e., foreground and background).
- A: When rendered text is rescaled, preserve
distinctions in the size of rendered text (e.g., headers continue to be larger than body text).
- A: The range of options for each text characteristic includes at
least:
- the range offered by the conventional utility available in the
operating
environment, or
- if no such utility is available, the range supported by the
conventional APIs of the
operating environment for drawing text.
- AAA: The user has the option to constrain the configuration of the default text foreground color, background
color and highlighting colors, so that text contrast is maintained.
@@NEW@@
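To illustrate the "preserve distinctions" requirement above, the TypeScript sketch below rescales every element's computed font size by a single user-chosen factor, so headings remain proportionally larger than body text. Reading all sizes before writing any of them avoids compounding the factor on inherited sizes; this is an informative technique, not the only acceptable one.

    // Informative sketch: rescale rendered text while preserving relative
    // size distinctions between elements.
    function rescaleText(doc: Document, factor: number): void {
      const elements = Array.from(doc.body.querySelectorAll<HTMLElement>("*"));
      // Read every computed size first so earlier writes cannot skew later reads.
      const sizes = elements.map((el) => parseFloat(getComputedStyle(el).fontSize));
      elements.forEach((el, i) => {
        if (!Number.isNaN(sizes[i])) {
          el.style.setProperty("font-size", `${sizes[i] * factor}px`, "important");
        }
      });
    }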
- A: User has the option to globally set the
volume of all rendered audio tracks (including a "mute" setting) through available operating environment mechanisms.@@4.7@@
- A: If the user agent can recognize speech and non-speech audio tracks, then the volume of these two types of audio tracks can be set independently.@@NEW 4.8@@
- A: The speech synthesizer must include the following characteristics, controllable by the user, overriding any values specified by
the author:
@@4.9,4.10@@
- (a) speech
rate and
- (b) speech volume (independently of other sources of audio).
- A: User can set all of the speech characteristics offered by the speech synthesizer, according to the full range of values available, overriding any values specified by
the author:
@@4.11@@
- AA: User can set the following synthesized speech characteristics, overriding any values specified by
the author:
@@4.12@@
- (a) pitch ("pitch" refers to the average frequency of the speaking voice),
- (b) pitch
range ("pitch range" specifies a variation in average frequency), and
- (c) speech stress.
("speech stress" refers to the height of "local peaks" in the intonation contour of the
voice).@@richness deleted since not in CSS3 http://www.w3.org/TR/2004/WD-css3-speech-20041216/@@
- AA: Provide support for all of the following speech features:
- (a) user-defined extensions to the
synthesized speech dictionary,
- (b) "spell-out", where text is spelled
one character at a time, or according to language-dependent pronunciation
rules,
- (c) at least two ways of speaking numerals: one
where numerals are spoken as individual digits, and one where full numbers are
spoken, and
- (d) at least two ways of speaking punctuation:
one where punctuation is spoken literally, and one where punctuation is
rendered as natural pauses.
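For orientation only: the Web Speech API (a separate, later specification, not required by this document) exposes per-utterance rate, pitch, and volume that map onto the user-controllable characteristics listed above. A minimal TypeScript sketch:

    // Informative sketch using the Web Speech API.
    function speak(text: string, prefs: { rate: number; pitch: number; volume: number }): void {
      const u = new SpeechSynthesisUtterance(text);
      u.rate = prefs.rate;     // 0.1 to 10; 1 is the default speaking rate
      u.pitch = prefs.pitch;   // 0 to 2; roughly the average frequency of the voice
      u.volume = prefs.volume; // 0 to 1; independent of other audio sources
      speechSynthesis.speak(u);
    }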
- A: If the author has supplied one or more style sheets, the user has the following options:
- (a) select between the style sheets, or
- (b) turn off the style sheets.
- A: If the user has supplied one or more style sheets, the user has the following options:
- (a) select between the style sheets, or
- (b) turn off the style sheets.
3.12 Help user to use and orient within viewports
Level A Success Criteria for Guideline 5.X
- A: Highlight the viewport with the current focus (including any frame that
takes current focus) using a highlight
mechanism that does not rely on rendered text foreground and background
colors alone (e.g., a thick outline).
- When a viewport's selection changes, the viewport moves as necessary to ensure that the new selection is at least
partially in the viewport.@@5.4@@
- When a viewport's content focus changes, the viewport moves as necessary to ensure that the new content focus is at least
partially in the viewport.@@5.4@@
- User has the option to make graphical viewports resizable, within the limits of the display, overriding any values specified by
the author.
- Graphical viewports must include scrollbars if the rendered content (including after user preferences have been applied) extends beyond the viewport dimensions, overriding any values specified by
the author.
- If the user agent maintains a viewport history mechanism (e.g., via the "back button") that stores previous "viable" states (i.e., that have not been negated by the content, user agent settings or user agent extensions), it must maintain
information about the point of regard and it must restore the saved values when the user returns to a state in the history.
Level AA Success Criteria for Guideline 5.X
Level AAA Success Criteria for Guideline 5.X
- AAA: Indicate the viewport's position relative to rendered content (e.g., the proportion along an audio or video timeline, the
proportion of a Web page before the current position).
3.15 Focus Management
Level A Success Criteria for Guideline 9.X
- Provide at least one content focus for each viewport (including frames), where enabled elements are part of the rendered
content. @@ 9.1@@
- Allow the user to make the content focus of
each viewport the current focus. @@ 9.1@@
- Provide a user interface
focus.@@from 9.2@@
- Ensure user interface focus can navigate within extensions to the user interface "chrome". @@If it knows how to insert and render the extension in its chrome, then it should have good enough programmatic access and knowledge to properly give focus. - Tech XUL spec for FF@@
- User agent notifies any nested user agent(s) that focus has moved into it.
- User agents must be able to retrieve (escape) focus from a nested viewport (including nested viewports that are user agents).
- Embedded user agents are responsible for notifying the embedding user agent that focus should move back to it. @@Embedded user agents must write to AccessAPI and HTML DOM if applicable@@
- Allow the user to move the content focus forward or backward to any enabled element in the viewport.(@@ 9.3, 9.7@@)
- If
the author has not specified a navigation order, default to sequential
navigation, in document order.(@@ 9.3@@)
- User has the option so that the content focus of
a viewport only changes on explicit user request.(@@ 9.3@@)
- User has the option so that moving the content focus to or from an enabled element does not cause the user agent to take any further action.(@@9.5@@)
- Follow operating environment conventions that benefit accessibility when implementing content focus and user interface focus.(@@ 7.1@@)
Level AA Success Criteria for Guideline 9.X
Level AAA Success Criteria for Guideline 9.X
- A: For content authored in text formats, provide a view of the text source.@@2.2@@
- AA: Make available to the user an "outline"
view of rendered content,
composed of labels for important structural elements (e.g., heading text, table
titles, form titles, and other labels that are part of the content).@@10.4@@
- What constitutes a label is defined by each markup language specification.
For example, in HTML, a heading (H1-H6) is a label for the section that
follows it, a CAPTION is a label for a table, and the title attribute is a
label for its element.
- The user agent is not required to generate a label for an important element
when no label is present in content. The user agent may generate a label when
one is not present.
- A label is not required to be text only.
Note: This outline view will provide the user with a
simplified view of content (e.g, a table of contents). For information about
what constitutes the set of important structural elements, see the Note
following checkpoint 9.9. By
making the outline view navigable, it is possible to satisfy this checkpoint
and checkpoint 9.9 together:
allow users to navigate among the important elements of the outline view, and
to navigate from a position in the outline view to the corresponding position
in a full view of content. See checkpoint 9.10 for additional configuration options.
- To
help the user decide whether to traverse a link in content, make available the following
information about it:
- link element content,
- link title,
- whether the link is internal to the resource (e.g., the link is to a target
in the same Web page),
- whether the user has traversed the link recently, and
- information about the type, size, and natural language of linked Web
resources.
- User agents are expected to compute information about recently traversed
links. For the other link information of this checkpoint, the user agent is
only required to make available what is present in content.
- The user agent is not required to compute or make available information
that requires retrieval of linked Web resources.
PRINCIPLE 4. User interface must be operable
Level A Success Criteria for Guideline 1.1
- 3.1.1 Keyboard (user interface "chrome", content display): Users can, through keyboard input alone, navigate to and operate all of the functions included in the user interface (e.g., navigating and selecting content within views, operating the user interface "chrome", installing
and configuring the user agent, and accessing documentation), except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g. freeform drawing). This applies to at least one mechanism per browsing outcome@@DEFINE@@, allowing
non-keyboard accessible mechanisms to remain available (e.g.,
providing resizing with mouse-"handles" and with keystrokes).[ATAG 2.0]
- 3.1.2 Precedence of Keystroke Processing: Document the precedence of keystroke processing between the user agent interface, user agent extensions, content keystroke operations administered by the user agent (e.g., access keys), and executable content (e.g., key press events in scripts, etc.).
- 3.1.3 No Keyboard Trap: If focus can be moved to a component with the keyboard, then at least one of the following is true:
- (a) standard keys: focus can be moved away from the component with the keyboard using standard navigation keys (i.e., unmodified arrow or tab keys), or
- (b) documented non-standard keys: focus can be moved away from the component with non-standard keys and the user is advised of the method.
- 3.1.4 Separate Activation: The user
has the option to have selection separate from activation
(e.g., navigating through the items in a dropdown menu without
activating any of the items).[ATAG 2.0]
- 3.1.5 Available Keystrokes: The user can always determine available binding information in a centralized fashion (e.g., a list of bindings) or a distributed fashion (e.g., by keyboard shortcuts listed in user interface menus) for the following:
@@11.1,11.2@@
- (a) user interface "chrome" and extensions (including any user re-mappings), and
- (b) content keybindings that the user agent can recognize.
- 3.1.6 Standard Text Area Conventions: Views that render text support the standard text area conventions for
the platform including, but not necessarily limited to:
character keys, backspace/delete, insert, "arrow" key
navigation (e.g., "caret" browsing), page up/page down, navigate to start/end, navigate
by paragraph, shift-to-select mechanism, etc. @@7.1@@ [ATAG 2.0]
- 3.1.7 "Chrome" Navigation: Authors can use the keyboard to traverse all of the controls forwards and backwards, including controls in floating toolbars, panels, user agent extensions@@DEFINE@@, etc. using conventions of the platform (e.g., via "tab", "shift-tab", "ctrl-tab", "ctrl-shift-tab").[ATAG 2.0]
Level AA Success Criteria for Guideline A.3.1
- 3.1.8 Accelerator Keys: If any of the following functionalities are implemented by the
user agent, the user must have the option to enable
key-plus-modifier-key (or single-key) access to them:@@11.5@@ [ATAG 2.0]
- (a) move content focus to the next/previous enabled element in document order,
- (b) activate the link designated by the content focus,
- (c) open find function, find again
- (d) increase/decrease the scale of rendered text,
- (e) increase/decrease global volume,
- (f) stop/pause/resume and navigate efficiently audio and animations, including video and animated images,
- (g) next/previous history state (i.e., forward/back),
- (h) enter a URI for a new resource,
- (i) add a URI to favorites (i.e., bookmarked resources),
- (j) view favorites,
- (k) reload a resource,
- (l) interrupt a request to load or reload a resource,
- (m) for graphical viewports@@DEFINE?@@: navigate forward and backward through rendered content by approximately the height of the viewport, and
- (n) for user agents @@Line based user agents? DEFINE@@ that render content in lines of (at least) text: move the point of regard to the next and previous line.
- 3.1.9 Precedence of Keystroke Processing: Keystrokes are processed in the following order: user agent user interface, user agent extensions, content keystroke operations administered by the user agent (e.g., access keys), and executable content (e.g., key press events in scripts, etc.).
- 3.1.10 User override any binding: Allow the user to override any binding that is part of the user agent default input configuration except for conventional bindings for the operating environment (e.g., for access to help). The keyboard combinations offered for rebinding include single key and key plus modifier keys if these are available in the operating environment. @@11.3,11.4@@
Level AAA Success Criteria for Guideline A.3.1
- 3.1.11 Intergroup Navigation: If logical groups of focusable controls (e.g., toolbars, dialogs, labeled groups, panels) are present, users must be able to use the keyboard to navigate to a focusable control in the next and previous groups.[ATAG 2.0]
- 3.1.12 Group Navigation: If logical groups of focusable controls are present, users must be able to use the keyboard to navigate to the first, last, next and previous focusable controls in the current group.[ATAG 2.0]
Level A Success Criteria for Guideline 3.2
- Allow the user to activate, through keyboard input alone, all
input device event handlers (including those for pointing devices, voice, etc.) that are
explicitly associated with the element designated by the content focus.
- User has the option so that moving the content focus to or from an enabled element does not automatically activate any explicitly associated event handlers of any event type. @@moved from 9.5@@
- For the element with content focus, make available the list
of input device event types for which there are event handlers explicitly associated
with the element.@@moved from 9.6@@
- In order to satisfy provision one
of this checkpoint, the user must be able to activate as a group all event
handlers of the same input device event type, for the same control.
Level AA Success Criteria for Guideline 3.2
- (No level AA success criteria for Guideline 3.2)
Level AAA Success Criteria for Guideline 3.2
- (No level AAA success criteria for Guideline 3.2)
- A: 3.4.1 Timing Adjustable: Where time limits for user input are recognized and controllable by the user agent, provide an option to extend the time limit.
4.5 Help users avoid flashing that could cause seizures.
Level A Success Criteria for Guideline A.3.3
Level AA Success Criteria for Guideline A.3.3
- (No level AA success criteria for Guideline A.3.3)
Level AAA Success Criteria for Guideline A.3.3
Level A Success Criteria for Guideline 3.6
Level AA Success Criteria for Guideline 3.6
- 3.6.2 User Profiles (user interface "chrome"): User can save and retrieve multiple sets of user agent preference settings.@@11.6@@
Level AAA Success Criteria for Guideline 3.6
- 3.6.3 Portable Profiles (user interface "chrome"): Sets of preferences are stored as individual files (allowing them to be transmitted electronically). @@NEW@@
- 3.6.4 Preferences Wizard (user interface "chrome"): Users are provided with a "wizard" that helps them configure at least the accessibility-related user agent preferences. @@NEW@@
- For graphical user agent user interfaces with tool bars, allow the user to configure the position of user agent user interface
controls on those tool bars.
- Offer a predefined set of controls that may
be added to or removed from tool bars.
- Allow the user to restore the default tool
bar configuration.
4.8 Document the user interface, including all accessibility features
- A: At least one
version of the documentation is either:
@@12.1@@
- (a) "A" Accessible: Web content and conforms to WCAG 2.0 Level "A" (although it is not necessary
for the documentation to be delivered on-line), or
- (b) Accessible Platform Format: not Web content and conforms to a published accessibility
benchmark that is identified in the conformance
claim (e.g.,
when platform-specific documentation systems are used).
- A: Provide documentation of all user agent
features that benefit accessibility.@@12.2@@
- AA: Provide documentation of changes since the
previous version of the user agent to features that benefit
accessibility.@@12.4@@
- AA: Provide a centralized view of all
features of the user agent that benefit accessibility, in a dedicated section
of the documentation.@@12.5@@
- AAA: Provide context-sensitive help on all user agent
features that benefit accessibility.
- Allow the user to search within rendered (e.g., not hidden with a style) content for text and text alternatives for a sequence
of characters from the document character set.
- Allow the user to start a forward or backward search (in document order) from any selected
or focused location in content.
- When there is a match, do both of the following:
- move the viewport so that the matched text content is at least partially
within it, and
- allow the user to search for the next instance of the text from the
location of the match.
- Alert the user when there is no match or after the last match in content (i.e.,
prior to starting the search over from the beginning of content).
- Provide a case-insensitive search option.
- Provide efficient navigation over important (structural) elements in rendered content.
- As part of satisfying provision one of this
checkpoint, allow forward and backward sequential
navigation.
- User has the option to configure the set of important
elements and attributes (for checkpoints 9.9 and 10.4).
- As part of satisfying provision one of
this checkpoint, allow the user to include and exclude element types in the
set.
Note: For example, allow the user to navigate only
paragraphs, or only headings and paragraphs, or to suppress and restore
navigation bars, or to navigate within and among tables and table cells.
Principle 5: Understandable - Information and the operation of user interface must be understandable
5.1 Help users avoid unnecessary messages
- AA: User has the option not to render non-essential or low-priority text messages, based on priority properties defined by the author (e.g., ignoring ARIA messages marked "polite"). @@NEW@@
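For example, in a hypothetical page that uses ARIA live regions, an author might mark a routine status message "polite" and a time-critical warning "assertive"; a user agent that recognizes these properties could let the user suppress the former while still rendering the latter:
  <div aria-live="polite">Search returned 12 results.</div>
  <div aria-live="assertive">Your session will expire in one minute.</div>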
Level A Success Criteria for Guideline 3.2
- (No level A success criteria for Guideline 3.2)
Level AA Success Criteria for Guideline 3.2
- User has the option to confirm (or cancel) any
form submission that is made on the basis of an action that occurs while content focus is not on the submitting control (e.g., forms that submit when Enter is pressed).@@5.5@@
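As an informative illustration (the form markup below is hypothetical), an HTML form that contains a single text field is commonly submitted when the user presses Enter while that field has content focus, even though the focus is not on a submit control; the option above lets the user confirm or cancel such a submission:
  <form action="/search" method="get">
    <label for="q">Search terms:</label>
    <input type="text" name="q" id="q">
    <!-- Pressing Enter in the text field submits the form even though
         no submit control has the content focus. -->
  </form>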
Level AAA Success Criteria for Guideline 3.2
- (No level AAA success criteria for Guideline 3.2)
Glossary Changes
Base Background: The base background is the background of the content as a whole, such that no content may be layered behind it. In graphics applications, the base background is often referred to as the canvas.
blinking text: text whose visual rendering alternates between visible and invisible at any rate of change.
This glossary is normative. However, some
terms (or parts of explanations of terms) may not have an impact on
conformance.
Note: In this document, glossary terms generally link to
the corresponding entries in this section. These terms are also highlighted
through style sheets and identified as glossary terms through markup.
- Activate
- In this document, the verb "to activate" means (depending
on context) either:
The effect of activation depends on the type of the user interface control. For
instance, when a link is activated, the user agent generally retrieves the
linked Web resource. When a form element is
activated, it may change state (e.g., check boxes) or may take user input
(e.g., a text entry field).
- Alert
- In this document, "to alert" means to make the user aware
of some event, without requiring acknowledgement. For example, the user agent
may alert the user that new content is available on the server by displaying a
text message in the user agent's status bar. See
checkpoint 1.3 for
requirements about alerts.
- Animation
- In this document, an "animation" refers to
content that, when rendered, creates a visual
movement effect automatically (i.e., without explicit user interaction). This
definition of animation includes video and animated images. Animation
techniques include:
- graphically displaying a sequence of snapshots within the same region
(e.g., as is done for video and animated images). The series of snapshots may
be provided by a single resource (e.g., an animated GIF image) or from distinct
resources (e.g., a series of images downloaded continuously by the user
agent).
- scrolling text (e.g., achieved through markup or style sheets).
- displacing graphical objects around the viewport (e.g., a picture of a ball
that is moved around the viewport giving the impression that it is bouncing off
of the viewport edges). For instance, the SMIL 2.0
[SMIL20] animation modules explain
how to create such animation effects in a declarative manner (i.e., not by
composition of successive snapshots).
- Applet
- An applet is a program (generally written in the Java
programming language) that is part of content,
and that the user agent executes.
- Application
Programming Interface (API), conventional input/output/device
API
- An application programming interface
(API) defines how
communication may take place between applications.
Implementing APIs that are independent of a particular operating environment
(as are the W3C DOM Level 2 specifications) may reduce implementation costs for
multi-platform user agents and promote the development of multi-platform
assistive technologies. Implementing conventional APIs for a particular
operating environment may reduce implementation costs for assistive technology
developers who wish to interoperate with more than one piece of software
running on that operating environment.
A "device API" defines how communication may take place
with an input or output device such as a keyboard, mouse, or video card.
In this document, an "input/output API" defines how
applications or devices communicate with a user agent. As used in this
document, input and output APIs include, but are not limited to, device APIs.
Input and output APIs also include more abstract communication interfaces than
those specified by device APIs. A "conventional input/output API" is one that
is expected to be implemented by software running on a particular operating
environment. For example, the conventional input APIs of the
target user agent are for the mouse and
keyboard. For touch screen devices or mobile devices, conventional input
APIs may include stylus, buttons, and voice. The graphical
display and sound card are considered conventional output devices for a
graphical desktop computer environment, and each has an associated
API.
- Assistive technology
- In the context of this document, an assistive technology
is a user agent that:
- relies on services (such as retrieving Web
resources and parsing markup) provided by one or more other "host" user
agents. Assistive technologies communicate data and messages with host user
agents by using and monitoring APIs.
- provides services beyond those offered by the host user agents to meet the
requirements of users with disabilities. Additional services include
alternative renderings (e.g., as synthesized speech or magnified content),
alternative input methods (e.g., voice), additional navigation or orientation
mechanisms, and content transformations (e.g., to make tables more
accessible).
Examples of assistive technologies that are important in the context of this
document include the following:
- screen magnifiers, which are used by people with visual disabilities to
enlarge and change colors on the screen to improve the visual readability of
rendered text and images.
- screen readers, which are used by people who are blind or have reading
disabilities to read textual information through synthesized speech or braille
displays.
- voice recognition software, which may be used by people who have some
physical disabilities.
- alternative keyboards, which are used by people with certain physical
disabilities to simulate the keyboard.
- alternative pointing devices, which are used by people with certain
physical disabilities to simulate mouse pointing and button
activations.
- Beyond this document, assistive technologies consist of
software or hardware that has been specifically designed to assist people with
disabilities in carrying out daily activities. These technologies include
wheelchairs, reading machines, devices for grasping, text telephones, and
vibrating pagers. For example, the following very general definition of
"assistive technology device" comes from the (U.S.) Assistive Technology Act of
1998 [AT1998]:
Any item, piece of equipment, or product system, whether acquired
commercially, modified, or customized, that is used to increase, maintain, or
improve functional capabilities of individuals with
disabilities.
- Attribute
- This document uses the term "attribute" in the XML sense:
an element may have a set of attribute specifications (refer to the XML 1.0
specification [XML] section 3).
- Audio
- In this document, the term "audio" refers to content that
encodes prerecorded sound.
- Audio-only
presentation
- An audio-only presentation is content consisting
exclusively of one or more audio tracks presented
concurrently or in series. Examples of an audio-only presentation include a
musical performance, a radio-style news broadcast, and a narration.
- Audio track
- An audio object is content rendered as sound through an
audio viewport. An audio track is an audio object
that is intended as a whole or partial presentation. An audio track may, but is
not required to, correspond to a single audio channel (left or right audio
channel).
- Audio description
- An audio description (called an "auditory description" in
the Web Content Accessibility Guidelines 1.0
[WCAG10]) is either a prerecorded
human voice or a synthesized voice (recorded or generated dynamically)
describing the key visual elements of a movie or other animation. The audio
description is synchronized with (and possibly included
as part of) the audio track of the presentation, usually
during natural pauses in the audio track. Audio
descriptions include information about actions, body language, graphics, and
scene changes.
- Author styles
- Author styles are style property
values that come from content (e.g., style sheets
within a document, that are associated with a document, or that are generated
by a server).
- Captions
- Captions are text transcripts that are
synchronized with other
audio tracks or visual tracks. Captions convey
information about spoken words and non-spoken sounds such as sound effects.
They benefit people who are deaf or hard-of-hearing, and anyone who cannot hear
the audio (e.g., someone in a noisy environment). Captions are generally
rendered graphically superimposed ("on top of") the
synchronized visual track.
The term "open captions" generally refers to captions that are always
rendered with a visual track; they cannot be turned off. The term "closed
captions" generally refers to captions that may be turned on and off. The
captions requirements of this document assume that the user agent can
recognize the captions as such; see the
section on applicability for more
information.
Note: Other terms that include the word "caption" may have
different meanings in this document. For instance, a "table caption" is a title
for the table, often positioned graphically above or below the table. In this
document, the intended meaning of "caption" will be clear from
context.
- Character encoding
- A "character encoding" is a mapping from a character set
definition to the actual code units used to represent the data. Refer to the
Unicode specification [UNICODE] for more information
about character encodings. Refer to "Character Model for the World Wide Web"
[CHARMOD] for additional
information about characters and character encodings.
- Collated text
transcript
- A collated text transcript is a text
equivalent of a movie or other animation. More specifically, it is the
combination of the text transcript of the
audio track and the text equivalent of
the visual track. For example, a collated
text transcript typically includes segments of spoken dialogue interspersed
with text descriptions of the key visual elements of a presentation (actions,
body language, graphics, and scene changes). See also the definitions of
text transcript and
audio description. Collated text
transcripts are essential for individuals who are deaf-blind.
- alternative content
- alternative content is content that should be made available to users only under certain conditions (e.g., based on user preferences or operating environment limitations). Some examples include:
- The alt attribute of the IMG element in HTML 4 [HTML4].
- OBJECT elements in HTML 4 [HTML4].
- The switch element and test attributes in SMIL 1.0 [SMIL].
- The NOSCRIPT and NOFRAMES elements in HTML 4 [HTML4].
Note: Specifications vary in how completely they define how and when to render alternative content.
alternative content stack: The set of alternative content items for a given position in content.
The items may be mutually exclusive (e.g., regular contrast graphic vs.
high contrast graphic) or non-exclusive (e.g., caption track that can
play at the same time as a sound track).
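The following sketch (file names are hypothetical) shows two common alternative content mechanisms from the examples above: a text alternative supplied through the alt attribute, and NOSCRIPT content rendered only when scripts are not executed:
  <img src="sales-chart.png"
       alt="Bar chart of 2008 sales by region">
  <script type="text/javascript" src="menu.js"></script>
  <noscript>
    <p><a href="sitemap.html">Site map</a> (text-only navigation)</p>
  </noscript>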
- Configure, control, user option
- In the context of this document, the verbs "to control"
and "to configure" share in common the idea of governance such as a user may
exercise over interface layout, user agent behavior, rendering style, and other
parameters required by this document. Generally, the difference in the terms
centers on the idea of persistence. When a user makes a change by
"controlling" a setting, that change usually does not persist beyond that user
session. On the other hand, when a user "configures" a setting, that setting
typically persists into later user sessions. Furthermore, the term "control"
typically means that the change can be made easily (such as through a keyboard
shortcut) and that the results of the change occur immediately. The term
"configure" typically means that making the change requires more time and
effort (such as making the change via a series of menus leading to a dialog
box, or via style sheets or scripts). The results of "configuration" might not
take effect immediately (e.g., due to time spent reinitializing the system,
initiating a new session, or rebooting the system).
In order to be able to configure and control the user agent, the user needs
to be able to "write" as well as "read" values for these parameters.
Configuration settings may be stored in a profile.
The range and granularity of the changes that can be controlled or configured
by the user may depend on limitations of the operating environment or
hardware.
Both configuration and control can apply at different "levels": across
Web resources (i.e., at the user agent
level, or inherited from the operating environment), to the
entirety of a Web resource, or to components of a Web resource (e.g., on a
per-element basis).
A global configuration is one
that applies across elements of the same Web resource, as well as across Web
resources.
User agents may allow users to choose configurations based on various
parameters, such as hardware capabilities or natural language preferences.@@POINT TO NEW CHECKPOINT ON HOW TO SAVE SETTTINGS@@
Note: In this document, the noun "control" refers to a
user interface
control.
- Content
- In this specification, the noun "content" is used in three
ways:
- It is used to mean the document object as a
whole or in parts.
- It is used to mean the content of an HTML or XML element, in the sense
employed by the XML 1.0 specification ([XML], section 3.1): "The text between
the start-tag and end-tag is called the element's content." Context should
indicate that the term content is being used in this sense.
- It is used in the terms non-text content and
text content.
Empty
content (which may be alternative content) is either a
null value or an empty string (i.e., one that is zero characters long). For
instance, in HTML, alt=""
sets the value of the alt
attribute to the empty string. In some markup languages, an element may have
empty content (e.g., the HR
element in HTML).
- Device-independence
- In this document, device-independence refers to the
desirable property that operation of a user agent feature is not bound to only
one input or output device.
- Document object,
Document Object Model
(DOM)
- In general usage, the term "document object" refers to the
user agent's representation of data (e.g., a document). This data generally
comes from the document source, but
may also be generated (e.g., from style sheets, scripts, or transformations),
produced as a result of preferences set within the user agent, or added as the
result of a repair performed automatically by the user agent. Some data that is
part of the document object is routinely rendered (e.g., in HTML, what
appears between the start and end tags of elements and the values of attributes
such as alt, title, and summary). Other
parts of the document object are generally processed by the user agent without
user awareness, such as
DTD- or schema-defined
names of element types and attributes, and other attribute values such as
href and id. Most of the requirements of this
document apply to the document object after its construction. However, a few
checkpoints (e.g., checkpoints 2.7 and
2.10) may affect the construction of the document
object.
- A "document object model" is the abstraction that governs
the construction of the user agent's document object. The document object model
employed by different user agents may vary in implementation and sometimes in
scope. This specification requires that user agents implement the
APIs defined
in Document Object Model (DOM) Level 2 specifications
([DOM2CORE] and
[DOM2STYLE]) for access to
HTML, XML, and CSS
content. These DOM APIs allow authors to access and modify the content via a
scripting language (e.g., JavaScript) in a consistent manner across different
scripting languages.
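As a minimal, informative sketch (element names are hypothetical), a script can use the DOM Level 2 Core API to read and modify the document object after it has been constructed:
  <p id="status">Loading, please wait.</p>
  <script type="text/javascript">
    // Locate a node in the document object and modify it.
    var el = document.getElementById("status");
    el.firstChild.nodeValue = "Loading complete.";   // change the text node
    el.setAttribute("title", "Status message");      // add an attribute
  </script>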
- Document character set
- In this document, a document character set (a concept from
SGML) is a collection of abstract characters that a format specification allows
to appear in an instance of the format. A document character set consists of:
- A "repertoire": A set of abstract characters, such as the Latin letter "A,"
the Cyrillic letter "I," and the Chinese character meaning "water."
- Code positions: A set of integer references to characters in the
repertoire.
For instance, the character set required by the HTML 4 specification
[HTML4] is defined in the Unicode
specification [UNICODE]. Refer to "Character
Model for the World Wide Web" [CHARMOD] for more information
about document character sets.
- Document source,
text
source
- In this document, the term "document source" refers to the
data that the user agent receives as the direct result of a request for a
Web resource (e.g., as the result of an
HTTP/1.1 [RFC2616] "GET", or as the result
of viewing a resource on the local file system). The document source generally
refers to the "payload" of the user agent's request, and does not generally
include information exchanged as part of the transfer protocol. The document
source is data that is prior to any repair by the user agent (e.g., prior to
repairing invalid markup). "Text source" refers to the text portion of
the document source.
- Documentation
- Documentation refers to information that supports the use
of a user agent. This information may be found, for example, in manuals,
installation instructions, the help system, and tutorials. Documentation may be
distributed (e.g., some parts may be delivered on CD-ROM, others on the Web).
See guideline 12
for information about documentation requirements.
- Element, element type
- This document uses the terms "element" and "element type"
primarily in the sense employed by the XML 1.0 specification
([XML], section 3): an element type is
a syntactic construct of a document type definition (DTD) for its application.
This sense is also relevant to structures defined by XML schemas. The document
also uses the term "element" more generally to mean a type of content (such as
video or sound) or a logical construct (such as a header or list).
- Enabled element,
disabled
element
- An enabled element is a piece of content
with associated behaviors that can be activated through the user interface or
through an API. The set
of elements that a user agent enables is generally derived from, but is not
limited to, the set of interactive
elements defined by implemented markup languages.
Some elements may only be enabled elements for part of a user session. For
instance, an element may be disabled by a script as the result of user
interaction. Or, an element may only be enabled during a given time period
(e.g., during part of a SMIL 1.0 [SMIL] presentation). Or, the user
may be viewing content in "read-only" mode, which may disable some
elements.
A disabled element is a piece of content that is potentially an
enabled element, but is not in the current session. One example of a disabled
element is a menu item that is unavailable in the current session; it might be "grayed out" to show that it is disabled. Generally, disabled elements will be
interactive elements that are not
enabled in the current session. This document distinguishes disabled elements
(not currently enabled) from non-interactive elements
(never enabled).
For the requirements of this document, user
selection does not constitute user interaction with enabled elements. See
the definition of content focus.
Note: Enabled and disabled elements come from content; they
are not part of the user agent user
interface.
Note: The term "active element" is not used in this
document since it may suggest several different concepts, including:
interactive element, enabled element, an element "in the process of being
activated" (which is the meaning of :active
in CSS2
[CSS2], for example).
- Equivalent (for content)
- The term "equivalent" is used in this document as it is
used in the Web Content Accessibility Guidelines 1.0
[WCAG10]:
Content is "equivalent" to other content when both fulfill essentially the
same function or purpose upon presentation to the user. In the context of this
document, the equivalent must fulfill essentially the same function for the
person with a disability (at least insofar as is feasible, given the nature of
the disability and the state of technology), as the primary content does for
the person without any disability.
Equivalents include text equivalents
(e.g., text equivalents for images, text transcripts for audio tracks, or
collated text transcripts for a movie) and non-text equivalents (e.g., a
prerecorded audio description of a visual track of a movie, or a sign
language video rendition of a written text).
Each markup language defines its own mechanisms for specifying
alternative content, and these
mechanisms may be used by authors to provide text equivalents. For instance, in
HTML 4 [HTML4] or SMIL 1.0 [SMIL], authors may use the alt attribute to
specify a text equivalent for some elements. In HTML 4, authors may provide
equivalents and other alternative content in attribute values (e.g., the
summary attribute for the TABLE element), in element content (e.g., OBJECT for
external content it specifies, NOFRAMES for frame equivalents, and NOSCRIPT
for script equivalents), and in prose. Please consult the
Web Content Accessibility Guidelines 1.0
[WCAG10] and its associated
Techniques document [WCAG10-TECHS] for more
information about equivalents.
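As an informative sketch (file names are hypothetical), an author might supply a short text equivalent through alt, a longer equivalent through longdesc, and a table summary:
  <img src="orgchart.png" alt="Organization chart"
       longdesc="orgchart-description.html">
  <table summary="Quarterly revenue: one row per region, one column per quarter">
    <!-- table rows omitted -->
  </table>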
- Events and
scripting, event handler, event type
- User agents often perform a task when an event having a
particular "event type" occurs, including user interface events, changes to
content, loading of content, and requests from the operating environment. Some
markup languages allow authors to specify that a script, called an
event handler, be executed when an event of a given type occurs. An
event handler is explicitly associated with an
element when the event handler is associated with that element
through markup or the DOM. The term "event bubbling" describes a
programming style where a single event handler dispatches events to more than
one element. In this case, the event handlers are not explicitly associated
with the elements receiving the events (except for the single element that
dispatches the events).
Note: The combination of HTML, style sheets, the Document
Object Model (DOM), and scripting is commonly referred to as "Dynamic HTML" or DHTML. However, as there is no W3C specification that
formally defines DHTML, this document only refers to event handlers and
scripts.
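The following informative sketch (element names are hypothetical) shows an event handler explicitly associated with an element through markup, and the same association made through the DOM (DOM Level 2 Events); by contrast, a handler attached to an ancestor element that relies on event bubbling would not be explicitly associated with the descendant elements that receive the events:
  <!-- Explicit association through markup: -->
  <a id="details-link" href="details.html"
     onclick="return showDetails();">Product details</a>

  <!-- Explicit association through the DOM: -->
  <script type="text/javascript">
    function showDetails() { return false; }
    document.getElementById("details-link")
            .addEventListener("click", showDetails, false);
  </script>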
- Explicit user request
- In this document, the term "explicit user request" refers
to any user interaction through the user agent user
interface (not through rendered content),
the focus, or the selection. User requests are made, for
example, through user agent user interface
controls and keyboard bindings.
- Some examples of explicit user requests include when the
user selects "New viewport," responds "yes" to a prompt in the user agent's
user interface, configures the user agent to behave in a certain way, or
changes the selection or focus with the keyboard or pointing device.
- Note: Users make mistakes. For example, a
user may inadvertently respond "yes" to a prompt instead of "no." In this
document, this type of mistake is still considered an explicit user
request.
- Focus, content focus,
user interface
focus, current focus
- In this document, the term "content focus" (required by
checkpoint
9.1) refers to a user agent mechanism that has all of the following
properties:
- It designates zero or one element in content
that is either enabled or
disabled. In general, the focus
should only designate enabled elements, but it may also designate disabled
elements.
- It has state, i.e., it may be "set" on an enabled element, programmatically
or through the user interface. Some content specifications (e.g., HTML, CSS)
allow authors to associate behavior with focus set and unset
events.
- Once it has been set, it may be used to trigger other behaviors associated
with the enabled element (e.g., the user may activate a link or change the
state of a form control). These behaviors may be triggered programmatically or
through the user interface (e.g., through keyboard events).
User interface mechanisms may resemble content focus, but do not satisfy all
of the properties. For example, designers of word processing software often
implement a "caret" that indicates the current location of text input or
editing. The caret may have state and may respond to input device events, but
it does not enable users to activate the behaviors associated with enabled
elements.
The user interface focus shares the properties of the content focus except
that, rather than designating pieces of content, it designates zero or one
control of the
user agent user interface
that has associated behaviors (e.g., a radio button, text box, or menu).
On the screen, the user agent may highlight the content focus in a variety of
ways, including through colors, fonts, graphics, and magnification. The user
agent may also highlight the content focus when rendered as synthesized speech,
for example through changes in speech prosody. The
dimensions of the rendered content focus may
exceed those of the viewport.
In this document, each viewport is expected to have at most one content
focus and at most one user interface focus. This document includes requirements
for content focus only, for user interface focus only, and for both. When a
requirement refers to both, the term "focus" is used.
When several viewports coexist, at most one viewport's
content focus or user interface focus responds to input
events; this is called the current focus.
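As an informative sketch (element names are hypothetical), authors can associate behavior with focus set and unset events, and scripts can move the content focus programmatically:
  <a href="help.html" id="help-link"
     onfocus="this.style.outline='3px solid blue';"
     onblur="this.style.outline='';">Help</a>
  <script type="text/javascript">
    // Move the content focus to the link programmatically.
    document.getElementById("help-link").focus();
  </script>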
- Graphical
- In this document, the term "graphical" refers to
information (including text, colors, graphics, images, and animations) rendered
for visual consumption.
- Highlight
- In this document, "to highlight" means to emphasize
through the user interface. For example, user agents highlight which content is
selected or focused. Graphical highlight mechanisms include dotted boxes,
underlining, and reverse video. Synthesized speech highlight mechanisms include
alterations of voice pitch and volume ("speech prosody").
- Image
- This document uses the term "image" to refer (as is
commonly the case) to pictorial content. However, in this
document, the term "image" is limited to static (i.e., unmoving) visual information.
See also the definition of animation.
- Important elements
- This specification intentionally does not identify which "important elements" must be navigable as this will vary by specification. What constitutes "efficient navigation" may depend on a number of factors as well, including the "shape" of content (e.g., sequential navigation of long lists is not efficient) and desired granularity (e.g., among tables, then among the cells of a given table). Refer to the Techniques document [UAAG10-TECHS] for information about identifying and navigating important elements.
- Input configuration
- An input configuration is the set of "bindings" between
user agent functionalities and user interface input mechanisms (e.g.,
menus, buttons, keyboard keys, and voice commands). The default input
configuration is the set of bindings the user finds after installation of the
software; see checkpoint 12.3 for relevant documentation requirements.
Input configurations may be affected by author-specified bindings (e.g.,
through the accesskey attribute of HTML 4
[HTML4]).
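For instance (informative sketch; the key assignments are hypothetical, and the modifier key used with them depends on the operating environment and the user agent), an author can add bindings to the input configuration with the accesskey attribute:
  <a href="index.html" accesskey="h">Home</a>
  <form action="/search" method="get">
    <label for="q" accesskey="s">Search:</label>
    <input type="text" id="q" name="q">
  </form>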
- Interactive element,
non-interactive
element
- An interactive element is a piece of content that, by specification or by programmatic enablement, may have associated behaviors to be executed or carried out as a result of user or programmatic interaction. @@edit the rest@@ For instance, the interactive
elements of HTML 4
[HTML4] include: links, image maps,
form elements, elements with a value for the longdesc attribute, and elements with event handlers
explicitly associated with them (e.g., through the various "on" attributes).
The role of an element as an interactive element is subject to
applicability. A non-interactive
element is an element that, by format specification, does not have associated
behaviors. The expectation of this document is that interactive elements become
enabled elements in some sessions,
and non-interactive elements never become enabled elements.
- Natural language
- Natural language is spoken, written, or signed human
language such as French, Japanese, and American Sign Language. On the Web, the
natural language of content may be specified by markup or HTTP
headers. Some examples include the lang attribute in HTML 4 ([HTML4], section
8.1), the xml:lang attribute in XML 1.0 ([XML], section 2.12), the hreflang
attribute for links in HTML 4 ([HTML4], section 12.1.5), the HTTP
Content-Language header ([RFC2616], section 14.12), and the Accept-Language
request header ([RFC2616], section 14.4). See also
the definition of script.
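An informative sketch combining several of these mechanisms (file names are hypothetical):
  <html lang="en">
    <body>
      <p>The French phrase <span lang="fr">objet trouv&eacute;</span>
         appears in an otherwise English paragraph.</p>
      <a href="rapport.html" hreflang="fr">Rapport annuel (in French)</a>
    </body>
  </html>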
- Normative, informative [WCAG 2.0, ATAG 2.0]
- What is identified as "normative" is required for
conformance (noting that one may
conform in a variety of well-defined ways to this document). What is identified
as "informative" (sometimes, "non-normative") is never required for
conformance.
- Operating environment
- The term "operating environment" refers to the environment
that governs the user agent's operation, whether it is an operating system or a
programming language environment such as Java.
- override
- In this document, the term "override" means that one
configuration or behavior preference prevails over another. Generally, the
requirements of this document involve user preferences prevailing over author
preferences and user agent default settings and behaviors. Preferences may be
multi-valued in general (e.g., the user prefers blue over red or yellow), and
include the special case of two values (e.g., turn on or off blinking text
content).
- placeholder
- A placeholder is content generated by the user agent to
replace author-supplied content. A placeholder may be generated as the result
of a user preference (e.g., to not render images) or as repair content (e.g., when an image
cannot be found). Placeholders can be any type of content, including text,
images, and audio cues. A placeholder should identify the technology of the object whose place it holds. Placeholders appear in the alternative content stack.
- plug-in [ATAG 2.0]
- A plug-in is a program that runs as part of the user agent
and that is not part of content. Users generally
choose to include or exclude plug-ins from their user agent.
- point of regard
- The point of regard is a position in
rendered content that the user is
presumed to be viewing. The dimensions of the point of regard may vary. For
example, it may be a point (e.g., a moment during an audio rendering or a
cursor position in a graphical rendering), or a range of text (e.g., focused
text), or a two-dimensional area (e.g., content rendered through a
two-dimensional graphical viewport). The point of regard is almost always
within the viewport, but it may exceed the spatial or temporal
dimensions of the viewport (see the
definition of rendered content for
more information about viewport dimensions). The point of regard may also refer
to a particular moment in time for content that changes over time (e.g., an
audio-only presentation).
User agents may determine the point of regard in a number of ways, including
based on viewport position in content, content focus, and
selection. The stability of the point of
regard is addressed by guideline 5 and
checkpoint
9.4.
- profile
- A profile is a named and persistent representation of user
preferences that may be used to configure a user agent. Preferences include
input configurations, style preferences, and natural language preferences. In
operating environments with
distinct user accounts, profiles enable users to reconfigure software quickly
when they log on. Users may share their profiles with one another.
Platform-independent profiles are useful for those who use the same user agent
on different platforms.
- prompt [ATAG 2.0]
- Any user agent initiated
request for a decision or piece of information from users.
- properties, values, and
defaults
- A user agent renders a document by applying formatting
algorithms and style information to the document's elements. Formatting depends
on a number of factors, including where the document is rendered: on screen, on
paper, through loudspeakers, on a braille display, or on a mobile device. Style
information (e.g., fonts, colors, and synthesized speech prosody) may come from
the elements themselves (e.g., certain font and phrase elements in HTML), from
style sheets, or from user agent settings. For the purposes of these
guidelines, each formatting or style option is governed by a property and each
property may take one value from a set of legal values. Generally in this
document, the term
"property"
has the meaning defined in CSS 2 ([CSS2], section 3). A reference to "styles" in this document means a set of style-related properties. The value
given to a property by a user agent at installation is called the property's
default value.
- recognize
- Authors encode information in many ways, including in
markup languages, style sheet languages, scripting languages, and protocols.
When the information is encoded in a manner that allows the user agent to
process it with certainty, the user agent can "recognize" the information. For
instance, HTML allows authors to specify a heading with the H1 element, so a
user agent that implements HTML can recognize that content as a
heading. If the author creates a heading using a visual effect alone (e.g.,
just by increasing the font size), then the author has encoded the heading in a
manner that does not allow the user agent to recognize it as a heading.
Some requirements of this document depend on content roles, content
relationships, timing relationships, and other information supplied by the
author. These requirements only apply
when the author has encoded that information in a manner that the user agent
can recognize. See the section on
conformance for more information
about applicability.
In practice, user agents will rely heavily on information that the author
has encoded in a markup language or style sheet language. On the other hand,
behaviors, style, meaning encoded in a script, and
markup in an unfamiliar XML namespace may not be recognized by the user agent
as easily or at all. The Techniques document
[UAAG10-TECHS] lists some
markup known to affect accessibility that user agents can recognize.
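As an informative illustration, the first fragment below encodes a heading in a way the user agent can recognize, while the second produces only the visual effect of a heading:
  <!-- Recognizable as a heading: -->
  <h1>Chapter 1: Getting started</h1>

  <!-- Not recognizable as a heading (visual effect only): -->
  <span style="font-size: 200%; font-weight: bold">Chapter 1: Getting started</span>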
- rendered content,
rendered
text
- Rendered content is the part of content
that the user agent makes available to the user's senses of sight and hearing
(and only those senses for the purposes of this document). Any content that
causes an effect that may be perceived through these senses constitutes
rendered content. This includes text characters, images, style sheets, scripts,
and anything else in content that, once processed, may be perceived through
sight and hearing.
- The term "rendered text" refers to text content
that is rendered in a way that communicates information about the characters
themselves, whether visually or as synthesized speech.
- In the context of this document,
invisible
content is content that is not rendered but that may influence
the graphical rendering (e.g., layout) of other content. Similarly,
silent
content is content that is not rendered but that may influence
the audio rendering of other content. Neither invisible nor silent content is
considered rendered content.
- repair content,
repair
text
- In this document, the term "repair content" refers to
content generated by the user agent in order to correct an error condition. "Repair text" refers to the text portion of repair content.
Some error conditions that may lead to the generation of repair content
include:
- Erroneous or incomplete content (e.g., ill-formed markup, invalid markup,
or missing alternative
content that is required by format specification);
- Missing resources for handling or rendering content (e.g., the user agent
lacks a font family to display some characters, or the user agent does not
implement a particular scripting language).
This document does not require user agents to include repair content in the
document object. Repair content
inserted in the document object should conform to the Web Content Accessibility
Guidelines 1.0 [WCAG10]. For more information
about repair techniques for Web content and software, refer to "Techniques for
Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].
- script
- In this document, the term "script" almost always refers
to a scripting (programming) language used to create dynamic Web content.
However, in checkpoints referring to the written (natural) language of content,
the term "script" is used as in Unicode
[UNICODE] to mean "A collection of
symbols used to represent textual information in one or more writing
systems."
- Information encoded in (programming) scripts may be
difficult for a user agent to recognize. For instance, a
user agent is not expected to recognize that, when executed, a script will
calculate a factorial. The user agent will be able to recognize some
information in a script by virtue of implementing the scripting language or a
known program library (e.g., the user agent is expected to recognize when a
script will open a viewport or retrieve a resource from the Web).
- selection,
current
selection
- In this document, the term "selection" refers to a user
agent mechanism for identifying a (possibly empty) range of
content. Generally, user agents limit the
type of content that may be selected to text content (e.g., one or more
fragments of text). In some user agents, the value of the
selection is constrained by the structure
of the document tree.
On the screen, the selection may be highlighted in a variety of ways, including
through colors, fonts, graphics, and magnification. The selection may also be
highlighted when rendered as synthesized speech, for example through changes in
speech prosody. The dimensions of the rendered selection may exceed those of
the viewport.
The selection may be used for a variety of purposes, including for cut and
paste operations, to designate a specific element in a document for the
purposes of a query, and as an indication of point of regard.
The selection has state, i.e., it may be "set," programmatically or through
the user interface.
In this document, each viewport is expected to have at most one selection.
When several viewports coexist, at most one viewport's
selection responds to input events; this is called the current selection.
See the section on the
Selection label for
information about implementing a selection and
conformance.
Note: Some user agents may also implement a selection for
designating a range of information in the user agent user interface.
The current document only includes requirements for a content
selection mechanism.
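As an informative note, many user agents also expose the current selection to scripts; for example, the widely implemented (though not DOM Level 2) window.getSelection() interface returns the selected range as text:
  <script type="text/javascript">
    // Read the current selection, if any, as a string.
    var selection = window.getSelection();
    if (selection && selection.toString().length > 0) {
      alert("Selected text: " + selection.toString());
    }
  </script>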
- serial access,
sequential navigation
- In this document, the expression "serial access" refers to
one-dimensional access to rendered content.
Some examples of serial access include listening to an audio stream or watching
a video (both of which involve one temporal dimension), or reading a series of
lines of braille one line at a time (one spatial dimension). Many users with
blindness have serial access to content rendered as audio, synthesized speech,
or lines of braille.
The expression "sequential navigation" refers to navigation through an
ordered set of items (e.g., the enabled elements in a document, a
sequence of lines or pages, or a sequence of menu options). Sequential
navigation implies that the user cannot skip directly from one member of the
set to another, in contrast to direct or structured navigation (see
guideline 9 for
information about these types of navigation). Users with blindness or some
users with a physical disability may navigate content sequentially (e.g., by
navigating through links, one by one, in a graphical viewport with or without
the aid of an assistive technology). Sequential navigation is important to
users who cannot scan rendered content visually for context and also benefits
users unfamiliar with content. The increments of sequential navigation may be
determined by a number of factors, including element type (e.g., links only),
content structure (e.g., navigation from heading to heading), and the current
navigation context (e.g., having navigated to a table, allow navigation among
the table cells).
Users with serial access to content or who navigate sequentially may require
more time to access content than users who use direct or structured
navigation.
- support, implement, conform
- In this document, the terms "support," "implement," and "conform" all refer to what a developer has designed a user agent to do, but
they represent different degrees of specificity. A user agent "supports"
general classes of objects, such as "images" or "Japanese." A user agent "implements" a specification (e.g., the PNG and SVG image format specifications
or a particular scripting language), or an API (e.g.,
the DOM API) when it has been programmed to follow all or part of a
specification. A user agent "conforms to" a specification when it implements
the specification and satisfies its conformance criteria.
- synchronize
- In this document, "to synchronize" refers to the act of
time-coordinating two or more presentation components (e.g., a
visual track with captions, or several
tracks in a multimedia presentation). For Web content developers, the
requirement to synchronize means to provide the data that will permit sensible
time-coordinated rendering by a user agent. For example, Web content developers
can ensure that the segments of caption text are neither too long nor too
short, and that they map to segments of the visual track that are appropriate
in length. For user agent developers, the requirement to synchronize means to
present the content in a sensible time-coordinated fashion under a wide range
of circumstances including technology constraints (e.g., small text-only
displays), user limitations (slow reading speeds, large font sizes, high need
for review or repeat functions), and content that is sub-optimal in terms of
accessibility.
- technology (Web content) - or shortened to technology [WCAG 2.0, ATAG 2.0]
- A mechanism for encoding instructions to be rendered, played or executed by user agents. Web Content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences that range from static Web pages to multimedia presentations to dynamic Web applications. Some common examples of Web content technologies include HTML, CSS, SVG, PNG, PDF, Flash, and JavaScript.
- text
- In this document, the term "text" used by itself refers to
a sequence of characters from a markup language's document character set. Refer
to the "Character Model for the World Wide Web"
[CHARMOD] for more information
about text and characters. Note: This document makes use of
other terms that include the word "text" that have highly specialized meanings:
collated text transcript,
non-text content,
text content, non-text element,
text element, text
equivalent, and text transcript.
- text content,
non-text
content, text element,
non-text
element, text
equivalent, non-text equivalent
- As used in this document, a "text element" adds
text characters to either
content or the user interface. Both in the Web
Content Accessibility Guidelines 1.0 [WCAG10] and in this document, text
elements are presumed to produce text that can be understood when rendered
visually, as synthesized speech, or as Braille. Such text elements benefit at
least these three groups of users:
- visually-displayed text benefits users who are deaf and adept in reading
visually-displayed text;
- synthesized speech benefits users who are blind and adept in use of
synthesized speech;
- braille benefits users who are blind, and possibly deaf-blind, and adept at
reading braille.
A text element may consist of both text and non-text data. For instance, a
text element may contain markup for style (e.g., font size or color), structure
(e.g., heading levels), and other semantics. The essential function of the text
element should be retained even if style information happens to be lost in
rendering.
A user agent may have to process a text element in order to have access to
the text characters. For instance, a text element may consist of markup, it may
be encrypted or compressed, or it may include embedded text in a binary format
(e.g., JPEG).
"Text content" is content that is composed of one or more text elements. A "text equivalent" (whether in content or the user interface) is an
equivalent composed of one
or more text elements. Authors generally provide text equivalents for content
by using the alternative
content mechanisms of a specification.
A "non-text element" is an element (in content or the user interface) that
does not have the qualities of a text element. "Non-text content" is composed
of one or more non-text elements. A "non-text equivalent" (whether in content
or the user interface) is an equivalent
composed of one or more non-text elements.
- text decoration
- In this document, a "text decoration" is any stylistic
effect that the user agent may apply to visually rendered text that does not affect the
layout of the document (i.e., does not require reformatting when applied or
removed). Text decoration mechanisms include underline, overline, and
strike-through.
- text format
- Any media object given an Internet media type of "text" (e.g., "text/plain", "text/html", or "text/*") as defined in RFC 2046 [RFC2046], section 4.1, or any media object identified by Internet media type to be an XML document
(as defined in [XML], section 2) or SGML application.
Refer, for example, to Internet media types defined in "XML Media Types" [RFC3023].
- text transcript
- A text transcript is a text equivalent of audio
information (e.g., an audio-only presentation or
the audio track of a movie or other
animation). It provides text for both spoken words and non-spoken sounds such
as sound effects. Text transcripts make audio information accessible to people
who have hearing disabilities and to people who cannot play the audio. Text
transcripts are usually created by hand but may be generated on the fly (e.g.,
by voice-to-text converters). See also the definitions of
captions and collated text
transcripts.
- user agent
- In this document, the term "user agent" is used in two
ways:
- The software and documentation components that together,
conform to the requirements of this
document. This is the most common use of the term in this document and is the
usage in the checkpoints.
- Any software that retrieves and renders Web content for users. This may
include Web browsers, media players, plug-ins,
and other programs — including assistive technologies —
that help in retrieving and rendering Web content.
- user agent default styles
- User agent default styles are style property
values applied in the absence of any author or user styles. Some markup
languages specify a default rendering for content in that markup language;
others do not. For example, XML 1.0
[XML] does not specify default styles
for XML documents. HTML 4
[HTML4] does not specify default
styles for HTML documents, but the CSS 2
[CSS2] specification suggests a sample default style sheet for HTML 4 based on current practice.
- user interface,
user interface
control
- For the purposes of this document, user interface includes
both:
- the user agent user
interface, i.e., the controls (e.g., menus, buttons, prompts, and
other components for input and output) and mechanisms (e.g., selection and
focus) provided by the user agent ("out of the box") that are not created by
content.
- the "content user interface," i.e., the enabled elements that are part of
content, such as form controls, links, and applets.
The document distinguishes them only where required for clarity. For more
information, see the section on
requirements for content, for user
agent features, or both.
The term "user interface control" refers to a component of the user agent
user interface or the content user interface, distinguished where
necessary.
- user styles
- User styles are style property
values that come from user interface settings, user style sheets, or other
user interactions.
- view, viewport
- The user agent renders content through one or more
viewports. Viewports include windows, frames, pieces of paper, loudspeakers,
and virtual magnifying glasses. A viewport may contain another viewport (e.g.,
nested frames). User agent user interface
controls such as prompts, menus, and alerts are not viewports.
Graphical and tactile viewports have two spatial
dimensions. A viewport may also have
temporal dimensions, for instance when audio, speech, animations, and movies
are rendered. When the dimensions (spatial or temporal) of rendered content
exceed the dimensions of the viewport, the user agent provides mechanisms such
as scroll bars and advance and rewind controls so that the user can access the
rendered content "outside" the viewport. Examples include: when the user can
only view a portion of a large document through a small graphical viewport, or
when audio content has already been played.
When several viewports coexist, only one has the current focus at a given moment. This
viewport is highlighted to make it stand out.
User agents may render the same content in a variety of ways; each rendering
is called a view. For instance, a user agent may allow users to view
an entire document or just a list of the document's headers. These are two
different views of the document.
"Top-Level" Viewports are viewports that are not contained within other user agent viewports.
- visual-only
presentation
- A visual-only presentation is content consisting
exclusively of one or more visual tracks presented
concurrently or in series. A silent movie is an example of a visual-only
presentation.
- visual track
- A visual object is content rendered through a graphical
viewport. Visual objects include graphics,
text, and visual portions of movies and other animations. A visual track is a
visual object that is intended as a whole or partial presentation. A visual
track does not necessarily correspond to a single physical object or software
object.
- voice browser
- From "Introduction and Overview of W3C Speech Interface
Framework" [VOICEBROWSER]: "A voice
browser is a device (hardware and software) that interprets voice markup
languages to generate voice output, interpret voice input, and possibly accept
and produce other modalities of input and output."
- web resource
- The term "Web resource" is used in this document in
accordance with Web Characterization Terminology and Definitions Sheet
[WEBCHAR] to mean anything that
can be identified by a Uniform Resource Identifier (URI);
refer to RFC 2396 [RFC2396].
Appendix D: Acknowledgments
Participants active in the UAWG at the time of publication:
- Jim Allan (WG Chair, Texas School for the Blind and Visually Impaired)
- Kelly Ford (Microsoft)
- Cathy Laws (IBM)
- Peter Parente (IBM)
- Jan Richards (Adaptive Technology Resource Centre, University of Toronto)
- Gregory Rosmaita
Other previously active UAWG participants and other contributors to UAAG 2.0:
???.
This document would not have been possible without the work of those who contributed to UAAG 1.0.
This publication has been funded in part with Federal funds from the U.S. Department of Education under contract number ED05CO0039. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.
Appendix E: Checklist
Appendix F: Comparison of UAAG 1.0 guidelines to UAAG 2.0