Mobile Accessibility Examples from UAAG 2.0 Reference

Updated: 25 September 2014

This page lists mobile examples from UAAG 2.0 Reference: Explanations, Examples, and Resources for User Agent Accessibility Guidelines 2.0. It includes the guidelines, success criteria, and intent to provide context for the mobile examples. For background, see the UAAG Overview.

These examples show how web browsers that follow UAAG benefit people with disabilities using the Web on mobile devices.

Browser support is just one aspect of mobile accessibility. W3C WAI's broader work related to mobile accessibility is introduced in Mobile Accessibility.


PRINCIPLE 1: Ensure that the user interface and rendered content are perceivable

Guideline 1.1 - Provide access to alternative content [Guideline 1.1]

 

1.1.2 Indicate Unrendered Alternative Content:

The user can specify that indicators be displayed along with rendered content when recognized unrendered alternative content is present. (Level A)

Mobile Examples for Success Criterion 1.1.2:
  • Brin is deaf. The video player she is using has a button displayed beneath the playing video that indicates that captions are available. She clicks the button to toggle the captions on so she can understand the video. On her mobile phone, Brin touches a video, which displays the controls including the "display caption" control.

1.1.3 Replace Non-Text Content:

The user can request a placeholder that incorporates recognized text alternative content instead of recognized non-text content, until explicit user request to render the non-text content. (Level A)

Mobile Examples for Success Criterion 1.1.3:
  • Ben has low vision and needs to use a very large font size to be able to read text. On his mobile device, enlarging the page makes any images so large that they use up too much screen space and require excessive scrolling. He sets a preference to render all images as text (if available) and to reflow the page so that the text flows smoothly with no space for the missing images.
  • Betty is a low vision user and has difficulty reading text on her mobile device when it is displayed over a background image. Using her user-defined style sheet, she can disable all background images from being rendered in her browser.
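
Betty's technique can be a single rule in a user stylesheet. A minimal sketch, assuming the browser loads user CSS and gives its !important declarations precedence over author styles:

    /* User stylesheet: suppress all author-supplied background images */
    * { background-image: none !important; }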

1.1.4 Provide Configurable Alternative Content Defaults:

The user can specify which type(s) of alternative content to render by default for each type of non-text content, including time based media. (Level AA)

Mobile Examples for Success Criterion 1.1.4:
  • Ben has low vision and keeps his mobile phone browser "zoomed" so he can read the text. Because images can become pixelated when enlarged, he prefers the alternative text. In the mobile settings dialog box, he chooses to always display the alternative ("fallback") content for images and to reflow the page without a placeholder for the image. This saves screen space and reduces the amount of scrolling he has to do.

1.1.5 Facilitate Clear Display of Alternative Content for Time-based Media:

For recognized on-screen alternative content for time-based media (e.g. captions, sign language video), the following are all true: (Level AA)

  • Don't obscure controls: Displaying time-based media alternatives doesn't obscure recognized controls for the primary time-based media.
  • Don't obscure primary media: The user can specify that displaying time-based media alternatives doesn't obscure the primary time-based media.
  • Use configurable text: The user can configure recognized text within time-based media alternatives (e.g. captions) in conformance with 1.4.1.
  • Note: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size to meet this requirement.
Mobile Examples for Success Criterion 1.1.5:
  • Jaime is deaf and prefers to always display captions on her mobile phone. She has set her global settings on the phone to turn on closed captions. All videos displayed on the phone will automatically display captions.
  • Ben has low vision that becomes worse throughout the day as he becomes more tired. He keeps a floating control on his mobile phone that allows one touch access to his configuration so that he can change the font size. The floating control can be easily moved around the screen so it is not in the way of other controls, and it becomes translucent after it is idle for a few seconds.
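
The captions in Jaime's example are typically supplied by the author as a timed text track that the browser's caption control toggles. A minimal HTML5 sketch (file names are hypothetical):

    <!-- A video with an author-supplied captions track; the phone's global
         captions setting or the player's "display caption" control turns it on -->
    <video src="lecture.mp4" controls>
      <track kind="captions" src="lecture-captions.vtt" srclang="en" label="English">
    </video>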

1.1.6 Allow Resize and Reposition of Time-based Media Alternatives:

The user can configure recognized alternative content for time-based media (e.g. captions, sign language video) as follows: (Level AAA)

  • Resize: The user can resize alternative content for time-based media up to the size of the user agent's viewport.
  • Reposition: The user can reposition alternative content for time-based media to two or more of the following: above, below, to the right, to the left, and overlapping the primary time-based media.
  • Note 1: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size or hidden to meet this requirement.
  • Note 2: Implementation may involve displaying alternative content for time-based media in a separate viewport, but this is not required.
Mobile Examples for Success Criterion 1.1.6:
  • Raymond has one functioning hand. He positions captions so that they're not covered by the hand he's using to hold his tablet.
  • Tom is deaf. When Tom watches narrow-aspect video on a wide-aspect screen or in landscape mode on his mobile device, he moves the window displaying sign language interpretation to the side, allowing the primary video to take up the entire height of the screen without the interpretation getting in the way.

Guideline 1.2 - Repair missing content [Guideline 1.2]

Guideline 1.3 - Provide highlighting for selection, keyboard focus, enabled elements, visited links [Guideline 1.3]

1.3.1 Highlighted Items:

The user can specify that the following classes be highlighted so that each is uniquely distinguished: (Level A)

  • Selection
  • Active keyboard focus (indicated by focus cursors and/or text cursors)
  • Recognized enabled input elements (distinguished from disabled elements)
  • Recently visited links
  • Found search results
Mobile Examples for Success Criterion 1.3.1:
  • George has limited hand use and uses custom gestures on his mobile phone. He wants a visible focus indicator to know what element on the page has focus so when gestures are used on the mobile phone, he will know what element will be activated.
  • Brin is deaf. The video player she is using has a button displayed beneath the playing video that indicates that captions are available. She clicks the button to toggle the captions on so she can understand the video. On her mobile phone, Brin touches a video, which displays the controls including the "display caption" control.

Guideline 1.4 - Provide text configuration [Guideline 1.4]

1.4.1 Basic text formatting (Globally):

The user can globally set all of the following characteristics of visually rendered text content: (Level A)

  • Text scale with preserved size distinctions (e.g. keeping headings proportional to main font)
  • Text color and background color, choosing from all platform color options
  • Font family, choosing from all installed fonts
  • Line spacing, choosing from a range with at least three values
Mobile Examples for Success Criterion 1.4.1:
  • Ben has low vision. In the mobile settings dialog box, he chooses a large font size. All applications on the mobile phone then display text in a large font.
  • Sebeeya has low vision. She finds text easiest to read at 16 pt Palatino and chooses to have her browser display body text in the 16 pt Palatino font. She needs the headlines to scale proportionally (e.g. 24 pt) in order to preserve headline prominence.
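
A sketch of how Sebeeya's preferences could be expressed as a user stylesheet, assuming the browser honors user !important rules; sizing headings in em units keeps them proportional to the body text:

    /* User stylesheet with the values from Sebeeya's example; selectors are illustrative */
    body { font-family: Palatino, serif !important; font-size: 16pt !important; }
    h1 { font-size: 1.5em !important; }  /* 1.5 x 16pt = 24pt, preserving headline prominence */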

Guideline 1.5 - Provide volume configuration [Guideline 1.5]

Guideline 1.6 - Provide synthesized speech configuration [Guideline 1.6]

1.6.1 Speech Rate, Volume, and Voice:

If synthesized speech is produced, the user can specify the following: (Level A)

  • Speech rate
  • Volume
  • Voice

1.6.2 Speech Pitch and Range:

If synthesized speech is produced, the user can specify the following if offered by the speech synthesizer: (Level AA)

  • Pitch
  • Pitch range

1.6.3 Advanced Speech Characteristics:

If synthesized speech is produced, the user can adjust all of the speech characteristics provided by the speech synthesizer. (Level AAA)

Mobile Examples for Success Criteria 1.6.1, 1.6.2, and 1.6.3:
  • Jamie is blind. He uses a mobile-based web browser to read a web page. He presses a key to increase the rate at which the information is read back. He also uses the mobile browser in noisy environments such as a crowded subway. With a key press, Jamie quickly increases the volume.

1.6.5 Synthesized Speech Language:

If synthesized speech is produced and more than one language is available, the user can change the language. (Level AA)

Mobile Examples for Success Criterion 1.6.5:
  • Hosea is blind. He speaks Spanish but his instructors only speak English. Hosea keeps a floating control on his mobile device that allows one-touch access to his configuration so he can quickly change the language the speech synthesizer reads. He is reading class-related material on the internet in Spanish, but must refer to an explanatory reference link in English. Because the reference link isn't properly coded with a language attribute, his speech synthesizer doesn't recognize the language change. Hosea uses the floating control to quickly switch to English for the reference, then back to Spanish when he returns to the main article he was reading.
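
The coding gap in Hosea's example is a missing lang attribute on the English link. A minimal sketch (URL and text are hypothetical):

    <!-- Without lang="en" on the link, the synthesizer keeps applying Spanish
         pronunciation rules; with it, the language change is recognizable -->
    <p lang="es">Material de la clase.
      <a lang="en" href="https://example.com/reference">Background reading (in English)</a>
    </p>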

Guideline 1.7 - Enable configuration of user stylesheets [Guideline 1.7]

1.7.1 Support User Stylesheets:

If the user agent supports a mechanism for author stylesheets, the user agent also provides a mechanism for user stylesheets. (Level A)

1.7.2 Apply User Stylesheets:

If user stylesheets are supported, then the user can enable or disable user stylesheets for: (Level A)

1.7.3 Disable Author Stylesheets:

If the user agent supports a mechanism for author stylesheets, the user can disable the use of author stylesheets on the current page. (Level A)

Mobile Examples for Success Criteria 1.7.1, 1.7.2, and 1.7.3:
  • Lee has low vision and finds text easiest to read on her mobile device when it is presented in yellow on a black background. She has configured her browser to override the author stylesheets to always display text in her browser using this color scheme.
  • Mattias has attention deficit hyperactivity disorder (ADHD) and finds text easiest to read if text is highlighted in blue as it is being read out loud on his desktop or mobile device. Both the highlight and text color are configurable and override the author stylesheets so text is readable and has sufficient color contrast.
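
A sketch of a user stylesheet implementing Lee's scheme; per the CSS cascade, user declarations marked !important take precedence over author declarations:

    /* User stylesheet: yellow text on a black background, overriding author styles */
    body, p, li, td, h1, h2, h3 {
      color: yellow !important;
      background-color: black !important;
    }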

1.7.4 Save Copies of Stylesheets:

The user can save copies of the stylesheets referenced by the current page. This allows the user to edit and load the copies as user stylesheets. (Level AA)

Mobile Examples for Success Criterion 1.7.4:
  • Tanya has low vision. She browses to a new website on her mobile phone and finds that the site is not optimized for mobile devices. She alters the stylesheet to provide better layout and larger fonts. The custom settings for the stylesheet are saved and applied when she returns.

Guideline 1.8 - Help users to orient within, and control, windows and viewports [Guideline 1.8]

1.8.2 Move Viewport to Selection and Focus:

When a viewport's selection or input focus changes, the viewport's content moves as necessary to ensure that the new selection or input focus location is at least partially in the visible portion of the viewport. (Level A)

Mobile Examples for Success Criterion 1.8.2:
  • Taja typically views web content on her mobile phone at a high level of zoom. This can frequently position elements outside the viewport, requiring scrolling. When moving between focusable elements, the user agent viewport automatically scrolls to the element currently in focus.

1.8.3 Provide Viewport Scrollbars:

When the rendered content extends beyond the viewport dimensions, users can have graphical viewports include scrollbars, overriding any values specified by the author. (Level A)

Mobile Examples for Success Criterion 1.8.3:
  • Terry has memory issues. She configures her mobile computer so that scrollbars are always on so she can instantly see where she is in a document.
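
One way a user agent could honor this override is a user-level CSS rule; a minimal sketch, assuming user !important rules win over author styles:

    /* Restore scrollbars that an author stylesheet may have suppressed
       (e.g. with overflow: hidden) */
    html, body { overflow: auto !important; }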

1.8.4 Indicate Viewport Position:

The user can determine the viewport's position relative to the full extent of the rendered content. (Level A)

Mobile Examples for Success Criterion 1.8.4:
  • Ally has cognitive issues that make it difficult to orient. When looking at a map on her mobile device, she must frequently zoom in to view her current location or destination and zoom out to put the location into the context of the large map.

1.8.5 Allow Zoom:

The user can rescale content within top-level graphical viewports as follows: (Level A)

Mobile Examples for Success Criterion 1.8.5:
  • Alexandra has low vision. When she views a website on her mobile phone, she scans it at a very small size to guess where she wants to zoom in first. The zoom feature increases the size of both text and images.

1.8.6 Maintain Point of Regard:

The point of regard remains visible and at the same location within the viewport when the viewport is resized, when content is zoomed or scaled, or when content formatting is changed. (Level A)

Mobile Examples for Success Criterion 1.8.6:
  • Xu has a reading disability. He is reading a page with footnotes that are too small to read on his mobile device. Xu scrolls a footnote to the top of the browser viewport, then uses the increase font-size feature to enlarge the text on the page. The footnote stays at the top of the viewport.

1.8.9 Provide Viewport History:

For user agents that implement a history mechanism for top-level viewports (e.g. "back" button), the user can return to any state in the viewport history that is allowed by the content, including: (Level AA)

  1. restored point of regard
  2. input focus
  3. selection, and
  4. user's form field entries

1.8.10 Allow Top-Level Viewport Open on Request:

The user can specify whether author content can open new top-level viewports (e.g. windows or tabs). (Level AA)

1.8.11 Allow Top-Level Viewport Focus Control:

If new top-level viewports (e.g. windows or tabs) are configured to open without explicit user request, the user can specify whether or not top-level viewports take the active keyboard focus when they open. (Level AA)

Mobile Examples for Success Criteria 1.8.9, 1.8.10 & 1.8.11:
  • Ray is blind. His mobile device automatically opens location links and calendar dates found on web pages in native apps available on the device. When he returns to the browser, focus on the original link is maintained.

1.8.16 Provide Web Page Bookmarks:

The user can mark items in a web page, then use shortcuts to navigate back to marked items. The user can specify whether a navigation mark disappears after a session, or is persistent across sessions. (Level AAA)

Mobile Examples for Success Criterion 1.8.16:
  • Jamie is a quadriplegic who uses speech input. She is a professor who reads long documents online and often finds herself comparing different portions of the same document. It is tedious carrying out multiple scrolling commands by speech every time she needs to change to another portion of the document. She sets several bookmarks instead. This allows her to instantly jump among sections, eliminating the time and effort penalties she usually has to pay for slow scrolling. Jamie also uses bookmarks on her mobile phone to cut down on scrolling.

Guideline 1.9 - Provide alternative views [Guideline 1.9]

1.9.1 Outline View:

Users can view a navigable outline of the rendered content that allows focus to be moved to the corresponding element in the main viewport. (Level AA)

Mobile Examples for Success Criterion 1.9.1:
  • George uses a screen reader. He reads long standards documents and uses the headings to navigate quickly so he can compare sections of the standards. George also finds the outline view useful when he is quickly checking a reference on his mobile phone.

1.9.2 Source View:

The user can view all source text that is available to the user agent. (Level AAA)

Mobile Examples for Success Criterion 1.9.2:
  • George is a web developer who uses a screen reader. He visits a web page where the content author failed to provide alt text or a long description for an image he wants to access. As a last resort, George examines the source to see the image's URI, class, and similar attributes. He sees that part of the URI for the image is "home.jpg" and concludes that he can click on that image to return to the home page of the site. George also uses the source view feature on his mobile device when he needs to identify an image.

Guideline 1.10 - Provide element information [Guideline 1.10]

PRINCIPLE 2: Ensure that the user interface is operable

Guideline 2.1 - Ensure full keyboard access [Guideline 2.1]

2.1.1 Provide Full Keyboard Functionality:

All functionality can be operated via the keyboard using sequential or direct keyboard commands that do not require specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g. free hand drawing). This does not forbid and should not discourage providing other input methods in addition to keyboard operation including mouse, touch, gesture and speech. (Level A)

Mobile Examples for Success Criterion 2.1.1:
  • Karen has muscular dystrophy and cannot easily use the onscreen keyboard to navigate web pages on her mobile phone. Instead, she uses simple gestures to move between elements on the page. As focus moves from one element to another, there is a visible focus indicator.

2.1.2 Show Keyboard Focus:

Every viewport has an active or inactive keyboard focus at all times. (Level A)

Mobile Examples for Success Criterion 2.1.2:
  • Jeremy is a speech-input user who cannot use his hands to control his tablet. He opens a web page using a speech command. The web page has a search field, and normally comes up with the keyboard focus in the search field. Jeremy sees the indicator in the search field and knows he does not have to navigate to the search field before saying a search term.
  • Erin has dyslexia, which often causes her to confuse directions. She uses gestures to navigate her mobile phone. As focus moves from one element to another, there is a visible focus indicator, which allows her to find the focus easily.

2.1.4 Separate Selection from Activation:

The user can specify that focus and selection can be moved without causing further changes in focus, selection, or the state of controls, by either the user agent or author-supplied content. (Level A)

Mobile Examples for Success Criterion 2.1.4:
  • Malak is blind. He uses the screen reader on his smartphone to navigate a web page. He selects an item and is able to activate the element using gestures. This requires sufficient screen real estate to perform gestures without changing focus.

2.1.6 Make Keyboard Access Efficient:

The user agent user interface includes mechanisms to make keyboard access more efficient than sequential keyboard access. (Level A)

Mobile Examples for Success Criterion 2.1.6:
  • George is blind and uses gestures on his mobile device to move focus to the top of the page, return to the previous web page, and activate links.

Guideline 2.2 - Provide sequential navigation [Guideline 2.2]

2.2.1 Sequential Navigation Between Elements:

The user can move the keyboard focus backwards and forwards through all recognized enabled elements in the rendered content of the current viewport. (Level A)

Mobile Examples for Success Criterion 2.2.1:
  • George is blind and uses a screen reader on his computer and the speech output and gesture features of his mobile phone. When completing a web form on his phone, he uses the swipe gesture to advance through the form. If George goes past the next form field, or wishes to return to a previous form field, he can use a gesture to go backward.

2.2.3 Default Navigation Order:

If the author has not specified a navigation order, the default sequential navigation order is the document order. (Level A)

Mobile Examples for Success Criterion 2.2.3:
  • Alec is filling out an HTML form. Because the form's author has not specified a navigation order using the tabindex attribute, when Alec presses the Tab key the focus moves to the next control in the order defined in the underlying HTML. This order is logical as long as the author is not using styles to change the visual order. Alec has the same experience completing this form on his mobile phone.
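
A minimal sketch of a form like the one Alec fills out (field names are hypothetical); with no tabindex attributes, sequential navigation follows the order of the controls in the HTML source:

    <!-- No tabindex anywhere, so Tab moves through: name, email, submit -->
    <form action="/signup" method="post">
      <label>Name <input name="name"></label>
      <label>Email <input name="email" type="email"></label>
      <button type="submit">Sign up</button>
    </form>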

2.2.4 Options for Wrapping in Navigation:

The user can request notification when sequential navigation wraps at the beginning or end of a document, and can prevent such wrapping. (Level AA)

Mobile Examples for Success Criterion 2.2.4:
  • Jeff has a mobility impairment. He uses gestures to navigate the page. When he reaches the last active element on the page there is an indicator that the end of the page is reached before changing focus (e.g. wrapping to the top, switching pages).

Guideline 2.3 - Provide direct navigation and activation [Guideline 2.3]

2.3.1 Allow Direct Navigation to Enabled Elements:

The user can move keyboard focus directly to any enabled element in the rendered content. (Level AA)

Mobile Examples for Success Criterion 2.3.1:
  • Mary cannot use the mouse or keyboard due to a repetitive strain injury. She uses speech input with a mouseless browsing plug-in for her browser. She is able to use the same plug-in on her smartphone. The plug-in overlays each link with a number that can then be used to directly select it (e.g. by speaking the command "link 12"). This prevents Mary from having to say "tab" numerous times to select a link.

2.3.2 Allow Direct Activation of Enabled Elements:

The user can, in a single action, move keyboard focus directly to any enabled element in the rendered content and perform an activation action on that element. (Level AA)

Mobile Examples for Success Criterion 2.3.2:
  • Mary cannot use the mouse or keyboard due to a repetitive strain injury. On her mobile phone, Mary uses a single speech command to launch the app, rather than having to use multiple commands to page through screens to find the app icon and activate it.

2.3.3 Present Direct Commands from Rendered Content:

The user can have any recognized direct commands in rendered content (e.g. accesskey, landmark) be presented with their associated elements (e.g. Alt+R to reply to a web email). (Level AA)

Mobile Examples for Success Criterion 2.3.3:
  • Mary cannot use the mouse or keyboard. She uses speech input. When reading email on her tablet, Mary touches a control which opens a toolbar with a setting to display the accesskeys and other direct commands that the author created. She sees that a 3-finger swipe will delete the current email.
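
The accesskey this success criterion refers to is ordinary author markup that the user agent can surface alongside the control. A minimal sketch (the control and binding are illustrative):

    <!-- An author-supplied direct command; a conforming user agent can display
         the resulting binding (e.g. Alt+R, depending on platform) with the button -->
    <button accesskey="r">Reply</button>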

2.3.4 Present Direct Commands in User Interface:

The user can have any direct commands in the UA user interface (e.g. keyboard shortcuts) be presented with their associated user interface controls (e.g. "Ctrl+S" displayed on the "Save" menu item and toolbar button). (Level AA)

Mobile Examples for Success Criterion 2.3.4:
  • Neta has a repetitive strain injury. She relies on gestures and shortcuts to complete tasks. Using a specialized command on her mobile device, she can pull up an overlay of arrows and text showing all the commands that can be completed in that context. This allows her to learn new programs as efficiently as possible, making it less likely she will overtax her hands.

2.3.5 Allow Customized Keyboard Commands:

The user can remap any keyboard shortcut including recognized author supplied shortcuts (e.g. accesskeys) and UA user interface controls, except for conventional bindings for the operating environment (e.g. arrow keys for navigating within menus). (Level AA)

Mobile Examples for Success Criterion 2.3.5:
  • Laura types with one hand. On her mobile device, Laura maps common website actions to numeric shortcut keys. For example, she prefers the 1 key to activate a site's search function. An author of a site Laura visits daily defines "S" as the accesskey for its search function, so Laura overrides the author-specified accesskey of "S" with "1".

Guideline 2.4 - Provide text search [Guideline 2.4]

Guideline 2.5 - Provide structural navigation [Guideline 2.5]

 

2.5.2 Provide Structural Navigation by Heading and within Tables:

The user agent provides at least the following types of structural navigation, where the structure types are recognized: (Level AA)

  • By heading
  • By content sections
  • Within tables
Mobile Examples for Success Criterion 2.5.2:
  • Armand is blind. When he reads a long web page on his iOS smartphone, Armand navigates from heading to heading using the Rotor commands on his phone.

2.5.3 Configure Structural Navigation and Views:

The user can configure which elements are used for structural navigation and outline views. (Level AAA)

Mobile Examples for Success Criterion 2.5.3:
  • Fred is blind and uses a screen reader. When Fred is using his smartphone, he selects a control that allows him to change from navigation by heading to navigation by links.

Guideline 2.6 - Provide access to event handlers [Guideline 2.6]

2.6.1 Allow Access and Activation of Input Methods:

The user agent provides a means for the user to determine recognized input methods explicitly associated with an element, and a means for the user to activate those methods in a modality independent manner. (Level AA)

Mobile Examples for Success Criterion 2.6.1:
  • Ingrid has low vision and cannot easily keep track of the mouse cursor. When navigating a page with a smartphone, she can choose a Bluetooth keyboard or gestures to operate all of the controls within the page.

Guideline 2.7 - Configure and store preference settings [Guideline 2.7]

2.7.1 Allow Persistent Accessibility Settings:

User agent accessibility preference settings persist between sessions. (Level A)

  • Note: User agents may have a public access setting that turns this off.
Mobile Examples for Success Criterion 2.7.1:
  • Betty has low vision. She customizes her mobile browser's color and font settings to make text much easier to read. Her browser incorporates a cloud-based profile so she can retain her settings across her browsing sessions and her desktop and tablet browsers.

2.7.2 Allow Restore All to Default:

The user can restore all preference settings to default values. (Level A)

Mobile Examples for Success Criterion 2.7.2:
  • Kathy has repetitive stress injuries, which make it painful for her to experiment with settings. She accidentally turns on a zoom feature on her smartphone and cannot figure out how to turn it off. She uses gestures to navigate to the preferences menu and selects a command to reset preferences to their defaults.

2.7.3 Allow Multiple Sets of Preference Settings:

The user can save and retrieve multiple sets of user agent preference settings. (Level AA)

Mobile Examples for Success Criterion 2.7.3:
  • Hiroki has low vision. When he is carrying his tablet computer he operates it with the built-in touchscreen. When at his desk he links it to a Bluetooth keyboard and mouse, and redirects the display to a large computer monitor. The browser allows him to quickly switch between different configurations for different environments.

2.7.4 Allow Preference Changes from outside the User Interface:

The user can adjust any preference settings required to meet the User Agent Accessibility Guidelines (UAAG) 2.0 from outside the UA user interface. (Level AAA)

Mobile Examples for Success Criterion 2.7.4:
  • Jan is easily confused by new interfaces. Using the screen reader capabilities on her mobile phone, she makes changes to the updated browser's interface, then can't figure out how to undo them. She uses an app from the browser developer to reset the browser settings to default.

2.7.5 Make Preference Settings Transferable:

The user can transfer all compatible user agent preference settings between devices. (Level AAA)

Mobile Examples for Success Criterion 2.7.5:
  • Betty has low vision and has a highly customized color palette defined in her browser. She saves her customizations to a cloud-based storage service, so her preferences can be transferred to the other desktop and mobile browsers that she uses.

Guideline 2.8 - Customize display of graphical controls [Guideline 2.8]

2.8.1 Customize Display of Controls for User Interface Commands, Functions, and Extensions:

The user can customize which user agent commands, functions, and extensions are displayed within the user agent user interface as follows: (Level AA)

  • Show: The user can choose to display any controls available within the user agent user interface, including user-installed extensions. It is acceptable to limit the total number of controls that are displayed onscreen.
  • Simplify: The user can simplify the default user interface by choosing to display only commands essential for basic operation (e.g. by hiding some controls).
  • Reposition: The user can choose to reposition individual controls within containers (e.g. toolbars or tool palettes), as well as reposition the containers themselves to facilitate physical access (e.g. to minimize hand travel on touch screens, or to facilitate preferred hand access on handheld mobile devices).
  • Assign Activation Keystrokes or Gestures: The user can choose to view, assign or change default keystrokes or gestures used to activate controls.
  • Reset: The user has the option to reset the containers and controls to their default configuration.
Mobile Examples for Success Criterion 2.8.1:
  • Laura has one hand. When she holds her mobile phone in her left hand, she must use her thumb to press the controls. She configures her mobile apps so that the toolbars are at the left side or the bottom, so she can reach them.
  • Linda has rheumatoid arthritis and finds it difficult to perform the pinch gesture that's commonly used to zoom on mobile phones. She changes the default gesture for zooming to a gesture she can more easily do. Linda's left hand is less damaged than her right hand. She moves a common control from the right side of the screen to the left side of the screen to make it easier to access with her left hand.
  • Jennifer is blind. She sometimes configures apps on her friend Linda's mobile phone. When Jennifer picks up Linda's mobile phone, she turns on the built-in screen reader so she can quickly find her way around Linda's phone. When Jennifer is done, she changes the controls back to Linda's original settings.

Guideline 2.9 - Allow time-independent interaction [Guideline 2.9]

Guideline 2.10 - Help users avoid flashing that could cause seizures [Guideline 2.10]

Guideline 2.11 - Provide control of time-based media [Guideline 2.11]

2.11.2 Execution Placeholder:

The user can request a placeholder instead of executable content that would normally be contained within an on-screen area (e.g. Applet, Flash), until explicit user request to execute. (Level A)

Mobile Examples for Success Criterion 2.11.2:
  • Evan has configured his mobile phone so that any audio or video file displays a placeholder with a triangle "play" icon. That allows him to control when the audio or video starts.

Guideline 2.12 - Support other input devices [Guideline 2.12]

2.12.2 Operation With Any Device:

If an input device is supported by the platform, all user agent functionality other than text input can be operated using that device. (Level AA)

Mobile Examples for Success Criterion 2.12.2:
  • Randall has repetitive stress injuries. The web browser on his smart phone allows him to perform most operations using speech commands. Unfortunately, a few features are only available through the touchscreen, which Randall finds painful to use. In the next version of the browser, the remaining features are enabled using speech, and Randall finds the product safer and more convenient to use.

2.12.3 Text Input With Any Device:

If an input device is supported by the platform, all user agent functionality including text input can be operated using that device. (Level AAA)

Mobile Examples for Success Criterion 2.12.3:
  • Randall has a web browser on his smartphone that allows him to perform most operations using speech commands. By offloading the speech recognition to an Internet server, it can perform large-vocabulary speech recognition, so Randall can use his voice to compose email and fill in forms, as well as to control the browser itself.

PRINCIPLE 3: Ensure that the user interface is understandable

Guideline 3.1 - Help users avoid and correct mistakes [Guideline 3.1]

3.1.2 Settings Changes can be Reversed or Confirmed:

If the user agent provides mechanisms for changing its user interface settings, it either allows the user to reverse the setting changes, or the user can require user confirmation to proceed. (Level A)

Mobile Examples for Success Criterion 3.1.2:
  • Davy has moderately low vision. He is adjusting the contrast of the background on his mobile phone when he accidentally selects a white background with the previously selected white text. This causes all the icon labels to disappear. He can see a highlighted rectangle on the screen that usually contains the word "undo" when he makes a change on his phone. He selects that box and the dark background returns, so he can now read the text. He then changes the background to a color with sufficient contrast for comfortable reading.

Guideline 3.2 - Document the user agent user interface including accessibility features [Guideline 3.2]

3.2.2 Describe Accessibility Features:

For each user agent feature that is used to meet UAAG 2.0, at least one of the following is true: (Level A)

  1. Described in the Documentation: Use of the feature is explained in the user agent's documentation; or
  2. Described in the Interface: Use of the feature is explained in the UA user interface; or
  3. Platform Service: The feature is a service provided by an underlying platform; or
  4. Not Used by Users: The feature is not used directly by users (e.g., passing information to a platform accessibility service).
Mobile Examples for Success Criterion 3.2.2:
  • Neta has a repetitive strain injury. She relies on gestures and shortcuts to complete tasks. Using a specialized command, she pulls up a list of all the gesture commands available including descriptions.

3.2.5 Centralized View:

There is a dedicated section of the documentation that presents a view of all features of the user agent necessary to meet the requirements of User Agent Accessibility Guidelines 2.0. (Level AAA)

Mobile Examples for Success Criterion 3.2.5:
  • Bob is blind and uses a screen reader that is part of his phone's operating system. He downloads a new web browser on his mobile phone. The browser's online help includes a section on accessibility that points him to pages on non-visual access, such as interaction with screen readers, helpful hints such as an explanation of the screen layout, and a list of supported touch gestures.

Guideline 3.3 - Make the user agent behave in predictable ways [Guideline 3.3]

 

PRINCIPLE 4: Facilitate programmatic access

Guideline 4.1 - Facilitate programmatic access to assistive technology [Guideline 4.1]

Note: UAAG 2.0 assumes that a platform accessibility API will be built on top of underlying security architectures that will allow user agents to comply with both the success criteria and security needs.

 

4.1.3 Provide Equivalent Accessible Alternatives:

If a component of the UA user interface cannot be exposed through platform accessibility services, then the user agent provides an equivalent alternative that is exposed through the platform accessibility service. (Level A)

Mobile Examples for Success Criterion 4.1.3:
  • Doug uses a mouth stick on his smartphone. He uses the assistive touch option on his mobile phone to control an app for 3D design drawing. The app provides a single, complex control for 3-dimensional manipulation of a virtual object. This custom control cannot be represented in the platform accessibility service, so the app provides Doug the option to achieve the same functionality through an alternate user interface: a panel that adjusts the yaw, spin, and roll independently using arrow keys.

PRINCIPLE 5: Comply with applicable specifications and conventions

Guideline 5.1 - Comply with applicable specifications and conventions [Guideline 5.1]

5.1.3 Implement Accessibility Features of the Platform:

If the user agent contains non-web-based user interfaces, then those user interfaces follow user interface accessibility guidelines for the platform. (Level A)

  • Note: When a requirement of another specification contradicts a requirement of UAAG 2.0, the user agent may disregard the rendering requirement of the other specification and still satisfy this guideline.
Mobile Examples for Success Criterion 5.1.3:
  • Martin uses a mouth stick to control his mobile browser. Even though he cannot use a pinch gesture, he controls the zoom in his mobile browser with a custom gesture. He can do this because the app developer followed the guidance provided in the "Accessibility Programming Guide for iOS".

Appendix A: Glossary

This glossary is normative.

activate
To carry out the behaviors associated with an enabled element in the rendered content or a component of the UA user interface.
alternative content
Web content that user agents can programmatically determine is usable in place of other content that some people are not able to access. Alternative content fulfills essentially the same function or purpose as the original content. There are several general types of alternative content. Note: According to WCAG 2.0, alternative content may or may not be programmatically determinable (e.g. a short description for an image might appear in the image's description attribute or within text near the image). However, UAAG 2.0 adds the programmatically available condition because this is the only type of alternative content that user agents can recognize.
animation
Graphical content rendered to automatically change over time, giving the user a visual perception of movement. Examples include video, animated images, scrolling text, programmatic animation (e.g. moving or replacing rendered objects).
application programming interface (API)
A mechanism that defines how communication may take place between applications.
assistive technology
For the purpose of UAAG 2.0 conformance, assistive technology meets the following criteria:
  1. Relies on services (such as retrieving web resources and parsing markup) provided by one or more host user agents.
  2. Communicates data and messages with host user agents by monitoring and using APIs.
  3. Provides services beyond those offered by the host user agents to meet the requirements of users with disabilities. Additional services include alternative renderings (e.g. as synthesized speech or magnified content), alternative input methods (e.g. voice), additional navigation or orientation mechanisms, and content transformations (e.g. to make tables more accessible).
Examples of assistive technologies that are important in the context of UAAG 2.0 include screen readers, screen magnifiers, speech input software, and alternative keyboards and pointing devices.
audio
The technology of sound transmission. Audio can be created synthetically (including speech synthesis), streamed from a live source (e.g. a radio broadcast), or recorded from real world sounds. There may be multiple audio tracks in a presentation.
audio description
A type of alternative content that takes the form of narration added to the audio to describe important visual details that cannot be understood from the main soundtrack alone. Audio description of video provides information about actions, characters, scene changes, on-screen text, and other visual content. In standard audio description, narration is added during existing pauses in dialogue.
audio track
All or part of the audio portion of a presentation (e.g. each instrument may have a track, or each stereo channel may have a track).
author
A person who works alone or collaboratively to create content (e.g. content author, designer, programmer, publisher, tester).
available printing devices
Printing devices that are identified as available to applications via the platform.
captions
A type of alternative content that takes the form of text presented and synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some countries, the term "subtitle" is used to refer to dialogue only and "captions" is used as the term for dialogue plus sounds and speaker identification. In other countries, "subtitle" (or its translation) is used to refer to both. Note: Other terms that include the word "caption" may have different meanings. For instance, a "table caption" is a title for a table, often positioned graphically above or below the table.
commands
Actions made by users to control the user agent, including keyboard commands, pointer input, gestures, and speech commands.
content (web content)
Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions.
continuous scale
When interacting with a time-based media presentation, a continuous scale allows user (or programmatic) action to set the active playback position to any time point on the presentation time line. The granularity of the positioning is determined by the smallest resolvable time unit in the media timebase.
default
see properties
directly
using a direct command
disabled element
see element
document character set
The internal representation of data in the source content by a user agent.
document object, Document Object Model (DOM)
A platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. Overview of DOM-related materials: http://www.w3.org/DOM/#what.
documentation
Any information that supports the use of a user agent. This information may be provided electronically or otherwise and includes help, manuals, installation instructions, tutorials, etc. Documentation may be accessed in various ways (e.g. as files included in the installation, available on the web).
Note: The level of technical detail in documentation for users should match the technical level of the feature. For example, user documentation for a browser's zoom function should not refer users to the source code repository for that browser.
element
Primarily, a syntactic construct of a document type definition (DTD) for its application. This is the sense employed by the XML 1.0 specification ([XML], section 3). UAAG 2.0 also uses the term "element" more generally to refer to any discrete unit within the content (e.g. a specific image, video, sound, heading, list, or list item).
events and scripting, event handler, event type
User agents often perform a task when an event having a particular "event type" occurs, including a user interface event, a change to content, loading of content, or a request from the operating environment. Some markup languages allow authors to specify that a script, called an event handler, be executed when an event of a given type occurs. An event handler is explicitly associated with an element through scripting, markup or the DOM.
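For example, an event handler associated with an element through markup (the handler name saveDraft is hypothetical):
    <!-- A click event handler explicitly associated with a button through markup -->
    <button onclick="saveDraft()">Save draft</button>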
enabled element
see element
explicit user request
An interaction by the user through the UA user interface, the focus, or the selection. User requests are made, for example, through user agent user interface controls and keyboard commands. Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device. Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." This type of error is still considered an explicit user request.
extended audio description
see audio description
focus, input focus
The location where input will occur if a viewport is active. The active input focus is in the active viewport; an inactive input focus is in an inactive viewport. Focus is typically indicated by a focus cursor.
focus cursor
Visual indicator that highlights a user interface element to show that it has input focus (e.g. the dotted line around a button, outline around a pane, or brightened title bar on a window). Cursors are active when in the active viewport, and inactive when in an inactive viewport.
focusable element
Any element capable of having input focus (e.g. a link, text box, or menu item). In order to be accessible and fully usable, every focusable element should take keyboard focus, and ideally would also take pointer focus.
globally, global configuration
A global setting is one that applies to the entire user agent or all content being rendered by it, rather than to a specific feature within the user agent or a specific document being viewed.
graphical
Information (e.g. text, colors, graphics, images, or animations) rendered for visual consumption.
highlight, highlighted, highlighting
Emphasis indicated through the user interface. For example, user agents highlight content that is selected, focused, or matched by a search operation. Graphical highlight mechanisms include dotted boxes, changed colors or fonts, underlining, adjacent icons, magnification, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume (i.e. speech prosody). User interface items may also be highlighted, for example, a specific set of foreground and background colors for the title bar of the active window. Content that is highlighted may or may not be a selection.
image
Pictorial content that is static (i.e. not moving or changing). Also see animation.
informative (non-normative)
see normative
keyboard
The letter, symbol and command keys or key indicators that allow a user to control a computing device. Assistive technologies have traditionally relied on the keyboard interface as a universal, or modality independent interface. In this document references to keyboard include keyboard emulators and keyboard interfaces that make use of the keyboard's role as a modality independent interface (see Modality Independent Controls). Keyboard emulators and interfaces may be used on devices which do not have a physical keyboard, such as mobile devices based on touchscreen input.
keyboard interface
Keyboard interfaces are programmatic services provided by many platforms that allow operation in a device independent manner. A keyboard interface can allow keystroke input even if particular devices do not contain a hardware keyboard (e.g. a touchscreen-controlled device can have a keyboard interface built into its operating system to support onscreen keyboards as well as external keyboards that may be connected).
Note: Keyboard-operated mouse emulators, such as MouseKeys, do not qualify as operation through a keyboard interface because these emulators use pointing device interfaces, not keyboard interfaces.
keyboard command (keyboard binding, keyboard shortcuts, accesskey, access key, accelerator keys, direct keyboard command)
A key or set of keys that are tied to a particular UI control or application function, allowing the user to navigate to or activate the control or function without traversing any intervening controls (e.g. CTRL+"S" to save a document). It is sometimes useful to distinguish keyboard commands that are associated with controls that are rendered in the current context (e.g. ALT+"D" to move focus to the address bar) from those that may be able to activate program functionality that is not associated with any currently rendered controls (e.g. "F1" to open the Help system). Keyboard commands can be triggered using a physical keyboard or keyboard emulator (e.g. on-screen keyboard or speech recognition). (See Modality Independent Controls). Sequential keyboard commands require multiple keystrokes to carry out an action (e.g. a series of Tab or arrow presses followed by Enter, or a sequence like ALT-F, V to drop down a File menu and choose Print Preview).
non-text content (non-text element, non-text equivalent)
see text
normative, informative (non-normative)
Required (or not required) for conformance. Abilities identified as "normative" are required for conformance (noting that one may conform in a variety of well-defined ways to UAAG 2.0). Abilities identified as "informative" (or, "non-normative") are never required for conformance.
notify
To make the user aware of events or status changes. Notifications can occur within the UA user interface (e.g. a status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g. a confirmation dialog).
obscure
To render a visual element in the same screen space as a second visual element in a way that prevents the second visual element from being visually perceived.
Note: The use of transparent backgrounds for the overlaying visual element (e.g., video captions) is an acceptable technique for reducing obscuration, if space is available.
operating environment
The software environment that governs the user agent's operation, whether it is an operating system or a programming language environment such as Java.
operating system (OS)
Software that supports a device's basic functions, such as scheduling tasks, executing applications, and managing hardware and peripherals.
Note: Many operating systems mediate communication between executing applications and assistive technology via a platform accessibility service.
override
When one configuration or behavior preference prevails over another. Generally, the requirements of UAAG 2.0 involve user preferences prevailing over author preferences and user agent default settings and behaviors. Preferences may be multi-valued in general (e.g. the user prefers blue over red or yellow), and include the special case of two values (e.g. turn on or off blinking text content).
placeholder
Content generated by the user agent to replace author-supplied content. A placeholder may be generated as the result of a user preference (e.g. to not render images) or as repair content (e.g. when an image cannot be found). A placeholder can be any type of content, including text, images, and audio cues. A placeholder should identify the technology of the replaced object.
platform
The software and hardware environment(s) within which the user agent operates. Platforms provide a consistent operational environment. There may be layers of software in a hardware architecture, and each layer may be considered a platform. Non-web-based platforms include desktop operating systems (e.g. Linux, Mac OS, Windows), mobile operating systems (e.g. Android, Blackberry, iOS, Windows Phone), and cross-OS environments (e.g. Java). Web-based platforms are other user agents. User agents may employ server-based processing, such as web content transformations, text-to-speech production, etc.
Note 1: A user agent may include functionality hosted on multiple platforms (e.g. a browser running on the desktop may include server-based pre-processing and web-based documentation).
Note 2: Accessibility guidelines for developers exist for many platforms.
platform accessibility service
A programmatic interface that is engineered to enhance communication between mainstream software applications and assistive technologies (e.g. MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for Mac OS X applications, the GNOME Accessibility Toolkit API for GNOME applications, Java Access for Java applications). On some platforms it may be conventional to enhance communication further by implementing a DOM.
plug-in
see user agent
point of regard
The position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard may vary. For example, it may be a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), a range of text (e.g. focused text), or a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport). The point of regard is almost always within the viewport, but it may exceed the spatial or temporal dimensions of the viewport (see the definition of rendered content for more information about viewport dimensions). The point of regard may also refer to a particular moment in time for content that changes over time (e.g. an audio-only presentation). User agents may determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection.
pointer
see focus cursor
profile
A named and persistent representation of user preferences that may be used to configure a user agent. Preferences include input configurations, style preferences, and natural language preferences. In operating environments with distinct user accounts, profiles enable users to reconfigure software quickly when they log on. Users may share their profiles with one another. Platform-independent profiles are useful for those who use the same user agent on different devices.
programmatically available
Information that is encoded in a way that allows different software, including assistive technologies, to extract and use the information relying on published, supported mechanisms such as platform accessibility services, APIs, or the document object model (DOM). For web-based user interfaces, this means ensuring that the user agent can pass on the information (e.g. through the use of WAI-ARIA). Something is programmatically available if the entity presenting the information does so in a way that is explicit and unambiguous, can be understood without reverse-engineering or complex (and thus potentially fallible) heuristics, and relies only on methods that are published and officially supported by the developers of the software being evaluated.
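For web content, a minimal sketch of information made programmatically available through WAI-ARIA markup (the widget itself is hypothetical; the role and aria-checked attributes are standard WAI-ARIA):
    <!-- The role and state of this custom control are explicit and unambiguous,
         so the user agent can expose them to platform accessibility services -->
    <div role="checkbox" aria-checked="false" tabindex="0">Receive notifications</div>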
prompt
Any user agent-initiated request for a decision or piece of information from a user.
properties, values, and defaults
A user agent renders a document by applying formatting algorithms and style information to the document's elements. Formatting depends on a number of factors, including where the document is rendered (e.g. on screen, on paper, through loudspeakers, on a braille display, on a mobile device). Style information (e.g. fonts, colors, synthesized speech prosody) may come from the elements themselves (e.g. certain font and phrase elements in HTML), from stylesheets, or from user agent settings. For the purposes of these guidelines, each formatting or style option is governed by a property, and each property may take one value from a set of legal values. Generally in UAAG 2.0, the term "property" has the meaning defined in CSS 2.1 Conformance [CSS21]. A reference to "styles" in UAAG 2.0 means a set of style-related properties.
recognize
Information or events that can be identified unambiguously by user agents.
recognized content: Information that is encoded within content in a way that can be unambiguously recognized by user agents. Authors encode information in many ways, including in markup languages, style sheet languages, scripting languages, and protocols. When the information is encoded in a manner that allows the user agent to process it with certainty, the user agent can "recognize" the information. For instance, HTML allows authors to specify a heading with the H1 element, so a user agent that implements HTML can recognize that content as a heading. If the author creates a heading using a visual effect alone (e.g. just by increasing the font size), then the author has encoded the heading in a manner that does not allow the user agent to recognize it as a heading. Some requirements of UAAG 2.0 depend on content roles, content relationships, timing relationships, and other information supplied by the author. These requirements only apply when the author has encoded that information in a manner that the user agent can recognize. See the section on conformance for more information about applicability. User agents will rely heavily on information that the author has encoded in a markup language or style sheet language. Behaviors, style, meaning encoded in a script, and markup in an unfamiliar XML namespace may not be recognized by the user agent as easily or at all.
recognized actions: Actions or events that can be unambiguously identified by a user agent. These can include actions or events initiated by users, scripts, extensions, or other sources. For example, if the keyboard focus is on a web page when the user presses a key, the user agent can recognize the keystroke and act upon it. If the keyboard focus is on an embedded media player when the user presses a key, the host user agent may or may not be able to detect the keystroke, depending on the embedding architecture. Similarly, when the user activates an INPUT element with type="submit", the user agent recognizes this as a form submission action and carries out the proper interchange with the server. However, if a page includes a custom control that looks like a button labeled "Submit" but whose actions are entirely handled by an author-provided script, the user agent cannot recognize the user action as equivalent to a form submission. Actions such as opening a new browser window are always implemented by the user agent, so the action is recognized regardless of whether it was initiated by the user clicking a button or by a script calling a browser function.
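For illustration, a minimal JavaScript sketch of the submit example; the markup and handlers are hypothetical:

  document.body.innerHTML = `
    <form action="/search"><input type="submit" value="Submit"></form>
    <div id="fake-submit" tabindex="0">Submit</div>`;

  // The user agent itself implements submission for the real control,
  // so the action is recognized (observable here as a submit event):
  document.querySelector("form").addEventListener("submit", (event) => {
    event.preventDefault(); // keep this demo from navigating
    console.log("recognized form submission");
  });

  // For the custom control, only the author's script knows what a click
  // "means"; the user agent sees an ordinary click event:
  document.getElementById("fake-submit").addEventListener("click", () => {
    fetch("/search"); // author-defined behavior, not a recognized submission
  });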
reflowable content
Web content that can be arbitrarily wrapped over multiple lines. The primary exceptions to reflowable content are graphics and video.
relative time units
Time intervals for navigating media relative to the current point (e.g. move forward 30 seconds). When interacting with a time-based media presentation, a user may find it beneficial to move forward or backward via a time interval relative to their current position. For example, a user may find a concept unclear in a video lecture and elect to skip back 30 seconds from the current position to review what had been described. Relative time units may be preset by the user agent, configurable by the user, and/or automatically calculated based upon media duration (e.g. jump 5 seconds in a 30-second clip, or 5 minutes in a 60-minute clip). Relative time units are distinct from absolute time values such as the 2-minute mark, the halfway point, or the end.
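For illustration, a minimal JavaScript sketch of a duration-based skip interval; the breakpoints loosely follow the examples above and are assumptions, not UAAG 2.0 requirements:

  // Shorter media get finer-grained skips.
  function relativeSkipSeconds(media) {
    if (media.duration <= 60) return 5;  // e.g. 5 seconds in a 30-second clip
    return media.duration / 12;          // e.g. 5 minutes in a 60-minute clip
  }

  function skipBack(media) {
    media.currentTime = Math.max(0, media.currentTime - relativeSkipSeconds(media));
  }

  // Usage with an HTMLMediaElement (hypothetical source URL):
  const lecture = document.createElement("video");
  lecture.src = "lecture.webm";
  lecture.addEventListener("loadedmetadata", () => skipBack(lecture));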
rendered content
The presentation generated by the user agent based on the author-supplied code. This includes:
rendered text: Text content that is rendered in a way that communicates information about the characters themselves, whether visually or as synthesized speech.
repair content, repair text
Content generated by the user agent to correct an error condition. "Repair text" refers to the text portion of repair content. Error conditions that may lead to the generation of repair content include erroneous or incomplete content (e.g. ill-formed or invalid markup) and missing content (e.g. a required attribute or text alternative that the author did not supply).
Note: UAAG 2.0 does not require user agents to include repair content in the document object. Repair content inserted in the document object should conform to the Web Content Accessibility Guidelines 2.0 [WCAG20]. For more information about repair techniques for web content and software, refer to "Techniques for Authoring Tool Accessibility Guidelines 1.0" [ATAG10-TECHS].
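For illustration, a minimal JavaScript sketch generating repair text for one error condition, an image whose text alternative is missing; the repair string format is a hypothetical choice, not mandated by UAAG 2.0:

  function repairTextForImage(img) {
    if (img.hasAttribute("alt")) return null; // nothing to repair
    // Derive repair text from whatever is recognizable, e.g. the file name.
    const name = img.src.split("/").pop() || "image";
    return "[Image: " + name + "]";
  }

  // A user agent might expose the repair text through its own user
  // interface rather than inserting it into the document object.
  for (const img of document.images) {
    const repair = repairTextForImage(img);
    if (repair !== null) console.log(repair);
  }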
RFC 2119
A publication of the Internet Engineering Task Force (IETF), "Key words for use in RFCs to Indicate Requirement Levels". The key words are "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL". This information is provided for explanation. UAAG 2.0 does not use these terms as defined in RFC 2119.
script
Instructions to create dynamic web content that are written in a programming (scripting) language. In guidelines referring to the written (natural) language of content (as referenced in Unicode [UNICODE]), "script" can also refer to "a collection of symbols used to represent textual information in one or more writing systems". Information encoded in (programming) scripts may be difficult for a user agent to recognize. For instance, a user agent is not expected to recognize that, when executed, a script will calculate a factorial. The user agent will be able to recognize some information in a script by virtue of implementing the scripting language or a known program library (e.g. the user agent is expected to recognize when a script will open a viewport or retrieve a resource from the web).
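For illustration, a minimal JavaScript sketch contrasting script operations a user agent can and cannot recognize:

  // Recognized by virtue of implementing the scripting platform:
  window.open("help.html");  // opens a viewport
  fetch("/data.json");       // retrieves a resource from the web

  // Not recognized: the user agent cannot tell that, when executed,
  // this function computes a factorial.
  function factorial(n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
  }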
selection
A user agent mechanism for identifying a (possibly empty) range of content that will be the implicit source or target for subsequent operations. The selection may be used for a variety of purposes, including for cut-and-paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of point of regard (e.g. the matched results of a search may be automatically selected). The selection should be highlighted in a distinctive manner. On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. When rendered using synthesized speech, the selection may be highlighted through changes in pitch, speed, or prosody.
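For illustration, a minimal JavaScript sketch using the DOM Selection API; the follow-on operation is hypothetical:

  // The current (possibly empty) selection is the implicit operand of
  // the next operation.
  const selection = window.getSelection();

  if (selection.isCollapsed) {
    console.log("empty selection (a caret position only)");
  } else {
    // e.g. hand the selected text to a search or dictionary lookup:
    console.log("operate on: " + selection.toString());
  }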
source text
Text that the user agent renders upon user request to view the source of specific viewport content (e.g. selected content, frame, page).
style properties
Properties whose values determine the presentation (e.g. font, color, size, location, padding, volume, synthesized speech prosody) of content elements as they are rendered (e.g. onscreen, via loudspeaker, via braille display) by user agents. Style properties can have several origins: user agent defaults (values applied in the absence of any other instructions), author styles (set by the author, e.g. via style sheets or inline styles), and user styles (set by the user, e.g. via a user style sheet or user agent settings).
style sheet
A mechanism for communicating style property settings for web content, in which the style property settings are separable from other content resources. This separation allows author style sheets to be toggled or substituted, and user style sheets defined to apply to more than one resource. Style sheet web content technologies include Cascading Style Sheets (CSS) and Extensible Stylesheet Language (XSL).
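For illustration, a minimal JavaScript sketch of this separability: author style sheets are toggled off while a user style is layered on; the user rule shown is a hypothetical preference:

  // Disable every author style sheet (user styles are marked below).
  for (const sheet of document.styleSheets) {
    if (sheet.ownerNode && !sheet.ownerNode.dataset.userStyle) {
      sheet.disabled = true;
    }
  }

  // Apply a user style sheet on top of the (now unstyled) content:
  const userSheet = document.createElement("style");
  userSheet.dataset.userStyle = "true";
  userSheet.textContent = "body { font-size: 150%; background-image: none; }";
  document.head.appendChild(userSheet);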
synchronize
The act of time-coordinating two or more presentation components (e.g. a visual track with captions, several tracks in a multimedia presentation). For authors, the requirement to synchronize means to provide the data that will permit sensible time-coordinated rendering by a user agent. For example, web content developers can ensure that the segments of caption text are neither too long nor too short, and that they map to segments of the visual track that are appropriate in length. For user agent developers, the requirement to synchronize means to present the content in a sensible time-coordinated fashion under a wide range of circumstances including technology constraints (e.g. small text-only displays), user limitations (e.g. slow reading speeds, large font sizes, high need for review or repeat functions), and content that is sub-optimal in terms of accessibility.
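For illustration, a minimal JavaScript sketch that checks whether caption segments are sensibly time-coordinated, using the TextTrack API; the markup and the 1-8 second thresholds are assumptions, not UAAG 2.0 requirements:

  const video = document.querySelector("video");
  const trackEl = video.querySelector('track[kind="captions"]');
  trackEl.track.mode = "hidden"; // fetch the cues without rendering them

  trackEl.addEventListener("load", () => {
    const cues = trackEl.track.cues;
    for (let i = 0; i < cues.length; i++) {
      const seconds = cues[i].endTime - cues[i].startTime;
      if (seconds < 1 || seconds > 8) {
        console.warn("caption segment may be too short or long:", cues[i].text);
      }
    }
  });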
technology (web content technology)
A mechanism for encoding instructions to be rendered, played or executed by user agents. Web content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences that range from static web pages to multimedia presentations to dynamic web applications. Some common examples of web content technologies include HTML, CSS, SVG, PNG, PDF, Flash, and JavaScript.
text
A sequence of characters that are programmatically available, where the sequence is expressing something in human language.
text transcript
A type of alternative content that takes the form of text equivalents of audio information (e.g. an audio-only presentation or the audio track of a movie or other animation). A text transcript provides text for both spoken words and non-spoken sounds such as sound effects. Text transcripts make audio information accessible to people who have hearing disabilities and to people who cannot play the audio. Text transcripts are usually created by hand but may be generated on the fly (e.g. by voice-to-text converters).
top-level viewport
see viewport
user agent
Any software that retrieves, renders, and facilitates end-user interaction with web content. UAAG 2.0 identifies four user agent architectures:
Note: Many web applications retrieve, render, and facilitate interaction with very limited data sets (e.g. online ticket booking). In such cases, WCAG 2.0, without UAAG 2.0, may be appropriate for assessing the application's accessibility.
Examples of software that are generally considered user agents under UAAG 2.0:
Examples of software that are not considered user agents under UAAG 2.0 (in all cases, WCAG 2.0 still applies if the software is web-based):
user agent add-on (add-in, extension, plug-in)
Software installed into a user agent that adds one or more features that modify the behavior of the user agent. Extensions and plug-ins are types of add-ons. Two common capabilities for user agent add-ons are the ability to:
user interface
For the purposes of UAAG 2.0, the user interface includes both the user agent user interface and the content user interface. This document distinguishes UA user interface and content user interface only where required for clarity.
user interface control
A component of the user agent user interface or the content user interface, distinguished where necessary.
video
The technology of moving pictures or images. Video can be made up of animated or photographic images, or both.
view
A user interface function that lets users interact with web content. UAAG 2.0 recognizes a variety of approaches to presenting the content in a view, including:
Note: A view can be visual, audio, or tactile.
viewport
A mechanism for presenting only part of a visual or tactile view to the user via a screen or tactile display. There may be multiple viewports on to the same underlying view (e.g. when a split-screen is used to present the top and bottom of a document simultaneously) and viewports may be nested (e.g. a scrolling frame located within a larger document). When the viewport is smaller than the view it is presenting, some of the view will not be presented. Mechanisms are typically provided to move the view or the viewport such that all of the view can be brought into the viewport (e.g. scrollbars).
Note: In UAAG 1.0 viewports were defined as having a temporal dimension. In UAAG 2.0, this is not the case. Since audio content is inherently time-based, audio viewports are excluded.
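For illustration, a minimal JavaScript sketch of moving viewports programmatically (the scripted counterpart of a scrollbar); the selectors are hypothetical:

  // Bring content outside the viewport into view:
  document.getElementById("section-4").scrollIntoView({ block: "start" });

  // Nested viewports scroll independently, e.g. a scrolling region
  // within the larger document:
  document.querySelector("#sidebar").scrollTop = 0;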
viewport dimensions
The onscreen size of a viewport, or the temporal duration of a viewport displaying time-based media. When the dimensions (spatial or temporal) of rendered content exceed the dimensions of the viewport, the user agent provides mechanisms such as scroll bars and advance and rewind controls so that the user can access the rendered content "outside" the viewport (e.g. when the user can only view a portion of a large document through a small graphical viewport, or when audio content has already been played).
visual-only
Content consisting exclusively of one or more visual tracks presented concurrently or in series (e.g. a silent movie).
visual track
Content rendered through a graphical viewport. Visual objects include graphics, text, and visual portions of movies and other animations. A visual track is a visual object that is intended as a whole or partial presentation. A visual track does not necessarily correspond to a single physical object or software object.
voice browser
A device (hardware and software) that interprets voice markup languages to generate voice output, interpret voice input, and possibly accept and produce other modalities of input and output. Definition from "Introduction and Overview of W3C Speech Interface Framework" [VOICEBROWSER].
web resource
Anything that can be identified by a Uniform Resource Identifier (URI).

Appendix C: References

This section is informative.

For the latest version of any W3C specification, please consult the list of W3C Technical Reports at http://www.w3.org/TR/. Some documents listed below may have been superseded since the publication of UAAG 2.0.

Note: In UAAG 2.0, bracketed labels such as "[WCAG20]" link to the corresponding entries in this section. These labels are also identified as references through markup.

[ATAG10]
"Authoring Tool Accessibility Guidelines 1.0," J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
[ATAG10-TECHS]
"Techniques for Authoring Tool Accessibility Guidelines 1.0," J. Treviranus, C. McCathieNevile, J. Richards, eds., 29 Oct 2002. This W3C Note is http://www.w3.org/TR/2002/NOTE-ATAG10-TECHS-20021029/.
[CHARMOD]
"Character Model for the World Wide Web," M. Dürst and F. Yergeau, eds., 30 April 2002. This W3C Working Draft is http://www.w3.org/TR/2002/WD-charmod-20020430/. The latest version is available at http://www.w3.org/TR/charmod/.
[CSS21]
"Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification," B. Bos, T. Celik, I. Hickson, H. Lie, eds., 07 June 2011. This W3C Recommendation is http://www.w3.org/TR/2011/REC-CSS2-20110607/.
[DOM2HTML]
"Document Object Model (DOM) Level 2 HTML Specification," J. Stenback, P. Le Hégaret, A. Le Hors, eds., 8 November 2002. This W3C Proposed Recommendation is http://www.w3.org/TR/2002/PR-DOM-Level-2-HTML-20021108/. The latest version is available at http://www.w3.org/TR/DOM-Level-2-HTML/.
[HTML4]
"HTML 4.01 Recommendation," D. Raggett, A. Le Hors, and I. Jacobs, eds., 24 December 1999. This W3C Recommendation is http://www.w3.org/TR/1999/REC-html401-19991224/.
[RFC2616]
"Hypertext Transfer Protocol — HTTP/1.1," J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
[RFC3023]
"XML Media Types," M. Murata, S. St. Laurent, D. Kohn, January 2001.
[SMIL]
"Synchronized Multimedia Integration Language (SMIL) 1.0 Specification," P. Hoschka, ed., 15 June 1998. This W3C Recommendation is http://www.w3.org/TR/1998/REC-smil-19980615/.
[SMIL20]
"Synchronized Multimedia Integration Language (SMIL 2.0) Specification," J. Ayars, et al., eds., 7 August 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-smil20-20010807/.
[SVG]
"Scalable Vector Graphics (SVG) 1.0 Specification," J. Ferraiolo, ed., 4 September 2001. This W3C Recommendation is http://www.w3.org/TR/2001/REC-SVG-20010904/.
[UAAG10]
"User Agent Accessibility Guidelines 1.0," I. Jacobs, J. Gunderson, E. Hansen, eds.17 December 2002. This W3C Recommendation is available at http://www.w3.org/TR/2002/REC-UAAG10-20021217/.
[UAAG10-CHECKLIST]
An appendix to UAAG 1.0 lists all of the checkpoints, sorted by priority. The checklist is available in either tabular form or list form.
[UAAG10-ICONS]
Information about UAAG 1.0 conformance icons and their usage is available at http://www.w3.org/WAI/UAAG10-Conformance.
[UAAG10-SUMMARY]
An appendix to UAAG 1.0 provides a summary of the goals and structure of User Agent Accessibility Guidelines 1.0.
[UAAG10-TECHS]
"Techniques for User Agent Accessibility Guidelines 1.0," I. Jacobs, J. Gunderson, E. Hansen, eds. The latest draft of the techniques document is available at http://www.w3.org/TR/UAAG10-TECHS/.
[UNICODE]
The Unicode Consortium. The Unicode Standard, Version 6.1.0 (Mountain View, CA: The Unicode Consortium, 2012. ISBN 978-1-936213-02-3). Available at http://www.unicode.org/versions/Unicode6.1.0/.
[VOICEBROWSER]
"Introduction and Overview of W3C Speech Interface Framework," J. Larson, 4 December 2000. This W3C Working Draft is http://www.w3.org/TR/2000/WD-voice-intro-20001204/. The latest version is available at http://www.w3.org/TR/voice-intro/. UAAG 2.0 includes references to additional W3C specifications about voice browser technology.
[W3CPROCESS]
"World Wide Web Consortium Process Document," I. Jacobs ed. The 19 July 2001 version of the Process Document is http://www.w3.org/Consortium/Process-20010719/. The latest version is available at http://www.w3.org/Consortium/Process/.
[WCAG20]
"Web Content Accessibility Guidelines (WCAG) 2.0" B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 8 December 2008. This W3C Recommendation is http://www.w3.org/TR/2008/REC-WCAG20-20081211/. The latest version is available at http://www.w3.org/TR/WCAG20/. Additional format-specific techniques documents are available from this Recommendation.
[WCAG20-TECHS]
"Techniques for Web Content Accessibility Guidelines 2.0," B. Caldwell, M. Cooper, L. Guarino Reid, G. Vanderheiden, eds., 8 December 2008. This W3C Note is http://www.w3.org/TR/2010/NOTE-WCAG20-TECHS-20101014/. The latest version is available at http://www.w3.org/TR/WCAG20-TECHS/. Additional format-specific techniques documents are available from this Note.
[WCAG-EM]
"Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0" E. Velleman, S. Abou-Zahra, eds., 26 February 2013. This is an informative draft of a Working Group Note. The latest version is available at http://www.w3.org/TR/WCAG-EM/
[WCAG2ICT]
"Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT)," M. Cooper, P. Korn, A. Snow-Weaver, G. Vanderheiden, eds., 5 September 2013. This document is available in an expandable/collapsible alternate version in which the “Intent” sections copied from Understanding WCAG 2.0 are hidden and individually expandable, for easier reading.
[WEBCHAR]
"Web Characterization Terminology and Definitions Sheet," B. Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft that defines some terms to establish a common understanding about key Web concepts. This W3C Working Draft is http://www.w3.org/1999/05/WCA-terms/01.
[XAG10]
"XML Accessibility Guidelines 1.0," D. Dardailler, S. Palmer, C. McCathieNevile, eds., 3 October 2001. This W3C Working Draft is http://www.w3.org/TR/2002/WD-xag-20021003. The latest version is available at http://www.w3.org/TR/xag.
[XML]
"Extensible Markup Language (XML) 1.0 (Second Edition)," T. Bray, J. Paoli, C.M. Sperberg-McQueen, eds., 6 October 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xml-20001006.
[XHTML10]
"XHTML[tm] 1.0: The Extensible HyperText Markup Language," S. Pemberton, et al., 26 January 2000. This W3C Recommendation is http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
[XMLDSIG]
"XML-Signature Syntax and Processing," D. Eastlake, J. Reagle, D. Solo, eds., 12 February 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/.
[XMLENC]
"XML Encryption Syntax and Processing," D. Eastlake, J. Reagle, eds., 10 December 2002. This W3C Recommendation is http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/.