Why assistive technology matters!
- Legal requirement to comply with current web
accessibility legislation
- Ensuring all your customers can access your
goods and services online - good business sense!
- Users with disabilities use Web sites in ways the developer
  may not expect
  - Output as speech, braille, alternate colours, large size, etc.
  - Input as pointer only, keyboard only, sip-and-puff, wands,
    speech, etc.
- See types of assistive technology products
Assistive technology and Web 2.0
- Native apps and Web 1.0: basic accessibility
  - The operating system exposes an API for connecting
    assistive technology
  - This works effectively with graphical user interfaces built
    from operating-system-supplied components
  - Browsers interface HTML form controls to this API
- Problems introduced by Web 2.0 and scripting
  - UI controls created with a mix of HTML, CSS and JavaScript
  - The browser has no idea of the author's intent (see the
    sketch below)
  - Unusable with assistive technology
- How to get authors to provide cues that browsers can
  interpret to restore a decent level of accessibility?
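
As a minimal sketch of the problem (the markup and class names
are illustrative), consider a checkbox built from a styled div:
it works visually, but the browser sees only a generic element
and can report neither a role nor a checked state to assistive
technology.

    <!-- Looks like a checkbox, but to the browser it is just a div -->
    <div class="checkbox" onclick="toggle(this)">Subscribe</div>

    <script>
      function toggle(el) {
        // Purely visual state change, invisible to the accessibility API
        el.classList.toggle("checked");
      }
    </script>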
ARIA roles, properties and states
"WAI-ARIA, the Accessible Rich Internet
Applications Suite, defines a way to make Web content and Web
applications more accessible to people with disabilities. It
especially helps with dynamic content and advanced user interface
controls developed with Ajax, HTML, JavaScript, and related
technologies."
- Additional markup attributes to provide missing cues
  - Roles for controls, e.g. tree, menubar, slider
  - Static properties, e.g. a reference to the element with the
    control's description
  - Dynamic states, e.g. the current position of a slider
    (see the sketch below)
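
For example (ids, values and the setValue helper are
illustrative), a scripted slider can carry all three kinds of
cue, with the script updating the dynamic state as the thumb
moves:

    <span id="volume-help">Adjusts the playback volume</span>
    <div role="slider" tabindex="0" aria-label="Volume"
         aria-describedby="volume-help"
         aria-valuemin="0" aria-valuemax="100" aria-valuenow="50"></div>

    <script>
      // Keep the dynamic state in sync with the control's actual value
      function setValue(slider, value) {
        slider.setAttribute("aria-valuenow", String(value));
      }
    </script>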
Design choice
- Allow tab to shift focus within a control, e.g. between days
  in a calendar
- Or restrict the tab key to moving between controls, not
  inside them
- Indicate "focus" with visual cues and aria-activedescendant,
  as in the sketch below
Basic principles and how to get started
To create an ARIA widget, follow these steps:
- Pick the widget type (role) from the WAI-ARIA taxonomy
  - WAI-ARIA provides a role taxonomy ([ARIA], Section 3.4)
    covering the most common UI component types; choose the
    role from that table.
- From the role, get the list of supported states and properties
  - Once you have chosen the role of your widget, consult the
    WAI-ARIA specification [ARIA] for the in-depth definition of
    the role and its supported states, properties, and other
    attributes.
- Set the role, states and properties as appropriate
  - Note that the states should change to reflect the current
    state of the control, e.g. aria-valuenow for a slider
These three steps are then repeated for the children of the
parent element, as in the tree sketch below.
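
A hedged sketch of the three steps applied to a simple file
tree (the markup is illustrative): the tree role comes from the
taxonomy, the specification lists the attributes its items
support (e.g. aria-expanded), and the same steps are repeated
for the treeitem children.

    <ul role="tree" aria-label="Files">
      <!-- The steps repeated for each child: role plus current state -->
      <li role="treeitem" aria-expanded="false" tabindex="0">Documents
        <ul role="group">
          <li role="treeitem" tabindex="-1">notes.txt</li>
        </ul>
      </li>
    </ul>
    <!-- Any script that expands a branch must also update its state,
         e.g. set aria-expanded="true" -->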
General UI guidelines
- Try to provide markup that will work if the user has turned
  off scripting
  - Use server-side scripts to compensate (e.g. for form submissions)
  - Web page scripts can change the markup as needed when the
    page loads, including adding all of the ARIA attributes
- Bind keys to common operations consistently
  - Escape: hide the control without performing its action
  - Home/End: move to the start/end of the control's range
  - Page Up/Down: move up/down a chunk, e.g. months in a date picker
  - Arrow keys: move one step in each direction
  - Enter: apply the control's action, hiding the control if appropriate
  - Tab: move between controls (+Shift for backwards)
- Keep the tabbing sequence simple; avoid putting every inner
  element into it
  - tabindex="0" places an element in the tab sequence;
    tabindex="-1" excludes it while leaving it focusable from script
  - Use aria-activedescendant and visual cues for the active item
    within a control (see the sketch below)
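
A minimal sketch combining these guidelines (the menu markup
and ids are illustrative): the script adds the ARIA attributes
when the page loads, so the plain markup still works without
scripting, keeps a single tab stop per control, and binds the
arrow and Home/End keys.

    <ul id="menu"><li>File</li><li>Edit</li><li>View</li></ul>

    <script>
      window.addEventListener("load", function () {
        var menu = document.getElementById("menu");
        menu.setAttribute("role", "menubar");
        var items = Array.prototype.slice.call(menu.children);
        items.forEach(function (item, i) {
          item.setAttribute("role", "menuitem");
          // One tab stop: only the first item is in the tab sequence
          item.setAttribute("tabindex", i === 0 ? "0" : "-1");
        });
        menu.addEventListener("keydown", function (event) {
          var i = items.indexOf(document.activeElement);
          var j = event.key === "ArrowRight" ? i + 1
                : event.key === "ArrowLeft"  ? i - 1
                : event.key === "Home"       ? 0
                : event.key === "End"        ? items.length - 1
                : -1;
          if (j >= 0 && j < items.length) {
            if (i >= 0) items[i].setAttribute("tabindex", "-1");
            items[j].setAttribute("tabindex", "0");
            items[j].focus();
          }
        });
      });
    </script>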
ARIA and MBUI
- ARIA defines roles, properties and states
  - If new roles are defined, the browser code needs to be updated
  - ARIA doesn't define events; I believe the browser watches for
    changes to the DOM, perhaps via listeners for DOM mutation
    events (see the sketch below)
- How to support assistive technology for new controls?
- MBUI defines events at multiple levels of abstraction
- We need to define a taxonomy of such events and align it with
  what authors need for accessibility.
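
One way to observe such state changes in script today is the
DOM's MutationObserver API, the standardized successor to the
deprecated mutation events; this sketch (which assumes a
[role=slider] widget like the one above) simply logs changes to
aria-valuenow, the kind of update that has to be relayed to
assistive technology.

    <script>
      var slider = document.querySelector("[role=slider]");
      new MutationObserver(function (mutations) {
        mutations.forEach(function (m) {
          // A dynamic ARIA state has changed on the widget
          console.log(m.attributeName + " is now " +
                      slider.getAttribute(m.attributeName));
        });
      }).observe(slider, { attributes: true,
                           attributeFilter: ["aria-valuenow"] });
    </script>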
ARIA and MBUI
This also relates to other work in W3C
- EMMA
- proposed work on multi-touch gestures
EMMA is an XML standard for annotating interpretations of user
input. It can be used to relate higher-level events to
lower-level ones, e.g. interpretations of speech and touch
gestures as commands, as in the sketch below.
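
A hedged sketch of such an annotation (the command payload and
values are illustrative): a spoken phrase is interpreted as a
higher-level zoom command, carrying the recognized tokens and a
confidence score.

    <emma:emma version="1.0"
               xmlns:emma="http://www.w3.org/2003/04/emma">
      <!-- Lower-level input: speech; higher-level reading: a command -->
      <emma:interpretation id="int1"
                           emma:medium="acoustic" emma:mode="voice"
                           emma:tokens="zoom in" emma:confidence="0.85">
        <command action="zoom" direction="in"/>
      </emma:interpretation>
    </emma:emma>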
How does EMMA relate to MBUI and Cameleon?
Multi-touch UI has been popularized by the Apple iPhone, but
dates back a long way. The gestures are bound to particular
actions, e.g. pinch to shrink a photo. Can gestures be
patented?
Multimodal interaction and MBUI
Text works with most modes of interaction
- Speech, GUI, Braille, various kinds of input devices
- Text descriptions for audio, video, drawings and images
But free text input can be difficult for some users
- Sip-and-puff, eye movements, or a head-mounted wand
MBUI architecture needs to support text and controlled use of
language
- Text-based models at abstract UI layer
- Text + high-level events for associated dialog models
Standards for interoperable authoring tools at this level?