MBUI and accessibility

W3C MBUI Workshop, Rome, May 2010

Dave Raggett <dsr@w3.org>

Stata building photo by See-ming Lee


Why assistive technology matters!

Photo: a disabled boy at a computer

Assistive technology and Web 2.0

ARIA roles, properties and states

"WAI-ARIA, the Accessible Rich Internet Applications Suite, defines a way to make Web content and Web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with Ajax, HTML, JavaScript, and related technologies."

Design choice

Basic principles and how to get started

To create an ARIA widget, follow these steps:

  1. Pick the widget type (role) from the WAI-ARIA taxonomy
    • WAI-ARIA provides a role taxonomy ([ARIA], Section 3.4) covering the most common UI component types. Choose the appropriate role from that table.
  2. From the role, get the list of supported states and properties
    • Once you have chosen the role of your widget, consult the WAI-ARIA specification [ARIA] for an in-depth definition for the role to find the supported states, properties, and other attributes.
  3. Set the role, states and properties as appropriate
    • Note that the states should change to reflect the current state of the control, e.g. aria-valuenow for a slider

These three steps are then repeated for each child element of the widget.
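As an illustration of the three steps, a custom slider might be marked up as follows. This is a sketch: the id, class name, and label are illustrative, and a script would be needed to update aria-valuenow as the user moves the thumb.

```html
<!-- Step 1: role="slider" from the WAI-ARIA taxonomy -->
<!-- Steps 2 and 3: the supported properties and states for that role -->
<div id="volume" class="slider" role="slider" tabindex="0"
     aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="42">
</div>
```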

Example: Rating control

Value for money:

<div id="value" class="rating">
<span title="pick rating">Value for money:</span><br/>
<label><input type="radio" name="rating1" value="1"/>terrible</label><br/>
<label><input type="radio" name="rating1" value="2"/>very poor</label><br/>
<!-- further rating options omitted -->
</div>
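The markup above relies on native radio buttons, which are already accessible. Applying the three ARIA steps, a custom (non-form-control) version of the same widget could look like this. This is a sketch: the ids are illustrative, and a script would be needed to toggle aria-checked and manage keyboard focus as the user picks a rating.

```html
<div id="value" class="rating" role="radiogroup" aria-labelledby="value-label">
  <span id="value-label">Value for money:</span><br/>
  <!-- aria-checked is a state: update it when the selection changes -->
  <div role="radio" aria-checked="false" tabindex="0">terrible</div>
  <div role="radio" aria-checked="false" tabindex="-1">very poor</div>
</div>
```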

General UI guidelines

What do most developers do?

Image: "The State of Web Development 2010" survey results, with thanks to webdirections.org



This also relates to other work in W3C

  1. EMMA
  2. proposed work on multi-touch gestures

EMMA is a W3C standard for annotating interpretations of user input, expressed in XML. It can be used to relate higher-level events to lower-level ones, e.g. interpretations of speech and touch gestures as commands.
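For instance, a spoken utterance interpreted as a command might be annotated along these lines (a sketch: the application payload and attribute values are illustrative, while the emma:* elements and attributes follow the EMMA 1.0 vocabulary):

```xml
<emma:emma version="1.0"
    xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- one interpretation of the low-level input, with its confidence -->
  <emma:interpretation id="int1"
      emma:medium="acoustic" emma:mode="voice"
      emma:confidence="0.8" emma:tokens="zoom in">
    <!-- the higher-level command derived from the utterance -->
    <command action="zoom" direction="in"/>
  </emma:interpretation>
</emma:emma>
```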

How does EMMA relate to MBUI and Cameleon?

Multi-touch UI has been popularized by the Apple iPhone, but dates back a long way. The gestures are bound to particular actions, e.g. squeeze to shrink a photo. Can gestures be patented?

Multimodal interaction and MBUI

Text works with most modes of interaction

But free text input can be difficult for some users

MBUI architecture needs to support text and controlled use of language

Standards for interoperable authoring tools at this level?