simultaneous modalities

Travel reservations over phone/PDA/WAP

picture

III.2 Who you gonna call?

WG

The WG was mostly a spin-off of the Voice Browser WG, born of the realisation that voice interaction with computers isn't just for call centres: it can now happen on the device itself. And once it does, why limit ourselves to speech? How about ink, gestures, many input modes used concurrently or sequentially? How about dynamic modality switching driven by environmental conditions? And so on.
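As a loose illustration of what "dynamic modality switching driven by environmental conditions" could mean in practice, here is a minimal sketch; the type names, properties and thresholds are invented for this example and are not taken from any WG deliverable.

```typescript
// Hypothetical sketch: pick an input modality from what the device offers
// and what the environment currently allows. Names and thresholds are
// illustrative only, not from any W3C specification.

type Modality = "speech" | "ink" | "keypad";

interface Environment {
  ambientNoiseDb: number; // measured background noise
  handsFree: boolean;     // e.g. the user is driving
  hasStylus: boolean;     // the device offers a pen/ink surface
}

function chooseInputModality(env: Environment): Modality {
  if (env.handsFree) {
    // Hands busy: speech is the only realistic choice, noisy or not.
    return "speech";
  }
  if (env.ambientNoiseDb > 70) {
    // Too noisy for reliable recognition: fall back to ink or keys.
    return env.hasStylus ? "ink" : "keypad";
  }
  return "speech";
}

// Example: a quiet office with a stylus-equipped PDA
console.log(chooseInputModality({ ambientNoiseDb: 40, handsFree: false, hasStylus: true }));
// -> "speech"
```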

III.3 How do they do that?

The Framework

picture

What to standardise?

Q: But why standardise things that are happening inside the browser?

A1: browsers are plurilithic

A2: you can't detach interaction from the application
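One way to read "plurilithic" (again a hypothetical sketch, not anything lifted from the framework documents): a multimodal browser is assembled from independently sourced components, say a speech recogniser, an ink recogniser and a GUI, all feeding one interaction manager, so the seams between those components are exactly where a standard interface pays off. The interface and class names below are invented for illustration.

```typescript
// Hypothetical sketch of a "plurilithic" browser: independent modality
// components report user input through one shared interface, so any of
// them can be swapped out without touching the application.

interface InputEvent {
  modality: string;       // which component produced this
  interpretation: string; // what the user meant, in application terms
  confidence: number;     // 0..1, the recogniser's own estimate
}

interface InputComponent {
  name: string;
  listen(onInput: (ev: InputEvent) => void): void;
}

class InteractionManager {
  private components: InputComponent[] = [];

  register(c: InputComponent): void {
    this.components.push(c);
    c.listen((ev) => this.handle(ev));
  }

  private handle(ev: InputEvent): void {
    // The application only ever sees events in this common shape,
    // whether they came from speech, ink or a keypad.
    console.log(`[${ev.modality}] ${ev.interpretation} (${ev.confidence})`);
  }
}
```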

Deeper into the framework

picture

Deeper still: output

picture

Deeper still: input

picture

Work items

Now that we've constructed a framework, what new pieces do we add?

picture

III.4 Linking things together: the MID
