Multimodal Interaction on the Web

Multimodal Web Applications for Embedded Systems

W3C Seminar - 21 June 2005 - Toulouse, France

Registration

Online registration is closed.

Introduction

W3C wishes to bring Web technologies to new environments such as mobile devices, automotive telematics and ambient intelligence. Many innovative multimodal Web applications built on established W3C standards have already been developed, and some of them will be presented during the seminar.

In order to coordinate closely with the industrial and research communities of the Midi-Pyrénées Region, active in fields such as aeronautics, automotive, telecommunications and electronics, W3C decided to host this seminar in Toulouse with the support of 3RT.

To learn more about multimodality and how it could benefit your business, we invite organizations to attend.

Attendance at the seminar is free and open to the public.

W3C and the Multimodal Web

W3C is developing standards that support multiple modes of interaction: aural, visual and tactile. The Web can then be accessed by voice or by hand via a keypad, keyboard, mouse, or stylus, while users listen to spoken prompts and audio and view information on graphical displays.

The Multimodal Web transforms how we interact with applications.

W3C is developing the Multimodal Interaction Framework as a foundation for these new kinds of applications.

Agenda

All the program speakers are confirmed.
8h30-9h30   Registration
9h30-9h45   Welcome
9h45-10h00  W3C Overview (Philipp Hoschka, Deputy Director for W3C Europe)
10h00-10h30 W3C Multimodal Interaction Activity (Dave Raggett, W3C Multimodal Interaction Activity Lead)
10h30-10h45 Coffee break
10h45-11h15 A model-driven environment for UI designers (Stéphane Sire and Stéphane Chatty, Intuilab)
11h15-12h00 SNOW: a multimodal framework for authoring and exploiting aeronautic maintenance procedures (Nicolas Chevassus, EADS)
12h00-13h30 Lunch
13h30-14h15 Multi-Modality Trends and Strategies in Automotive Man-Machine Interfaces (Chris Wild, Siemens VDO Automotive)
14h15-15h00 HIC: a multimodal adaptive interaction platform for complex systems - Application to maritime surveillance (Olivier Grisvard, Thales)
15h00-15h30 Coffee break
15h30-16h15 Apache Cocoon: A versatile middleware for multi-{format, channel, device, modal} applications (Sylvain Wallez, Anyware Technologies)
16h15-16h40 The Ubiquitous Web (Dave Raggett, W3C Multimodal Interaction Activity Lead)
16h40-17h00 Concluding Remarks

Venue

Espace Diagora is located in southern Toulouse. See the access map.

Espace Diagora

Technopôle Sud
Rue Pierre Gilles de Gennes
BP 71907
31319 Labège Cedex

Tel: +33 5 61 39 93 39
Fax: +33 5 61 39 79 80

More on Multimodal Access

W3C is developing the Multimodal Interaction Framework. It is intended as a basis for developing multimodal applications with markup, scripting, styling and other resources.

Voice Interaction

Voice interaction can escape the physical limitations of keypads and displays as mobile devices become ever smaller. Voice provides an accessible alternative to using the keyboard or screen, which can be especially important in automobiles and in other situations where hands-free and eyes-free operation is essential (VoiceXML – dialog; SSML – speech synthesis; SRGS – speech recognition grammars; CCXML – call control; SI – semantic interpretation for speech recognition; Pronunciation Lexicon – supplemental pronunciation information for speech recognition and synthesis; EMMA – extensible multimodal annotations for interpreted user input).
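
As an illustration, a VoiceXML dialog that prompts the caller for a city name and submits the recognized value to a server could look like the minimal sketch below; the grammar file and submission URL are hypothetical placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="weather">
    <field name="city">
      <!-- Spoken prompt rendered via speech synthesis -->
      <prompt>Which city would you like the weather for?</prompt>
      <!-- SRGS grammar constraining what the recognizer will accept (placeholder URI) -->
      <grammar src="cities.grxml" type="application/srgs+xml"/>
      <filled>
        <!-- Echo the recognized value and hand it off to a hypothetical server -->
        <prompt>Getting the weather for <value expr="city"/>.</prompt>
        <submit next="http://example.com/weather" namelist="city"/>
      </filled>
    </field>
  </form>
</vxml>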

Stylus Interaction

Complementing speech, a stylus can be used for handwriting, gestures, and drawings. It also lets users enter specialized notations for mathematics, music, chemistry and other fields that are otherwise difficult to type on a keyboard. Handwriting is expected to be popular for form filling and instant messaging on mobile devices (InkML – pen input).
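
For illustration, InkML represents digital ink as traces of sampled pen coordinates. A minimal sketch of a single pen stroke might look like this (the coordinate values are made up):

<ink xmlns="http://www.w3.org/2003/InkML">
  <!-- One pen stroke: x y coordinate pairs sampled along the trace, separated by commas -->
  <trace>
    10 0, 9 14, 8 28, 7 42, 6 56, 6 70, 8 84, 8 98, 8 112, 9 126
  </trace>
</ink>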

Delivery Context

The framework enables applications to adapt dynamically to the current device capabilities, device configuration, user preferences and environmental conditions, for example reacting to low battery alerts or loss of network connectivity, or muting the microphone and disabling audio output when the situation requires it. Dynamic configurations include snapping a camera attachment onto a cell phone or bringing devices together over a wireless network, e.g. a camera phone and a color printer (CC/PP – device capabilities; DISelect – content adaptation; DPF – delivery context interfaces).
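
As a sketch, a CC/PP profile describes device capabilities as RDF that a server can use to adapt content. The fragment below uses a hypothetical ex: vocabulary and profile URIs to state a device's screen size:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:ex="http://www.example.com/schema#">
  <!-- Hypothetical profile for a small-screen device -->
  <rdf:Description rdf:about="http://www.example.com/profile#MyPhone">
    <ccpp:component>
      <rdf:Description rdf:about="http://www.example.com/profile#HardwarePlatform">
        <rdf:type rdf:resource="http://www.example.com/schema#HardwarePlatform"/>
        <ex:displayWidth>176</ex:displayWidth>
        <ex:displayHeight>220</ex:displayHeight>
      </rdf:Description>
    </ccpp:component>
  </rdf:Description>
</rdf:RDF>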

About the World Wide Web Consortium (W3C)

The W3C was created to lead the Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. It is an international industry consortium jointly run by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in the USA, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France and Keio University in Japan. Services provided by the Consortium include: a repository of information about the World Wide Web for developers and users, and various prototype and sample applications to demonstrate use of new technology. More than 350 organizations are Members of W3C. To learn more, see http://www.w3.org/

Press Contact Marie-Claire Forgue: +33 6 76 86 33 41 <mcf@w3.org>

Press Resources