
Multimodal Interaction Activity Statement

The Multimodal Interaction Activity seeks to extend the Web so that users can dynamically select the mode of interaction best suited to their current needs, including any disabilities, and so that Web application developers can provide an effective user interface for whichever modes the user selects. With multimodal Web applications, users can provide input via speech, handwriting and keystrokes, with output presented via displays, pre-recorded and synthetic speech, audio, and tactile mechanisms such as mobile phone vibrators and Braille strips.

The goal of the Multimodal Interaction Activity is to clearly define how to author concrete multimodal Web applications, for example, coupling a local GUI (e.g., an HTML user agent) with a remote speech interface (e.g., a VoiceXML user agent). The Multimodal Interaction Working Group serves as a central point of coordination within W3C for multimodal activities, and collaborates with other related Working Groups, e.g. Voice Browser, Scalable Vector Graphics, Compound Document Formats, Web Applications and Ubiquitous Web Applications.
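In the MMI Architecture, an Interaction Manager coordinates modality components (such as a GUI or a VoiceXML interpreter) by exchanging XML life-cycle events. The fragment below is a hedged sketch of such an event; the Source, Target, Context and RequestID values and the dialog URL are purely illustrative, and attribute details may differ from the published specification.

```xml
<!-- Sketch of an MMI Architecture life-cycle event: the Interaction
     Manager asks a voice modality component to start a dialog.
     All identifier values and the URL are illustrative. -->
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:StartRequest Source="IM-1" Target="voiceMC-1"
                    Context="ctx-1" RequestID="req-1">
    <!-- Points the modality component at the markup to run -->
    <mmi:ContentURL href="dialog.vxml"/>
  </mmi:StartRequest>
</mmi:mmi>
```

The modality component would reply with a corresponding StartResponse event, letting the Interaction Manager track the state of each component.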

Highlights Since the Previous Advisory Committee Meeting

EmotionML was published as a W3C Recommendation on 22 May 2014. The updated EmotionML Vocabularies Note was also published on 1 April 2014.
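EmotionML annotates emotional states with machine-readable markup. As a minimal sketch, the fragment below declares an emotion using the "big6" category vocabulary from the EmotionML Vocabularies Note; the specific category name and value are illustrative.

```xml
<!-- Minimal EmotionML 1.0 sketch: one emotion annotated with a
     category from the "big6" vocabulary; the value is illustrative -->
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <category name="happiness" value="0.8"/>
  </emotion>
</emotionml>
```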

The group is now working on version 2.0 of the EMMA specification to extend EMMA's capabilities, e.g., for sensor output, streaming data and incremental recognition. The latest editor's draft is available on GitHub. The group is also discussing MMI Modality Component discovery and registration, and an updated Working Draft will be published shortly.
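EMMA represents user input interpretations as annotated XML. The fragment below is an EMMA 1.0-style sketch of a speech recognition result (EMMA 2.0 extends this model toward streaming and incremental results); the tokens, confidence value and application-specific `destination` element are illustrative.

```xml
<!-- EMMA 1.0-style sketch: one interpretation of a spoken utterance,
     annotated with medium, mode and a confidence score.
     Token string, score and payload element are illustrative. -->
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
                       emma:medium="acoustic" emma:mode="voice"
                       emma:confidence="0.75"
                       emma:tokens="flights to boston">
    <destination>Boston</destination>
  </emma:interpretation>
</emma:emma>
```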

The procedure for rechartering the group is being finalized.

Upcoming Activity Highlights

The group will continue the discussion on (1) EMMA 2.0 and (2) MMI Modality Component discovery and registration.

The group considers the emerging Web of Things topic related to its work, so some group participants joined the W3C Web of Things Workshop in Berlin in June to explore how the MMI Architecture could be applied to the Web of Things ecosystem.

The group will hold its F2F meeting during TPAC 2014 in Santa Clara in November.

Summary of Activity Structure

Group: Multimodal Interaction Working Group (participants)
Chair: Deborah Dahl
Team Contact: Kazuyuki Ashimura
Charter: Chartered until 31 March 2014

This Activity Statement was prepared for TPAC 2014 per section 5 of the W3C Process Document. Generated from group data.

Kazuyuki Ashimura, Multimodal Interaction Activity Lead

$Id: Activity.html,v 1.605 2014-10-27 10:38:28 ashimura Exp $