Call for Participation: W3C Workshop on Multimodal Architecture and Interfaces

W3C is holding the Workshop on Multimodal Architecture and Interfaces.

The Workshop will be held at Keio University in Fujisawa, Japan,
hosted by W3C/Keio.

The Call for Participation is available at:
http://www.w3.org/2007/08/mmi-arch/cfp.html

Important dates and deadlines for this Workshop are:

Workshop dates: 16 and 17 November 2007
Position papers due: 5 October 2007
Final agenda: 20 October 2007
Registration closes: 3 November 2007

Registration details and information about the expected audience are
available in the Call for Participation.

Please note that:
- There will be a limit of 30 participants.
- Attendance is open to everyone, including non-W3C Members, but each
  organization or individual wishing to participate must submit a
  position paper.
- To ensure maximum diversity among participants, attendance is
  limited to two participants per organization.
- There is no registration fee.

The Workshop will be chaired by Deborah Dahl and Kazuyuki Ashimura.

Scope of the Workshop
---------------------

The scope of this workshop is restricted in order to make the best use
of participants' time. In general, discussion at the workshop and in
the position papers should stay focused on the workshop goal:
identifying and prioritizing requirements for extensions and additions
to the MMI Architecture so that it better supports speech, GUI, and
ink interfaces on multimodal devices. Descriptions of new
requirements, with usage scenarios and clear explanations of the
problems to be solved, are the top priority for the workshop;
examples of MMI Architecture syntax extensions are a secondary
priority.

Workshop Goals
--------------

Identify and prioritize requirements for changes, extensions and
additions to the MMI Architecture to better support speech, GUI, Ink
and other Modality Components.

Attendees SHOULD be familiar with the MMI Architecture. The main
focus of the workshop is requirements for the interfaces between the
Runtime Framework and the various Modality Components (e.g., voice,
pen, and ink) within the MMI Architecture; a sketch of one such
interface message follows the list of questions below. Specifically,
we ask the participants (browser vendors, device vendors, application
vendors, etc.) to clarify:
- How to integrate specific modality components (e.g., ink and voice)
  into the MMI Architecture?
- What are the limitations of the MMI Architecture?
- What should be done in the MMI Architecture to enable applications
  to adapt to different modality combinations?
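
For concreteness, the kind of interface under discussion is the XML
life-cycle event exchanged between the Runtime Framework and a
Modality Component. The markup below is purely illustrative, not
draft syntax: the element and attribute spellings are modeled on the
MMI life-cycle events (here, a request to start a Modality
Component), the namespace URI is a placeholder, and "dialog.vxml" is
a hypothetical voice dialog.

  <mmi:mmi xmlns:mmi="http://www.example.org/mmi-arch" version="1.0">
    <!-- The Runtime Framework asks a voice Modality Component
         to start running a dialog within context ctx-1.
         All names and values here are assumptions for illustration. -->
    <mmi:StartRequest Context="ctx-1" Source="IM-1"
                      Target="voice-mc-1" RequestID="req-1">
      <mmi:ContentURL href="dialog.vxml"/>
    </mmi:StartRequest>
  </mmi:mmi>

Position papers could, for example, describe what such events cannot
yet express for a given modality and propose the corresponding
extensions.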

Contact Information
-------------------
The W3C contact is Kazuyuki Ashimura.
email: ashimura@w3.org
voice: +81.466.49.1170
fax: +81.466.49.1171

-- 
Kazuyuki Ashimura / W3C Multimodal & Voice Activity Lead
mailto: ashimura@w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171
