Multimodal Interaction Specifications and Notes
This is intended to provide a brief summary of each of the Multimodal Interaction Working Group's major work items.
This suite of specifications is known as the W3C Multimodal Interaction Framework.
- Introduction, 6 May 2003. The Multimodal Interaction Framework introduces a general framework for multimodal interaction, and the kinds of markup languages being considered.
- Use cases, 4 December 2002. Multimodal Interaction Use Cases describes several use cases that are helping us to better understand the requirements for multimodal interaction.
- Core requirements, 8 January 2003. Multimodal Interaction Requirements describes fundamental requirements for the specifications under development in the W3C Multimodal Interaction Activity.
The following lists current and completed specifications. Additional work is expected on topics described in the Scope section of the charter.
Multimodal Architecture and Interfaces (MMI Architecture)
- Recommendation, 25 October 2012
The MMI Architecture provides a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the use of well-defined interfaces between its constituents. The framework is motivated by several basic design goals including (1) Encapsulation, (2) Distribution, (3) Extensibility, (4) Recursiveness and (5) Modularity. The MMI Architecture includes Modality Components, which process specific modalities such as speech or handwriting, an Interaction Manager, which coordinates processing among the Modality Components, and the Life Cycle events, which support communication between the Interaction Manager and the Modality Components.
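To give a flavor of the Life Cycle events, the minimal sketch below shows an Interaction Manager asking a Modality Component to begin processing. This is an illustrative fragment in the spirit of the specification, not normative syntax: the identifier values are hypothetical, and the exact event and attribute names should be checked against the Recommendation.

```xml
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <!-- Interaction Manager asks a Modality Component to start processing;
       all identifier values here are hypothetical -->
  <mmi:startRequest source="im-1" target="speech-mc-1"
                    context="ctx-1" requestID="req-42"/>
</mmi:mmi>
```

The Modality Component would acknowledge with a corresponding start response carrying the same request identifier, and later send a done notification when processing completes.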
Discovery & Registration of Multimodal Modality Components
- Editor's Draft on GitHub
- Use Cases and Requirements Note, 5 July 2012
- First Public Working Draft, 11 June 2015
This document is addressed to people who want to develop Modality Components for multimodal applications distributed either over a local network or "in the cloud". In a multimodal system implemented according to the Multimodal Architecture specification, the system must discover and register its Modality Components in order to preserve the overall state of the distributed elements. In this way, Modality Components can be composed with automation mechanisms in order to adapt the application to the state of the surrounding environment.
MMI Best Practices
- Multimodal Application Developer Feedback Working Group Note, 14 April 2006
- Common Sense Suggestions for Developing Multimodal User Interfaces Working Group Note, 11 September 2006
- Authoring Applications for the Multimodal Architecture Working Group Note, 2 July 2008
Extensible MultiModal Annotation (EMMA)
- Recommendation, 10 February 2009
EMMA is a data exchange format for the interface between different levels of input processing and interaction management in multimodal and voice-enabled systems. It provides the means for input processing components, such as speech recognizers, to annotate application-specific data with information such as confidence scores, time stamps, and input mode classification (e.g. key strokes, touch, speech, or pen). EMMA also provides mechanisms for representing alternative recognition hypotheses, including lattices, and for representing groups and sequences of inputs. EMMA 1.0 has been completed. The group is developing an EMMA 1.1 version of the specification that incorporates new features addressing issues raised through EMMA implementations.
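To make the annotations concrete, the sketch below shows the general shape of an EMMA 1.0 document: a speech recognizer's single interpretation of an utterance, annotated with medium, mode, confidence, and the recognized tokens. The attribute values and the application-specific origin/destination payload are hypothetical.

```xml
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- One interpretation of a spoken utterance; confidence, tokens,
       and the application-specific payload are hypothetical -->
  <emma:interpretation id="interp1"
                       emma:medium="acoustic"
                       emma:mode="voice"
                       emma:confidence="0.75"
                       emma:tokens="flights from boston to denver">
    <origin>Boston</origin>
    <destination>Denver</destination>
  </emma:interpretation>
</emma:emma>
```

Competing recognition hypotheses would be wrapped in an emma:one-of container holding several such interpretations.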
- Second Public Working Draft, 27 June 2013
EMMA 1.1 includes a set of new features based on feedback from implementers, as well as added clarification text in a number of places throughout the specification. The new features include:
- support for adding human annotations (emma:annotation, emma:annotated-tokens)
- support for inline specification of process parameters (emma:parameters, emma:parameter, emma:parameter-ref)
- support for specification of models used in processing beyond grammars (emma:process-model, emma:process-model-ref)
- extensions to emma:grammar to enable inline specification of grammars
- a new mechanism for indicating which grammars are active (emma:grammar-active, emma:active)
- support for non-XML semantic payloads (emma:result-format)
- support for multiple emma:info elements and reference to the emma:info relevant to an interpretation (emma:info-ref)
- a new attribute complementing emma:medium and emma:mode that specifies the modality used to express an input (emma:expressed-through)
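As a rough sketch of one draft feature, emma:expressed-through is intended to record the modality through which an input was expressed, alongside the existing emma:medium and emma:mode annotations. The fragment below is an illustrative assumption based on the feature list above, not normative EMMA 1.1 syntax; the attribute value and payload are hypothetical.

```xml
<emma:emma version="1.1" xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- Hypothetical example: a touch input expressed through a GUI control -->
  <emma:interpretation id="interp1"
                       emma:medium="tactile"
                       emma:mode="touch"
                       emma:expressed-through="gui">
    <command>confirm</command>
  </emma:interpretation>
</emma:emma>
```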
- Editor's Draft on GitHub
- Use Cases for Possible Future EMMA Features (WG Note), 15 December 2009
- First Public Working Draft, 8 September 2015
The W3C Multimodal Interaction Working Group aims to develop specifications that enable access to the Web using multimodal interaction. This document is part of a set of specifications for multimodal systems, and provides details of an XML markup language for containing and annotating the interpretation of user input and the production of system output. Examples of the interpretation of user input include a transcription into words of a raw signal (derived, for instance, from speech, pen, or keystroke input), a set of attribute/value pairs describing its meaning, or a set of attribute/value pairs describing a gesture. The interpretation of the user's input is expected to be generated by signal interpretation processes, such as speech and ink recognition, semantic interpreters, and other types of processors, for use by components that act on the user's inputs, such as interaction managers. Examples of stages in the production of a system output are the creation of a semantic representation, the assignment of that representation to a particular modality or modalities, and a surface string for realization by, for example, a text-to-speech engine. The production of the system's output is expected to be generated by output production processes, such as a dialog manager, multimodal presentation planner, content planner, and other types of processors such as surface generators.
InkML - an XML language for digital ink traces
- Recommendation, 20 September 2011
InkML provides a range of features to support real-time ink streaming, multi-party interactions and richly annotated ink archival. Applications may make use of as much or as little information as required, from minimalist applications using only simple traces to more complex problems, such as signature verification or calligraphic animation, requiring full dynamic information. As a platform-neutral format for digital ink, InkML can support collaborative or distributed applications in heterogeneous environments, such as courier signature verification and distance education. This work is complete as InkML has reached the Recommendation stage. However, the Multimodal Interaction Working Group welcomes feedback on the InkML standard.
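For a sense of the format, the sketch below is a minimal InkML document recording a single pen stroke as a trace of x y samples; the coordinate values are arbitrary.

```xml
<ink xmlns="http://www.w3.org/2003/InkML">
  <!-- A single pen stroke: each comma-separated pair is an x y sample -->
  <trace>
    10 0, 9 14, 8 28, 7 42, 6 56, 6 70, 8 84, 8 98, 8 112, 9 126
  </trace>
</ink>
```

Richer applications layer channel definitions (for pressure, tilt, or timing) and annotations on top of this same trace structure.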
Emotion Markup Language (EmotionML) 1.0
- Recommendation, 22 May 2014
EmotionML provides representations of emotions and related states for technological applications. As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions. The language is conceived as a "plug-in" language suitable for use in three different areas: (1) manual annotation of data; (2) automatic recognition of emotion-related states from user behavior; and (3) generation of emotion-related system behavior.
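As a brief sketch, an EmotionML annotation references a declared emotion vocabulary and then asserts emotion descriptors drawn from it. In the fragment below, the category-set URI is assumed to point at the companion vocabularies Note, and the category name and confidence value are illustrative.

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
  <!-- One annotated emotion: the name must come from the declared
       category set; the confidence value here is hypothetical -->
  <emotion>
    <category name="satisfied" confidence="0.8"/>
  </emotion>
</emotionml>
```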
Working Group Notes
Working Group Notes are non-standards-track documents that support, clarify, or otherwise provide additional information about the specifications.
The group has published several Notes that provide additional information about the Multimodal Architecture and Interfaces specification.
- Authoring Applications for the Multimodal Architecture
- Multimodal Application Developer Feedback
- Common Sense Suggestions for Developing Multimodal User Interfaces
- Interoperability Testing Report
- Emotion Vocabularies Note, which describes vocabularies to be used in conjunction with the Emotion Markup Language
Since EMMA 1.0 became a W3C Recommendation, a number of new possible use cases for the EMMA language have emerged. These include the use of EMMA to represent multimodal output, biometrics, emotion, sensor data, multi-stage dialogs, and interactions with multiple users. The Working Group therefore decided to work on a document capturing use cases and issues for a series of possible extensions to EMMA, and published a Working Group Note to seek feedback on the various use cases.
- Use Cases for Possible Future EMMA Features Working Group Note, 15 December 2009