
Testimonials for EMMA 1.0 Recommendation

These testimonials are in support of the W3C Press Release "W3C Multimodal Standard Brings Web to More People, More Ways".


Avaya

As a common language for representing multimodal input, EMMA lays a cornerstone upon which more advanced architectures and technologies can be developed to enable natural multimodal interaction. We are glad that EMMA has become a W3C Recommendation and pleased with the capabilities it brings to multimodal interaction over the Web.

— Wu Chou, Director, Avaya Labs Research, Avaya

Conversational Technologies

Conversational Technologies strongly supports the W3C Extensible MultiModal Annotation 1.0 (EMMA) standard. By providing a standardized yet extensible and flexible basis for representing user input, EMMA has, we believe, tremendous potential to enable a wide variety of innovative multimodal applications and research directions. Through its open source EMMA implementation, Conversational Technologies has also found EMMA very helpful for teaching students the principles of natural language processing.

— Deborah Dahl, Principal, Conversational Technologies

DFKI

DFKI appreciates that the Extensible MultiModal Annotation markup language has become a W3C Recommendation.

The definition of EMMA represents a significant step towards the realization of a multimodal interaction infrastructure in a wide range of ICT applications. DFKI has found EMMA a very useful instrument for the realization of multimodal dialog systems and has adopted it for the representation of user input in several large consortium projects, such as SMARTWEB and THESEUS, together with its industrial shareholders, including SAP, Bertelsmann, Deutsche Telekom, BMW, and Daimler.

DFKI is pleased to have contributed to the realization of EMMA and will support future work on new EMMA features such as the representation of multimodal output and support for emotion detection and representation.

— Professor Wolfgang Wahlster, Chief Executive Officer and Scientific Director of DFKI GmbH, the German Research Center for Artificial Intelligence

Kyoto Institute of Technology

Kyoto Institute of Technology (KIT) strongly supports the Extensible MultiModal Annotation 1.0 (EMMA) specification. We have been using EMMA within our multimodal human-robot interaction system. EMMA documents are dynamically generated by (1) the Automatic Speech Recognition (ASR) component and (2) the Face Detection/Behavior Recognition component in our implementation.
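
To make this concrete, a minimal EMMA 1.0 document of the kind an ASR component might emit could look like the sketch below. This is an illustration only, not taken from the KIT system; the utterance, confidence value, and application-specific markup are hypothetical:

    <emma:emma version="1.0"
        xmlns:emma="http://www.w3.org/2003/04/emma">
      <!-- a single interpretation produced by a speech recognizer;
           the command structure below is hypothetical application markup -->
      <emma:interpretation id="int1"
          emma:medium="acoustic" emma:mode="voice"
          emma:confidence="0.85"
          emma:tokens="move forward">
        <command>
          <action>move</action>
          <direction>forward</direction>
        </command>
      </emma:interpretation>
    </emma:emma>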

In addition, the Information Technology Standards Commission of Japan (ITSCJ), of which KIT is a member, also plans to use EMMA as a data format for its own multimodal interaction architecture specification. ITSCJ believes EMMA is very useful both for uni-modal recognition components, e.g., ASR, and for multimodal integration components, e.g., speech combined with pointing gestures.
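
For the multimodal integration case, EMMA offers container elements such as emma:group for relating inputs from different modes. The following sketch, again illustrative rather than drawn from any ITSCJ specification, pairs a spoken command with a pointing gesture; the coordinate markup and attribute values are assumptions:

    <emma:emma version="1.0"
        xmlns:emma="http://www.w3.org/2003/04/emma">
      <emma:group id="grp1">
        <!-- spoken input: "put that there" -->
        <emma:interpretation id="speech1"
            emma:medium="acoustic" emma:mode="voice"
            emma:tokens="put that there"/>
        <!-- pointing gesture resolved to screen coordinates
             (hypothetical application markup) -->
        <emma:interpretation id="point1"
            emma:medium="tactile" emma:mode="touch">
          <location><x>212</x><y>430</y></location>
        </emma:interpretation>
      </emma:group>
    </emma:emma>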

— Associate Professor Masahiro Araki, Interactive Intelligence Lab, Department of Information Science, Graduate School of Science and Technology, Kyoto Institute of Technology

Loquendo

Extensible MultiModal Annotation (EMMA) 1.0 provides a rich language for representing a variety of input modes, such as speech, handwriting, and gesture, within speech-enabled and multimodal applications. Loquendo welcomes the EMMA 1.0 W3C Recommendation because it will ease the creation of multimodal applications as well as more powerful speech applications and, the company believes, will foster innovation, advance the Web, and give businesses a decisive competitive edge.

Loquendo is a longstanding participant in the W3C Multimodal Interaction and Voice Browser Working Groups, as well as in the IETF and the VoiceXML Forum, and has already implemented EMMA 1.0 in the Loquendo MRCP Server.

— Daniele Sereno, Vice President Product Engineering, Loquendo

University of Trento

We believe that EMMA can support a wide variety of innovative multimodal applications. We expect that EMMA 1.0 will play a key role in the development of interoperable communication technologies as well as enable innovative research platforms.

— Prof. Dr. Ing. Giuseppe Riccardi, Director of the Adaptive Multimodal Information and Interfaces Lab, Department of Information Engineering and Computer Science, University of Trento

About the World Wide Web Consortium (W3C)

The W3C was created to lead the Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. It is an international industry consortium jointly run by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) in the USA, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France, and Keio University in Japan. Services provided by the Consortium include a repository of information about the World Wide Web for developers and users, and various prototype and sample applications that demonstrate the use of new technology. To date, nearly 400 organizations are Members of the Consortium. For more information see http://www.w3.org/