Talks by W3C Speakers

Many in the W3C community — including staff, chairs, and Member representatives — present W3C work at conferences and other events. Below you will find a list of some of these talks. All material is copyright of the author, except where otherwise noted.

Listing is based on the following search constraints:

  • Possible presentation dates: past few months and upcoming
  • Technology area: Browsers and Other Agents

September 2015

  • 2015-09-09 (9 SEP)

    Special Event: Standardization Brown-Bag (panel)

    by Deborah Dahl

    Interspeech 2015

    Dresden, Germany

    Relevant technology areas: Browsers and Other Agents and Web of Devices.

    This special event is dedicated to standards in speech and multimodal technology. Three distinguished panelists representing different standardization bodies will discuss a variety of topics concerning standardization activities with the audience. Topics include:

      • Goals of standardization
      • (Dis)advantages of standardization
      • How do different standardization bodies address these goals? What are their pros and cons?
      • What use cases are especially interesting to the Interspeech community?
      • What existing standards should the Interspeech community know about?
      • What new standards are needed to support use cases of interest to the Interspeech community? What aspects are missing in standardization, and how should they be handled?
      • How can one get engaged in standardization activities?
      • What different levels of standards are there (note, working draft, recommendation, etc.)?
      • What is a typical timeline to develop a standard?
      • Types of standards publishers (informal standards based on industry/research consortia vs. formal standards bodies; “open specifications” published by a single vendor)
      • Best practices vs. standards

December 2015

  • 2015-12-04 (4 DEC)

    Multimodal Interaction Standards at the World Wide Web Consortium

    by Deborah Dahl

    Relevant technology areas: Browsers and Other Agents and Web of Devices.

    The W3C Multimodal Interaction Working Group has developed several standards that can be used to represent multimodal user inputs and system outputs. We will discuss three of them in this talk. Extensible Multimodal Annotation (EMMA) represents cross-modality metadata for user inputs and system outputs. Ink Markup Language (InkML) represents traces in electronic ink, for example, for applications such as handwriting or gesture recognition. Finally, Emotion Markup Language (EmotionML) can be used to represent emotions. We will describe these three standards, talk about how they interoperate (including a demo of EMMA and EmotionML), and discuss future directions.
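    To give a flavor of how these standards interoperate, here is a minimal, hand-written sketch (not taken from the talk) of an EMMA document for a spoken user input with an embedded EmotionML annotation. The utterance, confidence value, and emotion category are invented for illustration, and a real EmotionML document would also declare the vocabulary its category names come from:

    ```xml
    <emma:emma version="1.0"
        xmlns:emma="http://www.w3.org/2003/04/emma"
        xmlns="http://www.w3.org/2009/10/emotionml">
      <!-- One interpretation of a spoken input, with recognizer metadata -->
      <emma:interpretation id="int1"
          emma:medium="acoustic"
          emma:mode="voice"
          emma:confidence="0.9"
          emma:tokens="that is great news">
        <!-- Embedded EmotionML describing the emotion detected in the same input;
             a complete document would reference a category vocabulary -->
        <emotion>
          <category name="happiness"/>
        </emotion>
      </emma:interpretation>
    </emma:emma>
    ```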
