
W3C Emotion Markup Language Workshop — Summary

5-6 October 2010

Hosted by Telecom ParisTech, Paris, France

EmotionML Workshop

On October 5th and 6th, 2010, W3C (the World Wide Web Consortium) held a workshop on Emotion Markup Language.

The detailed minutes of the workshop are available on the W3C Web server at:
    http://www.w3.org/2010/10/emotionml/minutes.html

The goal of the workshop was to collect feedback from the community on the current EmotionML specification, and we held discussions to clarify concrete use cases and requirements for each of the three categories of possible EmotionML applications: (1) manual annotation of emotional material, (2) automatic recognition of emotions, and (3) generation of emotion-related system behavior.

The workshop had 18 attendees from Telecom ParisTech, DFKI, Queen's University Belfast, Roma Tre University, the University of Greenwich, the Dublin Institute of Technology, Loquendo, Deutsche Telekom, Cantoche, Dwango, nViso, and the W3C Team.

During the workshop we had lively discussions on actual emotion-related services as well as on the latest emotion research results. The presentations covered a number of practical variants of possible EmotionML use cases across all three categories:

Category 1 (manual annotation):
human annotation of (1) emotional material in "crowd-sourcing" scenarios and (2) live video using emotionally expressive annotations
Category 2 (automatic recognition):
emotion detection from faces for consumer analysis (emotional reactions to commercials)
Category 3 (generation):
synthesis of expressive speech and of animated avatar characters that express emotion information; the relationship with SSML/VoiceXML; visualization
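
For illustration, a Category 1 annotation might look like the minimal sketch below, assuming the EmotionML draft syntax as of the workshop; the media URI is a hypothetical example, and the "big6" category set is one of the proposed standard vocabularies:

    <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
      <!-- one annotated emotion episode, linked to a video excerpt -->
      <emotion>
        <category name="happiness"/>
        <reference uri="http://example.com/clip.mp4#t=12,15"/>
      </emotion>
    </emotionml>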

A number of requirements for emotion-ready applications were also discussed, for example (see the markup sketch after the list):

1. Discrete scales
represent discrete scale values in addition to continuous values
2. Multiple categories per emotion
the relationship between a component and emotion categories; what happens if a component carries more than one emotion category?
3. Default emotion vocabulary
after a focused discussion of the pros and cons of default emotion vocabularies, we concluded that we should keep the current mechanism, which does not require any default vocabulary
4. Time stamps since program start
time annotations on a time axis with a custom-defined zero point, corresponding to the start of a session
5. Extended list of modalities
need for an extended list of modalities or channels
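
As a rough sketch of how several of these requirements could surface in markup, consider the fragment below. It assumes the draft EmotionML syntax and draft vocabulary URIs; the session-anchor URI and the millisecond offset are hypothetical:

    <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories"
               dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">
      <!-- requirement 2: more than one category within a single emotion;
           requirement 4: time stamp relative to a session start;
           requirement 5: the modality through which the emotion is expressed -->
      <emotion expressed-through="face"
               time-ref-uri="#session-start" offset-to-start="42000">
        <category name="amused" value="0.6"/>
        <category name="excited" value="0.3"/>
        <dimension name="arousal" value="0.8"/>
      </emotion>
    </emotionml>

Note that the value attributes in this fragment are continuous values in [0,1]; requirement 1 asks for a way to represent discrete scale values in addition to these.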

The use cases and requirements discussed during the workshop will next be reviewed by the Emotion subgroup of the W3C Multimodal Interaction Working Group, which will consider how the existing EmotionML specification should be modified.


The Call for Participation, the Logistics, the Presentation Guidelines, the Agenda and the Minutes are also available on the W3C Web server.


Marc Schröder, Catherine Pelachaud, Deborah Dahl and Kazuyuki Ashimura, Workshop Organizing Committee
