Emotion Markup

EmotionML is a markup language for annotating emotions, developed by the W3C Multimodal Interaction Working Group. It addresses three major use cases: (1) automatically assigning a label to an expressed emotion, (2) providing instructions for rendering an emotion, and (3) supporting human annotation of emotions for use in emotion research (for example, to train machine-learning systems to recognize emotions).

EmotionML is designed to be independent of the modality used to express the emotion, but it provides metadata annotations that describe how the emotion is expressed. Current implementations include recognition of emotions expressed by tone of voice and manner of speaking, by facial expressions, and in language. Rendered output could take the form of speech from a text-to-speech system, the facial expressions of an avatar, or simply a verbal description of the emotion. That last output format might be useful as an assistive technology for people who have difficulty recognizing emotions expressed through facial expressions or tone of voice, such as some people with autism. Similarly, a verbal description could be useful to people who can't see or hear a visual or auditory expression of emotion.

EmotionML can be used on its own or embedded in other markup, such as EMMA (Extensible MultiModal Annotation). In addition, because there is no currently agreed-on set of universal emotions (although several sets are in common use), EmotionML does not hard-code one; instead, it provides a mechanism for referring to and extending sets of emotion terms by defining a registry of Emotion Vocabularies. Two brief examples of these mechanisms are sketched below.
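
As a rough sketch of what an annotation looks like, the following EmotionML document labels a detected emotion using the "big six" category vocabulary published in the W3C Vocabularies for EmotionML note. The namespace and vocabulary URI come from the EmotionML 1.0 specification; the particular values (the happiness category, the 0.8 confidence, and the face and voice modalities) are invented for illustration:

  <emotionml version="1.0"
             xmlns="http://www.w3.org/2009/10/emotionml"
             category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <!-- A recognizer's annotation: the "big six" category "happiness",
         detected in the face and voice, with 80% confidence -->
    <emotion expressed-through="face voice">
      <category name="happiness" confidence="0.8"/>
    </emotion>
  </emotionml>

Here the category-set attribute points at a published Emotion Vocabulary, the expressed-through attribute records the modality metadata, and the confidence attribute lets a recognizer express how certain it is about the label.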
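
Because the set of emotion terms is not fixed, a document can instead define its own vocabulary inline and refer to it by fragment identifier, following the pattern shown in the specification. The vocabulary id and item names below are hypothetical:

  <emotionml version="1.0" xmlns="http://www.w3.org/2009/10/emotionml">
    <!-- An inline vocabulary; id and item names are illustrative only -->
    <vocabulary type="category" id="simple-voc">
      <item name="calm"/>
      <item name="frustrated"/>
    </vocabulary>
    <!-- An annotation that refers to the inline vocabulary -->
    <emotion category-set="#simple-voc">
      <category name="frustrated" value="0.6"/>
    </emotion>
  </emotionml>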