Copyright © 2012 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
The EmotionML Specification entered the Candidate Recommendation period on 10 May 2012.
The planned date for entering Proposed Recommendation is XX December 2012. This document summarizes the results from the EmotionML implementation reports received and describes the process that the Multimodal Working Group followed in preparing the report.
An implementation report must indicate the outcome of evaluating the implementation with respect to each of the test assertions. Possible outcomes are "pass", "fail" or "not-impl". Criteria for Producers and Consumers of EmotionML for determining the outcome of each test assertion are defined as follows.
If a consumer:
If a producer:
During the CR period, the Working Group will carry out the following activities:
Implementers were invited to contribute to the assessment of the W3C EmotionML 1.0 Specification by submitting implementation reports describing the coverage of their EmotionML implementations with respect to the test assertions outlined in the table below.
Implementation reports, comments on this document, or requests made for further information were posted to the Working Group's public mailing list www-multimodal@w3.org (archive).
The Multimodal Working Group established the following entrance criteria for the Proposed Recommendation phase in the Request for CR:
All three of these criteria have been met. A total of 9 implementations were received from 9 different companies and universities. The testimonials below indicate the broad base of support for the specification. All of the required features of EmotionML had at least two implementations, many had eight or nine implementations. All of the optional features received at least two implementations.
The EmotionML Implementation Report will not cover:
This section contains testimonials on EmotionML from the 9 companies and universities that submitted EmotionML implementation reports.
Gerhard Fobe is happy to provide a C# library, released under the FreeBSD license, that helps developers work with EmotionML. With the help of the integrated EmotionML parser, related object instances can be created automatically; object instances can likewise be converted back to EmotionML (in DOM and XML modes). Besides standalone EmotionML documents, the plug-in version for including emotions in other languages is supported.
DFKI GmbH is very happy to add support for EmotionML to its expressive speech synthesis system MARY TTS. A standard format for representing the expressivity to be expressed in speech has been missing, so it is good to have it available now. Particularly the support for both categorical and dimensional representations of emotions is important to our system.
The W3C EmotionML 1.0 Specification is now supported by the ALMA software. ALMA relies on EmotionML for describing appraisal of emotions, the numeric simulation of emotion intensity decay, and mood characteristic changes within the PAD dimensions.
Deutsche Telekom Innovation Laboratories is happy to support EmotionML and to deploy it as a standard format in its developments concerning emotion processing services. Since we work primarily as a system integrator, standardized interfaces are extremely important for plugging together components from different providers.
Dr. C. Becker-Asano of the Foundations of Artificial Intelligence lab at the Department of Computer Science, Freiburg University, is happy to support EmotionML and will use it as a standard format in his developments concerning emotion simulation services. To let the WASABI architecture seamlessly interface with other software modules, a standard interface such as EmotionML is indispensable.
nViso 3D Facial Imaging API is an online service for recognizing emotions depicted through facial expressions in still images and videos. This implementation of EmotionML focuses on the media type and URI time features for video, which are used in the service. Key features used in the API's EmotionML support include defining our own category of emotion keywords and linking emotion measurements to the media analyzed. As the report was produced as part of a commercial implementation, some parts of the specification were not deemed appropriate for our service and so were not implemented. More information about our service can be found at www.nviso.ch
EMLPy is a Python-based library for generating EmotionML-compliant documents.
This is the second iteration of the EMLPy Python library, a utility for generating EmotionML-compliant XML documents. EMLPy generates EmotionML documents by transforming a user-specified and populated Python object tree into an XML representation, performing the EmotionML checks covered in the assertions during this transformation. From an API perspective, the user interacts with an object tree hierarchy that maps directly to the EmotionML hierarchy of elements and attributes, and generates an XML document by invoking the .to_xml() function. At that point EMLPy validates the object tree and its properties against the EmotionML schema and specification rules. A few cases require special consideration and handling:
The value of the "media-type" attribute, if present, MUST be a valid MIME type. We check MIME types against http://www.iana.org/assignments/media-types; this is a live check that performs URI resolution for types and a page search for sub-types. We pursued this approach in an attempt to dynamically validate MIME types against the authoritative repository.
xsd:anyURI types - anyURI checks are syntactic, not semantic. Checks are performed against common URI patterns (http, ftp, the structure of IP addresses, proper domain names, etc.); we do not dereference the actual URIs to check whether the content is there.
Vocabularies - we perform vocabulary checks for vocabularies defined within the EmotionML instance being created. If the category, dimension, appraisal, or action-tendency sets point to an external URL, we do not fail document generation when the URL defining the set is not valid; we only issue a warning. We chose this approach as aligned with Python's dynamic typing philosophy.
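A syntactic anyURI check of the kind described above might look like the following sketch. The pattern and function name are illustrative, not EMLPy's actual code, and a real check would also need to handle relative URIs and bare fragment references.

```python
import re

# Illustrative syntactic check for xsd:anyURI values: accept common URI
# shapes (scheme, host or IPv4 address, optional port and path) without
# resolving them, in the spirit of a syntactic-not-semantic check.
_URI_PATTERN = re.compile(
    r'^(?:https?|ftp)://'                       # common schemes
    r'(?:\d{1,3}(?:\.\d{1,3}){3}'               # IPv4 address ...
    r'|[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)+)'      # ... or dotted domain name
    r'(?::\d+)?'                                # optional port
    r'(?:/[^\s]*)?$'                            # optional path
)

def looks_like_uri(value):
    """Return True if value is syntactically plausible as an absolute URI."""
    return bool(_URI_PATTERN.match(value))
```

Such a check accepts well-formed absolute URIs and rejects obvious garbage, but deliberately never performs network access.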
Project URL: https://github.com/ebegoli/EMLPy Implementation Team: Edmon Begoli, begolie@ornl.gov, Oak Ridge National Laboratory/University of Tennessee-Knoxville Chelsey Dunnivan, ckd002@morningside.edu, Morningside University
The HUMAINE centre (School of Psychology, Queen’s University Belfast) welcomes EmotionML and will deploy it as a standard format in its work. That work primarily involves use case 1, manual annotation of material involving emotionality, such as annotation of videos, speech recordings, and faces in naturalistic databases. As psychologists, it is important to us to have a format that allows us to use psychologically well-founded descriptors and to know that consumers in the engineering community will be able both to access and to understand them. The system currently implements tracing for category and dimensional descriptors. We will extend that to appraisal and action tendency descriptors if research confirms that they can be traced reliably, but at present that is not clear.
Emotion twenty questions (EMO20Q) is an experimental framework for studying how people describe emotions in language and how computers can simulate this type of verbal behavior. In EMO20Q, the familiar spoken parlor game of twenty questions is restricted to words that players feel refer to emotions.
In this implementation report, we examine the case where a server-side computer agent plays the role of the questioner and a human plays the answerer via a web browser.
The EMO20Q questioner agent can be decomposed into several components, notably a **vocabulary**, **semantic knowledge**, an **episodic buffer**, and a **belief state**. The vocabulary is a list of 110 emotion words; it is expected to grow over time as more data is collected, but remains constant during the agent's instantiation. Semantic knowledge is a large object that remains the same across agent instantiations and states, while the episodic buffer and belief state are smaller objects that vary over time for each interactive session. Because of the size of the semantic knowledge object, serializing the whole agent for each session is not feasible. Rather, the episodic buffer and belief state are serialized while the semantic knowledge persists as a static object in server memory. The belief state is represented as a probability vector indexed by items of the vocabulary.
EmotionML is used to implement the questioner agent's vocabulary and belief state. The agent's vocabulary is implemented using the EmotionML ``vocabulary`` idiom and the agent's belief state is represented using the ``emotion`` idiom with a ``category`` as a child and the ``value`` attribute to hold numerical probability values.
The only thing that produced a failure to validate was having a space in the name of a vocabulary item.
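The belief-state idiom described above can be sketched in Python using the standard xml.etree.ElementTree module. The function name, vocabulary URI, and probabilities below are invented for illustration and are not taken from the EMO20Q implementation.

```python
import xml.etree.ElementTree as ET

EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"

def belief_state_to_emotionml(beliefs, category_set):
    """Render a {word: probability} belief vector as an <emotion> element
    with one <category> child per vocabulary item, using the "value"
    attribute to hold the probability."""
    ET.register_namespace("", EMOTIONML_NS)  # emit EmotionML as default ns
    emotion = ET.Element("{%s}emotion" % EMOTIONML_NS,
                         {"category-set": category_set})
    for word, prob in sorted(beliefs.items()):
        ET.SubElement(emotion, "{%s}category" % EMOTIONML_NS,
                      {"name": word, "value": "%.3f" % prob})
    return ET.tostring(emotion, encoding="unicode")

# Hypothetical two-word belief state over an assumed vocabulary URI:
xml = belief_state_to_emotionml({"joy": 0.7, "anger": 0.3},
                                "http://example.com/emo20q-vocab#words")
```

Note that, per the validation issue mentioned above, item names used this way must not contain spaces.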
The aim of this section is to describe the range of test assertions developed for the EmotionML 1.0 Specification. The table lists all the assertions that were derived from the EmotionML 1.0 Specification.
The Assert ID column uniquely identifies the assertion. The Feature column indicates the specific elements or attributes to which the test assertion applies. The Spec column identifies the section of the EmotionML 1.0 Specification from which the assertion was derived. The REQ column is a Y/N value indicating whether the test assertion is for a required feature. The SUB column is a Y/N value indicating whether the test assertion is a sub-constraint that depends on the implementation of the preceding non-sub-constraint feature test assertion. The Semantics column specifies the semantics of the feature or the constraint which must be met. The Results columns give the number of 'pass', 'fail', and 'not implemented' (P/F/NI) outcomes across the set of implementation reports.
Test assertions are classified into two types: basic test assertions, which test for the presence of each feature, and sub-constraints, which apply only if that particular feature is implemented. Generally, sub-constraints encode structural constraints that could not be expressed in the EmotionML schema. Sub-constraints are marked with 'SUB CONSTRAINT:' in the Semantics field.
The most fundamental test of a conforming EmotionML implementation is that the EmotionML documents it utilizes must successfully validate with respect to the EmotionML XML Schema.
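Beyond full schema validation, the root-element assertions in the table (100–111) can be checked structurally. The following Python sketch uses the standard xml.etree.ElementTree module on a minimal standalone document; the category-set URI assumes the Big-6 vocabulary published alongside EmotionML, and the check is illustrative only, not a substitute for validating against the EmotionML XML Schema.

```python
import xml.etree.ElementTree as ET

EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"

# Minimal standalone document: root <emotionml> declares the EmotionML
# namespace, version="1.0", and a category vocabulary (assumed Big-6 URI).
MINIMAL_DOC = """\
<emotionml version="1.0" xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion><category name="fear"/></emotion>
</emotionml>"""

def check_root(doc):
    """Check the root-element assertions (namespace, tag, version) that
    full schema validation would also enforce."""
    root = ET.fromstring(doc)
    assert root.tag == "{%s}emotionml" % EMOTIONML_NS, "wrong root/namespace"
    assert root.get("version") == "1.0", "version attribute must be '1.0'"
    return True
```

A real consumer would typically delegate these checks to an XML Schema validator loaded with the EmotionML schema.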
Assert ID | Feature | Spec | Req | Sub | Semantics | P | F | NI
---|---|---|---|---|---|---|---|---
Document structure | ||||||||
100 | | [2.1.1] | Y | N | All EmotionML documents must validate against the XML schema. | 9 | 0 | 0
101 | emotionml | [2.1.1] | Y | N | The root element of standalone EmotionML documents MUST be <emotionml>. | 9 | 0 | 0 |
102 | emotionml | [2.1.1] | Y | N | The <emotionml> element MUST define the EmotionML namespace: "http://www.w3.org/2009/10/emotionml". | 9 | 0 | 0 |
103 | emotion | [2.1.1] | N | N | The <emotionml> element MAY contain one or more <emotion> elements. | 9 | 0 | 0 |
104 | vocabulary | [2.1.1] | N | N | The <emotionml> element MAY contain one or more <vocabulary> elements. | 7 | 0 | 2 |
105 | info | [2.1.1] | N | N | The <emotionml> element MAY contain a single <info> element. | 5 | 0 | 4 |
110 | version | [2.1.1] | Y | N | The root element of a standalone EmotionML document MUST have an attribute "version". | 9 | 0 | 0 |
111 | version | [2.1.1] | Y | N | The "version" attribute of <emotionml> MUST have the value "1.0". | 9 | 0 | 0
112 | category-set | [2.1.1] | N | N | The <emotionml> element MAY contain an attribute "category-set". | 8 | 0 | 1 |
113 | category-set | [2.1.1] | Y | N | The "category-set" attribute of <emotionml>, if present, MUST be of type xsd:anyURI. | 8 | 0 | 1 |
114 | | [2.1.1] | Y | Y | SUB CONSTRAINT: The "category-set" attribute of <emotionml>, if present, MUST refer to the ID of a <vocabulary> element with type="category". | 8 | 0 | 1
115 | dimension-set | [2.1.1] | N | N | The <emotionml> element MAY contain an attribute "dimension-set". | 6 | 0 | 3 |
116 | dimension-set | [2.1.1] | Y | N | The "dimension-set" attribute of <emotionml>, if present, MUST be of type xsd:anyURI. | 6 | 0 | 3 |
117 | | [2.1.1] | Y | Y | SUB CONSTRAINT: The "dimension-set" attribute of <emotionml>, if present, MUST refer to the ID of a <vocabulary> element with type="dimension". | 6 | 0 | 3
118 | appraisal-set | [2.1.1] | N | N | The <emotionml> element MAY contain an attribute "appraisal-set". | 4 | 0 | 5 |
119 | appraisal-set | [2.1.1] | Y | N | The "appraisal-set" attribute of <emotionml>, if present, MUST be of type xsd:anyURI. | 4 | 0 | 5 |
120 | | [2.1.1] | Y | Y | SUB CONSTRAINT: The "appraisal-set" attribute of <emotionml>, if present, MUST refer to the ID of a <vocabulary> element with type="appraisal". | 4 | 0 | 5
121 | action-tendency-set | [2.1.1] | N | N | The <emotionml> element MAY contain an attribute "action-tendency-set". | 4 | 0 | 5 |
122 | action-tendency-set | [2.1.1] | Y | N | The "action-tendency-set" attribute of <emotionml>, if present, MUST be of type xsd:anyURI. | 4 | 0 | 5 |
123 | | [2.1.1] | Y | Y | SUB CONSTRAINT: The "action-tendency-set" attribute of <emotionml>, if present, MUST refer to the ID of a <vocabulary> element with type="action-tendency". | 4 | 0 | 5
124 | emotionml | [2.1.1] | N | N | The <emotionml> element MAY contain arbitrary plain text. | 4 | 0 | 5 |
150 | category | [2.1.2] | N | N | The <emotion> element MAY contain one or more <category> elements. | 8 | 0 | 1 |
151 | dimension | [2.1.2] | N | N | The <emotion> element MAY contain one or more <dimension> elements. | 7 | 0 | 2 |
152 | appraisal | [2.1.2] | N | N | The <emotion> element MAY contain one or more <appraisal> elements. | 4 | 0 | 5 |
153 | action-tendency | [2.1.2] | N | N | The <emotion> element MAY contain one or more <action-tendency> elements. | 4 | 0 | 5 |
154 | reference | [2.1.2] | N | N | The <emotion> element MAY contain one or more <reference> elements. | 5 | 0 | 4 |
155 | info | [2.1.2] | N | N | The <emotion> element MAY contain a single <info> element. | 4 | 0 | 5 |
156 | emotion | [2.1.2] | Y | N | The <emotion> element MUST contain at least one <category> or <dimension> or <appraisal> or <action-tendency> element. | 9 | 0 | 0 |
157 | emotion | [2.1.2] | N | N | The allowed child elements of <emotion> MAY occur in any order. | 9 | 0 | 0 |
158 | emotion | [2.1.2] | N | N | The allowed child elements of <emotion> MAY occur in any combination. | 9 | 0 | 0 |
159 | category-set | [2.1.2] | N | N | The <emotion> element MAY contain an attribute "category-set". | 7 | 0 | 2 |
160 | category-set | [2.1.2] | Y | N | The "category-set" attribute of <emotion>, if present, MUST be of type xsd:anyURI. | 8 | 0 | 1 |
161 | | [2.1.2] | Y | Y | SUB CONSTRAINT: The "category-set" attribute of <emotion>, if present, MUST refer to the ID of a <vocabulary> element with type="category". | 8 | 0 | 1
162 | dimension-set | [2.1.2] | N | N | The <emotion> element MAY contain an attribute "dimension-set". | 7 | 0 | 2 |
163 | dimension-set | [2.1.2] | Y | N | The "dimension-set" attribute of <emotion>, if present, MUST be of type xsd:anyURI. | 7 | 0 | 2 |
164 | | [2.1.2] | Y | Y | SUB CONSTRAINT: The "dimension-set" attribute of <emotion>, if present, MUST refer to the ID of a <vocabulary> element with type="dimension". | 7 | 0 | 2
165 | appraisal-set | [2.1.2] | N | N | The <emotion> element MAY contain an attribute "appraisal-set". | 4 | 0 | 5 |
166 | appraisal-set | [2.1.2] | Y | N | The "appraisal-set" attribute of <emotion>, if present, MUST be of type xsd:anyURI. | 4 | 0 | 5 |
167 | | [2.1.2] | Y | Y | SUB CONSTRAINT: The "appraisal-set" attribute of <emotion>, if present, MUST refer to the ID of a <vocabulary> element with type="appraisal". | 4 | 0 | 5
168 | action-tendency-set | [2.1.2] | N | N | The <emotion> element MAY contain an attribute "action-tendency-set". | 4 | 0 | 5 |
169 | action-tendency-set | [2.1.2] | Y | N | The "action-tendency-set" attribute of <emotion>, if present, MUST be of type xsd:anyURI. | 4 | 0 | 5 |
170 | | [2.1.2] | Y | Y | SUB CONSTRAINT: The "action-tendency-set" attribute of <emotion>, if present, MUST refer to the ID of a <vocabulary> element with type="action-tendency". | 4 | 0 | 5
171 | version | [2.1.2] | N | N | The <emotion> element MAY have an attribute "version". | 6 | 0 | 3 |
172 | version | [2.1.2] | Y | N | The "version" attribute of <emotion>, if present, MUST have the value "1.0". | 6 | 0 | 3 |
173 | id | [2.1.2] | N | N | The <emotion> element MAY contain an attribute "id". | 3 | 0 | 6 |
174 | id | [2.1.2] | Y | N | The "id" attribute of <emotion>, if present, MUST be of type xsd:ID. | 2 | 0 | 6 |
175 | start | [2.1.2] | N | N | The <emotion> element MAY have an attribute "start". | 3 | 0 | 6 |
176 | end | [2.1.2] | N | N | The <emotion> element MAY have an attribute "end". | 3 | 0 | 6 |
177 | duration | [2.1.2] | N | N | The <emotion> element MAY have an attribute "duration". | 3 | 0 | 6 |
178 | time-ref-uri | [2.1.2] | N | N | The <emotion> element MAY have an attribute "time-ref-uri". | 2 | 0 | 7 |
179 | time-ref-anchor-point | [2.1.2] | N | N | The <emotion> element MAY have an attribute "time-ref-anchor-point". | 2 | 0 | 7 |
180 | offset-to-start | [2.1.2] | N | N | The <emotion> element MAY have an attribute "offset-to-start". | 2 | 0 | 7 |
181 | expressed-through | [2.1.2] | N | N | The <emotion> element MAY have an attribute "expressed-through". | 2 | 0 | 7 |
182 | emotion | [2.1.2] | N | N | The <emotion> element MAY contain arbitrary plain text. | 5 | 0 | 4 |
Representations of emotions and related states | ||||||||
210 | category | [2.2.1] | Y | N | If the <category> element is used, a category vocabulary MUST be declared using a "category-set" attribute on either the enclosing <emotion> element or the root element <emotionml>. | 8 | 0 | 1 |
211 | name | [2.2.1] | Y | N | A category element MUST contain a "name" attribute. | 8 | 0 | 1 |
212 | | [2.2.1] | Y | Y | SUB CONSTRAINT: The value of the "name" attribute of the <category> element MUST be contained in the declared category vocabulary. If both the <emotionml> and the <emotion> element have a "category-set" attribute, then the <emotion> element's attribute defines the declared category vocabulary. | 8 | 0 | 1
213 | name | [2.2.1] | Y | N | For any given category name in the set, zero or one occurrence is allowed within an <emotion> element, i.e. a category with name "x" MUST NOT appear twice in one <emotion> element. | 8 | 0 | 1 |
214 | value | [2.2.1] | N | N | A <category> MAY contain a "value" attribute. | 6 | 0 | 3 |
215 | trace | [2.2.1] | N | N | A <category> MAY contain a <trace> element. | 4 | 0 | 5 |
216 | value / trace | [2.2.1] | Y | N | A <category> MUST NOT contain both a "value" attribute and a <trace> element. | 6 | 0 | 3 |
217 | confidence | [2.2.1] | N | N | A <category> element MAY contain a "confidence" attribute. | 4 | 0 | 5 |
220 | dimension | [2.2.2] | Y | N | If the <dimension> element is used, a dimension vocabulary MUST be declared using a "dimension-set" attribute on either the enclosing <emotion> element or the root element <emotionml>. | 7 | 0 | 2 |
221 | name | [2.2.2] | Y | N | A <dimension> element MUST contain a "name" attribute. | 7 | 0 | 2 |
222 | | [2.2.2] | Y | Y | SUB CONSTRAINT: The value of the "name" attribute of the <dimension> element MUST be contained in the declared dimension vocabulary. If both the <emotionml> and the <emotion> element have a "dimension-set" attribute, then the <emotion> element's attribute defines the declared dimension vocabulary. | 7 | 0 | 2
223 | name | [2.2.2] | Y | N | For any given dimension name in the set, zero or one occurrence is allowed within an <emotion> element, i.e. a dimension with name "x" MUST NOT appear twice in one <emotion> element. | 7 | 0 | 2
224 | value / trace | [2.2.2] | Y | N | A <dimension> MUST contain either a "value" attribute or a <trace> element. | 7 | 0 | 2 |
225 | confidence | [2.2.2] | N | N | A <dimension> element MAY contain a "confidence" attribute. | 4 | 0 | 5 |
230 | appraisal | [2.2.3] | Y | N | If the <appraisal> element is used, an appraisal vocabulary MUST be declared using an "appraisal-set" attribute on either the enclosing <emotion> element or the root element <emotionml>. | 4 | 0 | 5 |
231 | name | [2.2.3] | Y | N | An <appraisal> element MUST contain the "name" attribute. | 4 | 0 | 5 |
232 | | [2.2.3] | Y | Y | SUB CONSTRAINT: The value of the "name" attribute of the <appraisal> element MUST be contained in the declared appraisal vocabulary. If both the <emotionml> and the <emotion> element have an "appraisal-set" attribute, then the <emotion> element's attribute defines the declared appraisal vocabulary. | 4 | 0 | 5
233 | name | [2.2.3] | Y | N | For any given appraisal name in the set, zero or one occurrence is allowed within an <emotion> element, i.e. an appraisal with name "x" MUST NOT appear twice in one <emotion> element. | 4 | 0 | 5 |
234 | value | [2.2.3] | N | N | An <appraisal> element MAY contain a "value" attribute. | 4 | 0 | 5 |
235 | trace | [2.2.3] | N | N | An <appraisal> element MAY contain a <trace> element. | 3 | 0 | 6 |
236 | value / trace | [2.2.3] | Y | N | An <appraisal> element MUST NOT contain both a "value" attribute and a <trace> element. | 3 | 0 | 6
237 | confidence | [2.2.3] | N | N | An <appraisal> element MAY contain a "confidence" attribute. | 4 | 0 | 5 |
240 | action-tendency | [2.2.4] | Y | N | If the <action-tendency> element is used, an action tendency vocabulary MUST be declared using an "action-tendency-set" attribute on either the enclosing <emotion> element or the root element <emotionml>. | 4 | 0 | 5 |
241 | name | [2.2.4] | Y | N | An <action-tendency> element MUST contain the "name" attribute. | 4 | 0 | 5 |
242 | | [2.2.4] | Y | Y | SUB CONSTRAINT: The value of the "name" attribute of the <action-tendency> element MUST be contained in the declared action tendency vocabulary. If both the <emotionml> and the <emotion> element have an "action-tendency-set" attribute, then the <emotion> element's attribute defines the declared action tendency vocabulary. | 4 | 0 | 5
243 | name | [2.2.4] | Y | N | For any given action tendency name in the set, zero or one occurrence is allowed within an <emotion> element, i.e. an action tendency with name "x" MUST NOT appear twice in one <emotion> element. | 4 | 0 | 5 |
244 | value | [2.2.4] | N | N | An <action-tendency> element MAY contain a "value" attribute. | 4 | 0 | 5 |
245 | trace | [2.2.4] | N | N | An <action-tendency> element MAY contain a <trace> element. | 3 | 0 | 6 |
246 | value / trace | [2.2.4] | Y | N | An <action-tendency> element MUST NOT contain both a "value" attribute and a <trace> element. | 3 | 0 | 6 |
247 | confidence | [2.2.4] | N | N | An <action-tendency> element MAY contain a "confidence" attribute. | 4 | 0 | 5 |
Meta-information | ||||||||
300 | confidence | [2.3.1] | Y | N | The value of the "confidence" attribute MUST be a floating point number in the closed interval [0, 1]. | 4 | 0 | 5 |
301 | expressed-through | [2.3.2] | Y | N | The attribute "expressed-through" of the <emotion> element, if present, MUST be of type xsd:nmtokens. | 4 | 0 | 5 |
302 | info | [2.3.3] | N | N | The <info> element MAY contain any elements with a namespace different from the EmotionML namespace, "http://www.w3.org/2009/10/emotionml". | 4 | 0 | 5 |
303 | info | [2.3.3] | N | N | The <info> element MAY contain arbitrary plain text. | 5 | 0 | 4 |
304 | info | [2.3.3] | Y | N | The <info> element MUST NOT contain any elements in the EmotionML namespace, "http://www.w3.org/2009/10/emotionml". | 4 | 0 | 5 |
305 | id | [2.3.3] | N | N | The <info> element MAY contain an attribute "id". | 4 | 0 | 5 |
306 | id | [2.3.3] | Y | N | The "id" attribute of the <info> element, if present, MUST be of type xsd:ID. | 4 | 0 | 5 |
References and time | ||||||||
410 | uri | [2.4.1] | Y | N | The <reference> element MUST contain a "uri" attribute. | 6 | 0 | 3 |
411 | uri | [2.4.1] | Y | N | The "uri" attribute of <reference> MUST be of type xsd:anyURI. | 6 | 0 | 3 |
412 | | [2.4.1] | N | Y | SUB CONSTRAINT: The URI in the "uri" attribute of a <reference> element MAY be extended by a media fragment. | 3 | 0 | 6
413 | role | [2.4.1] | N | N | The <reference> element MAY contain a "role" attribute. | 5 | 0 | 4 |
414 | role | [2.4.1] | Y | N | The value of the "role" attribute of the <reference> element, if present, MUST be one of "expressedBy", "experiencedBy", "triggeredBy", "targetedAt". | 5 | 0 | 4 |
415 | media-type | [2.4.1] | N | N | The <reference> element MAY contain a "media-type" attribute. | 2 | 0 | 7 |
416 | media-type | [2.4.1] | Y | N | The value of the "media-type" attribute of the <reference> element, if present, MUST be of type xsd:string. | 2 | 0 | 7 |
417 | | [2.4.1] | Y | Y | SUB CONSTRAINT: The value of the "media-type" attribute of the <reference> element, if present, MUST be a valid MIME type. | 2 | 0 | 7
420 | start | [2.4.2] | Y | N | The value of the "start" attribute of <emotion>, if present, MUST be of type xsd:nonNegativeInteger. | 3 | 0 | 6 |
421 | end | [2.4.2] | Y | N | The value of the "end" attribute of <emotion>, if present, MUST be of type xsd:nonNegativeInteger. | 2 | 0 | 7 |
422 | duration | [2.4.2] | Y | N | The value of "duration" attribute of <emotion>, if present, MUST be of type xsd:nonNegativeInteger. | 2 | 0 | 7 |
423 | time-ref-uri | [2.4.2] | Y | N | The value of the "time-ref-uri" attribute of <emotion>, if present, MUST be of type xsd:anyURI. | 2 | 0 | 7 |
424 | time-ref-anchor-point | [2.4.2] | Y | N | The value of the "time-ref-anchor-point" attribute of <emotion>, if present, MUST be either "start" or "end". | 2 | 0 | 7 |
425 | offset-to-start | [2.4.2] | Y | N | The value of the "offset-to-start" attribute of <emotion>, if present, MUST be of type xsd:integer. | 2 | 0 | 7 |
Scale values | ||||||||
500 | value | [2.5.1] | Y | N | The value of a "value" attribute, if present, MUST be a floating point value from the closed interval [0, 1]. | 7 | 0 | 2 |
501 | freq | [2.5.2] | Y | N | The <trace> element MUST have a "freq" attribute. | 5 | 0 | 4 |
502 | freq | [2.5.2] | Y | N | The value of the "freq" attribute of <trace> MUST be a positive floating point number followed by optional whitespace followed by "Hz". | 5 | 0 | 4 |
503 | samples | [2.5.2] | Y | N | The <trace> element MUST have a "samples" attribute. | 5 | 0 | 4 |
504 | samples | [2.5.2] | Y | N | The value of the "samples" attribute of <trace> MUST be a space-separated list of floating point values from the closed interval [0, 1]. | 5 | 0 | 4 |
Defining vocabularies for representing emotions | ||||||||
600 | item | [3.1.1] | Y | N | A <vocabulary> element MUST contain one or more <item> elements. | 6 | 0 | 3 |
601 | info | [3.1.1] | N | N | A <vocabulary> element MAY contain a single <info> element. | 2 | 0 | 7 |
602 | type | [3.1.1] | Y | N | A <vocabulary> element MUST contain a "type" attribute. | 6 | 0 | 3 |
603 | type | [3.1.1] | Y | N | The value of the "type" attribute of the <vocabulary> element MUST be one of "category", "dimension", "action-tendency" or "appraisal". | 6 | 0 | 3 |
604 | id | [3.1.1] | Y | N | A <vocabulary> element MUST contain an "id" attribute. | 6 | 0 | 3
605 | id | [3.1.1] | Y | N | The value of the "id" attribute of the <vocabulary> element MUST be of type xsd:ID. | 6 | 0 | 3
606 | info | [3.1.2] | N | N | An <item> element MAY contain a single <info> element. | 3 | 0 | 6 |
607 | name | [3.1.2] | Y | N | An <item> element MUST contain a "name" attribute. | 5 | 0 | 4 |
608 | name | [3.1.2] | Y | N | An <item> MUST NOT have the same name as any other <item> within the same <vocabulary>. | 6 | 0 | 3 |
Conformance | ||||||||
700 | | [4.1] | Y | N | All EmotionML elements MUST use the EmotionML namespace, "http://www.w3.org/2009/10/emotionml". | 9 | 0 | 0
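As an illustration of how a consumer might check value-pattern assertions such as 502 and 504 for the <trace> element, the following Python sketch is one possible implementation; the function names are ours and do not come from any reported implementation, and the regular expression for "freq" assumes simple decimal notation.

```python
import re

def valid_freq(value):
    """Assertion 502: a positive floating point number, followed by
    optional whitespace, followed by "Hz" (decimal notation assumed)."""
    m = re.match(r'^(\d+(?:\.\d+)?)\s*Hz$', value)
    return bool(m) and float(m.group(1)) > 0

def valid_samples(value):
    """Assertion 504: a space-separated list of floating point values,
    each from the closed interval [0, 1]."""
    try:
        nums = [float(tok) for tok in value.split()]
    except ValueError:
        return False
    return bool(nums) and all(0.0 <= n <= 1.0 for n in nums)
```

For example, freq="10Hz" with samples="0.1 0.5 1.0" would pass both checks, while freq="0Hz" or a sample of 1.2 would fail.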
The Multimodal Working Group would like to acknowledge the contributions of several individuals: