ISSUE-29: how should audio visual speech recognition be annotated

State: CLOSED
Product: EMMA
Raised by: Michael Johnston
Opened on: 2006-12-12
Description:
How should the results of audio visual speech recognition
be annotated?

mode=voice

medium=acoustic,visual?

More work is needed to clarify the difference in meaning
between medium and mode, and how each applies to cases such
as AV speech recognition.
Related Action Items:
No related actions
Related emails:
  1. ISSUE-98 (EMO-29): SMIL or EMMA-like representation of time? [EmotionML] (from sysbot+tracker@w3.org on 2009-11-02)
  2. Re: [emo] Issues in EmotionML (from ashimura@w3.org on 2009-10-31)
  3. [emo] Issues in EmotionML (from schroed@dfki.de on 2009-10-30)
  4. Re: [emma] resolution of open issues in issue tracker (from ashimura@w3.org on 2007-10-31)
  5. [emma] resolution of open issues in issue tracker (from johnston@research.att.com on 2007-10-29)
  6. Re: issue tracker issues (from ashimura@w3.org on 2007-03-28)
  7. [emma] draft 032107-diff (some more changes and list of open issues) (from paolo.baggia@loquendo.com on 2007-03-21)
  8. ISSUE-29: how should audio visual speech recognition be annotated [EMMA] (from dean+cgi@w3.org on 2006-12-12)

Related notes:


This issue has been resolved: both medium and mode can take
multiple values:

mode=voice,camera

medium=acoustic,visual

Michael Johnston, 29 Oct 2007, 21:42:41
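
For illustration, a minimal sketch of an EMMA result annotated
under this resolution. It assumes the space-separated list
syntax that the published EMMA 1.0 Recommendation uses for the
emma:medium and emma:mode attributes; the interpretation id,
tokens, confidence, and application payload are hypothetical:

  <emma:emma version="1.0"
      xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- One interpretation drawing on both input channels:
         two media (acoustic, visual) and two modes (voice,
         camera), as in the resolution above -->
    <emma:interpretation id="avsr1"
        emma:medium="acoustic visual"
        emma:mode="voice camera"
        emma:confidence="0.9"
        emma:tokens="flights to boston">
      <destination>Boston</destination>
    </emma:interpretation>
  </emma:emma>

Here medium names the physical channels the signal arrived on,
while mode names the modalities interpreted over them, which is
the medium/mode distinction the description above asks to have
clarified.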
