IRC log of media on 2009-11-01

Timestamps are in UTC.

17:07:50 [RRSAgent]
RRSAgent has joined #media
17:07:50 [RRSAgent]
logging to http://www.w3.org/2009/11/01-media-irc
17:07:55 [Hixie]
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-July/021125.html
17:07:56 [chaals]
chaals has joined #media
17:08:06 [plh]
rrsagent, make log public
17:08:18 [plh]
Chair: Dave and John
17:08:36 [chaals]
ScribeNick: chaals
17:08:51 [plh]
Meeting: Accessibility of Media Elements in HTML 5 Gathering
17:09:02 [chaals]
Scribe: Chaals
17:11:03 [dsinger]
dsinger has joined #media
17:11:22 [ChrisL]
rrsagent, make logs public
17:11:42 [dsinger]
dsinger has changed the topic to: Making Excessible Displays Imminently Available
17:17:01 [dsinger]
dsinger has changed the topic to: Media Experts Define Innovative Accessibility
17:22:46 [chaals]
chaals has changed the topic to: Moderately Excessive Debating Idiocy: Accessibility
17:23:13 [fsasaki]
fsasaki has joined #media
17:34:20 [chaals]
agenda+ Introductions
17:34:25 [chaals]
agenda?
17:34:34 [Zakim]
Zakim has joined #media
17:34:38 [chaals]
agenda+ Introductions
17:34:49 [chaals]
agenda+ set agenda...
17:35:17 [chaals]
agenda+ Changes for HTML5?
17:35:21 [chaals]
agenda+ Changes for CSS?
17:35:48 [dsinger]
dsinger has joined #media
17:35:53 [joakim]
joakim has joined #media
17:35:56 [chaals]
agenda+ What's next - Review of decisions and actions
17:36:04 [dsinger]
dsinger has left #media
17:36:09 [chaals]
agenda+ Any Other Business
17:36:46 [dsinger]
dsinger has joined #media
17:40:54 [soap]
soap has joined #media
17:42:53 [SCain]
SCain has joined #media
17:45:12 [dsinger]
wondering where a few people are: silvia, judy, and a few others
17:45:58 [MichaelC]
MichaelC has joined #media
17:46:22 [ChrisL2]
ChrisL2 has joined #media
17:47:38 [chaals]
chaals has changed the topic to: Accessible video media discussion thing
17:48:42 [eric_carlson]
eric_carlson has joined #media
17:49:15 [mattmay]
mattmay has joined #media
17:49:46 [chaals]
zakim, agendum 1
17:49:46 [Zakim]
I don't understand 'agendum 1', chaals
17:49:54 [chaals]
zakim, next agendum
17:49:54 [Zakim]
agendum 1. "Introductions" taken up [from chaals]
17:50:04 [chaals]
Topic: Introductions
17:50:36 [chaals]
JF: I am John Foliot. I want to show the Captioning service from Stanford at some point.
17:50:47 [chaals]
... work on accessibility services at Stanford
17:51:10 [chaals]
JS: Janina Sajka, chair WAI Protocols and Formats. Want to present some thoughts about need for controls API
17:51:39 [chaals]
SC: Sally Cain, RNIB. Member of W3C PF group. Echoing what Janina said, want to talk about audio description
17:52:26 [chaals]
HT: Hiro, from IBM lab Tokyo. We have been working on this stuff for a decade. Am a general chair for W4A. Want to introduce our audio description stuff.
17:53:17 [chaals]
MM: Matt May, Adobe. Want to talk about what we learned from accessibility in Flash, and the authoring tools context
17:53:59 [chaals]
MC: Michael Cooper, work as WAI staff. We're interested in W3C technology having accessibility baked in. I am interested in using existing stuff rather than new things where effective.
17:55:02 [chaals]
Frank: Microsoft, here to listen
17:55:45 [chaals]
KH: Ken Harrenstien, Google. Work on captioning systems - would like to show a bit of what we have done, and have tried to avoid standards work for ages (but failed now ;) )
17:56:14 [chaals]
Joakim: researcher at Ericsson, everyday job is on indexing (photo tagging, media services, ...)
17:56:25 [chaals]
... co-chair of media annotation group at W3C
17:56:49 [chaals]
... want to talk about what we have done at Ericsson.
17:57:33 [chaals]
FS: Felix Sasaki, from University of Applied Sciences in Potsdam. Teaching metadata and in media annotation workshop. Not presenting in particular
17:57:57 [chaals]
Marisa: DAISY Consortium developer (we make Digital Talking Book standards)
17:58:04 [chaals]
... here to learn
17:58:35 [chaals]
... can present a bit about DAISY and possibilities with HTML5
17:58:57 [chaals]
DB: Dick Bulterman, co-chair of SYMM (group at W3C doing SMIL)
17:59:27 [chaals]
... I have been working on this for 14 years :( Researcher at CWI in Amsterdam, interested in authoring systems for multimedia presentations.
17:59:59 [chaals]
... would like to talk about what SMIL has done, maybe demo a captioning system we have for YouTube and some separate caption streams allowing 3rd-party personalisation
18:00:10 [chaals]
IH: Ian Hickson, Google, HTML5 editor.
18:00:57 [chaals]
CL: Chris Lilley, W3C Hypertext CG cochair, CSS and SVG group, etc. Want to make sure that whatever we can do will be usable in SVG as well as HTML. Interested in i18n question - make sure you can have different languages.
18:01:51 [chaals]
PLH: Philippe le Hegaret, W3C. Responsible for HTML, and video, within W3C. Late-arriving participant in timed text working group. Hoping to get that work finished, have a demo of timed text with HTML5.
18:02:04 [chaals]
... Didn't hear anyone wanting to present the current state in HTML5
18:03:01 [chaals]
JF: Also representing Pierre-Antoine Champin from Liris (France), who are working on this.
18:03:34 [chaals]
EC: Eric Carlson, Apple. Mostly responsible for engineering HTML5 media elements in WebKit
18:04:47 [chaals]
CMN: Chaals, Opera. In charge of standards, hope not to present anything but interested in i18n and use of accessibility methods across different technologies without reinventing wheels
18:05:19 [chaals]
DS: Dave Singer, Apple, head of multimedia standards. Interested in building up a framework that gets better accessibility over time.
18:06:29 [chaals]
SP: Silvia Pfeiffer, work part time for Mozilla, trying to figure out how to get accessibility into multimedia (including looking at karaoke and various other time-based systems). Have some demo stuff to show, but want to review the requirements we have...
18:07:02 [fo]
fo has joined #media
18:07:50 [chaals]
GF: Geoff Freed, NCAM, want to talk about captions and audio description in HTML5
18:09:14 [chaals]
JB: Judy Brewer, head of WAI at W3C. Interested in how the options for accessible media affect the user, especially when there are multiple options.
18:10:12 [chaals]
DS: Please do not speak over each other, speak clearly and slowly so interpreters and scribe can follow.
18:11:42 [chaals]
Topic: agenda bashing
18:11:54 [chaals]
DB: SMIL current experience
18:12:43 [chaals]
[we get: DAISY, Geoff, Google, Timed Text, Stanford Captioning, Silvia, Matt]
18:13:07 [chaals]
DS: go like this:
18:13:18 [chaals]
1. Geoff - about multimedia
18:13:24 [chaals]
2. John, Stanford
18:13:34 [chaals]
3. Ken, Google stuff
18:13:47 [chaals]
4. Marisa, Daisy
18:14:00 [chaals]
James Craig rolls in from Apple
18:14:24 [chaals]
5. Silvia, stuff she has done
18:14:30 [chaals]
6. Matt (Flash)
18:14:43 [chaals]
7. Dick - SMIL-based captioning
18:15:05 [chaals]
8. Philippe, Timed Text
18:15:28 [chaals]
2.5. Pierre's video
18:15:56 [dsinger]
geoff: how do we do your presentation?
18:16:12 [chaals]
Topic: Geoff Freed
18:16:20 [chaals]
GF: Want to talk a little about my concerns.
18:16:39 [chaals]
... we also did some Javascript real-time captioning
18:17:30 [chaals]
GF: We have been playing around at NCAM with some captioning stuff that might work with video as is in HTML5, using real-time captioning.
18:18:07 [chaals]
... we have been testing by stripping captions from a broadcast stream and embedding them in a page using javascript. We have a way to change channels - e.g. to have live captioning for an event like this
18:18:26 [chaals]
... or it could be used as a way to stream caption data over different channels.
18:18:43 [chaals]
... not the ideal way, it involves some outside work
18:19:08 [chaals]
... would like to see something like a caption element rather than having to inject stuff with JS because that is probably not the most efficient way.
18:19:21 [chaals]
... We have a demo, I will send a screenshot
18:20:15 [Judy]
Judy has joined #media
18:21:22 [chaals]
Topic: John Foliot and Stanford workflow for Captioning
18:23:36 [chaals]
JF: My role is assisting content providers to get accessible content online. Video has been a problem for some time.
18:24:10 [chaals]
... Feedback from people with Video was that they found it expensive and difficult to actually make it happen when they were just staff doing something else, not video experts.
18:24:27 [chaals]
... We made a workflow that allows staff to get captioned video online more or less by magic.
18:24:38 [chaals]
... We set up a user account, and they can upload a media file.
18:25:02 [silvia]
silvia has joined #media
18:25:04 [chaals]
http://captiontool.stanford.edu/public/UploadFile.aspx
18:25:42 [chaals]
... We have contracted with professional transcription companies (auto-transcription was not accurate enough for us) to do transcripts.
18:26:14 [chaals]
... $5/minute for 24-hour turnaround, $1.50/minute for 5-day turnaround ($90 per hour of video).
18:26:57 [chaals]
... System allows us to use multiple transcription services. We have created some custom dictionaries (e.g. people's names, technical terms we use, and so on)
18:27:27 [chaals]
... Content owners can also add special terms if they use something odd, to improve accuracy.
18:28:00 [chaals]
... If you already have a transcript you can upload that instead. (We then do the remaining work for free)
18:28:51 [chaals]
JF: Upload file, and we generate multiple formats - FLV, MP4, MP3 (which is sent to transcript company).
18:29:26 [chaals]
... email is sent to transcription company when we have the file, and one to content producer so they can start putting their content online even if they haven't yet received the captions.
18:29:57 [chaals]
... When transcription is done it is returned by the transcription company into the web interface.
18:30:23 [chaals]
... Then we do some automatic timestamp generation to turn transcript into various formats.
18:30:58 [chaals]
... User gets an email saying they have their captions, and we give them some code to copy that incorporates the captions into the web.
18:32:01 [chaals]
... This is not quite shrink-wrapped and painless, but it is pretty close. You still have to shift some files yourself from server to server, but we are working on automating these steps too.
18:32:31 [chaals]
... We rolled out the system at the end of the summer, have some users now and are talking to campus content creators to roll it out widely.
18:33:21 [chaals]
JF: Scalability of production is important. Stanford makes maybe 200 hours of video / week, and a bunch of that has archival value. So we need to be able to run it simply, and scalably.
18:33:45 [chaals]
SP: You mentioned a company that does timestamping
18:34:01 [chaals]
JF: Docsoft http://docsoft.com
18:34:05 [chaals]
SP: Open source?
18:34:52 [chaals]
JF: Nope. Shrinkwrap product based on Dragon. Speech recognition not close enough for datamining, but good enough to automate timestamping completely.
18:37:06 [chaals]
JF: We focus on the datamining aspect of this as much as the accessibility
18:37:17 [chaals]
JC: Have you experimented with the UI in the video player for this?
18:37:39 [chaals]
JF: Not at this point. Also talking to reelsurfer about this. We are looking at it.
18:38:05 [chaals]
SP: Linking directly - should mention that there is a W3C Media Fragments group, looking at standards to directly address into a time with a URI
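[Example for context: a temporal Media Fragments URI, per that group's draft syntax; the file name is illustrative.]
http://example.com/video.ogv#t=60,100                  (play from second 60 to second 100)
http://example.com/video.ogv#t=npt:00:01:20,00:01:35   (the same range in normal-play-time clock format)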
18:39:38 [chaals]
JF: We see people putting stuff onto YouTube - and that will use our caption materials if they are there. We are giving people the ability to do this stuff...
18:39:56 [chaals]
ACTION: John to provide a link that shows us some of how the Stanford system works and looks for users
18:40:20 [ChrisL2]
Please let us have your sample code for embedding, so we can see what people are using today
18:41:30 [chaals]
Topic: Pierre-Antoine Champin's video
18:45:47 [chaals]
Video is presented (slideset and commentary)
18:47:48 [ChrisL2]
Wonder what "speaker diarization" is ?
18:50:49 [chaals]
SP: Tracking who is speaking when
18:52:30 [chaals]
Topic: Ken Harrenstien, Google
18:53:08 [chaals]
http://www.youtube.com/watch?v=QRS8MkLhQmM
18:54:22 [chaals]
(Note that the video *relies* on the captions)
18:56:03 [chaals]
KH: Video is spoken and signed in different languages. Trying to clarify it isn't for a handful of poor deaf people, but for everyone.
18:56:15 [chaals]
(Luckily everyone here gets that)
18:56:25 [chaals]
PLH: Are the captions burned into the file or separate?
18:56:32 [chaals]
KH: Separate. I will explain that...
18:56:54 [chaals]
... A bit about numbers. Google has a very large number of videos, and everything has to work at scale.
18:57:28 [chaals]
... Numbers are important, going back to what John was saying, we show there is value for a lot of people. Now we have numbers like how many people are using captions on YouTube.
18:57:40 [chaals]
... those numbers can drive the adoption of captioning.
18:58:07 [chaals]
... we offer video searching of caption data.
18:58:34 [chaals]
(demo of actually searching, digging into a particular term's occurrence in a video)
18:59:34 [chaals]
PLH: Is there anything in the usage numbers about how much people use video and if captioning has an influence on it?
18:59:50 [chaals]
KH: Don't have numbers to give, but they are good enough that I get more support in Google than I used to :)
19:00:05 [chaals]
... want to let people see the numbers for their own video.
19:00:37 [chaals]
[Shows translation of captioning]
19:01:07 [chaals]
PLH: Speech to text, or caption as a source?
19:01:13 [janina]
janina has joined #media
19:01:14 [chaals]
KH: Taking the captions as a source.
19:01:42 [chaals]
SP: Has Google included the offset search for YouTube into the Google engine?
19:01:52 [chaals]
KH: Yep. That was what I just showed.
19:03:00 [chaals]
SP: Captions can be used to improve search results. Is Google using them for indexing videos?
19:03:02 [chaals]
KH: Yes
19:03:23 [Hiro]
Hiro has joined #media
19:03:30 [chaals]
DB: If I post a video, and someone else wants to caption later, how does that work? Can anybody caption anybody else's video?
19:03:38 [chaals]
... can I restrict who uses the content?
19:03:55 [chaals]
KH: There are several websites that allow this. You can do it with or without permission.
19:04:06 [chaals]
DB: So I effectively make a new copy of the base video?
19:04:22 [chaals]
KH: Sort of depends on how you do it. For YouTube you need to be the owner - or contact the owner.
19:04:38 [chaals]
JF: There are other 3rd party tools that pull video and transcript from separate places.
19:05:17 [chaals]
... The value of captions gets attention when people see the translation of captions
19:05:26 [chaals]
KH: Behind the scenes...
19:06:11 [chaals]
http://video.google.com/timedtext?v=aC_7NzXAJNI&lang=en
19:06:30 [jcraig]
jcraig has joined #media
19:06:31 [chaals]
SP: You use a new XML format instead of SRT - for some particular reason?
19:06:55 [chaals]
KH: We control it. SRT isn't always so sweet....
19:07:50 [chaals]
... we actually produce various formats (incl. srt among others). It's easy to add new ones, so we don't care which format people have.
19:08:35 [chaals]
http://video.google.com/timedtext?v=aC_7NzXAJNI&lang=en&tlang=ru is a live-generated translation
19:11:14 [chaals]
Topic: Multi-channel caption streamer.
19:11:18 [chaals]
[screenshot]
19:11:33 [chaals]
Caption text is stripped from the broadcast and injected in real time using Javascript.
19:11:47 [chaals]
various UI controls on top.
19:11:47 [plh]
--> http://lists.w3.org/Archives/Public/public-tt/2009Oct/0014.html NCAM real-time javascript captions
19:12:24 [chaals]
Transcript button can also give you a full log.
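[A minimal sketch of the injection approach just described, assuming a hypothetical caption feed URL and element id - not NCAM's actual code:]
<div id="captions" aria-live="polite"></div>
<script>
// Poll a caption feed and inject the newest line into the page.
function pollCaptions() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/captions/latest', true); // hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('captions').textContent = xhr.responseText;
    }
  };
  xhr.send();
}
setInterval(pollCaptions, 1000); // re-poll once a second
</script>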
19:13:48 [chaals]
Break - 17 minutes.
19:14:31 [MichaelC]
present: Judy_Brewer, Dick_Bulterman, Sally_Cain, Eric_Carlson, James_Craig, Michael_Cooper, Marisa_DeMeglio, John_Foliot, Geoff_Freed, Ken_Harrenstien, Philippe_le_Hégaret, Ian_Hickson, Chris_Lilley, Charles_McCathieNevile, Matt_May, Frank_Olivier, Silvia_Pfeiffer, Janina_Sajka, Felix_Sasaki, David_Singer, Joakim_Söderberg, Hironobu_Takagi
19:15:21 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html MichaelC
19:34:31 [ChrisL2]
ScribeNick: ChrisL2
19:34:47 [chaals]
[Please do not use camera flash]
19:34:53 [ChrisL2]
Marisa gives a presentation
19:35:06 [ChrisL2]
rrsagent, here
19:35:06 [RRSAgent]
See http://www.w3.org/2009/11/01-media-irc#T19-35-06
19:35:12 [ChrisL2]
rrsagent, draft minutes
19:35:12 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html ChrisL2
19:36:01 [ChrisL2]
John introduces a guest account for the Stanford facility to experiment with the transcription service
19:40:02 [ChrisL2]
Topic: Marisa DeMeglio, DAISY
19:40:25 [ChrisL2]
[Marisa introduces DAISY with a demo]
19:40:29 [jcraig]
http://captiontool.stanford.edu/ user: W3C_HTML5 pass: Accessible!
19:41:38 [ChrisL2]
DAISY uses SMIL to synchronise the audio with the html text
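[For context: a DAISY book ties text to audio with SMIL <par> blocks roughly like the sketch below; ids, file names and clip times are illustrative.]
<par id="sent1">
  <text src="chapter1.html#para1"/>
  <audio src="chapter1.mp3" clipBegin="0:00:00.000" clipEnd="0:00:03.500"/>
</par>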
19:42:19 [ChrisL2]
MD: DAISY 4 being defined adds forms and video. Has an authoring and a distribution side. Can HTML5 be used for distribution?
19:44:21 [ChrisL2]
MD: Native browser support for audio and video is good, but we see barriers in html5, for example the extensibility. How do we add sidebars, for example? Needs more roles than are built in.
19:44:38 [ChrisL2]
MD: Want to avoid downgrading the content to represent it
19:45:42 [ChrisL2]
JS: DAISY did a demo with two filmings of Henry V, scene by scene sync for comparative cinematography plus avatar for lip readers. Still interest in multiple synced media levels?
19:45:53 [ChrisL2]
MD: Will be able to answer that next week
19:46:49 [ChrisL2]
JS: Not clear if it fits in HTML 5 or 5.1 or 6
19:47:11 [ChrisL2]
MD: Interested to use SMIL and HTML5 together to get that synchronisation
19:47:24 [ChrisL2]
... also forms and annotations
19:47:34 [ChrisL2]
... constrained by implementations
19:48:33 [MichaelC]
present+ Victor_Tsaran
19:48:51 [ChrisL2]
Topic: Silvia_Pfeiffer, Demo and Proposals.... and Everything
19:50:32 [ChrisL2]
SP: Demos with three Firefox builds - standard, with screenreader, and custom patched to add native accessibility support
19:51:23 [ChrisL2]
SP: Need for signing and captions for hard of hearing and deaf
19:51:30 [ChrisL2]
... audio descriptions
19:51:57 [ChrisL2]
... textual transcripts for screen reader or braille
19:53:43 [ChrisL2]
SP: All of these create multiple content tracks. Multiple audio, text, and video tracks for a composite media element
19:53:56 [ChrisL2]
... need a standard JavaScript interface to control this
19:54:08 [ChrisL2]
... and content negotiation reduces download to just the items of interest
19:54:56 [ChrisL2]
SP: Text tracks are special and much more valuable outside the multimedia container. Easier to search, index etc than when buried in media
19:55:18 [ChrisL2]
... aids editability, crowdsourcing, CMS integration
19:55:42 [ChrisL2]
... sharable
19:56:24 [ChrisL2]
Proposes an itext element with @lang, @type, @charset, @src, @category and @display
19:57:03 [silvia]
https://wiki.mozilla.org/Accessibility/HTML5_captions
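[A minimal sketch of the proposal, going only by the attributes listed above - the wiki page is the authoritative version; file names are illustrative, and "CC"/"SUB" stand for the caption and subtitle categories:]
<video src="video.ogv" controls>
  <itext src="captions_en.srt" lang="en" type="text/srt" category="CC" display="auto"/>
  <itext src="subtitles_fr.srt" lang="fr" type="text/srt" category="SUB" display="none"/>
</video>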
19:57:48 [ChrisL2]
sent to whatwg and html5 list, not much discussion there, but feedback directly on the wiki
19:59:46 [ChrisL2]
[demo with video and selectable captions, subtitles]
20:00:23 [ChrisL2]
SP: Issue is that all timed text tracks are treated identically. So next proposal identifies some categories, not all supported in the implementation
20:00:37 [ChrisL2]
... but these are the categories in use based on 10 years of collecting them
20:01:00 [ChrisL2]
[demo with compiled firefox, rather than using scripting]
20:02:42 [ChrisL2]
(prettier interface compared to the scripted one, same functionality; captions have a transparent background now, like proper subtitles)
20:03:00 [ChrisL2]
CL: Question on encodings vs display
20:03:12 [ChrisL2]
PLH: Why is this an issue, Mozilla does this
20:03:58 [ChrisL2]
SP: Uses HTML existing code for layout
20:04:06 [ChrisL2]
PLH: default captioning language
20:05:02 [ChrisL2]
SP: display="auto" selects auto language negotiation. Other options are none and force
20:05:36 [ChrisL2]
... user can override author choice, but author should be able to express their design too
20:06:13 [ChrisL2]
... second proposal has a grouping element itextlist to express common categories etc rather than repeating them
20:06:40 [ChrisL2]
https://wiki.mozilla.org/Accessibility/HTML5_captions_v2
20:07:10 [ChrisL2]
DOM interface allows seeing if a given itext element is active
20:07:30 [ChrisL2]
PLH: Are you generating events?
20:07:50 [ChrisL2]
SP: Yes, onenter and onleave for new caption blocks or segments, so can listen for that
20:08:15 [ChrisL2]
PLH: charset because srt does not indicate the charset?
20:08:37 [ChrisL2]
SP: yes. Some formats don't provide this
20:08:55 [ChrisL2]
SP: charset is optional, some formats are self-describing and don't need it
20:09:04 [ChrisL2]
... no registered MIME type for srt
20:10:05 [ChrisL2]
[discussion on where SRT is defined]
20:10:06 [ChrisL2]
http://srt-subtitles.com/
20:10:49 [ChrisL2]
SP: currentText API shows the currently displayed text, so a script can manipulate the text or display it
20:11:02 [ChrisL2]
... also a currentTime interface
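[Sketch of how a script might use the interfaces just described, assuming the proposal's event and property names; the element id is illustrative:]
<script>
var track = document.getElementById('cc_en'); // an itext element
// The proposal's events fire as caption segments become active/inactive.
track.addEventListener('enter', function () {
  console.log('now showing: ' + track.currentText);
}, false);
track.addEventListener('leave', function () {
  console.log('segment ended at ' + track.currentTime);
}, false);
</script>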
20:11:35 [ChrisL2]
... works the same for external text tracks or for ones in the media container
20:11:59 [ChrisL2]
JC: So can do searching or access text that will be displayed later
20:13:05 [ChrisL2]
[demo of v2 using scripting implementation, changing language on the fly]
20:13:38 [ChrisL2]
[demo of firefox with a screen reader, firevox]
20:16:00 [ChrisL2]
[demo had audio descriptions for vision impaired, and text-to-speech audio description. Uses ARIA.]
20:16:13 [ChrisL2]
ARIA live region
20:17:01 [ChrisL2]
SC: Will that use the default screenreader, or only the one in the browser?
20:17:05 [ChrisL2]
SP: the default
20:17:14 [ChrisL2]
CMN: How do you add audio?
20:17:56 [ChrisL2]
SP: needs native support in the browser with an interface so it's the same for internal and external sources ... audio and video need to be synchronised
20:18:14 [ChrisL2]
... dynamic composition on server is recommended way to do that
20:18:48 [ChrisL2]
CMN: Issue with the third-party signing track: the third party has no access to the server where the video lives
20:19:10 [ChrisL2]
SP: text is special, but audio could be treated like that too. To be discussed
20:19:14 [ChrisL2]
... needs a spec
20:19:35 [ChrisL2]
CMN: Signed video is similarly special to subtitle text
20:20:40 [ChrisL2]
JC: Can the audio pause the video for long descriptive text (for time to read it or have it spoken)
20:21:14 [ChrisL2]
FO: If the text is inside the container and there is external text too what happens?
20:21:30 [ChrisL2]
(we agree that the priority needs to be defined)
20:21:43 [ChrisL2]
FO: User needs to decide, no one size fits all solution
20:22:00 [ChrisL2]
SP: yes we need the flexibility there and have the api to make it workable
20:22:10 [ChrisL2]
JS: Resource discovery
20:22:36 [ChrisL2]
CMN: So need to think about an API that finds internal as well as external tracks and treats them uniformly
20:23:07 [ChrisL2]
HT: Can itext elements be added or changed dynamically?
20:23:09 [ChrisL2]
SP: yes
20:23:40 [ChrisL2]
[SP demonstrates a video hosting site with Ogg audio, video, and subtitle/captioning/transcript support]
20:24:04 [ChrisL2]
(all done in HTML5 and script)
20:24:48 [ChrisL2]
http://oggify.com/ under development
20:25:20 [ChrisL2]
SP: Like YouTube except with open source and open formats and script plus HTML5
20:25:40 [ChrisL2]
FS: Where does the timing info come from?
20:25:56 [ChrisL2]
SP: From the subtitle file, they all have start and end times
20:26:40 [ChrisL2]
JS: Nested structural navigation is important. Chapters, sections etc
20:27:05 [ChrisL2]
... access to next scene, next act would be good
20:27:21 [ChrisL2]
SP: Timed text tracks have DVD-style chapter markers
20:27:41 [ChrisL2]
.. linear though, not hierarchical due to limitations of flat file
20:28:08 [ChrisL2]
MD: Yes DAISY infers structure from heading levels
20:28:41 [ChrisL2]
SP: Complex to bring in generic HTML and then display it on another HTML stream .. security issues
20:29:16 [ChrisL2]
... media fragments wg is specifying how to jump to named offsets as well as time offsets
20:29:23 [ChrisL2]
... not finished yet
20:29:50 [ChrisL2]
JS: Direct access also for bookmarking as well as flipping through
20:30:47 [ChrisL2]
SP: Chapter markers and structure expose the structural content of the video, for speed reading among others. Can do it with URIs so bookmarkable and can be in the history
20:31:36 [ChrisL2]
Topic: Matt_May, Adobe, Flash experience
20:32:15 [ChrisL2]
[Matt talks about history of accessible captioning in Flash]
20:32:54 [ChrisL2]
MM: Two minimal requirements, flash support in video since Flash 6 and later the ability to insert cue points to associate captions with the video, in Flash 8
20:33:30 [ChrisL2]
.. several attempts to crreate captioning, but they were unsynced so unsuccessful. Result was lack of adoption and a thousand hacks to try and do it
20:33:42 [ChrisL2]
... cue points got us closer, reliable feature
20:34:31 [ChrisL2]
... starting with flash 8, reliable caption sync but no standard way to do it. usually embedded int ehc ontsainer so hard coded fonts, done in actionscript. Content buried inside script
20:35:09 [ChrisL2]
MM: Came to realisation that inside-only was a naive approach, looked for alternatives. In flash 9 we supported timed text dfxp
20:35:38 [ChrisL2]
... can associate flv_playback_caption component, takes an external timed text file for captions
20:35:50 [ChrisL2]
... used an existing standard, tt was there
20:36:07 [ChrisL2]
... not re-re-re-re inventing the wheel
20:36:24 [ChrisL2]
... third parties can build captions and authoring software
20:36:48 [ChrisL2]
... hopeful that other formats adopt dfxp as well
20:37:56 [ChrisL2]
MM: Breaking out the captions, as Silvia and Ken mentioned, is important for any standard. Can embed but that's just the first step. Only download the required captions. Allows third parties to add their own content. Crowdsourcing captions
20:38:23 [ChrisL2]
... dealing with third parties adding captions later
20:39:16 [ChrisL2]
MM: For html5, also important to have captions *inline* in the html document itself. Not complex to add
20:40:14 [ChrisL2]
CL: SMIL also found a need to have text inline as an option
20:40:18 [ChrisL2]
DB: One option
20:40:46 [ChrisL2]
SP: most of my demos are at http://www.annodex.net/~silvia/itext/
20:41:14 [ChrisL2]
MM: Everyone is familiar with MAGpie?
20:41:18 [ChrisL2]
[we aren't]
20:42:04 [ChrisL2]
MM: Shows this is well tried territory, so HTML5 should be able to use existing authoring tools and workflows to do this
20:42:15 [jcraig]
MAGpie is a captioning tool from NCAM
20:42:23 [ChrisL2]
MM: So please consider existing solutions
20:42:40 [jcraig]
http://ncam.wgbh.org/webaccess/magpie/
20:43:24 [ChrisL2]
JC: Many content providers of video have no idea how captioning works, as other people do it
20:43:58 [ChrisL2]
MM: Using captions in AJAX, inserting via the DOM in real time. Issues of scrolling.
20:44:10 [ChrisL2]
[lunch break]
20:50:41 [Laura]
Laura has joined #media
21:22:45 [MichaelC]
present+ Doug_Schepers
21:33:10 [Hiro]
Hiro has joined #media
21:33:39 [SCain]
SCain has joined #media
21:36:57 [silvia]
Dick Bulterman will talk about SMIL
21:37:34 [ChrisL2]
Topic: Dick_Bulterman, CWI, SMIL Text
21:37:40 [dsinger]
ScribeNick: silvia
21:38:11 [silvia]
chaals: I will scribe *for a bit* :-)
21:39:18 [chaals]
Scribe: Silvia
21:40:45 [silvia]
"Supporting Accessible Content" - lessons from SMIL 1/2/3/
21:40:51 [silvia]
co-chair of SMIL working group
21:40:59 [shepazu]
shepazu has joined #media
21:41:06 [silvia]
head of distributed & interactive systems group at CWI
21:41:16 [silvia]
interest for long time in working with multimedia
21:41:26 [silvia]
temporal & spatial synchronisation
21:41:39 [silvia]
take-home message: a11y isn't about a particular media format
21:41:48 [silvia]
it is about supporting selectivity among peer encodings
21:42:30 [silvia]
e.g. different encoding of same content for different situations, e.g. when driving/reading/conference
21:42:45 [silvia]
it is about a coordination mechanism to manage selection of media streams
21:43:18 [silvia]
difficulty is that they change over time
21:44:09 [silvia]
it is about providing 'situational accessibility' support
21:44:15 [silvia]
nobody wants special-purpose tools
21:44:18 [plh]
--> http://www.w3.org/2009/Talks/1031-html5-video-plh/Overview.xhtml plh's slides
21:44:31 [silvia]
we should solve the problem for everybody
21:44:58 [silvia]
we need to make the temporal and spatial synchronisation explicit to be able to do the complex things
21:45:08 [silvia]
what is accessible content?
21:45:35 [silvia]
what kind of things would need to be done with a video object
21:45:46 [silvia]
it could be a svg object or another object, too
21:45:51 [silvia]
- want to add subtitles
21:46:03 [silvia]
- want to add captions and labels
21:46:28 [silvia]
labels are a cheap and simple way to communicate what is visible in the video
21:46:42 [silvia]
"you are about to see my son play an instrument"
21:46:52 [silvia]
- line art & graphics
21:47:19 [silvia]
it would be nice to have a uniform model for all types of objects
21:47:23 [silvia]
- audio descriptions
21:47:49 [silvia]
- semantic discriminators
21:48:03 [silvia]
people want things at different levels of detail at different times
21:49:01 [silvia]
Some experiences from our SMIL work:
21:49:10 [silvia]
- not all encodings will be produced by the same party
21:49:39 [silvia]
e.g. even while Disney owns the video, they may not be the ones to create the captions
21:49:45 [silvia]
- not all content will be co-located
21:50:00 [silvia]
e.g. a video may be on one server, but content enrichments will be on many different servers
21:50:24 [silvia]
if you want highly synchronised audio and video, they practically have to be in the same file
21:50:58 [silvia]
the network delays can easily add up to make it impossible to synchronise them
21:51:06 [silvia]
but you can create some things on different servers
21:51:17 [silvia]
- there may be complex content dependencies
21:51:30 [silvia]
- each piece of content may not be aware of the complete presentation
21:51:45 [silvia]
*SMIL Support for Alternative Content*
21:51:55 [silvia]
SMIL 1.0 in 1996
21:52:17 [silvia]
<switch> : selection based on system test attributes
21:52:30 [silvia]
(language, bitrate, captions)
21:52:42 [silvia]
support alternative selection of parallel tracks
21:53:13 [silvia]
demo of MIT Prof. Walter Lewin
21:53:41 [silvia]
in SMIL:
<video src="MITguy" … />
<switch>
  <text src="A" systemLanguage="nl" systemCaptions="on"/>
  <text src="B" systemCaptions="on"/>
</switch>
21:55:06 [ChrisL2]
there is an implicit <par> around that example so the video and the switch play in parallel
21:55:40 [silvia]
document order is only fallback - user preference dominates it
21:55:49 [silvia]
*SMIL 2.0 (2001)*
21:56:00 [silvia]
- custom test attributes
21:56:15 [silvia]
- added <excl> tag for pre-emptive inclusion of content
21:56:28 [silvia]
<excl> provides support for audio descriptions
21:56:35 [silvia]
event-based activation
21:57:16 [silvia]
demo of a video, which pauses on a scheduled pre-empt to wait for an audio description to be played
21:57:26 [silvia]
when that audio description is done, the video continues
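[For context, a rough SMIL 2.0 shape for the pre-empting behaviour just demonstrated - illustrative, not Dick's actual demo. With peers="pause", the audio description starting at 30s pauses the video, which resumes when the description ends:]
<excl>
  <priorityClass peers="pause">
    <video src="lecture.mpg" begin="0s"/>
    <audio src="audiodesc1.mp3" begin="30s"/>
  </priorityClass>
</excl>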
21:57:35 [silvia]
*SMIL 3.0 (2008)*
21:57:41 [silvia]
- number of different extensions
21:58:14 [RRSAgent]
I'm logging. I don't understand 'do not start a new lo', MichaelC. Try /msg RRSAgent help
21:58:18 [silvia]
- <smilText>: another timed text format - allows embedded hyperlinks, allows style sheets, allows motion, allows fine-grained text
21:58:43 [silvia]
streamable labels, captions, mW events
21:58:54 [silvia]
- smilState: allows coordination via data model
21:59:18 [silvia]
- timed, decentralized metadata
21:59:38 [silvia]
- media pan & zoom (temporal focus, e.g. Ken Burns effect, coupled with audio description)
22:00:35 [silvia]
demo of a web page with three buttons used to influence the presentation
22:01:03 [silvia]
* SmilText *
22:01:10 [silvia]
Why not simply reuse DFXP?
22:01:21 [silvia]
- it was not intended to be embedded in SMIL
22:01:28 [silvia]
- it isn't a streaming format
22:01:39 [silvia]
- it doesn't allow mix of absolute/relative/event timing
22:01:48 [silvia]
- it doesn't handle motion text
22:02:00 [silvia]
- layout + style processing are idiosyncratic
22:02:47 [silvia]
SMIL needed to support live TV streaming and supporting live captioning wasn't possible with DFXP
22:03:32 [silvia]
smilText was explicitly designed to map well with DFXP
22:03:37 [silvia]
there is still an easy mapping
22:03:57 [silvia]
- smilText is functionally compatible, with a direct mapping to DFXP
22:04:07 [silvia]
- smilText is also a direct replacement for RealText
22:04:15 [silvia]
*What will HTML5 need?*
22:04:30 [silvia]
* video object can't always determine timeline
22:04:46 [silvia]
- need external-to-video notion of temporal/spatial coordination
22:05:03 [silvia]
* Simple media control (start/stop/pause) are not rich enough
22:05:23 [silvia]
- impossible to enumerate all of the things that you may want to start/stop/pause in parallel
22:06:13 [silvia]
it might be a good idea to create a middleware that handles e.g. pausing across all involved elements
22:06:26 [silvia]
* need to support embedded and external companion content
22:06:55 [silvia]
*T/S coordination info: where?*
22:07:22 [silvia]
temporal/spatial coordination should go into a script? a web page header? a flash object? into SMIL?
22:07:28 [silvia]
a companion media object?
22:07:38 [silvia]
- very laborious for fine-grained timing
22:07:45 [silvia]
in script controlling directive activation?
22:07:52 [silvia]
- probably not, because it doesn't scale
22:08:00 [silvia]
In companion synchronization specification
22:08:04 [silvia]
- good for extensibility
22:08:09 [silvia]
- options:
22:08:13 [silvia]
— fully external
22:08:15 [silvia]
— timesheet
22:08:16 [silvia]
— internal
22:08:43 [silvia]
timesheets are like style sheets for synchronising timing
22:09:17 [silvia]
HTML+TIME is an example
22:09:35 [silvia]
there is no right answer - but it is important to have all the flexibility
22:09:43 [silvia]
*Editing Complex Presentations*
22:09:56 [silvia]
Demo: GRiNS (1996-2004)
22:10:03 [silvia]
authoring sw for smil presentations
22:10:27 [silvia]
- interactive navigation
22:10:30 [silvia]
- scalable presentations
22:11:32 [silvia]
demo: BBC did a 40min newscast with a structured view and direct access using SMIL
22:12:10 [silvia]
a structured view gives you all the possibilities of gaining the presentation in the way that you want it
22:12:18 [silvia]
*Adding Captions to 3rd Party Videos*
22:12:21 [silvia]
An Aside:
22:12:35 [silvia]
- helping the community to share comments on videos that other people own
22:12:49 [silvia]
demo: Ambulant Captioner
22:13:06 [silvia]
*Adding Captions & Labels (and Context)*
22:13:11 [silvia]
After selection, add comments
22:13:41 [silvia]
helps caption authoring
22:13:48 [silvia]
does predictive timing on your captions
22:14:09 [silvia]
it helps people produce timing more easily
22:14:34 [silvia]
*Putting Navigation into Captions*
22:14:48 [silvia]
captions are being used to provide navigation
22:14:54 [silvia]
temporal hyperlinking
22:15:05 [silvia]
intra-clip navigation
22:15:09 [silvia]
inter-clip navigation
22:15:16 [silvia]
*More about SMIL & Accessibility*
22:15:22 [silvia]
SMIL 3.0 Book:
22:15:30 [silvia]
find it on xmediaSmil.net
22:15:45 [silvia]
captioning tool: http://www.ambulantPlayer.org/smilTextWebApp/
22:16:04 [silvia]
together anywhere anytime project: http://www.ta2-project.eu
22:16:18 [silvia]
smilText: code.google.com/smiltext-javascript/
22:16:25 [silvia]
ugliest demo section :-)
22:16:35 [silvia]
JS SMIL Player: ambulantPlayer.org
22:16:45 [silvia]
timesheets: w3.org/TR/Timesheets
22:17:32 [silvia]
re-usable timing
22:17:39 [silvia]
that ends the presentation of DB on SMIL
22:17:43 [silvia]
questions?
22:18:12 [silvia]
joakim: is SMIL used?
22:18:18 [silvia]
MMS is based on SMIL
22:18:22 [silvia]
Quicktime media player
22:18:27 [silvia]
digital signage
22:18:51 [silvia]
quicktime on the desktop had a smil implementation
22:18:55 [silvia]
windows media player
22:19:04 [Judy]
s/??/joakim/
22:19:19 [silvia]
windows media player uses it to pre-empt national security events
22:19:44 [silvia]
realplayer
22:20:15 [silvia]
joakim: playlist of product presentations?
22:20:29 [silvia]
supermarkets in France or Finland use SMIL for kiosks
22:20:40 [silvia]
interactive selectivity is one of the big things there
22:20:51 [silvia]
having the target of a hyperlink change over time
22:21:11 [silvia]
we see a lot of ways of deployment of SMIL, but it's been frustrating that it's not been used more
22:21:36 [silvia]
things move incredibly slowly
22:22:08 [silvia]
even with dozens of years of experience with interactive multimedia, we still only have a <video> tag in html
22:22:57 [silvia]
SMIL has the power that easy things can be done easily, but also difficult things in a simple way
22:23:17 [silvia]
finishes Dick's presentation
22:23:29 [plh]
--> http://www.w3.org/2009/Talks/1031-html5-video-plh/Overview.xhtml plh's slides
22:23:34 [silvia]
next: Philippe on the state of timed text (DFXP)
22:23:50 [silvia]
shows a html5 video demo
22:24:37 [silvia]
demo: www.w3.org/2009/Talks/1031-html5-video-plh/Overview.xhtml#(2)
22:24:39 [ChrisL2]
http://www.w3.org/2009/Talks/1031-html5-video-plh/
22:25:23 [silvia]
Timed Text
22:25:29 [silvia]
came out of the TV world
22:25:34 [silvia]
started in 2003
22:25:50 [silvia]
original idea was to have an authoring format for the web
22:26:18 [silvia]
as Adobe used it in Flash, it became increasingly a delivery format
22:26:53 [silvia]
demo: NCAM flash player with DFXP
22:27:09 [silvia]
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:tts="http://www.w3.org/ns/ttml#styling">
  <body>
    <div>
      <p begin="0s" end="10s">
        This word must be
        <span tts:color='red'>red</span>
        <br />and this one
        <span tts:color='green'>green</span>.
      </p>
    </div>
  </body>
</tt>
22:27:30 [silvia]
highly controversial use of namespaces
22:27:48 [silvia]
*online captioning*
22:27:49 [silvia]
# Parameters: frame rate, ... (SMPTE, SMIL)
22:27:49 [silvia]
# Styling: XSL FO 1.0, CSS 2
22:27:49 [silvia]
# Layout and region
22:27:52 [silvia]
# Timing model: SMIL
22:27:57 [silvia]
# Basic Animation: SMIL, SVG
22:28:01 [silvia]
# Metadata
22:28:33 [silvia]
example timed text document
22:29:16 [silvia]
it's possible to use it in a streaming context, but you have to be careful what additions you make to the file
22:29:40 [silvia]
test suites
22:29:57 [silvia]
http://www.w3.org/2008/12/dfxp-testsuite/web-framework/START.html
22:30:19 [silvia]
one demo is with HTML5 using javascript to synchronise with the <video> element
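[A minimal sketch of that synchronisation pattern: cue times taken from the timed text's begin/end values and matched against video.currentTime; the cue list and element ids are illustrative:]
<script>
var cues = [ // would come from parsing the DFXP document
  { begin: 0, end: 10, text: 'This word must be red ...' }
];
var video = document.getElementById('v');
var overlay = document.getElementById('captions');
video.addEventListener('timeupdate', function () {
  var t = video.currentTime, text = '';
  for (var i = 0; i < cues.length; i++) {
    if (t >= cues[i].begin && t < cues[i].end) { text = cues[i].text; }
  }
  overlay.textContent = text;
}, false);
</script>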
22:31:03 [silvia]
the list of which features are supported in which player is given at http://www.w3.org/2009/05/dfxp-results.html
22:31:25 [silvia]
Adobe and MS players are still prototypes
22:31:44 [silvia]
JW player's support is disastrous
22:31:51 [silvia]
WGBH support is also not that great
22:32:03 [silvia]
I don't know what the plan is to update those implementations
22:32:56 [silvia]
plh shows different web browsers supporting html5 and MS/Flash implementation in test interface
22:33:13 [silvia]
*Recent Progress*
22:33:21 [silvia]
finishing on the testing
22:33:26 [silvia]
published last call
22:33:34 [silvia]
dynamic flow still needs testing
22:33:48 [silvia]
we're waiting on the implementation from Samsung
22:34:05 [silvia]
trying to become W3C recommendation by Dec 2009
22:34:27 [silvia]
but we need the dynamic flow implementations first
22:35:53 [silvia]
SP: when did you last update the HTML5 DFXP implementation?
22:36:21 [silvia]
plh: I need to update it a bit, but there is cool stuff that can be done with DFXP
22:36:44 [silvia]
finishes plh demo
22:36:51 [silvia]
next up joakim
22:37:10 [silvia]
"Media Annotations Working Group - overview"
22:37:22 [silvia]
we started a year ago
22:37:35 [silvia]
I'm co-chair, Felix used to be staff contact
22:37:41 [silvia]
*purpose*
22:37:55 [silvia]
- facilitate metadata integration for media objects in the Web, such as video, audio and images
22:38:07 [silvia]
- the means is to define an ontology and API for metadata
22:38:19 [silvia]
we're part of the Web Video work in W3C
22:38:28 [silvia]
we're trying to re-use what exists
22:38:35 [silvia]
vision is to make it easy to use
22:38:56 [silvia]
the ontology relates the different metadata formats
22:39:13 [silvia]
the mapping between different formats is provided by the working group
22:39:17 [silvia]
some formats in sight:
22:39:22 [silvia]
XMP, DublinCore, ID3 etc
22:39:38 [silvia]
*Definition of metadata properties for multimedia objects*
22:39:43 [silvia]
example properties
22:39:48 [silvia]
* ma:contributor
22:39:50 [silvia]
* ma:language
22:39:56 [silvia]
* ma:compression
22:40:10 [silvia]
http://www.w3.org/TR/mediaont-10/
22:40:19 [silvia]
*Relating properties to existing formats*
22:40:44 [silvia]
gives an example where ma:contributor maps to media:credit@role in YouTube Data API protocol
22:40:55 [silvia]
ma:contributor maps to dc:creator in XMP
22:41:11 [silvia]
semantic mappings: exact, related to, more specific/more general
22:41:18 [silvia]
syntactic mappings e.g.
22:41:23 [silvia]
unicode string, also given
22:41:33 [silvia]
*API for Media Resources 1.0*
22:41:54 [silvia]
example for ma:contributor property
22:42:06 [silvia]
consists of id and role
22:42:23 [silvia]
API was published 2 weeks ago and is in first public working draft
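[Illustrative only - the working draft defines the real interface; the idea is a uniform accessor that returns, e.g., contributor structures carrying an id and a role:]
// Hypothetical shape, not the draft's exact IDL:
var contributors = mediaResource.getProperty('contributor');
// e.g. [{ id: 'http://example.com/person/4711', role: 'editor' }]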
22:42:27 [silvia]
*Challenges*
22:42:30 [silvia]
- General
22:42:49 [silvia]
— reading is easy part, but how to (or if to) write "ma:" properties into media files
22:43:02 [silvia]
— getting verification of mappings (needs to be based on actual usage, not on the specifications)
22:43:10 [silvia]
- specific to accessibility
22:43:19 [silvia]
— how does the media annotations approach fit to a11y needs?
22:43:31 [silvia]
— are there a11y related attributes that are missing?
22:43:40 [silvia]
*Resources*
22:44:00 [silvia]
provides links to home page, requirements, ontology, and API spec
22:44:42 [silvia]
Marisa: how would you use this?
22:44:50 [silvia]
DAISY is using the book ontology
22:45:08 [silvia]
we're interested in this ontology
22:45:32 [silvia]
Felix: it's easy to create the mapping table, but to get the feedback on mapping ontologies to each other is difficult
22:45:47 [silvia]
also, it's different what is being written into the spec and what is used in the wild
22:46:14 [silvia]
Doug: are they mis-using some of the bits in the ontology for other reasons?
22:46:20 [silvia]
Felix: no, just using subsets
22:46:33 [silvia]
ends joakim's presentation
22:46:44 [silvia]
Ian's presentation next
22:47:08 [silvia]
"Where are we in HTML5 with <video>?"
22:47:27 [silvia]
- html5 has an audio and a video element
22:47:32 [silvia]
- basically the same element
22:47:45 [silvia]
- defined as a single abstract concept of a media element
22:48:01 [silvia]
- the ui is basically up to the browser
22:48:09 [silvia]
- common codec is a challenge
22:48:29 [silvia]
http://www.w3.org/2009/Talks/1031-html5-video-plh/Overview.xhtml#%282%29
22:48:42 [silvia]
what we're seeing there is: the part above the controls is the video element
22:48:55 [silvia]
the part below the video is SVG - it could use HTML div/buttons etc
22:49:02 [silvia]
controls in this example are all scripted
22:49:19 [silvia]
when you hit the play element, it sends video.play() and then the video starts playing
22:49:29 [silvia]
there is scripted access to the loudness control
22:49:56 [silvia]
if you enable the video UA controls, they are also shown
22:50:24 [silvia]
browser UI and scripted UI are in sync, so if you silence the video through either, the other reacts
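[Sketch of the pattern being described: scripted controls driving the media API, with an event listener keeping the scripted UI and the browser UI consistent; element ids are illustrative:]
<script>
var video = document.getElementById('v');
document.getElementById('play').onclick = function () {
  if (video.paused) { video.play(); } else { video.pause(); }
};
document.getElementById('mute').onclick = function () {
  video.muted = !video.muted;
};
// volumechange fires whether the change came from script or from the
// browser's own controls, so both UIs stay in sync.
video.addEventListener('volumechange', function () {
  document.getElementById('mute').textContent = video.muted ? 'Unmute' : 'Mute';
}, false);
</script>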
22:50:34 [silvia]
the API gives you further information
22:50:37 [silvia]
e.g. state of network
22:50:43 [silvia]
playback/buffering state
22:51:21 [silvia]
you can seek
22:51:34 [silvia]
it basically supports streaming content
22:51:43 [silvia]
browser is exposing a slowly moving window into the video
22:51:50 [silvia]
playback rate can be changed
22:51:54 [silvia]
you can make it loop
22:51:59 [silvia]
autoplay
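[The attributes and properties just listed, gathered into one sketch; the names are from the HTML5 media API of the time, the id is illustrative:]
<video id="v" src="video.ogv" controls autoplay loop></video>
<script>
var v = document.getElementById('v');
v.currentTime = 60;    // seek one minute in
v.playbackRate = 1.5;  // play half again as fast
console.log(v.networkState, v.readyState); // network and buffering state
</script>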
22:52:32 [silvia]
the goal was not to do SMIL
22:52:45 [silvia]
if you needed the whole support of SMIL, you'd use SMIL
22:53:02 [silvia]
DB: we need to be careful about going down this road
22:53:04 [Hixie]
Hixie has joined #media
22:53:17 [silvia]
DB: similar things need to be called the same only if they are completely the same
22:53:30 [silvia]
Ian:
22:53:38 [silvia]
2-3 ways that a11y is built into the API
22:53:55 [silvia]
* tracks that are built into the media resource
22:54:01 [silvia]
not currently exposed in the API
22:54:11 [silvia]
the browsers are expected to expose that if it's available
22:54:14 [silvia]
* javascript overlays
22:54:35 [silvia]
missing the cue ranges API now
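[The cue ranges API had been dropped at this point; a script-level stand-in for the same idea, built only on timeupdate, might look like this hypothetical helper:]
<script>
// Hypothetical helper, not a spec API: call onEnter/onExit as playback
// crosses into and out of the [start, end) range.
function addCueRange(video, start, end, onEnter, onExit) {
  var inside = false;
  video.addEventListener('timeupdate', function () {
    var now = video.currentTime >= start && video.currentTime < end;
    if (now && !inside) { inside = true; onEnter(); }
    else if (!now && inside) { inside = false; onExit(); }
  }, false);
}
</script>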
22:55:03 [silvia]
* <source> element is not just used for different codecs, but also for different bitrate/quality videos
22:55:26 [silvia]
Next version is expected to have something similar to what Silvia suggested with <itext>
22:55:43 [silvia]
the main thing blocking this now is that the things already in the spec aren't even implemented
22:56:08 [silvia]
what's already in the spec needs to be implemented solidly before more spec is added
22:56:16 [silvia]
a test suite for this is still missing
22:56:29 [silvia]
* extensibility
22:56:35 [silvia]
HTML5 has successful extensibility mechanisms
22:57:53 [silvia]
CL: <video> implementation being incomplete may also be because the spec is incomplete, so extending the spec would be better than waiting for full implementation
22:58:22 [silvia]
DB: you need to bring people in and you need a level of functionality that is more attractive than just video playback
22:58:25 [silvia]
DB: you need to have a roadmap
22:58:38 [silvia]
Ian: what Silvia is proposing is pretty much what we need
22:58:58 [silvia]
DB: srt timing model is different to SMIL and different from others
22:59:39 [silvia]
DB: if we come up with multiple timing models, that don't work together - in particular with multitrack audio/video - it might be better to change the timing model
23:00:06 [silvia]
DB: the impression I have is that we may need to look at <video> more indepth and extend it more, instead of making it too small
23:00:40 [silvia]
DB: is missing the discussion on the timing model
23:01:11 [silvia]
Ian: my understanding is that it's using the timing model that people are expecting
23:01:46 [silvia]
DB: there are many practical issues for the choice of the timing model - we think <video>'s timing model is too restricted
23:01:52 [plh]
q?
23:02:04 [silvia]
DB: we may have a means to extend this better
23:02:43 [silvia]
Doug: from what I understand HTML5, no decision has been made on the choice of the timing model and SMIL still fits into it
23:03:28 [silvia]
Ian: the design of the element was based on the idea that any timed actions that need to be done would use SMIL
23:04:17 [silvia]
Doug: the SMIL stuff that is in SVG could be reused in HTML5 - and since browser vendors like to reuse things, they're inclined to reuse that
23:04:51 [silvia]
Michael: no support currently for resource discovery?
23:05:01 [silvia]
Ian: not yet, but it seems silvia has a plan
23:05:51 [silvia]
Marisa: is there any approach to HTML5 where people are creating similar things to a digital talking book with overlays and synchronised SMIL scripts etc?
23:06:34 [silvia]
Ian: if the goal is to specifically synchronise the playback of audio and video, then the approach should be SMIL
23:07:19 [silvia]
joakim: I think media discovery would be a perfect way to use media annotations, e.g. two different versions of a video
23:07:42 [silvia]
Michael: basically the a11y of the video file is left to the video format
23:08:17 [silvia]
Michael: there are characteristics that you need to know about, whether it has captions etc, so given that we have the capacity for scripted controls, then the selector should be in the API
23:08:32 [silvia]
Ian: silvia's itext has that as a proposal
23:08:52 [silvia]
Matt: how do we determine what belongs in the API and what not ?
23:09:07 [silvia]
There's a process for how to extend HTML5
23:10:20 [Hixie]
my answer was "that's a judgement call, there's no general rule" :-)
23:25:21 [plh]
--> http://www.w3.org/2009/10/W3C-AccessibilityPA.pdf Dick Bulterman slides
23:26:01 [fsasaki]
scribe: fsasaki
23:26:15 [fsasaki]
now judy brewer presentation
23:26:44 [fsasaki]
judy: interested in quality of acc. support
23:27:02 [fsasaki]
.. current support in HTML5 may be insufficient for captions etc.
23:27:18 [fsasaki]
.. proliferation of different caption etc. formats is a problem
23:27:49 [fsasaki]
.. silvia described high-level requirements for captions etc., liked that
23:28:14 [fsasaki]
.. best practices are good, but need an overall approach of sets of requirements
23:28:29 [fsasaki]
.. and description of relations between existing standards / approaches
23:28:47 [plh]
--> http://media.w3.org/2009/10/ACAV.ogv ACAV Project video
23:28:50 [fsasaki]
.. html5 joint taskforce might be a place to define what those requirements might be
23:29:19 [fsasaki]
.. Geoff said that html5 caption solutions are insufficient
23:29:40 [fsasaki]
.. captions should be in html5 like video in html5, that is its own element
23:30:03 [fsasaki]
.. deaf community features are not sufficiently represented
23:30:11 [fsasaki]
.. srt is easy to write with text editor
23:30:29 [fsasaki]
.. but that might mean choosing fast gain over long-term benefit
23:30:55 [fsasaki]
silvia: a clarification: itext element tries to be format independent
23:31:16 [fsasaki]
.. srt could be a good baseline format, not XML , but that can be discussed
23:31:35 [fsasaki]
.. allows for linking to any format that your browser supports
23:32:23 [fsasaki]
judy: Geoff said audio and video should be on the same level, e.g. audiodesc and videodesc element
23:32:55 [fsasaki]
.. time is perfect for html5 to have good solution with caption-specific elements
23:33:21 [fsasaki]
.. externally referencing captions is another need, formulated by Geoff
23:34:53 [fsasaki]
dick: understand why Geoff said that external captions might be good
23:34:59 [ChrisL2]
http://www.evertz.com/resources/eia_608_708_cc.pdf
23:35:07 [plh]
--> http://www.w3.org/2009/10/MarisaDeMeglio.DAISY.pdf DAISY slides
23:35:11 [fsasaki]
.. but would be a shame if we had two mechanisms (internal and external) to handle the same thing
23:35:39 [plh]
--> http://www.w3.org/2009/10/html5 access mawg-20091101.ppt Joakim's slides
23:35:58 [fsasaki]
now presentation by dave singer
23:36:26 [fsasaki]
dave: good acc. needs three things: good specs, uptake by authors and users, and user agents
23:36:37 [fsasaki]
.. it is easy on one of the three
23:37:18 [fsasaki]
.. we can do better than TV, not replicating what is there in the non-web world
23:38:24 [fsasaki]
.. timed acc. problems. e.g. captions in audio, video contrast
23:38:37 [fsasaki]
.. some people need high, some people low contrast
23:39:16 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html plh
23:39:21 [fsasaki]
.. general time management: acc. by flipping information with the media
23:39:40 [fsasaki]
.. rate preferences slower than normal rate sometimes necessary for acc.
23:40:18 [fsasaki]
.. question of untimed acc. : having a transcript
23:40:53 [fsasaki]
.. also untimed: longdesc, fallback not only for non-support, but also for support of e.g. video, but not for a specific user
23:41:19 [fsasaki]
.. a question: inside or outside the video container?
23:41:38 [ChrisL2]
Topic: David Singer, Apple
23:42:12 [fsasaki]
.. inside: media container can have overlay time tracks.
23:42:53 [fsasaki]
.. no mechanism for outside avail. yet, so synchronization with inside is easier to achieve
23:43:20 [fsasaki]
.. meeting users needs: select the resource that the user needs
23:43:38 [fsasaki]
.. choices: by preference, or by action, or both?
23:44:20 [fsasaki]
hypothesis: out of scope is a user preference repository
23:45:20 [ChrisL2]
[Judy explains seizure disorders]
23:46:18 [fsasaki]
dave describing possible choice approaches for user needs, see slide 8 of presentation
23:47:31 [ChrisL2]
Scribe: Chris
23:47:37 [ChrisL2]
rrsagent, draft minutes
23:47:37 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html ChrisL2
23:49:26 [fsasaki]
dave: it matters who renders captions - the media engine or somewhere else
23:49:40 [fsasaki]
silvia: could it all be done by web engine?
23:49:54 [fsasaki]
ian: depends on media framework used
23:50:10 [fsasaki]
.. we can expose API, but depends on media framework if it is possible or not
23:51:20 [fsasaki]
dave: scripted accessibility
23:51:53 [fsasaki]
Janina: you might prefer additional content depending on the specific part of the media
23:52:12 [fsasaki]
dave: in HTML5, the approach is to link to SMIL for synchronization
23:52:31 [fsasaki]
.. in future we want to have a video in different areas of a page
23:52:50 [fsasaki]
dick: several ways of doing things
23:53:07 [fsasaki]
.. be careful of controlling things: scripted control vs. declarative control
23:53:30 [fsasaki]
.. "scripting" is not always the term you want to use
23:54:40 [fsasaki]
chaals: worth seeing in html5 how to get there
23:55:02 [fsasaki]
James: Flash already supports some of these things
23:56:07 [fsasaki]
dave: about sign language, there was just one code
23:56:25 [fsasaki]
silvia, felix: has been solved in latest version of BCP 47
23:57:09 [fsasaki]
dave: summary: what do we need what is not in HTML5, what should be in a best practices document?
23:57:17 [fsasaki]
.. cue ranges are very important
23:57:38 [fsasaki]
.. describing user preferences, probably informative in HTML5 view, since in other specs
23:57:54 [fsasaki]
.. and script access to control features of the media
23:57:58 [fsasaki]
.. and CSS media queries
23:58:08 [fsasaki]
silvia: about cue ranges
23:58:16 [fsasaki]
.. everybody means something different when talking about it
23:58:28 [fsasaki]
.. making the requirements for it clear would help
00:00:11 [fsasaki]
dave: look not only in captioning, but at the big picture
00:00:26 [fsasaki]
topic: james' presentation
00:01:08 [fsasaki]
james: interest in universal design for all users
00:01:34 [fsasaki]
.. including technical needs, e.g. related to bandwidth
00:02:16 [fsasaki]
.. looked into content selection without knowing preferences
00:02:48 [fsasaki]
dave: web developers asked for "how do we find out if a user uses a screen reader"
00:03:07 [fsasaki]
.. currently, SWF / Flash are the only means to achieve that
00:03:20 [fsasaki]
.. that has some security implications
00:04:33 [fsasaki]
.. in CSS media queries you download everything, different from content selection before download
00:04:53 [fsasaki]
.. potential that user wants to share preferences with certain web servers
00:06:31 [fsasaki]
.. if there are methods in video element like getCurrentCaptions, that would have security implications as well
00:06:53 [fsasaki]
.. certain security restrictions could be more lax for certain users
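[Editor's note: two sketches of the trade-off just described. CSS media queries adapt presentation after everything is downloaded; the "captions" media feature below is hypothetical, purely to show the shape of the idea:]

  /* Hypothetical media feature - not in any CSS specification */
  @media screen and (captions: on) {
    .caption-area { display: block; }
  }

[And a script-level query such as the getCurrentCaptions method mentioned above would let a page observe a user's accessibility settings, which is exactly the security and privacy concern raised here; the method is hypothetical:]

  var video = document.querySelector('video');
  if (typeof video.getCurrentCaptions === 'function') {
    // A site could log the result, revealing the user's needs to the server.
    var current = video.getCurrentCaptions();
  }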
00:07:17 [fsasaki]
shepazu: "cross origion resource sharing" is the way we are going to do this
00:07:53 [fsasaki]
.. a question: what do people think about privacy concerns?
00:09:16 [fsasaki]
james: thought of some preferences, e.g. color-blindness, which you might not want to convey via your browser
00:09:26 [fsasaki]
dave: very important aspect
00:09:50 [fsasaki]
matt: heuristics can be used to determine whether a user uses a screen reader
00:10:16 [plh]
--> http://www.w3.org/2009/10/dws-access-workshop.ppt Dave's slides
00:10:31 [fsasaki]
.. even Flash is no guarantee that information is not conveyed
00:11:16 [fsasaki]
dave: platform integration is a problem, e.g. if the screen reader does not find a play button
00:11:36 [fsasaki]
.. scripting does not integrate well with platform specific heuristics to find things
00:12:49 [fsasaki]
Topic: Janina's presentation
00:13:09 [fsasaki]
janina: important that an API can access control information
00:13:20 [fsasaki]
.. what is the default list of controls being exposed
00:13:39 [fsasaki]
.. if it is left to the developer to come up with controls, there will be a small set of accessible controls
00:14:14 [fsasaki]
doug: I am editor of DOM 3 events
00:14:28 [fsasaki]
.. hardware controls are a way of bypassing the problem
00:15:21 [fsasaki]
james: having standardized methods for video controls would help
00:15:27 [silvia]
http://www.marcozehe.de/2009/06/11/exposure-of-audio-and-video-elements-to-assistive-technologies/
00:16:03 [fsasaki]
.. the system could have its own key bindings etc.
00:16:20 [fsasaki]
silvia: there is accessibility of controls via shortcuts implemented by browser vendors
00:16:30 [fsasaki]
.. important for all users, not only for accessibility
00:16:45 [fsasaki]
.. through javascript interfaces there is also access to buttons
00:17:08 [fsasaki]
.. that kind of control is on the roadmap for firefox
00:17:18 [fsasaki]
@@: for webkit as well
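[Editor's note: a minimal sketch of script access to media controls with the HTML5 media API as specced at the time (play(), pause(), paused, currentTime); the key bindings are illustrative choices, not a standard:]

  var video = document.querySelector('video');
  document.addEventListener('keydown', function (e) {
    switch (e.keyCode) {
      case 32:                       // space: toggle playback
        if (video.paused) { video.play(); } else { video.pause(); }
        e.preventDefault();
        break;
      case 39:                       // right arrow: skip forward 5 seconds
        video.currentTime += 5;
        break;
      case 37:                       // left arrow: skip back 5 seconds
        video.currentTime -= 5;
        break;
    }
  }, false);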
00:17:41 [fsasaki]
janina: have browser folks looked into structural navigation?
00:18:01 [fsasaki]
.. e.g. subscene to subscene, ...
00:18:17 [fsasaki]
.. could be very important in a 3-hour physics presentation
00:18:39 [fsasaki]
silvia: the @@@@ website works on this, that is, navigation markers
00:18:56 [fsasaki]
.. not hierarchical yet; nobody has looked into that so far
00:19:11 [fsasaki]
janina: DAISY did that to some extent
00:20:09 [fsasaki]
silvia: if we have time-aligned text tracks, that can go back into video
00:20:35 [fsasaki]
.. both search and a structural overview are important scenarios
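[Editor's note: a sketch of the structural navigation idea - seeking to named markers. HTML5 had no chapter markup at this point, so the section list here is assumed to come from elsewhere, e.g. a time-aligned text file:]

  var video = document.querySelector('video');
  var sections = [                   // illustrative data for a long lecture
    { title: 'Introduction',      start: 0    },
    { title: 'First experiment',  start: 540  },
    { title: 'Second experiment', start: 2735 }
  ];
  function nextSection() {           // jump to the next section boundary
    for (var i = 0; i < sections.length; i++) {
      if (sections[i].start > video.currentTime) {
        video.currentTime = sections[i].start;
        return;
      }
    }
  }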
00:20:57 [fsasaki]
dick: producer and consumer aspects are important; the latter e.g. user annotations
00:21:45 [fsasaki]
dave: likely that we will get automated scene detection etc., not manual annotation
00:22:12 [fsasaki]
janina: there are also use cases for manual creation
00:22:23 [fsasaki]
john: needs to be a method to do the automatic way too
00:22:46 [Judy]
Judy has joined #media
00:23:03 [fsasaki]
dick: in SMILtext, people used links in captions
00:23:46 [ChrisL2]
Topic: Sally_Cain, RNIB, Audio Description
00:25:24 [chaals]
[+1 for IPTV and other TV-based standards being important liaisons to keep in mind]
00:25:25 [fsasaki]
sally: doing audio description, but also looking into IPTV
00:25:41 [fsasaki]
.. audio description is often underrepresented, but is very important
00:25:50 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html plh
00:26:01 [fsasaki]
.. our broadcasting team says audio description is either mixed into the content or separate
00:26:28 [fsasaki]
.. think that separation is better, or at least awareness that sometimes it is mixed
00:27:04 [fsasaki]
.. using existing guidance from here is good, but how do we integrate work in ISO or ETSI?
00:27:38 [fsasaki]
.. routes of delivery are various, e.g. eTests, eAssignments etc. These also need to be taken into account
00:27:58 [ChrisL2]
Topic: Hironobu_Takagi, IBM
00:29:08 [fsasaki]
takagi-san: work is part of a Japanese government program
00:29:26 [fsasaki]
.. only 0.5% of TV programs have audio descriptions
00:29:41 [fsasaki]
.. huge expectation to provide more audio description on the web as well
00:29:57 [fsasaki]
.. cost is the most important factor in audio description
00:30:17 [fsasaki]
.. requires special expertise, a skilled human narrator
00:31:14 [fsasaki]
.. text-to-speech has become very good today
00:32:00 [chaals]
[impressive demo of emotionally inflected Text to Speech]
00:32:14 [fsasaki]
.. project is NICT (Media Accessibility platform)
00:35:29 [fsasaki]
Takagi-san shows a video describing the benefits of the Media Accessibility platform
00:38:38 [fsasaki]
dave: all Text-to-speech is done automatically?
00:38:52 [fsasaki]
Takagi-san: yes, we just have language tagging
00:39:01 [fsasaki]
.. we also use a kind of ruby
00:39:12 [fsasaki]
.. we also plan to use emotional markup
00:41:10 [fsasaki]
.. choices in audio description are: human prerecorded voice, prerecorded TTS, server-side TTS, client-side TTS
00:41:17 [fsasaki]
.. not sure yet what is most appropriate
00:42:17 [ChrisL2]
http://www.w3.org/TR/emotionml/ Emotion Markup Language (EmotionML) 1.0
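[Editor's note: a minimal EmotionML fragment of the kind the emotional markup mentioned above might use, sketched from the draft linked here; the vocabulary URI and category names are assumptions and should be checked against the spec:]

  <emotionml xmlns="http://www.w3.org/2009/10/emotionml">
    <emotion category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
      <category name="excited" value="0.7"/>
    </emotion>
  </emotionml>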
00:43:11 [fsasaki]
Takagi-san: content protection is another requirement
00:44:14 [fsasaki]
john: how do you do the synchronization?
00:44:28 [fsasaki]
Takagi-san: used the time-seeking functions of Windows Media Player
00:45:22 [fsasaki]
silvia: you have high-quality speech synthesis
00:46:02 [fsasaki]
.. could be a web service: the user sends a text description, the service turns that into audio, potentially on a different server
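[Editor's note: a sketch of that web-service idea: post the description text, get rendered audio back. The endpoint, parameters, and response format are entirely hypothetical:]

  // Hypothetical TTS service; URL and parameters are invented for illustration.
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://tts.example.org/synthesize', true);
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Assume the service answers with a URL to the rendered audio clip.
      new Audio(xhr.responseText).play();
    }
  };
  xhr.send('lang=en&text=' + encodeURIComponent('A man enters the room.'));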
00:46:22 [fsasaki]
dick: the source for the TTS: where does it come from?
00:46:34 [fsasaki]
Takagi-san: from a script
00:46:43 [fsasaki]
dave: somebody typed in a scene description
00:47:10 [fsasaki]
dick: there are legal issues about changing the content flow e.g. for Disney
00:47:29 [fsasaki]
Takagi-san: yes, discussing always with content providers like Disney
00:47:36 [fsasaki]
.. legal issue is really important
00:48:06 [fsasaki]
.. that is why we work with Japanese government
00:48:36 [fsasaki]
janina: this is a problem to be solved at the government level, not by industry
00:51:04 [fsasaki]
short break, short discussion after that
00:52:01 [Hiro_]
Hiro_ has joined #media
01:09:27 [fsasaki]
Topic: Wrap-up discussion
01:09:55 [fsasaki]
dave: need a paper or something describing how to get good accessibility into HTML5
01:10:15 [fsasaki]
michael: made some notes related to that, and about existing solutions
01:10:28 [fsasaki]
.. many people said that there are non-accessibility use cases as well
01:10:34 [chaals]
Scribe: Chaals
01:10:43 [chaals]
MC: Need to gather use cases and requirements
01:10:55 [chaals]
... gather the non-accessibility use cases
01:11:04 [chaals]
... and the existing solutions.
01:11:16 [chaals]
... There are proposals about technology - which are proposals, which are solutions
01:11:49 [chaals]
... How do we make sure that the technology makes it possible to meet WCAG requirements (not limited to them, but at least getting to that level)
01:11:57 [chaals]
[MC is staff contact for WCAG]
01:12:47 [chaals]
DS: Non-accessibility uses: I am a bit hard of hearing, and there will be stuff I cannot catch. I rewind, mute, and watch the captions. Would like to be able to call up the captions for the last section in parallel. Is that a non-accessibility use case?
01:13:29 [chaals]
JS: Another one is the ability to slow audio as a way to increase comprehensibility
01:13:47 [chaals]
JB: Don't think that is a non-accessibility use case.
01:14:35 [chaals]
JC: The semantics of "accessibility" is a bit in flux.
01:14:51 [chaals]
... preference for low-bandwidth - is that accessibility?
01:16:38 [jcraig]
jcraig has joined #media
01:17:15 [silvia]
https://wiki.mozilla.org/Accessibility/Video_a11y_requirements
01:17:39 [chaals]
CMN: Question - where to do this? PF? HTML Accessibility TF? somewhere else? I suggest HTML accessibility task force
01:17:53 [chaals]
SP: Have written a requirements doc for Mozilla that would be a good basis.
01:18:25 [chaals]
[/me has a brief skim and thinks it is a very good basis to steal things from]
01:18:56 [chaals]
Doug: Should we start from something like this?
01:19:19 [chaals]
MC: We need a "how to do accessibility" and then gap analysis of HTML and what it does now (and what it needs)
01:19:31 [chaals]
DB: We can comment on a document that does this from the SYMM group.
01:20:29 [dsinger]
http://www.w3.org/WAI/PF/html-task-force
01:20:34 [chaals]
JS: Makes a lot of sense that we take the work in the HTML Accessibility Task Force.
01:20:51 [chaals]
MC: Concerned that the HTML Accessibility task force has a heavy load.
01:21:04 [chaals]
... concerned that the current make-up of the task force lacks video expertise
01:21:19 [chaals]
JS: Maybe a group within that group would take the work.
01:21:41 [chaals]
Doug: People copy code. Maybe we should also look at the things that people can already do in HTML 5 and show how to do what is already possible.
01:22:51 [chaals]
SP: There is a fear that at the next level of HTML5 there is no way we can put more into it. How serious is the prospect of a complete freeze?
01:23:32 [shepazu]
creating tutorials also helps discover gaps in functionality
01:24:20 [chaals]
IH: The consideration is not serious - the spec will continue to evolve. The rate-limiter is how fast the browsers implement. It might be that we make a feature-freeze and the work goes into HTML 5+some ... right now the video implementations are still flaky, and interoperability is still poor. By the time that is solved, I see no reason not to add better stuff.
01:24:50 [chaals]
CL: Heard you say that you have code on a private branch. Is there a chicken-and-egg problem?
01:25:06 [chaals]
SP: The trial implementation is available to influence the spec already.
01:25:42 [chaals]
[Process discussion about our process]
01:26:40 [plh]
--> http://www.w3.org/2009/10/VideoA11yWorkshop.txt Silvia's slides
01:28:02 [chaals]
JC: Even if we are going with e.g. itext, should we consider requirements on media formats (e.g. the need for dealing with some external captions and some internal captions)?
01:28:12 [chaals]
[several]: Yes
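[Editor's note: roughly the shape of the itext proposal under discussion - external timed-text files referenced from child elements of video, selectable by language and category. Attribute names here are approximate; see Silvia's requirements document linked above:]

  <video src="video.ogv" controls>
    <itext lang="en" type="text/srt" category="CC" src="captions_en.srt">
    <itext lang="fr" type="text/srt" category="CC" src="captions_fr.srt">
  </video>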
01:29:48 [chaals]
CMN: Is there anyone prepared to take on the editing of a requirements and use cases document?
01:30:14 [chaals]
JB: Think we should follow the idea of working in the HTML access task force
01:30:26 [chaals]
DaveS: Think we should try to make it happen there.
01:30:43 [chaals]
MC: This means we need people to join the task force, which means being a member of HTML-WG
01:31:55 [chaals]
SP: I've done requirements work and implementation for itext and tried to validate it. So far still seems good, but I am happy that this goes to others looking and figuring out if it is what we need, or what we need to change.
01:32:08 [dsinger]
action: everyone to look at joining the HTML accessibility TF
01:32:12 [janina]
HTML A11y TF is at:
01:32:14 [janina]
http://www.w3.org/WAI/PF/html-task-force
01:32:22 [dsinger]
action: everyone to review Silvia's requirements document
01:33:02 [chaals]
... think itext seems to be going down the right track, but will be making sure that I am not just going off on my own and ignoring what others want and do.
01:34:17 [chaals]
JB: Right. I am concerned about fragmentation - but that is what we keep an eye on in the process of going forward.
01:34:47 [MichaelC]
q+ to say if we want video to be a focused group within HTML accessibility task force, need all the interested people to join (call for participation should go out when fully constituted) and we'll probably need to identify a project lead
01:35:33 [chaals]
DB: Sounds like so far you are happy with what you have - and agree that it's nice. For the part of the process that brings things to working groups for comment, I want to be sure that there is a way to comment and say changes are needed, if that should be the case.
01:36:05 [chaals]
... also worried about what we do with SRT etc, who owns the problem, how it fits in with re-using the technology, etc.
01:37:14 [chaals]
SP: Sure. This is happening in the HTML WG.
01:38:17 [chaals]
ack MichaelC
01:38:17 [Zakim]
MichaelC, you wanted to say if we want video to be a focused group within HTML accessibility task force, need all the interested people to join (call for participation should go
01:38:21 [Zakim]
... out when fully constituted) and we'll probably need to identify a project lead
01:39:22 [chaals]
KH: I'm new to the process, so...
01:39:42 [chaals]
... It sounded like Ian wanted something implemented before it could be accepted as part of the spec.
01:40:16 [chaals]
... So the process is "have an idea, implement, get others to implement". But if we implemented it, do we not have to get anyone else to do so?
01:40:18 [plh]
q+
01:41:17 [dsinger]
q+
01:41:19 [chaals]
IH: Stuff in the spec has to be implemented better before we get to adding more stuff. It definitely helps to have experimental implementations to decide what should be the standards. In practice, they get shipped, people start relying on it, and then we are stuck with not breaking that. Which isn't ideal, but the reality is somewhere in the middle
01:41:50 [chaals]
DSing: We don't want the specs or implementations to get too far ahead of each other.
01:42:09 [chaals]
Doug: If a few pages do something weird, we can insist they change...
01:42:16 [chaals]
IH: Depends on who/what they are.
01:43:09 [dsinger]
ack plh
01:43:17 [chaals]
... poster child was canvas. We found some serious problems, and we had to figure out how to change it. We changed some of it, but then we found more that would cause huge problems to fix it further, and we were kind of stuck with this.
01:43:56 [chaals]
PLH: I don't believe HTML5 can move forward without addressing video accessibility. We have to figure that out. Best we can do is review proposals and provide feedback, not just follow the first implementation and bless it without thinking.
01:44:19 [chaals]
... But it will not be acceptable to simply move forward without getting it right.
01:44:59 [chaals]
DaveS: John and I are volunteering to take responsibility for video accessibility within the HTML Accessibility task force at least for now - chase people into it, get documents together, etc.
01:46:05 [chaals]
ACTION: Silvia to put links to existing content into the task force wiki
01:46:41 [chaals]
ACTION: DaveS and JohnF to take responsibility to drive this work into existence in the HTML Accessibility Task Force
01:48:27 [chaals]
THANKS to John, and to Dave, for making it happen.
01:48:31 [chaals]
ADJOURNED
01:48:45 [chaals]
rrsagent, draft minutes
01:48:45 [RRSAgent]
I have made the request to generate http://www.w3.org/2009/11/01-media-minutes.html chaals
01:52:15 [soap]
soap has left #media
01:58:47 [silvia]
silvia has joined #media
02:46:32 [marisa]
marisa has joined #media
03:24:20 [Hiro]
Hiro has joined #media
04:05:13 [Hiro]
Hiro has joined #media
05:33:34 [shepazu]
shepazu has joined #media
05:46:17 [silvia]
silvia has joined #media
05:59:01 [dsinger]
dsinger has joined #media
06:57:51 [silvia]
silvia has joined #media
09:14:48 [Judy]
Judy has joined #media