W3C

- DRAFT -

HTML Accessibility Task Force Teleconference

14 Jul 2010

See also: IRC log

Attendees

Present
Janina, John_Foliot, Judy, Eric_Carlson, kenny_j
Regrets
Chair
Scribe
kenny_j

Contents


<janina> kenny_j is Kenny_Johar

<janina> scribe: kenny_j

John: We should remain as agnostic as possible about specific technical solutions.
... Technical requirements should be based on needs, not on what technologies are available.

Janina: Negotiation will be required with the HTML5 WG when we get to the technical solution recommendation phase.

<JF> Masatomo

Masatomo: I have a few points about the technical details, but not at this stage.

Judy: Let's take the requirements in order.

John: In terms of audio descriptions, the technical implication is quite simple i.e. multiple tracks.

Janina: audio descriptions could be contained in the same track as the main audio, or separated out into a separate track.
... What to do with legacy content is an important consideration.

John: There are two ways of referencing audio descriptions inside a media wrapper: 1) merged into the main audio track, or 2) as a separate track inside the media wrapper.
... For the technology we are recommending here, it has to be a separate track inside the media wrapper.
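
[To make John's distinction concrete, a toy sketch in Python, illustrative only and not a proposed design: only option 2, a separate track, lets a user agent toggle descriptions independently.]

    # Toy model of a media wrapper holding a separate description track.
    # All names and structures here are invented for illustration.
    wrapper = {
        "video": "main.webm",
        "audio_tracks": [
            {"kind": "main", "uri": "main-audio"},
            {"kind": "description", "uri": "described-audio"},  # option 2
        ],
    }

    def audible_tracks(wrapper, descriptions_on):
        # A merged track (option 1) would offer no such choice:
        # the description could not be switched off independently.
        return [t["uri"] for t in wrapper["audio_tracks"]
                if t["kind"] == "main" or descriptions_on]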

Judy: if we do come up with things on the authoring side, we should be able to capture them somewhere.

John: I will make a note for myself that this needs to be discussed with WAI.

Janina: The ATAG 2.0 last call went out last week.

John: Moving on to texted audio descriptions.
... Masatomo, I believe the proof of concept you are doing at IBM contains texted audio descriptions. Any comments?

Masatomo: Our format is similar to SSML.
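
[The minutes do not record the details of IBM's format. Purely as an illustration of a texted description cue carrying timing plus inline semantic markup, a hypothetical SSML-like fragment, read here with Python's standard library:]

    import xml.etree.ElementTree as ET

    # Hypothetical cue: timing attributes plus inline semantics, in the
    # spirit of SSML; not IBM's actual format.
    cue_xml = """
    <description begin="00:01:02.500" end="00:01:05.000">
      <emphasis>A door opens</emphasis> and a woman enters the room.
    </description>
    """

    cue = ET.fromstring(cue_xml)
    print(cue.get("begin"), cue.get("end"))   # the cue's timing
    print("".join(cue.itertext()).strip())    # the description text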

Janina: styling and semantic structure is important in the format we go with.

John: We are saying that the text file that provides audio descriptions should be able to be marked up like other formats on the web.
... Whether the markup of this format is understood by browsers is a different subject.
... Styling and semantic structure is required in the format we decide to go with. We need to come back to deciding what that format is.
... Moving to extended audio descriptions.

Janina: One option is that the main audio be paused when the extended audio description is played.
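
[A minimal sketch of the pause-and-resume behaviour Janina describes. The player interface and cue fields are illustrative, not an agreed design:]

    from dataclasses import dataclass

    @dataclass
    class ExtendedDescription:
        start: float        # position on the primary timeline, in seconds
        audio_uri: str      # clip to play while the primary media is paused
        played: bool = False

    def on_time_update(player, cues, current_time):
        # Called as the primary media's clock advances.
        for cue in cues:
            if cue.start <= current_time and not cue.played:
                player.pause()                    # halt the primary timeline
                player.play_clip(cue.audio_uri)   # blocking, for simplicity
                cue.played = True
                player.resume()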

John: I think we are saying that 2.3 includes 2.1 and 2.2 in terms of controls.
... Moving to 2.4.
... It is an audio track. We will revisit this to define the technical solution.
... On 2.5: Eric, comments?
... There needs to be a mechanism that makes sure that all the supporting files in the media asset stay in synch.

Eric: Any external file that is played along with the timeline of the main resource has to be in synch. I think this is a separate requirement.

Janina: I am happy to split the requirements out.

Eric: The data file cannot define the timeline. There is no timeline inherent in a data file. There has to be a resource somewhere that defines the timeline for the presentation.
... Whenever we refer to a file that is external to the media, it becomes a part of the presentation.

Janina: For example, let's say there is a 30-minute video, with an audio description, a sign language translation, and captions available. If the next track in the video is played, all the supporting resources must synch up. Something needs to effect this.
... On their own there would be no time reference in the files that contain the captions, audio descriptions, and sign language.

<Judy> Kenny: in DAISY, we call this the "director," which makes sure that when you go to the next track in your video, the relevant parts synch up as well

Eric: I don't believe the working group needs to do anything about this. I think this should be left to the user agent. I think this is out of scope.
... I think all we need to say is that the synchronisation should be maintained.

John: The main media resource file is better called "primary asset" than "default".

Janina: so are we saying that we leave the synchronisation to user agents completely?

Eric: Whatever piece of code is responsible for playing the different resources in the media wrapper should be responsible for the synchronisation.

Janina: I understand the approach. The synchronisation is being left to each individual user agent/operating environment.

<Judy> Kenny: Are we saying that all the files should contain timeline information inside them?

<Judy> Eric: [missed, sorry]

<Judy> Kenny: What's to say that the time marker in this file would correspond to the time marker in the other file?

<Judy> Janina: So we've identified a problem that needs solving.

<Judy> John: let's capture this well.

Janina: Our text files need to contain timeline data. We also need to decide on the resolution or granularity.

Eric: Timestamp formats should contain such information, e.g. SRT.
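
[For reference, SRT timing lines carry explicit start and end stamps. A small Python parser for one such line:]

    import re

    SRT_TIME = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
    )

    def parse_srt_timing(line):
        # Returns (start, end) in seconds for a line such as
        # "00:01:02,500 --> 00:01:05,000".
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, SRT_TIME.match(line).groups())
        return (h1 * 3600 + m1 * 60 + s1 + ms1 / 1000,
                h2 * 3600 + m2 * 60 + s2 + ms2 / 1000)

    print(parse_srt_timing("00:01:02,500 --> 00:01:05,000"))  # (62.5, 65.0)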

<Judy> [judy notes that the rationale that Janina provided for potential offsets is where there is an extended audio description]

<Judy> Kenny: About audio description tracks -- that might play during a pause in the video -- would the timeline of the audio track and the timeline of the video track perhaps not be the same?

<Judy> John: Everything has a start and an end-point. For descriptive audio, it will have a start point, but the end point may be asynchronous to the end of that segment of video... the amount of time needed to provide the description exceeds the length of that video segment

<Judy> Janina: Right -- that's the extended audio case I described.

<Judy> John: There are various ways of handling that.

<Judy> Janina: So we've id'd two things now that we have to come to resolution on.

Janina: the second item we need to deal with is the case of extended audio tracks where the timeline of the extended track and the primary asset are not the same.
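
[A sketch of that mismatch: pauses inserted for extended descriptions make presentation time run ahead of primary media time, so one timeline must be mapped onto the other. Assuming the pause points are known, illustrative Python:]

    def media_time(presentation_time, pauses):
        # pauses: sorted (primary_time, duration) pairs, one per inserted pause.
        t = presentation_time
        for at, duration in pauses:
            if t <= at:
                break
            # Time spent inside this pause does not advance the primary media.
            t -= min(duration, t - at)
        return t

    # One 4-second pause at primary time 10s:
    print(media_time(9.0, [(10.0, 4.0)]))   # 9.0  (before the pause)
    print(media_time(12.0, [(10.0, 4.0)]))  # 10.0 (mid-pause, media is held)
    print(media_time(16.0, [(10.0, 4.0)]))  # 12.0 (after the pause)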

Eric: A text format must contain timestamp information or else it can't be used.

John: I agree. Timestamp information has to be available inside the text format.

<Judy> Kenny: The audio track -- x seconds of the audio track correspond to the first five seconds of the video track... but where is that info contained?

<Judy> ...this is why in Daisy we resorted to "the director"

Kenny: Something like the DAISY SMIL director would ensure that the timeline is synchronised across the raft of supporting resources.
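
[A minimal sketch of the "director" idea Kenny describes from DAISY: one component owns the master timeline and re-seeks every supporting resource whenever the primary jumps. Class and method names are illustrative only:]

    class Director:
        def __init__(self, primary, supporting):
            self.primary = primary        # the resource that defines the timeline
            self.supporting = supporting  # caption/description/sign-language players

        def seek(self, t):
            # Seek the whole presentation, not just the primary resource.
            self.primary.seek(t)
            for track in self.supporting:
                track.seek(t)             # each player maps t onto its own cues

        def next_track(self):
            self.seek(self.primary.next_track_start())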

<Judy> Eric: we don't know where we would store that

* thanks Judy.

John: Let's continue this conversation on the list.

<Judy> +1 to pursuing this point on the list, but I think this has been a good discussion to id the specific problem to be resolved

Janina: do we agree that captions should be text documents?

Eric: It is possible to have a binary file that has the same information.

Judy: I am happy for the removal of the text constraint from captioning.

Kenny: this discussion is about 2.6.

Eric: I don't think we should use the words "text file". Just using the word "text" is fine.

John: Subtitles are just what's spoken on screen, whereas captions could include other information, e.g. applause ...

Janina: I think there will be more subtitles created than captions. Merging them might have a favourable result for captioning.

John: Moving on to 2.7.
... We could use something like an ARIA role for captioning.
... In terms of 2.8, the switching mechanism for languages in audio tracks and sign language tracks would be similar.

Janina: I prefer "selecting" over the word "switching".

John: We can use the word "selecting" over "switching". What I am referring to is the choice of a user to select between different language tracks, e.g. different language tracks for captioning, sign language ...
... 2.9 is accepted.
... Let's discuss a few of the 3.* items on the list this week.

Judy: We need to turn the requirements over to the HTML WG by the end of next week.

Eric: We talked about people who update the wiki sending notes to the list describing their changes. Can we please enforce this?

Janina: Please clarify "turning over".

Judy: I think the connotation is "sharing with".

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2010/07/14 23:44:23 $
