IRC log of html-a11y on 2010-07-14

Timestamps are in UTC.

21:59:00 [RRSAgent]
RRSAgent has joined #html-a11y
21:59:00 [RRSAgent]
logging to
21:59:05 [janina]
zakim, this will be WAI_PFWG(A11Y)
21:59:05 [Zakim]
ok, janina; I see WAI_PFWG(A11Y)6:00PM scheduled to start in 1 minute
21:59:13 [janina]
zakim, call janina
21:59:13 [Zakim]
ok, janina; the call is being made
21:59:14 [Zakim]
WAI_PFWG(A11Y)6:00PM has now started
22:00:56 [Zakim]
+ +61.3.903.8.aaaa
22:01:34 [janina]
zakim, udy
22:01:34 [Zakim]
I don't understand 'udy', janina
22:01:39 [janina]
is Kenny_Johar
22:01:48 [janina]
zakim, udy
22:01:48 [Zakim]
I don't understand 'udy', janina
22:02:09 [mkobayas]
mkobayas has joined #html-a11y
22:02:19 [janina]
zakim, Kenny_Johar is janina
22:02:19 [Zakim]
sorry, janina, I do not recognize a party named 'Kenny_Johar'
22:02:26 [janina]
zakim, 61.3.903.8.aaaa is Kenny_Johar
22:02:26 [Zakim]
sorry, janina, I do not recognize a party named '61.3.903.8.aaaa'
22:02:41 [janina]
zakim, +61.3.903.8.aaaa is Kenny_Johar
22:02:42 [Zakim]
+Kenny_Johar; got it
22:03:03 [kenny_j]
kenny_j has joined #html-a11y
22:03:30 [janina]
zakim, Kenny_Johar is kenny_j
22:03:30 [Zakim]
+kenny_j; got it
22:07:12 [janina]
scribe: kenny_j
22:08:48 [kenny_j]
John: We should remain as agnostic as possible about specific technical solutions.
22:09:20 [kenny_j]
John: Technical requirements should be defined based on needs, not on what technologies are available.
22:10:11 [kenny_j]
Janina: Negotiation will be required with the HTML5 WG when we get to the technical solution recommendation phase.
22:11:00 [janina]
zakim, +??P4 is Matsatomo
22:11:00 [Zakim]
sorry, janina, I do not recognize a party named '+??P4'
22:11:32 [janina]
zakim, pf is Masatomo
22:11:32 [Zakim]
sorry, janina, I do not recognize a party named 'pf'
22:12:13 [janina]
zakim, ?P4 is masatomo
22:12:13 [Zakim]
sorry, janina, I do not recognize a party named '?P4'
22:12:36 [kenny_j]
Masatomo: I have a few points about the technical details, but not at this stage.
22:13:24 [kenny_j]
Judy: Let's take the requirements in order.
22:14:26 [kenny_j]
John: In terms of audio descriptions, the technical implication is quite simple, i.e. multiple tracks.
22:15:14 [kenny_j]
Janina: Audio descriptions could be contained in the same track as the main audio, or split out into a separate track.
22:15:54 [kenny_j]
Janina: What we do with legacy content is an important consideration.
22:17:57 [kenny_j]
John: There are two ways of referencing audio descriptions inside a media wrapper: 1. merged into the main track; 2. a separate track inside the media wrapper.
22:18:39 [Judy]
Judy has joined #html-a11y
22:19:50 [kenny_j]
John: For the technology we are recommending here, it has to be a separate track inside the media wrapper.
22:20:39 [Judy]
zakim, what conference is this?
22:20:39 [Zakim]
this is WAI_PFWG(A11Y)6:00PM conference code 2119
22:20:48 [Judy]
zakim, ok thanks
22:20:48 [Zakim]
I don't understand 'ok thanks', Judy
22:22:04 [kenny_j]
Judy: If we do come up with things on the authoring side, we should be able to capture them somewhere.
22:22:53 [kenny_j]
John: I will make a note for myself that this needs to be discussed with WAI.
22:23:23 [kenny_j]
Janina: The ATAG 2.0 last call went out last week.
22:24:00 [kenny_j]
John: Moving on to texted audio descriptions.
22:24:54 [kenny_j]
John: Masatomo, I believe the proof of concept you are doing at IBM contains texted audio descriptions. Any comments?
22:27:15 [kenny_j]
Masatomo: Our format is similar to SSML.
22:27:45 [kenny_j]
Janina: Styling and semantic structure are important in the format we go with.
22:29:07 [kenny_j]
John: We are saying that the text file that provides audio descriptions should be able to be marked up like other formats on the web.
22:29:53 [kenny_j]
John: Whether the markup of this format is understood by browsers is a different subject.
22:31:22 [kenny_j]
John: Styling and semantic structure are required in the format we decide to go with. We need to come back to deciding what that format is.
22:31:51 [kenny_j]
John: Moving to extended audio descriptions.
22:32:28 [kenny_j]
Janina: One option is that the main audio be paused when the extended audio description is played.
22:33:03 [kenny_j]
John: I think we are saying that 2.3 includes 2.1 and 2.2 in terms of controls.
22:33:54 [kenny_j]
John: Moving to 2.4.
22:36:18 [kenny_j]
John: It is an audio track. We will revisit this to define the technical solution.
22:37:34 [kenny_j]
John: On 2.5, Eric, comments?
22:39:05 [kenny_j]
John: There needs to be a mechanism that makes sure that all the supporting files are in synch in the media asset.
22:39:38 [kenny_j]
Eric: Any external file that is played along with the timeline of the main resource has to be in synch. I think this is a separate requirement.
22:40:32 [kenny_j]
Janina: I am happy to split the requirements out.
22:41:50 [kenny_j]
Eric: The data file cannot define the timelines. There is no timeline inherent in a data file. There has to be a resource somewhere that defines the timeline for the presentation.
22:43:22 [kenny_j]
Eric: Whenever we refer to a file that is external to the media, it becomes a part of the presentation.
22:45:38 [Judy]
Judy has joined #html-a11y
22:46:02 [kenny_j]
Janina: For example, let's say there is a 30-minute video. There is an audio description available, a sign language translation available, and a caption available. If the next track in the video is played, all the supporting resources must synch up. Something needs to effect this.
22:47:24 [kenny_j]
Janina: On their own there would be no time reference in the files that contain the captions, audio descriptions, sign language.
22:49:17 [Judy]
Kenny: In DAISY, we call this the "director," which makes sure that when you go to the next track in your video, the relevant parts are synched up as well.
22:50:17 [kenny_j]
Eric: I don't believe the working group needs to do anything about this. I think this should be left to the user agent. I think this is out of scope.
22:51:18 [kenny_j]
Eric: I think all we need to say is that the synchronisation should be maintained.
22:52:53 [kenny_j]
John: The main media resource file is better called the primary asset than the default.
22:53:59 [kenny_j]
Janina: so are we saying that we leave the synchronisation to user agents completely?
22:55:06 [kenny_j]
Eric: Whatever piece of code is responsible for playing the different resources in the media wrapper should be responsible for the synchronisation.
22:57:15 [kenny_j]
Janina: I understand the approach. The synchronisation is being left to each individual user agent/operating environment.
23:00:37 [Judy]
Judy has joined #html-a11y
23:00:46 [Judy]
Kenny: Are we saying that all the files should contain timeline information inside them?
23:01:08 [Judy]
Eric: [missed, sorry]
23:01:33 [Judy]
Kenny: What's to say that the time marker in this file would correspond to the time marker in the other file?
23:01:47 [Judy]
Janina: So we've identified a problem that needs solving.
23:01:57 [Judy]
John: Let's capture that well.
23:02:08 [kenny_j]
Janina: Our text files need to contain timeline data. We also need to decide to what resolution or granularity.
23:03:12 [kenny_j]
Eric: Timestamp formats, e.g. SRT, should contain such information.
23:03:20 [Judy]
[Judy notes that the rationale Janina provided for potential offsets is where there is an extended audio description]
23:04:58 [Judy]
Kenny: About audio description tracks -- that might play during a pause in the video -- would the timeline of the audio track and the timeline of the video track perhaps not be the same?
23:06:33 [Judy]
John: Everything has a start and an end point. For descriptive audio, it will have a start point, but the end point may be asynchronous to the end of that segment of video... the amount of time needed to provide the description exceeds the length of that video segment.
23:06:50 [Judy]
Janina: Right -- that's the extended audio case I described.
23:07:11 [Judy]
John: There are various ways of handling that.
23:07:40 [Judy]
Janina: So we've id'd two things now that we have to come to resolution on.
23:08:09 [kenny_j]
Janina: the second item we need to deal with is the case of extended audio tracks where the timeline of the extended track and the primary asset are not the same.
23:09:18 [kenny_j]
Eric: A text format must contain timestamp information or else it can't be used.
23:09:58 [kenny_j]
John: I agree. Timestamp information has to be available inside the text format.
23:11:47 [Judy]
Kenny: The audio track -- x seconds of the audio track correspond to the first five seconds of the video track... but where is that info contained?
23:12:00 [Judy]
...this is why in DAISY we resorted to "the director"
23:12:18 [kenny_j]
Kenny: Something like the DAISY SMIL director would ensure that the timeline is synchronised across the raft of supporting resources.
23:12:18 [Judy]
Eric: We don't know where we would store that.
23:12:46 [kenny_j]
* thanks Judy.
23:13:40 [kenny_j]
John: Let's continue this conversation on the list.
23:14:09 [Judy]
+1 to pursuing this point on the list, but I think this has been a good discussion to identify the specific problem to be resolved
23:15:35 [kenny_j]
Janina: Do we agree that captions should be text documents?
23:16:02 [kenny_j]
Eric: It is possible to have a binary file that has the same information.
23:17:35 [kenny_j]
Judy: I am happy with removing the text constraint from captioning.
23:18:56 [kenny_j]
Kenny: This discussion is about 2.6.
23:20:05 [kenny_j]
Eric: I don't think we should use the words "text file." Just using the word "text" is fine.
23:21:26 [kenny_j]
John: Subtitles are just what's spoken on the screen, whereas captions could include other information, e.g. applause ...
23:23:11 [kenny_j]
Janina: I think there will be more subtitles created than captions created. Merging them might have a favourable result for captioning.
23:23:56 [kenny_j]
John: Moving on to 2.7.
23:24:31 [kenny_j]
John: We could use something like the ARIA captioning role.
23:25:51 [kenny_j]
John: In terms of 2.8, the switching mechanism for languages in audio tracks and sign language tracks would be similar.
23:28:00 [kenny_j]
Janina: I prefer selecting over the word switching.
23:29:17 [kenny_j]
John: We can use the word selecting over switching. What I am referring to is the choice of a user to select between different language tracks, e.g. different language tracks for captioning, sign language ...
23:30:02 [kenny_j]
John: 2.9 is accepted.
23:30:58 [kenny_j]
John: Let's discuss a few of the 3.* items on the list this week.
23:31:42 [kenny_j]
Judy: We need to turn the requirements over to the HTML WG by the end of next week.
23:32:34 [kenny_j]
Eric: We talked about people who update the wiki sending notes describing the changes to the list. Can we please enforce this?
23:33:23 [kenny_j]
Janina: Please clarify "turning over."
23:33:45 [kenny_j]
Judy: I think the connotation is "sharing with."
23:38:15 [kenny_j]
zakim, bye
23:38:15 [Zakim]
Zakim has left #html-a11y
23:38:15 [Zakim]
leaving. As of this point the attendees were Janina, John_Foliot, Judy, Eric_Carlson, kenny_j
23:40:35 [kenny_j]
rrsagent, make minutes
23:40:35 [RRSAgent]
I have made the request to generate kenny_j
23:42:42 [Judy]
Judy has joined #html-a11y
23:43:20 [kenny_j]
rrsagent, make log public
23:44:18 [kenny_j]
rrsagent, make minutes
23:44:18 [RRSAgent]
I have made the request to generate kenny_j