06:19:06 RRSAgent has joined #saur
06:19:06 logging to https://www.w3.org/2021/10/20-saur-irc
06:19:08 RRSAgent, make log public
06:19:32 RRSAgent, stay
13:25:29 atai has joined #saur
13:25:33 atai has left #saur
13:42:40 janina has joined #saur
13:49:50 Bert has joined #saur
13:50:22 MURATA has joined #saur
13:55:34 SteveNoble has joined #saur
13:55:36 Joshue108 has joined #saur
13:55:45 present+
13:55:56 atai has joined #saur
13:55:59 present+
13:56:02 present+
13:56:49 Zakim has joined #saur
13:57:14 present+ atai
13:57:22 Meeting: Synchronization Accessibility User Requirements (SAUR)
13:57:31 Date: 20 Oct 2021
13:57:42 Chair: Jason_White
13:57:47 avneeshsingh has joined #saur
13:58:13 Matthew_Atkinson has joined #saur
13:58:37 rrsagent, make log public
13:58:52 topic: Introducing SAUR and its implications
13:59:16 gkatsev has joined #saur
14:01:11 scribe: Joshue108
14:01:46 kirkwood has joined #SAUR
14:01:49 nigel has joined #saur
14:01:57 Present+ Nigel_Megitt
14:01:57 Jemma_ has joined #saur
14:01:58 present+ Bert_Bos
14:02:00 Judy has joined #saur
14:02:08 tzviya has joined #saur
14:02:13 present+
14:02:16 present+ Jemma
14:02:22 JW: Thanks everyone for attending.
14:02:23 present+
14:02:31 present+
14:02:37 Roy has joined #saur
14:02:38 present+
14:02:45 present+
14:02:46 stevelee has joined #saur
14:02:49 We are here to discuss the FPWD of Synchronization Accessibility User Requirements (SAUR).
14:03:12 It has implications for future W3C guidance, including future accessibility guidelines.
14:03:37 To define the problem..
14:04:08 How closely in sync do these resources need to be?
14:04:35 What we have done in the Research Questions Task Force (APA) is look at the research literature and document the findings,
14:04:50 as well as the timing tolerances between different media resources.
14:05:25 We cover enhancing comprehension via synchronization of captions, sign language interpretation, and descriptions.
14:05:48 There is a question around XR and augmented environments - do they have different tolerances?
14:06:03 We also note the distinction between live and pre-recorded media.
14:06:06 Jemma has joined #saur
14:06:54 There are various issues covered.
14:07:35 We need to discuss how we shall document certain things - but the intent is that other groups working on related specs can take this material and use it.
14:08:00 q+ to note the different actors involved and their respective responsibilities to achieve the requirements
14:08:03 We want to make sure this work is useful as it is developed.
14:08:44 JW: Going to hand over to Steve Noble - he did most of the research here and documented the findings.
14:08:49 We can then discuss.
14:09:26 ack nige
14:09:26 nigel, you wanted to note the different actors involved and their respective responsibilities to achieve the requirements
14:09:32 Jennie has joined #saur
14:09:41 present+
14:09:42 NM: This is really valuable research, in terms of being data driven.
14:09:57 The Timed Text Working Group is happy to be invited.
14:10:08 NM: Notes this is coming up in the Media IG and other places.
14:10:29 The TTWG focus is on the document format - what text should be presented at what time.
14:11:03 The difference is that you can specify the time in a Timed Text doc - but what matters from the audience's perspective is what actually gets presented.
14:11:26 There are many things happening in a user agent that is presenting various a11y source formats.
14:11:47 These requirements explain what is good for the audience, but we need to think in terms of meeting them.
14:12:11 Playback requirements etc. - how close should the UA get to honouring these requirements?
14:12:45 There are different jobs and responsibilities. The concern is that in defining the end result we need to ensure we note who is responsible.
14:12:59 SN: Thank you. That does come through in the research.
14:13:25 What we know from research is the impact on the user when one media component is out of sync with another.
14:13:42 q+
14:13:42 There is the question of what is possible currently, and what is experienced by the end user.
14:14:03 Regarding syncing primary video and audio - think of a person speaking.
14:14:18 That is a simple synchronization issue.
14:14:36 But if they get out of sync - this causes an a11y barrier.
14:14:59 For those who are lip reading, or hard of hearing, or in a noisy space.
14:15:19 Really, there are constraints relating to the media.
14:15:42 Those in TV can control these things much more easily than in a Zoom meeting, for example.
14:15:49 So the environment is a factor.
14:16:09 However, we are trying to find metrics.
14:16:23 What are our target tolerances?
14:16:40 What provides the best a11y?
14:17:17 SN: These points are around what the technology capabilities are - what is possible?
14:17:55 In this doc, we are trying to look at the issues that are known.
14:18:03 Link to document: https://www.w3.org/TR/saur/
14:18:25 SN: I can quickly point out some of the issues we were looking at,
14:18:43 e.g. the synchronization of the audio and video streams.
14:19:15 The research shows that when audio and video are out of sync, a person who is not hard of hearing can experience the same issues as someone who is.
14:19:48 This shows us that there is a range within which the audio and video can be out of sync, but going beyond that range can mean an a11y barrier.
14:20:26 When they get very out of sync, this gets in the way of comprehension on one level, and reduces enjoyment on another.
14:20:33 q-
14:21:00 q+ to mention that there's an asymmetry in audience experience between late and early presentation, at least of captions
14:21:25 SN: The research shows that having the video slightly ahead (milliseconds) can actually be beneficial.
14:21:44 rrsagent, make minutes
14:21:44 I have made the request to generate https://www.w3.org/2021/10/20-saur-minutes.html janina
14:21:54 You start to comprehend before you hear the audio.
14:22:23 SN: When they get out of sync it causes problems for many.
14:22:42 We looked at caption synchronization. There is an issue around the rate at which it comes across.
14:23:14 We attempted to provide some metrics, as well as cover caption synchronization capabilities.
14:23:25 There will be a tradeoff between latency and accuracy.
14:23:57 There is often latency; human captioning can't match automated captioning for speed, but there is a quality tradeoff.
14:24:09 In live meetings, how late can the captions come?
14:24:19 Jemma__ has joined #saur
14:24:20 tzviya_ has joined #saur
14:24:41 We have just looked at the research, and the community will need to discuss.
14:25:06 Regarding sign language, it is not a one-to-one translation.
14:25:34 So there is some delay - what amount of lag is acceptable, so as not to put the user at a disadvantage?
14:26:08 q?
14:26:14 Regarding video description synchronization - you are looking for available space in the audio to describe what is going on visually.
14:26:24 It is impossible to have it exactly in sync.
14:26:35 So what is possible, and what is the best-case scenario?
14:27:03 Regarding XR - we are not aware of a lot of research on XR media timing. Some here may have insights.
14:27:09 So that is an overview.
14:27:26 Happy to discuss and hear from others.
14:27:34 ack nigel
14:27:34 nigel, you wanted to mention that there's an asymmetry in audience experience between late and early presentation, at least of captions
14:27:36 q+
14:27:53 q+
14:28:04 NM: What you call video description may be called audio description in other places.
14:28:23 You alluded to the asymmetry in audience experience between late and early presentation.
14:29:20 Early captions can be harder for people to deal with than late ones.
14:30:48 SN: Pre-recorded captions have more scope for tweaking etc. There is a limited range of what can be done.
14:31:01 There is an issue finding 'space'.
14:31:33 NM: From an editorial angle - they may not be exactly mapped to what is being described.
14:32:02 Comprehension can happen anyway, depending on context - sync requirements do change.
14:32:07 SN: Exactly.
14:33:41 SN: It requires planning to get this right.
14:33:45 ack jan#
14:33:58 ack janina
14:34:34 JS: This harks back to conversations we had around the time of HTML 5.2.
14:35:08 We have the Media Accessibility User Requirements - where we looked at video being presented as audio, or output as Braille/TTS.
14:35:39 We outlined the ability to allow the user to consume descriptions of video presented in text.
14:36:35 There may be more elaborate descriptions needed depending on context.
14:36:46 You may also need to pause, control the stream, etc.
14:37:01 -> https://bbc.github.io/Adhere/ Demonstrator for allowing audio description to be presented in text, based on the Audio Description Profile of TTML2
14:37:02 Entertainment and education have different requirements.
14:37:47 Time offsets etc.
14:37:50 q?
14:38:14 JS: We can revisit this as part of WCAG.
14:38:27 ack andreas
14:38:31 ack ata
14:38:58 AT: Like Nigel, I think this is a valuable collection of requirements, which has come up earlier.
14:39:13 It would be good to have concrete values for these tolerances.
14:39:36 It would be good to have a clearer guideline of what to do.
14:39:51 Looking at the research is good - people are looking for this.
14:40:10 Caption rate etc. is also important.
14:40:48 The European Broadcasting Union also has requirements.
14:41:07 Secondly, it is good to summarize, but who is responsible?
14:41:30 It would be good to look at what kind of technical application development these requirements are targeting.
14:41:37 And at what stage?
14:41:47 q?
14:42:11 scribeoptions: -implicit
14:42:21 rrsagent, make minutes
14:42:21 I have made the request to generate https://www.w3.org/2021/10/20-saur-minutes.html janina
14:43:05 SN: What do we do next? What ranges of tolerances should be used for a11y? And what is the goal - who will implement?
14:43:22 q?
14:43:31 JW: We are discussing interesting questions.
14:43:54 We should summarize the research findings and draw some conclusions.
14:44:18 If you know of others who could review and submit issues, please do notify them.
14:44:21 q+
14:44:34 What else can we do in the next version?
14:44:36 ack jan
14:45:04 JS: Coming to conclusions.. I'd like input from Nigel, Andreas, and others.
14:45:24 We are looking at asking the browser to buffer content, as part of flash mitigation.
14:45:36 There will be discussions later.
14:46:05 If you can buffer, the machine can prevent flashing, which helps users who are sensitive to it.
14:46:22 Would this be helpful? Tighter tolerances?
14:46:30 Is this another use case?
14:46:49 NM: Regarding video - there are legal UK requirements to avoid flashing.
14:47:00 That is the responsibility of the content provider.
14:47:10 You could extend the scope to non-video?
14:47:14 q+
14:47:23 Animations could do this, for example.
14:47:58 Some video providers spend effort on managing buffer time - so it may be tricky if buffering were forced.
14:48:03 It could work as an option.
14:48:12 q+ to clarify the optionality
14:48:41 NM: There is also an issue around @@
14:49:22 JB: This is an issue around assessing a user's disability status.
14:49:41 Our personalization work may help with sandboxing this.
14:50:48 q+ to ask about non-media synchronisation accessibility user requirements
14:50:54 JS: IIRC the delay is very short.
14:50:56 ack judy
14:50:56 Judy, you wanted to clarify the optionality
14:51:20 https://groups.csail.mit.edu/infolab/publications/Barbu-Deep-video-to-video-transformations.pdf
14:51:25 JB: I'm encouraged by the research - this may be low frequency, but it is substantial for those affected.
14:51:27 ack nig
14:51:27 nigel, you wanted to ask about non-media synchronisation accessibility user requirements
14:52:01 NM: This has made me think - you can cause flashing in different ways.
14:52:02 q+
14:53:25 NM: Should this be mentioned? Responsiveness?
14:53:40 JB: We want people to think about things we are missing.
14:53:58 What are the other angles? Thank you for thinking more broadly.
14:54:03 q?
14:54:10 ack judy
14:54:55 q+
14:55:00 KP: As an aside, I've worked with transcripts and video - and it seems we are not talking enough about transcripts.
14:55:20 Having multiple sources can mean less of a problem.
14:55:29 Another useful pathway.
14:55:46 SN: Good point, Kim. And we haven't discussed this in the document.
14:56:14 There are issues around audio and video description etc. - you want that content to make it into the transcript.
14:56:28 This becomes an issue of incorporation.
14:56:41 We've not looked at this here, but we should.
14:56:45 ack jan
14:57:12 JS: This would be helpful - we did look at this when working on the MAUR (Media Accessibility User Requirements).
14:57:22 We got many things into the HTML spec.
14:57:53 But there was a meeting (in 2013) where it was decided not to programmatically determine these things.
14:58:04 That was a long time ago - things may be different now.
14:58:16 KP: I've got live examples of this.
14:58:26 JS: Yes.
14:58:40 KP: Also, the production process is easier. I can demo.
14:58:58 SN: Appreciate the input on that.
14:59:12 rrsagent, make minutes
14:59:12 I have made the request to generate https://www.w3.org/2021/10/20-saur-minutes.html janina
14:59:35 JW: Comments can be submitted via GitHub or sent via email, etc.
15:00:02 We look forward to planning other APA activities. Thanks all for a useful conversation.