00:01:37 RRSAgent has joined #me
00:01:41 logging to https://www.w3.org/2025/11/13-me-irc
00:01:41 RRSAgent, make logs Public
00:01:42 please title this meeting ("meeting: ..."), nigel_
00:01:42 matatk has joined #me
00:01:46 scribe+ cpn
00:01:51 present+
00:02:06 Meeting: Media and Entertainment Interest Group, Timed Text Working Group, Accessible Platform Architectures WG joint meeting
00:02:27 scribe+ nigel_
00:02:31 present+ Chris_Needham
00:02:36 present+ Nigel Megitt
00:02:50 present+ Matthew Atkinson
00:02:50 present+
00:02:55 zakim, who's here?
00:02:55 Present: matatk, Chris_Needham, Nigel, Megitt, Matthew, Atkinson, Roy_Ruoxi
00:02:57 On IRC I see matatk, RRSAgent, Zakim, cpn, nigel_, englishm, ovf, mattp, hyojin, gkatsev, Roy_Ruoxi, timeless, tidoust
00:03:14 rrsagent, make minutes
00:03:16 I have made the request to generate https://www.w3.org/2025/11/13-me-minutes.html nigel_
00:03:27 wschildbach has joined #me
00:03:33 present+ Gary
00:03:38 agenda: https://www.w3.org/events/meetings/77e2d0cf-aa11-434e-bb56-fa303759ff4d/
00:03:40 present+
00:03:48 alwu has joined #me
00:03:51 ohmata has joined #me
00:03:58 atsushi has joined #me
00:04:00 present+
00:04:06 present+ Alastor Wu
00:04:09 present+ Nigel_Megitt
00:04:12 present- Nigel
00:04:15 present+ mattp
00:04:18 present- Atkinson
00:04:20 present- Megitt
00:04:21 present- Matthew
00:04:24 present- matatk
00:04:32 Adam_Page has joined #me
00:04:32 present+ Matthew_Atkinson
00:04:34 nico has joined #ME
00:04:35 regrets+
00:04:40 present+
00:04:47 rrsagent, make minutes
00:04:48 I have made the request to generate https://www.w3.org/2025/11/13-me-minutes.html matatk
00:04:54 janina1 has joined #me
00:05:03 Chair: Nigel_Megitt
00:05:10 Bernd has joined #me
00:05:10 present+
00:05:15 Niko has joined #me
00:05:20 present+
00:05:34 Topic: Agenda review
00:05:50 present+ Nikolaus_Färber
00:05:56 atai has joined #me
00:06:31 Nigel: Reviews the agenda in
https://www.w3.org/events/meetings/77e2d0cf-aa11-434e-bb56-fa303759ff4d/
00:07:03 eric-carlson has joined #me
00:07:46 present+ Brian_Kardell, Eric_Carlson
00:08:19 Nigel: Anything to add to the agenda?
00:08:33 (nothing)
00:09:02 bkardell has joined #me
00:09:34 present+
00:10:42 present+ Andy_Estes
00:12:30 present+ Hisayuki_Ohmata, Tetsuya_Honda, Adam_Page, Yuta_Hagio, Pierre-Anthony_Lemieux, Mike_Gower
00:12:39 present+ Gary_Katsevman
00:13:06 hagio has joined #me
00:13:54 present+ Sayaka_Nishide, Megumi_Tsunoya
00:14:11 +Present Yuta_Hagio
00:14:38 Present+ Yuta_Hagio
00:15:06 mbgower6 has joined #me
00:15:09 present+
00:15:27 nigel has joined #me
00:15:49 Topic: DVB Accessibility Implementation Guideline
00:15:54 meg has joined #me
00:16:14 scribe+ nigel
00:16:43 Andreas: I'm chair of the accessibility task force at DVB. We sent this to W3C: implementation guidelines for media accessibility
00:16:59 ... From a DVB perspective, we'd like your feedback on the guidelines
00:17:12 ... Global needs for media accessibility features
00:17:22 ... What should that be, exactly, and where should we work on it?
00:17:59 ... There may be more than the DVB goal to get feedback, so other areas of collaboration? What could DVB help with?
00:18:54 cabanier has joined #me
00:18:54 ... The guideline describes signalling within its own specs. It supports DVB-I. Of interest for this group is not so much the broadcast aspects, but we added a systemisation of accessibility features
00:19:19 ... Is the vocabulary correct or sufficient? How could the descriptions be used in other specs?
00:19:54 ... DVB is Digital Video Broadcasting, working on terrestrial broadcasting
00:20:04 Nigel: It's used throughout Europe and elsewhere in the world
00:20:37 Andreas: Some examples of accessibility features that need a description: Audio Description, Hard of Hearing Subtitles, Spoken Subtitles, Variable Audio Rendering
00:20:48 ...
Plain Language Video, Programme-Level Content Warnings
00:21:12 ... Two examples of descriptions: Variable Audio Rendering [reads the description from the slide]
00:21:57 ... Plain Language Video: [reads the description]
00:22:30 Andreas: Nigel is a member of the Accessibility TF in DVB, and a main contributor to the descriptions
00:22:41 q+
00:22:50 ... I want to separate the different layers of how you can use the media accessibility features
00:23:11 ... User needs, mostly defined in W3C WCAG and MAUR
00:23:19 ... Requirements in WCAG and MAUR
00:24:07 ... Features and Mitigations in the DVB A185 specification, and in MAUR. There are some brief descriptions in WCAG 2 and WCAG 3
00:24:15 ... The goal in DVB is how the features are signalled
00:24:57 ... The intent is to get feedback: are we on the right path? Could there be a global reference for media accessibility features? Other things to work on together?
00:24:57 q?
00:25:06 ack w
00:25:08 q+ to say what controls are proposed about verbosity and its control by the user
00:25:21 Wolfgang: I can share the link to the DVB document
00:25:28 q+
00:25:37 The link to the DVB document A185 is https://dvb.org/?standard=accessibility-implementation-guidelines
00:26:15 Janina: It would be useful for APA, because it's a major undertaking to enter into a longer term relationship; what value does W3C get from this?
00:26:20 q?
00:26:47 ... I need to understand, are we just working on labels and definitions? To what extent can we influence what's delivered through streaming channels?
00:26:54 q+ to mention consistency of experience for users across different platforms and devices
00:27:08 q+
00:27:56 ... We had a Zoom call over a year ago. There's work we can do, but the DVB spec is well advanced, so we missed the opportunity. The MAUR was to define what HTML needed to do to support media in HTML
00:28:11 ... The approach has been useful in other user requirements documents, as it was so successful
00:28:42 ...
If we can do something to influence W3C standards, it's a big piece of work. Good group of experts in the room
00:29:10 ... Are we trimming the product that's already defined, or building something more fundamental?
00:29:17 ack mbgower6
00:29:20 ack mbgower
00:29:20 mbgower, you wanted to say what controls are proposed about verbosity and its control by the user
00:29:39 Mike: In MAUR there's something completely lacking: text video description, as it's all in the soundtrack
00:29:53 ... Something needed is giving the user control of the verbosity of information delivered
00:29:59 qq+ janina
00:30:05 ... There's potential to provide more information and let the user reduce it
00:30:38 ... Potential for bypassing the prescriptive idea of getting it by audio: have text, and transform it to how the user wants?
00:31:10 ack jan
00:31:10 janina, you wanted to react to mbgower
00:31:13 ... How baked in is this? Backport stuff from WCAG 2.3. Corrections in WCAG 3 open the prospect of text based information as opposed to just audio
00:31:33 Janina: Do we name by the medium delivering the description, or the medium being described?
00:31:41 q?
00:32:08 ... Promoted by NCAM, aimed at educational institutions, e.g., videos about Physics that are well described; expensive to produce an audio description
00:32:23 q?
00:32:26 ... But to be able to add text after the fact. It's in MAUR, and available in HTML
00:32:38 ack mat
00:33:02 Matthew: Thank you Andreas, it's helpful context.
00:33:05 Digital Accessibility User Requirements: https://www.w3.org/WAI/research/user-requirements/
00:33:13 Fazio has joined #me
00:33:20 q+
00:33:32 present+
00:33:35 ... Good to see mention of MAUR. There's a whole range of user requirements documents. One or two that are relevant: Synchronisation user requirements, and RTC
00:33:57 https://github.com/w3c/maur/issues
00:33:59 ... The last update to MAUR was 10 years ago. So help with updating MAUR would be welcome
00:34:32 ...
Immersive captions, for example. There was a report from a community group.
00:34:40 ... There's a wishlist of things to address
00:35:11 ... Groups that should be involved for the taxonomy of media features: WHATWG, if we want it to be supported and delivered on the web
00:35:47 q?
00:36:22 ... WCAG works on conformance and meeting requirements. One idea for exploration is using machine learning: one level of conformance could be ML generated captions, then user generated captions as another level. That's in AGWG
00:36:24 ack n
00:36:25 nigel, you wanted to mention consistency of experience for users across different platforms and devices
00:37:19 Nigel: Regarding the benefit of collaboration, it's about user needs and across devices. The web is not the whole world. Context switching between platforms means context switching between different terminology and a11y features
00:37:46 ... That makes things harder. So if we can identify the common set of user needs and patterns, that's beneficial. W3C, DVB, other groups such as CTA
00:38:16 ... Mike asked if it's too late to make changes. It's never too late. The web isn't a completed project...
00:38:26 ... So we should make progress wherever we can
00:39:11 ... About exposing things as text that might currently be only in audio. DAPT enables distribution to players of timed text that could be the text of the audio description. This enables what you're talking about
00:39:28 ... There's something similar proposed for WebVTT. They don't match in terms of level of detail
00:39:54 ... I also mention DAPT as it has lots of metadata for tracking translations, and what language the text is in and what language it was translated from.
00:40:11 ... Helpful for spoken subtitles (but the naming has the same problem as audio description)
00:40:52 ... Example of a Dutch news programme, including speech from Barack Obama
00:41:25 q?
00:41:30 ...
Metadata in DAPT enables the player to extract useful information to present the text, e.g., in text to speech, or in Braille
00:41:44 Andreas: Thanks for the responses, and the interest being shown
00:42:09 ack at
00:42:19 annotation of data for AI is a major consideration that we need to acknowledge and address
00:42:20 ... About the benefit of collaboration, DVB has a lot of media and broadcast expertise, and W3C has lots of a11y expertise, so we benefit from bringing them together
00:42:40 ... What we currently have is a concrete request for feedback on one document
00:43:06 ... What can we do more? From DVB, there's interest to collaborate with W3C and have a liaison. We'd need that as DVB doesn't work in the open
00:43:23 q?
00:43:28 ... There'd be interest, and DVB would benefit in a broader scope with exchange with accessibility experts
00:44:17 ... The document is non-normative, guidance for how to use DVB specifications. We plan publication in Q1 next year, so feedback by end November, but after that we still have options
00:44:51 ... I expected MAUR to come up. Who could work on an update? People in the media and broadcast sphere could help. Worth exploring
00:45:55 Fazio: We do developer training. I've realised in developing the curriculum that timed text is important for development of AI. The community is often left out
00:46:07 ... Helps avoid misunderstanding of data when fed to LLMs
00:46:57 ... Subtitles in a movie often have 3 lines of text for 5 minutes while things are changing on screen. Captions can inform an AI model what's going on at the time
00:47:35 ... By creating these a11y features we're better informing AI models, and mitigating risks. What I don't hear enough of is working with AI groups so we're standardising as much as we can, doing this in cooperation with AI.
00:47:44 q+ to move to next steps
00:47:47 ack Faz
00:47:57 ... Text for labels as another example.
Come up with a structure for annotating for accessibility, to make this happen
00:48:29 Nigel: Proposals for concrete next steps? There's a request for review
00:48:31 q?
00:48:32 ack ni
00:48:32 nigel, you wanted to move to next steps
00:49:13 Janina: As we read and provide comments, we should look at the descriptions, not the architecture of how it's delivered?
00:49:24 Nigel: Not so much the architecture...
00:50:05 q+
00:50:15 Andreas: Initial feedback is good, but looking at the architecture could take more time. If we need to delay, we want something that can be fed into the specific document
00:50:55 Nigel: Talked with Kevin White, who noticed that some things proposed are possible in DVB or broadcast standards but not on the web. For example, the object based audio approach is poorly supported.
00:51:38 ... There is an architectural issue on how different media components can be assembled and presented to the user. Not only to do with audio, so we need to think about it holistically
00:51:58 Janina: I should review, but can't by end November
00:52:33 Andreas: If you do intend to review, it would be worthwhile communicating that more time is needed. Let us know when you might be ready
00:52:39 Janina: mid-January?
00:52:58 Andreas: This would be really good
00:53:07 q?
00:53:36 Janina: We had a group working on MAUR, so I hope to bring others in
00:53:52 ... Building the relationship here is valuable, so we want to support that
00:54:23 ... I'll be more involved in WCAG 2 and 3 from December
00:54:36 Mike: I'd be happy to be involved in the review
00:54:50 ack mat
00:55:12 Matthew: Agreed on the points about architecture. APA has a similar request, in the audio domain, that's not well formed yet, about routing of audio to different devices
00:55:23 ... We'd like to progress that
00:55:36 ... +1 to having a global taxonomy
00:56:06 ...
In terms of review time, it could be useful to know if there are blockers earlier, and focus attention on those
00:57:24 Nigel: Next steps: the review, and establishing the DVB / W3C liaison
00:58:03 Chris: I spoke to Remo at DVB, but we haven't concretely identified other topic areas beyond this at this stage. We'll follow up
00:58:07 Topic: DAPT
00:58:29 Nigel: We want to request a new CR Snapshot. We're near the end of this process
00:58:41 ... We're working on implementations. Thank you for review comments
00:58:52 ... I don't think there are outstanding APA review questions
00:59:30 ... We asked for review on IMSC 1.3. We are seeking to go to CR. We asked for horizontal review of the FPWD
01:00:22 ... It's a profile that includes features that are already in Recs, so no additional tests are needed to demonstrate interop
01:00:36 ... We do the smallest amount of testing and implementation work needed
01:00:49 ... As a result, no further tests needed, so we can move quickly from CR to Rec
01:01:25 ... Two open questions about IMSC: APA review, and anything we need to do for a11y of superscripts and subscripts
01:01:37 ... The other is about Japanese character sets and Unicode
01:01:59 ... But on superscript and subscript, anything APA wants to say?
01:02:17 Link to nigel's comment: https://github.com/w3c/a11y-tracking/issues/252#issuecomment-3334994063
01:03:05 Matthew: [Explains how the review mechanism works]
01:03:51 ... What you propose sounds reasonable. I'd suggest that ARIA should weigh in, whether that's the best thing we can do or not
01:04:29 ... No process yet for getting horizontal review from them, but we're discussing that this afternoon
01:05:02 Janina: We treat FPWDs and CRs differently, so we may want to refine that
01:05:44 Nigel: Some refinement in the horizontal review process, to signal the intent of where to go next
01:06:05 ... Pierre, as IMSC editor, anything to raise?
01:06:23 Pierre: No, I think you've covered it all.
We're open to further feedback on that issue
01:06:43 Nigel: So the proposed informative text would be helpful to include in the spec, so we'll do that
01:08:08 Pierre: I don't have a strong opinion. It's a definitive recommendation; are we sure it'll be the case forever?
01:09:01 ... It applies when mapping TTML/IMSC to HTML/CSS. Are we sure we want something that definitive? I don't want it to make statements about parts of the web that it's not in charge of, so it's really an HTML/CSS issue
01:09:02 q+
01:09:18 ... Hundreds of comments in HTML about how to do superscript...
01:10:10 Nigel: So either make no statement, or say something less specific, e.g., care should be taken to ensure assistive technology can ...
01:10:25 Janina: I think we'd be fine with that
01:10:45 Pierre: No objection to the current wording, but don't want to take sides in that debate
01:11:29 Matthew: There was an interesting debate earlier in the week about what constitutes an AT bug or a spec bug, etc
01:11:56 ... This is not so much a bug, but this is a note to give a warning, which seems fair
01:12:11 ... But appreciate you don't want to say something that may be wrong in future
01:13:26 q?
01:13:28 ack me
01:13:58 Nigel: We may not have time for the Unicode issue. Ohmata-san, there's a question about the precise wording regarding ideographic selectors that we need to resolve to complete the work on Japanese character set references
01:14:20 Topic: ARIA Notifications API
01:14:41 Nigel: BBC exposes subtitles and captions to AT using aria-live, set to polite
01:14:51 ... It's complex, whether it's a good thing to do in all cases
01:14:57 q+
01:15:15 ... For a screen reader user watching alongside a subtitle/caption user, it could get noisy for the screen reader user in ways they don't want
01:16:03 For some background: TAG design review for AriaNotify API: https://github.com/w3ctag/design-reviews/issues/1075
01:16:06 ... not clear what the solution should be.
The user could have an option to say whether it should go to AT or not. In native playback, a system setting. The ARIA Notifications API might help with this?
01:16:50 Matthew: It relates to live regions, which are areas of the page that the browser monitors for changes and tells the AT, like an audio status bar
01:17:11 Regarding the Japanese character set, I am preparing feedback, so I will communicate with Atsushi again and provide an official response.
01:17:32 ... The pain points encountered with live regions relate to web app authors, who want to provide a cue to a screenreader user that something has happened but don't want it to be visible in the page
01:17:55 ... AriaNotify works for things that are visible. Good idea to make things available to all users in all modalities
01:18:18 ... Can't assume screenreader means blind user
01:19:10 ... ARIA live regions are good for cases where the content is visible and not going away
01:19:47 ... This is an alternative approach, imperative not declarative. Call a function with a message, and priority. Call it on an element, which has implications for filtering stuff out potentially
01:19:59 The TAG design review thread I posted links to the explainer with examples.
01:20:05 s/The/... The/
01:20:14 q?
01:20:16 ... No privacy concern
01:20:18 ack ma
01:20:24 q+
01:21:04 ... You might have captions displayed visually, but it could be too noisy. So instead have an app setting to enable calls to the API. I hadn't considered it, and ARIA may not have considered it. But it seems a reasonable thing to do
01:21:42 Nigel: An additional use case. BBC has online videos with ambient sound and no dialog, and the video image contains narrative text, e.g., a silent film
01:22:24 ... It's not accessible! An alternative might be to provide something like a text track, but where the intent isn't to display a visual, but to use AriaNotify to send it to the screenreader
01:23:04 q+
01:23:09 ack n
01:23:16 Fazio: I have a reading impairment.
Many movies have text on the screen that you can't read
01:24:06 q+
01:24:13 ... What about people who don't use screenreaders? Something embedded in the HTML media player, as it's not just people who can't see completely, also people with cognitive disabilities
01:24:27 ack mb
01:24:57 Mike: In a music video where the lyrics appear on the screen, you still get captions. They're fully redundant
01:25:15 q+
01:25:43 ... But you don't get it the other way round. There's no redundancy with text.
01:25:47 ack gk
01:26:30 Gary: It sounds like for the BBC use case that AriaNotify might be better. I wonder if we need a way to expose the subtitle/caption or text description in another way, such as a Braille reader or similar
01:26:54 +1 to Gary, would be v useful to have a consistent way to flag this to user agents
01:27:02 ... Could be helpful as a way to enable captions to be consumed ... would need to be at the browser or OS level
01:27:10 ack ma
01:27:56 Matthew: I struggle with videos with burned-in text. But the timing issue that David mentioned is annoying, as you have to rewind repeatedly
01:28:07 Web Platform Gaps (needs your input!): https://github.com/w3ctag/gaps/issues
01:28:44 q?
01:28:47 ... There's a repo owned by the TAG to identify gaps in the web platform. We want problems there, not solutions. But please add to it!
01:29:08 Topic: Subtitle format support in browsers
01:29:27 Nigel: Lots of issues involved. Proposal from Apple to support a new FCC requirement
01:30:06 ... Beneficial to have native support for IMSC as well as WebVTT. Causing stress and tension in this world. Lots of work to be done, and questions
01:30:30 ... Privacy, user data, rendering accuracy, and also what information you expose to AT and how the user controls it
01:30:30 q?
01:31:21 ... No time to discuss today. But this will continue. Confusing that W3C has more than one caption standard: one that's a Rec and one that isn't. Confusing that WHATWG refers to the one that isn't
01:31:45 ...
Let's keep the conversation going, to get the most accessible formats supported
01:31:50 Gary: No more caption formats!
01:32:26 Nigel: Just the ones we have. We started talking about broadcasting standards and formats. They say you can use IMSC, but the Web is different
01:32:56 Fazio: I appreciate what YouTube has done to democratise creating captions. It's been a game changer
01:33:21 [adjourned]
01:33:50 hagio has left #me
01:35:04 s/Topic: DAPT/Topic: DAPT and IMSC updates/
01:35:15 rrsagent, draft minutes
01:35:16 I have made the request to generate https://www.w3.org/2025/11/13-me-minutes.html cpn
01:36:41 Adam_Page has joined #me
01:37:32 cpn has joined #me
01:37:37 rrsagent, pointer?
01:37:37 See https://www.w3.org/2025/11/13-me-irc#T01-37-37
01:38:07 present- bkardell, Gary, mbgower6
01:38:09 scribe+ nigel
01:38:43 s/+Present Yuta_Hagio//
01:40:07 nigel has joined #me
01:45:04 s/langauge/language/g
01:50:14 Adam_Page has joined #me
01:50:22 rrsagent, make minutes
01:50:23 I have made the request to generate https://www.w3.org/2025/11/13-me-minutes.html nigel
01:51:04 scribeOptions: -final -noEmbedDiagnostics
01:51:07 zakim, end meeting
01:51:07 As of this point the attendees have been matatk, Chris_Needham, Nigel, Megitt, Matthew, Atkinson, Roy_Ruoxi, Gary, wschildbach, ohmata, Alastor, Wu, Nigel_Megitt, mattp,
01:51:10 ... Matthew_Atkinson, Adam_Page, janina, Bernd, Nikolaus_Färber, Brian_Kardell, Eric_Carlson, bkardell, Andy_Estes, Hisayuki_Ohmata, Tetsuya_Honda, Yuta_Hagio,
01:51:10 ... Pierre-Anthony_Lemieux, Mike_Gower, Gary_Katsevman, Sayaka_Nishide, Megumi_Tsunoya, mbgower, Fazio
01:51:10 RRSAgent, please draft minutes
01:51:11 I have made the request to generate https://www.w3.org/2025/11/13-me-minutes.html Zakim
01:51:17 I am happy to have been of service, nigel; please remember to excuse RRSAgent. Goodbye
01:51:18 Zakim has left #me
01:51:24 rrsagent, excuse us
01:51:24 I see no action items
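[Editor's sketch] The discussion in the ARIA Notifications API topic contrasts a declarative aria-live region (the BBC's current approach for captions) with the imperative AriaNotify proposal linked from the TAG design review. A minimal sketch of that routing choice follows, assuming a hypothetical player setting; `announceCue`, `region`, and `useAriaNotify` are illustrative names, not from any spec, and the `ariaNotify` method exists only in the draft explainer, so the code feature-detects it rather than assuming it:

```javascript
// Hypothetical sketch: route a caption cue either to the proposed AriaNotify
// API (imperative, nothing added to the visible DOM) or to an aria-live
// region (declarative; AT announces the DOM change), per a user setting.
// The element is expected to carry aria-live="polite" in the markup.
function announceCue(cueText, { region, useAriaNotify = false } = {}) {
  if (useAriaNotify && typeof region.ariaNotify === "function") {
    // Draft-proposal path: ask the browser to notify AT directly.
    region.ariaNotify(cueText);
    return "notified";
  }
  // Fallback path: mutate the live region; AT picks up the change.
  region.textContent = cueText;
  return "live-region";
}
```

This keeps the visible caption rendering independent of the AT announcement path, which matches the suggestion in the log that an app setting could decide whether captions also go to the screen reader.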