Meeting minutes
Agenda review
Nigel: Reviews the agenda in https://
Nigel: Anything to add to the agenda?
(nothing)
DVB Accessibility Implementation Guideline
Andreas: I'm chair of the accessibility task force at DVB. We sent this to W3C: implementation guidelines for media accessibility
… From a DVB perspective, we'd like your feedback on the guidelines
… Global needs for media accessibility features
… What should that be, exactly, and where should we work on it?
… There may be more than the DVB goal to get feedback, so other areas of collaboration? What could DVB help with?
… The guideline describes signalling within its own specs. It supports DVB-I. Of interest for this group is not so much the broadcast aspects, but we added a systemisation of accessibility features
… Is the vocabulary correct or sufficient? How could the descriptions be used in other specs?
… DVB is Digital Video Broadcasting, working on terrestrial broadcasting
Nigel: It's used throughout Europe and elsewhere in the world
Andreas: Some examples of accessibility features that need a description. Audio Description, Hard of Hearing Subtitles, Spoken Subtitles, Variable Audio Rendering
… Plain Language Video, Programme-Level Content Warnings
… Two examples for descriptions: Variable Audio Rendering, [reads the description from the slide]
… Plain Language Video: [reads the description]
Andreas: Nigel is a member of the Accessibility TF in DVB, and a main contributor to the descriptions
… I want to separate the different layers of how you can use the media accessibility features
… User needs, mostly defined in W3C WCAG and MAUR
… Requirements in WCAG and MAUR
… Features and Mitigations in DVB A185 specification, and in MAUR. There are some brief descriptions in WCAG 2 and WCAG 3
… The goal in DVB is how the features are signalled
… The intent is to get feedback: are we on the right path? Could there be a global reference for media accessibility features? Other things to work on together?
Wolfgang: I can share the link to the DVB document
<wschildbach> The link to the DVB document A185 is https://
Janina: It would be useful for APA, because it's a major undertaking to enter into a longer term relationship, what value does W3C get from this?
… I need to understand, are we just working on labels and definitions? To what extent can we influence what's delivered through streaming channels?
… We had a Zoom call over a year ago. There's work we can do, but the DVB spec is well advanced, so we missed the opportunity. The MAUR was to define what HTML needed to do to support media in HTML
… The approach has been useful in other user requirements documents, as it was so successful
… If we can do something to influence W3C standards, it's a big piece of work. Good group of experts in the room
… Are we trimming the product that's already defined, or building something more fundamental
<Zakim> mbgower, you wanted to say what controls are proposed about verbosity and its control by the user
Mike: In MAUR there's something completely lacking, text video description, as it's all in the soundtrack
… Something needed is giving user control of verbosity of information delivered
… There's potential to provide more information and let the user reduce it
… Potential for bypassing the prescriptive idea of getting it by audio, have text and transformed to how the user wants?
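The verbosity control Mike describes could be sketched as below. This is purely illustrative: the cue shape and the numeric `level` field are made-up, not part of any spec.

```javascript
// Illustrative sketch: filter description cues by a user-chosen
// verbosity level. Cue shape and numeric levels are hypothetical.
function filterByVerbosity(cues, maxLevel) {
  // Keep only cues whose verbosity level is within the user's preference.
  return cues.filter((cue) => cue.level <= maxLevel);
}

const cues = [
  { text: "A train arrives.", level: 1 },          // essential
  { text: "The platform is crowded.", level: 2 },  // extended detail
  { text: "Rain streaks the windows.", level: 3 }, // full verbosity
];

// A user preferring moderate verbosity would receive the first two cues.
console.log(filterByVerbosity(cues, 2).map((c) => c.text));
```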
<Zakim> janina, you wanted to react to mbgower
Mike: How baked in is this? Backport stuff from WCAG 2.3. Corrections in WCAG 3 that opens prospect of text based information as opposed to just audio
Janina: Do we name by the media of delivering the description, or the media being described
… Promoted by NCAM, aimed at educational institutions, e.g., videos about Physics that are well described, expensive to produce an audio description
… But to be able to add text after the fact. It's in MAUR, and available in HTML
Matthew: Thank you Andreas, it's helpful context.
<matatk> Digital Accessibility User Requirements: https://
Matthew: Good to see mention of MAUR. There's a whole range of user requirements documents. One or two that are relevant: Synchronisation user requirements, and RTC
<matatk> https://
Matthew: The last update to MAUR was 10 years ago. So help with updating MAUR would be welcome
… Immersive captions, for example. There was a report from a community group.
… There's a wishlist of things to address
… Groups that should be involved for the taxonomy of media features. WHATWG, if we want it to be supported and delivered on the web
… WCAG works on conformance and meeting requirements. One idea for exploration is using machine learning, one level of conformance could be ML generated captions, then user generated captions as another level. That's in AGWG
<Zakim> nigel, you wanted to mention consistency of experience for users across different platforms and devices
Nigel: Regarding the benefit of collaboration, it's about user needs and across devices. The web is not the whole world. Context switching between platforms means context switching between different terminology and a11y features
… That makes things harder. So if we can identify the common set of user needs and patterns, that's beneficial. W3C, DVB, other groups such as CTA
… Mike asked if it's too late to make changes. It's never too late. The web isn't a completed project...
… So we should make progress wherever we can
… About exposing things as text that might currently be only in audio. DAPT enables distribution to players of timed text that could be the text of the audio description. This enables what you're talking about
… There's something similar proposed for WebVTT. They don't match in terms of level of detail
… I also mention DAPT as it has lots of metadata for tracking translations, and what language the text is in and what language it was translated from.
… Helpful for spoken subtitles (but naming has same problem as audio description)
… Example of Dutch news programme, including speech from Barack Obama
… Metadata in DAPT enables the player to extract useful information to present the text, e.g., in text to speech, or in Braille
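The player behaviour Nigel describes, using language metadata to choose a presentation, might look roughly like this. The objects are hypothetical stand-ins: DAPT itself is a TTML-based XML format, and the voice list and matching rule are illustrative only.

```javascript
// Illustrative sketch: pick a text-to-speech voice whose language
// matches the language metadata carried with a script event.
function pickVoice(voices, eventLang) {
  // Prefer an exact BCP 47 match, then a primary-language match.
  return (
    voices.find((v) => v.lang === eventLang) ||
    voices.find((v) => v.lang.split("-")[0] === eventLang.split("-")[0]) ||
    null
  );
}

const voices = [{ lang: "nl-NL" }, { lang: "en-US" }];
// A Dutch news programme containing an English-language quotation:
pickVoice(voices, "nl");    // primary-language match: the nl-NL voice
pickVoice(voices, "en-US"); // exact match: the en-US voice
```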
Andreas: Thanks for the responses, and the interest being shown
<Fazio> annotation of data for ai is a major consideration that we need to acknowledge and address
Andreas: About the benefit of collaboration, DVB has a lot of media and broadcast expertise, and W3C has lots of a11y expertise, so we benefit from bringing them together
… What we currently have is a concrete request for feedback on one document
… What can we do more? From DVB, there's interest to collaborate with W3C and have a liaison. We'd need that as DVB doesn't work in the open
… There'd be interest, and DVB would benefit in a broader scope with exchange with accessibility experts
… The document is non-normative, guidance for how to use DVB specifications. We plan publication in Q1 next year, so feedback by end November, but after that we still have options
… I expected MAUR to come up. Who could work on an update? People in the media and broadcast sphere could help. Worth exploring
Fazio: We do developer training. In developing the curriculum I've realised that timed text is important for the development of AI. The community is often left out
… It helps avoid misunderstanding of data when fed to LLMs
… Subtitles in a movie often have 3 lines of text for 5 minutes while things are changing on screen. Captions can inform an AI model what's going on at the time
… By creating these a11y features we're better informing AI models, and mitigating risks. What I don't hear enough of is working with AI groups so we're standardising as much as we can, doing this in cooperation with AI.
… Text for labels as another example. Come up with a structure for annotating for accessibility, to make this happen
Nigel: Proposals for concrete next steps? There's a request for review
<Zakim> nigel, you wanted to move to next steps
Janina: As we read and provide comments, we should look at the descriptions, not the architecture of how it's delivered?
Nigel: Not so much the architecture....
Andreas: Initial feedback good, but looking at the architecture could take more time. If we need to delay, we want something that can be fed into the specific document
Nigel: Talked with Kevin White, who noticed things proposed that are possible in DVB or broadcast standards that the web can't do. Example, the object based audio approach is poorly supported.
… There is an architectural issue on how different media components can be assembled and presented to the user. Not only to do with audio, so need to think about it holistically
Janina: I should review, but can't by end November
Andreas: If you do intend to review, it would be worthwhile communicating that more time is needed. Let us know when you might be ready
Janina: mid-January?
Andreas: This would be really good
Janina: We had a group working on MAUR, so hope to bring others in
… Building the relationship here is valuable, so want to support that
… I'll be more involved in WCAG2 and 3 from December
Mike: I'd be happy to be involved in the review
Matthew: Agreed on the points about architecture. APA has a similar request, in the audio domain, that's not well formed yet, about routing of audio to different devices
… We'd like to progress that
… +1 to having a global taxonomy
… In terms of review time, could be useful to know if there are blockers earlier, and focus attention on those
Nigel: Next steps, the review, and establishing the DVB / W3C liaison
Chris: I spoke to Remo at DVB, but we haven't concretely identified other topic areas beyond this at this stage. We'll follow up
DAPT and IMSC updates
Nigel: We want to request a new CR Snapshot. We're near the end of this process
… We're working on implementations. Thank you for review comments
… I don't think there are outstanding APA review questions
… We asked for review of IMSC 1.3. We are seeking to go to CR. We asked for horizontal review of the FPWD
… It's a profile that includes features that are already in Recs, so no additional tests needed to demonstrate interop
… We do the smallest amount of testing and implementation work needed
… As a result, no further tests needed, so we can move quickly from CR to Rec
… Two open questions about IMSC. APA review, and anything we need to do for a11y of superscripts and subscripts
… The other is about Japanese character sets and Unicode
… But on superscript and subscript, anything APA wants to say?
<matatk> Link to nigel's comment: w3c/
Matthew: [Explains how the review mechanism works]
… What you propose sounds reasonable. I'd suggest that ARIA should weigh in, if that's the best thing we can do or not
… No process yet for getting horizontal review from them, but we're discussing that this afternoon
Janina: We treat FPWD and CRs differently, so we may want to refine that
Nigel: Some refinement in the horizontal review process, to signal the intent of where to go next
… Pierre, as IMSC editor, anything to raise?
Pierre: No, I think you've covered it all. We're open to further feedback on that issue
Nigel: So the proposed informative text would be helpful to include in the spec, so we'll do that
Pierre: I don't have a strong opinion. It's a definitive recommendation, are we sure it'll be the case forever?
… It applies when mapping TTML/IMSC to HTML/CSS. Are we sure we want something that definitive? I don't want it to make statements about parts of the web that it's not in charge of, so it's really an HTML/CSS issue
… Hundreds of comments in HTML about how to do superscript...
Nigel: So either make no statement, or say something less specific, e.g., care should be taken to ensure assistive technology can ...
Janina: I think we'd be fine with that
Pierre: No objection to current wording, but don't want to take sides in that debate
Matthew: There was an interesting debate earlier in the week about what constitutes an AT bug or a spec bug, etc
… This is not so much a bug, but this is a note to give a warning, which seems fair
… But appreciate you don't want to say something that may be wrong in future
Nigel: We may not have time for the Unicode issue. Ohmata-san, there's a question about the precise wording regarding ideographic selectors that we need to resolve to complete the work on Japanese character set references
ARIA Notifications API
Nigel: BBC exposes subtitles and captions to AT using aria-live, set to polite
… It's complex, whether a good thing to do in all cases
… A screen reader user watching alongside a subtitle/caption user, it could get noisy for the screen reader user in ways they don't want
<matatk> For some background: TAG design review for AriaNotify API: w3ctag/
Nigel: not clear what the solution should be. User could have option to say whether it should go to AT or not. In native playback, a system setting. The ARIA Notifications API might help with this?
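The current approach Nigel describes, with a user option deciding whether cue text reaches AT, could be sketched as below. The element shape, ids and setting name are made up for illustration; in real markup the region would be something like `<div aria-live="polite">`.

```javascript
// Rough sketch of mirroring caption text into an aria-live region,
// gated on a user setting so screen reader users who find the
// announcements noisy can switch them off. Names are illustrative.
function announceCue(region, cueText, settings) {
  if (!settings.announceCaptions) return false;
  // Writing into a polite live region lets the screen reader speak
  // the cue when it is not busy with other output.
  region.textContent = cueText;
  return true;
}

const region = { textContent: "" }; // stand-in for the DOM element
announceCue(region, "Hello, world.", { announceCaptions: true });
console.log(region.textContent); // "Hello, world."
```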
Matthew: It relates to live regions, which are areas of the page that the browser monitors for changes and tells the AT, like an audio status bar
<ohmata> Regarding the Japanese Character set, I am preparing feedback, so I will communicate with Atsushi again and provide an official response.
Matthew: The pain points encountered with live regions relate to web app authors, who want to provide a cue to a screenreader user that something has happened but don't want it to be visible in the page
… ARIANotify works for things that are visible. Good idea to make things available to all users in all modalities
… Can't assume screenreader means blind user
… ARIA live regions are good for cases where the content is visible and not going away
… This is an alternative approach, imperative not declarative. Call a function with a message, and priority. Call it on an element, which has implications for filtering stuff out potentially
… The TAG design review thread I posted links to the explainer with examples.
… No privacy concern
… You might have captions displayed visually, but it could be too noisy. So instead have an app setting to enable calls to the API. I hadn't considered it, and ARIA may not have considered it. But it seems a reasonable thing to do
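As described above, ariaNotify is imperative: a method called on an element with a message and a priority. A feature-detecting sketch, gated on the kind of app setting Matthew mentions, might look as follows; the API is still a proposal, so the method name and the `priority` option follow the explainer as discussed here and may change.

```javascript
// Sketch of sending caption text via the proposed ariaNotify API,
// falling back silently where the API is unavailable. The "priority"
// option name is taken from the explainer and may change.
function notifyCaption(el, text, settings) {
  if (!settings.announceCaptions) return "disabled";
  if (typeof el.ariaNotify !== "function") return "unsupported";
  el.ariaNotify(text, { priority: "normal" });
  return "notified";
}

// Stand-in element recording calls, for illustration outside a browser:
const calls = [];
const el = { ariaNotify: (msg, opts) => calls.push(msg) };
notifyCaption(el, "Lyrics: la la la", { announceCaptions: true });
```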
Nigel: An additional use case. BBC has online videos with ambient sound and no dialog, and the video image contains narrative text, e.g., a silent film
… It's not accessible! An alternative might be to provide something like a text track, but where the intent isn't to display a visual, but use ARIANotify to send to the screenreader
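A non-displaying text track of the kind Nigel suggests would still need cue timing: on each playback time update, the player finds the cue active at the current time and routes its text to AT rather than rendering it. A minimal sketch, with an illustrative cue shape:

```javascript
// Illustrative: given cues with start/end times in seconds, return
// the cue active at time t, or null. A player could hand the result
// to a screen reader instead of drawing it on screen.
function activeCue(cues, t) {
  return cues.find((c) => t >= c.start && t < c.end) || null;
}

const cues = [
  { start: 0, end: 4, text: "A title card: 'The Storm'" },
  { start: 10, end: 14, text: "On-screen caption: 'Three days later'" },
];
activeCue(cues, 11).text; // "On-screen caption: 'Three days later'"
activeCue(cues, 5);       // null, no cue active at that time
```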
Fazio: I have reading impairment. Many movies have text on the screen, you can't read
… What about people who don't use screenreaders? Something embedded in the HTML media player, as it's not just people who can't see completely, also people with cognitive disabilities
Mike: In a music video where the lyrics appear on the screen, you still get captions. They're fully redundant
… But you don't get it the other way round. There's not redundancy with text.
Gary: It sounds like for the BBC use case that ARIANotify might be better. I wonder if we need a way to expose the subtitle/caption or text description in another way such as a Braille reader or similar
<nigel> +1 to Gary, would be v useful to have a consistent way to flag this to user agents
Gary: Could be helpful as a way to enable captions to be consumed ... would need to be at the browser or OS level
Matthew: I struggle with videos with burned-in text. But the timing issue that David mentioned is annoying, as you have to rewind repeatedly
<matatk> Web Platform Gaps (needs your input!): https://
Matthew: There's a repo owned by the TAG to identify gaps in the web platform. We want problems there, not solutions. But please add to it!
Subtitle format support in browsers
Nigel: Lots of issues involved. Proposal from Apple to support a new FCC requirement
… Beneficial to have native support for IMSC as well as WebVTT. Causing stress and tension in this world. Lots of work to be done, and questions
… Privacy, user data, rendering accuracy, and also what information you expose to AT and how the user controls it
… No time to discuss today. But this will continue. Confusing that W3C has more than one caption standard. One that's a Rec and one that isn't. Confusing that WHATWG refers to the one that isn't
… Let's keep the conversation going, to get the most accessible formats supported
Gary: No more caption formats!
Nigel: Just the ones we have. We started talking about broadcasting standards and formats. They say you can use IMSC, but the Web is different
Fazio: I appreciate what YouTube has done to democratise creating captions. It's been a game changer
[adjourned]