W3C

– DRAFT –
Media and Entertainment IG

11 July 2023

Attendees

Present
Chris_Lorenzo, Chris_Needham, Francois_Daoust, Hisayuki_Ohmata, Janina_Sajka, Jason_White, jasonjgw, John_Riviello, Kaz_Ashimura, Kinji_Matsumura, Louay_Bassbouss, Matthew_Atkinson, Nigel_Megitt, Ryo_Yasuoka, Tatsuya_Igarashi, Xabier_Rodríguez_Calvar
Regrets
-
Chair
Chris_Lorenzo, Chris_Needham, Tatsuya_Igarashi
Scribe
cpn, kaz

Meeting minutes

Joint discussion with APA WG

Introduction

cpn: Chris Needham from BBC, one of the MEIG co-Chairs

cl: Chris Lorenzo from Comcast, also MEIG co-Chair

ti: Tatsuya Igarashi from Sony Group, another MEIG co-Chair


(Janina and Matthew are the APA co-Chairs)

Janina: We came up with the MAUR, it was a success, we made sure it was supported, there's room for improvement in UA support, and it started us on other directions in the Research Questions TF in APA

Matthew: I'm the other co-chair of APA, at TPGI, I also develop browser extensions, based in London

Jason: I'm facilitator of the RQ TF. The TF is designed to develop documents that set out requirements to enable future and existing web technologies to be more accessible to people with disabilities
… It's also a linking mechanism between review and development of W3C specs and the research community
… It has a function to inform the APA WG and other groups of research and important proposals and ideas for accessibility for emerging technologies
… I facilitate together with Scott Hollier

Nigel: I chair TTWG, where we plan to have a joint meeting at TPAC

Kaz: I'm team contact for this group, and WoT

Ohmata: I'm from NHK, public broadcaster, working on broadcast/broadband solutions. Accessibility is very important

Yasuoka: I'm also from NHK

Matsumura: I'm in R&D at NHK

John_Riviello: I'm at Comcast

Francois: I'm W3C team, for media

Louay: Fraunhofer FOKUS

Media Accessibility User Requirements

<matatk> https://www.w3.org/TR/media-accessibility-reqs/

Janina: Some people in this call were involved in the details. It covers things like captions, described video, and inter-linear synchronisation
… People thought it was interesting, but it's not implemented in specifications. Example: Bible translations, Hebrew and Greek
… One thing we'd like to look at is whether we can do more with synchronising images. Can we show a Beethoven score as you watch an orchestra?
… That's an edge case. One area to explore
… One more, more achievable, is semantically labelling segments of a large format video. If you have a play, you have a time offset and a text string
… An accessibility need is supporting people not on the web, such as Bliss symbol users; they're not in web content as they don't have a Rosetta stone
… We now have one in FPWD, authoring in the symbol set they recognise. A spec in CR requires that we can add attributes to a chapter mark
… If we can do that, why can't we nest them, so we can do Act 1 Scene 1, Scene 2 in a play
… The symbol languages aren't new. They were successful before the web. They're not in the web because people know *a* symbol language but not *your* language
… So it needs a Rosetta stone, a spec at W3C

Matthew: Other things we want to touch on are embedded media, MusicXML, also multiple tracks of sign language

Jason: The MAUR sets out the user needs and requirements in relation to media in general. It's a relatively comprehensive document, to influence the development of specs and implementations
… The purpose of the current effort is to revise the document to bring it up to date with current needs wrt media
… That includes requirements gathering, to be as thorough as possible, to ensure new requirements are identified and documented

Janina: So we'll write up the requirements, look at implications on authoring, UA support, implementation support. We expect the answer is some yes, some not yet

Jason: That's separate to the requirements document, we'd expect progressive implementation as the requirements move to specification

Media Accessibility User Requirements (2015 Note)

Nigel: You mentioned interlinear synchronisation, can you explain?

Janina: biblegateway.com lets you bring up any portion, it has English and on the line below, the original Greek or Hebrew - which is interesting as you have LTR and RTL
… You might also do this for Beowulf or Chaucer, other ways it can be useful. Is it media or is it EPUB?

Nigel: Is it related to audio or video media?

Janina: Not sure how they're doing it. In EPUB 3.3 we use SMIL with audio and video tracks. You can pick the synchronisation level, paragraph or word level
… If you can do at the word level, you can highlight the word on screen as the text is read, by a TTS engine
… People otherwise wouldn't understand the content. We can do a better job in the 2.0 update, where it's critical
… Synchronisation, learning resources, and it's coming to general populations
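
The word-level highlighting Janina describes can be sketched in plain browser terms. This is a TypeScript illustration of the idea only, not EPUB's SMIL media overlay mechanism itself; the element IDs and timing data are hypothetical:

    // Highlight each word of an on-screen transcript as narration plays, driven
    // by the media element's own clock. The #narration element, the span ids and
    // the `timings` array are illustrative assumptions, not part of any spec.
    interface WordTiming { id: string; start: number; end: number; } // seconds

    const audio = document.querySelector<HTMLAudioElement>('#narration')!;
    const timings: WordTiming[] = [
      { id: 'w1', start: 0.0, end: 0.4 },
      { id: 'w2', start: 0.4, end: 0.9 },
    ];

    audio.addEventListener('timeupdate', () => {
      const t = audio.currentTime;
      for (const w of timings) {
        document.getElementById(w.id)?.classList.toggle('highlight', t >= w.start && t < w.end);
      }
    });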

Jason: Another good use case is an operatic performance where the music and video might need to be synchronised at some level of granularity, so the user can follow along appropriately
… There are use cases that involve A/V tracks, others are purely textual, but the two are related. As soon as you have parallel translations, multiple languages, translation to A/V tracks

Janina: An app in Broadway theatres will sync the translation to your preferred language. www.galapro.com

Nigel: Trying to understand the point of differentiation with captions, timed transcripts
… If you have text associated with times in the media, it sounds like captions

Janina: It's similar, yes

Nigel: If you have another synchronisation source, that's different

Janina: The UA isn't the synchronisation source. Benetech and EPUB is the usual timed text case, which we've had forever

Nigel: Because the UA is playing the media it knows the timing source, different to the theatre case where you're synchronising to something outside the UA
… For the EPUB case, it seems that it's the text driving sync with the media, rather than the other way around

Jason: Adding to that, there's potentially multiple levels of hierarchical navigation, in the media and the text document
… It describes the problem space; captions describe part of it, but not all
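
As a concrete illustration of why current chapter navigation feels flat rather than hierarchical: the TextTrack API exposes chapters only as a linear list of timed cues. A minimal sketch, assuming a <video id="player"> element:

    // Chapters via the TextTrack API are a flat cue list; "Act 1, Scene 2" style
    // nesting has to be encoded in the cue text rather than as real hierarchy.
    const video = document.querySelector<HTMLVideoElement>('#player')!;
    const chapters = video.addTextTrack('chapters', 'Chapters', 'en');
    chapters.mode = 'hidden';

    chapters.addCue(new VTTCue(0, 600, 'Act 1, Scene 1'));
    chapters.addCue(new VTTCue(600, 1200, 'Act 1, Scene 2'));

    // Seeking the media element keeps captions and other tracks in sync, because
    // the playing UA is the timing source (Nigel's point above).
    function goToChapter(index: number): void {
      const cue = chapters.cues?.[index] as VTTCue | undefined;
      if (cue) video.currentTime = cue.startTime;
    }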

Janina: I think there's a lot of room for semantic navigation improvement. I thought we'd be further along

Jason: One task is for the MAUR to reflect these use cases, that process is under way
… Another is to connect that to future work on specs

Kaz: Thank you for your points, Janina and Matthew. I've been working on multimodal interaction for a while. This discussion relates to multimodality for today's web
… We should think about geolocation and the situation of the user in the moment, as well as their ordinary setting. Important for services in the future, smart cities and metaverse
… How to let people access these services based on preferred modality and method. We should think about that wider scope too?

Janina: Sounds good to me.

Kaz: I'd also like to support this from an IoT and smart city point of view

Nigel: In the bigger picture with MAUR, there's terminology and how it interacts with horizontal review. Are there any open issues that need to be resolved on that?
… I just tried to do the FAST self questionnaire; some of the terms seemed like jargon and I wasn't sure what they meant. Do you have a plan to update it?

Janina: We are standing up a FAST TF to improve the questionnaire, and do a better job gathering the functional requirements to describe accessibility concerns
… We're looking for times to start work. It was Michael Cooper's project. FAST will continue, we know we need to work on it

Janina: On terminology, no issues specifically, but we have had some discussion on building a WAI-wide glossary. We'd expect the FAST and MAUR work to help us do better

Matthew: Thank you Nigel for filing issues on the FAST repository, we'll be addressing those in the TF
… If there's terminology you found esoteric, please let us know. Your feedback is appreciated!

Nigel: The area I'm conscious of is where there are existing techniques in use, such as "dubbing" and "AD", that could be described in a more theoretical way, different modalities, alternate presentations
… Those are more abstract, difficult to relate back to what people are doing

Janina: We have this with MAUR 1.0, how different devices might help, a section on second screen, but it could be a Bluetooth headset, it works

Matthew: I made a note of that example, we could solve by adding more examples

Jason: For the TF, have we gathered all the requirements coming from specs in development at W3C?
… The connection to AR/VR, there's a separate doc on that. W3C is engaged in ongoing collaboration on XR. So a question is: are there additional sources of requirements that might need to be considered?

Janina: We're eager to hear from you, what's on your horizon that we should be thinking about

ChrisN: This reminds me, I was looking recently at an open review issue on Picture in Picture API in Media WG, on accessibility of controls in the PiP window

Janina: Multiple tracks of sign language, we have deaf participation in our meetings facilitated by ASL, so arguably multiple PiP tracks; they'll have use cases in education for people who are deaf
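
For reference, the Picture-in-Picture API surface under review is small. A minimal TypeScript sketch, assuming a second video element (#signer) carrying a sign-language track; the PiP window's own controls are rendered by the UA, which is where the accessibility question sits:

    // Pop a sign-language video into a floating Picture-in-Picture window. The
    // #signer and #pip-signer elements are illustrative; the request must come
    // from a user gesture such as this click.
    const signer = document.querySelector<HTMLVideoElement>('#signer')!;
    const button = document.querySelector<HTMLButtonElement>('#pip-signer')!;

    button.addEventListener('click', async () => {
      if (!document.pictureInPictureEnabled) return;
      if (document.pictureInPictureElement) {
        await document.exitPictureInPicture();   // toggle back out
      } else {
        await signer.requestPictureInPicture();  // UA renders the window and its controls
      }
    });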

ChrisN: Also looking at RTC?

Janina: Yes, we have a separate document on that. We do see RTC slowly getting better for accessibility. We're generally happy with Zoom, but some things can still happen. One thing they took on board: you can pin the signer next to the video of the person they are interpreting
… Keeping those side by side is important

<matatk> *AURs:

<matatk> Media: https://www.w3.org/TR/media-accessibility-reqs/

<matatk> XR: https://www.w3.org/TR/xaur/

<matatk> RTC: https://www.w3.org/TR/raur/

<matatk> Collaboration tools: https://www.w3.org/TR/ctaur/

<matatk> Also we have Accessibility of Remote Meetings: https://www.w3.org/TR/remote-meetings/

Jason: Another area in the TF, not involving media yet, is collaborative editing environments. So if anyone is working on that for media, it's an area we could look at in future

ChrisL: As we build TV UI in Lightning using canvas, which bypasses ARIA tags in HTML, are there any APIs available to hook into ARIA capabilities for non-HTML languages?

Janina: Worth capturing for our meeting with ARIA. There was a need for canvas identified. At the time we had canvas support we were reasonably happy with
… It's a good time to be looking at things like this. ARIA just finished 1.2, now a Recommendation. They're looking at what's next, so it's an excellent time to consider how we support canvas and other use cases
… We want to be talking to each other, not working in silos. OpenUI is looking at similar things to what we've discussed today
… EPUB 3.3 is in maintenance mode, now looking ahead

Jason: There was work on associating regions of a 2D canvas with ARIA roles, but it hadn't been worked out for 3D. That was a while ago, so there may be new developments

Matthew: Discussed at the W3C Web Games workshop. I'm also thinking about AOM, which could be the vehicle
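
One established technique for the canvas question, offered here as a sketch rather than anything Lightning does today, is to keep an accessible fallback sub-DOM inside the <canvas> element and use the 2D context's drawFocusIfNeeded() for focus indication:

    // Children of <canvas> are not painted, but they are exposed to assistive
    // technology, so ARIA roles and keyboard focus can live there while the
    // visuals are drawn separately. Element ids and coordinates are illustrative.
    const canvas = document.querySelector<HTMLCanvasElement>('#tv-ui')!;
    canvas.innerHTML = `
      <div role="menu" aria-label="Main menu">
        <button role="menuitem" id="play">Play</button>
        <button role="menuitem" id="settings">Settings</button>
      </div>`;

    canvas.addEventListener('focusin', () => {
      const ctx = canvas.getContext('2d')!;
      ctx.beginPath();
      ctx.rect(20, 20, 200, 48);  // placeholder region drawn for the focused control
      ctx.drawFocusIfNeeded(document.activeElement as Element);  // draws a focus ring if that element has focus
    });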

Kaz: Which direction is expected from APA? Extend the existing requirements document, or collect use cases for a new requirements document?

Janina: My top thing would be updating the existing linear chapter navigation in media. It doesn't support what we need. We will have a show and tell ready by TPAC to explain why it's important
… Big users of YouTube, can't use text, so they need to be able to navigate how they expect. Also, what else? Markup support for chapters, to reference the id in the Rosetta stone
… That should be low-hanging fruit. We'll also talk with EOWG; this brings in users with accessibility needs not supported by the web in the past. It's a big number of people

Kaz: Personally interested in a11y for web payments and verifiable credentials too, but we should start with the priority topics you mentioned.

TPAC 2023

<cpn> https://docs.google.com/presentation/d/1nSZ6BTmdUee7_kq4UFrCZeB6-rtSXg_MheJb_hu9rk0/edit

cpn: TPAC schedule on the slide
… MEIG on Monday
… Media WG also Monday (afternoon)
… agenda for MEIG includes
… inviting John and Louay on CTA WAVE updates
… then TV application development
… joint meeting with TTWG
… and priorities for the next year
… potential topics include media provenance and next-generation audio
… The agenda is still in development, things may move around. Please get in touch if there are items you want to cover. We also have the option to propose breakouts on the Wednesday

janina: thank you for hosting the discussion today

cpn: thank YOU
… we'll keep open on the collaboration

mat/jason: thanks!

[adjourned]

Minutes manually created (not a transcript), formatted by scribe.perl version 210 (Wed Jan 11 19:21:32 2023 UTC).