W3C

– DRAFT –
MEIG monthly meeting

14 October 2025

Attendees

Present
Alicia_Boya_Garcia, Atsushi_Shimono, Bernd_Czelhan, Chris_Needham, Harald_Fuchs, John_Riviello, Kensaku_Komatsu, Matt_Paradis, Nigel_Megitt, Rob_Smith, Roy_Ruoxi_Ran, Tatsuya_Igarashi, Wolfgang_Schildbach, Zhe_Wang
Regrets
-
Chair
Chris_Needham, Tatsuya_Igarashi
Scribe
cpn, nigel

Meeting minutes

<ken> There are 2 links, but please join via Zoom.

Introduction

Chris: Brief update on rechartering, TPAC agenda planning, Next Generation Audio

Rechartering

Roy: Francois is responsible for the rechartering. I'm the new Team Contact for the group.

Chris: Thanks, it's great to have you here, we appreciate your support.
… There are some changes coming, which we'll cover in more detail at TPAC.
… There was a suggestion to refresh the list of groups we coordinate with.
… We have quite a long list.

https://w3c.github.io/media-and-entertainment/charters/charter-2025.html

Chris: Some of the organisations we list in the Charter may not be relevant any more.
… Maybe we can look at the details offline.
… If there are examples of organisations that are no longer active or are doing related work,
… then I'm happy to remove orgs that are no longer relevant
… or add new orgs, like we did with SVTA.
… [shows list]
… Long list of W3C groups, comprehensive and up to date.
… The external groups need some review.
… We haven't had a huge amount of contact with some of them.
… For example I can't recall much direct contact with ANSI.
… OASIS, too.
… I suspect we will request a short extension to work through this, then
… it can go to AC for their review. There will be some personnel changes,
… like Roy coming in, in place of Kaz.
… I will use our TPAC meeting to announce the changes.
… Any thoughts or comments?

Roy: I checked the status. We have already done horizontal review,
… the next step is to go through the W3C TILT group.
… I will talk with Francois who will take responsibility for pushing this forward.
… Also, apologies from me, I have a clashing meeting at the same time, so I will try
… to adjust my schedule for the future.
… Please email me if there are any actions for me from this meeting.

Chris: Thank you, if you need to go that's perfectly okay.
… That's a question for all of us, whether we want to think about a different slot.
… Not only for you Roy, but for everyone in the group, if the time is bad for other reasons
… we can consider alternatives.

TPAC 2025 plans

w3c/media-and-entertainment#110

Chris: The plans are developing at the above GitHub issue, #110.
… We have 4 sessions.
… 1st session 90 minutes, intro and codec-related issues.
… I've been talking to people from VNova about low complexity codecs and challenges in
… browsers, and how web standards can help usage of LCEVC.
… 2nd session, more flexibility: if some of the previous topics overrun we can continue over the break.
… I have been speaking with Ken, about sync on the web,
… and NHK on media content metadata,
… and Louay on EME testing and CTA Wave.
… We have a joint meeting with TTWG - Nigel I'm happy to discuss that with you.
… We also have a joint meeting to organise over the next week or two
… with APA and TTWG.
… Any thoughts or suggestions regarding TPAC?

Wolfgang: The MediaWG just had a successful recharter.
… They are working on connecting WebCodecs and MSE.
… Is there something for us to do as a consequence?

Chris: Good question. I don't know what the WG's priority is for working on that.
… It's useful for us to show the industry interest to help with prioritisation decisions.
… Would you like us to include this at TPAC? We could make some time for it.

Wolfgang: Maybe, let's tentatively schedule something, I'm not sure if there's time.

Chris: The WG is open to observers, which could be a good place to join if you're interested in that feature.

Wolfgang: Good idea.

Ken: My topics will be in the 11-12 slot?

Chris: Yes if that's okay.

Ken: For the details, we need to plan the topics?

Chris: Yes, you already sent me the details; I need to add the information to the issue.
… Sorry I didn't have time to do it yet.
… Yes, I can confirm we will do that and I will update this accordingly.
… For your proposal.
… Is this timeslot okay for you?

Ken: Yes it's okay for me.

Chris: That's the plan, a few details to finalise but otherwise it's pretty much organised.

Next Generation Audio

Chris: Last year's TPAC was the last time we talked about this here.
… We discussed a gap analysis to see what you can do with existing web APIs and the potential
… integration points with WebCodecs.
… Wolfgang, could you give us an update?

Wolfgang: You should have received a copy of the analysis by email.
… I have some slides to share.
… [NGA Personalization Gap Analysis slide]
… The task was to explain what use cases can be covered by existing APIs.
… [Core challenge]
… NGA can deliver audio and metadata together in a single stream.
… The web platform doesn't have a standardized way to do that.
… [Key use cases]
… Dialogue enhancement, language selection, spatial positioning, personalised mixing
… [Why not delivery of separate components]
… It's been suggested that we could deliver the several components to the web platform and have them
… mixed in Web Audio: several WebCodecs decoders, with WebAudio doing the mixing.
… Why not?
… A few things:
… 1. Delivering multiple components is trickier: more assets to track, failure points, sync, buffer management.
… 2. The industry ecosystem is built around single-stream delivery (MPEG-DASH + MSE, HLS),
… and this approach is not single-stream, it's multi-stream.
… [Can WebCodecs + WebAudio be used?]
… The challenge is that this mixing must honour metadata that's been authored and set by the content creator.
… The mixing should allow only limited control by the user: some interaction, but not completely free.
… So you need the concept of metadata that travels alongside the audio and is forwarded by WebCodecs to WebAudio.
… Also no concept of time aligning that metadata.
… No way to extract the metadata from the bitstream.

Chris: Question: When I think about WebAudio, and NGA...
… NGA is one bitstream with potentially multiple output audio streams.
… In a WebAudio context would there need to be a mapping between the decoder and some
… input node to represent the separate audio streams so they can be treated individually?

Wolfgang: If I understand correctly, that's not the challenge that I'm addressing here.
… My mental image is that we have not one NGA stream with components inside, but n NGA streams,
… each with a different WebCodecs decoder. Maybe I'm misunderstanding your question.

Chris: I'm thinking of a simple case, stereo audio, 2 channels go into a codec, get decoded,
… produce an L/R pair that can flow through WebAudio processing.
… NGA is much more complex than a simple stereo delivery because there are multiple components.
… Therefore when you do the decode can you take each component and put them through a
… WebAudio pipeline graph as individual separate things, so that in WebAudio you could have
… some control over each of those components to adjust its gain, position etc. within the WebAudio API?

Wolfgang: OK, that assumes using NGA as it is used today, one stream with several components,
… and the decoder has several outputs, one per component. That's conceivable but doesn't
… exist in WebCodecs today. In theory it is possible. There would also have to be a metadata
… output that says what the streams are and how they get rendered into the end experience.
… It could probably be done but it doesn't exist today.

Chris: That's why I ask this. Does it need to exist? Or is it not an interesting use case?
… Ultimately we need to figure out what the API surface needs to include.
… In the past we've talked about providing an API more closely tied to the MSE spec than
… WebCodecs or WebAudio are.

Bernd: In my opinion we should focus on the WebCodecs approach.
… A big benefit of NGA is the content creator can constrain the personalisation options.

Wolfgang: I think the point is API changes are needed.
… I don't want to preclude specific options.
… Or say it has to be MSE, though I think it's the best candidate.
… We may find there are other options.
… [WebAudio Limitations]
… It's not designed for immersive audio experiences.
… If you have 8 channels you don't know if it's 7.1 in the plane or 5.1.2 with height channels,
… or something different.
… WebAudio would have to be extended to deal with these channel configurations
… If we talk about mixing not just channels then we're into a new universe, where the
… position etc can change in realtime from metadata.
… WebAudio is not set up to deal with DRM-protected media, which is the way that the industry
… delivers media. The MediaWG is looking to connect WebCodecs to MSE.
… If that happens then WebAudio is still not part of the picture.
… It would be a big lift to enable WebAudio to do what we want, I think.
… [Performance Advantages of Unified Decoding]
… We talked about decoding one stream to several outputs in one WebCodecs decoder, vs having
… multiple decoders. There may be a performance benefit, but that might change in the future.
… If you do the decoding and rendering and mixing all in one component, you don't have to
… expose the metadata and dynamic positioning issues I mentioned.

Nigel: If you create the rendering in WebCodecs, you need to feed metadata into WebCodecs to describe the listening environment, e.g., speaker outputs and positioning?

Wolfgang: It has to be aware of the audio context, devices and channels, so it would need to be informed

Nigel: So we should think about what are the best flows of data, to meet the widest range of use cases. Where does it make sense to have separated processing components?
… There might be security, privacy, or accessibility concerns that could be easier to address in one architecture over another

Wolfgang: Yes, I can work that into the architecture

Wolfgang: (Going back to the slides) Something will have to change in Web Audio and WebCodecs. With the existing APIs, many of the use cases can't be fulfilled
… We have a strawman API, from the presentation at last year's TPAC. We think it can be made to work with MSE. It'll enable the personalisation options, respecting content creator constraints
… Creators having control over the processing and mixing is important
… I'd like to get to a point where this group understands and sees the need for the use cases, agrees that they're not solved by today's APIs, and that some work is needed
… Want to progress this topic

Ken: Thank you for presenting. Multi-channel and time-aligned metadata are complex asks. Your idea makes sense, I think
… I'm working on low latency cases; for example, I'm trying to handle time-aligned data with lighting devices, DMX data, along with video and audio data
… To synchronise the lighting data, I need time accuracy of 10-30ms
… Your proposal might be a good approach. In such cases, to handle metadata, a content-based approach wouldn't work. What latency is required in the use cases? The DataCue approach would be better
… It's use case dependent. We need to think about a use-case based approach

Wolfgang: IIUC, synchronisation of metadata is context-dependent, and when you have different data paths for multiple objects, different paths having different latencies is difficult
… I agree, these are difficult issues. They have been solved in scenarios where the decoder, mixer, and renderer are in a single component, a more controlled environment
… We've done it. But if we distribute it over several Web Audio or WebCodecs components, that would have to be tackled again, which is difficult

Ken: My approach can't handle DRM ;-)

cpn: Sorry we're running short on time.
… What you've presented is a good basis for the TPAC discussion.
… Illustrating a couple of those possible architectures, block diagrams, would help.
… To show what these different possibilities might look like.
… Perhaps considering that it may not be an either/or choice.
… One approach might not preclude a future development in WebAudio or WebCodecs.

Wolfgang: You're right we could have both.

Chris: Some illustrations could be valuable to add, maybe not essential.

Nigel: I think that would be helpful, but also, to MEIG's scope, what are the requirements, e.g., DRM?
… You could imagine constraints on the output devices that Web Audio can connect to. So you can construct a chain where it goes through Web Audio, but can't be connected to a recorder, for example

Wolfgang: The presentation last year included the requirements

Chris: Some of the feedback in last year's meeting was asking about the gap analysis.
… From my point of view that is part of what we need to come back with in the next stage.
… I think we have the requirements already.
… Pulling it together would help.

Wolfgang: OK, one document that puts the requirements side by side with what the existing
… solutions could deliver, and highlighting the gaps.

Chris: Yes that could work. Or prefacing the existing slides with the motivating use cases and high level requirements.
… I think we have it all written down, but in different places.
… Thank you, I hope we can make some good progress on this next month when we meet.

Chris: Any more thoughts on this?

nothing more

Meeting close

Chris: Thank you all, look forward to seeing those of you who are travelling next month
… at Kobe.
… [adjourns]

Minutes manually created (not a transcript), formatted by scribe.perl version 246 (Wed Oct 1 15:02:24 2025 UTC).

Diagnostics


Maybe present: Bernd, Chris, cpn, Ken, Nigel, Roy, Wolfgang

All speakers: Bernd, Chris, cpn, Ken, Nigel, Roy, Wolfgang

Active on IRC: cpn, ken, nigel, Roy_Ruoxi