14:13:43 RRSAgent has joined #me
14:13:48 logging to https://www.w3.org/2025/10/14-me-irc
14:13:48 Zakim has joined #me
14:13:55 There are 2 links, but please join via Zoom.
14:13:59 Meeting: MEIG monthly meeting
14:16:13 Topic: Introduction
14:16:40 scribe+ cpn
14:16:40 Chris: Brief update on rechartering, TPAC agenda planning, Next Generation Audio
14:16:45 Topic: Rechartering
14:18:00 Roy: Francois is responsible for the rechartering. I'm the new Team Contact for the group
14:18:12 scribe+ nigel
14:18:28 Chris: Thanks, it's great to have you here, we appreciate your support.
14:18:41 .. There are some changes coming, which we'll cover in more detail at TPAC.
14:18:52 .. There was a suggestion to refresh the list of groups we coordinate with.
14:18:56 .. We have quite a long list.
14:19:03 https://w3c.github.io/media-and-entertainment/charters/charter-2025.html
14:19:11 .. Some of the organisations we list in the Charter may not be relevant any more.
14:19:19 .. Maybe we can look at the details offline.
14:19:35 .. If there are examples of organisations that are no longer active or are doing related work,
14:19:43 .. then I'm happy to remove orgs that are no longer relevant
14:20:02 .. or add new orgs, like we did with SVTA.
14:20:21 .. [shows list]
14:20:31 .. Long list of W3C groups, comprehensive and up to date.
14:20:40 .. The external groups need some review.
14:20:49 .. We haven't had a huge amount of contact with some of them.
14:21:07 .. For example, I can't recall much direct contact with ANSI.
14:21:42 .. OASIS, too.
14:21:54 .. I suspect we will request a short extension to work through this, then
14:22:03 .. it can go to the AC for their review. There will be some personnel changes,
14:22:09 .. like Roy coming in, in place of Kaz.
14:22:22 .. I will use our TPAC meeting to announce the changes.
14:22:28 q+
14:22:30 .. Any thoughts or comments?
14:22:37 ack Roy
14:22:51 Roy: I checked the status. We have already done horizontal review,
14:23:07 .. the next step is to go through the W3C TILT group.
14:23:20 .. I will talk with Francois, who will take responsibility for pushing this forward.
14:23:31 .. Also, apologies from me: I have a clashing meeting at the same time, so I will try
14:23:39 .. to adjust my schedule in future.
14:23:52 .. Please email me if there are any actions for me from this meeting.
14:24:01 Chris: Thank you, if you need to go that's perfectly okay.
14:24:41 .. That's a question for all of us, whether we want to think about a different slot.
14:25:05 .. Not only for you, Roy, but for everyone in the group: if the time is bad for other reasons
14:25:09 .. we can consider alternatives.
14:25:17 Topic: TPAC 2025 plans
14:25:28 https://github.com/w3c/media-and-entertainment/issues/110
14:25:42 Chris: The plans are developing in the above GitHub issue, #110.
14:25:47 .. We have 4 sessions.
14:26:08 .. 1st session, 90 minutes, intro and codec-related issues.
14:26:40 .. I've been talking to people from V-Nova about low complexity codecs, the challenges in
14:26:49 .. browsers, and how web standards can help usage of LC-EVC.
14:27:05 .. 2nd session, more flexibility: if some of the previous topics overrun we can continue over the break.
14:27:12 wschildbach has joined #me
14:27:40 .. I have been speaking with Ken about sync on the web,
14:27:51 .. NHK about media content metadata,
14:28:13 .. and Louay about EME testing and CTA WAVE.
14:28:29 .. We have a joint meeting with TTWG - Nigel, I'm happy to discuss that with you.
14:28:37 .. We also have a joint meeting to organise over the next week or two
14:28:42 .. with APA and TTWG.
14:28:48 .. Any thoughts or suggestions regarding TPAC?
14:28:51 q+
14:29:02 Wolfgang: The MediaWG just had a successful recharter.
14:29:14 .. They are working on connecting WebCodecs and MSE.
14:29:22 .. Is there something for us to do as a consequence?
14:29:36 Chris: Good question. I don't know what the WG's priority is for working on that.
14:29:51 .. It's useful for us to show the industry interest, to help with prioritisation decisions.
14:30:07 .. Would you like us to include this at TPAC? We could make some time for it.
14:30:24 Wolfgang: Maybe, let's tentatively schedule something, I'm not sure if there's time.
14:30:40 Chris: The WG is open to observers, which could be a good place to join if you're interested in that feature.
14:31:07 Wolfgang: Good idea.
14:31:10 ack ken
14:31:32 Ken: My topics will be in the 11-12 slot?
14:31:36 Chris: Yes, if that's okay.
14:31:50 Ken: For the details, we need to plan the topics?
14:32:04 Chris: Yes, you already sent me the details, I need to add the information to the issue.
14:32:09 .. Sorry, I didn't have time to do it yet.
14:32:25 .. Yes, I can confirm we will do that and I will update this accordingly.
14:32:34 .. For your proposal.
14:32:42 .. Is this timeslot okay for you?
14:32:46 Ken: Yes, it's okay for me.
14:33:04 Chris: That's the plan, a few details to finalise but otherwise it's pretty much organised.
14:33:16 Topic: Next Generation Audio
14:33:28 Chris: Last TPAC was the last time we talked about this here.
14:33:47 .. We discussed a gap analysis to see what you can do with existing web APIs and the potential
14:33:53 .. integration points with WebCodecs.
14:34:01 .. Wolfgang, could you give us an update?
14:34:14 Wolfgang: You should have received a copy of the analysis by email.
14:34:18 .. I have some slides to share.
14:34:54 .. [NGA Personalization Gap Analysis slide]
14:35:11 .. The task was to explain what use cases can be covered by existing APIs.
14:35:18 .. [Core challenge]
14:35:30 .. NGA can deliver audio and metadata together in a single stream.
14:35:50 .. The web platform doesn't have a standardized way to do that.
14:35:57 .. [Key use cases]
14:36:15 .. Dialogue enhancement, language selection, spatial positioning, personalised mixing.
14:36:24 .. [Why not delivery of separate components]
14:36:38 .. It's been suggested we could deliver several components into the web platform and have them
14:36:51 .. mixed in Web Audio: several WebCodecs decoders, then use WebAudio to mix.
14:36:53 .. Why not?
14:36:57 .. A few things:
14:37:17 .. 1. Delivering multiple components is trickier: more assets to track, failure points, sync, buffer management.
14:37:37 .. The industry ecosystem is built around single-stream delivery (MPEG-DASH + MSE, HLS).
14:37:45 .. It's not single-stream, it's multi-stream.
14:37:55 .. [Can WebCodecs + WebAudio be used?]
14:38:17 present+ Chris_Needham, Tatsuya_Igarashi, Nigel_Megitt, Wolfgang_Schildbach, Atsushi_Shimono, Roy_Ruoxi_Ran, Harald_Fuchs, Matt_Paradis, Rob_Smith, Zhe_Wang, Bernd_Czelhan, John_Riviello, Kensaku_Komatsu
14:38:24 .. The challenge is that this mixing honours metadata that's been authored and set by the content creator.
14:38:31 Chair: Chris_Needham, Tatsuya_Igarashi
14:38:42 .. The mixing should allow only limited control by the user. Some interaction, not completely free.
14:39:01 .. So you need the concept of metadata that travels alongside the audio and is forwarded by WebCodecs to WebAudio.
14:39:08 .. Also, there's no concept of time-aligning that metadata.
14:39:17 .. No way to extract the metadata from the bitstream.
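[Illustrative sketch, not from the minutes: a minimal TypeScript example of the "several components, mixed in Web Audio" approach described above, assuming one WebCodecs AudioDecoder per NGA component feeding a per-component GainNode. The codec string, component names, and timestamp-based scheduling are assumptions for illustration only; it shows why per-component gain control (e.g. dialogue enhancement) is easy in this model, while cross-component sync and honouring authored metadata fall entirely on the application, as noted in the discussion.]

```ts
// One WebCodecs AudioDecoder per NGA component, mixed with Web Audio GainNodes.
// Hypothetical sketch; codec string and scheduling are illustrative assumptions.
const ctx = new AudioContext({ sampleRate: 48000 });

function makeComponentPipeline(initialGain: number) {
  const gain = new GainNode(ctx, { gain: initialGain });
  gain.connect(ctx.destination);

  const decoder = new AudioDecoder({
    output: (data: AudioData) => {
      // Copy decoded PCM into an AudioBuffer (assumes the UA can provide
      // f32-planar samples) and schedule it for playback.
      const buf = ctx.createBuffer(data.numberOfChannels, data.numberOfFrames, data.sampleRate);
      for (let ch = 0; ch < data.numberOfChannels; ch++) {
        const plane = new Float32Array(data.numberOfFrames);
        data.copyTo(plane, { planeIndex: ch, format: 'f32-planar' });
        buf.copyToChannel(plane, ch);
      }
      const src = new AudioBufferSourceNode(ctx, { buffer: buf });
      src.connect(gain);
      // Naively maps media timestamps (microseconds) onto AudioContext time;
      // keeping several such pipelines aligned is exactly the sync and buffer
      // management burden raised above, and no authored NGA metadata
      // constrains the user-controlled gains.
      src.start(data.timestamp / 1_000_000);
      data.close();
    },
    error: (e: DOMException) => console.error(e),
  });
  decoder.configure({ codec: 'mp4a.40.2', sampleRate: 48000, numberOfChannels: 2 });
  return { decoder, gain };
}

// One pipeline per component stream; "dialogue enhancement" becomes a gain tweak.
const dialogue = makeComponentPipeline(1.0);
const musicAndEffects = makeComponentPipeline(0.8);
// Feed encoded chunks per component, e.g. (chunkBytes is hypothetical):
// dialogue.decoder.decode(new EncodedAudioChunk({ type: 'key', timestamp: 0, data: chunkBytes }));
```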
14:39:41 Chris: Question: When I think about WebAudio, and NGA...
14:39:50 .. NGA is one bitstream with potentially multiple output audio streams.
14:40:06 .. In a WebAudio context, would there need to be a mapping between the decoder and some
14:40:22 .. input node to represent the separate audio streams, so they can be treated individually?
14:40:36 Wolfgang: If I understand correctly, that's not the challenge that I'm addressing here.
14:40:57 .. My mental image is that we have not one NGA stream with components inside, but n NGA streams,
14:41:10 .. each with a different WebCodecs decoder. Maybe I'm misunderstanding your question.
14:41:37 Chris: I'm thinking of a simple case: stereo audio, 2 channels go into a codec, get decoded,
14:41:52 .. and produce an L/R pair that can flow through WebAudio processing.
14:42:07 .. NGA is much more complex than a simple stereo delivery because there are multiple components.
14:42:21 .. Therefore, when you do the decode, can you take each component and put them through a
14:42:35 .. WebAudio pipeline graph as individual, separate things, so that in WebAudio you could have
14:42:52 .. some control over each of those components to adjust its gain, position etc. within the WebAudio API?
14:43:10 Wolfgang: OK, that assumes using NGA as it is used today, one stream with several components,
14:43:24 .. and the decoder has several outputs, one per component. That's conceivable but doesn't
14:43:39 .. exist in WebCodecs today. In theory it is possible. There would also have to be a metadata
14:43:51 .. output that says what the streams are and how they get rendered into the end experience.
14:43:58 .. It could probably be done, but it doesn't exist today.
14:44:11 Chris: That's why I ask this. Does it need to exist? Or is it not an interesting use case?
14:44:28 .. Ultimately we need to figure out what the API surface needs to include.
14:44:46 .. In the past we've talked about providing an API more closely tied to the MSE spec than
14:44:47 .. WebCodecs or WebAudio are.
14:45:22 present+ Alicia_Boya_Garcia
14:45:29 Bernd: In my opinion we should focus on the WebCodecs approach.
14:45:45 .. A big benefit of NGA is that the content creator can constrain the personalisation options.
14:46:01 Wolfgang: I think the point is that API changes are needed.
14:46:08 .. I don't want to preclude specific options.
14:46:19 .. Or say it has to be MSE, though I think it's the best candidate.
14:46:28 .. We may find there are other options.
14:46:35 .. [WebAudio Limitations]
14:46:46 .. Not designed for immersive audio experiences.
14:47:03 .. If you have 8 channels, you don't know if it's 7.1 in the plane or 5.1.2 with height channels,
14:47:07 .. or something different.
14:47:17 .. WebAudio would have to be extended to deal with these channel configurations.
14:47:31 .. If we talk about mixing more than just channels, then we're into a new universe, where the
14:47:41 .. position etc. can change in real time from metadata.
14:47:57 .. WebAudio is not set up to deal with DRM-protected media, which is the way that the industry
14:48:15 .. delivers media. The MediaWG is looking to connect WebCodecs to MSE.
14:48:27 .. If that happens, then WebAudio is still not part of the picture.
14:48:39 .. It would be a big lift to enable WebAudio to do what we want, I think.
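[Illustrative sketch, not from the minutes: the channel-layout gap mentioned above, in Web Audio terms. A node can carry 8 discrete channels, but channelInterpretation only distinguishes 'speakers' from 'discrete', so nothing in the graph says whether those 8 channels are 7.1 in the plane or 5.1.2 with height, and a renderer cannot apply NGA positioning metadata to them. The node and variable names are hypothetical.]

```ts
// Hypothetical 8-channel bus in Web Audio: the API carries the channels but
// has no way to declare a loudspeaker layout (e.g. height channels) for them.
const audioCtx = new AudioContext();
const eightChannelBus = new GainNode(audioCtx, {
  channelCount: 8,
  channelCountMode: 'explicit',
  channelInterpretation: 'discrete', // the only alternative is 'speakers'
});
eightChannelBus.connect(audioCtx.destination);
```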
14:48:51 .. [Performance Advantages of Unified Decoding]
14:49:08 .. We talked about decoding one stream to several outputs in one WebCodecs decoder, vs having
14:49:23 .. multiple decoders. There may be a performance benefit, but that might change in the future.
14:49:35 .. If you do the decoding and rendering and mixing all in one component, you don't have to
14:49:57 .. expose the metadata and dynamic positioning issues I mentioned.
14:50:02 scribe+ cpn
14:50:32 Nigel: If you create the rendering in WebCodecs, you need to feed metadata into WebCodecs to describe the listening environment, e.g., speaker outputs and positioning?
14:50:55 Wolfgang: It has to be aware of the audio context, devices and channels, so it would need to be informed.
14:51:20 Nigel: So we should think about what the best flows of data are, to meet the widest range of use cases. Where does it make sense to have separated processing components?
14:51:44 ... There might be security, privacy, or accessibility concerns that could be easier to address in one architecture than another.
14:52:03 Wolfgang: Yes, I can work that into the architecture.
14:52:51 Wolfgang: (Going back to the slides) Something will have to change in Web Audio and WebCodecs. With the existing APIs, many of the use cases can't be fulfilled.
14:53:32 ... We have a strawman API, in the presentation from last TPAC. We think it can be made to work with MSE. It'll enable the personalisation options, respecting content creator constraints.
14:53:45 ... Creators having control over the processing and mixing is important.
14:54:27 ... I'd like to get to a point where this group understands and sees the need for the use cases, agrees that they're not solved in today's APIs, and that some work is needed.
14:54:37 ... I want to progress this topic.
14:54:41 q?
14:55:25 Ken: Thank you for presenting. Multi-channel and time-aligned metadata are complex asks. Your idea makes sense, I think.
14:55:55 ... I'm working on low-latency cases; for example, for time-aligned data I'm trying to synchronise lighting devices, DMX data, with video and audio data.
14:56:15 ... To synchronise the lighting data, I need time accuracy of 10-30ms.
14:57:11 ... Your proposal might be a good approach. In such cases, to handle metadata, a content-based approach wouldn't work. What latency is required in the use cases? The DataCue approach would be better.
14:57:21 ... It's use case dependent. We need to think about a use case based approach.
14:58:03 Wolfgang: IIUC, synchronising metadata to content, when you have different data paths for multiple objects and the paths have different latencies, is difficult.
14:58:33 ... I agree, these are difficult issues. They have been solved in scenarios where the decoder, mixer, and renderer are in a single component, a more controlled environment.
14:58:59 ... We've done it. But if we distribute it over several Web Audio or WebCodecs components, that would have to be tackled again, which is difficult.
14:59:09 Ken: My approach can't handle DRM ;-)
14:59:29 cpn: Sorry, we're running short on time.
14:59:38 .. What you've presented is a good basis for the TPAC discussion.
14:59:57 .. Illustrating a couple of those possible architectures, with block diagrams, would help,
15:00:08 .. to show what these different possibilities might look like.
15:00:24 .. Perhaps considering that it may not be an either/or choice.
15:00:39 .. One approach might not preclude a future development in WebAudio or WebCodecs.
15:00:47 Wolfgang: You're right, we could have both.
15:00:49 q+
15:01:10 Chris: Some illustrations could be valuable to add, maybe not essential.
15:01:16 q?
15:01:44 Nigel: I think that would be helpful, but also, given MEIG's scope, what are the requirements, e.g., DRM?
15:02:23 ... You could imagine constraints on the output devices that Web Audio can connect to. So you can construct a chain where it goes through Web Audio, but can't be connected to a recorder, for example.
15:02:56 Wolfgang: The presentation last year included the requirements.
15:03:31 Chris: Some of the feedback in last year's meeting was asking about the gap analysis.
15:03:47 .. From my point of view, that is part of what we need to come back with in the next stage.
15:03:55 .. I think we have the requirements already.
15:04:00 .. Pulling it together would be a help.
15:04:15 Wolfgang: OK, one document that puts the requirements side by side with what the existing
15:04:22 .. solutions could deliver, and highlights the gaps.
15:04:36 Chris: Yes, that could work. Or prefacing the existing slides with the motivating use cases and high-level requirements.
15:04:49 .. I think we have it all written down, but in different places.
15:05:13 q?
15:05:15 .. Thank you, I hope we can make some good progress on this next month when we meet.
15:05:16 ack n
15:05:23 Chris: Any more thoughts on this?
15:05:34 nothing more
15:05:38 Topic: Meeting close
15:05:50 Chris: Thank you all, I look forward to seeing those of you who are travelling next month
15:05:53 .. in Kobe.
15:06:33 .. [adjourns]
15:07:04 rrsagent, draft minutes
15:07:05 I have made the request to generate https://www.w3.org/2025/10/14-me-minutes.html cpn
15:07:11 rrsagent, make log public
17:22:41 Zakim has left #me