00:06:14 RRSAgent has joined #me
00:06:18 logging to https://www.w3.org/2025/11/10-me-irc
00:06:18 RRSAgent, make logs Public
00:06:38 Meeting: MEIG
00:07:11 Present: Chris_Needham, Nigel_Megitt, John_Riviello
00:07:21 Agenda: https://github.com/w3c/media-and-entertainment/issues/110
00:07:34 scribe+ nigel
00:07:54 rrsagent, make minutes
00:07:55 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html nigel
00:08:28 Slides: https://docs.google.com/presentation/d/1ATqy1ynwQhEBnYg2HCfYUr5gLEnHZiEve45n0Ia2LO8/
00:08:51 present+ Francois_Daoust
00:08:59 cpn: [Welcomes everyone, shows antitrust slide]
00:09:11 Topic: Introduction
00:09:21 cpn: The mission of our group is to provide an industry forum in W3C
00:09:32 .. for discussions related to media and media features on the web in particular,
00:09:41 .. and interacting with working groups who develop the features.
00:09:59 .. As an Interest Group we do not develop specifications, but we do
00:10:04 .. work on requirements and reviews.
00:10:18 .. The scope is the application of web technology relating to media, from end to end,
00:10:25 .. [shows Introduction slide]
00:10:40 .. and related technologies such as timed text, assistive technology, etc.
00:10:48 .. We've been going since 2011.
00:10:51 .. [History of major initiatives slide]
00:11:08 .. Covered the video element in HTML5, and EME and MSE as key capabilities.
00:11:14 .. Reformed as MEIG in 2017.
00:11:27 .. Looked at the development of the media web app platform,
00:11:38 .. and the application of web technologies for media consumption and creation.
00:11:51 .. [Workflow slide]
00:11:52 .. How we operate:
00:11:54 .. we typically take input from members and contributors.
00:12:06 .. We have many relationships with other standards bodies and industry groups.
00:12:11 .. They bring us requirements.
00:12:20 .. We do gap analysis, thinking about future web platform evolution.
00:12:31 .. The outcome is either new work for existing WGs or,
00:12:40 .. in new areas, the creation of a CG if we have enough buy-in.
00:12:47 .. [Resources slide]
00:13:01 .. We have a homepage, Charter, GitHub repository, a public mailing list, and a member mailing list
00:13:12 .. if needed, e.g. for sharing liaison information that can't necessarily be made public.
00:13:28 .. One interesting resource is an overview of Media Technologies for the Web
00:13:35 .. that we published a few years ago.
00:13:50 .. A one-stop place to see the media activities that are happening.
00:14:02 .. Unfortunately it hasn't been maintained - it would be interesting to bring it up to date.
00:14:13 .. If you are interested in working on it I would be happy to work with you on that.
00:14:21 .. [Announcements slide]
00:14:25 .. We have some new co-chairs.
00:14:44 .. First, I thank Igarashi-san and Chris Lorenzo for your work as co-chairs.
00:14:56 .. You've been with us for a long time and we very much appreciate the work that you have done.
00:15:09 .. Welcome to Wolfgang and Song, who are joining us as new co-chairs.
00:15:15 .. We look forward to working with you.
00:15:21 .. We also have a new W3C Staff Contact.
00:15:32 .. Thank you to Kaz, who has been with us from the beginning and has been a key driver
00:15:42 .. of all our work here. None of it would have been possible without you; thank you for all that you have done.
00:15:58 .. Congratulations on your new position. We look forward to working with you in your Invited Expert capacity.
00:16:02 kaz: Thank you
00:16:15 cpn: Welcome to Roy as our new team contact.
00:16:43 Roy_Ruoxi: I will try my best to support this group and learn from all of you how MEIG works.
00:17:05 Wolfgang: I look forward to working with you.
00:17:42 Song: I'm Song from China Mobile; I work in the media industry and more recently in AI,
00:17:58 .. which is hot everywhere. I offer myself to help Wolfgang and Chris work with you all.
00:19:32 present+ Hiroki_Endo
00:20:00 Topic: Agenda
00:20:12 cpn: Before the break, two codec-related presentations:
00:20:38 .. from Jianhua, Fabir and Simone from V-Nova, about challenges using LCEVC.
00:20:58 .. Then we will revisit Next Generation Audio codecs with Wolfgang and Bernd.
00:21:05 .. [slide: Agenda]
00:21:18 .. [continues to iterate through agenda]
00:23:39 Topic: LCEVC decode support in browser
00:24:36 Jianhua: Today I will talk about LCEVC decode support in the browser,
00:24:43 .. and how to enable playback in the browser environment.
00:25:17 Chair: Chris, Song, wschildbach
00:25:31 .. V-Nova is a company based in London,
00:25:39 .. mainly providing video compression technology:
00:26:03 .. LCEVC, SMPTE VC-6 for contribution and production workflows,
00:28:29 .. and PresenZ, a VR format for cinematic presentation.
00:29:41 .. TV 3.0 overview:
00:29:57 .. signed into law in Brazil in August this year.
00:30:15 .. It means that VVC and MPEG-H are the mandatory codecs to implement in the next few years.
00:30:28 .. The first devices will be in June 2026, before the World Cup.
00:30:56 .. Many devices use dash.js and Shaka Player as web player solutions.
00:31:05 .. Ecosystem development:
00:31:39 .. 2023: developed and contributed to the LCEVCdec and LCEVCdecJS open source projects.
00:32:00 .. The LCEVCdec library will be integrated into media frameworks, e.g. FFmpeg and GStreamer.
00:32:22 .. LCEVCdecJS will be integrated into dash.js and Shaka Player.
00:33:04 .. When we talk about LCEVC playback in the browser, our goal is to have an efficient decoder
00:33:10 .. available on different platforms.
00:33:22 .. [How LCEVC Works overview slide]
00:33:37 scribe+ cpn
00:34:21 present+ Kaz_Ashimura
00:34:28 Jianhua: There is a base layer encoding. The LCEVC data is the difference between that and the 4K source.
00:35:10 ... After demux, there's the base layer data and the enhancement layer. The base layer is decoded in hardware, and the LCEVC is done in hardware or software.
00:35:46 ... The major benefit of this form of encoding is to save bitrate.
00:36:00 ... LCEVC can be deployed in software on existing devices.
00:36:15 ... Sometimes, on less powerful devices, there's a performance challenge.
00:36:47 ... Let's look at how the data is carried in the MP4 or DASH format.
00:36:59 ... We contributed to the ISO standard a few years ago.
00:37:08 ... The idea is to link the enhancement layer to the base layer.
00:37:45 ... The player needs to know which base layer is linked to which enhancement layer, so there's a new MP4 box, "sbas", to link them.
00:38:05 ... This is known as dual-track carriage.
00:38:53 ... There's a similar mechanism in DASH-IF IOP since v4.3. The idea is for the base layer and enhancement layer to be described in separate Representations, then use a @dependencyId attribute to link them.
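The @dependencyId linkage just described might be resolved in player logic roughly as in this sketch. The object shapes and the function name `findBaseRepresentation` are invented for illustration; only the @dependencyId semantics (an enhancement Representation naming the Representation that carries its base layer) come from the DASH-IF IOP mechanism mentioned above.

```javascript
// Hypothetical sketch: resolving a DASH @dependencyId link between an LCEVC
// enhancement Representation and its base Representation. The MPD-like data
// structure here is invented for illustration.
function findBaseRepresentation(adaptationSets, enhancementId) {
  // Flatten all Representations across AdaptationSets.
  const all = adaptationSets.flatMap((as) => as.representations);
  const enh = all.find((r) => r.id === enhancementId);
  if (!enh || !enh.dependencyId) return null; // not an enhancement layer
  // @dependencyId names the Representation carrying the base layer.
  return all.find((r) => r.id === enh.dependencyId) ?? null;
}

// Example MPD-like structure: a base AVC Representation and an LCEVC
// enhancement Representation that depends on it.
const adaptationSets = [
  { representations: [{ id: "base-720p", codecs: "avc1.42E01E" }] },
  { representations: [{ id: "enh-4k", codecs: "lvc1", dependencyId: "base-720p" }] },
];
```

A player would use such a lookup when deciding which pair of segment streams to download together for the bitrate-saving dual-Representation case.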
00:39:12 ... We implemented support in dash.js and Shaka Player.
00:39:45 ... Produce a low resolution base layer, and the delta, without producing the high resolution.
00:40:08 ... [Diagram of the dash.js solution]
00:40:44 ... On the left, the player handles the base video stream and the LCEVC enhancement stream.
00:40:56 ... After downloading the data, it's sent to MSE to demux and decode.
00:41:54 ... We contributed the lower part to dash.js. We created a module, a software library, to decode the LCEVC bitstream.
00:42:49 ... An external MSE module. LCEVC bitstream decoding in WASM. Upsample the base video. From the HTML video tag, after decoding by MSE,
00:43:01 ... the data is signalled to the LCEVC module and upsampled to 4K.
00:44:00 ... The base layer is decoded by browser MSE. LCEVC is decoded in software. Then construct a final 4K video stream.
00:44:17 ... Generally it works well on PC browsers; we demoed at IBC and NAB.
00:45:00 ... Limitations of this solution: performance - on STBs and smart TVs it doesn't work so well. Data is copied from MSE to dash.js outside the library, instead of inside the browser.
00:45:16 ... This architectural issue degrades the performance.
00:45:26 ... There is no way to have DRM protected streams.
00:45:57 ... You need to be able to protect the video frames in memory.
00:46:09 ... So, how to enable a native LCEVC decoder architecture?
00:46:46 ... Parse the manifest, download the data. MSE with base and enhancement layer decoding, to perform well.
00:47:13 ... MSE and EME are designed to work together, so DRM protection should be a natural outcome.
00:47:24 ... So I come here to ask for help.
00:47:41 ... Questions: how to enable LCEVC decode and playback in a native architecture?
00:49:45 ... Query whether the device can support this codec. Use the Media Capabilities API to detect it. Syntax for querying support, using the codec string, e.g. avc1.42E01E,lvc1
00:50:49 ... It seems simple, but I tried querying with this syntax - can the browser support queries for two codecs, like Dolby Vision? I got a syntax error.
00:51:03 ... Reading RFC 6381, it says you can use multiple codec strings.
00:51:19 ... For modern browsers, are there existing implementations of this kind of query?
00:52:03 ... After the player has negotiated support with the browser, there are two use cases.
00:52:38 ... Use case 1: the input stream is a single segment with dual tracks - a progressive MP4 file, or a DASH stream with one Representation containing two tracks. How to send it to MSE?
00:53:19 ... One segment, so call addSourceBuffer with the codec string containing both codecs. MSE would then have to recognise the data in the MP4.
00:53:52 ... This way of carriage has been standardised in ISO, but there is no implementation.
00:54:28 ... Use case 2: two Representations in the DASH manifest, each linked to its own segments. This method saves bitrate, as the player only downloads what's necessary.
00:55:04 ... Create a SourceBuffer for the base layer and another for the enhancement layer.
00:55:19 ... The browser decodes and makes sure they're synchronised.
00:55:49 ... Question: is the above understanding correct?
00:56:41 ... SoC support is important. Companies like Realtek and Amlogic have implemented support in their STB and TV SoCs.
00:57:05 ... We're working closely with manufacturers. On Android, we use Google's ExoPlayer to support this use case.
00:57:39 ... If the SoC has an implementation in the driver, you just need to add the codec implementation. A high level codec can just set up a link.
00:57:53 ... How to get access to the low level decoder in the browser?
00:58:16 ... People want to use W3C APIs for playback. The trend is moving towards web based APIs.
00:59:01 ... Given hardware decode support, if the user wants to use dash.js or Shaka Player to play a 4K stream, what needs to be done? How to access the decoder from a browser API?
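The capability query under discussion might look like the sketch below. The comma-separated multi-codec contentType is exactly the syntax whose support is the open question raised above, so this is a sketch of the question, not of working behaviour; `makeDecodingConfig` and `probeLcevcSupport` are invented helper names, and the bitrate and framerate values are illustrative.

```javascript
// Sketch, not a working implementation: probing Media Capabilities for a
// base + enhancement codec pair. Current browsers may reject the combined
// codec string, as reported in the presentation.
function makeDecodingConfig(codecs, width, height) {
  return {
    type: "media-source",
    video: {
      contentType: `video/mp4; codecs="${codecs.join(",")}"`,
      width,
      height,
      bitrate: 8_000_000, // illustrative values
      framerate: 30,
    },
  };
}

async function probeLcevcSupport() {
  const config = makeDecodingConfig(["avc1.42E01E", "lvc1"], 3840, 2160);
  try {
    // navigator.mediaCapabilities is browser-only.
    const info = await navigator.mediaCapabilities.decodingInfo(config);
    return info.supported;
  } catch (e) {
    // A TypeError here matches the "syntax error" behaviour reported above.
    return false;
  }
}
```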
00:59:27 ... Is it up to browser vendors, or to TV manufacturers, to link the API to the lower level decoder?
00:59:30 [Regarding use case 2 mentioned above, I believe MSE currently does not support merging 2 SourceBuffers into a single video track]
00:59:52 cpn: I would like to hear opinions in the room.
01:00:06 .. You're saying that by implementing LCEVC you can save network usage overall,
01:00:17 .. but you need the ability to decode and render in a browser based environment.
01:00:30 .. You have efficiency concerns, because you have to decode and render into a canvas,
01:00:46 .. apply the enhancement to the canvas data, and that's computationally intensive,
01:01:00 .. but certain devices have hardware acceleration support, and in theory you could take advantage
01:01:16 .. of it to make a more efficient decode pipeline. But you can't use it, because the web APIs
01:01:30 .. don't provide access to it. Then the question is: should we investigate how such support
01:01:47 .. should be introduced into the web APIs, and what conditions need to be met to allow that?
01:01:53 .. Then you have specific questions.
01:02:20 .. The first one is about capability detection: how you express a codec string that includes
01:02:31 .. the base layer and the enhancement layer, and whether this is the right syntax.
01:02:45 wschildbach: I think this syntax is not supported; arguably it should be.
01:03:01 .. RFC 6381, I think, is targeted at expressing what's contained in the media, and by proxy it has
01:03:10 .. become something to use to express capability queries.
01:03:21 .. It's not clear how to query more than one codec at the same time.
01:03:28 .. I don't think it's generally supported.
01:03:44 cpn: Can we move to the question about MSE? I think this is the core of the question.
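The two MSE attachment patterns at issue can be sketched as follows. The helper names are invented; use case 1 follows the standard addSourceBuffer API with a dual-codec string (standardised carriage, but with no known implementation, per the presentation), while use case 2 as written is exactly what the current MSE specification does not allow, since two SourceBuffers cannot be merged into a single video track.

```javascript
// Illustrative sketch only (helper names invented). Use case 1 attaches one
// SourceBuffer declaring both codecs; use case 2 would need two SourceBuffers
// whose outputs merge into one video track, which current MSE does not allow.
function mimeForCodecs(codecs) {
  return `video/mp4; codecs="${codecs.join(",")}"`;
}

function attachLcevc(mediaSource, useCase) {
  if (useCase === 1) {
    // Use case 1: dual-track carriage in a single segment; one SourceBuffer
    // declares the base and enhancement codecs together.
    return [mediaSource.addSourceBuffer(mimeForCodecs(["avc1.42E01E", "lvc1"]))];
  }
  // Use case 2: separate base and enhancement segments, to save bitrate.
  // Today each SourceBuffer yields its own disjoint track, so nothing in the
  // UA combines these two buffers into one decoded 4K stream.
  return [
    mediaSource.addSourceBuffer(mimeForCodecs(["avc1.42E01E"])),
    mediaSource.addSourceBuffer(mimeForCodecs(["lvc1"])),
  ];
}
```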
01:03:58 .. You're proposing that, in order to benefit from the bandwidth/bitrate reduction, you really
01:04:11 .. want to look at use case 2, where there are separate source buffers, one for the base layer
01:04:23 .. and one for the enhancement layer, and allow the implementation to combine them together.
01:05:00 Jianhua: They're both the same really, just different ways to express the data.
01:05:29 .. We can focus more on one use case as a starting point.
01:06:08 tidoust: On use case 2, if I understand MSE, I don't think it's possible for two source buffers
01:06:22 .. to be merged into one video track. It would not work well with the way the specification is written now.
01:06:36 .. Use case 1 does not require changes to the spec, but use case 2 does.
01:06:51 Jianhua: Use case 2 is more up to the MSE implementation.
01:07:08 tidoust: Each source buffer is associated with a video track, and at the moment they are disjoint,
01:07:21 .. so you can't have two source buffers be merged into a single video track.
01:07:50 alan: If we wanted to start a study mission to do that enhancement, what would be the way to do that in W3C?
01:08:17 cpn: Some of this depends on the end goal. If you want something that works across all
01:08:29 .. mainstream desktop and mobile browsers, then I would recommend starting discussions with
01:08:41 .. the projects that implement those browser engines, and getting expressions of support
01:08:52 .. to say that it looks interesting and we want it as a general capability on the web.
01:09:07 .. With support in that direction, the downstream work of specific API design
01:09:29 .. can be figured out. You're in the right place, in this group.
01:09:40 .. It would be interesting for us to have a document that captures what you have described
01:09:55 .. and the existing pain points and the need that you see, and that supports a dialogue
01:10:01 .. with the browser engine implementations.
01:10:25 alan: We can head in that direction. We have support from the organisations building
01:10:42 .. equipment for the TV 3.0 service. Hopefully we can then bring the browser community
01:10:49 .. in after demonstrating the business importance.
01:11:04 .. But you're saying that we should come back here for API discussions?
01:11:20 Chris: That's right. This group can't work on specifications; we have to focus on use cases,
01:11:27 .. requirements and gap analysis, which is what you've done here.
01:11:42 .. For solution design we can form a different kind of group, to give us the IPR protections to
01:11:46 .. work on draft specifications.
01:11:59 alan: Thank you, appreciate the insight.
01:12:16 igarashi: I've been impressed by the implementation.
01:12:30 .. Aside from the issue of W3C API support, I have a question about codec licensing.
01:13:10 .. If you are targeting mainstream browser support, as you know, we have discussed
01:13:25 .. in W3C, in relation to the patent policy, requirements that the codec licensing be
01:13:44 .. free. Previously, some codec licensing has been such that the decoder is free but the encoder is not.
01:14:01 .. There are still requirements - what is the licensing scheme for LCEVC?
01:14:24 alan: V-Nova is committed to transparency [audio cuts out]
01:15:38 .. [scribe cannot understand, though audio is working again]
01:16:07 .. V-Nova's website has information about licensing.
01:16:34 cpn: Does anyone here want to comment on what kind of requirements browser implementations
01:16:47 .. look for when considering adding support for codecs? What conditions do they place on this, typically?
01:17:06 Chris_bloom: I'm not a strong source here, but a rough generalisation is that after adding support
01:17:17 .. it is difficult to remove later, so you probably want wide support in general, and then add to
01:17:26 .. the browser last. I would recommend getting support all over the place where you can.
01:17:42 cpn: A good indication is to get broad support; when there's widespread adoption, that's a positive
01:17:58 .. indicator. The W3C doesn't set rules on which codecs should or should not be implemented.
01:18:07 .. Our specifications tend to be more descriptive in that sense.
01:18:20 .. We say how to support implementations, but not which codecs must be implemented.
01:18:30 FYI https://www.w3.org/TR/webcodecs-codec-registry/
01:18:37 .. It's a choice for each browser implementation, based on their own criteria, like what Chris just mentioned.
01:19:02 Alan: There's a cart and horse problem, with the API being limited for implementations currently in play.
01:19:09 .. That's something we have to talk about internally.
01:19:36 Song: From recent experience, when I tried to start a discussion between codecs and browsers,
01:19:47 .. the copyright and IPR statement is the bottom line for W3C.
01:20:03 .. Besides that, in general, for codec adoption in industry: it's true that we need to get the potential
01:20:20 .. codec validation data to present to most of the members in the group.
01:20:42 .. Also, LCEVC is not a standalone codec, so we need to let the members see the value of this kind
01:20:42 .. of enhancement framework.
01:20:55 .. We also need developers to see the value of using LCEVC.
01:21:11 .. We need some validation or usage data to prove that this is stable or widely adopted in the industry,
01:21:24 .. even before standardisation. This is my general experience from past work.
01:21:50 alan: Something that may cast it in a different light: LCEVC is a unique technology, in that it is
01:22:07 .. the first scalable video codec solution that seems to have got significant traction for a television
01:22:25 .. service. We've had it for conferencing, but this is a first for television, where the client devices
01:22:34 .. are using browsers to create media rendering experiences.
01:22:58 .. I don't think it will be the only one; an additional stream might be needed for others as well as LCEVC.
01:23:10 cpn: That makes it interesting as well, looking at how to generalise to other encodings.
01:23:29 eric-carlson: Stepping back to requirements for a browser to add support for a new video codec.
01:23:44 .. For us, for Apple's port of WebKit, we typically won't use a software based video codec,
01:23:55 .. because of the power requirements, so support in hardware is generally a requirement for us.
01:23:59 cpn: Thank you
01:24:11 wschildbach: I've been following the discussion with some interest, because there's a more general
01:24:21 .. use case for multiple source buffers - Dolby Vision has been mentioned.
01:24:28 .. The question of what requirements apply is an interesting one.
01:24:48 .. We've heard that this is a conversation with browser implementors, which can be outside W3C.
01:25:01 .. When it comes to changing the semantics of the API, then I think it is in W3C, and starts in this group.
01:25:06 cpn: Yes, it can be.
01:25:33 cpn: Thank you for this, it has been useful; we're happy to continue the conversation.
01:25:37 .. This is a good starting point.
01:25:54 Jianhua: Thank you everyone
01:26:17 Topic: Next Generation Audio codec API proposal
01:26:45 wschildbach: I work for Dolby Labs, and alongside Fraunhofer IIS we have been talking to W3C
01:26:54 .. about next generation audio and how personalisation can be achieved.
01:27:07 .. Rather than give another presentation to this group about what personalisation and NGA are,
01:27:19 .. I want to take a step back and understand where we are on this journey with W3C.
01:27:42 .. I've seen your introduction, Chris, where you talk about requirements, gap analysis documents, and use cases,
01:27:57 .. and how these fit into the process. Are we coming to the end of the journey, or still
01:28:14 .. figuring out if this is a thing? For the group to ponder, and discuss after the break:
01:28:34 .. is this something the group needs an output for, or are we yet to agree that we have something
01:28:37 .. to do?
01:28:44 .. I suggest coming back to this after the break.
01:28:57 cpn: Yes. To add to that, in previous TPACs we've walked through the use cases for personalisation.
01:29:06 .. Did we have detail about the gap analysis at that time?
01:29:14 wschildbach: Possibly not enough detail.
01:29:24 .. The suggestion was made to put forward a gap analysis, and we sent one to the reflector,
01:29:39 .. but did not discuss it in detail. In the last MEIG call I summarised it.
01:29:47 .. We didn't give the gap analysis a lot of detail.
01:30:08 cpn: OK, let's break; we're now at 10:30, we can discuss over coffee.
01:30:19 .. [adjourns for the break]
02:03:40 Topic: Agenda review for session 2
02:03:52 cpn: The previous presentations overran, so let's review the agenda now.
02:04:24 .. Louay, can we hold for 20 minutes?
02:04:29 Louay: Yes, fine for me
02:04:46 cpn: Thank you. Then we'll restart with Next Gen Audio.
02:05:01 Topic: Next Generation Audio codec API proposal (cont.)
02:05:15 cpn: Last TPAC there was feedback to talk about a gap analysis.
02:05:21 .. We had focused on presenting the use cases.
02:05:36 .. There is a document about them, including what an API change might look like.
02:05:55 Wolfgang: Right. Two years ago, I think, we presented the use cases we want to enable.
02:06:04 .. That document is appended to the notes of that meeting. We can share it again.
02:06:25 .. The outcome of that meeting - we began talking about it in 2022 in Vancouver, then
02:06:46 .. continued in Sevilla. In Anaheim we were asked to provide a gap analysis.
02:06:57 .. That is a document to explain why the use cases cannot be fulfilled with the current API.
02:07:20 .. There's a longer Word document that I sent to the reflector, just before the previous IG telco
02:07:31 .. a few weeks ago. I presented a slide deck that abridged it.
02:07:41 .. If you haven't seen the gap analysis, I would encourage you to take a look at it.
02:08:02 .. It explains why the use cases cannot be implemented using existing APIs. We can provide more detail.
02:08:19 cpn: We could decide to use that document as the basis for an IG Note published by W3C.
02:08:37 Nigel: The use cases or the gap analysis document?
02:08:47 cpn: Actually both, either as one or two IG Notes.
02:08:59 .. The question is whether that advances the goal of encouraging implementation.
02:09:07 .. It perhaps gives the requirements some more visibility.
02:09:21 .. Similar to the LCEVC codec discussion: whether this group produces a requirements
02:09:32 .. document or not does not guarantee anyone moves forward to the next stage,
02:09:46 .. but it does give us something to iterate on, and gives us more visibility than a mailing list document.
02:09:53 Wolfgang: Yes, it would also be a point of reference.
02:10:05 .. We understand that this group has no power to force anyone else to do specific work.
02:10:17 .. We can produce a Note that shows what we have talked about and where we stand,
02:10:27 .. and it could provide an end point to our conversations on this topic.
02:10:48 cpn: We could decide to do that. If we have sufficient motivation and interest
02:11:13 .. to do that work, then I would be supportive. We can figure out if that's one or two documents.
02:11:32 cpn: I wondered if you have any thoughts on the Eclipsr codec, whether it has similarity to the AC-4 and MPEG-H
02:11:43 .. codecs, and whether that means we should be looking at a broader set of codecs in the gap analysis.
02:12:01 .. I'm not familiar with the details, but my colleague Matt, who chairs the Web Audio WG, thought
02:12:06 .. it would be worth considering.
02:12:23 .. How similar or different their capability sets are, and whether it satisfies the same set of use cases.
02:12:36 Wolfgang: I don't actually know; maybe colleagues from Fraunhofer might know.
02:12:59 .. I think Eclipsr does immersive audio, but I'm not sure if it does personalisation.
02:13:10 .. I don't think we are proposing any particular codec, though; we want to be codec agnostic.
02:13:17 Bernd: That was a clear requirement for us.
02:13:31 Wolfgang: If the proponents of Eclipsr have particular viewpoints, we would welcome them.
02:13:35 .. Would it change anything, though?
02:13:47 cpn: It depends on who is interested to implement.
02:14:02 Wolfgang: I think implementation is a different issue, not for this group.
02:14:10 .. If nothing gets implemented then none of this matters.
02:14:19 .. I think that's a different set of conversations. Or is it?
02:14:32 cpn: Typically, a WG is going to want at least one implementer willing to commit,
02:14:40 .. not necessarily to a specific codec, but to the API shape,
02:14:50 .. at least with expressions of support from others.
02:15:00 .. We don't want specifications to be drawn up with no implementer support.
02:15:20 .. There would need to be at least one committed, and ideally others giving expressions of support.
02:15:34 Wolfgang: That would be a prerequisite for anyone starting work in the Media WG.
02:15:45 cpn: Exactly, yes.
02:15:55 Wolfgang: Is it a requirement to finish the work in MEIG?
02:16:07 cpn: Not really, no. If implementers are committed to working on it, then all you need is a
02:16:16 .. definition of the problem space, such that we can charter a group around it.
02:16:29 .. They can do the spec work then. It can happen independently of this group.
02:16:36 .. We're not the only route to creating things.
02:16:46 .. It could come out of private conversation with potential implementers.
02:16:57 .. Then that would feed through the chartering process into pulling it into a WG.
02:17:40 Nigel: The other approach is to have an incubation group, where you prototype an implementation. When you have something that looks like it works, standardise.
02:17:58 cpn: W3C is quite well set up to do incubation groups, either in a community group created for the purpose,
02:18:14 .. or the WICG, where all the browser vendors are already signed up to the IPR terms, so it might
02:18:21 .. be an easier path than creating a new group.
02:19:18 Igarashi: Implementation is not a requirement to move forward with a standard; it's for interoperability. If a member supports the activity, it can move forward.
02:20:03 ... More important is whether people in the IG support the work.
02:20:55 Wolfgang: So what this group can give is a statement of support, that the technology and potential new activity are useful.
02:21:56 ... What I'd like from this group is to agree that the gap analysis is correct, i.e. that the use cases cannot be implemented using existing APIs.
02:22:03 Igarashi: And that the use cases are beneficial to the web.
02:22:49 Wolfgang: I'm looking for support about how we do that. Is it a Note or something else?
02:22:57 Chris: Notes are the tools we have, then separately there's advocacy 02:24:16 Chris: To make it concrete we can publish a Note or two Notes that will serve as a point of reference. 02:24:19 .. We have done that before. 02:24:32 .. At this point that would be a proposed Resolution, for the IG, based on the documents 02:24:43 .. already shared, to draft an IG Note detailing the use cases, requirements and gap analysis 02:24:48 .. for Next Generation Audio Codecs. 02:24:59 Wolfgang: Should we start that today, or give the group time to review the documents. 02:25:08 q+ 02:25:17 Chris: There are some formalities about how we do CfC and see if there are any objections. 02:25:18 ack k 02:25:35 kaz: Are we okay with moving ahead as a whole IG or should we set up a task force? 02:25:51 Chris: A task force would be a subset of the IG that forms to work on a specific area of work, 02:25:57 .. holds its own meetings, and then reports back. 02:26:05 .. I don't know, do you recommend we have one? 02:26:19 kaz: Maybe we could start with a simple draft first and then think about how to proceed after that. 02:26:34 Chris: Task forces help when the group is working on multiple things at the same time. 02:26:53 .. At the moment we aren't doing that, so it's a tool available to us but we don't need to do it. 02:26:54 q? 02:27:03 Haruki has joined #me 02:27:11 Wolfgang: If everyone agrees already then we can move ahead, or if they're neutral. 02:27:33 Chris: We'll take that as a proposed resolution and issue a CfC 02:27:34 q+ 02:28:12 Nigel: I think you don't need a CfC to start the work, it's more for publishing 02:29:05 Chris: Yes we can get going and do a CfC when it is time to publish 02:29:12 Bernd: We don't have to wait another year? 02:29:23 Chris: Absolutely, we can do it by email or in our regular calls 02:29:37 Wolfgang: OK I will work on the first version. Do we put this on GitHub? 02:30:04 Chris: Yes, that's detail we can figure out. 
There are two formats, Bikeshed and Respec. 02:30:29 .. I can help with that part. 02:30:36 .. The GitHub is the MEIG's own repo 02:30:47 kaz has joined #me 02:30:50 .. When we do the publication step we ask W3C to give it a formal W3C URL 02:30:55 Roy_Ruoxi: I can help with that part. 02:31:11 looking at the Process, it appears to me that Nigel is correct that nothing formal is required until we decide to publish the note: https://www.w3.org/policies/process/#publishing-notes 02:31:41 Nigel: Experience has been that a repo per document makes things easier with PR Preview etc 02:31:45 kaz: Yes it's easier that way 02:31:52 Roy_Ruoxi: I can set that up 02:32:02 Chris: We'll take Roy's advice about which to use and figure out the repo stuff. 02:32:27 Topic: CTA Wave EME Testing 02:32:48 Louay: [shares slides] 02:33:00 wschildbach has joined #me 02:33:05 present+ 02:33:19 scribe+ wschildbach 02:33:35 scribe+ 02:33:39 Louay: A brief intro to the CTA WAVE test suite 02:33:57 Louay gives an overview of the test suite group. 02:34:15 ... which tests devices 02:34:51 ... Idea is to have mezzanine content that facilitates automatic testing of devices. 02:35:28 ... testing is also automatic. There is a test runner, test implementations in HTML+JS, using MSE and EME APIs to play back content. 02:36:02 ... There are a set of test implementations for encrypted and clear content. And an observation framework 02:36:20 .... that records audio+video and checks whether content is missing or misaligned. 02:36:39 ... (demos the device testing process) 02:37:33 ... test device, recording device, test runner are cross-referenced by use of QR codes. 02:38:19 ... a lot of failures can be detected, both in audio and video. A/V sync can be tested as well; and there are audio only tests. 02:38:51 kota has joined #me 02:38:54 ... If you want to get started, there is a landing page at CTA WAVE (link in presentation). 02:39:37 ..
DRM testing is an ongoing activity (currently, only clear key testing is supported). 02:40:25 .. CTA WAVE is looking to support commercial DRM systems in testing, and has a call for partners / survey out. 02:41:07 .. Key survey insights: CBC and CENC are widely used. 02:41:23 ... and Fairplay, Playready, Widevine are dominant DRM. 02:41:54 .. CTA WAVE is looking for organizations interested in supporting the activities 02:42:38 Louay presents results of questionnaire, answer by answer. 02:43:08 .. wide adoption of major DRM systems named above. There is good adoption of CBC encryption (compatible with the Apple ecosystem) 02:43:31 .. which is important because it enables CMAF content to be used across content ecosystems 02:44:15 .. so you can have CMAF content for DASH and HLS, and better utilize content storage. 02:44:51 rrsagent, make minutes 02:44:52 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html nigel 02:45:43 s/... which tests devices/Louay: which tests devices 02:45:54 cpn: suggests a separate meeting to further discuss open issues 02:46:10 s/.. wide adoption of major/Louay: wide adoption of major 02:46:21 rrsagent, make minutes 02:46:22 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html nigel 02:47:03 Louay: agrees with call. Often there are implementation issues, which are surfaced by these tests (and tests are the only way to surface these) 02:47:31 .. one example is switching between encrypted and non-encrypted content, as can happen in advertisement breaks 02:48:08 q? 02:48:09 .. We have basic tests but priority calls need to be made for which test cases to develop further. 02:48:12 q+ 02:48:18 q- 02:48:28 ack wschildbach 02:49:03 cpn: The presentation contains a lot of practical issues and it is interesting for the group to discuss this. 02:49:37 .. and we should have some EME implementers in the room, and get their feedback. Could also do spec fixes on the back of this.
02:50:03 cpn: what can we do to help? 02:50:39 Louay: Help identify the scope of what needs to be implemented. The survey is a good starting point. 02:51:10 .. members that use the EME API could contribute input on what needs to be tested, could review the tests themselves, or run the tests. 02:51:37 .. If they are willing, could also contribute test case implementations. 02:51:50 atai has joined #me 02:52:03 .. If they are DRM vendors, could help with DRM servers. 02:52:24 cpn: This work is very welcome. The test suite is hugely beneficial for practical interop. 02:52:46 .. cpn would encourage people to help as Louay suggested. 02:53:07 .. it is a good next step to have a call fairly soon. 02:53:23 q? 02:53:30 .. we need to check with Louay and SVTA on a joint presentation. Action on cpn. 02:54:03 Topic: Media Content Metadata Japanese CG Update 02:55:18 kazho has joined #me 02:55:41 kaz has joined #me 02:55:46 hiroki: gives media content metadata presentation. 02:56:05 .. hiroki is chair of mcmj cg 02:56:42 .. Explains the background of the breakout session. 02:56:49 Hiroki: We have a breakout session coming up 02:56:53 s/mcmj cg/MCM-JP CG/ 02:57:09 ... We had a breakout at TPAC 2023 to present challenges 02:57:17 scribe- 02:57:50 ... Example issue, operational costs are rising due to adapting for different platforms, and verification on each platform 02:57:54 i|Explains|-> https://www.w3.org/community/mcm-jp/ MCM-JP CG| 02:58:04 ... The MCM-JP CG was created to address the challenges 02:58:28 scribe+ cpn 02:58:31 ... In the breakout we'll present the outcomes from one year of the CG, invite feedback, and include live demonstrations 02:58:52 ... The mission of the CG is to promote interoperability of media metadata across industries 02:59:21 ....
We gather and share case studies and best practices 02:59:24 i|We had a b|-> https://www.w3.org/events/meetings/e9d8c4dc-b34a-4e43-bef9-b7a0c041f407/ MCM-JP Breakout during TPAC 2025| 02:59:36 rrsagent, draft minutes 02:59:38 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html kaz 02:59:47 .... No motivation to move to a different spec unless there's a specific need 03:00:26 ... We're collecting scenarios from different industries, looking at issues, solutions, and best practices for metadata use in each industry 03:00:47 ... A report on feasibility of desired scenarios, combining industry knowledge 03:01:17 ... Outcomes: We collected over 10 case studies, documenting issues and solutions and metadata used in practice 03:01:32 ... We have identified 6 use cases using only existing industry metadata 03:01:44 ... We'll have a live demo in the breakout 03:02:27 ... Some scenarios can be implemented using existing industry metadata. They'll be introduced in the demo 03:03:06 ... [Shares details of the breakout, Tuesday at 08:30 JST, in Floor 3, Room 302] 03:03:43 ... We invite other industries, such as publishers. CG members will be at the breakout. All stakeholders are welcome 03:04:26 lilin has joined #me 03:04:53 q? 03:04:56 cpn: You are listing a good collection of use cases. Looking forward to the breakout tomorrow. 03:05:19 q? 03:05:34 .. You have participants from Japanese publishers and the EPUB community. 03:05:54 kaz: and metadata providers 03:06:42 Topic: WebVMT and DataCue for lighting devices 03:07:58 kaz has joined #me 03:08:40 Ken: Media over QUIC and timed metadata. Some use case studies 03:09:28 ... QUIC is a protocol in IETF. For browsers, using MoQ, use WebTransport API and WebCodecs 03:10:00 ... MoQ enables very low latency. What are the differences with WebRTC? 03:10:17 ... For real time streaming, there are two parts: networking part, and the media processing part 03:10:47 ... WebRTC has both of these.
Fundamentally, only media data can be handled by developers 03:11:22 ... MoQ handles packetisation and transport. It is not standardised in the browser, handled in the application 03:12:01 ... Many use cases will be realised by MoQ, high quality and low latency A/V, multi-track video, high resolution audio. Things that are hard in WebRTC 03:12:30 ... Media and device orchestration. Timed metadata can be handled in MoQ. With timestamps, we can synchronise media data 03:13:30 ... Interactive live screen, with a live venue and a screening 03:14:13 ... MoQ-ILS. Live streaming video, audio, and DMX (lighting) data to a remote site, synchronised 03:14:34 ... About 0.1 seconds latency can be realised 03:14:45 ... [Shows the MoQ-ILS stack] 03:15:17 ... The jitter buffer is well tuned for low latency. Timestamp alignment for synchronisation is now done 03:15:35 ... [Shows demo] 03:17:47 ... At the Kobe develop meetup, we showed a demo. A robot is controlled at the venue site, so not only video data 03:18:40 ... This requires two way communication. The robot sends feedback to the controller side 03:18:48 q+ 03:20:05 Wolfgang: You listed a number of use cases. The use cases that use WebCodecs are restricted to in-the-clear content. Does that present a challenge? 03:20:20 q+ 03:21:11 Ken: We can use h264 and h265. For browser supported cases, we can use WASM, to have more codecs, and high quality video 03:21:57 Wolfgang: On content protection specifically? Artists might not be willing to use the system... 03:22:28 q+ to ask about synchronised event API 03:22:37 ... Media WG was rechartered to add protection around WebCodecs, not sure? 03:22:39 ack w 03:23:05 Ken: EME and low latency are a challenge. EME v2 is focused on the frame based approach 03:23:52 Kaz: Is it per-frame or per-second update? 03:24:14 Ken: It's updating every frame. For video, 33 milliseconds synchronisation 03:24:15 q? 03:24:20 q- 03:25:07 q- 03:25:26 Philip: WebVMT and TextTrackCue?
03:26:09 Ken: WebVMT is Video Map Tracks, similar to WebVTT but handles object metadata 03:26:31 ... With this timed metadata, geolocation data can be displayed and synchronised to the video 03:26:39 ... [Shows a sample WebVMT file] 03:27:16 ... DataCue is a JS interface for timed metadata. Difference from TextTrackCue, we can add object data to the track 03:27:39 ... WebVMT and DataCue can handle arbitrary data such as lighting devices 03:27:49 q+ 03:28:17 ... [Shows sample of lighting device in WebVMT and DataCue] 03:28:36 ... DMX is used for control of lighting devices 03:29:38 ... The data format is simple binary data, 512 bytes maximum. Each value is applied to the channel of the DMX device 03:30:05 ... Pan, Tilt, Dimmer, Red, Green, Blue. We can set a start address for each datacue 03:30:56 ... Each DMX lighting product has a specific DMX chart for that device 03:31:49 ... The challenge is that we create DMX data from the metadata in the VMT file, so we have to transform the metadata 03:32:28 ... Two approaches: Use a binary format of the DMX data as is, or base64 encoded 03:33:04 ... Approach 2: 03:33:18 ... It's easy to transform the data for a dedicated device 03:33:46 ... [Demo] 03:35:15 ... Code is available on GitHub 03:35:55 ... For WebVMT, the polyfill works well, and in Chromium based browsers using WebVMT 03:38:03 q+ to ask whether slides / presentation will be available? 03:39:13 kota has joined #me 03:39:36 q+ 03:41:36 q? 03:49:30 q? 03:49:31 ack c 03:51:38 ack w 03:51:38 wschildbach, you wanted to ask whether slides / presentation will be available? 03:52:11 kaz: Good discussion that's WoT related. How to handle binary data including streaming data 03:52:34 Ken: Any data can be included in the WebVMT 03:52:46 Kaz: Suggest joining the WoT plugfest!
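[Scribe note: a minimal sketch of the DMX data transform described above. This is a hypothetical illustration; the channel layout (pan, tilt, dimmer, red, green, blue), the startAddress parameter and the function names are illustrative only, not taken from WebVMT, DataCue or any DMX device chart.]

```javascript
// Hypothetical sketch: packing DMX channel values into a 512-byte
// universe buffer and base64-encoding it so it can be carried as
// text in a timed metadata cue payload.

// Build a full DMX universe (512 channels, one byte each).
function buildDmxUniverse(startAddress, values) {
  const universe = new Uint8Array(512); // DMX universe is 512 bytes maximum
  values.forEach((v, i) => {
    universe[startAddress - 1 + i] = v; // DMX addresses are 1-based
  });
  return universe;
}

// Base64-encode the binary universe for the "encoded text" approach
// (in a browser, btoa would be used instead of Buffer).
function encodeForCue(universe) {
  return Buffer.from(universe).toString("base64");
}

// Example fixture: a 6-channel fixture at address 1
// (pan, tilt, dimmer, red, green, blue).
const frame = buildDmxUniverse(1, [128, 64, 255, 255, 0, 0]);
const payload = encodeForCue(frame);
```

The alternative approach mentioned in the session is to carry the binary data as is; the base64 variant trades a ~33% size overhead for text-safe transport inside a cue.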
04:00:15 nigel has joined #me 04:00:47 nigel has joined #me 04:16:11 nigel has joined #me 04:42:28 nigel has joined #me 04:47:36 nigel has joined #me 04:48:08 topic: MEIG / Timed Text Working Group Joint Meeting 04:49:31 hiroki has joined #me 04:50:27 Nigel: (Reviews the agenda) Any other business? 04:50:36 (nothing) 04:51:13 wschildbach has joined #me 04:52:25 atai has joined #me 04:52:25 present+ 04:53:39 subtopic: TTWG Updates 04:54:25 Nigel: Not much has happened on TTML2. Two profiles being actively worked on. First is for subtitles and captions, v1.3 of IMSC Text Profile 04:54:50 ... This is in CR. A couple of changes: introduce the TTML2 feature that allows font variations for superscript and subscript 04:55:05 ... The other is to support Japanese character sets better. In collaboration with ARIB 04:55:35 ... The only thing we had feedback on, is we have separated the text profile from the image profile in IMSC 04:55:52 ... No change, so IMSC 1.3 will be just a text profile 04:56:39 ... Japanese feedback, sometimes graphics are used for company logos in subtitles and captions. I'll suggest using reserved Unicode codepoints. That should allow character-like glyphs that can be laid out by the browser 04:56:44 ... Goal to get IMSC to Rec soon 04:57:03 ... As it's a profile of TTML2, no features introduced. So the test suite is empty 04:57:07 ... Any questions? 04:57:09 q? 04:57:24 ack k 04:57:33 (none) 04:57:55 Next, DAPT is for dubbing and audio description. Working towards a new CR Snapshot, the last before Rec 04:58:11 We have a test suite and an implementation report. Awaiting implementation feedback 04:58:20 ... It's a good format for transcripts and translations 04:58:39 ... A nice stepping stone from transcription to subtitles and captions. It doesn't require styling and positioning 04:59:04 ... Any questions on DAPT? 04:59:09 (none) 04:59:32 Nigel: WebVTT is in CR from 2016.
Some changes proposed in the last year, to support an attributes block 04:59:58 Dana: Last year we proposed WebVTT as an Interop focus area, it was approved as an investigation area 05:00:18 ... Proposed because of low WPT test scores, and to promote adoption of VTT across the web 05:00:32 ... Investigation area means focus on improving the test suite 05:00:57 ... Once that's done it might be a focus area. For now, hard to improve interop score without a test suite 05:01:25 ... Started investigation in July. Limited the scope to tests that one browser fails. We've added some new tests as well 05:01:56 ... We maintain a spreadsheet of which tests have been investigated so far. Discuss at bimonthly meetings. If we think there's a bug in the test, we file a PR 05:02:18 ... Or we file bugs against implementations. Or against the spec 05:02:31 ... We propose investigating in 2026 05:02:53 ... Calls are an offshoot of TTWG so we don't occupy all the meeting time 05:02:59 Nigel: So they're not W3C meetings? 05:03:11 Dana: No 05:03:19 Nigel: Would be good to have visibility, e.g., minutes 05:03:40 Dana: We have a github repo with meeting notes. Wasn't sure how to bring it here 05:03:50 ... I'd welcome help on how to do that 05:03:54 Nigel: Let's follow up offline 05:04:17 Dana: So far we've made changes to about 70 tests. We've merged about 7 PRs to the WebVTT WPT repo 05:04:30 ... We merged one editorial PR against the spec 05:04:42 ... This allowed us to clarify the behaviour tested by 10s of tests 05:04:59 ... We have open issues and PRs against the VTT spec to discuss in TTWG 05:05:12 ... Issues related to our ability to test more areas of the spec 05:05:23 ... One example, that cues move out of the way when the controls are showing 05:05:39 ... The tests were written arbitrarily, each browser has uniquely sized controls 05:06:31 ... We'll continue working on VTT interoperability, hope to have people join or contribute 05:07:14 ...
I'd like to ask representatives of web apps, if there are things needed to allow browsers to render VTT cues. Things not in the spec that should be added? 05:08:04 Nigel: There's a wider discussion to have, MEIG might be a good place. Outside the web, broadcast standards use TTML. So the web is a weird outlier 05:08:23 ... CMAF profile requires IMSC, but allows VTT if you want. Weird that the web doesn't support that 05:08:45 q? 05:08:45 q? 05:09:02 subtopic: DVB Accessibility Implementation Guidelines 05:10:29 Andreas: I'm chair of the DVB accessibility task force, and working for SWR in Germany 05:10:52 ... There's a letter, requesting feedback. There's a public draft that anyone can review 05:11:01 ... DVB standardises broadcast technology 05:11:35 ... Guideline is DVB Bluebook A185. 05:11:59 ... It originated in DVB-I, but it's a spec that enables a common UI for broadcast and internet media offerings 05:12:01 thelounge has joined #me 05:12:31 ... Hides the complexity of getting media services, independent of source, and standardises the internet media offering 05:13:06 ... A standardised portal or interface for content. Many broadcasters and manufacturers working on it. Published as an ETSI TS 05:13:15 ... New Bluebook planned next year, A177r7 05:14:00 ... The other spec is the metadata used. For accessibility it defines a model for how to signal a11y features 05:14:14 ... We have guidelines. DVB Bluebook A185 05:14:44 ... The first Bluebook published in Feb 2025 is a guideline to implement accessibility in DVB-I 05:14:57 ... The second, A185r1 is more generic 05:15:10 ... This new draft, the main purpose is to document what's in DVB specs 05:15:36 ... Different signalling mechanisms, so we describe those for each spec 05:15:58 ... Pre-playback signalling, e.g., in EPGs. In playback signalling, to select the different services 05:16:13 ... It also adds categorisation of features 05:16:34 ... What are we asking from W3C? 05:16:49 ...
Feedback, is it complete, accurate? 05:17:16 ... Different standards using different terminology and descriptions. There's no harmonised way to describe 05:17:46 ... so it's understood through the entire chain, end to end 05:18:11 ... Nigel is the main contributor 05:18:35 ... How it's categorised, and naming of the features 05:19:07 ... Audio Description, Sign interpreted video. Hard of hearing subtitles (known as captions) 05:19:56 ... Audio related a11y features. Next gen audio or object based audio, or pre-mixed enhanced speech intelligibility, or an enhanced immersion audio mix, or dubbing audio 05:20:23 ... New a11y features for media. A programme localised for a certain country. A video with easy to understand language 05:20:49 ... Transliteration subtitles - writing the same words using a different writing system 05:21:15 ... Content warnings, at programme level, before you watch, or in-programme signalling 05:21:26 q? 05:21:53 Chris: When do you need the feedback? 05:22:27 Andreas: Ideally by end of this month. But we can be a bit flexible. ASAP, but even after November is still valuable 05:22:52 Nigel: Kinds of feedback: is this the right set of features? And is this more useful in a global setting? 05:23:07 ... e.g., global alignment on terminology, e.g., in W3C, ITU 05:23:47 ... One of the motivations I have writing this, is that over many years I've noticed that people who aren't experts in making media accessible don't know these terms 05:24:07 ... And then developers, if they don't know what a feature is, they might not implement, or not implement properly 05:24:25 ... The problem might not just be for device manufacturers 05:24:42 Andreas: European norms, also struggle with terminology 05:24:59 ... What's used in WCAG vs in DVB not always the same, so good to align on it 05:25:00 Nigel: Yes 05:25:21 ChrisBlume: Explain transliteration subtitles? 05:25:42 Nigel: In Serbia, writing in Cyrillic and in Latin 05:25:53 q? 
05:25:53 q+ 05:26:30 Andreas: Interest to review? We'll discuss with APAWG in joint meeting this week 05:26:41 ... Need media and a11y expertise 05:27:06 Hiroki: I'm excited about this topic, a11y is a key requirement for modern broadcasting services 05:27:13 wschildbach has joined #me 05:27:17 q+ 05:27:25 ... As public broadcaster, we align to current web standards, WCAG, including 3.0 05:28:21 .... Japan proposed a content discovery system, as example of a common broadcast+broadband schema approach 05:28:37 ... Your insight on metadata is a key point 05:29:03 ... We've been exploring approaches such as automatically applying WCAG HTML tags using programme information, expressed in schema.org metadata 05:29:11 q- 05:29:54 ... I prepare the show metadata in schema.org, Javascript generated. Showed a demo in TPAC breakout session 05:30:06 ... I want to continue to discuss this, the metadata vocabulary 05:30:11 ... It's important, I think. 05:30:23 q? 05:30:29 ack h 05:31:14 cpn: We have a session on Thursday with APA. What's the most useful thing for us to do? 05:31:30 .. Review the document, as Nigel described, send feedback on the definitions, terminology, 05:31:40 .. anything that's missing. Do we need a meeting to go through it in detail? 05:32:05 Andreas: Whatever we want to do, as always. Nigel and I have worked on it, so the feedback we 05:32:12 .. need is from others. 05:32:23 .. As we see with APA, it would be good to find people to drive it. 05:32:41 .. It is good to get feedback from whoever wants to provide it. 05:32:52 .. The wider question is if we should work on a common set of references for media 05:32:57 .. accessibility services and metadata. 05:33:12 .. In DVB we have different ways to do it. This document is informative, but as I showed before 05:33:26 .. in TV-Anytime we have a fixed normative vocabulary. It would be good to bring it all together. 05:33:32 q? 
05:34:35 Nigel: I think there are gaps, but also things included that others haven't thought of. One example, is using NGA concepts to make audio accessible 05:34:51 ... It might expand horizons of people in the web community 05:35:00 Wolfgang: Yes 05:35:25 ... Dialog enhancement, narrative importance. Where would that factor in? 05:35:55 Nigel: Processing the audio to make dialog easier to understand is one. Other is generating audio to make it more immersive for people who can't see 05:36:38 Wolfgang: Would the document have common ownership between DVB and W3C? is that the idea? 05:36:51 Andreas: The DVB document is for their own specs 05:37:19 ... A more global document doesn't have to be DVB, could be W3C. Look at if there's interest, and then see where is the best place to work on it 05:37:50 Nigel: Some W3C specs are relevant. MAUR, and WCAG3. I think WCAG3 should be focused on barriers to access 05:38:18 ... That's a good framing, but can be too abstract for developers 05:38:42 ... So write the mitigations for each of the barriers to access. Then you find there's duplication 05:38:59 ... This document describes the mitigations, and maps them to the barriers to access. Makes it more concrete 05:39:32 q? 05:39:47 subtopic: Regulatory changes 05:40:21 Nigel: Two things I'm aware of. FCC changes, and in EU, the EAA. EN 301 549 being updated 05:40:36 ... They both potentially impact how we make media accessible on the web 05:41:23 Dana: FCC mandates an end user should be able to change their caption style settings without exiting the web app. They need to be at the system level 05:41:38 ... Apps need to show a control that shows the system settings menu 05:41:52 ... WebKit proposal to add a method on the HTMLMediaElement to show the caption menu 05:42:04 ... To mitigate fingerprinting, the website can't access the preference 05:42:12 ... The browser does the work of styling the VTTCue 05:42:39 ...
The API we propose has an optional argument to let the website position where the menu appears, e.g., an anchor node 05:42:49 Nigel: We have time in TTWG to discuss 05:43:22 ... Timeline? 05:43:36 Dana: I'd need to check, but August 2026 05:43:47 Andreas: Does it include metadata? 05:43:55 Dana: Styling, color and font 05:44:15 Andreas: It affects not only web, so TV manufacturers. So good to have alignment 05:44:20 Nigel: Only a requirement in US? 05:44:22 Dana: Yes 05:45:12 Nigel: From BBC perspective, it breaks our subtitles. We include semantic information in colours, e.g., change of speaker 05:45:30 Dana: In Mac or iOS you can check a box to allow the website to override that style 05:45:57 Nigel: The issue is that the colour override is for a single colour, not manipulation of the colour palette to show them differently 05:46:17 Nigel: Any questions about this? 05:46:19 (none) 05:47:02 Andreas: In Europe, EAA, came into force this year 05:47:14 ... What does it mean for web standards, web media, and browsers? 05:47:38 ... EAA is a follow up to the Web Accessibility Directive, EU 2019/882 05:47:57 ... This puts requirements on public sector, and EAA also applies to private sector 05:48:18 ... National laws must be at least as strict as EAA. Should now be law in EU countries 05:48:25 Dana has joined #me 05:48:41 ... It applies to any product that provides access to AV media: TVs and STBs 05:48:55 ... For services, any service that provides AV media 05:49:20 ... or services that provide accessibility services 05:49:40 ... Requirements, all comply with WCAG-type requirements 05:50:11 ... For media, components need to be fully transmitted and displayed with adequate quality, synchronised, and give user control over display 05:50:44 ... That means, if you have an MP4 container and IMSC in it, the subtitle needs to be accessible to the user 05:51:06 ... The regulation applies to products and services, not browsers directly 05:51:24 ...
The base technology to implement regulatory requirements 05:51:40 ... Interesting for the IG to look at if web standards are ready. I see gaps. 05:52:25 ... On the EN 301 549, a harmonised norm can be used as a presumption of conformity. If conformant to the norm, I'm also conformant to the EAA 05:52:55 ... It should have been ready in June, still working on it 05:53:01 ... Plan to be ready next year 05:53:10 ... Certain parts on media accessibility in chapter 7 05:53:32 ... 7.1 is display, synchronization, personalisation of subtitles, also spoken subtitles 05:53:57 ... 7.3 on User controls. Activation via remote control or other interface, at same layer as channel change or volume control 05:54:40 q? 05:55:04 ... Have a more systematic approach? Gaps to be closed? 05:55:09 q? 05:55:12 q+ 05:55:23 ack cpn 05:55:39 cpn: You've just quoted some European Norms that use terminology - is it aligned with the 05:55:47 .. DVB terminology from the previous topic? 05:55:55 Haruki has joined #me 05:56:06 Andreas: No, I think DVB should contact ETSI about that, but I am not sure if they will change. 05:56:20 .. They had a very long discussion on subtitles and captions and which to call it. 05:56:35 .. In the US "subtitles" means something very different to what it means in Europe. 05:56:39 .. A common mapping would be useful. 05:56:53 cpn: Is it as specific as the FCC about user controls, e.g. saying it's a system level thing 05:57:02 Andreas: 2 things - user control means switching on and off. 05:57:16 .. For user preferences it only says to give the user choice to change it but it does not say 05:57:19 .. where it should be done. 05:57:28 .. I think they want to define what needs to be achieved but not how. 05:58:12 Nigel: Synchronization of subtitles and captions. In the work on the EN 301 549 update, this came up. The requirements are slightly tighter 05:58:21 ... +/- 100ms of the timestamp 05:58:53 ... I sent feedback that this is too coarse.
Using TextTrackCue in desktop browsers, it fires within 5-10ms, which is fine 05:59:01 q+ 05:59:36 ... TV manufacturers say their web stacks, even if they support TextTrackCue, they're more like 150-200ms threshold. They implement using currentTime, which isn't appropriate for this purpose 06:00:25 ... A half second subtitle, shown 100ms late and cleared 100ms early, loses nearly half the duration it should be shown for 06:00:52 Andreas: Blame given to web standards, it was worse years ago but changed now 06:01:33 Wolfgang: Going back to the terminology, subtitles is used differently in different contexts, so discuss the concept, not the terms 06:01:49 ... in different geographies 06:02:22 q- 06:02:35 subtopic: Real time timed text streams on unmanaged networks 06:03:07 Nigel: Live captions, and streaming of pre-recorded subtitles. How to transport them over the network, including unmanaged networks, e.g., into and out of cloud 06:03:49 ... Not talking about distribution to client players. MP4 and DASH/HLS covers that. This is about upstream 06:04:24 ... SRT transport is good for transfer on unmanaged networks. Popular for AV media, and now being demonstrated for use with subtitles and captions 06:05:02 ... e.g., Syncwords demonstrated using TTML in SRT using the DVB TTML spec, which specifies how to put subtitles into MPEG2 transport streams 06:05:41 rrsagent, draft minutes 06:05:42 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html cpn 06:06:18 Chris: What should this group do? 06:06:41 Nigel: I'm just sharing it. No web standards for sharing upstream of distribution 06:07:14 ... It's an application, not a requirement for new stuff 06:07:36 ...
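[Scribe note: the subtitle timing arithmetic discussed under the regulatory topic above can be sketched as follows. This is a hypothetical illustration of the figures quoted in the meeting; the function name and numbers are illustrative, not from EN 301 549 or any spec.]

```javascript
// How much of a subtitle's intended duration survives a given
// display-timing error at each end (shown late, cleared early).
function visibleDurationMs(intendedMs, lateShowMs, earlyClearMs) {
  return Math.max(0, intendedMs - lateShowMs - earlyClearMs);
}

// A half-second subtitle within a +/-100ms tolerance at each end:
const visible = visibleDurationMs(500, 100, 100); // 300ms remain visible

// The same subtitle with ~10ms TextTrackCue event jitter at each end:
const visiblePrecise = visibleDurationMs(500, 10, 10); // 480ms remain visible
```

This is why a +/-100ms tolerance that looks small in absolute terms removes a large fraction of a short cue's display time, while event-driven cue firing barely affects it.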
There's an Audio Description CG meeting, discussing DAPT 06:08:13 kazho has joined #me 06:08:43 [adjourned] 06:08:47 rrsagent, draft minutes 06:08:48 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html cpn 06:08:55 rrsagent, make log public 06:26:05 nigel has joined #me 06:26:27 nigel has joined #me 06:31:49 nigel_ has joined #me 06:52:21 kaz has joined #me 06:58:57 kaz has joined #me 07:00:03 kaz has joined #me 07:27:31 nigel has joined #me 08:30:32 Zakim has left #me 08:52:02 rrsagent, make minutes 08:52:03 I have made the request to generate https://www.w3.org/2025/11/10-me-minutes.html nigel 09:07:48 nigel has joined #me 09:11:49 nigel_ has joined #me 09:27:56 nigel has joined #me 09:28:19 nigel has joined #me 09:49:05 nigel has joined #me 11:15:27 nigel has joined #me 11:31:35 nigel has joined #me 11:51:28 nigel has joined #me 12:10:20 nigel has joined #me 12:30:42 nigel has joined #me 12:51:42 nigel has joined #me