07:28:23 RRSAgent has joined #me
07:28:27 logging to https://www.w3.org/2023/09/11-me-irc
07:28:28 Zakim has joined #me
07:28:32 RRSAgent, make logs public
07:28:51 irc has joined #me
07:31:59 Agenda: https://github.com/w3c/media-and-entertainment/issues/95#issuecomment-1580622323
07:34:23 Mizushima has joined #me
07:34:30 nigel has joined #me
07:34:57 Meeting: Media and Entertainment IG - TPAC 2023
07:35:06 present+ Tomoaki_Mizushima
07:35:06 JohnRiv has joined #me
07:35:10 nhk-ryo has joined #me
07:35:25 present+
07:35:25 ChrisLorenzo has joined #me
07:35:43 scribe+ tidoust
07:35:44 Louay has joined #me
07:35:48 present+ Francois_Daoust
07:35:55 igarashi has joined #me
07:36:00 present+ Louay_Bassbouss
07:36:02 atai has joined #me
07:36:11 present+ Nigel_Megitt
07:36:14 ohmata has joined #me
07:36:26 present+ Igarashi
07:36:28 cpn: [reminds people of code of conduct and health policy]
07:36:50 kaz has joined #me
07:37:01 Evan has joined #me
07:37:01 rrsagent, make log public
07:37:02 shie has joined #me
07:37:03 Maud_ has joined #me
07:37:05 Song has joined #me
07:37:05 present+ andreas_tai
07:37:05 rrsagent, draft minutes
07:37:06 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
07:37:13 present+ Kaz_Ashimura
07:37:49 ... [then goes through the agenda]
07:37:57 Wilaw has joined #me
07:38:04 Slideset: https://docs.google.com/presentation/d/1sadWsW1NZkfjlUdZ5FkXa_taUCzp1jyBNtemqY2WXgE/edit#slide=id.p
07:38:05 ccameron has joined #me
07:38:17 present+
07:38:20 [Slide 6]
07:38:38 Topic: M&E IG introduction
07:38:42 [Slide 7]
07:38:53 [Slide 8]
07:39:16 cpn: A forum for media-related topics as they relate to the Web, looking into new use cases and requirements. We identify new features that would help build media-related applications.
07:39:27 ... Across the entire space, from production to final consumption.
07:39:37 [Slide 9]
07:40:10 cpn: Historically, the group started with the addition of video to HTML. Adaptive streaming through MSE and EME.
07:40:51 lilin_ has joined #me
07:40:53 ... We've got a new generation of technologies coming through: WebCodecs, WebTransport, WebRTC-based technologies, etc.
07:41:05 ... Creating a more powerful platform for media applications.
07:41:06 young has joined #me
07:41:14 ... We don't produce specifications; we look at use cases and requirements.
07:41:15 +Youenn Fablet
07:41:25 ... We liaise with other industry-related groups.
07:41:35 youenn has joined #me
07:41:36 [Slide 10]
07:42:12 cpn: We can propose the creation of a new Community Group, go back to Working Groups with requirements, give input to ongoing developments, etc.
07:42:17 wilaw_ has joined #me
07:42:17 [Slide 11]
07:42:52 Topic: CTA WAVE news
07:43:03 Slideset: @1
07:43:14 s|@1|https://www.w3.org/2011/webtv/wiki/images/4/4d/WAVE_TPAC_2023_-_Survey_Results_%26_WMAS2023.pdf
07:43:20 rrsagent, draft minutes
07:43:22 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
07:43:28 [Slide 1]
07:43:28 eric_carlson_ has joined #me
07:43:43 JohnRiv: I shared a survey we did a few months ago in CTA WAVE.
07:44:03 [Slide 2]
07:44:25 JohnRiv: The survey intended to assess the need for a performant common platform for TV.
07:45:02 ... We reached out to a few individuals and organizations. We received 9 responses. The comments we received are pretty consistent.
07:45:09 [Slide 3]
07:45:18 present+
07:45:21 JohnRiv: The problem statement at the beginning of the survey is on this slide.
07:45:31 ccameron has joined #me
07:46:17 ... Raises performance issues, proprietary native application development.
07:47:03 ... Ways to address concerns include a canvas-based approach, the creation of a sort of mini-app equivalent for TV, etc.
07:47:05 present+ Chris_Needham
07:47:12 irc has left #me
07:47:13 [Slide 4]
07:47:25 cpn has joined #me
07:47:30 JohnRiv: Everyone would be willing to participate in a collaborative effort to address these issues.
07:47:32 scribe+ cpn
07:47:48 ... We're certainly looking into this in WAVE but want to encourage collaboration.
07:47:54 [Slide 5]
07:48:23 JohnRiv: A majority of developers report performance problems. A majority of CE manufacturers report that streaming apps create issues.
07:48:49 ... A majority agree that a common platform would be beneficial. Pretty clear that a solution needs to be web-based.
07:49:00 ... There must be interest from streaming vendors for this to succeed.
07:49:15 [Slide 6]
07:49:57 JohnRiv: Going into more detail here. A majority of CE device manufacturers feel that HTML5 can be implemented with acceptable performance. The opposite for web developers.
07:50:00 [Slide 7]
07:50:11 JohnRiv: Not going into details of individual comments.
07:50:39 ... Insisting on the distinction between supported and not supported well enough.
07:51:31 ... [going into response details]
07:51:51 ... Additional comments around additional APIs that developers would like to see to create more native experiences.
07:52:05 [Slide 8]
07:52:14 JohnRiv: Next question specifically targeted at developers.
07:52:23 ... "Is the runtime performance of web applications on TVs and other consumer devices a problem, in your opinion?"
07:52:44 JohnRiv: Most answer that it is.
07:53:02 ... In general, simple applications run fine, but once you add a bunch of graphics, issues start to show.
07:53:11 [Slide 9]
07:53:19 JohnRiv: Similar question to CE device manufacturers.
07:53:48 ... Who report issues with supporting streaming apps.
07:54:36 [Slide 10]
07:54:54 JohnRiv: Third question, on a common web platform, for CE device manufacturers.
07:55:02 ... 3 yes, 1 no, 1 other.
07:55:43 ... Highlights some features that are costly to support on some embedded devices.
07:55:48 [Slide 11]
07:55:59 JohnRiv: Same question for developers.
07:56:05 ... 4 yes, 1 no.
07:56:23 ... Some suggest a simpler set of functionalities for TV.
07:56:33 ... Coupled with TV-specific features.
07:56:51 ... Needs to be web-based, otherwise we'd be shooting ourselves in the foot.
07:57:06 [Slide 12]
07:57:50 JohnRiv: Last question on opinions on possible solutions (MiniApps, WebAssembly, WebGL, etc.) for those who have experience with them.
07:58:01 ... Thumbs up for WebAssembly and WebGL.
07:59:31 ... Wondering about feedback from the room. Anything surprising?
08:00:17 ccameron: Good to see confirmation about WebAssembly and WebGL. Wondering about interest in WebGPU?
08:00:45 ChrisLorenzo: Yes, although not supported yet.
08:01:43 ... I do TV app development for my job. 20 years of dev experience on the Web. When we first started many years ago, you would start developing for IE and fix bugs in other browsers.
08:02:29 ... Then mobile web development. iOS and Android. Native apps. But the Web was too slow. Fast forward to today, the web on phones is extremely fast.
08:03:07 ... That relates to TV development, lots of platforms: Samsung, Tizen, LG. 5 or 6, and different operating systems. Different browsers, different browser versions, some of them 5 years old.
08:03:21 ... Also browser code ported to the different devices.
08:03:36 shiestyle has joined #me
08:03:49 ... Building web apps for TV is hard in practice, but needed, otherwise you end up developing multiple versions of native apps for the different platforms.
08:03:54 tetter has joined #me
08:04:03 ... The CPU on TV devices is much slower than the ones you find in mobile phones.
08:05:01 Wilaw: Been discussing this for many years. The problem is that both sides have incentives to run proprietary solutions. How to get out of that?
08:05:28 ... Same thing for TV manufacturers, who would have an advantage with a more powerful platform of their own. How do you break that logic?
08:06:05 rrsagent, draft minutes
08:06:06 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
08:06:16 JohnRiv: I think the answer is that there is a cost to that. Lots of work on both sides. As an app developer, if you're building custom applications, your skills don't transfer well.
08:07:10 ... If we all collaborate, we can perhaps create a platform where we can all benefit and innovate on a common platform.
08:07:13 chair: Chris_Needham, Chris_Lorenzo, Tatsuya_Igarashi
08:07:53 ChrisLorenzo: The apps with the most viewership get the most support from device manufacturers. But they need to spend time each time to test new devices. It takes a few months to roll out a new version of Netflix, for instance.
08:08:07 ... With the Web, things could be way easier.
08:08:28 ... Netflix may have dozens of developers, as we do, to maintain each variant of the application. That does not scale well.
08:09:02 ... Also, one person developing a Web application could more easily compete with other apps.
08:09:25 JT has joined #me
08:09:49 ... Also, native apps take up space on TVs, which have storage constraints. My TV keeps prompting me about apps to uninstall in order to install a new one. With Web technologies, you don't need to install anything.
08:10:09 ... We want to come to a common ground.
08:10:54 cpn: My organization builds TV apps. We very much target the HTML environment for our applications. On certain devices, we need to install an app, but it's only there to bootstrap the web runtime.
08:11:21 ... What we do is vary the level of animations. We put devices into buckets (low-end, etc.).
08:11:44 q+ to mention accessibility as a driver, also cost of change
08:11:50 ... I was talking recently with one of the other UK broadcasters that recently launched their Web TV app. They described the same pain points that we've been through.
08:11:59 q+
08:12:19 ... They started by building a React app, and found the performance to be terrible on TV because of the additional burden on CPUs.
08:12:58 ... The approach that we take is to keep the document structure as simple as possible, and do progressive enhancement, essentially based on fingerprinting of the device to identify the device's model.
08:13:33 ... We're also thinking of new applications. We're looking to more personalized experiences, with more composition done on the client side.
08:14:08 ... We'd like apps to be more responsive based on user needs. We're very much interested in a world where we can leverage WebGL/WebGPU as additional capabilities on the Web on TV devices.
08:14:36 ... We still have the legacy, so having to do things differently would be expensive.
08:15:08 ... The approach that we would like to take is incremental: opt in to additional capabilities when they are available.
08:15:51 ... Some of my colleagues have done some tests on WebAssembly. What we're observing is that, because browsers are evolving, we're starting to see good support across TV devices.
08:16:02 ... Not a requirement for us yet, though, just opt-in.
08:17:00 nigel: Adding to that. We've had a TV app for a number of years. At that time, the APIs available were very limited. Since that time, increasing awareness among TV manufacturers of accessibility features.
08:17:07 ccameron has joined #me
08:17:26 ... That's one of the drivers that allows us to justify spending the money to essentially re-write these apps.
08:17:50 ... Will makes a really good point describing the situation.
08:18:05 ... The cost of change is very high because of the number of devices that you need to support.
08:18:39 ... The challenge for newer companies is very different from that for old players.
08:19:08 ... Certification regime and testing. We have a bunch of tests that devices need to pass in order for us to be confident that our app will run on their platform.
08:19:37 shiestyle has joined #me
08:19:48 cpn: That's quite an expensive thing for us to develop and maintain. So doing it collaboratively would be a plus.
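[Editor's note: the device "bucketing" and progressive-enhancement approach cpn describes above (fingerprint the device model, put it in a capability bucket, vary the level of animations) could be sketched as below. Every model name, tier, and setting here is hypothetical, not from the discussion.]

```python
# Illustrative sketch of the "device buckets" approach: map a
# fingerprinted device model to a capability tier, then scale the UI
# (animation level, image sizes) to that tier. All values are invented.

# Hypothetical fingerprint database: device model -> tier
DEVICE_TIERS = {
    "acme-tv-2016": "low",
    "acme-tv-2020": "mid",
    "acme-tv-2023": "high",
}

# Per-tier presentation settings: richer effects only on capable devices
TIER_SETTINGS = {
    "low":  {"animations": False, "image_width": 320},
    "mid":  {"animations": True,  "image_width": 640},
    "high": {"animations": True,  "image_width": 1280},
}

def settings_for(model: str) -> dict:
    """Return UI settings for a device model, defaulting to the most
    conservative tier when the model is unknown."""
    tier = DEVICE_TIERS.get(model, "low")
    return TIER_SETTINGS[tier]

print(settings_for("acme-tv-2023"))   # full experience
print(settings_for("unknown-model"))  # falls back to the "low" bucket
```

Defaulting unknown models to the lowest bucket matches the conservative stance described: the simple document structure is the baseline, and richer capabilities are opted into only when the device is known to support them.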
08:19:55 ack ni
08:19:55 nigel, you wanted to mention accessibility as a driver, also cost of change
08:20:37 ... Different sorts of communities or content organizations. We're broadcasters, collaborating through groups like DVB or HbbTV. A common HbbTV view would show a concerted broadcasters' view.
08:21:17 ... Was there anything coming from HbbTV?
08:21:54 JohnRiv: When I gave a presentation to that group, it was interesting because they were indeed more focused on the broadcasting side of things. Nothing has come out of that yet.
08:21:58 ack atai
08:23:12 atai: Should W3C work on this common platform? If so, what would be the next steps? If there is interest from this group, would it be a good question to ask these organizations, as SDOs, for a position to see whether it's worthwhile pursuing this?
08:23:42 JohnRiv: Good point. If we feel that there's a need to change standards, W3C would be a good place to do so.
08:24:08 ChrisLorenzo: I think it's not so much about introducing new APIs as about making sure that existing APIs are correctly supported across the board.
08:24:24 q+ to ask if the real requirement here is to be able to specify performance levels in a meaningful way
08:24:31 ... Also, how to launch web applications? There is no way to type in a URL and go to a page.
08:24:45 ... You may have to package your application in some zip format.
08:24:53 ... Different solutions, not a single one.
08:25:27 ... We should have some sort of performance test suite: "This TV is certified to run apps at x frames per second using WebGL".
08:25:57 ... Rendering HTML with CSS can be very slow on some of these devices, which is why we moved to canvas-based rendering to gain more control.
08:26:13 ... Also making sure that SharedArrayBuffer is available across TV devices, etc.
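[Editor's note: the kind of frame-rate certification check ChrisLorenzo suggests ("this TV is certified to run apps at x frames per second using WebGL") might look like the sketch below. The timestamps and the 30 fps threshold are invented for illustration; no such certification exists in the discussion.]

```python
# Hedged sketch of a frame-rate certification check: given frame
# timestamps captured while a device runs a rendering benchmark,
# decide whether it sustains a target frame rate.

def achieved_fps(frame_times_s: list[float]) -> float:
    """Average frames per second over a recorded benchmark run."""
    if len(frame_times_s) < 2:
        raise ValueError("need at least two frame timestamps")
    elapsed = frame_times_s[-1] - frame_times_s[0]
    return (len(frame_times_s) - 1) / elapsed

def certify(frame_times_s: list[float], target_fps: float = 30.0) -> bool:
    """True if the device's average frame rate meets the target."""
    return achieved_fps(frame_times_s) >= target_fps

# A steady 60 fps run: one frame every 1/60 s
steady = [i / 60 for i in range(61)]
print(certify(steady, target_fps=30.0))  # True
```

As noted later in the discussion, an average like this is only part of the story: performance depends heavily on what the app is doing, so a real suite would need scenario-specific benchmarks rather than a single number.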
08:26:47 ack ni
08:26:48 nigel, you wanted to ask if the real requirement here is to be able to specify performance levels in a meaningful way
08:26:49 nigel: I was coming to that: a way to express device performance so that we can tailor a solution to the device.
08:27:09 Piers_O_Hanlon has joined #me
08:27:10 ChrisLorenzo: One problem is that performance is really dependent on the context of what you're trying to achieve.
08:27:45 q+
08:27:59 ... Video playback is actually somewhat easy to measure. When it comes to the UI, there are tons of things you can do, and a zillion tests you could imagine on transitions that you may want to achieve.
08:28:31 ccameron: Video playback vs. web applications: I assume that video playback is done separately, and not tied to HTML rendering.
08:29:16 rrsagent, draft minutes
08:29:17 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
08:29:28 ChrisLorenzo: It really depends on devices. Some support the video tag, others feed in a native video player. But the device manufacturers have spent so much money on video playback that it has stable and predictable performance.
08:29:35 q?
08:29:42 ccameron: Is it the case that the UI is running at a lower resolution?
08:30:03 q?
08:30:39 ChrisLorenzo: Yes, video playback is often 4K, but the CPU is unable to render such a resolution for the UI, and the GPU is almost not there.
08:31:47 ccameron: Separate device for display? E.g., not touching shaders for video players? The GPU might not even have access to the display. I was curious if there was a desire to stay within 1080p and leave video playback on the side.
08:31:52 myles has joined #me
08:32:01 ChrisLorenzo: Yes, the UI is 1080p, sometimes 720p, at max.
08:32:16 ... Way more expensive to reach 4K, although it would be cool to get that.
08:32:55 ... The video layer is often underneath the web app, so you have to "dig a hole" in the app to view the video.
08:33:21 ... A video tag for which you can change the z-index is really difficult to achieve on TV devices.
08:33:50 cpn: And some of the use cases that we're trying to achieve with composition on the client rely on these types of capabilities. Compositing to a video overlay, for instance.
08:34:14 myles_ has joined #me
08:34:29 ... At worst, we need to deliver a video-only stream. Doing the composition upstream is more expensive, though.
08:36:17 ccameron: [giving an example]. Not new APIs; moving everything to WebGL/WebGPU would give a really good UI. If we are to integrate 4K video in the mix, though, that makes things very complicated.
08:36:34 ... What about security with older TVs?
08:36:48 cpn: That's often a problem.
08:37:34 ... The issue there is that there are not always commercial incentives to upgrade software. Also, new features may require more performance and thus introduce additional performance issues on TV devices.
08:37:49 ... We'll come back to it as part of the next topic on testing.
08:38:13 q?
08:38:46 ccameron: I note that these discussions have an impact on mechanisms to introduce HDR, for instance.
08:39:35 Patrick_Griffis: Thanks for mentioning HDR on top of spatial resolution. That's a key topic indeed.
08:40:25 kaz: In Japan, the IPTV Forum was also working on this sort of platform. Consolidating that into a common problem statement and platform would be good. We could publish an official group note about that.
08:40:36 ... The main target is TV, right?
08:40:54 ChrisLorenzo: Yes. CE devices may include, e.g., car devices, but TV is definitely the main target.
08:41:12 ccameron has joined #me
08:41:14 kaz: Also involved in WoT, where CE devices have a broader meaning.
08:41:28 ack k
08:41:38 nigel: One thing that we haven't talked about here is non-functional requirements.
08:42:20 eric_carlson has joined #me
08:43:08 ... There are lots of things that may be needed in parallel. Doing video playback, CSS animations, caption display, audio rendering, network communications. Having a way to describe this sort of app usage at the worst moment would be great.
08:43:13 ChrisLorenzo: Yes, that's a good point.
08:43:45 ... Also remote control, infrared or bluetooth; input events can be really different.
08:43:53 nigel: Also voice processing.
08:44:38 cpn: I'd like to move on to our next point on testing.
08:44:54 Topic: WAVE DPCTF Testing and Web Media API Test suite
08:45:19 Slideset: @2
08:45:32 [Slide 1]
08:45:43 i/cpn: I'd like/nigel: Having a way to express those common non-functional requirements could be a common resource.
08:46:21 Louay: Update on what is happening in CTA WAVE on the media test suite.
08:46:27 [Slide 2]
08:46:37 Louay: This is an overview of the different components of the test suite.
08:46:49 ... I won't go into details; 4 main components.
08:47:24 s/@2/https://github.com/w3c/media-and-entertainment/files/12574037/2023-09-11-CTA-WAVE-Streaming-Media-Test-Suite-Louay-Bassbouss.pdf/
08:47:25 ... In the diagram, you can see content annotation for mezzanine content creation.
08:47:41 [Slide 3]
08:48:35 Louay: You can see the test running. A QR code provides information about the test being run, the frame, and so on. If you record the test on a TV, you can measure how the test runs, measure performance, etc.
08:49:13 ... All components are available on GitHub. You may create new content from your original content, for instance.
08:49:16 [Slide 4]
08:49:25 Louay: The next stage is creation of the test content.
08:49:55 [Slide 5]
08:50:07 Louay: The test runner comes next. It runs tests on TVs.
08:51:03 ... I think you're all aware of Web Platform Tests, more built for desktops, where you can run tests in multiple windows, for instance.
08:51:03 ... We updated the WPT test runner to make tests available on TV sets as well.
08:51:25 ... You can configure your test run.
08:52:02 ... Most contributions we made within the CTA WAVE project have been merged back into WPT, so only one test runner is maintained in a central place.
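[Editor's note: Louay describes recording a TV's playback and reading per-frame QR codes to measure how a test runs. A toy version of the analysis such an observation framework might perform (classifying skipped and repeated frames from the decoded frame numbers) is sketched below; the input data is invented, and the real framework on GitHub is considerably more involved.]

```python
# Hedged sketch: given the sequence of frame numbers decoded from the
# per-frame QR codes in a camera recording, detect skipped frames
# (gaps in the sequence) and overplayed frames (repeats).

def analyse_frames(observed: list[int]) -> dict[str, list[int]]:
    """Classify frame-number transitions from a recorded playback."""
    skipped, overplayed = [], []
    for prev, cur in zip(observed, observed[1:]):
        if cur == prev:
            overplayed.append(cur)                # same frame shown twice
        elif cur > prev + 1:
            skipped.extend(range(prev + 1, cur))  # frames never shown
    return {"skipped": skipped, "overplayed": overplayed}

# Frame 3 is missing, frame 5 is shown twice
print(analyse_frames([1, 2, 4, 5, 5, 6]))
# {'skipped': [3], 'overplayed': [5]}
```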
08:52:19 [Slide 6]
08:52:22 [Slide 7]
08:52:37 Louay: You can export test results as JSON (same as WPT).
08:52:44 [Slide 8]
08:53:03 Louay: The observation framework is used to record test runs.
08:53:21 ... This will help report results directly to the test results repository.
08:53:29 [Slide 9]
08:54:11 Louay: We ran one week of testing within our facilities in Berlin.
08:54:23 [Slide 10]
08:54:29 Louay: Many participants.
08:54:31 [Slide 11]
08:55:05 Louay: We organized tests by giving access to TV devices during different slots.
08:55:26 [Slide 12]
08:55:58 Louay: This diagram shows the test setting.
08:56:16 ... We used a smartphone with high-frame-rate recording for the observation framework.
08:56:21 [Slide 13]
08:56:28 Louay: A demo.
08:56:31 [Slide 14]
08:56:45 Louay: This gives you an overview of how many tests were run.
08:57:55 ... [detailing a test example that failed on TV devices]
08:58:04 [Slide 15]
08:58:12 Louay: A screenshot of the test results.
08:58:19 ... You can see what people see.
08:58:55 ... You can monitor the test execution.
08:58:57 [Slide 16]
08:59:07 Louay: Then you can see test results.
08:59:18 ... After observation.
08:59:40 ... You can produce this from the recorded video with the observation framework.
09:00:03 ... That's it; you can download everything on GitHub.
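[Editor's note: since results are exported as JSON "same as WPT", the desktop-vs-TV comparison Louay mentions later could be done over two such reports. The sketch below uses a deliberately simplified result structure (real WPT reports nest per-test subtests) and invented test names.]

```python
# Hedged sketch of comparing WPT-style JSON results between a desktop
# browser and a TV browser: list tests that pass on desktop but fail
# on the TV. The report shape and test paths are illustrative only.

import json

desktop_report = json.loads('{"results": ['
    '{"test": "/media-source/mse-append.html", "status": "PASS"},'
    '{"test": "/webgl/context-create.html", "status": "PASS"}]}')
tv_report = json.loads('{"results": ['
    '{"test": "/media-source/mse-append.html", "status": "PASS"},'
    '{"test": "/webgl/context-create.html", "status": "FAIL"}]}')

def regressions(desktop: dict, tv: dict) -> list[str]:
    """Tests passing on desktop but not in the TV browser."""
    desktop_pass = {r["test"] for r in desktop["results"] if r["status"] == "PASS"}
    tv_pass = {r["test"] for r in tv["results"] if r["status"] == "PASS"}
    return sorted(desktop_pass - tv_pass)

print(regressions(desktop_report, tv_report))
# ['/webgl/context-create.html']
```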
09:00:36 RRSAgent, draft minutes
09:00:37 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html tidoust
09:04:14 s|Slideset: https://github.com/w3c/media-and-entertainment/files/12574037/2023-09-11-CTA-WAVE-Streaming-Media-Test-Suite-Louay-Bassbouss.pdf|Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0006/2023-09-11-CTA-WAVE-Streaming-Media-Test-Suite-Louay-Bassbouss.pdf
09:04:18 RRSAgent, draft minutes
09:04:20 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html tidoust
09:07:09 s|Slideset: https://docs.google.com/presentation/d/1sadWsW1NZkfjlUdZ5FkXa_taUCzp1jyBNtemqY2WXgE/edit#slide=id.p|Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0007/Media___Entertainment_IG_Meeting_11_Sep_2023.pdf
09:07:13 RRSAgent, draft minutes
09:07:14 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html tidoust
09:11:01 shiestyle has joined #me
09:13:41 JohnRiv has joined #me
09:14:44 myles has joined #me
09:22:00 shiestyle has joined #me
09:25:17 JohnRiv has joined #me
09:32:35 atai has joined #me
09:33:25 JT has joined #me
09:34:06 JohnRiv has joined #me
09:34:06 q?
09:34:36 [Resuming after break]
09:35:21 cpn: One of the things that we hear is that the level of implementation of different features diverges across TV devices. I wonder whether that shows up in tests.
09:35:50 Louay: I tried to focus the slides on Device Playback Capabilities because that was the original focus.
09:37:02 ... But we can also run other web platform tests. We can compare desktop support for features with support in TV browsers.
09:37:19 ... WMAS is the name of the dedicated test suite.
09:37:38 ... You can run the tests yourselves. The challenge was to run these kinds of tests on TV devices.
09:37:57 ... This is why we extended Web Platform Tests.
09:38:55 ... We wanted to use existing tests, and not redo tests from scratch. This works well, but manual tests are not well supported, because, e.g., clicking on a button is not as easy.
09:39:48 ... Another important aspect is that we have already integrated ECMAScript and WebGL tests. Thousands of them, which you can run together with HTML, CSS, WebSockets, etc. APIs.
09:40:25 q+ to ask about subtitle testing
09:40:29 cpn: So you're covering the whole platform.
09:40:44 Louay: Yes, and every change we need to make, we contribute back to Web Platform Tests.
09:40:55 ack ni
09:40:55 nigel, you wanted to ask about subtitle testing
09:41:41 nigel: On subtitles, it mentions "we don't have subtitles tests" yet. I'm wondering what's missing there to include subtitle tests in the framework.
09:41:51 Louay: What we currently have is a first iteration.
09:42:04 ... In the next release, we're also looking into subtitles.
09:42:27 ... The main problem is: how to implement observation?
09:42:40 ... We are working on this. Any volunteers and support are welcome!
09:42:58 nigel: Are requirements settled?
09:43:29 rrsagent, draft minutes
09:43:30 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
09:44:09 Louay: You can participate as a CTA WAVE project member if you want to change requirements.
09:44:09 q+
09:44:11 tidoust: We talked earlier about support for features. Is it enough to test for feature support only?
09:44:27 .. From a pure feature scoping perspective, do the tests cover everything you need?
09:44:34 .. Or are other changes needed in WPT?
09:44:37 scribe+ nigel
09:44:52 Louay: We are relying on WPT tests. What is done there, we are using.
09:45:05 .. Challenge of adding manual tests, e.g., including user interaction.
09:45:18 .. There are only a few tests implemented on top.
09:45:40 .. We developed the test cases for media device capabilities in CTA WAVE.
09:45:51 .. Checking how the feature is implemented and how performant it is.
09:45:57 .. For example MSE.
09:46:22 .. You can check if the TV skips or overplays - about quality of playback.
09:46:36 .. Also startup times for video, if the duration is reported correctly, etc.
09:46:48 .. This is the main focus on device capability in the test suite.
09:47:24 atai: Back to Nigel's question about how W3C could contribute with future tests.
09:47:25 igarashi has joined #me
09:47:44 ... I also had the same question about where requirements are listed.
09:47:58 ... I think W3C could create some baseline set of requirements.
09:48:37 Topic: TV application development
09:48:57 Slideset: https://www.w3.org/2011/webtv/wiki/images/4/4d/WAVE_TPAC_2023_-_Survey_Results_%26_WMAS2023.pdf
09:49:18 [Slide 13]
09:49:47 s|Slideset: https://www.w3.org/2011/webtv/wiki/images/4/4d/WAVE_TPAC_2023_-_Survey_Results_%26_WMAS2023.pdf||
09:50:38 ChrisLorenzo: Problem space is launching a web application on TV sets. We'd like to create a common mechanism. One of the solutions could be to use DIAL, which Netflix developed.
09:51:20 ... I'd like to gather support for such a common mechanism.
09:51:23 ... It's really challenging to test applications on TV sets, which means that you tend to avoid it as much as you can.
09:51:51 cpn: Really interesting, because Louay's presentation shows the need to use a DVB modulator in order to launch the tests on a TV.
09:53:09 ericc: Have you looked at the Presentation API? Two specs: the Presentation API, about discovery of devices, authentication, and establishing a channel of communication to the devices so that you can send them commands. Then there's the Second Screen API that allows you to get much more control.
09:53:33 ChrisLorenzo: I will look into it.
09:54:11 eric_carlson has joined #me
09:54:21 Louay: We use a DVB modulator as cpn mentioned. But we also look at using DIAL on HbbTV devices because HbbTV 2.0 requires it.
09:54:40 cpn: Is there a distinction between the development phase and deployment?
09:55:34 ChrisLorenzo: Yes. One flow is developer mode. I want to simplify that flow. And then another flow is deployment to TV. If you look at mobile phones and PWA in general, it's the ability to tell some application store about your application at a certain location.
09:56:07 q?
09:56:08 Song has joined #me
09:56:18 ... That would be ideal from a development process perspective. It's just a matter of agreeing that it's the right way to go. Microsoft is creating such an application store, for instance.
09:56:22 i|Have you|-> https://www.w3.org/TR/presentation-api/ Presentation API|
09:56:34 i|Have you|-> https://www.w3.org/TR/remote-playback/ Remote Playback API|
09:56:42 rrsagent, draft miutes
09:56:42 I'm logging. I don't understand 'draft miutes', kaz. Try /msg RRSAgent help
09:56:45 Wolfgang: Depending on how you launch an application, functionality can be different.
09:56:51 s/rrsagent, draft miutes//
09:56:53 rrsagent, draft minutes
09:56:55 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
09:57:32 ChrisLorenzo: Yes, permissions can be different. That's another part of it. The application store approach may come with that set of permissions.
09:57:37 q- atai
09:58:05 myles has joined #me
09:58:09 cpn: I'd like to conclude on where to go next.
09:58:18 ... That's something that we talked about before.
09:58:30 ... It's not entirely clear to me what new activity we should start.
09:58:57 Wilaw: I have a couple of suggestions to move forward. We need to start somewhere.
09:59:11 ... We have a test suite that CTA WAVE is developing that also tests performance.
09:59:22 Xiaohan_ has joined #me
09:59:23 ... I believe we should start with it.
10:00:04 ... And then find a content brand that is well-known enough to say "to show my content, you need to pass these tests".
10:00:36 ... Netflix would likely be the last company to join such an initiative because they ahve their own way to do that. That's fine.
10:00:40 present+ Wolfgang_Schildbach, Patrick_Griffis, Will_Law, Christopher_Cameron, Shinya_Takami, Ryo_Yasuoka, Hisayuki_Oomata 10:01:16 present+ Chris_Lorenzo 10:02:03 lilin has joined #me 10:02:17 cpn: From a Media WG perspective, we want our work to be reviewed as early as possible by this community to orient our design. 10:02:28 s/ahve/have 10:03:29 Wilaw: I don't think W3C is the right group to try to coerce industry to use technologies. I think that we should go back to WAVE and convince them to look into creating threshold levels for performance. 10:03:45 ... Then, in parallel, work with W3C to develop necessary features. 10:03:58 nigel: Just a reminder that not all web tests are in WPT. 10:04:15 ... For instance, IMSC tests and TTML tests are in different repositories. 10:04:50 Topic: Joint Meeting with Timed Text Working Group and Media Working Group 10:04:53 present+ Evan_Liu, Bernd_Czelhan, Youenn fablet, Song_Xu 10:05:11 ohmata has joined #me 10:05:36 present+ Eric_Carlson 10:05:47 rrsagent, draft minutes 10:05:49 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz 10:06:16 Nigel: We have 3 topics: IMSC HRM, DAPT, TextTrackCue 10:06:17 s|[Slide 13]|Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0007/Media___Entertainment_IG_Meeting_11_Sep_2023.pdf 10:06:31 ... TTWG deals with formats for timed text: TTML and WebVTT 10:07:01 i/ChrisLorenzo: Problem space is launching/[Slide 15] 10:07:08 ... TTML is profiled, IMSC is a subset of TTML2. It has 3 active versions, feature enhancements in each version 10:07:17 RRSAgent, draft minutes 10:07:18 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html tidoust 10:07:24 ... Different people have adopted these versions, so we maintain them al 10:08:00 ... Constraints on document complexity, so implementers can present the captions closely to how they're intended by the author 10:08:26 ... 
Has a relationship with performance, per previous discussion. Performacne related to document complexity 10:09:02 ... Refer to the HRM from the other specs, people with a requirmeent to use HRM can also require use of the HRM complexity levels 10:09:18 ... The HRM is a Candidate Rec, so we're looking for implementation evidence 10:09:59 Workflow: authoring tool creates a document, goes through presentation processor, then either succeeds or fails 10:10:18 -> https://www.w3.org/TR/imsc-hrm/ IMSC Hypothetical Render Model 10:10:18 s/Workflow/... Worlflow/ 10:10:31 ... It is hypothetical - it describes a hypothetical implementation pipeline 10:10:47 -> https://www.w3.org/TR/imsc-hrm/#fig-hypothetical-render-model Figure 2 Hypothetical Render Model 10:10:53 ... concepts like ability to decode images, cache things, buffer, switch to display them 10:11:30 ... As the presentation of captions changes, we give that a number, defines the entire presentation for a specific period of tie 10:11:33 s/tie/time/ 10:11:56 -> https://www.w3.org/TR/imsc-hrm/#fig-rendering-presentation-time Figure 3 rendering and presentation of Intermediate Synchronic Documents 10:12:11 ... The idea is we can know what the semantic presentation is described as at a time, then the presentation processor displays it, then at display time it needs to be composited in 10:12:45 ... If the document is so complex that the render model says it can't be rendered in the time available, it would fail the test 10:12:55 ... It applies to both image and text profiles 10:13:14 ... The spec is a CR, and essentially stable. There is an HRM test suite 10:13:37 ... [shows example test] 10:14:34 ... There's one open source implementation, in Python 10:14:47 ... We're looking for implementation evidence to meet our CR exit criteria 10:15:02 ... One way to do that is demonstrate that authoring tools generate valid documents 10:16:04 ... The CR wording matches the charter, after we had objections 10:16:15 ... 
Any questions?
10:16:17 -> https://www.w3.org/TR/imsc-hrm/#sotd Status of This Document
10:16:59 Francois: It's a hypothetical model and a test suite. How can you be sure implementations don't deviate from the hypothetical model, so that the tests are still valid for them?
10:17:23 Nigel: It's a model that applies to document instances; it's not specifying processor behaviour
10:17:36 present+ Xiaohan_Wang
10:17:42 rrsagent, draft minutes
10:17:43 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
10:18:01 ... However, it would be possible to take a real presentation processor and ask if it presents documents that pass the HRM tests. I don't think it's a substitute for real world tests
10:18:16 Francois: Does it define multiple levels?
10:18:19 Nigel: Just one
10:18:51 Francois: Could it be extended to analyse the complexity of an entire TV app?
10:19:13 Nigel: It could... Even a warning system to show it's close to the limit would be good
10:19:39 Francois: It can show through devtools too. Web Apps WG is making tools to make that possible
10:19:42 q?
10:20:13 subtopic: DAPT
10:20:30 Nigel: The DAPT spec is a profile of TTML2
10:20:40 ... It affects localisation and accessibility
10:20:55 ... Editors are from BBC and Netflix
10:21:10 ... There's a lot of diversity in the approach for creating dubbing scripts or audio description
10:21:23 ... What's missing is an open standard exchange format that supports the steps needed
10:21:53 ... When we analysed workflows we found we weren't looking at two separate specs, so we created a single standard
10:22:02 -> https://www.w3.org/TR/dapt/ Dubbing and Audio description Profiles of TTML2
10:22:28 ... We're seeking feedback. We define things like dubbing scripts and transcripts
10:22:35 ... We have example documents in the spec
10:22:50 ... It allows you to block out times when you might put scripting information
10:23:01 ... Metadata to say it's an AD document
10:23:46 ...
and languages
10:24:02 -> https://www.w3.org/TR/dapt/#example-4 Example 4
10:24:29 ... We can include the recorded audio version. When mixing AD into program audio you can duck the program audio. That requires smooth animation of the audio gain
10:24:34 ... We use TTML2 syntax for that
10:24:59 ... You can embed the audio directly in the document, base64 encoded
10:25:52 ... It describes the workflow steps, source and target language
10:26:20 ... Then there are detailed adaptations, specific timings, then create the dubbed version, and create an as-recorded script
10:26:22 Wolfgang has joined #me
10:26:28 ... Those are the use cases we're covering
10:27:26 ... In terms of implementations, it can be used to implement an authoring tool, or in the AD world, distribute this to clients and map it to client side calls in Web Audio or Text Track Cues
10:27:31 ... or Web Speech API
10:27:53 ... Client side mixing using the text means you can use screen readers to read the text
10:28:03 ... Goal is to provide an overall accessible solution
10:28:10 -> https://www.w3.org/TR/dapt/#fig-class-diagram Figure 1 Class diagram showing main entities in the DAPT data model
10:28:13 ... We have lots of questions, as issues in the document
10:28:25 ... Current status is wide review of the working draft
10:28:35 ... Please have a look; your feedback is welcome
10:28:40 wilaw has joined #me
10:28:50 ... Also we're going through horizontal review
10:29:29 ... I have had feedback from people. Some was to say it can also be used in the production process for captions, whether translated or not
10:29:59 ... E.g., if the dubbed version doesn't match the translation subtitles
10:30:24 ... And people unexpectedly starting to implement
10:30:31 ... There's still work needed on the spec
10:30:43 ... Any thoughts or questions?
10:30:51 q?
10:31:35 ...
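The ducking behaviour mentioned above — program audio smoothly lowered while an audio description plays, then restored — can be sketched as a piecewise-linear gain envelope. This is purely illustrative: the duck level and ramp duration are hypothetical parameters, and DAPT itself expresses the gain animation using TTML2 syntax rather than code like this.

```python
# Sketch of the audio-ducking idea: while an audio description (AD)
# span plays, the program audio gain ramps smoothly down to a "duck"
# level and ramps back up afterwards. Parameters are illustrative;
# DAPT expresses the actual gain animation in TTML2 syntax.
def program_gain(t: float, ad_begin: float, ad_end: float,
                 duck: float = 0.3, ramp: float = 0.5) -> float:
    """Program-audio gain at time t (1.0 = full level)."""
    if t <= ad_begin - ramp or t >= ad_end + ramp:
        return 1.0                      # outside the AD span: full level
    if ad_begin <= t <= ad_end:
        return duck                     # during the AD span: ducked
    if t < ad_begin:                    # ramping down before the span
        frac = (ad_begin - t) / ramp
        return duck + (1.0 - duck) * frac
    frac = (t - ad_end) / ramp          # ramping back up after the span
    return duck + (1.0 - duck) * frac

# Full level before a 10s-14s AD span, ducked during it, full after.
levels = [program_gain(t, ad_begin=10.0, ad_end=14.0) for t in (5.0, 12.0, 20.0)]
```

A client-side mixer (e.g. one built on Web Audio, as suggested in the discussion) would evaluate an envelope like this to drive the program bus gain.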
Also, on application complexity on TV, asking the device to do real time document processing and mixing can be tough
10:32:25 Francois: Is there anything to explain the TTML features and how DAPT overlaps or differs?
10:33:21 Nigel: Not straight away. Other groups create TTML profiles too. What would you like to achieve?
10:34:11 Francois: It's about understanding all the specs and the HRM, and why we need the different profiles, why IMSC can't be used in a dubbing context. But I'm coming from an outside perspective
10:34:37 Nigel: They're targeted at different use cases. IMSC is for captions to be presented to the audience for hard of hearing requirements
10:35:21 ... It's designed to be a useful subset with the right features. For dubbing and AD we're thinking about production processes, not mainly presentation of text but creation of audio descriptions
10:35:48 ... Interestingly, if you look at the capabilities of each, there's more emphasis on metadata in DAPT and less on styling
10:36:02 ... For subtitles and captions I care more about styling
10:36:31 ... The common interchange is you can take an as-recorded dubbing script, then use it as a basis to add the styling to make an IMSC document, taking out the production metadata
10:36:42 ... Intent is that they work with each other and not conflict
10:37:21 subtopic: TextTrackCue
10:37:40 Eric: The TextTrack API only has support for WebVTT as a caption format
10:38:25 ... For various reasons some sites don't use WebVTT; they use another format, convert that format to DOM nodes, and use WebVTT as a means to know when to insert their cues into the DOM
10:38:47 ... So the browser is responsible for knowing when it's time to show and hide cues, and script is responsible for inserting nodes in the DOM to show the cues
10:39:18 ... That works, up to a point. Where it falls apart is in the US, where the FCC mandates that the user must be able to have their own preferences for how captions are styled
10:39:42 ...
Devices have system level preferences for that. But a browser can't expose those prefs to script, as it would be a fingerprinting surface
10:40:10 ... When a web page uses WebVTT, where all the responsibility for captions is given to the browser, the browser can apply the user preferences to the cues and honour the user's preferences
10:40:33 ... When a script makes the DOM nodes, the browser has no idea that what's inserted in the DOM is supposed to represent a caption
10:40:55 ... So there's no way to apply user preferences, so every site has to have their own version of the styling preferences, which doesn't work well
10:41:21 ... We have come up with a proposal. VTTCue inherits from TextTrackCue, but TextTrackCue doesn't have a constructor
10:41:53 ... We propose to give TextTrackCue a constructor that takes a DOM node, so script can do what it needs to do to create DOM nodes from the format they're using
10:42:07 ... then the browser is responsible for putting it into the shadow DOM and applying the user styles
10:42:25 ... We added attributes so you can tag the node representing the cue and the background
10:42:39 ... There are other minor things, moving things from VTTCue to TextTrackCue
10:43:01 ... But it's a simple proposal that can make it possible for sites that want to use non-WebVTT formats to let the browser apply the user styling
10:43:26 ... We'll present this tomorrow at 5pm in the TTWG meeting, and show a demo
10:44:06 Andreas: If we agree on this kind of requirement and approach, would the MEIG describe the requirement and feed it to WHATWG for HTML?
10:44:25 Eric: There'll be changes in the WebVTT and HTML specs.
10:44:48 ... The way cues are rendered is an implementation detail. Maybe changes are needed in CSS, not sure
10:45:20 i|The TextTrack API|-> TextTrack API from HTML Living Standard
10:45:20 ...
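The core of the styling argument above — once the browser knows a node represents a caption, it can overlay the user's system preferences on whatever the author specified — reduces to a simple precedence merge. This toy model is purely illustrative: the real proposal hands a DOM node to a TextTrackCue constructor and applies preferences inside the browser's shadow DOM, and the style property names here are hypothetical.

```python
# Toy model of why the browser must know a node is a caption: user
# caption preferences (e.g. from system settings, per FCC rules) must
# override author styling, with author styles filling in the rest.
# Property names are hypothetical; the real proposal operates on DOM
# nodes inside the browser's shadow DOM, not dicts.
def apply_caption_styles(author: dict, user_prefs: dict) -> dict:
    """Resolve caption styling: user preferences win over author styles."""
    return {**author, **user_prefs}

author = {"font-size": "80%", "color": "yellow", "background": "black"}
user_prefs = {"font-size": "200%", "color": "white"}

resolved = apply_caption_styles(author, user_prefs)
# User-set properties override the author's; untouched ones pass through.
```

When script renders captions as anonymous DOM nodes, the browser has no hook at which to perform this merge; giving TextTrackCue a node-accepting constructor is what restores that hook without exposing the preferences (a fingerprinting surface) to script.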
We'll send PRs to each spec that needs changing
10:45:27 rrsagent, draft minutes
10:45:29 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
10:46:18 Chris: So using TTWG as the place to get consensus on the approach
10:46:41 Nigel: I'm interested in the data model for captions, as there are different understandings. So having something that meets everyone's needs will be valuable
10:46:51 q?
10:47:24 ChrisLorenzo: Will there be some non-HTML format, as we're using Canvas and WebGL?
10:48:08 Nigel: When we've thought about this challenge before, we thought of using JSON. Accessibility will be an issue with WebGL, as you should be exposing the text to assistive technology
10:48:38 Francois: If you're rendering, you won't be applying user preferences
10:48:57 Eric: Unless there's an API to render a document fragment to a canvas, that's a whole other thing
10:48:59 q?
10:50:40 Topic: Breakouts
10:50:52 -> https://www.w3.org/2023/09/TPAC/breakouts.html#b-c6e71b80-0b1d-4a04-88e2-d3588bbc60d8 NHK's breakout: Facilitating media content distribution across industries
10:51:31 Oomata: We have a breakout on media metadata; it's important, so talk about use cases in the current media industry. What should we think about for common requirements?
10:51:58 ... If you have time, please come; it's from 12:15-13:15
10:53:20 -> https://www.w3.org/2023/09/TPAC/breakouts.html#b-009a5b81-0459-4ae4-9b33-f88dd9a9d89f Christopher's breakout: HDR on the web
10:53:36 ccameron: I also have a breakout on HDR, coming for images and video.
A discussion on how to integrate CSS colors, canvas and WebGPU, some discussion on standardising rendering of HLG and PQ on desktop and mobile, discussing ISO standards from TC42
10:53:54 17:15-18:15 at Nervion-Arenal II - Level -1
10:54:38 s/13:15/13:15 at Azalea - Low Level/
10:54:59 Topic: DVB Liaison statement
10:55:02 rrsagent, draft minutes
10:55:03 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html kaz
10:55:28 Andreas: It's an update from DVB-I, a spec that combines broadcast and broadband, and an update to TV Anytime on signalling of accessibility services
10:55:58 ... The work shared with this group tries to consolidate and extend the signalling of a11y services, matching preferences
10:56:23 ... Will be discussed tomorrow in TTWG at Tech room, Low Level. The group would be happy to receive comments
10:57:10 Nigel: This topic about marrying user a11y preferences and matching to what media is available seems to be an active discussion among groups
10:57:34 ... People are keen to coordinate
10:58:07 ... But on the web we're sensitive to privacy issues, which we need to handle carefully
10:58:47 Chris: Happy to help in MEIG on the coordination if we need to
10:59:27 Andreas: We organised an EBU meeting with different SDOs; each presented its own approach, and each wanted more bilateral coordination
11:00:37 Topic: Media WG Update
11:00:42 [Slide 17]
11:01:07 i/Slide/scribenick: tidoust/
11:01:50 [Slide 18]
11:02:02 cpn: Lots of discussions on Managed Media Source in MSE. Are there other related priorities that this community would like to see addressed?
11:02:08 [Slide 19]
11:02:13 cpn: Same question for EME
11:02:20 ...
We should have a FPWD ready soon
11:02:28 [Slide 20]
11:02:33 cpn: Also check Media Capabilities
11:02:41 i|18|-> https://github.com/w3c/media-source/milestone/8 Issues in scope for v2|
11:02:41 Topic: Priorities for 2023-2024
11:02:43 [Slide 21]
11:02:58 i|18|Managed Media Source - implemented in Safari. Discussions|
11:03:07 cpn: I very much welcome your input on priorities that the Media & Entertainment IG should have for next year.
11:03:11 [meeting adjourned]
11:03:15 RRSAgent, draft minutes
11:03:16 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html tidoust
11:03:50 rrsagent, draft minutes
11:03:51 I have made the request to generate https://www.w3.org/2023/09/11-me-minutes.html cpn
11:03:56 rrsagent, make log public