W3C

– DRAFT –
Media and Entertainment IG - TPAC 2023

11 September 2023

Attendees

Present
andreas_tai, Bernd_Czelhan, Chris_Lorenzo, Chris_Needham, Christopher_Cameron, Eric_Carlson, Evan_Liu, Francois_Daoust, Hisayuki_Oomata, Igarashi, JohnRiv, Kaz_Ashimura, lilin_, Louay_Bassbouss, Nigel_Megitt, Patrick_Griffis, Ryo_Yasuoka, Shinya_Takami, Song_Xu, Tomoaki_Mizushima, Wilaw, Will_Law, Wolfgang_Schildbach, Xiaohan_Wang, Youenn_Fablet
Regrets
-
Chair
Chris_Lorenzo, Chris_Needham, Tatsuya_Igarashi
Scribe
cpn, nigel, tidoust

Meeting minutes

cpn: [reminds people of code of conduct and health policy]
… [then goes through the agenda]

Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0007/Media___Entertainment_IG_Meeting_11_Sep_2023.pdf

[Slide 6]

M&E IG introduction

[Slide 7]

[Slide 8]

cpn: Forum for media-related topics as they relate to the Web, looking into new use cases and requirements. We identify new features that would help build media-related applications.
… Across the entire space, from production to final consumption.

[Slide 9]

cpn: Historically, group started with the addition of video to HTML. Adaptive streaming through MSE and EME.
… We've got a new generation of technologies that are coming through, WebCodecs, WebTransport, WebRTC-based technologies, etc.
… Creating a more powerful platform for media applications.
… We don't produce specifications, we look at use cases and requirements.

cpn: We liaise with other industry related groups.

[Slide 10]

cpn: We can propose the creation of a new Community Group, go back to Working Groups with requirements, give input to ongoing developments, etc.

[Slide 11]

CTA WAVE news

Slideset: https://www.w3.org/2011/webtv/wiki/images/4/4d/WAVE_TPAC_2023_-_Survey_Results_%26_WMAS2023.pdf

[Slide 1]

JohnRiv: I shared a survey we did a few months ago in CTA WAVE.

[Slide 2]

JohnRiv: The survey intended to assess the need for a performant common platform for TV.
… We reached out to a few individuals and organizations. We received 9 responses. The comments we received are pretty consistent.

[Slide 3]

JohnRiv: The problem statement at the beginning of the survey is on this slide.
… Raises performance issues, proprietary native application development.
… Ways to address concerns include canvas-based approach, the creation of a sort of mini app equivalent for TV, etc.

[Slide 4]

JohnRiv: Everyone would be willing to participate in a collaborative effort to address these issues.
… We're certainly looking into this in WAVE but want to encourage collaboration.

[Slide 5]

JohnRiv: Majority of developers report performance problems. Majority of CE manufacturers report that streaming apps create issues.
… Majority agree that a common platform would be beneficial. Pretty clear that a solution needs to be web-based.
… There must be interest from streaming vendors for this to succeed.

[Slide 6]

JohnRiv: Going into more details here. Majority of CE device manufacturers feel that HTML5 can be implemented with acceptable performance. The opposite for web developers.

[Slide 7]

JohnRiv: Not going into details of individual comments.
… Highlighting the distinction between supported vs. not supported well enough.
… [going into response details]
… Additional comments around additional APIs that developers would like to see to create more native experiences.

[Slide 8]

JohnRiv: Next question specifically targeted at developers.
… Is the runtime performance of web applications on TVs and other consumer devices a problem, in your opinion?

JohnRiv: Most answer that it is.
… In general simple applications run fine, but once you add a bunch of graphics, issues start to show.

[Slide 9]

JohnRiv: Similar question to CE device manufacturers.
… Which report issues with supporting streaming apps.

[Slide 10]

JohnRiv: Third question on common web platform for CE device manufacturers.
… 3 yes, 1 no, 1 other.
… Highlighted some features that are costly to support on some embedded devices.

[Slide 11]

JohnRiv: Same question for developers.
… 4 yes, 1 no.
… Some suggest a simpler set of functionalities for TV.
… Coupled with TV specific features.
… Needs to be web-based otherwise we'd be shooting ourselves in the foot.

[Slide 12]

JohnRiv: Last question on opinion on possible solutions (MiniApps, WebAssembly, WebGL, etc.) for those who have experience in them.
… Thumbs up for WebAssembly and WebGL.
… Wondering about feedback from the room. Anything surprising?

ccameron: Good to see confirmation about WebAssembly and WebGL. Wondering about interest for WebGPU?

ChrisLorenzo: Yes, although not supported yet.
… I do TV app development for my job. 20 years of dev experience on the Web. When we first started many years ago, you would start developing for IE and fix bugs in other browsers.
… Then mobile web development. iOS and Android. Native apps. But Web was too slow. Fast forward to today, web on phones is extremely fast.
… That relates to TV development, lots of platforms: Samsung, Tizen, LG. 5 or 6, and different operating systems. Different browsers, different browser versions, some of them 5 years old.
… Also browser code ported to the different devices.
… Building web apps for TV is hard in practice, but needed otherwise you end up developing multiple versions of native apps for the different platforms.
… The CPU on TV devices is much slower than the ones you find in mobile phones.

Wilaw: Been discussing this for many years. The problem is that both sides have incentive to run proprietary solutions. How to get out of that?
… Same thing for TV manufacturers, who would have an advantage with a more powerful platform of their own. How do you break that logic?

JohnRiv: I think that the answer is that there is a cost to that. Lot of work on both sides. As an app developer, if you're building custom applications, that does not transfer well in your skills.
… If we all collaborate together, we can perhaps create a platform where we can all benefit and innovate on a common platform.

ChrisLorenzo: The apps with the most viewership get the most support from device manufacturers. But they need to spend time each time to test new devices. It takes a few months to roll out a new version of Netflix, for instance.
… With the Web, things could be way easier.
… Netflix may have dozens of developers, as we do, to maintain each variant of the application. That does not scale well.
… Also, one person developing a Web application could more easily compete with other apps.
… Also, native apps take up space on TVs, which have limited storage. My TV keeps prompting me about apps to uninstall in order to install a new one. With Web technologies, you don't need to install anything.
… We want to come to a common ground.

cpn: My organization builds TV apps. We very much target the HTML environment for our applications. On certain devices, we need to install an app, but it's only there to bootstrap the web runtime.
… What we do is vary the level of animations. We put devices into buckets (low-end, etc.).
… I was talking recently with one of the other UK broadcasters that recently launched their Web TV app. They described the same pain points that we've been through.
… They started by building a React app, and found the performance to be terrible on TV because of the additional burden on CPUs.
… The approach that we take is to keep the document structure as simple as possible, and do progressive enhancement, essentially based on fingerprinting of the device to identify the device's model.
… We're also thinking of new applications. We're looking at more personalized experiences, with more composition done on the client side.
… We'd like apps to be more responsive based on user needs. We're very much interested in a world where we can leverage WebGL/WebGPU as additional capabilities on the Web on TV devices.
… We still have the legacy, so having to do things differently would be expensive.
… The approach that we would like to take is incremental, opt-in to use additional capabilities when they are available.
… Some of my colleagues have done some tests on WebAssembly. What we're observing is that, because browsers are evolving, we're starting to see good support across TV devices.
… Not a requirement for us yet, though, just opt-in.
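The progressive-enhancement approach described above (bucketing devices by fingerprinted model, then opting in to extra capabilities) might be sketched as follows. The model names, bucket assignments, and feature flags are invented purely for illustration; they are not from the discussion.

```python
# Hypothetical sketch of device bucketing for progressive enhancement.
# Device models (obtained via fingerprinting) map to a performance
# bucket; each bucket opts in to a set of capabilities.
DEVICE_BUCKETS = {
    "example-tv-2018": "low",   # invented model names
    "example-tv-2021": "mid",
    "example-tv-2023": "high",
}

BUCKET_FEATURES = {
    "low":  {"animations": False, "webgl": False, "wasm": False},
    "mid":  {"animations": True,  "webgl": False, "wasm": True},
    "high": {"animations": True,  "webgl": True,  "wasm": True},
}

def features_for(model: str) -> dict:
    """Return the feature set to enable, defaulting to the safest bucket
    when the device model is unknown."""
    bucket = DEVICE_BUCKETS.get(model, "low")
    return BUCKET_FEATURES[bucket]
```

Unknown devices fall back to the "low" bucket, matching the opt-in, keep-the-baseline-simple strategy described in the discussion.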

nigel: Adding to that. We've had a TV app for a number of years. At that time, the APIs available were very limited. Since that time, increasing awareness among TV manufacturers towards accessibility features.
… That's one of the drivers that allows us to justify spending the money to essentially re-write these apps.
… Will makes a really good point describing the situation.
… The cost of changes is very high because of the number of devices that you need to support.
… Challenge from newer companies is very different from old players.
… Certification regime and testing. We have a bunch of tests that devices need to pass in order for us to be confident that our app will run on their platform.

cpn: That's quite an expensive thing for us to develop and maintain. So doing it collaboratively would be a plus.

<Zakim> nigel, you wanted to mention accessibility as a driver, also cost of change

cpn: Different sorts of communities and content organizations. We're broadcasters, collaborating through groups like DVB or HbbTV. A common HbbTV position would represent a concerted broadcaster view.
… Was there anything coming from HbbTV?

JohnRiv: When I gave a presentation to that group, it was interesting because they were indeed more focused on the broadcasting side of things. Nothing has come out of that yet.

atai: Should W3C work on this common platform? If so, what would be the next steps? If there is interest from this group, would it be a good question to ask these organizations for a position as an SDO to see whether it's worthwhile pursuing this?

JohnRiv: Good point. If we feel that there's a need to change standards, W3C would be a good place to do so.

ChrisLorenzo: I think it's not so much about introducing new APIs than on making sure that existing APIs are correctly supported across the board.
… Also how to launch web applications? There is no way to type in a URL and go to a page.
… You may have to package your application in some zip format.
… Different solutions, not a single one.
… We should have some sort of performance test suite "This TV is certified to run apps at x frames per second using WebGL".
… Rendering HTML with CSS can be very slow on some of these devices, which is why we moved to canvas-based rendering to gain more control.
… Also making sure that SharedArrayBuffer is available across TV devices, etc.

<Zakim> nigel, you wanted to ask if the real requirement here is to be able to specify performance levels in a meaningful way

nigel: I was coming to that: a way to express device performance so that we can tailor a solution to the device.

ChrisLorenzo: One problem is that performance is really dependent on the context of what you're trying to achieve.
… Video playback is actually somewhat easy to measure. When it comes to the UI, there are tons of things you can do, and a zillion tests you could imagine on transitions that you may want to achieve.

ccameron: Video playback vs. web applications, I assume that video playback is done separately, and not tied to HTML rendering.

ChrisLorenzo: It really depends on devices. Some support the video tag, others feed in a native video player. But the device manufacturers have spent so much money on video playback that it has stable and predictable performance.

ccameron: Is it the case that the UI is running at a lower resolution?

ChrisLorenzo: Yes, video playback is often 4K, but the CPU is unable to render such a resolution for the UI, and GPU almost not there.

ccameron: Separate device for display? E.g. not touching shaders for video players? GPU might not even have access to the display. I was curious if there was a desire to stay within 1080p and leave video playback on the side.

ChrisLorenzo: Yes, the UI is 1080p, sometimes 720p, at max.
… Way more expensive to reach 4K, although it would be cool to get that.
… The video layer is often underneath the web app, so you have to "dig a hole" in the app to view the video.
… A video tag for which you can change the z-index is really difficult to achieve on TV devices.

cpn: And some of the use cases that we're trying to achieve with composition on the client rely on these types of capabilities. Compositing to a video overlay, for instance.
… At worst, we need to deliver a video-only stream. Doing the composition upstream is more expensive though.

ccameron: [giving an example]. Not new APIs, moving everything to WebGL/WebGPU would give really good UI. If we are to integrate 4K video in the mix though, that makes things very complicated
… What about security with older TVs?

cpn: That's often a problem.
… The issue there is that there's not always commercial incentives to upgrade software. Also new features may require more performance and thus introduce additional performance issues on TV devices.
… We'll come back to it as part of the next topic on testing.

ccameron: I note that these discussions have impact on mechanisms to introduce HDR for instance.

Patrick_Griffis: Thanks for mentioning HDR on top of spatial resolution. That's a key topic indeed.

kaz: In Japan, IPTV Forum was also working on this sort of platform. Consolidating that into a common problem statement and platform would be good. We could publish an official group note about that.
… Main target is TV, right?

ChrisLorenzo: Yes. CE devices may include e.g., car devices, but TV is definitely the main target.

kaz: Also involved in WoT, where CE devices have a broader meaning.

nigel: One thing that we haven't talked about here is non-functional requirements.
… There are lots of things that may be needed in parallel: doing video playback, CSS animations, caption display, audio rendering, network communications. Having a way to describe this sort of app usage at the worst moment would be great.

ChrisLorenzo: Yes, that's a good point.
… Also remote control, infrared or bluetooth, input events can be really different.

nigel: Also voice processing.

<nigel> Having a way to express those common non-functional requirements could be a common resource.

cpn: I'd like to move on to our next point on testing.

WAVE DPCTF Testing and Web Media API Test suite

Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0006/2023-09-11-CTA-WAVE-Streaming-Media-Test-Suite-Louay-Bassbouss.pdf

[Slide 1]

Louay: Update on what is happening in CTA WAVE on the media test suite.

[Slide 2]

Louay: This is an overview of the different components of the test suite.
… I won't go into details, 4 main components.
… In the diagram, you can see content annotation for mezzanine content creation.

[Slide 3]

Louay: You can see the test running. QR code provides information about the test being run, the frame and so on. If you record the test on a TV, you can measure how the test runs, measure performance, etc.
… All components are available on GitHub. You may create new content from your original content for instance.

[Slide 4]

Louay: Next stage is creation of the test content.

[Slide 5]

Louay: Test runner comes next. It runs tests on TVs.
… I think you're all aware of the Web platform Tests, more built for desktops where you can run tests in multiple windows for instance.
… We updated the WPT test runner to make tests available on TV sets as well.
… You can configure your test run.
… Most contributions we made within the CTA WAVE project have been merged back into WPT, so only one test runner is maintained in a central place.

[Slide 6]

[Slide 7]

Louay: You can export test results as JSON (same as WPT)
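Since the results come out as WPT-style JSON, tallying pass/fail counts is straightforward. A minimal sketch, assuming a simplified shape of the WPT report (a "results" list of tests, each with "subtests" carrying a "status"); the real wptreport format has more fields than shown here.

```python
import json

def summarise(report_json: str) -> dict:
    """Count subtest statuses in a WPT-style JSON report.
    The structure assumed here is a simplification of the real format."""
    report = json.loads(report_json)
    counts: dict = {}
    for test in report.get("results", []):
        for sub in test.get("subtests", []):
            status = sub.get("status", "UNKNOWN")
            counts[status] = counts.get(status, 0) + 1
    return counts

# Illustrative sample, not actual test output
sample = json.dumps({
    "results": [
        {"test": "/media/a.html", "status": "OK",
         "subtests": [{"name": "t1", "status": "PASS"},
                      {"name": "t2", "status": "FAIL"}]},
    ]
})
```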

[Slide 8]

Louay: The observation framework is used to record test runs.
… This will help report results directly to the test results repository.

[Slide 9]

Louay: We ran one week of testing within our facilities in Berlin.

[Slide 10]

Louay: Many participants.

[Slide 11]

Louay: We organize tests by giving access to TV devices during different slots.

[Slide 12]

Louay: This diagram shows the test setting.
… We used a smartphone with high-frame-rate recording for the observation framework.

[Slide 13]

Louay: Some demo.

[Slide 14]

Louay: This gives you an overview of how many tests were run.
… [detailing a test example that failed on TV devices]

[Slide 15]

Louay: A screenshot of the test results
… You can see what people see.
… You can monitor the test execution.

[Slide 16]

Louay: Then you can see test results.
… After observation.
… You can run this out of the recorded video with the observation framework.
… That's it, you can download everything on GitHub.

[Resuming after break]

cpn: One of the things that we hear is that the level of implementation of different features diverges across TV devices. I wonder whether that shows up in tests.

Louay: I tried to focus the slides on Device Playback Capabilities because that was the original focus.
… But we can also test other web platform tests. We can compare desktop support for features with support in TV browsers.
… WMAS is the name of the dedicated test suite.
… You can run the tests yourselves. The challenge was to run these kinds of tests on TV devices.
… This is why we extended Web Platform Tests.
… We wanted to use existing tests, and not redo tests from scratch. This works well, but manual tests are not well supported, because e.g., clicking on a button is not as easy.
… Another important aspect is that we already integrated ECMAScript and WebGL tests. Thousands of them which you can run together with HTML, CSS, WebSockets, etc. APIs.

cpn: So you're covering the whole platform.

Louay: Yes, and every change we need to make, we contribute back to Web Platform Tests

<Zakim> nigel, you wanted to ask about subtitle testing

nigel: On subtitles, it mentions "we don't have subtitles tests" yet. I'm wondering what's missing there to include subtitles tests in the framework.

Louay: What we currently have is a first iteration.
… In the next release, we're also looking into subtitles.
… The main problem is: how to implement observation?
… We are working on this. Any volunteer and support is welcome!

nigel: Are requirements settled?

Louay: You can participate as CTA WAVE project member if you want to change requirements.

<nigel> tidoust: We talked earlier about support for features. Is it enough to test for feature support only?

<nigel> .. From a pure feature scoping perspective, do the tests cover everything you need?

<nigel> .. Or are other changes needed in WPT?

Louay: We are relying on WPT tests. What is done there, we are using.
… Challenge of adding manual tests e.g. including user interaction.
… There are only a few tests implemented on top.
… We developed the test cases for media device capabilities in CTA WAVE.
… Checking how the feature is implemented and how performant.
… For example MSE.
… You can check if the TV skips or overplays - about quality of playback.
… Also startup times for video, if the duration is reported correctly etc.
… This is the main focus on device capability in the test suite.
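Since each rendered frame carries a QR code identifying it (per the earlier slides), checks like "skips or overplays" can be derived from the sequence of frame numbers decoded from the recording. A toy sketch of that idea; the logic is invented for illustration and is not the actual CTA WAVE observation algorithm.

```python
def classify_playback(observed_frames: list) -> dict:
    """Given frame numbers decoded from per-frame QR codes in a recording,
    count skipped frames (gaps in the sequence) and overplayed frames
    (consecutive repeats). Illustrative only."""
    skips = repeats = 0
    for prev, cur in zip(observed_frames, observed_frames[1:]):
        if cur == prev:
            repeats += 1            # same frame shown twice
        elif cur > prev + 1:
            skips += cur - prev - 1  # frames missing in between
    return {"skipped": skips, "overplayed": repeats}
```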

atai: Back to Nigel's question about how W3C could contribute with future tests
… I also had the same question about where requirements are listed.
… I think W3C could create some baseline set of requirements.

TV application development

Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Sep/att-0007/Media___Entertainment_IG_Meeting_11_Sep_2023.pdf

[Slide 15]

ChrisLorenzo: Problem space is launching a web application on TV sets. We'd like to create a common mechanism. One of the solutions could be to use DIAL that Netflix developed.
… I'd like to gather support for such a common mechanism.
… It's really challenging to test applications on TV sets which means that you tend to avoid it as much as you can.

cpn: Really interesting because Louay's presentation shows the need to use a DVB modulator in order to launch the tests on a TV.

<kaz> Presentation API

<kaz> Remote Playback API

ericc: Have you looked at the Presentation API? Two specs: the Presentation API, about discovery of devices, authentication, and establishing a channel of communication to the devices so that you can send them commands. Then there's the Second Screen API that gives you much more control.

ChrisLorenzo: I will look into it.

Louay: We use a DVB modulator as cpn mentioned. But we also look at using DIAL on HbbTV devices because HbbTV 2.0 requires it.
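For context on the DIAL mechanism mentioned here: DIAL clients discover devices over SSDP (the UPnP discovery protocol) using a DIAL-specific search target, then launch apps via HTTP. A minimal sketch of building the SSDP M-SEARCH request; a real client would multicast it over UDP to 239.255.255.250:1900 and parse the LOCATION header from responses, which this sketch does not do.

```python
# DIAL discovery search target, as defined by the DIAL protocol
DIAL_SEARCH_TARGET = "urn:dial-multiscreen-org:service:dial:1"

def build_msearch(mx: int = 2) -> bytes:
    """Build the SSDP M-SEARCH datagram a DIAL client multicasts
    to discover DIAL-capable devices on the local network."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",                    # max wait (seconds) for responses
        f"ST: {DIAL_SEARCH_TARGET}",
        "", "",                          # request ends with a blank line
    ]
    return "\r\n".join(lines).encode("ascii")
```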

cpn: Is there a distinction between development phase and deployment?

ChrisLorenzo: Yes. One flow is developer mode. I want to simplify that flow. And then another flow is deployment to TV. If you look at mobile phones and PWA in general, it's the ability to tell some application store about your application at a certain location.
… That would be ideal from a development process. It's just a matter of agreeing that it's the right way to go. Microsoft is creating such an application store for instance.

Wolfgang: Depending on how you launch an application, functionality can be different.

ChrisLorenzo: Yes, permissions can be different. That's another part of it. The application store approach may come with that set of permissions.

cpn: I'd like to conclude on where to go next.
… That's something that we talked about before.
… It's not entirely clear to me what new activity we should start.

Wilaw: I have a couple of suggestions to move forward. We need to start somewhere.
… We have a test that CTA WAVE is developing that also tests performance.
… I believe we should start with it.
… And then find a content brand that is well-known enough to say "to show my content, you need to pass these tests".
… Netflix would likely be the last company to join such an initiative because they have their own way to do that. That's fine.

cpn: From a Media WG perspective, we want our work to be reviewed as early as possible by this community to orient our design.

Wilaw: I don't think W3C is the right group to try to coerce industry to use technologies. I think that we should go back to WAVE and convince them to look into creating threshold levels for performance.
… Then, in parallel, work with W3C to develop necessary features.

nigel: Just a reminder that not all web tests are in WPT.
… For instance, IMSC tests and TTML tests are in different repositories.

Joint Meeting with Timed Text Working Group and Media Working Group

Nigel: We have 3 topics: IMSC HRM, DAPT, TextTrackCue
… TTWG deals with formats for timed text: TTML and WebVTT
… TTML is profiled, IMSC is a subset of TTML2. It has 3 active versions, feature enhancements in each version
… Different people have adopted these versions, so we maintain them all
… Constraints on document complexity, so implementers can present the captions closely to how they're intended by the author
… Has a relationship with performance, per previous discussion. Performance is related to document complexity
… Other specs refer to the HRM; people with a requirement to use the HRM can also require use of the HRM complexity levels
… The HRM is a Candidate Rec, so we're looking for implementation evidence
… Workflow: an authoring tool creates a document, it goes through a presentation processor, then either succeeds or fails

<kaz> IMSC Hypothetical Render Model

Nigel: It is hypothetical - it describes a hypothetical implementation pipeline

<kaz> Figure 2 Hypothetical Render Model

Nigel: concepts like ability to decode images, cache things, buffer, switch to display them
… As the presentation of captions changes, we give that a number, defines the entire presentation for a specific period of time

<kaz> Figure 3 rendering and presentation of Intermediate Synchronic Documents

Nigel: The idea is we can know what the semantic presentation is described as at a time, then the presentation processor displays it, then at display time it needs to be composited in
… If the document is so complex that the render model says it can't be rendered in the time available, it would fail the test
… It applies to both image and text profiles
… The spec is a CR, and essentially stable. There is an HRM test suite
… [shows example test]
… There's one open source implementation, in Python
… We're looking for implementation evidence to meet our CR exit criteria
… One way to do that is demonstrate that authoring tools generate valid documents
… The CR wording matches the charter, after we had objections
… Any questions?

<kaz> Status of This Document

Francois: It's a hypothetical model and a test suite. How can you be sure implementations don't deviate from the hypothetical model, so the tests are still valid for them?

Nigel: it's a model that applies to document instances, it's not specifying processor behaviour
… However, it would be possible to take a real presentation processor and ask if it presents documents that pass the HRM tests. I don't think it's a substitute for real world tests

Francois: Does it define multiple levels?

Nigel: Just one

Francois: Could be extended to analyse the complexity of an entire TV app?

Nigel: It could... Even a warning system to show it's close to the limit would be good

Francois: It can show through devtools too. Web Apps WG is making tools to make that possible

DAPT

Nigel: The DAPT spec is a profile of TTML2
… It affects localisation and accessibility
… Editors are from BBC and Netflix
… There's a lot of diversity in the approach for creating dubbing scripts or audio description
… What's missing is an open standard exchange format that supports the steps needed
… When we analysed workflows we found we weren't looking at two separate specs, so we created a single standard

<kaz> Dubbing and Audio description Profiles of TTML2

Nigel: We're seeking feedback. We define things like dubbing scripts and transcripts
… We have example documents in the spec
… It allows you to block out times when you might put scripting information
… Metadata to say it's an AD document
… and languages

<kaz> Example 4

Nigel: We can include the recorded audio version. When mixing AD into program audio you can duck the program audio. That requires smooth animation of the audio gain
… We use TTML2 syntax for that
… You can embed the audio directly in the document, base64 encoded
… It describes the workflow steps, source and target language
… Then there are detailed adaptations, specific timings, then create the dubbed version, and create an as-recorded script
… Those are the use cases we're covering
… In terms of implementations, it can be used to implement an authoring tool, or in the AD world, distribute this to clients, map it to client side calls in Web Audio or Text Track Cues
… or Web Speech API
… Client side mixing using the text means you can use screen readers to read the text
… Goal is to provide an overall accessible solution

<kaz> Figure 1 Class diagram showing main entities in the DAPT data model

Nigel: We have lots of questions, as issues in the document
… Current status is wide review of the working draft
… Please have a look and your feedback is welcome
… Also we're going through horizontal review
… I have had feedback from people. Some said it can also be used in the production process for captions, whether translated or not
… E.g., if the dubbed version doesn't match the translation subtitles
… And people unexpectedly starting to implement
… There's still work needed on the spec
… Any thoughts or questions?
… Also, on application complexity on TV, asking the device to do real time document processing and mixing, can be tough

Francois: Is there anything to explain the TTML features and how DAPT overlaps or differs?

Nigel: Not straight away. Other groups create TTML profiles too. What would you like to achieve?

Francois: It's about understanding all the specs and the HRM, and why we need the different profiles, why IMSC can't be used in a dubbing context. But I'm coming from an outside perspective

Nigel: They're targeted at different use cases. IMSC is for captions to be presented to the audience for hard of hearing requirements
… It's designed to be a useful subset with the right features. For dubbing and AD we're thinking about production processes but not mainly for presentation of text but creation of audio descriptions
… Interesting is if you look at the capabilities for each, there's more emphasis on metadata in DAPT and less on styling
… For subtitles and captions I care more about styling
… The common interchange is you can take an as-recorded dubbing script then use it as a basis to add the styling to make an IMSC document, take out the production metadata
… Intent is that they work with each other and not conflict

TextTrackCue

<kaz> -> TextTrack API from HTML Living Standard

Eric: The TextTrack API only has support for WebVTT as a caption format
… For various reasons some sites don't use WebVTT and use another format and convert that format to DOM nodes and use the WebVTT as a means to know when to insert their cues into the DOM
… So the browser is responsible for knowing when it's time to show and hide cues, and script is responsible for inserting nodes in the DOM to show the cues
… That works, up to a point. Where it falls apart is in the US, the FCC mandates that the user must be able to have their own preferences for how captions are styled
… Devices have system level preferences for that. But a browser can't expose those prefs to script, as it would be fingerprinting surface
… When a web page used WebVTT where all the responsibility for captions is given to the browser, the browser can apply the user preferences to the cues and honour the user's preferences
… When a script makes the DOM nodes, the browser has no idea that what's inserted in the DOM is supposed to represent a caption
… So there's no way to apply user preferences, so every site has to have their own version of the styling preferences, which doesn't work well
… We have come up with a proposal. VTTCue inherits from TextTrackCue, but TextTrackCue doesn't have a constructor
… We propose to give TextTrackCue a constructor that takes a DOM node, so script can do what it needs to do to create DOM nodes from the format they're using
… then the browser is responsible for putting it into the shadow DOM and apply the user styles
… We added attributes so you can tag the node representing the cue and the background
… There are other minor things, moving things from VTTCue to TextTrackCue
… But it's a simple proposal that can make it possible for sites that want to use non-WebVTT formats let the browser apply the user styling
… We'll present this tomorrow at 5pm in the TTWG meeting, and show a demo

Andreas: If we agree on this kind of requirement and approach, would the MEIG describe the requirement and feed it to WHATWG for HTML?

Eric: There'll be changes in the WebVTT and HTML specs.
… The way cues are rendered is an implementation detail. Maybe changes are needed in CSS, not sure
… We'll send PRs to each spec that needs changing

Chris: So using TTWG as the place to get consensus on the approach

Nigel: I'm interested in the data model for captions, as there are different understandings. So having something that meets everyone's needs will be valuable

ChrisLorenzo: Will there be some non-HTML format, as we're using Canvas and WebGL?

Nigel: When we've thought about this challenge before, we thought of using JSON. Accessibility will be an issue with WebGL, as you should be exposing the text to assistive technology

Francois: If you're rendering, you won't be applying user preferences

Eric: Unless there's an API to render a document fragment to a canvas, that's a whole other thing

Breakouts

<kaz> NHK's breakout: Facilitating media content distribution across industries

Oomata: We have a breakout on media metadata; it's an important topic, so we'll talk about use cases in the current media industry. What should we think about for common requirements?
… If you have time, please come; it's from 12:15-13:15 at Azalea - Low Level

<kaz> Christopher's breakout: HDR on the web

ccameron: I also have a breakout on HDR, which is coming for images and video: a discussion on how to integrate CSS colors, canvas and WebGPU, some discussion on standardising the rendering of HLG and PQ on desktop and mobile, and discussion of ISO standards from TC42

<kaz> 17:15-18:15 at Nervion-Arenal II - Level -1

DVB Liaison statement

Andreas: It's an update on DVB-I, a spec that combines broadcast and broadband, and an update to TV Anytime on signalling of accessibility services
… The work shared with this group tries to consolidate and extend the signalling of a11y services, matching preferences
… Will be discussed tomorrow in TTWG. The group would be happy to receive comments

<kaz> TTWG meeting is at Tech room, Low Level

Nigel: This topic about marrying user a11y preferences and matching to what media is available seems to be an active discussion among groups
… People are keen to coordinate
… But on the web we're sensitive to privacy issues, which need to be handled carefully

Chris: Happy to help in MEIG on the coordination if we need to

Andreas: We organised an EBU meeting with different SDOs and each presented its own approach, and each wanted more bilateral coordination

Media WG Update

[Slide 17]

<kaz> Issues in scope for v2

<kaz> Managed Media Source - implemented in Safari. Discussions

[Slide 18]

cpn: Lots of discussion on Managed Media Source in MSE. Are there other related priorities that this community would like to see addressed?

[Slide 19]

cpn: Same question for EME
… We should have an FPWD ready soon

[Slide 20]

cpn: Also check Media Capabilities

Priorities for 2023-2024

[Slide 21]

cpn: I very much welcome your input on priorities that the Media & Entertainment IG should have for next year.

[meeting adjourned]

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

Maybe present: Andreas, atai, ccameron, Chris, ChrisLorenzo, cpn, Eric, ericc, Francois, kaz, Louay, nigel, Oomata, Wolfgang

All speakers: Andreas, atai, ccameron, Chris, ChrisLorenzo, cpn, Eric, ericc, Francois, JohnRiv, kaz, Louay, nigel, Oomata, Patrick_Griffis, Wilaw, Wolfgang

Active on IRC: atai, cpn, igarashi, irc, JohnRiv, kaz, lilin_, Louay, Mizushima, nigel, tidoust, Wilaw, young