13:57:02 RRSAgent has joined #me
13:57:06 logging to https://www.w3.org/2025/07/01-me-irc
13:57:06 Zakim has joined #me
13:58:44 endo has joined #me
14:00:24 ohmata has joined #me
14:00:30 agenda: https://lists.w3.org/Archives/Public/public-web-and-tv/2025Jun/0001.html
14:00:37 meeting: Media and Entertainment IG
14:01:12 present+ Kaz_Ashimura, Hiroki_Endo, Rob_Smith, Hisayuki_Ohmata, Kensaku_Komatsu, Ryoya_Kawai, Chris_Needham
14:01:20 rrsagent, make log public
14:01:26 rrsagent, draft minutes
14:01:27 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:01:57 chair: ChrisN, Igarashi
14:02:03 present+ Tatsuya_Igarashi
14:02:07 rrsagent, draft minutes
14:02:08 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:02:28 Igarashi has joined #me
14:02:28 present+ Francois_Daoust
14:02:35 present+
14:03:04 present+ Wolfgang_Schildbach
14:03:28 RobSmith has joined #me
14:04:32 scribenick: kaz
14:04:44 topic: New participants
14:04:55 kaz: first time for Wolfgang?
14:05:05 ws: yes, this is my first time :)
14:05:09 Welcome!
14:05:15 topic: Agenda
14:05:28 cpn: TPAC planning
14:05:35 ... Liaison statement from MPEG
14:05:44 ... Sync on the Web
14:05:57 ... DataCue API updates
14:06:26 rrsagent, draft minutes
14:06:27 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:06:35 topic: TPAC 2025 Schedule
14:06:51 cpn: W3C staff preparing for the meeting
14:07:04 ... will be held in Kobe on 10-14 Nov
14:07:30 ... joint meetings with Timed Text and APA
14:07:43 ... may be some schedule conflicts to be resolved
14:08:16 ... potential change to the schedule
14:08:28 ... GitHub issue 110 there also
14:09:08 -> https://github.com/w3c/media-and-entertainment/issues/110 TPAC 2025 planning
14:09:28 topic: DataCue API
14:09:30 cpn: revisit later
14:09:36 topic: Liaison from MPEG
14:09:55 cpn: got a liaison statement from ISO/IEC JTC1/SC 29/WG 2, aka MPEG
14:10:10 ...
also got a reminder recently
14:10:44 ... questionnaire about market and practical considerations for a next generation video codec
14:10:57 ... MPEG is meeting right now in Korea
14:11:31 -> https://lists.w3.org/Archives/Member/member-web-and-tv/2025Jun/0000.html Kaz's message on the liaison statement (Member-only)
14:11:47 cpn: not sure if they need our official response as W3C
14:12:35 ... but does anybody have any additional context?
14:13:02 ... interested in the opinions of those participating in MPEG
14:13:10 ... any particular insights?
14:13:14 q+
14:13:33 rs: don't have any particular insight myself, but...
14:14:01 ... as you mentioned, W3C as a whole doesn't have any particular position about this
14:14:09 cpn: right
14:14:30 s/rs/ws/
14:14:43 rs: discussion within OGC
14:14:54 ... can paste information here on IRC
14:15:14 https://portal.ogc.org/files/108074#GIMI
14:15:28 ... discussion around next generation video codec still ongoing
14:15:32 cpn: interesting
14:16:01 rs: that's a CfP which is already past
14:16:28 ... encoding images at different levels of detail
14:16:38 ... e.g., an image of a map, and zooming in on the image
14:16:51 ... work is done to optimize cloud access
14:17:14 ... related to geospatial viewpoint
14:17:30 present+ Piers_O'Hanlon
14:17:35 cpn: ok
14:17:42 q+
14:17:46 ack r
14:18:07 ... the letter we received mentioned they would welcome input from individuals
14:18:56 Kaz: I contacted them to ask about their deadline, they meet this week in Korea. It might make more sense for us both to have a more collaborative discussion
14:19:07 ... Someone from Huawei would like to join this call to discuss more
14:19:21 i|contacted|scribenick: cpn|
14:19:31 Chris: I'd be happy to organise that.
yes
14:19:48 q-
14:20:08 action: kaz and chris to see how to continue this discussion about mpeg
14:20:17 topic: Sync on the Web
14:20:27 scribenick: cpn
14:20:41 s/mpeg/mpeg questionnaire/
14:20:50 rrsagent, draft minutes
14:20:51 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:20:57 Topic: Sync on the Web
14:21:25 Ken: I'll introduce the Sync on the Web CG, where I'm the chair
14:22:04 ... The CG is beginning, we are doing some work on immersive interactive live viewing
14:22:26 ... The CG is really related to the new media transport technology, MoQ
14:22:33 ... It's discussed at IETF
14:23:12 ... The MoQ Transport (MoQT) protocol includes many features, but I'll explain the parts related to our sync on the web activity
14:23:53 ... Media data and arbitrary data are handled using tracks. Each track is identified, and there's a publish-subscribe system
14:24:14 ... Object data is handled as well, e.g., lighting data such as DMX
14:24:53 ... Object means a short period of data in a track, which could be a video frame in a video track, and maybe a short period (20ms) of audio in an audio track
14:25:33 ... The idea of sync on the web: MoQT can contain metadata inside objects. It's similar to RTP headers, but any data can be contained in the object
14:26:00 ... This includes the capture timestamp as well. The subscriber (receiver) can use this to synchronise track data: video, audio, arbitrary data
14:26:05 ... Time alignment is done in the subscriber
14:26:14 ... Data tracks will be synchronised in the client software
14:27:07 ... Demo shows the transport of audio and video, and motion data analysed on the sender side. On the receiver side, it can render avatar data
14:27:23 ... We can also send MIDI data, and the receiver can use this to play sounds and effects
14:27:40 q+ to ask what kind of / how many streams of data is expected for the data track
14:27:42 ... Previously hard to realise, using WebRTC or HLS
14:28:24 ...
I established the Sync on the Web CG. Current work is use case study and gap analysis, and coordination with related groups such as MEIG
14:28:45 ... I'll talk about an exact use case that is our current activity, immersive interactive live viewing
14:29:34 ... Live viewing may be a Japanese term, not known so much in English?
14:29:44 ... ChatGPT suggested live screening for movie theatres
14:30:20 ... Uses satellite communications, real time transfer of audio and video to a movie theatre. People can see the live entertainment show from different movie theatres
14:31:01 ... Currently one-way communication, so people can watch a live show, but they can't communicate with the artist, e.g., to send a call and response to the main venue
14:31:34 ... It's a frustration for the satellite audience, so we want to introduce some interactive experience for the audience
14:31:54 ... I'm collaborating with Yamaha. They're developing interesting technology called GPAP - general purpose audio protocol
14:32:27 ... GPAP is a recording technology to record all live stage data: audio, video (sometimes), and lighting (DMX) into one WAV file
14:32:57 ... This is realised using Dante, which handles multi-channel data. Yamaha is developing audio formats including lighting data, so not only audio
14:33:30 ... GPAP is a recording technology, but I want to introduce live interactive features using GPAP, transporting real time GPAP data over MoQ
14:33:44 ... Latency is about 1.5 seconds
14:34:12 s/1.5/0.1/
14:34:42 ... Demo. Synchronised data is recorded using GPAP. When recording entertainment, people can enjoy not only the video screen but also the synchronised lighting
14:34:57 ... We now introduce real-time communication, interactive live viewing with GPAP
14:35:40 ... Diagram of GPAP over MoQ. Audio and DMX data is transported to the sender side. These are separated into audio and data tracks in MoQ
14:36:15 ...
Each object can include capture timestamp data so that, on the receiver side, we can interleave the data with time alignment
14:36:42 ... At the satellite side we can re-generate the GPAP data
14:37:35 ... Demo: call and response and audience reaction at the satellite venue is displayed at the back screen
14:38:14 ... Here's an article, link: xxx
14:38:49 ... Gaps for sync on the web and GPAP over MoQ. Currently this cannot work in browsers. The reason is that browsers cannot handle Dante
14:39:17 ... On Windows, Dante works with ASIO, not handled in browsers. So we can't realise these services in browsers
14:39:45 ... On Mac, because Dante is realised using CoreAudio, the browser can handle the data. But only two types of data in Mac browsers
14:40:06 ... To treat the DMX data we have to handle more than two channels so it's hard to realise in current browsers
14:40:53 ... I'm planning a local community meetup on 9 November, the day before TPAC. Details not decided, but I'll announce later, and I'd be happy if you could all join the meetup
14:40:54 ... Thank
14:41:03 q?
14:41:08 ack k
14:41:08 kaz, you wanted to ask what kind of / how many streams of data is expected for the data track
14:41:16 s/Thank/Thank you!/
14:41:42 Kaz: Thank you for presenting. I was wondering about the data track mechanism, what kind of data, and how many streams can be included?
14:41:53 wschildbach has joined #me
14:42:58 Ken: The reason the data track is included in MoQ is to treat time-related data with video or audio. That is similar to metadata but it can do more. I use DMX lighting data, it has to contain about 1Mbps of data, about the same as audio
14:43:15 Kaz: Also geospatial or position data of each audio object?
14:43:39 Ken: That's an interesting use case for the data track. Such position data would be important to make some kind of immersive audio
14:44:14 Wolfgang: Are you saying MoQ doesn't have a synchronisation mechanism of its own?
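Ken's mechanism as presented (each MoQ object carries a capture timestamp in its metadata, and the subscriber performs time alignment across tracks) can be sketched roughly as below. This is a minimal illustration only; `MoqObject` and `alignTracks` are hypothetical names, not part of any MoQ implementation or the MoQT specification.

```typescript
// Sketch of subscriber-side time alignment, assuming each received
// object carries its capture timestamp as metadata.
// MoqObject and alignTracks are hypothetical, illustrative names.

interface MoqObject {
  track: string;     // e.g. "audio", "video", or a "dmx" data track
  captureTs: number; // capture timestamp in ms, carried in object metadata
  payload: unknown;  // opaque media or data payload
}

// Group objects from all tracks into render sets whose capture
// timestamps fall within toleranceMs of the first object in the set.
function alignTracks(objects: MoqObject[], toleranceMs: number): MoqObject[][] {
  const sorted = [...objects].sort((a, b) => a.captureTs - b.captureTs);
  const sets: MoqObject[][] = [];
  for (const obj of sorted) {
    const current = sets[sets.length - 1];
    if (current && obj.captureTs - current[0].captureTs < toleranceMs) {
      current.push(obj); // close enough in time: render together
    } else {
      sets.push([obj]); // too far apart: start a new render set
    }
  }
  return sets;
}

// A video frame, an audio chunk, and a DMX packet captured within one
// 33 ms video-frame interval land in the same render set.
const sets = alignTracks(
  [
    { track: "video", captureTs: 1000, payload: null },
    { track: "audio", captureTs: 1008, payload: null },
    { track: "dmx", captureTs: 1015, payload: null },
    { track: "video", captureTs: 1040, payload: null }, // next frame
  ],
  33,
);
```

A real subscriber would buffer per track and align incrementally as objects arrive; this batch version only illustrates grouping by capture timestamp, matching the 33 ms video-frame accuracy mentioned later in the discussion.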
14:44:40 Ken: The MoQ spec doesn't have synchronisation, but each object can contain metadata including timestamps
14:44:43 q+
14:44:51 ... On top of MoQ we can develop a time alignment service
14:45:18 Wolfgang: So you propose a general purpose timestamping mechanism for any data over MoQ
14:45:31 nigel has joined #me
14:45:46 Ken: Yes, it has to be done using the timestamp. It's hard to do using WebRTC and HLS
14:45:53 q?
14:45:56 q+
14:46:30 Rob: Definite overlap with the work I'm doing, there's already an arbitrary data sync in WebVMT, which would accommodate this
14:46:54 ... What I don't know is if it's sufficiently accurate in timing, if the DMX data is similar size to the audio data
14:47:07 ... Would be interesting to experiment. Relates to DataCue
14:47:37 Ken: I think the data track can handle any arbitrary data, so VTT or DataCue. Other data formats could be included in MoQ
14:47:55 https://www.w3.org/TR/webvmt/#data-synchronization
14:48:45 cpn: are you using WebTransport for GPAP over MoQ?
14:48:56 ken: yes, that's true
14:49:20 cpn: prioritization mechanism for tracks to be resolved?
14:49:38 ken: now browsers have capability to handle tracks
14:50:05 ... what period of data accuracy is important for time alignment
14:50:07 Ken: Accuracy of the time alignment, for video, 33 ms for video frame accuracy
14:50:12 ... Browsers can handle this
14:50:37 cpn: for the Dante format...
14:50:47 ... is this a media format?
14:50:58 ken: yes, that's an audio format
14:51:26 cpn: wondering about the relationship with browser's channels
14:51:34 https://en.wikipedia.org/wiki/Dante_(networking)
14:51:40 ken: it's an audio operation system
14:52:22 Ken: Dante can handle 128 channels from one audio interface, but from browsers, in AudioWorklet I found that only 2 channels of data can be handled
14:52:49 ... If we use 4 or 8 channels of data in Dante we can't use those additional channels. That's the gap, I think
14:53:21 ... That's on the Mac.
On Windows it's ASIO, not native in Windows and the browser can't handle it
14:53:55 Chris: Could be a topic for the Audio WG or CG
14:54:29 i|it's an|[[ Dante is the product name for a combination of software, hardware, and network protocols that delivers uncompressed, multi-channel, low-latency digital audio over a standard Ethernet network using Layer 3 IP packets. ]]|
14:54:34 rrsagent, draft minutes
14:54:35 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:54:47 Ken: For big concert venues, the technical people have to manage many tracks of data. In live entertainment cases, TV, to work with Dante would be important
14:55:13 cpn: what's next?
14:55:22 ... discussion by the CG?
14:55:36 ken: maybe some kind of CG meeting
14:55:41 Ken: Need more use cases for sync with media
14:55:57 cpn: if you like, I'd be happy to follow up with you
14:56:06 ... for additional use cases
14:56:11 ken: thanks
14:56:26 cpn: to get attention from people for the use cases
14:56:42 ... this is a really interesting user experience
14:56:58 ... synchronize devices in people's rooms
14:57:08 topic: DataCue API
14:57:18 cpn: final topic for today
14:57:31 rs: would you like a summary?
14:57:35 cpn: yes :)
14:57:52 Rob: I raised a proposal to make a small change to the TextTrackCue constructor
14:58:21 i|are you using W|scribenick: kaz|
14:58:25 ... It has start time and end time, abstract payload, and extended to define the cue types. Relates to the previous presentation
14:58:30 i|Accuracy of the time|scribenick: cpn|
14:58:46 i|for the Dante format|scribenick: kaz|
14:59:00 i|Dante can handle 128|scribenick: cpn|
14:59:05 rrsagent, draft minutes
14:59:06 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
14:59:19 ... The change is to expose the inheritance; the naming is strange, TextTrack, but that affects HTMLMediaElement, so renaming that wouldn't bring benefit
14:59:19 present+ Nigel_Megitt
14:59:38 ...
Discussion on constructor access and instantiation
15:00:07 i|what's next|scribenick: kaz|
15:00:08 ... Benefits in terms of accessibility, TextTrackCue is supported in 95% of browsers, enable community development
15:00:28 i|Need more use cases|scribenick: cpn|
15:00:34 i|if you like|scribenick: kaz|
15:00:46 i|I raised a|scribenick: cpn|
15:00:46 ... Efficiency, better than VTTCue. There are web platform tests, with variable pass rate
15:00:50 rrsagent, draft minutes
15:00:52 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
15:00:59 ... Proposal to discuss in WICG on 15 June
15:01:05 s/June/July/
15:02:24 q+
15:03:50 q+
15:03:59 ack r
15:04:00 ack c
15:04:52 -> https://github.com/WICG/datacue/pull/37
15:04:53 Nigel: Eric made a point about compatibility with the Apple proposal, and the PR DataCue #37 was just merged. For me it wouldn't fly as it is now
15:06:18 ack n
15:06:30 Chris: I'll try to organise the discussion
15:07:57 q-
15:08:32 topic: Next call
15:08:36 cpn: August 5
15:08:43 ...
agenda proposals are welcome
15:09:00 [adjourned]
15:09:12 rrsagent, draft minutes
15:09:13 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html kaz
15:20:02 i/topic: Agenda/Slides: https://www.w3.org/2011/webtv/wiki/images/4/48/2025-07-01-MEIG.pdf/
15:21:20 s/xxx/https://www.tvbeurope.com/media-delivery/ntt-com-partners-with-yamaha-to-trial-interactive-live-viewing-technology
15:22:14 cpn4 has joined #me
15:22:31 s/xxx/https://www.tvbeurope.com/media-delivery/ntt-com-partners-with-yamaha-to-trial-interactive-live-viewing-technology
15:22:37 rrsagent, draft minutes
15:22:38 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html cpn4
15:42:38 s/topic: Sync on the Web//
15:42:40 rrsagent, draft minutes
15:42:41 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html cpn4
15:47:12 scribeoptions: -noEmbedDiagnostics
15:47:14 rrsagent, draft minutes
15:47:15 I have made the request to generate https://www.w3.org/2025/07/01-me-minutes.html cpn4
18:29:35 Zakim has left #me
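Rob's TextTrackCue proposal discussed above describes a cue with a start time, an end time, and an abstract payload. A minimal sketch of that cue model follows; `Cue` and `activeCues` are illustrative names for this sketch, not the proposed browser API.

```typescript
// Minimal sketch of the cue model behind the TextTrackCue proposal:
// a cue is a start time, an end time, and an abstract payload.
// Cue and activeCues are illustrative names, not the proposed API.

interface Cue {
  startTime: number; // seconds on the media timeline
  endTime: number;   // seconds on the media timeline
  payload: unknown;  // abstract payload; a derived cue type would refine this
}

// Return the cues active at media time t, i.e. the set a cue-change
// handler would see while the media plays through t.
function activeCues(cues: Cue[], t: number): Cue[] {
  return cues.filter((c) => c.startTime <= t && t < c.endTime);
}

const cues: Cue[] = [
  { startTime: 0, endTime: 5, payload: "intro" },
  { startTime: 4, endTime: 10, payload: "overlap" },
  { startTime: 12, endTime: 15, payload: "outro" },
];

const atMidpoint = activeCues(cues, 4.5); // the two overlapping cues
const inGap = activeCues(cues, 11);       // no cue active
```

The overlap case is why cues carry an interval rather than a single timestamp: multiple cues can be active at once, which is what would let arbitrary timed data (as in the Sync on the Web discussion) ride alongside captions.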