04:54:42 RRSAgent has joined #me
04:54:42 logging to https://www.w3.org/2021/10/27-me-irc
04:54:48 Zakim has joined #me
04:58:12 takio has joined #me
04:58:34 kaz has joined #me
04:59:03 kaz has joined #me
05:00:02 tidoust has joined #me
05:00:45 eehakkin has joined #me
05:00:57 Tomoaki_Mizuhima has joined #me
05:01:04 RobSmith has joined #me
05:02:15 meeting: Media and Entertainment IG vF2F meeting - Day 2
05:02:30 igarashi has joined #me
05:03:52 Topic: Introduction
05:04:09 zacharycava has joined #me
05:04:26 https://docs.google.com/presentation/d/15-QdWc87IiUhlPOwER7Gxde1a2FvG5h7OYMva0eGc_s/edit <-- Chris's slides
05:04:35 h_endo has joined #me
05:04:43 kinjim has joined #me
05:04:46 cpn: quick intro
05:04:59 ... 2nd meeting of MEIG during TPAC
05:05:12 ... co-Chairs here
05:05:18 ... feel free to contact us
05:05:31 ... our mission, from the Charter
05:05:38 ... resources here
05:05:49 ... home page, charter, GitHub, etc.
05:05:58 tohru_takiguchi has joined #me
05:06:13 ... minutes will be published publicly later
05:06:23 ... notes taken on IRC
05:07:04 ... be aware of the Code of Conduct and Patent Policy
05:07:15 kaz: queue management by Zoom's raise hand
05:07:27 ... I'll add them to the speaker queue on the IRC side
05:07:49 i/quick/scribenick: kaz/
05:07:58 cpn: Mr. Sato's presentation first
05:08:08 Youngsun_Ryu has joined #me
05:08:16 i/Sato/topic: NHK's update on Hybridcast
05:08:20 @@@slides tbd
05:08:24 scribe+ cpn
05:08:47 Sato: I'm from NHK, I'll present an update and issues from Hybridcast
05:09:15 ... First, I'd like to explain our future vision of a web-based broadcast platform
05:09:48 ... We aim to make it possible to use any viewing environment and provide the same UX regardless of device and transmission path
05:10:02 ... Our goal is to make the UX of broadcast and internet streaming seamless
05:10:26 ... The same quality of service and viewing experience for broadcasting and internet
05:10:46 ... Two requirements. A TV and smartphone can be connected for remote control
05:10:59 present+ Kaz_Ashimura__W3C, Tatsuya_Sato__NHK, Chris_Needham__BBC, Youngsun_Ryu__Samsung, Eero_Hakkinen, Francois_Daoust__W3C, Frode_Hrnes, Hiroki_Endo__NHK, Tatsuya_Igarashi__Sony, Kinij_Matsumura__NHK, Rob_Smith, Takio_Yamaoka__Yahoo!_Japan, Tohru_Takiguchi__NHK, Tomoaki_Mizushima__IRI, Zachary_Cava
05:11:02 ... We're considering providing content to devices using W3C WoT technology
05:11:18 Subtopic: Seamless switching between broadcast and internet streaming
05:11:21 present+ Jake_Holland__Akamai
05:11:35 Sato: We're using an OS- and platform-independent HTML app
05:11:55 ... A broadcast-oriented managed application is used for presenting broadcast programmes and the content selection UI
05:12:10 ... A broadcast-independent application runs independently of broadcast services
05:12:17 ... It's used for content selection and internet streaming
05:12:26 agenda: https://github.com/w3c/media-and-entertainment/issues/71
05:12:36 ... It's possible to switch seamlessly between broadcast and internet streaming
05:12:46 ... [Application demo]
05:13:36 ... The initial screen shows live broadcast programs, along with on-demand programs
05:13:57 jake_ has joined #me
05:14:09 ... You can go to live and on-demand programs or transition directly from internet to broadcast without returning to the home screen, and vice versa
05:14:26 q?
05:15:02 ... This application has a remaining issue: the transition between broadcast and internet streaming isn't as fast as switching between broadcast channels
05:15:29 ... Requirements: Low delay for video playback, to allow users to view video at the same time
05:15:50 ... Firing events with precise timing accuracy, for dynamic ad insertion and programme-linked UI
05:16:08 ... Reducing latency in online delivery. Would CMAF with WebTransport be a solution?
05:16:40 ... Event firing in MSE playback is also an issue. Accuracy of event handling in JavaScript is affected by other processing on the device
05:17:07 ... Hybridcast Connect allows you to connect your TV to your smartphone. It provides device discovery and a command interface
05:17:35 ... The protocol uses open standards to connect the devices. In our current spec version, it's built on open standards such as DIAL
05:17:50 ... and two-way communication using WebSocket
05:18:16 ... Open and secure standards are desired for communication, e.g. HTTPS in local networks
05:18:28 rrsagent, make log public
05:18:28 ... We're looking at new ways to present content using IoT devices
05:18:36 rrsagent, draft minutes
05:18:36 I have made the request to generate https://www.w3.org/2021/10/27-me-minutes.html kaz
05:19:10 ... Examples include presenting audio on smart speakers, or news on a smart mirror. You could change the color of your room lights, linked to the content
05:19:13 chair: ChrisN, Igarashi
05:19:14 rrsagent, draft minutes
05:19:14 I have made the request to generate https://www.w3.org/2021/10/27-me-minutes.html kaz
05:19:35 ... The issue is that there's no established method to present content based on device characteristics. WoT is promising for this issue
05:19:46 ... Our vision is an IoT-based media framework
05:20:06 ... It delivers broadcaster content to devices without a broadcast tuner, with integrated broadcast and broadband services
05:20:23 ... It connects with various IoT devices and internet services
05:20:35 ... [Demo]
05:21:10 present+ Kazuhiro_Hoya__JBA
05:22:26 ... Devices in this video are operated using Hybridcast Connect and Web of Things
05:22:29 ... Thank you
05:22:34 q?
05:22:49 cpn: thank you for your presentation!
05:22:53 q+
05:23:26 Kaz: I work on both the MEIG and the WoT WG
05:23:43 ... You mentioned several issues around TV performance and timing mechanisms
05:24:04 ... There was discussion on performance at the last MEIG meeting. Are you interested in joining that discussion?
05:25:03 Sato: Yes, I am
05:25:32 Kaz: What kind of WoT Thing Description was used? Maybe you could provide input on the WoT side as well
05:26:04 ... You could work with Endo-san for that purpose
05:26:23 present+ Hiroshi_Fujisawa__NHK
05:26:45 q+ Rob
05:26:51 ack k
05:27:17 Sato: I'd like to continue work on WoT, yes
05:27:28 ack rob
05:27:34 q+ cpn
05:28:01 Rob: Regarding IoT integration. Are you looking for a synchronisation mechanism for timed events?
05:28:34 ... For example, synchronising room light changes, should that be synchronised to the media?
05:30:01 Sato: Currently we're using broadcast content timecode for synchronisation, but want to use the MTE mechanism
05:30:30 kazho has joined #me
05:30:39 ack cpn
05:30:41 Rob: I wonder if there's interest in using DataCue, which we'll discuss next
05:30:49 cpn: I wanted to ask about the secure connection
05:31:04 ... there was a group named HTTPS local CG
05:31:13 ... I'm wondering about its activity
05:31:18 ... any updates?
05:31:35 ... one of the groups I'm involved in, Second Screen, is also working on discovery and connection
05:31:52 ... so I'm wondering if it could be a possible solution
05:32:00 iga: I'm co-Chair of the CG
05:32:17 ... the current status is not very active
05:32:21 ... due to several issues
05:32:26 ... including the COVID situation
05:32:41 ... we've been working on the issue for almost 4-5 years
05:32:49 ... we've discussed several possible solutions
05:32:58 ... but have not received feedback
05:33:09 ... if you have any feedback, it would be welcome
05:33:11 q+
05:33:24 i/wanted to/scribenick: kaz/
05:33:39 q+
05:34:14 ack k
05:34:20 Kaz: There's also interest in WoT discovery capability. Also decentralized identity. We need to continue the discussion
05:34:24 ack j
05:34:42 Concrete proposal: https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
05:34:48 Jake: Plex has a solution that they have published
05:34:57 i/There/scribenick: cpn/
05:35:15 ... Not sure how to submit a concrete proposal, but it's the solution I think of when this issue comes up
05:35:50 ... I wonder if it's helpful. Plex is a local media server. They changed their servers to use HTTPS to continue interoperating with browsers
05:36:12 ... The article describes how it works. Not sure how applicable it is to your use case
05:36:34 cpn: the other point you mentioned is seamless switching
05:36:43 ... you have a low-latency protocol for that purpose
05:36:50 igarashi_ has joined #me
05:36:52 ... what would cause the delay?
05:37:07 ... what about content buffering, etc.?
05:37:17 ... could you maybe describe the mechanism a bit?
05:38:00 sato: the issue is that broadcast content and internet content are different pages of web content
05:38:09 here is the HTTPS local network CG GitHub: https://github.com/httpslocal
05:38:11 ... and that causes the delay problem
05:38:13 Sato: The broadcast and streaming content are separate documents in the web app
05:38:50 cpn: HbbTV is moving to adapt
05:39:10 ... that may be the best place for further discussion
05:39:26 sato: thanks
05:39:32 The CG has studied the Plex solution. One of the issues is that it requires TLS server certificates for a bunch of IoT devices.
05:40:18 cpn: if you'd like future meetings to focus on these topics, we're happy to organize them
05:40:27 present+ Yajun_Chen
05:40:28 The other issue is that it does not support ad-hoc discovery of devices on the local network.
05:40:34 cpn: any other questions?
05:40:36 (none)
05:40:45 topic: MTE updates
05:41:11 i/MTE/cpn: thank you very much for presenting!/
05:41:23 cpn: an update on the Media Timed Events project
05:41:30 ... it's been running for a while in the MEIG
05:41:37 ... some brief background
05:41:48 ... the HTML5 spec included a DataCue API
05:41:56 ... but it was removed in WHATWG HTML
05:42:14 ... WebKit is the only mainstream browser to implement it so far
05:42:23 ... HbbTV uses HTML DataCue
05:42:40 ... MSE issue #189: add support for media-encoded events
05:43:06 i/background/... [History and background]/
05:43:38 ... MEIG MTE TF since 2018, following input from ATSC and DASH-IF
05:43:51 ... CTA WAVE proposed a CMAF MSE Byte Stream Format spec
05:44:07 ... WICG has had a DataCue repo since 2018
05:44:18 present+ Calvaris
05:44:25 ... [Use Cases]
05:44:39 ... I won't dive into the details of each use case, but...
05:44:47 yajun_Chen has joined #me
05:44:55 ... lecture recording with slideshow
05:45:02 atai has joined #me
05:45:07 ... video with synchronized map display
05:45:15 ... client-side dynamic content insertion
05:45:52 ... - question about how much demand there is for this
05:46:14 yajun has joined #me
05:46:28 ... etc.
05:46:36 ... [Developer benefits]
05:46:56 ... apps must currently either use VTTCue or custom app code
05:47:42 ... for custom JS code: HTMLMediaElement timeupdate events are too coarse (250 ms)
05:47:51 ... for accurate synchronization
05:47:59 ... also polling is expensive
05:48:14 ... for VTTCue: can't store data objects directly
05:48:29 ... it's really intended for cue rendering
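As an illustration of the two current workarounds described above (not shown at the meeting itself), a minimal JavaScript sketch: polling HTMLMediaElement timeupdate, which only fires roughly every 250 ms, versus a hidden "metadata" text track whose VTTCue text carries application-defined JSON. The video element lookup, the JSON payload and the showMapOverlay callback are hypothetical.

// Sketch only. Workaround 1: react to timeupdate, which fires about every
// 250 ms, so the handler can run well after the intended media time.
const video = document.querySelector('video');    // hypothetical page element

function showMapOverlay(data) {                    // hypothetical app callback
  console.log('show map overlay', data);
}

video.addEventListener('timeupdate', () => {
  if (video.currentTime >= 12.0 && video.currentTime < 15.0) {
    showMapOverlay({ source: 'polling' });
  }
});

// Workaround 2: a hidden "metadata" text track; VTTCue only holds text,
// so structured data has to be serialized to JSON by the application.
const track = video.addTextTrack('metadata', 'app events');
track.mode = 'hidden';                             // hidden tracks still fire cue events

const cue = new VTTCue(12.0, 15.0, JSON.stringify({ action: 'showMap' }));  // hypothetical payload
cue.addEventListener('enter', () => {
  showMapOverlay(JSON.parse(cue.text));            // app parses its own JSON back out
});
track.addCue(cue);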
05:48:34 ... [API proposal]
05:48:44 ... the proposal consists of 3 parts
05:48:59 ... 1. DataCue API based on the existing WebKit implementation
05:49:17 ... with two attributes, value and type
05:49:30 ... the data itself can be any structure
05:49:50 ... 2. mappings for browser-generated timed metadata events
05:49:58 ... currently in manifests
05:50:06 s/currently/carried/
05:50:36 ... 3. extending the TextTrackCue endTime to support +Infinity
05:50:56 ... for cues where we don't know the end time yet, which can be updated later
05:51:06 ... [In-band emsg event handling]
05:51:13 ... collaboration with the DASH-IF Events TF
05:51:34 ... defining interoperability guidance for DASH events
05:51:39 ... a lot of open questions
05:51:45 ... how events are dispatched
05:51:51 ... MSE-based playback
05:51:55 ... etc.
05:52:18 ... [In-band emsg event subscription API
05:52:21 s/API/API]/
05:52:32 ... requirements:
05:52:51 ... allow the web app to set the dispatch mode, on-receive or on-start
05:53:09 ... allow the web app to tell the browser which events to surface to the app
05:53:29 ... some feedback from Safari/WebKit
05:53:36 ... the only implementation so far
05:53:45 ... there's a limitation in the media playback engine
05:54:03 ... it only supports the on-dispatch mode
05:54:14 ... [In-band emsg event handling]
05:54:17 ... open questions:
05:54:26 ... is there still interest in this feature?
05:54:35 ... should we leave in-band event parsing to JS?
05:55:33 ... also, editorial help is wanted to develop the explainer and the spec draft
05:55:48 ... if you're interested, please let me know
05:56:01 ... help is appreciated
05:56:10 ... [TextTrackCue unbounded end time]
05:56:18 ... part of the proposal I mentioned
05:56:22 ... April 2021
05:56:33 ... thanks to help from Rob
05:56:44 ... HTML spec change (#5953) accepted
05:56:56 ... WebVTT spec change (#493) accepted
05:57:09 ... and Web Platform Tests contribution (#28394) accepted
05:57:26 ... implementation bugs are filed
05:57:36 ... code contributions are neeed
05:57:44 s/neeed/needed for browsers/
05:57:54 ... [Unbounded cues in WebVTT]
05:58:00 ... do we need syntax to handle this?
05:58:09 ... #496
05:58:13 ... two main use cases
05:58:25 ... timed metadata in live streams (chapters, etc.)
05:58:28 ... live captioning
05:58:49 ... requirements
05:59:01 ... allow a cue to have an unbounded end time
05:59:11 ... etc.
05:59:19 ... [Unbounded cues in WebVTT: current status]
05:59:30 ... WebVTT issue #496 discusses syntax options
05:59:42 ... for timed metadata, we concluded...
06:00:12 ... there are still open questions
06:00:23 ... there may still be a need
06:00:34 ... to establish a requirement in that area
06:00:46 ... [Unbounded cues in WebVTT: current status]
06:01:09 ... (diagram with Chapter 1 and Chapter 2 at the top)
06:01:37 ... (and segment 1, segment 2 and segment 3 corresponding to those chapters)
06:02:09 ... in segment 3, chapter 2 continues and the endTime is extended
06:02:28 ... a media packaging mechanism to deliver the video stream
06:02:43 ... it allows the player to know about the chapters
06:02:47 ... [Documents]
06:02:52 ... that's about where we are
06:03:07 ... requirements for media timed events
06:03:16 ... DataCue API requirements
06:03:21 ... DataCue API
06:03:25 ... etc.
06:03:29 ... [Next meeting]
06:03:35 ... Monday, 15 Nov. 2021
06:03:43 ... your participation is welcome
06:03:58 q?
06:04:07 ... any questions?
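To make the three-part proposal above concrete, a rough JavaScript sketch following the WICG DataCue explainer and the WebKit behaviour described in the presentation; none of this is broadly implemented yet, and the payload, type string and startAdBreak callback below are illustrative assumptions rather than anything shown at the meeting.

// Sketch only; DataCue and unbounded end times are proposals, not shipped
// behaviour in most browsers.
const video = document.querySelector('video');     // hypothetical page element
const track = video.addTextTrack('metadata', 'timed events');
track.mode = 'hidden';

function startAdBreak(value) {                      // hypothetical app callback
  console.log('ad break', value);
}

// Part 1: a DataCue carrying an arbitrary structured value plus a type string
// (constructor shape per the WICG proposal / WebKit extension).
const adCue = new DataCue(30.0, 45.0,
  { adBreakId: 'break-1', duration: 15 },           // illustrative payload
  'com.example.ad-insertion');                      // illustrative type identifier
adCue.addEventListener('enter', () => startAdBreak(adCue.value));
track.addCue(adCue);

// Part 2 (browser-generated cues for in-band emsg / manifest events) has no
// app-facing code to show: the UA itself would add such cues to a track.

// Part 3: an unbounded cue for a live chapter whose end isn't known yet
// (per the accepted HTML change #5953); the app extends endTime later.
const chapter = new VTTCue(120.0, Infinity, JSON.stringify({ chapter: 2 }));
track.addCue(chapter);
// ... later, when the next segment reveals the chapter boundary:
chapter.endTime = 300.0;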
06:04:10 q+ rob
06:04:20 rs: great summary
06:04:24 ... a couple of points to add
06:04:32 ... you asked about the purpose of the API
06:04:41 ... is this still required?
06:04:57 ... are there common interfaces?
06:05:09 cpn: it depends on which part you mean
06:05:29 ... it's possible to implement media parsing
06:05:45 ... the other part is actually for your own data
06:05:51 ... that the application creates
06:05:58 ... where you're not extracting the event
06:06:15 ... you create it yourself
06:06:21 ... you can use VTTCue
06:06:28 ... to handle those events
06:06:43 ... but it's a bit inconvenient
06:07:00 ... to some extent, it can be done today using existing JS
06:07:30 ... maybe there's interest in optimization
06:07:47 ... there's not strong interest from browser vendors so far
06:08:10 ... the other issue with VTTCue is timing accuracy for events
06:08:20 ... there's a spec change for HTML to handle timing accuracy
06:08:34 ... it could be up to 250 ms so far
06:08:48 ... that would give a much more accurate mechanism
06:08:59 ... not sure what the best number would be, though
06:09:18 ... there's an initial implementation for Chrome, but I'm not really sure
06:09:26 ... we should follow it up
06:09:40 rs: about pushing DataCue data into JSON
06:10:06 ... you can have a JSON object with a particular identity type
06:10:20 cpn: that is a key benefit of DataCue
06:10:33 ... it's the way Safari handles it
06:10:40 ... the type field is important
06:10:57 rs: the other thing is
06:11:08 ... you mentioned the DataCue API
06:11:36 cpn: didn't show the id
06:11:45 ... it would be inherited by DataCue
06:11:54 s/didn't/sorry I didn't/
06:12:10 ... what's the mapping between the ID and the event
06:12:18 ... that needs to be defined
06:12:22 rs: thanks
06:12:24 q?
06:12:29 q+
06:12:32 ack r
06:13:19 kaz: quick comment
06:13:22 Kaz: For automated live captioning in Zoom, what mechanism are they using, and is it related to your proposed use case?
06:13:30 s/kaz: quick comment//
06:14:03 cpn: yeah, I don't what they're using but related to my use case on synchronization for live captioning
06:14:33 s/what/know what/
06:14:50 ... it's an issue for the Timed Text WG as well
06:15:01 ... we have a good relationship with them
06:15:07 ... it's the same underlying mechanism
06:15:27 ... the MEIG work on timed events is related
06:15:42 ... collaboration on common technical issues around unbounded cues
06:15:48 q?
06:15:51 q-
06:15:54 The Zoom live captions are visibly correcting themselves in real time, e.g. happy -> happening
06:16:32 cpn: any other comments?
06:16:35 (none)
06:16:42 cpn: thank you for the discussion
06:16:49 ... feel free to contact us
06:17:09 ... let's talk about how to continue the discussion on today's topics
06:17:27 ... apologies that we went over time
06:17:44 topic: Next meetings
06:18:00 cpn: Nov. 2, 1am UTC
06:18:06 ... MiniApps joint meeting
06:18:17 ... a follow-up to the Monday meeting
06:18:38 ... a different approach for Web technologies to handle media
06:18:49 ... initially an exploratory discussion
06:19:38 ... also we'll need to start a new activity on performance, based on the Monday discussion
06:19:44 ... participation in that is also welcome
06:19:53 ... we'll make an announcement on the mailing list
06:20:08 ... thank you, all!
06:20:16 ... look forward to seeing you soon
06:20:20 [adjourned]
06:20:21 thanks chris
06:20:32 rrsagent, make log public
06:20:36 rrsagent, draft minutes
06:20:36 I have made the request to generate https://www.w3.org/2021/10/27-me-minutes.html kaz
06:21:35 present +
08:24:48 atai has joined #me
08:31:47 atai1 has joined #me
10:07:44 atai has joined #me
10:09:41 atai has left #me
10:46:47 Zakim has left #me
12:06:09 Karen has joined #ME
13:00:37 Karen has joined #ME