14:58:14 RRSAgent has joined #me
14:58:14 logging to https://www.w3.org/2022/12/06-me-irc
14:58:19 Zakim has joined #me
14:58:26 Meeting: MEIG monthly meeting
14:59:48 present+ Chris_Needham, Hisayuki_Ohmata, Kinji_Matsumara, Ryo_Yasuoka
15:00:41 Agenda: https://www.w3.org/events/meetings/ad14eb8f-4411-497a-accd-3d73fb310da8
15:00:54 scribe+ cpn
15:01:43 Present+ Tyler_Horiuchi, Kaz_Ashimura
15:02:02 Present+ Tatsuya_Igarashi
15:02:13 igarashi has joined #me
15:02:28 nigel has joined #me
15:02:47 Hisayuki_Ohmata has joined #me
15:03:38 Present+ Rob_Smith, Piers_O'Hanlon
15:04:03 Present+ Nigel_Megitt
15:04:45 scribe: nigel
15:04:54 present+ Igarashi
15:05:03 rrsagent, pointer
15:05:03 See https://www.w3.org/2022/12/06-me-irc#T15-05-03
15:05:16 rrsagent, make logs public
15:05:39 Topic: Agenda
15:05:53 Chris: Thank you for joining this MEIG monthly meeting
15:06:04 RobSmith has joined #me
15:06:22 .. Two main topics.
15:06:42 .. Media Timed Events: Most of the meeting - would like a sense of what to do next.
15:06:51 kinjim has joined #me
15:06:59 .. Rob will give an update on the related work he's been doing on WebVMT.
15:07:08 rrsagent, make log public
15:07:12 rrsagent, draft minutes
15:07:12 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html kaz
15:07:24 .. Application Development, what we do next there.
15:07:28 Topic: Media Timed Events
15:07:33 Chris: Sub-topics:
15:07:37 .. DataCue API proposal
15:07:48 .. Browser API for surfacing DASH emsg events via DataCue
15:07:56 .. Support for H.264 / H.265 SEI events
15:07:59 .. WebVMT
15:09:34 Subtopic: DASH emsg events
15:09:45 Chris: Discussed at TPAC in September this year
15:10:14 present- Igarashi
15:10:17 Karen has joined #ME
15:10:37 piers has joined #me
15:10:37 Slides -> https://docs.google.com/presentation/d/1OFQIPg7V8BCAe9BIpKINd9PWAkO9Ip1vp-Zq2G7eM50/edit
15:11:06 -> TPAC discussion: https://www.w3.org/2022/09/16-mediawg-minutes.html#t09
15:11:24 Chris: We discussed allowing DataCue to surface some in-band events to the browser.
15:11:35 present+ Tom
15:11:39 .. Matt Wolenetz who works on Chrome fed back that there is some potential interest in the use cases
15:11:50 .. but some technical details that needed to be figured out.
15:11:59 .. He was looking for more specific proposals.
15:12:17 .. Detail in MSE#189 that captures his particular questions.
15:12:32 -> MSE #189 https://github.com/w3c/media-source/issues/189
15:12:42 rrsagent, draft minutes
15:12:42 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html kaz
15:12:43 .. Implementing this is non-trivial at this stage from their perspective.
15:12:51 .. At the same time, we've been collaborating with the DASH-IF
15:12:59 chair: Chris_Needham, Tatsuya_Igarashi
15:13:03 .. who are developing interop guidelines for DASH generally, including event handling.
15:13:21 .. It seems that where they've got to is figuring out how interop should work with EMSG events.
15:13:41 .. For them, it feels too early to propose a browser implementation given practical details
15:13:52 .. of interop for how events are delivered in the media and acted on in the player.
15:14:04 .. They suggested for now, leave this to player libraries rather than baking into the browser.
15:14:21 .. Establish support first, e.g. in dash.js, then potentially in the future review browser integration.
15:14:45 .. Representatives from the CTA WAVE project said they are in support of something,
15:14:59 .. there's a question about whether they are still interested in supporting and pursuing that,
15:15:13 .. because it needs somebody to actively contribute to move that forward.
15:15:26 .. The proposal I'm sharing with you is: let's stop activity on this particular aspect
15:15:39 .. unless we get a strong indication from members of our group, or WAVE, who we can liaise with,
15:15:51 .. to indicate that this is important enough to invest more time in, to develop proposals.
15:15:59 q?
15:16:00 .. Any thoughts or suggestions?
15:16:21 ..
15:16:32 rrsagent, draft minutes
15:16:32 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html kaz
15:16:34 .. I will contact my contacts in the WAVE project and explain that's where we've got to.
15:16:41 .. That's the next step for us on this.
15:16:49 Subtopic: DataCue API
15:17:07 Chris: This is valuable even without EMSG support because it supports a wider set of use cases.
15:17:23 .. If we move the parsing of the messages from the media into JavaScript then there still needs
15:17:32 .. to be an API for scheduling and triggering events at the right time.
15:17:43 .. The current API for that is VTTCue which is more oriented towards caption rendering.
15:17:57 .. Having something more targeted towards timed metadata has value still.
15:18:16 .. Since the VTTCue API exists and can be used it may not be such a compelling reason
15:18:20 .. for introducing a new API.
15:18:41 .. I'm proposing here that we reframe the DataCue proposal independent of any specific kind of
15:18:46 .. timed metadata.
15:18:55 .. This requires some work to update the explainer, look at the draft spec.
15:19:08 .. I'm asking for indications that this is a useful thing to do, and is worth spending time on.
15:19:24 .. It has been moving fairly slowly and may or may not be worth spending time,
15:19:29 .. depending on the interest in using it.
15:19:41 .. If you're interested, please let me know and get in touch because that helps make the case
15:19:45 .. for doing more work.
15:19:52 .. If you'd like to contribute towards it then even better.
15:20:15 .. [wonders if colleagues from ByteDance are on the call]
15:20:31 .. Their use case is surfacing H.265/H.264 SEI events as timed metadata because
15:20:43 .. they have a production pipeline that encodes their timed metadata in this way.
15:21:01 .. They want accurate rendering so using video callback is useful.
15:21:26 .. The lack of SEI support on iPhone devices, with browser native HLS playback, is an issue for them.
15:21:42 .. I'd like to ask them: if we move towards MSE implementation on those devices does that change
15:22:00 .. their need for API support for events, e.g. if we can move the application handling into JavaScript.
15:22:06 q?
15:22:22 Subtopic: WebVMT
15:22:43 RobSmith: I've been working on two strands.
15:23:04 .. First, OGC Testbed. OGC looks after location geospatial standards.
15:23:18 .. I'm involved in moving features, speed, direction etc.
15:23:40 .. 3rd year doing this. Last year, tracking cyclist using a Dashcam from a moving vehicle.
15:23:46 .. Not trivial. This builds on that.
15:24:02 .. This year we are interested in aggregating data from multiple sources. Two use cases from road networks people
15:24:13 .. (UK authorities). They are the most mundane!
15:24:22 .. Identifying traffic travelling in the wrong direction,
15:24:52 .. Litter monitoring, the buildup of foreign objects - paper cups, bottles etc, bits of tyre or other debris,
15:25:20 .. traffic cones where they should not be, and other maintenance issues like potholes,
15:25:27 .. growth of vegetation covering signs etc.
15:25:32 .. These are two sides of the same coin.
15:26:00 .. Identifying static objects from moving vehicles or moving vehicles from static locations.
15:26:29 .. Geotagged video can help model when things are building up and intervention is needed.
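[Editorial note: Chris's point above, that even with emsg parsing moved into JavaScript an API is still needed to schedule and trigger events at the right time, can be sketched as below. This is a hypothetical illustration, not a proposal from the meeting; the field names follow the `emsg` box definition in ISO/IEC 23009-1, and the surrounding box parser and cue scheduler are assumed rather than shown.]

```javascript
// Sketch: mapping a parsed DASH 'emsg' box (version 0) to the cue shape a
// DataCue/VTTCue-based scheduler would need. The scheme URI and values in
// the example are invented for illustration.
function emsgToCue(emsg, segmentStartTime) {
  // emsg times are expressed in the box's own timescale; convert to seconds.
  const start = segmentStartTime + emsg.presentation_time_delta / emsg.timescale;
  const end = start + emsg.event_duration / emsg.timescale;
  return {
    startTime: start,
    endTime: end,
    // scheme_id_uri + value together identify the event scheme.
    type: `${emsg.scheme_id_uri} ${emsg.value}`,
    data: emsg.message_data,
  };
}

const cue = emsgToCue(
  { timescale: 1000, presentation_time_delta: 2500, event_duration: 500,
    scheme_id_uri: "urn:example:scheme", value: "1", message_data: "{}" },
  10,
);
// cue spans 12.5s to 13s on the media timeline
```

A player library would create one such cue per event and hand it to whatever scheduling mechanism is in use (a metadata text track today, DataCue if it progresses).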
15:26:40 .. The work done is identifying what the issues are in determining orientation.
15:27:03 .. Hot off the press! Just at the start of this month, new standard: GeoPose, combination of location and
15:27:14 .. orientation, which way you're pointing in some sense.
15:27:34 .. I built a 3D compass demo, in conjunction with the GeoAlignment activity in W3C.
15:27:44 .. First in a web page, then in an app for experimentation.
15:28:02 .. The 3D part is because it floats like a ship's compass. If you tilt/pitch/roll the compass stays in the
15:28:10 .. horizontal plane so you get an accurate heading.
15:28:26 .. The engineering report is being reviewed, due for publication in January 2023.
15:28:34 .. A free of charge app will accompany it.
15:28:58 .. The other thing is getting WebVMT from an ED to publication.
15:29:13 .. Spatial Data on the Web are offering to publish it.
15:29:35 .. I've been adding new features including data synchronisation, altitude (which is contentious
15:30:03 .. due to the use of WGS84, height above the ellipsoid), and CSS selector system support,
15:30:14 .. so the different paths and zones can be styled in different ways by CSS.
15:30:30 .. Spatial Data on the Web are reviewing prior to publication early next year.
15:30:40 .. Issue: the HTML interface for timed metadata and how to represent it.
15:30:56 .. In WebVTT the timed text format, metadata is handled as JSON objects, structured in some way.
15:31:09 .. The question is how to make that available in HTML and whether there's any precedent for creating
15:31:13 .. an interface of that sort.
15:31:15 q+
15:31:18 q?
15:31:21 q+
15:31:42 Nigel: Could you explain what you mean about making the data available in HTML?
15:31:56 Rob: From the WebVTT, timed metadata is a text encoding of a JSON object
15:32:08 Louay has joined #me
15:32:14 present+ Louay_Bassbouss
15:32:21 present+ Louay_Bassbouss
15:32:22 ... The references in that document refer to HTMLTextNode, which doesn't seem appropriate for timed metadata, as it has font, colour, display region, etc
15:32:45 ... so how to make data available in the HTML interface. One way could be plain text rather than HTML text, then parse as JSON
15:33:02 ... But JSON is a structured data format, and there doesn't seem to be anything to handle that
15:33:14 Nigel: What would a representation of the data look like ideally?
15:33:37 ... To an end user, I mean
15:33:55 i/Could you exp/scribenick: cpn/
15:34:02 Rob: It depends on the designer of the page, the metadata could represent anything
15:34:43 ... How might you represent temperature visually, there are different ways. But you still need to pass the data through, with a timestamp attached
15:34:58 Nigel: So a cue handler?
15:35:35 Rob: Yes, you know it's a JSON object. DataCue has an associated type, so you could have a JSON schema to describe the structure
15:35:48 ... How it's displayed is an implementation issue
15:36:23 Piers: There's a geolocation API already, how is it different?
15:36:47 Rob: That API describes your current location, but with geo-tagged video there are multiple locations, based on the video
15:37:27 ... Commercially available devices, smartphones and dashcams, these are location aware and record video
15:37:41 ... So having a way to identify those locations and things appearing in the video is useful
15:38:05 ... Relate objects in the frame to location information, e.g., inferred from imagery
15:38:09 scribe: nigel
15:38:13 q?
15:38:16 Chris: Anything written down on this?
15:38:20 ack ni
15:38:20 ack n
15:38:44 RobSmith: I wanted to consult here first because the broadcast perspective is different to the
15:38:53 .. geospatial perspective.
15:39:07 Chris: Yes. I don't know that we'd put the data into the HTML,
15:39:21 .. you'd fetch a separate resource rather than having it inside the HTML.
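[Editorial note: Rob's approach above, treating the cue payload as plain text to be parsed as JSON, with a type identified against a schema, can be sketched as below. This is a hypothetical illustration: the cue here is a plain object standing in for a cue on a "metadata" text track, and the "temperature" payload and its fields are invented for the example, echoing Rob's temperature scenario.]

```javascript
// Sketch: reading a timed-metadata cue payload as JSON and dispatching on an
// app-defined type, as Rob describes. How the result is displayed is left to
// the page author.
function handleMetadataCue(cue) {
  const payload = JSON.parse(cue.text); // plain text, not HTML
  switch (payload.type) {               // type could be validated by a JSON schema
    case "temperature":
      return `${payload.celsius} °C at ${cue.startTime}s`;
    default:
      return null; // unknown scheme: ignore rather than fail
  }
}

const result = handleMetadataCue({
  startTime: 4.2,
  text: '{"type": "temperature", "celsius": 18}',
});
// result is a renderable string; an unknown type yields null
```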
15:39:39 RobSmith: I mean something like that, where the WebVMT resource would be equivalent to the WebVTT
15:39:56 .. resource, and would contain content timed synchronous with some timeline like media for example.
15:40:12 .. It's how to access that, get the data from the WebVMT file to the JS interface or some API that the
15:40:21 .. programmer can access to read, process and display it.
15:41:01 Chris: Typically in entertainment use cases you have a JS media player library that's typically using
15:41:15 .. MSE to fetch the media segments, and it could also be responsible for fetching other kinds of document,
15:41:36 .. then there's custom JS code to parse the document, extract the object and then apply whatever
15:41:44 .. logic is needed to turn it into a visual representation.
15:42:06 .. I think there's a question that I don't think we'll have time to answer today, but we can follow up
15:42:09 .. in a future meeting.
15:42:24 q?
15:42:40 Rob: Yes, the key to this is how to represent generic data with some arbitrary data and identify it
15:42:55 .. on the other side and know what it means and get what you're expecting from it. That's the trick.
15:42:57 ack k
15:43:18 Kaz: From the M&E viewpoint, a possible use case might be metaverse, with 3D video and user avatars,
15:43:41 .. and you need to identify the user's position, direction and time, for overlaying on video content.
15:44:02 .. Another possible use case is some kind of robots are getting popular in smart cities and buildings,
15:44:18 .. and identifying those automatically driven vehicles or robots might be a use case. Those robots
15:44:26 .. can also provide detailed signage too.
15:44:37 ack k
15:44:48 Rob: Agree. AR is another potential use case, overlaying virtual objects on a video feed, that are location-specific.
15:45:05 Chris: This is a whole thing in itself. Interested in connections to other W3C groups?
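[Editorial note: the pattern Chris describes above, where custom JS in the player applies fetched timed metadata at the right point on the media timeline, reduces to selecting the entries active at the current playback position. A minimal sketch, with hypothetical entry and field names and illustrative location data:]

```javascript
// Sketch: pick the timed-metadata entries active at a media timeline position.
// Entries use a [startTime, endTime) convention, mirroring text track cues.
function activeEntries(entries, currentTime) {
  return entries.filter(
    (e) => e.startTime <= currentTime && currentTime < e.endTime,
  );
}

const entries = [
  { startTime: 0, endTime: 5, data: { lat: 51.5, lng: -0.1 } },
  { startTime: 5, endTime: 10, data: { lat: 51.6, lng: -0.2 } },
];

// A player would call this from a 'timeupdate' handler or a video frame
// callback, then pass the data to display logic.
const active = activeEntries(entries, 6.5);
```

Real players avoid rescanning the whole list every frame, but the selection semantics are the same.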
15:45:36 Kaz: Yes, I started discussions with some at TPAC and those use cases are also for web-based digital
15:45:50 .. twins.
15:46:07 Chris: We'll follow up on these ideas.
15:46:08 q+
15:46:16 .. and the HTML integration parts.
15:46:17 s/based digital/based digital twins/
15:46:22 s/.. twins.//
15:46:29 rrsagent, draft minutes
15:46:29 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html kaz
15:47:13 ack n
15:47:14 Nigel: About understanding what's in the JSON, it sounds like you want to define a vocabulary, which has been done before. Could look at things like RDF
15:47:20 Chris: or JSON-LD
15:47:43 Topic: Application Development for Consumer Products
15:47:50 Chris: We've had a few meetings on this.
15:48:10 .. Started with Chris Lorenzo (Comcast) talking about lightning-JS to allow devs
15:48:27 .. to provide more performant UI on TVs. We had a series of meetings talking about
15:48:42 .. different perspectives of the experience of building TV applications, and the performance issues
15:48:50 .. because of memory or CPU limitations on different devices.
15:49:03 .. We heard input from NHK and BBC in previous meetings.
15:49:15 .. What I wanted to think about next is what we do with this information and what might our next steps be.
15:49:31 .. Perhaps it would be useful to consolidate this information into a single place
15:49:38 .. as a way to capture everything.
15:49:41 .. I started a draft document.
15:49:50 https://docs.google.com/document/d/1eiHVKiBMwss7YFiPdXRw80xRemks7yJJ21KC3dbhIHA/edit#
15:50:07 Chris: This is not complete. There are lots of gaps.
15:50:38 .. It might be a useful starting point to capture everything and
15:50:50 .. for each issue raised, to figure out what practical steps we could take.
15:51:02 .. For example, some may point to a new standardisation requirement or a change to a standard.
15:51:10 .. Some might be about implementations or adoption.
15:51:14 .. Some might be testing related.
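[Editorial note: the vocabulary idea Nigel and Chris raise above, identifying what a JSON payload means via RDF or JSON-LD, could make timed-metadata payloads self-describing. A hypothetical illustration; the `@context` URL and term names are invented for the example, not from any published vocabulary:]

```javascript
// Sketch: a JSON-LD-style self-describing timed-metadata payload. A consumer
// can identify the payload's meaning from @type before deciding how (or
// whether) to render it.
const payload = {
  "@context": "https://example.org/vmt-context.jsonld", // hypothetical vocabulary
  "@type": "LocationSample",
  time: 4.2,     // seconds on the media timeline
  lat: 51.5072,  // WGS84 coordinates, as used by WebVMT
  lng: -0.1276,
};

const isLocation = payload["@type"] === "LocationSample";
```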
15:51:30 .. As a way to try to guide us to what to do next, writing this stuff down and doing analysis
15:51:37 .. to figure out what's needed could be helpful.
15:51:57 .. [shows document on screen share]
15:52:16 .. Performance issues: HTML and CSS based rendering challenges:
15:52:28 .. use of animations and transitions in particular, where it's not possible to know if an effect
15:52:38 .. will apply smoothly from a visual perspective.
15:53:03 i|Perfo|-> https://docs.google.com/document/d/1eiHVKiBMwss7YFiPdXRw80xRemks7yJJ21KC3dbhIHA/edit#heading=h.k4lch7anjow Draft document on Application development for consumer devices|
15:53:05 .. The CSS Will Change module has been proposed as a hint to the UA, to pre-render animations.
15:53:23 .. There may not be a specification requirement here, but a testing requirement to discover which
15:53:35 .. devices work well or less well with certain animation effects.
15:53:45 .. There's an open question about standardising the testing approach.
15:53:59 .. For Canvas based rendering I'm hoping that Chris will write something here.
15:54:21 .. Potential opportunities like with the lightningJS framework. We've mentioned an accessibility concern
15:54:29 .. with that, writing directly to a canvas.
15:54:46 .. Then maybe it's an implementation issue.
15:55:06 .. I'm hoping to identify for each problem area what standardisation needs there are, if it is an implementation issue etc.
15:55:20 .. This goes on, e.g. memory allocation, pointing to existing specs where they exist.
15:55:25 .. Spatial nav and voice control.
15:55:35 .. Application distribution and installation.
15:55:40 .. Development and debugging.
15:55:49 .. Application launching
15:55:56 .. Stream switching with minimal latency.
15:56:11 .. I invite your suggestions and feedback as to where we should go next with this.
15:56:16 .. Is this a useful exercise?
15:56:24 .. Worth capturing more detail?
15:56:31 .. Turn it into a useful set of recommendations?
15:56:48 .. I'd like to invite your collaboration to help capture this, if it is useful.
15:56:57 .. I need your help!
15:57:11 .. Turning this information into something actionable and useful from a standardisation perspective.
15:57:20 .. The document is here, you are welcome to add your comments to it.
15:57:34 .. If there is any particular topic here that interests you I would encourage you to add details
15:57:49 q?
15:57:49 .. where they are missing, and then we can review it in a future meeting, to see if we
15:58:05 .. have identified gaps that need to be addressed or potential liaisons with other groups.
15:58:21 .. Any thoughts or questions? Is this useful? Is it the right direction?
15:58:58 scribe+ cpn
15:59:28 q+
16:00:13 Kaz: I think the topic and approach is good. There are several technical topics mixed up here.
16:00:34 ... We should be careful to describe what is the pain point from each viewpoint, expected use cases from services, etc
16:01:14 ... Each company or industry is expected to describe their initial idea, then we can look at that and look at a more detailed template for requirements
16:01:36 Chris: We could make this into a template if useful
16:01:53 ack k
16:02:32 ... Better to consolidate into a document or consider separately. For example, we could have separate GitHub issues for each item
16:03:25 ... We can ask input from people with interest
16:03:52 Chris: Please make suggestions in the document on topics you're interested in
16:03:53 q?
16:04:19 Topic: AOB
16:04:23 Chris: Anything else?
16:04:25 [nothing]
16:05:06 rrsagent, draft minutes
16:05:06 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html kaz
16:38:38 s/.. Implementing this is non-trivial/Chris: Implementing this is non-trivial
16:38:53 rrsagent, make minutes
16:38:53 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html nigel
16:39:21 i/Nigel: About understanding what's/scribe+ cpn
16:39:23 rrsagent, make minutes
16:39:23 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html nigel
16:40:40 scribeOptions: +diagnostics
16:40:42 rrsagent, make minutes
16:40:42 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html nigel
16:41:09 scribeOptions: embedDiagnostics
16:41:11 rrsagent, make minutes
16:41:11 I have made the request to generate https://www.w3.org/2022/12/06-me-minutes.html nigel