07:28:51 RRSAgent has joined #webscreens 07:28:51 logging to https://www.w3.org/2019/05/23-webscreens-irc 07:28:56 Zakim has joined #webscreens 07:29:02 RRSAgent, make logs public 07:29:12 cpn has joined #webscreens 07:29:25 Meeting: Second Screen WG/CG F2F - Day 1/2 07:29:28 Chair: Anssi 07:29:40 Agenda: https://www.w3.org/wiki/Second_Screen/Meetings/May_2019_F2F#Agenda 07:29:44 Present+ Chris_Needham 07:30:22 Present+ Francois_Daoust 07:31:01 mfoltzgoogle has joined #webscreens 07:35:57 Present+ Scott_Low 07:37:00 Present+ Mark_Foltz 07:37:17 Present+ Anssi_Kostiainen 07:39:07 Present+ Eric_Carlson 07:39:42 Present+ Nigel_Earnshaw_(remote) 07:39:55 Present+ Peter_Thatcher 07:40:41 RRSAgent, draft minutes v2 07:40:41 I have made the request to generate https://www.w3.org/2019/05/23-webscreens-minutes.html anssik 07:45:11 scribe: tidoust 07:50:01 Present+ Christian_Klockner 07:50:22 Agenda link: https://docs.google.com/document/d/1olevq7kh90vWBYaY9XnyxUF0qXb2YmHa4Qubqm_crwQ/edit#heading=h.mcj0y38ml9x8 07:50:31 Louay has joined #webscreens 07:50:41 Present+ Louay_Bassbouss 07:52:14 Topic: Agenda bashing 07:52:55 mfoltzgoogle: We'll start with introductions. We've been working on the spec for about 1.5 years. 07:53:15 ... We've more or less met all requirements for a 1.0 draft, published some time ago. 07:54:01 ... I'll start the day with looking at what we've done in the WG and CG in the past 4 years, their implementation status. 07:54:27 ... Then Peter will provide an overview of the draft 1.0 spec of the Open Screen Protocol, with changes since TPAC 2018. 07:55:45 ... The first area we still have questions on is authentication. Two main directions. One is PAKE, another is question-answer challenge. We got some feedback internally from security folks that could change the direction we choose, so we want to discuss that today. 07:56:17 anssik: If that feedback from Chrome security could be public, that would help with documenting horizontal review for the spec. 
07:57:11 mfoltzgoogle: Then a few more general issues. How do we use TLS 1.3 for instance? Mitigations for remote network attackers, etc. We'll touch on a few of those. 07:58:05 ... The remaining topics for discussion are more details on messages exchanged by the two parties. How IDs are handled. A few others. 07:58:43 anssik: I have a question. Currently we map from protocol to API. Do we want to do two-way mapping so that the API spec mandates use of the protocol? 08:00:17 mfoltzgoogle: I will have to think about that. I think I would be fine adding implementation notes on how to use the Open Screen Protocol. 08:01:17 ... I certainly think that the Open Screen Protocol should be mentioned in the API spec. Whether it's normative or not, I don't know. 08:01:22 anssik: Yes, it's an implementation choice. 08:02:08 mfoltzgoogle: At the conclusion of day 1, the goal is to have directions on ways to resolve most issues against the 1.0 spec, so that we implement the resolutions, e.g. by TPAC. 08:04:29 [Group runs a quick round of introductions] 08:06:46 anssik: Small but very effective group :) 08:06:55 ... Any immediate question on agenda for day 1? 08:08:46 Topic: Overview of Group Work 08:09:19 [Mark goes through slides] 08:10:08 mfoltzgoogle: A bit of history. Work started before I joined with a breakout session at TPAC. The Presentation API was the first idea that came out of these discussions, incubated in the Second Screen CG for a year or so. 08:10:53 ... Then we discussed transition to a Working Group, adding Remote Playback API. But we left implementations up to browser vendors. 08:11:25 ... We knew we'd have to focus on interoperability at some point, which is the reason why we kicked off work on the Open Screen Protocol. 08:12:09 ... Not much work in the WG in the meantime, with most of the work happening in the CG to prepare the Open Screen Protocol. 08:12:55 ... 
In April of this year, we assembled the different bits that compose the Open Screen Protocol and released v1.0. 08:13:18 ... The Presentation API allows a Web app to ask the user to present a different page on a second device. 08:13:54 ... What the browser needs to do is, given a URL, figure out which devices can render it, ask the user to select a device, and establish the connection to run the presentation. 08:14:15 ... A controlling page can close the presentation, or close the connection without closing the presentation and reconnect later. 08:14:46 ... One example is a slideshow of images. It shows the different features of the API. 08:15:19 ... In Chrome, this works with connected displays and Chromecast devices (not sure about cloud displays) 08:16:05 ... The other specification is the Remote Playback API. As opposed to presenting another page, it focuses on remoting an audio or video element. 08:16:32 ... The user or the page can ask the browser to look for compatible displays for remote playback of media. 08:17:08 ... What is the CG really focused on? For these two features, we're addressing interoperability through protocol incubation. 08:17:48 ... We wanted to make sure that we could handle the case where the controller and receiver are on the same local area network (LAN). We also decided to focus on the 2-UA mode. 08:18:14 ... For the remote playback, we focused on the case where the media has a "url" attribute. 08:18:34 ... In other words, we chose not to focus on streaming to start with. That said, a bit of work has been done in that area too. 08:19:33 ... About functional requirements, we wanted to make sure that we could cover all the parts in the APIs that are left as "implementation details". 08:19:58 ... [Mark reviews functional requirements slide] 08:21:00 ... We also considered non-functional aspects. 
Make sure UX can be good, make sure we preserve privacy and security. Make sure that implementations can be efficient in terms of memory and battery since low-end devices may be used as receivers. 08:21:40 ... We realized that there are lots of different scenarios that people are looking into (e.g. VR, cloud gaming), so we wanted to make sure that things can be as extensible as possible. 08:22:44 ... We brainstormed about the different layers of the protocol stack. Before endpoints can communicate, they need to discover each other, authenticate each other, agree on a transport mechanism that they can use to exchange messages. 08:23:15 ... For discovery, we chose mDNS / DNS-SD. That gives you an IP and port that can be used to establish a connection. 08:23:29 ... For authentication, we're going to use TLS 1.3 with mutual authentication. 08:23:44 ... The transport layer will be based on QUIC. 08:24:20 ... To actually provide the message syntax, we looked at different possibilities, and decided to go with CBOR. 08:24:44 anssik: A lot of this work relies on work done in IETF. 08:25:24 ... Question on where the work should happen. Overlap between people at the API level so work happening in CG for now. 08:26:20 mfoltzgoogle: We analyzed requirements, possible technical solutions. We took some time defining authentication approaches: challenge/response using HKDF, and J-PAKE. 08:26:38 ... The v1 draft needs more feedback but is complete in terms of addressing requirements. 08:27:00 ... We've worked on an Open Screen Protocol Library that implements part of the 1.0 spec. 08:27:21 ... Hope is that the library will help drive adoption, as happened in WebRTC. 08:27:55 ... We've tried hard to minimize dependencies, so that others can adopt the library easily. 08:28:25 ... It's complicated, we'll get to that, but goal is to make it work both inside Chromium and outside Chromium. 08:29:38 ... 
We're still debating authentication mechanisms, some ID issues, some message details, and then capabilities and extensions. 08:30:27 ... Above and beyond the spec itself, a few items that we may want to complete. We never fully defined what the requirements are for the Remote Playback API. The TAG asked for an explainer, which we started to draft. 08:31:22 anssik: We're doing that a bit backwards. Most groups come with an explainer first and then work on a spec. We have the spec already, and now writing an explainer. 08:33:25 mfoltzgoogle: Another document should talk about pros and cons of specifying custom schemes for use in the Presentation API. Many use cases rely on non-HTTP schemes, such as Cast, DIAL, HbbTV. It seems important to reflect on how to do that properly. 08:33:56 ... Also additional security analysis, and a few items that we'll dig into on Day 2. 08:34:30 anssik: Regarding horizontal reviews, the TAG has indicated interest in reviewing CG work. 08:34:37 ... I don't know about others. 08:36:23 scribenick: cpn 08:36:39 francois: Good question, some may want to look into it. Depends what we want to do next, we can ask the different horizontal groups, e.g., security, accessibility for remote playback API around synchronization. 08:37:03 anssik: this is unusual, the W3C process doesn't recognise CG work 08:37:17 ... let's start with TAG review, then we can ask the other groups 08:37:23 scribe: tidoust 08:37:43 anssik: No clear guidance for CG work, that's the conclusion for now. 08:38:23 mfoltzgoogle: I know some groups may be interested in looking into some aspects of it. For instance, the Media & Entertainment IG has been looking at synchronization aspects. 08:38:54 ... I would rather have them look at specific pieces of the protocol, so as not to write too many explainers. 08:39:09 anssik: OK, let's work on this tomorrow. 08:39:46 mfoltzgoogle: OK. As mentioned earlier, a parallel project is to develop the Open Screen Protocol library. 08:40:00 ... 
That library is slowly converging to what the spec defines. 08:40:29 ... Full mDNS support, full QUIC support. We have a platform abstraction layer that allows porting the code to different platforms. 08:41:09 ... We have CBOR support via code generation based on CDDL to create parsers. Saves a lot of programming. 08:41:30 ... We recently landed all the messages needed to support the Presentation API. 08:42:00 ... All of these features have been demonstrated in examples in C++ and Go. Peter will give us the data. 08:43:00 ... The things we have not yet finished: authentication, in part because we're still discussing the mechanisms. Also Remote Playback. We're also planning to integrate the library in Chromium for the controlling part. 08:44:04 ... We have been doing some exploration on media streaming. We may or may not want to add that into the scope of the group. We had a few discussions on doing LAN traversal using ICE. Not really in our CG scope either. 08:44:27 anssik: What's the driving use case for LAN traversal? From mobile to screen? 08:44:53 mfoltzgoogle: Main driver is education use cases. 08:45:06 ... Or when you're at your friend's house. 08:45:19 ... No direct network connection. 08:47:08 ... We also did as a group other investigations to discover devices. mDNS has a few hiccups. You may not be able to use multicast at all, e.g. for LAN traversal. Possibilities include Bluetooth, NFC, QR codes, etc. 08:47:32 ... Also Peter has been looking at implementations based on lower-level primitives such as WebTransport/WebCodecs. 08:47:45 ... There may be feature requests for a V2 as well. 08:48:21 anssik: Have people in the room thought about requirements for a V2? I'm thinking about HbbTV for instance. It would be good to document these requirements somewhere, just to make sure we have that data. 08:49:45 ... A good topic for day 2. 
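[The CBOR message encoding mentioned above is compact enough to sketch by hand. The following is an illustrative, standard-library-only Python encoder for the simplest cases; real agents use parsers generated from the spec's CDDL or a full CBOR library, and the integer-keyed map at the end is a hypothetical message shape, not one taken from the spec.]

```python
def cbor_encode(value):
    """Encode non-negative ints, UTF-8 strings, and dicts as CBOR (small values only)."""
    def head(major, arg):
        # CBOR initial byte: major type in the top 3 bits, argument below.
        if arg < 24:
            return bytes([(major << 5) | arg])
        elif arg < 256:
            return bytes([(major << 5) | 24, arg])  # 1-byte extended argument
        raise ValueError("sketch only handles arguments < 256")

    if isinstance(value, int) and value >= 0:
        return head(0, value)                       # major type 0: unsigned int
    if isinstance(value, str):
        data = value.encode("utf-8")
        return head(3, len(data)) + data            # major type 3: text string
    if isinstance(value, dict):
        out = head(5, len(value))                   # major type 5: map
        for k, v in value.items():
            out += cbor_encode(k) + cbor_encode(v)
        return out
    raise TypeError(f"unsupported type: {type(value)}")

# Hypothetical request-like map, keyed by small integers as CDDL specs often do:
msg = cbor_encode({0: "https://example.com/slides"})
```

Note how the self-delimiting structure (every item carries its own length) is what makes the separate length prefix, discussed later in these minutes, unnecessary.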
08:51:30 Topic: Brief Overview of Draft 1.0 Spec 08:54:11 Peter: There's the bucket of things we agreed on and that have been done, some remaining to do, and then things we have not yet agreed on. 08:54:25 ... [going through lists on slides] 08:55:39 ... About CDDL, we need to indicate type key for a given message, although I'm not sure we agreed on that actually, probably not in the right list 08:56:26 ... We agreed on messages for both specs, with Remote Playback messages being extensive, now done. 08:56:47 ... For streaming, we agreed on the concept of an audio- and video-frame. That's now in there. 08:57:22 ... Some things we did but still need agreement on. 08:58:23 ... First is thus the need to indicate a type key for a message. 08:59:49 PROPOSED: Keep comment in CDDL to indicate type key for a given message 09:00:01 anssik: Seems the most straightforward way 09:00:09 RESOLUTION: Keep comment in CDDL to indicate type key for a given message 09:00:34 Peter: mDNS has a limit for the size of the display name. 09:01:04 ... It's allowed to have a longer display name but what comes across mDNS is going to be truncated. 09:01:46 ... We want to make sure that the agent compares the truncated one with the full one and checks that it's a prefix. 09:02:04 anssik: Would the user see the truncated name based on the mDNS before seeing the full name? 09:03:54 Peter: It's possible that the browser would display the names before, but the check enforces that the prefix does not change. When you truncate, you put as many characters as possible. The other endpoint can tell the string was truncated thanks to the last character. 09:04:01 ... 64 characters, I think 09:04:50 Eric: It would be a mistake for user agents to display to users anything other than a full name. 09:05:05 ... There should be advice to only display the full name 09:05:30 ... With Unicode, Emojis, it's easy to fill 64 characters. 09:05:50 anssik: Just make sure that implementers are aware of the issue. 
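[The truncation-and-prefix check discussed above can be sketched as follows. This is illustrative Python only; the 64-byte budget and the exact record layout are assumptions based on the discussion, not normative values from the spec.]

```python
MAX_DISPLAY_NAME_BYTES = 64  # limit discussed in the minutes; treat as an assumption

def truncate_display_name(name: str) -> bytes:
    """Truncate a display name to the byte budget without splitting a UTF-8 sequence."""
    data = name.encode("utf-8")
    if len(data) <= MAX_DISPLAY_NAME_BYTES:
        return data
    # Cut at the budget, then drop any trailing incomplete multi-byte sequence.
    return data[:MAX_DISPLAY_NAME_BYTES].decode("utf-8", errors="ignore").encode("utf-8")

def advertised_name_matches(advertised: bytes, full_name: str) -> bool:
    """The agent checks that the (possibly truncated) mDNS name is a prefix of the full name."""
    return full_name.encode("utf-8").startswith(advertised)
```

The prefix check is what prevents a receiver from advertising one name over mDNS and then presenting an unrelated full name after connection.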
09:06:30 Eric: Make it a SHOULD to not display truncated strings. 09:07:48 mfoltzgoogle: The other item is that we may also mention the name collision protocol in the mDNS spec. If distinct endpoints advertise the same name, the mechanism allows them to figure that out and resolve the collision. 09:09:40 anssik: Also how to distinguish strings that look similar but use different (similar-looking) characters 09:10:07 ... That's a UI guideline. It may fit more in the API spec 09:10:31 Peter: I believe we could put that in the protocol spec so that agents which are not browsers are also aware of that. 09:11:23 PROPOSED: Add recommendation at the SHOULD level for agents to display fullname (as opposed to display truncated display names) 09:12:46 scottlow: Should the spec mandate the receiver to show its own name? 09:13:25 Peter: But you wouldn't want all displays to suddenly show their name. 09:13:48 mfoltzgoogle: For first authentication, yes, it would be reasonable for receiver to show its name. 09:15:07 ... Chromecast dongles show their name when they are on and no application is launched. It may be tricky to require receivers to do so though. 09:16:01 anssik: Maybe in the future, you have meeting rooms where you have screens everywhere. Hard to tell how to do it today though. 09:16:39 mfoltzgoogle: Separate from authentication, some way to disambiguate receivers that have similar names would be useful. 09:17:11 Peter: In the future, we may look into other types of discovery and authentication mechanisms. 09:17:51 anssik: This seems like a good v2 feature. "Find my second screen!". Scott, if you want to raise it on GitHub, that would be good. 
09:18:08 RESOLUTION: Add recommendation at the SHOULD level for agents to display fullname (as opposed to display truncated display names) 09:19:08 Peter: Along with the name being shown, we added a mechanism with two flags inside the agent info to tell whether the agent can do audio/video or audio only. 09:19:36 ... Currently, the flags are just bits. We may expand on them to show which protocols and formats you speak. 09:20:08 Eric: I just want to make sure we bake it in. 09:20:46 Peter: After authentication, you have access to much more information. This is before authentication. 09:21:03 anssik: This would only be used for the UI? 09:21:46 Peter: Not only, you can use this to know that the device is an audio device if you're going to do streaming. 09:22:21 Pull request for additional capabilities in agent-info: https://github.com/webscreens/openscreenprotocol/pull/171 09:23:27 V2 issue raised on GitHub around "pinging" a receiver: https://github.com/webscreens/openscreenprotocol/issues/174 09:23:50 Peter: For any of the protocols, you'll want to use different icons. 09:24:25 mfoltzgoogle: It's been discussed but we don't expose device capabilities right now. We don't allow filtering devices out based on capabilities for now. 09:25:05 Peter: It's not just for Remote Playback. There are different use cases where you want to use different icons. 09:26:58 Peter: Regardless of the agreement on booleans or anything, some agreement on agents having a way to describe audio/video capabilities 09:28:10 RESOLUTION: Agreement to use receives-audio/receives-video capabilities as specified 09:28:28 Peter: Moving on to length prefix 09:29:38 mfoltzgoogle: CBOR messages all have an inherent length. The length prefix would simply have allowed the client to know how many bytes it needs before it sends that for parsing. But that's not needed. 09:29:54 Peter: The parser tells you whether it's done. 09:30:25 ... 
The question of what we do for forward compatibility when we change structure of the fields, that's something that we should create an issue about. 09:30:56 PROPOSED: Don't do length prefix for CBOR messages 09:31:18 RESOLUTION: Don't do length prefix for CBOR messages 09:31:55 Peter: Separately, we should create an issue for when you can add/remove fields and how you can do that (forward-compatibility) 09:33:26 ... Now, for the type key prefix, I chose a QUIC uvarint (2 bits for size). I looked into CBOR tags. I looked into 1-bit varints. The only downside is that the number of values you can get is 64. You may get 128 with other mechanisms. 09:33:48 mfoltzgoogle: Better than CBOR where you have only 24. 09:33:57 Eric: How many do you have now? 09:34:07 Peter: Around 50. 09:34:42 ... I divided them into the ones that should be small and ones that don't matter. The total space is enormous. 09:35:04 RESOLUTION: Use a QUIC uvarint (2 bits for size) for type key prefix 09:35:56 Peter: The next two go together. Related to the Remote Playback API. Plenty of mechanisms, notably around text tracks. 09:37:24 ... You can add text tracks or change existing ones by adding a cue. You can also change the mode of the text track. 09:37:38 ... At TPAC, we had not figured out a way to do all of this. 09:38:17 ... The only limitation is that, when you add a cue, there are many things that you may want to do related to positioning, etc. Not addressed here. 09:38:57 ... At least, it lays the foundations to be able to add/remove cues and manage text tracks. 09:39:32 ... Get placement, positioning is v2, but the ability to manipulate text tracks and cues is v1. 09:40:04 Eric: Right, this seems sufficient for WebVTT support. 
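[The QUIC variable-length integer resolved on above for type keys uses the top 2 bits of the first byte to signal a 1-, 2-, 4-, or 8-byte encoding (RFC 9000 §16), which is why 64 values (the remaining 6 bits) fit in a single byte. A sketch, for illustration:]

```python
def encode_varint(n: int) -> bytes:
    """Encode n as a QUIC variable-length integer (RFC 9000, Section 16)."""
    for prefix, length in ((0b00, 1), (0b01, 2), (0b10, 4), (0b11, 8)):
        if n < (1 << (8 * length - 2)):        # 6, 14, 30, or 62 usable bits
            data = n.to_bytes(length, "big")
            return bytes([data[0] | (prefix << 6)]) + data[1:]
    raise ValueError("value too large for a QUIC varint")

def decode_varint(data: bytes):
    """Decode a QUIC varint; returns (value, bytes consumed)."""
    length = 1 << (data[0] >> 6)               # 2-bit prefix -> 1/2/4/8 bytes
    value = int.from_bytes(bytes([data[0] & 0x3F]) + data[1:length], "big")
    return value, length
```

With roughly 50 type keys in use, assigning the frequent ones values below 64 keeps every message's type prefix to one byte.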
[Discussion on generic cues] 09:41:16 scribenick: cpn 09:41:40 Chris: We want to be able to support TTML and IMSC cues, which we can't do with Remote Playback API as it stands 09:41:55 Eric: Generic Cue, discussed at FOMS, is the planned solution there 09:42:18 Chris: We are also proposing DataCue, in early stage incubation, could be a V2 feature to discuss 09:43:00 RESOLUTION: changed-text-tracks on remote playback controls allows for adding and removing cues instead of separate method 09:43:12 RESOLUTION: added-text-tracks allows for adding text tracks 09:43:22 scribe: tidoust 09:43:55 Peter: Now two slides that should go faster. First things that probably don't need agreement. 09:44:42 ... First thing is that we used "agent" for an implementation of the Open Screen Protocol. The next is that we borrowed "controller" and "receiver" from API specs 09:44:42 Q+ 09:45:05 ... In the context of streaming, we used "sender" and "receiver". 09:45:15 ack tidoust 09:45:29 scribenick: cpn 09:45:52 francois: I note that controller isn't the exact same term as used in the Presentation API, but that's explained in the spec 09:46:22 ... Reading the current spec, I'm not clear as to what an agent needs to implement 09:46:46 ... In the Presentation API, an agent may want to take on different roles, partial implementations, etc 09:47:25 ... Would it be better to precisely define what an implementation needs to support? 09:48:08 Peter: We could say what all agents must implement. For example, it's not required to implement both Remote Playback and Presentation API, or either - you could just do streaming 09:49:43 francois: What do I need to test for, when testing an implementation, i.e., normative statements? It's not entirely clear to me 09:49:51 anssik: What is the best practice for test suites for protocol work at IETF? 
09:51:46 scribe: tidoust 09:52:01 peter: Typically there are interoperability events, where implementers come together 09:52:01 mfoltzgoogle: I think the spec should map which parts should be implemented by different conformance classes 09:52:01 ... This would help clarify what conformance to the spec means 09:52:02 peter: Good idea to map conformance classes to capabilities 09:53:26 ACTION: Mark Foltz to document which parts of the specs apply to which conformance class from an API perspective 09:54:38 Peter: On to the hash used for mDNS fingerprint. SHA-512 or SHA-256, but not MD5 or MD2. We just want to make sure we're not using old insecure ones. 09:55:30 ... The mDNS timestamp, we ended up renaming it "mv" instead of "ts". 09:56:12 ... Maybe we can save the bit on authentication for later, but if we don't go with J-PAKE, there are certain parameters that you need to pick for HKDF. 09:56:52 ... Maybe we don't need a resolution here, I just wanted people to be aware that these little decisions were made. 09:57:43 Nigel: Only comment is that, over time, parameters may need to change. 09:58:00 Peter: Some of these are on the wire things that the agents can negotiate. 09:59:13 ... The choice of the Presentation ID was initially made by the controller. But we realized that it's easier if the receiver chooses the ID to make scoping easier. It didn't really have any impact on the protocol. 09:59:53 ... Next point is that originally we had HTTP headers as a big blob of HTTP/1.1. Doesn't make sense, so we replaced them with key/value pairs. 10:00:59 ... The protocol was designed to be completely independent of the APIs so that you could implement it between two non-browsers. There's now a separate section for the mapping between the APIs and the protocol. 10:01:39 ... In the Remote Playback API, when you want to determine which audio track is enabled/disabled, now using a set of IDs and not booleans. 10:02:13 ... 
For streaming, I showed a payload of "any", but "bytes" is more logical. 10:02:26 ... "frame sequence number" instead of "frame ID" 10:03:21 ... Some streaming capabilities have been added, e.g. color profiles, native resolutions, minimum video bit rate, max audio channels, etc. 10:04:02 [Some discussion on color profiles, Media Capabilities and reference to CSS spec] 10:04:37 https://wicg.github.io/media-capabilities/#display-capabilities 10:05:51 Peter: So we should look at the Media Capabilities which then references CSS. 10:05:54 Eric: Exactly. 10:06:23 Peter: Moving on to things that have not been done. 10:06:29 ... First one is PAKE or not. 10:07:19 ... Another one is the possibility to extend capabilities. Links to #123 with a PR #171 that could perhaps help address it. 10:07:48 mfoltzgoogle: What's in v1 is the framework for extension, but touch screen example is v2. 10:08:44 Peter: Extensions could use numbers to express support for capabilities. 10:09:04 Eric: Information is only between sender and receiver? 10:09:24 Peter: Yes, right now, there's no API for it. However, there's one place where I thought this could be useful. 10:09:58 ... E.g. prompt and only show things in the list that have this set of capabilities. 10:11:19 scribenick: cpn 10:11:46 Eric: How does an application know whether to prompt? It seems that exposing this information to applications increases fingerprinting surface 10:13:16 ... Could use it to detect whether a user is at work or at home, for example 10:13:33 Peter: This example (slide 170) shows how API capabilities map to the protocol 10:13:54 scribe: tidoust 10:14:11 Peter: Next is about 0-length connection ID. 10:14:47 PROPOSED: PR #171 is good for landing to address issue 123 10:14:59 RESOLUTION: PR #171 is good for landing to address issue 123 10:16:47 Peter: The QUIC protocol changed recently with regard to connection IDs. 
Now, the short header does include one of the connection IDs all the time, which means that you have to decide whether you're going to use 0-length connection IDs. The big limitation here is that you cannot have connection migration. 10:17:13 ... If I change my IP and port, and if I don't send that connection ID, you're not going to know that it's me. 10:17:34 ... However, 0-length connection IDs allow reducing the size of packets. 10:17:57 ... Connections are pretty ephemeral anyway, so not a big deal. 10:18:32 ... I think we should say that we're going to use 0-length connection ID. That's issue #169. 10:19:03 ... I put it at the SHOULD level in PR #170. Maybe between client and server, you might want to use real connection IDs. 10:19:32 Eric: What percentage of the message would be used by the connection ID? 10:20:20 Peter: It depends on the length of the number you're going to use. It could be two bytes. In a lot of cases, this may not be a big deal. 10:20:41 ... If we were then to do QUIC with ICE, then you would never have a connection ID. 10:21:13 mfoltzgoogle: The other use for this is if you have proxies in place. 10:21:34 Peter: Yes, in that context, it might be useful to have connection IDs, which is why I stuck to the SHOULD level. 10:22:17 PROPOSED: PR #170 is good for landing to address issue #169 10:22:35 RESOLUTION: PR #170 is good for landing to address issue #169 10:24:00 Peter: One of the decisions when writing the spec was to pick type keys. We picked some. One thing we could do is to talk about it. 10:24:57 ... A possible V2 thing is remote decoding. Instead of giving a URL, you would stream media over the wire. I wrote a pull request that describes how to stream media over the wire. 10:25:14 ... It's possible, but we need to decide whether that's v1 or v2. 
10:25:49 scribenick: cpn 10:26:34 Chris: This also affects Media Capabilities API, as you'd want to use the capabilities of the remote device rather than the controller, to decide which segment representation to request 10:26:46 scribe: tidoust 10:27:24 anssik: Isn't that like 1-UA mode? 10:27:54 mfoltzgoogle: The basic capability needed for media remoting or 1-UA is streaming. 10:28:26 anssik: We heard from Eric that he would like to see this sooner rather than later 10:29:37 mfoltzgoogle: It doesn't delay our current roadmap for the spec, like wide review or the like. 10:30:16 ... It expands the scope of the spec. 10:30:52 anssik: The API doesn't distinguish between 1-UA and 2-UA. 10:31:23 mfoltzgoogle: Yes, the API was designed to be agnostic as much as possible to the underlying mode. 10:31:33 ... Some things are easier in a given mode. 10:31:55 anssik: So no change to the API. So it's only about expanding the scope of the protocol. 10:32:23 mfoltzgoogle: Yes. We'd have to look further into things such as color support, codec capabilities, etc. 10:33:02 anssik: Streaming would help get more implementations. 10:33:13 ... I don't know if we can take a decision right now. 10:33:39 ... We don't want to do work that is not useful. 10:33:56 mfoltzgoogle: We're not going to be able to support all of the features. 10:35:00 Eric: It's a very important feature. 10:35:20 anssik: OK, let's come back to this issue tomorrow and see if we can take a decision. 10:37:40 Eric: One question is what happens if the application changes it after remoting. Does it get re-evaluated? 10:38:29 ... Are there issues with syncing up again? 10:38:46 Peter: From a protocol perspective, question is whether the information gets sent to the receiver. 10:38:56 ... so that it can make its own decision. 10:39:23 Eric: Yes. Depending on the capabilities of the decoder on the receiving side, you may get a different choice. 
10:39:45 Peter: So the question is whether the selection is done by the controller or the receiver. 10:40:22 ... If I were to make a PR, I would need to have a complete set of all the data that needs to be passed over. 10:41:03 mfoltzgoogle: Yes, I don't think it's feasible to have the controller evaluate those on the receiver's behalf. It should be up to the receiver to choose the source. 10:41:34 ... The Remote Playback API does not say anything about that. It just talks about synchronizing state. 10:47:21 [Lunch break] 11:47:34 cpn has joined #webscreens 11:54:18 Louay has joined #webscreens 11:54:23 present+ Louay_Bassbouss 11:56:00 Action: Mark to review open issues and tag as v1-spec as needed 11:56:26 Topic: Issue 158 11:57:43 Peter: How precisely do we need to describe how a user agent should send remote playback control messages, when the state changes on the controlling media element? 11:57:54 Eric: For UAs to have the same behaviour, it would need to be verbose, as per the HTML spec 11:58:59 Eric: More will be needed, as HTML only talks about this for locally playing content 11:59:04 ... e.g., the stalled event when no data is received. The receiver side would have to have the same thing 11:59:13 ... You would have to send a message back, and make those associations explicit 11:59:30 Mark: I'm proposing to refer to parts of HTML 11:59:37 ... There probably aren't that many, things that change without input from the user 12:00:18 ... 
'progress', 'stalled' events - a handful of events where you need to send state back 12:01:57 Peter: Using 'stalled' as an example, we have a bit in the message for that, but it doesn't currently say *when* you should send the message 12:02:23 Eric: Would be enough to say the message needs to be sent when HTML wants to generate the event 12:02:44 Peter: What if the HTML spec changes? 12:03:00 Eric: It's unlikely, it's been stable for a couple of years 12:04:47 ... There's a difference between state changes caused by sender input, and changes that happen at the receiver 12:04:52 ... Only a few things can change without input from the controller 12:05:17 Peter: Can the receiver UA allow the user to make changes? 12:05:19 Eric: Yes 12:06:01 Peter: Can we say, when the state looks different from the JavaScript perspective, a message should be sent? 12:06:02 Eric: Yes 12:07:00 Eric: We should talk about 'muted' and 'volume'. Should it be possible to control the audio characteristics of the receiver from the controller? 12:07:32 ... Does the volume attribute represent the volume at the controller or receiver, and do they have to be in sync? 12:08:14 Peter: We do have 'volume' and 'muted', which syncs the two media elements, equivalent to executing from JavaScript 12:08:48 Eric: With AirPlay, changing the volume on the controller doesn't affect the receiver. You use the remote control to change volume on the device 12:09:02 Peter: So as written now, it would change the volume, not the hardware volume but the attenuation 12:10:03 Anssi: Is this an area where we need flexibility for implementers? 12:10:37 Mark: Yes. From our experience, having a protocol to support the hardware volume is important, so the protocol needs to distinguish those cases 12:11:05 ... 
Should add control of the hardware volume as a separate protocol feature 12:12:33 Peter: I propose: when fastSeek(), play(), or pause() are called, send a message, or if the attributes observably change, send a message 12:13:02 Eric: It would be logical to update it on progress events, should be every 350ms 12:14:05 Peter: What things are on demand? There's currentTime 12:14:44 Eric: That might be all 12:15:54 Action: Peter to raise a pull request for issue 158 12:16:20 Peter: Other remaining issues not done relate to codec names and mime types 12:17:55 Peter: Three options: put the whole mime type, use the mime type without the prefix 12:18:14 Eric: Using it without the prefix won't work, there are lots of audio types 12:18:30 Peter: For well known types, we'd want a known string there 12:18:36 Eric: Also info on profile level? 12:18:53 ... If we do that, we could use extended mime type 12:19:54 Peter: How complex would that get? 12:19:54 Eric: It's different for every codec, each codec defines its own 12:20:10 Mark: To support media remoting on top of streaming, we'd want to send the same fidelity of data about the media stream as is available through HTML 12:20:28 ... Are extended mime types the most accurate way to describe them? 12:21:24 Peter: Could put the codec string from the mime type? 12:21:29 Eric: Yes, it's the best we have 12:21:42 ... rfc6381 12:21:59 Mark: Are there RFCs for the syntax for describing different codecs? 12:22:15 ...
Need consistency for how to interpret the strings 12:23:20 Peter: Last one is about capabilities for HDR 12:23:35 Eric: It's in scope for the new Media WG 12:24:00 Mark: There's whether the media engine understands the metadata, and whether the display is capable of showing it 12:25:26 Eric: Don't try to solve it here, wait for it to be solved for HTML media, then use that here 12:25:48 Anssi: This concludes the overview session 12:26:06 Topic: Authentication 12:26:31 Peter: At TPAC, we decided to investigate challenge/response, simpler than J-PAKE 12:26:54 ... We specced it, cleared it with security people, then realised there's an issue 12:27:02 ... The security folks recommended using scrypt to harden the PIN 12:27:35 ... This requires specifying how "memory hard" it should be 12:27:57 ... Could be 32MB RAM 12:28:15 ... We asked if we could get away with less, but awaiting reply on that 12:28:57 ... So we made a PR for J-PAKE as a back-up plan. Talking to the security folks, not clear which PAKE to use (SPAKE, J-PAKE, OPAQUE) 12:29:20 ... SPAKE2 nice, has implementations 12:29:36 ... J-PAKE has an RFC that's done. Security folks don't have clear advice 12:29:53 ... Also not clear how "memory hard" it is 12:30:19 ... If challenge/response doesn't need a lot of memory, we'd use that 12:30:43 ... If PAKE doesn't need a lot, we could use that 12:30:45 ... otherwise, not sure what we'd do! 12:31:19 ... We didn't find any C++ implementations of J-PAKE, so we'd have to implement it ourselves 12:31:26 ... SPAKE2 has an implementation, but the spec is draft 12:31:43 ... We're hoping to move ahead with challenge/response 12:31:58 ... Want input from security reviewers 12:32:30 ... Someone from IETF suggested asking their review, but requires writing an IETF draft 12:32:54 Mark: Did the relationship between memory and PIN entropy come up? 12:33:29 Peter: Yes, there is a relationship.
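The scrypt hardening under discussion can be sketched with Python's hashlib.scrypt. The cost parameters below are purely illustrative (n=2**14, r=8 gives roughly 16 MB of memory per derivation, chosen here to stay under Python's default scrypt memory cap); the group has not settled on the actual values, and the figure mentioned in the discussion was around 32 MB.

```python
import hashlib
import os

def harden_pin(pin: str, salt: bytes) -> bytes:
    # scrypt memory use is roughly 128 * r * n bytes; n=2**14, r=8 is
    # about 16 MiB. These parameters are illustrative, not the ones the
    # group will specify.
    return hashlib.scrypt(pin.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
proof = harden_pin("482913", salt)
assert len(proof) == 32
# Same PIN and salt always derive the same value; a different PIN does not.
assert harden_pin("482913", salt) == proof
assert harden_pin("482914", salt) != proof
```

The point of the memory-hard step is that an attacker brute-forcing a short PIN must pay the same memory and time cost per guess, which is why the parameter choice (and how little memory constrained receivers can afford) matters.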
If the PIN is very large, we wouldn't need the scrypt stage at all 12:33:37 Eric: We wouldn't want more than about 8 12:34:32 Nigel: I'm not sure these things are orthogonal. scrypt is useful for mapping a small dictionary of passwords into something of larger entropy 12:34:50 ... making it invisible to a man in the middle 12:35:06 ... How to stop someone precomputing the mapping between all common passwords and the handshake? 12:35:34 ... The latest draft of SPAKE introduces scrypt. They don't have a robust way to protect against simple passwords and rainbow tables 12:36:00 ... Using a salt string, there's a race to produce an acceptable challenge/response as a legitimate device does 12:36:07 ... scrypt slows down the attack 12:36:55 ... I think you still have the problem of an attacker knowing you use JPAKE and a limited number of passwords 12:37:30 ... The latest draft of SPAKE has memory hard functions. A PAKE algorithm gives you a shared secret 12:37:56 ... You may end up with the same weaknesses, needing large passwords, or use scrypt. There's a tradeoff 12:38:51 Peter: It wasn't clear to me that the PAKE was more secure, or whether it needs a memory hard function 12:39:04 ... SPAKE2 spec says the MHF is out of scope of the document 12:39:27 ... The parameter selection is out of scope 12:44:20 cpn_ has joined #webscreens 12:44:26 scribenick: cpn_ 12:44:30 Nigel: So scope it to the most constrained device, but then your attacker has much more compute resource than you do 12:44:35 Mark: The memory requirement and search space determine what the attacker needs to do 12:45:31 Mark: Regarding common passwords, there are solutions that don't allow the user to set the password, but generate them from a strong RNG 12:45:43 ...
That also changes the parameters of the attacker's solution, to get real entropy 12:45:50 Nigel: I agree, I think that's the right thing to do 12:46:44 Peter: The open part of the spec is the cost of the scrypt; if we can get the number down to 10, that would be good 12:47:04 Peter: Moving on to PINs. Need to resolve who shows the PIN and who enters the PIN 12:47:29 ... Both sides could include an auth-capabilities message that includes a value indicating PSK ease of input 12:47:53 ... 100 is super easy, 0 is impossible to enter 12:48:03 ... Whichever side is easier gets to do the entry 12:48:17 ... In a tie, the server presents, and the client inputs 12:49:00 ... For example, a phone has easy numeric input, and the TV says it's possible, but harder, so here the TV shows the number or QR code and the phone enters 12:49:35 ... Or, a phone and a speaker, or a TV and speaker 12:50:30 ... It requires two fields: the ease of input and the type to use (numeric, alphanumeric, QR code) 12:50:52 ... We discovered the need for this when we tried to implement it 12:51:07 Mark: This is sent pre-authentication. Are there any downgrade attacks possible? 12:51:42 ... It doesn't change the cryptographic parameters of the challenge? 12:51:44 Peter: That's right 12:52:27 Peter: With Asian languages, alphanumeric is more difficult so they tend to stick to numeric codes 12:53:32 ... The QR code would have to encode the same PIN, as you'd display both together 12:54:07 Anssi: In the speaker / TV case, could it be done without user interaction? 12:55:32 Nigel: There's also a human aspect to this, as people want it to be straightforward, whereas the protocol demands high entropy 12:56:35 ...
It's hard to see how choosing either one for input would go wrong 12:58:59 RESOLVED: Agreed to use the auth-capabilities message to decide where to input the pairing code 13:00:17 Chris: We did a study into the human aspects: https://dl.acm.org/citation.cfm?id=2858377 13:00:35 Mark: We may want to get accessibility review on this 13:01:48 RRSAgent, draft minutes v2 13:01:48 I have made the request to generate https://www.w3.org/2019/05/23-webscreens-minutes.html anssik 13:03:58 [discussing issue #111] 13:04:13 "The challenger must limit the time the responder has to send a response to 60 seconds (to avoid the possibility of brute-force attacks.)" https://webscreens.github.io/openscreenprotocol/#authentication 13:04:47 cpn has joined #webscreens 13:04:55 scribenick: cpn 13:04:56 Peter: We propose having a field to indicate the minimum bits of entropy, from 20 to 60, and the default is 20 13:05:38 Mark: How would an agent choose the entropy? Hardware limitations 13:06:16 Peter: It's a balance between difficulty of input and security 13:07:13 Mark: This allows us to change the requirement over time, so we may want to increase the number of bits 13:07:47 ... Seems like a good thing to negotiate 13:09:27 ... Typically, the controller will be the one setting the minimum 13:11:52 Peter: An attempt to downgrade would result in auth failure 13:11:52 Nigel: So it's an insistence 13:12:15 Mark: So the spec should include an analysis of the downgrade 13:14:50 RESOLVED: Move ahead with psk-min-bits-of-entropy, with a range of 20 to 60 to resolve #111 13:15:20 [discussing issue #135] 13:15:22 Peter: For issue 135, the kinds of certificates. Should we support EC certs or RSA? 13:16:14 ... We should require acceptance of EC certificates, but can use RSA if you want 13:16:48 Nigel: TLS 1.3 doesn't do RSA encryption, so this goes with the flow of the community 13:17:39 Peter: What certificate extensions should we use?
Mark suggested extensions should be ignored 13:17:56 Mark: In the future, we may want to specify attributes of the certificates that are used, but we're not ready to spec what those are now 13:18:26 ... Implementations today should not look at extensions, unless added to the spec 13:19:11 ... TLS 1.3 extensions don't seem necessary in our application, but I'd like to see how implementations make use of extensions to decide what the spec should say about that 13:19:39 Nigel: This is a wise approach. Until you have a use case, you don't know you need an extension 13:20:50 RESOLVED: For issue 135, require acceptance of EC certs, ignore cert extensions, and no requirement for TLS1.3 extensions 13:21:15 [discussing issue #118] 13:21:20 Peter: Issue 118 relates to the UI for trusted and untrusted data 13:22:08 ... Should we say anything in the spec about what the text that displays the PIN is like? 13:22:21 ... Do I show whether you're authenticated or not, or show previous failed auth attempts? 13:23:04 ... Do we need to show the auth state of the name or icon before authn 13:27:11 cpn_ has joined #webscreens 13:27:28 scribenick: cpn_ 13:27:31 Anssi: Generally, specs don't go into the UX, each platform has its own guidelines 13:27:34 Eric: We'd want to do some experimentation, as we don't have much experience with this 13:27:38 Chris: Non-normative text to illustrate the flow? 13:27:41 Peter: We may want to say that agents should make it clear which other agents are authenticated or not, for example. 13:28:50 Mark: We may be able to write some general principles 13:28:59 ... The spec talks about flagging devices that may be trying to impersonate other devices 13:29:23 ... Could go in the main spec or a separate paper.
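The two rules agreed earlier in this session - the auth-capabilities ease-of-input comparison (whichever side is easier enters the PIN; on a tie the server presents and the client inputs) and the psk-min-bits-of-entropy range from issue 111 - can be sketched as a pair of helpers. The field names and function signatures here are illustrative; the actual CDDL for these messages is still being drafted.

```python
import math

def pin_digits_for_entropy(bits: int) -> int:
    # A numeric PIN needs ceil(bits / log2(10)) decimal digits to carry
    # the requested entropy; the proposed default of 20 bits needs 7 digits.
    if not 20 <= bits <= 60:
        raise ValueError("psk-min-bits-of-entropy must be in [20, 60]")
    return math.ceil(bits / math.log2(10))

def choose_pin_entry_side(client_ease: int, server_ease: int) -> str:
    # Each side advertises PSK ease of input: 0 (impossible to enter)
    # to 100 (very easy). The easier side does the entry; on a tie the
    # server presents and the client inputs.
    return "client" if client_ease >= server_ease else "server"

assert pin_digits_for_entropy(20) == 7
assert choose_pin_entry_side(client_ease=90, server_ease=30) == "client"  # phone vs. TV
assert choose_pin_entry_side(client_ease=50, server_ease=50) == "client"  # tie
```

As Mark noted, this negotiation happens pre-authentication, so an implementation would still have to confirm that tampering with these fields cannot weaken the cryptographic parameters of the challenge itself.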
I agree we shouldn't require specific UI in the spec 13:33:24 RESOLVED: The spec should not require specific UX but may want to give guidance on particular aspects, e.g., showing whether an agent is authenticated or not 13:33:52 Topic: Security and Privacy 13:49:46 Mark: Some other open security and privacy issues, not covered by the auth section 13:50:27 ... We talk about TLS1.3 being important for the security architecture for OSP 13:50:38 ... Avoid issues with past TLS implementations 13:50:58 ... A few key issues that any application should be aware of, some potential attack vectors 13:51:35 ... Attempts to change how the handshake is done, change modes of use, downgrades, ciphers have tradeoffs 13:51:39 ... attacks based on timing and length of payloads 13:51:46 ... Content of keys and secrets 13:52:13 ... If the key is compromised, does that compromise future sessions: forward secrecy 13:52:58 ... 0-RTT, early data, which can potentially improve performance, but replay attacks are possible 13:53:15 ... RFC 8446 appendices C-E are good background material 13:53:45 ... For each attack vector, there are high level things we can do to make it more resistant to attack 13:54:19 ... We can forbid OSP endpoints from downgrading to TLS 1.2 13:54:21 Peter: Does QUIC require TLS 1.3? 13:54:28 Mark: So we may get this for free... 13:55:02 Peter: (checks) It does require it 13:55:13 Mark: The solution for avoiding cipher based attacks is to require longer ciphers 13:55:37 Zakim has left #webscreens 13:55:37 ... We should do some benchmarking to guide our choices 13:56:13 ... For timing attacks, some are based on time to encrypt the payload. TLS 1.3 requires use of constant time ciphers 13:56:53 ... We discussed using TLS pre-shared keys, but in the spec I don't think we require them to be used 13:56:58 Peter: We aren't using them 13:57:13 Mark: That removes issues with compromise of pre-shared keys 13:57:37 ...
For replay attacks, there are advantages to using early data, we'd want to use it for certain message types 13:58:32 Peter: The simplest way to avoid 0-RTT problems is to not use it. I don't think we have a use case that would benefit from 0-RTT 13:59:04 ... For now, let's not use 0-RTT and reassess if we think there's benefit 13:59:18 (general agreement) 13:59:22 Zakim has joined #webscreens 13:59:36 RRSAgent, draft minutes v2 13:59:36 I have made the request to generate https://www.w3.org/2019/05/23-webscreens-minutes.html anssik 14:00:07 Peter: The QUIC WG and TLS WG are pushing for only strong ciphers, not sure we need to do more than use what they've chosen 14:00:21 Mark: I wanted to check which ciphers have good hardware acceleration support on ARM chipsets 14:00:42 ... Impacts the CPU requirement, particularly for media streaming 14:01:58 ... It basically comes down to which key length you use with AES, or ?. Wasn't clear whether block based ciphers are a good fit for streaming applications 14:03:31 ... For items 1,3,4,5, we propose to update the spec to require TLS 1.3 and constant time ciphers. For 3 and 4, note that those features won't be used. For 2, consider hardware requirements when recommending ciphers 14:04:29 Nigel: Regarding pre-shared keys with TLS 1.3. There are two circumstances where it's used: out of band or resumption, something we should note in the spec 14:05:24 Mark: Session resumption is part of the spec 14:05:41 Peter: The application could rule it out, to avoid storing state between sessions 14:12:49 cpn has joined #webscreens 14:12:58 scribenick: cpn 14:13:05 Action: Peter to research if there is advantage to session resumption outside of early data 14:13:09 Peter: Resuming a connection could use less power. Then we'd have to figure out what needs to be stored for session resumption 14:13:14 Eric: Are there any issues with keys? There are for content that needs authentication.
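The direction agreed above - require TLS 1.3, don't use 0-RTT - can be illustrated with Python's ssl module. This is only a sketch of the version floor: Python's ssl module never sends early data, so 0-RTT does not arise here, and QUIC stacks mandate TLS 1.3 on their own.

```python
import ssl

def make_osp_client_context() -> ssl.SSLContext:
    # Refuse anything below TLS 1.3, matching the proposed OSP requirement
    # that endpoints must not downgrade to TLS 1.2 or earlier.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx

ctx = make_osp_client_context()
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```

A real OSP agent would additionally configure certificate handling per the spec (self-signed agent certificates rather than the web PKI), which this sketch does not attempt.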
What happens if you start playback on the controller then continue remotely? Do you require authentication? 14:13:16 Peter: No, you shouldn't require it. 14:13:19 Eric: So how does it work for encrypted content? 14:13:23 Peter: The keys for DRM are completely separate 14:13:26 Mark: This is about securing the connection between controller and receiver. It doesn't describe how the content is encrypted in addition. 14:13:48 Peter: I'm OK with banning 0-RTT and TLS less than 1.3. 14:13:48 Mark: I think it depends on the ciphers we choose for item 2. 14:13:48 ... I'd like to understand better the trade-off with constant time AEAD ciphers. 14:13:49 ... Most of the ciphers for TLS 1.3 are constant time. 14:13:52 ... We may get it for free 14:14:33 [discussing issue #131] 14:14:37 Mark: Issue 131. What can network agents outside the LAN potentially do, if they can route traffic to OSP agents on the LAN 14:15:15 ... This happens a lot, as home routers are very bad, and allow internet traffic to be routed to internal endpoints due to poor UPnP implementations 14:16:30 ... Simplest thing to do is nothing. The worst that happens is a DoS, where you swamp the device with authn requests or failed handshakes 14:16:44 ... Or we can tell the user if we detect attempts from unexpected network endpoints 14:17:21 ... For mitigation, one idea (which I'm not convinced of) is to put something in the early handshake that could only be found through mDNS 14:17:56 ... Most restrictive, we could ban connections from non-private IP addresses 14:21:05 ... https://www.theverge.com/2019/1/2/18165386/pewdiepie-chromecast-hack-tseries-google-chromecast-smart-tv 14:23:19 Mark: Better to do it earlier, before the attacker can have any side effects on the target device. Ideal if the extra data is provided before a PIN prompt is shown 14:23:22 Eric: Required, rather than ideal 14:23:40 Mark: Don't want to prevent us from using ICE in the future 14:24:37 ...
Advertise a token through another mechanism, e.g., Bluetooth 14:27:38 Nigel: Is there information that's already present that could be used, rather than having to generate a separate token? 14:28:34 Mark: We currently advertise the fingerprint, but that's not unique information, we'd want something that's only accessible to the discovery mechanism 14:29:35 RESOLVED: Add a token, to be advertised through mDNS, and required in the authentication exchange prior to PIN display 14:30:09 Mark: In the Presentation API spec, we had privacy review feedback. If you start a presentation, and another party also connects, then the UA should be able to notify the user that this happened 14:30:23 ... So that you know that information you share may be visible to someone else 14:31:00 ... The protocol didn't have a way to notify all controllers of the individual connections added to a presentation 14:31:44 Anssi: Should the joining party also be notified? 14:32:18 Mark: This should address both cases. To open a connection to a presentation, you send a request and get a response 14:32:34 ... We can add the number of presentation connections and send an event to the other connections with the number 14:32:45 ... So everyone who is connected has the same view of the number of connections 14:33:44 RESOLVED: Add a presentation connection changed event that includes the number of connections to the presentation 14:36:11 Mark: Issue 114, trusted displays. This is a complicated topic. It's important to come up with ways to distinguish levels of trust, and a lot of requirements to think about 14:36:11 ... No concrete proposal yet. 14:36:52 ... We've focused on MITM, ensuring data remains private, we haven't focused on providing provable properties of the device (manufacturer, software, specific name, etc) 14:37:24 ... If we want this, we should add an attestation protocol. 14:37:41 ... What facts do we want to verify?
Where do they come from: the manufacturer or the software? 14:37:54 ... Information to verify manually by the user. 14:38:27 ... Who cares about doing the verification: the user agent or the webpage? Is it for the user or application? 14:39:06 ... When we've gathered some of this information, we can make a proposal. But it would be speculative to do it now 14:39:48 ... My request to the group is to think about these questions and feed back. We'll want to get internal input from Google 14:40:04 Peter: What do other browsers feel about the question of streaming from tab capture when there's encrypted content on the page? 14:40:15 ... Cast allows that now, but only because of the certificate on the device 14:41:09 Eric: Also, what if I'm signed in and play encrypted content, then fling the URL. The other device would need to be able to decrypt using the keys already exchanged 14:41:50 Mark: Netflix are interested in this 14:42:28 Eric: AirPlay has a mechanism to share information such as cookies 14:42:47 ... A future version of AirPlay that's OSP based would have to have this 14:43:20 Mark: It requires a solution for managing root keys 14:44:20 Peter: We can standardise the mechanism, but then it's up to vendors which root keys they support 14:44:34 Mark: We want to avoid having all receivers having to be certified by all controllers 14:45:14 Nigel: We have to be careful, in our experience as a broadcaster, getting trust in a horizontal market is very difficult 14:45:39 ... Also implies management and a compliance regime 14:45:48 Peter: Feels like a V2 issue 14:46:55 Mark: Knowing what information is important from one agent to another will help start that process (manufacturer, serial number, certificates, ...)
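The mDNS token mitigation resolved earlier for issue 131 amounts to: advertise a fresh random token only via local discovery, and refuse to show a PIN prompt unless the connecting party echoes that token, proving it actually performed mDNS discovery rather than blindly routing packets from outside the LAN. A minimal sketch, with an invented TXT key name ("at") since the spec text is not yet written:

```python
import secrets

class OspReceiver:
    def __init__(self):
        # Fresh random token, advertised only in the mDNS TXT record.
        # An off-LAN attacker who can route packets to this agent but
        # cannot observe mDNS never learns it.
        self.auth_token = secrets.token_urlsafe(16)

    def mdns_txt_record(self) -> dict:
        # "at" is a hypothetical key name for illustration only.
        return {"at": self.auth_token}

    def may_show_pin_prompt(self, presented_token: str) -> bool:
        # Constant-time comparison; only parties that saw the mDNS
        # advertisement get as far as a PIN prompt.
        return secrets.compare_digest(presented_token, self.auth_token)

rx = OspReceiver()
assert rx.may_show_pin_prompt(rx.mdns_txt_record()["at"])
assert not rx.may_show_pin_prompt("guessed-token")
```

Per Eric's comment, presenting the token would be a hard requirement before any PIN UI is shown, not merely an ideal; and the check happens early enough that a remote attacker gets no observable side effects on the device.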
14:47:05 Peter: The use case is "can this agent be trusted with encrypted content" 14:47:48 Mark: That usually implies something about the hardware, where it came from, and a trusted software stack 14:48:47 RESOLUTION: Issue 114, defer to V2, and invite feedback on which information is important to establish trust between agents 14:50:44 Topic: Presentation API Protocol Issues 14:51:58 Mark: We realised that for a couple of messages, there's nothing the embedder or JavaScript needs to know about what happened 14:52:05 ... We could simplify the protocol around closing connections and terminating presentations 14:52:24 ... This would simplify implementations, but we'd lose some debugging information 14:53:12 ... When you close a connection from the receiver, we send an event to the controller saying the connection is closed 14:54:01 ... Proposal to send a close event, then send a change event to all other parties. Then we could remove close-request and close-response, as it doesn't require a response. 14:56:47 Peter: A close response doesn't go anywhere in the Presentation API 14:58:31 Mark: The channel is basically useless once one side closes it 14:58:38 Peter: And the next time it's used, you'd get an error anyway 14:59:24 Mark: Terminate works the same way. When you decide to terminate, you only signal one side. 14:59:53 ... But you can end up with a presentation that's still running that you can't terminate, unless you reconnect 15:00:12 ... If we change the spec to give the controller feedback on termination, we could use request/response, or use an event 15:02:34 Peter: Seems strange for the controller to send a terminate event, as it's something that occurs at the receiver 15:02:46 Mark: An event is more like a request without a response 15:04:35 Peter: What about a receiver that refuses to terminate, there'd be no way to know 15:05:41 Mark: We don't have a way in the spec to see if a presentation is still running 15:06:05 ...
Keeping the response would be helpful for debugging issues 15:06:16 ... But, if we follow the spec strictly, it's not required 15:08:36 ... I think Chrome would want to know if termination requests were failing 15:09:23 RESOLVED: Remove request/response messages for presentation close, and keep request/response for terminate and explain why this is needed 15:11:35 Topic: Streaming and Capabilities 15:11:45 scribe: tidoust 15:12:37 Peter: Showed an idea on how we can have audio and video streams 15:13:18 ... [showing streams in the spec] 15:15:06 ... Basically, anything that does not change very often goes into metadata, e.g. cvr or frame size. A frame can reference metadata that has been negotiated previously and does not need to be sent over again. 15:16:10 ... For video, you want to be prepared for things such as temporal scalability, where you need to reference frames that can be skipped. 15:18:04 ... For time scale, you don't need to put the numerator and the denominator in the packet, only the numerator is enough. 15:19:49 ... The part that is in the pull request is the concept to start or stop a session. The sender indicates the codec it supports. Rather than having the receiver select, the receiver could say which codec profiles it supports. 15:20:50 ... You may want to specify the encoding and screen resolution. 15:21:10 Eric: Do we need the same thing in the offer? 15:21:51 Peter: That is a good idea. 15:22:23 mfoltzgoogle: Aspect ratio? Which side is responsible for producing output that matches the aspect ratio? 15:22:58 Peter: So far, I'm assuming the receiver can do that. 15:25:33 Peter: I can usually include a sync time in some frame that the device can use to sync things up. 15:25:55 Eric: Is it out of scope to send video to a receiver and the audio to the speakers? 15:26:04 Peter: Yes. 15:26:44 mfoltzgoogle: We don't have a protocol yet to do that. 15:26:55 Peter: If you have some ideas 15:27:12 Eric: I don't but I know some people that do.
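Earlier in the day the group leaned toward exchanging RFC 6381 codec identifiers taken from extended MIME types; in the streaming offer/answer above, a receiver could advertise those strings as its capabilities. A sketch of extracting the codec strings from an extended MIME type follows; the parsing is deliberately simplified relative to the full RFC 2045/RFC 6381 grammar.

```python
def codec_strings(mime_type: str) -> list[str]:
    # Extract RFC 6381 codec identifiers from an extended MIME type,
    # e.g. 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'. This naive
    # split ignores the full MIME parameter quoting rules.
    for param in mime_type.split(";")[1:]:
        name, _, value = param.strip().partition("=")
        if name.strip().lower() == "codecs":
            return [c.strip() for c in value.strip().strip('"').split(",")]
    return []

assert codec_strings('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') == \
    ["avc1.42E01E", "mp4a.40.2"]
assert codec_strings("audio/mpeg") == []  # no codecs parameter present
```

As Eric noted, each codec family defines its own identifier syntax (profile, level, etc.), so interpreting the extracted strings consistently across implementations is the harder part that the spec would need to pin down.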
I know it is an issue. 15:27:45 anssik: Don't know if you sync clocks across devices 15:27:53 Eric: yes, through PTP. 15:28:29 ... Sending audio and video here, and also sending video elsewhere. Something that we'd definitely want to handle at some point. 15:29:25 there's https://www.w3.org/community/webtiming/ multi-device timing CG 15:29:28 anssik: There's a community group, called the Multi-Device Timing Community Group, which has been looking into this 15:29:45 ... They have a proof of concept for synchronizing media, which was pretty convincing. 15:30:32 ... Potentially an issue to open? 15:30:56 Peter: Yes, we should track this somewhere. I will look into what solutions may be possible. 15:31:47 Peter: Moving to remoting. Remoting is not live streaming. The media is already there. 15:32:31 ... You're trying to transfer from one buffer to another buffer. As in MSE. Different from streaming. 15:33:04 RRSAgent, draft minutes v2 15:33:04 I have made the request to generate https://www.w3.org/2019/05/23-webscreens-minutes.html anssik 15:33:18 ... The way remoting can work is that the receiver sends capabilities to the sender just as we discussed at TPAC, and then, rather than offering encodings, the sender just sends the encoding for establishing a reference and starts pushing the media. 15:33:46 Eric: That's assuming that the app has enough information about the capabilities of the receiver. 15:34:05 Peter: Yes, this supposes the application has access to capabilities information. 15:34:44 ... Need to know maximum bitrate and codec support. 15:34:52 ... Transcoding is always a possible fallback. 15:35:04 ... Size is another dimension. 15:35:19 ... e.g. 720p vs. 4K. 15:35:43 cpn: Also whether it can decode in software or hardware 15:36:14 Peter: Yes, this is all assuming the app has the information or that the user agent can transcode. 15:36:29 Eric: I'd like to avoid the necessity to transcode. 15:37:09 ... Decrypt, decode, encrypt, encode.
You may not be contractually allowed to transmit unencrypted frames and may not have encryption capabilities. 15:38:38 ... We need to come up with a mechanism whereby the application could offer the different possibilities and the receiver could select one without revealing too much information. 15:38:50 ... The application is the only one that knows what its server source can offer. 15:39:13 mfoltzgoogle: Then we would need a way for the app to know which one of these offers is chosen. 15:39:19 Eric: Yes, it would have to know. 15:39:57 Peter: Essentially, it's about having the streaming offer/response exchanges at the API level. 15:41:18 mfoltzgoogle: Through Media Capabilities, the app is already able to tell capabilities. 15:41:27 Peter: This is the opposite though. 15:41:42 mfoltzgoogle: Yes, it's better from a fingerprinting point of view. 15:42:05 RRSAgent, draft minutes v2 15:42:05 I have made the request to generate https://www.w3.org/2019/05/23-webscreens-minutes.html anssik 15:42:56 [end of day 1] 16:25:57 Zakim has left #webscreens