23:43:37 RRSAgent has joined #webscreens 23:43:37 logging to https://www.w3.org/2019/09/15-webscreens-irc 23:43:37 dlibby has left #webscreens 23:43:43 RRSAgent, make logs public 23:44:12 RRSAgent, this meeting spans midnight 23:44:21 mfoltzgoogle has joined #webscreens 23:44:27 Meeting: Second Screen WG F2F - Day 1/2 23:44:55 Agenda: https://www.w3.org/wiki/Second_Screen/Meetings/September_2019_F2F#Agenda 23:44:57 Chair: Anssi 23:45:39 dlibby_ has joined #webscreens 23:45:47 Peter has joined #webscreens 23:45:57 msw has joined #webscreens 23:46:19 staphany has joined #webscreens 23:47:30 ericc has joined #webscreens 23:48:26 Present+ Takumi_Fujimoto, Peter_Thatcher, Mark_Foltz, Anssi_Kostiainen, Eric_Carlson, Francois_Daoust 23:48:47 scribe: tidoust 23:49:21 Topic: Day 1 start 23:50:29 Anssi: Welcome, all! [going through the overall agenda: group discussions, joint meeting with Media & Entertainment IG in the afternoon, detailed agenda in Google Docs] 23:50:58 ... Work on the APIs started several years ago, now in a stable state. Work on the Open Screen Protocol has been progressing since then. 23:51:22 ... We try to meet twice a year, at TPAC, and usually one time in between. Last meeting in May in Berlin. 23:51:45 ... We usually make decisions during meetings and then enact the decisions in between the meetings. 23:52:29 Present+ Daniel_Libby, Hyojin_Song, David_Schinazi, Victor_Vasiliev, Mike_Wasserman, Staphany_Park 23:54:36 mfoltzgoogle: For the joint session with Media & Entertainment IG, I didn't prepare any specific material, planning to go through a generic presentation. 23:55:07 anssik: You might reuse the material that you're going to present during the AC meeting 23:55:26 ... Q&A style session 23:56:27 ... Let's do a quick round of introductions. 23:56:58 ... I'm Anssi, from Intel, chair of both Second Screen WG and CG. Small and productive groups with a well-scoped problem. 23:57:31 ericc: Eric from Apple. Engineer working on WebKit at Apple. 
23:58:07 Francois: from W3C, team contact. 23:58:32 takumif: Takumi, from Google. Working with Mark and Peter. I have some proposals for the Presentation API that I'll present tomorrow. 23:58:54 Peter: from Google. Looking at making the Cast ecosystem more open. 23:59:35 mfoltzgoogle: from Google. Spec editor for the Presentation API. Now focused on the Open Screen Protocol. I'm the de facto editor for that. Trying to make connected displays usable by developers in general. 00:00:07 Staphany: from Google. Working on the Screen integration proposal. 00:00:34 Mike: Picking up work started by Staphany. Related work. 00:00:59 Victor: from Google. TLS 1.3, interested in authentication. 00:01:21 David: from Google. Chair of mDNS group at IETF. Interested in discovery. 00:01:47 Hyojin: from LG. Been looking at integrating displays. 00:02:07 Daniel: from Microsoft. Working on Edge. 00:02:37 anssik: Thanks. Going back to the agenda, any other topic you'd like to bring to the agenda? 00:03:14 ... We recently added a slot for Josh to present accessibility use cases. 00:04:32 mfoltzgoogle: Day 1 is focused on protocol work. Day 2 is to discuss API work. I invited Jer and Mounir to join morning of second day. Afternoon around planning, as we need to think about rechartering by the end of the year. 00:04:56 anssik: Goal is to have rough consensus on a proposal for a new charter. 00:06:24 Mike: Happy to present screen integration, display positioning if you think that can be useful. 00:06:57 anssik: Good point, we may be able to discuss it today before we close, otherwise tomorrow before planning. 00:07:33 ... Let's add "Screen enumeration, window placement & window segments" to the agenda then. 00:07:55 ... Daniel, any specific questions you would like us to discuss? 00:08:12 Daniel: Nothing at the moment. More exploring for now. 00:08:27 Topic: Brief overview 00:09:32 mfoltzgoogle: We'll put a public copy of the slides after the meeting. 00:11:05 mfoltzgoogle: I'm going to give a lightning talk on what we've done until now. We had a productive session in Berlin where we made a number of resolutions. I'm going to mention pull requests that have been merged since then, and look at new points not addressed in Berlin. 00:11:27 ... We landed quite a few changes to support not only remote control but also streaming. 00:12:09 ... Looking at history: it started with the Presentation API before I joined. Creating a new Web page and presenting it in another display. 00:12:30 ... The Presentation API quickly transitioned to a WG, now a Candidate Recommendation. 00:12:47 ... The Remote Playback API quickly came around as well. 00:13:08 ... Issues with interoperability prompted work on the protocol. 00:13:33 ... In Berlin, we said we were at version 1.0 of the Open Screen Protocol, but we have made quite a few changes since then. 00:13:48 ... The Presentation API allows a controlling web page to present a URL on a receiving device. 00:14:25 ... The receiving device could be a separate Web renderer. For Google, Chromecast is our main target for this API. It could also be a display connected through HDMI. 00:14:59 ... The browser discovers the displays that can be used and lets the user pick the display they want. Not an enumeration API, the controlling page does not see the list. 00:15:25 ... The connection features a communication channel that pages can use to exchange commands, or whatever. 00:15:39 ... Both sides can close or terminate the presentation at any time. 00:16:15 anssik: Whatever gets presented on the second screen is considered one window without chrome. No use case where we're considering multiple windows on the second screen. 
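The connection lifecycle just described (present a URL, exchange messages over the connection, either side may close or terminate at any time) can be modeled as a toy state machine. This is a hedged sketch: `PresentationConnection` here is a stand-alone Python model, not the Web API object, and the state strings simply mirror the Presentation API's connected/closed/terminated states.

```python
class PresentationConnection:
    """Toy model of a Presentation API connection lifecycle: either side
    may close the connection or terminate the presentation."""

    def __init__(self, url: str):
        self.url = url
        self.state = "connected"      # real connections begin as "connecting"
        self.messages: list = []      # stand-in for the communication channel

    def send(self, message: str) -> None:
        if self.state != "connected":
            raise RuntimeError("cannot send in state " + self.state)
        self.messages.append(message)

    def close(self) -> None:
        # Closing drops the channel; the presented page keeps running
        # and the controller may reconnect later.
        self.state = "closed"

    def terminate(self) -> None:
        # Terminating tears down the presented page itself.
        self.state = "terminated"
```

Note the asymmetry the minutes point at: `close()` leaves the presentation alive, while `terminate()` ends it, and both are available to both sides.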
00:16:24 kzms2 has joined #webscreens 00:16:44 mfoltzgoogle: [Demo of a slideshow using the Presentation API] 00:17:06 https://googlechromelabs.github.io/presentation-api-samples/photowall/ 00:17:38 mfoltzgoogle: The group also worked on the Remote Playback API. Similar concept but focused on audio/video remoting. 00:18:10 ... The page can request remote playback of a media element, user selects display in a list as for the Presentation API. 00:18:29 ... Media state is synchronized between local and remote device. 00:18:53 ... Most implementations don't have simultaneous playback, meaning local playback stops when remote playback starts. 00:19:34 ... Two main ways this can be implemented: one is to take the URL and send it to the remote display (media flinging), the other is to stream the media to the remote display (media remoting). 00:20:29 ... To address interoperability issues between displays, we decided not to force an existing proprietary solution, but rather to build up an open solution. That triggered the work on the Open Screen Protocol in the Second Screen CG. 00:21:35 ... Initial scope is devices on the same LAN. For the Presentation API, we focus on the flinging use case, not the remoting use case. Same thing for the Remote Playback API, but we've also added support for media streaming. 00:22:13 ... The goal of the protocol is to create a fully interoperable solution between browsers and displays. 00:23:16 ... We started by looking at functional requirements (discovery, control needs, communication channel, authentication and confidentiality, media commands) 00:24:34 ... There are a bunch of non functional aspects that we think are important, including usability, privacy and security, efficiency, making sure that implementation is doable on constrained devices. 00:25:08 ... And also making sure that people can extend the protocol if needed, and that the protocol can be upgraded. 00:25:29 ... 
Also want to preserve compatibility with existing devices if possible. 00:26:26 ... Looking at the protocol stack, it includes discovery, authentication, transport and message format. We looked at many alternatives. 00:26:45 ... We tried to reuse standards that already exist, or are being worked on in IETF. 00:27:05 ... For discovery, we decided to support mDNS and DNS-SD. 00:27:29 cpn_ has joined #webscreens 00:27:57 ... For authentication, we chose TLS 1.3, mostly because of its link with QUIC. In the absence of a broadly available mechanism, we chose SPAKE2 for the key exchange. 00:28:06 ... For transport, QUIC. 00:28:20 ... For message format, a language based on CBOR for efficiency. 00:29:13 ... Agents will do various operations, [reviewing the life of an agent] 00:29:44 ... We require authentication as soon as possible after discovering agent name, capabilities and status. 00:30:32 ... Bringing us to where we are today, we looked at requirements, alternatives, came to consensus with the current stack. 00:30:58 ... We rolled everything up into a document that we called 1.0, but then we decided to add streaming, so back to 0.9. 00:31:22 ... We've been working on an Open Screen Protocol Library implementation, ~50% done. 00:31:43 ... There's still about a dozen issues open on our GitHub that we should resolve. 00:32:56 ... We're going to discuss some of them today, e.g. support for TLS 1.3, enums, refinements to the remote playback protocol, when agents can disconnect/reconnect using QUIC properly, etc. 00:33:55 ... In parallel, we worked on a TAG Explainer to describe what we're doing and why we're doing it. We need to finish it. 00:34:38 ... Also need to look at how we support other schemes like cast: or hbbtv:. We need to finish the document that explains the pros and cons of supporting custom schemes. 00:35:23 ... In the Open Screen Protocol Library, we implemented mDNS, QUIC, a code generation tool for CBOR, support for the Presentation API, demos in C++ and Go. 
00:36:07 ... Still some work to do such as SPAKE2 implementation, evolve the protocol in terms of agent capabilities, support the remote playback protocol properly and expose an API. 00:36:18 anssik: No coupling to any Google services? 00:37:00 mfoltzgoogle: No. We do import some code from Chromium. We developed the library so that clients can inject their own implementations of something when needed. 00:37:28 ... We're looking at integrating the Open Screen Protocol Library into Chromium down the line. 00:37:43 anssik: Edge Chromium uses the same WebRTC stack? 00:37:47 Daniel: Yes. 00:38:45 mfoltzgoogle: Some interesting ideas for a "v2". We'd like to discuss them to some extent here. Making it more generic to inject metadata during remote playback (GenericCue/DataCue). 00:39:16 ... Multi-device timing/media sync, could be brought up during the joint session with the Media & Entertainment IG. 00:39:35 ... Attestation [scribe missed explanation] 00:40:10 ... Game controller input could be a good use case. We left generic data frames in. Are there more specific scenarios? 00:40:37 ... There are cases where mDNS does not work. Do we want to have backup/alternate discovery schemes? 00:40:48 ... And a few others that we probably won't discuss during this meeting. 00:41:50 Daniel: The implementations for non-browser agents, what does that look like? 00:42:07 mfoltzgoogle: The intention is to make it very easy to make products in this space. 00:42:25 ... There will be multiple protocols for a while. 00:42:37 anssik: This is the only fully specified open protocol for this use case. 00:43:15 David: Question on authentication. How do SPAKE2 and QUIC interact? The QUIC connection is non-authenticated. 00:43:33 Peter: You just validate that the other side has the PSK. 00:43:56 David: Crypto binding between the QUIC connection and SPAKE2? 00:44:11 Peter: No, we just remember the certificate for reconnection. 00:45:08 David: That's insecure. 
If someone man-in-the-middles the first QUIC connection, you may end up with a wrong local certificate. There needs to be a binding. 00:45:16 Peter: Yes, covered with a fingerprint. 00:45:24 [to be discussed later] 00:45:40 Topic: Review of Open Screen Protocol changes since Berlin 00:45:50 mfoltzgoogle: Naming is hard! 00:46:15 ... Current set of names, which are being used in the spec: 00:46:28 ... 1. Open Screen Protocol Agent: general 00:46:45 ... 2. Advertising Agent: mDNS responder that also plays a QUIC server role 00:47:06 ... 3. Listening Agent: mDNS listener / QUIC client 00:47:14 Dongwoo has joined #webscreens 00:47:21 ... 4. PSK Presenter: during authentication 00:47:34 ... 5. [missed] 00:48:07 ... Also from an API perspective, we talk about the Controller, the Receiver for the user agents. 00:48:26 ... For streaming, we talk about Media Sender and Media Receiver for agents that send/receive audio/video frames. 00:49:06 ... We don't assume that these roles are tied together. You could imagine that the Controller is a Media Sender and vice versa. 00:49:45 ... Agents can choose what role they want to play. Some use cases from Media & Entertainment IG for Smart TVs for instance. 00:50:07 ... We made some changes to some of the protocols in the OSP. 00:51:05 ... We decided that we don't need QUIC connection IDs, nor length prefixing for CBOR messages. In the spec, we have message type IDs as CDDL comments. 00:51:34 ... We use QUIC varints to encode message type keys, with shorter ones for the most common messages. 00:51:48 ... For TLS, we require EC certs and ignore most extensions. 00:52:48 ... We wrote in more detail how agents should collaborate. They can advertise "capabilities" for which we reserved a range of numbers. Capabilities describe not only message types but also rendering capabilities (supports audio, supports video). 00:53:29 ... Possibility for an agent to change its metadata and notify the other side of the changes. 00:54:02 ... 
Some changes to counters for protocol messages. 00:54:19 ... Registration for extended capabilities and messages should be done through a pull request on GitHub. 00:54:58 ... Remote playback is complicated because you not only send audio/video but also text tracks, event tracks, and that's highly dynamic. 00:55:16 ... Changes to text tracks are now supported by the OSP. 00:55:56 ... We made some simplification on the messages, merging request and response into an event message. 00:56:20 ... Now an event that tells the controller how many independent connections there are to a presentation. 00:57:07 ... We made changes to authentication as well. Devices vary in their input capabilities (keyboard, TV remote, etc.). Agents can negotiate who shows the pairing code, to optimize and choose the length of the PSK. 00:57:57 ... We added a token to mDNS to prove that you're part of the local network. 00:58:12 ... [going through other changes] 00:58:33 ... Added some guidance for when mDNS only contains a truncated display name. 00:59:05 ... We just added streaming to the OSP. 00:59:57 ... We kept data frames, added session negotiation, stats reporting (success of playing back the media, network stats), added support for attaching a media session for remote playback (MSE/HLS video) 01:00:11 ... And added support for screen rotation. 01:01:26 ... All the terminology stuff has been percolated throughout the spec. We emphasized the link between conformance and protocol capabilities. From the OSP spec to the API spec, but not the other way around. We may also do a two-way thing through non-normative statements. 01:01:51 ... That kind of covers the retrospective. Next is to deep dive into unresolved issues. 01:02:03 RRSAgent, draft minutes v2 01:02:03 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html tidoust 01:33:54 tidoust has joined #webscreens 01:35:38 mfoltzgoogle has joined #webscreens 01:38:43 takumif has joined #webscreens 01:39:48 ericc has joined #webscreens 01:40:08 Topic: Open Screen Protocol 1.0 issues to discuss 01:40:45 mfoltzgoogle: 4 high-level topics, some of which were covered in Berlin to some extent, some of which need more discussion. 01:41:17 ... We decided a while ago to use QUIC and TLS but we didn't define the profile that we wanted to use TLS with. TLS provides many options that endpoints can negotiate. 01:42:02 ... We're mostly following the usual pattern. Elliptic curve cryptography was better than RSA. 01:42:43 Victor: If you use TLS 1.3, you basically get all of this out of the box. TLS does the handshake and QUIC does the encryption. 01:43:15 ... The short version is that, if you use 1.3, you can get away without specifying a profile, because 1.3 gets rid of all the bad choices. 01:43:55 ... Constant time is a property of implementations, not of ciphers. Basic implementations of AES are not constant time. 01:44:10 ... Just 1.3 gives you sufficient security. 01:44:12 ACTION: mfoltzgoogle to remove notes about the TLS profile; if you use 1.3, you can get away without specifying a profile, because 1.3 gets rid of all the bad choices. 01:44:23 anssik: Could you document that in an issue? 01:44:27 Victor: Sure! 01:45:15 mfoltzgoogle: One part is banning the bad ciphers; the other is telling agents what ciphers they must support, and 1.3 has some mandatory and some recommended. 01:46:23 ... Currently the spec does not have a distinction between what type of agent needs to support what. We wanted things to be symmetric. Support for constrained devices may require support for non-hardware implementations. 01:47:46 Victor: For signature algorithms, it depends entirely on what you put in your certificate. 
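Victor's point, that requiring TLS 1.3 makes the bad choices disappear, is close to a one-liner for implementers. A minimal sketch with Python's stdlib `ssl`, illustrative only: the actual protocol runs TLS 1.3 inside QUIC (which `ssl` alone does not provide), and OSP agents authenticate self-signed certificates via the PSK flow rather than the Web PKI.

```python
import ssl

def make_agent_tls_context() -> ssl.SSLContext:
    """Client-side TLS context pinned to TLS 1.3, so no legacy protocol
    version or pre-1.3 cipher suite can be negotiated."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    # A real OSP agent would also replace Web-PKI verification with its
    # own certificate-fingerprint checks; omitted in this sketch.
    return ctx
```

With the minimum version pinned, only the TLS 1.3 suites (AES-GCM and ChaCha20-Poly1305 families) remain on the table, which matches the cipher discussion that follows.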
01:48:26 mfoltzgoogle: I have a separate presentation on certificates. 01:49:06 ... The last TLS implementation choice is what extensions are mandatory (the ones that are mandatory in the spec), and basically all optional ones are ignored. 01:49:53 David: One quick point about extensions. You may phrase it in such a way that some implementations can still use them (opaque and transparent for applications) 01:50:51 mfoltzgoogle: We wanted to make sure that there are efficient ciphers available. I couldn't find benchmarks. 01:51:49 ... Proposal is to fill in the table. AES-128-GCM is likely going to be the fastest. 01:52:15 Victor: AES is faster if you have acceleration. 01:52:48 ... On Intel processors, accelerated AES is about two times faster. Roughly. 01:53:04 mfoltzgoogle: I can spend an hour and fill that table. 01:53:27 ... We'll likely recommend AES-128-GCM and CHACHA20 in the end. 01:53:45 ... The next is deciding which signature algorithms to support. 01:54:09 ... secp256r1 is mandatory. Again, no benchmark available. We don't have clear data for a decision. 01:55:01 Victor: Quick note about performance. You should also check for signature and verification. ECDSA is usually faster for signature, but slower to verify. About the same thing in the end. 01:55:31 mfoltzgoogle: I did find some benchmarks for EdDSA that look better. 01:55:44 David: Yes, it could be a good fit here. 01:56:31 Victor: ECDSA has a potential issue that you can leak your private key if you use the same random numbers. EdDSA has some internal protection. It's more foolproof. 01:57:28 mfoltzgoogle: The only action item I propose is to fill in the benchmark. EdDSA 25519 seems the best candidate here. 01:58:15 Victor: TLS uses "mandatory" very liberally. I wouldn't worry too much about mandatory things in TLS 1.3. 01:57:28 ACTION: mfoltzgoogle to fill in benchmark for ciphers. 01:57:28 ACTION: mfoltzgoogle to fill in benchmark for signature algorithms. 
Also check for signature and verification. ECDSA is usually faster for signature, but slower to verify. 01:57:28 ACTION: mfoltzgoogle to update TLS section to "comply with the spec" for TLS cookies and session resumption, and that's it. 01:59:08 mfoltzgoogle: Last one is session resumption, which Victor suggests we don't do if we can get rid of it. 02:00:06 ... In most cases, disconnection/reconnection is less than one every 10 seconds. 02:00:52 ... We need to provide notes about when agents close connections. I assume for QUIC, it's a memory saving to close the connection. 02:01:38 ... We're leaning towards requiring session resumption. If we do get data that it's not viable, we may have to revisit. 02:01:52 David: Not planning on using 0-RTT data? 02:01:54 mfoltzgoogle: No. 02:01:54 RESOLUTION: Specifically ban 0-RTT data. 02:02:14 David: That's reasonable. I wouldn't ban it though. What is the purpose of banning TLS features? 02:02:54 Victor: Most of the time, session resumption puts the session cache in memory. 02:03:49 David: There is no real downside to having session resumption. For 0-RTT data, you have to do spec work to identify when it can be used, e.g. to request capabilities at the start. 02:04:08 Victor: Both parties have to opt in for session resumption. 02:04:36 ... Almost all extensions can be ignored on both sides, apart from a few restricted ones. 02:06:25 ... It's really important for parties to be able to ignore extensions on the Web. 02:06:40 ... 50 extensions registered or so, but only ~20 real. 02:07:08 ... Some of them are mandatory. QUIC defines extensions. 02:07:42 David: Note the "extension" mechanism in TLS dates back to previous versions, hence the notion of "mandatory extensions". 02:08:05 mfoltzgoogle: How do I say that agents must support mandatory extensions for QUIC? 02:08:21 Victor: You just say "comply with the spec", and that's it. 
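As noted in the changes review earlier, message type keys are encoded as QUIC variable-length integers, with shorter encodings reserved for the most common messages. A minimal sketch of that encoding (RFC 9000, Section 16: the top two bits of the first byte select a 1-, 2-, 4-, or 8-byte big-endian form); the helper names are illustrative, not from the Open Screen Library.

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a QUIC varint (RFC 9000 §16)."""
    for prefix, length in ((0b00, 1), (0b01, 2), (0b10, 4), (0b11, 8)):
        if value < 1 << (8 * length - 2):       # 6, 14, 30 or 62 usable bits
            data = value.to_bytes(length, "big")
            return bytes([data[0] | (prefix << 6)]) + data[1:]
    raise ValueError("value too large for a QUIC varint")

def decode_varint(data: bytes) -> tuple:
    """Return (value, number of bytes consumed)."""
    length = 1 << (data[0] >> 6)                # length is encoded in the prefix
    value = int.from_bytes(data[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length
```

The "shorter ones for the most common messages" point falls out naturally: a message type key below 64 costs a single byte on the wire.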
02:10:10 [some more discussion about extensions] 02:11:10 mfoltzgoogle: Same thing for the cookie extension. Not something that we're going to use, I believe, and it brings complexity. It requires the client to track additional state. So something we may disallow or simply not mention. 02:12:04 David: QUIC has its own mechanism for an endpoint to prove that it's not attacking the other endpoint. From your perspective, you take a QUIC/TLS library and basically you're done, so I wouldn't mention it at all. 02:12:19 Victor: Yes, something that QUIC does automatically. 02:12:31 mfoltzgoogle: OK, so no mention of the Cookie extension. 02:13:09 ... The companion to these spec decisions is the certificate format described in the spec. I'm open to feedback on that. 02:13:16 Victor: Yes, will do that. 02:13:16 ACTION: Victor to provide feedback on TLS certificate requirements 02:13:52 mfoltzgoogle: The rest of the story around authentication is that we cannot mandate UI but we wanted to make sure that there are sensible guidelines. Based on prior art in this area, 5 guidelines: 02:14:10 ... 1. Render information that hasn't been verified differently 02:14:44 ... 2. We have a set of flags such as the need to re-authenticate an agent. 02:14:56 ... display it differently in such cases. 02:15:06 ... 3. Make the UI hard to spoof 02:15:15 ... 4. Make the user take action to input the PSK. 02:15:26 ... 5. Meet accessibility guidelines when showing & inputting the PSK. 02:15:34 ... Also some language around rotation. 02:16:34 ... Question for the group: do we agree that this is a good list (with the added note on PSK rotation)? 02:16:59 anssik: I think this is good stuff, goes beyond what many specs do. 02:16:59 ACTION: mfoltzgoogle to make sure to note that there is a fresh PSK for each authentication. 02:17:11 mfoltzgoogle: Yes, I want to make sure that people see it, hence not a separate spec. 02:18:23 ... Who do we allow authentication with? 
We devised a mechanism where through mDNS you advertise a random token, to prevent attack scenarios where devices force your TV to show a PSK. 02:19:01 ... [goes into detail of auth-initiation-token] 02:19:57 ... Is this good enough as a first mechanism to prevent misuse of authentication? We may need to add other mechanisms later on based on experience. 02:20:00 anssik: No concern. 02:20:46 mfoltzgoogle: Finally, a topic that has been with the group for quite some time is what PAKE to use. We started to look at J-PAKE, but it requires multiple round-trips and is not implemented in most libraries. 02:20:52 Victor: It's very hard to implement in practice. 02:21:29 mfoltzgoogle: A second challenge/response proposal led to concerns about memory usage. 02:21:53 ... That's why we switched to SPAKE2, a better fit for the use case. Any other alternative? 02:23:19 Victor: The IRTF group looking at PAKE has plenty of PAKEs on its agenda. Simple and augmented PAKEs. For pin codes, a simple PAKE seems preferable. They don't have guidelines to pick simple vs. augmented. Working on it, but probably not done by the time you take a decision. 02:24:07 ... Name of the group is CFRG 02:24:39 ... In short, I would say: "Go with SPAKE2 but leave some window for the possibility to upgrade". 02:24:52 mfoltzgoogle: Contact info for SPAKE2? 02:25:01 David: That's in the draft. 02:25:14 Victor: There is also a GitHub repo. 02:25:46 mfoltzgoogle: I'm particularly wondering about the standardization timeline. 02:25:57 ... OK, that covers the ground on authentication. 02:25:57 ACTION: mfoltzgoogle to contact the maintainer of the SPAKE2 RFC, as the current draft has expired. 02:27:03 Topic: Remote playback issues 02:27:55 Peter: Since Berlin, we added/refined the playback update algorithm. We filled in the table of defaults/required fields. Which fields the receiver is required to honor from the controller. 02:28:10 ... We landed the remoting PR (remote playback via streaming). 02:28:39 ... 
The way we modeled it was that there is basically a new field in the start-request and start-response messages 02:28:58 ... Basically you have an optional streaming session attached 02:29:25 mfoltzgoogle: This is in addition to the URL that is passed? 02:29:30 Peter: Yes. 02:29:53 mfoltzgoogle: How does the receiver know whether to load the URL or to wait for the remote stream? 02:30:10 Peter: That's actually a good question. 02:31:29 mfoltzgoogle: An implementation might want to create a streaming session just in case that the video element switches to MSE later on. We might need a way for the receiver to know whether it should start with a URL or to wait for a stream. 02:31:56 Peter: What we're lacking here is a way for the receiver to know what the controller is planning to do. 02:32:47 ericc: In the local case, just changing the source is not enough, you have to call load as well. 02:33:21 Peter: As currently written, we would expect the controller to send a source changed event. 02:33:42 ... What we don't have is what should the receiver do with a blob URL. 02:34:08 ... We need a "There's no longer a source, time to switch to remoting" message. 02:34:13 ericc: Yes, that makes sense. 02:34:41 mfoltzgoogle: Pre-roll ad is a use case I'm thinking about. 02:34:45 ericc: That is very commonly used. 02:35:08 ... Youtube does that a lot. Start with MSE, and then use a URL. 02:36:09 RESOLUTION: pthatcher to add a "There's no longer a source, time to switch to remoting" message. 02:36:09 Peter: OK, seems we have a plan there. 02:37:02 ... Some things not done since Berlin: add extended mime types to remote playback HTMLSourceElement.type and CSS media query to remote playback HTMLSourceElement.media. 02:37:16 ... Should these go in the availability request as well? That's how I wrote the PR. 02:37:20 ericc: Seems reasonable, yes. 02:37:57 Peter: Second one is use of Media Capabilities / CSS colorspace values. CSS Colors Level 4 is the right reference? 
02:38:13 ericc: I believe so, but there are some planned discussions in the CSS WG around that. 02:38:43 mfoltzgoogle: Field on which message? 02:38:51 Peter: receiver-video-capability 02:40:32 Peter: Another thing we haven't done is recommend HTTP headers because I didn't know which ones to recommend. 02:40:43 Victor: Related question, are there credentials in any way? 02:41:20 mfoltzgoogle: The Remote Playback API doesn't really require/prevent agents from sending credentials. 02:41:43 ... At the protocol level, question is "do we want to have support for that?". 02:41:50 ... Only thing needed is locale exchange. 02:42:29 David: And I don't want to give my credentials to my TV. In general, I don't trust a TV with my credentials. 02:42:58 mfoltzgoogle: If you want to play restricted media in any way, that's more a use case for the Presentation API for the time being. 02:43:13 ... Question for the spec is what capabilities do we provide? 02:43:42 ... We should focus on what the API requires, which is the locale, and leave everything else up to implementers. 02:44:52 ... The only thing I might add to that is CSP, because if all the remote side is doing is retrieving media then we can perhaps restrict that, but I'm not sure CSP is meaningful for anything other than the HTML MIME type. 02:45:24 Peter: Right now, there's only Accept-Language. 02:45:34 mfoltzgoogle: Right. 02:45:43 Peter: So that solves #146 02:46:03 mfoltzgoogle: with exploration of CSP headers, perhaps. 02:46:03 ACTION: mfoltzgoogle to look at CSP-related HTTP headers to pass with remote playback 02:47:22 takumif: Media Capabilities allows you to query things like bitrate but there's no way to tell the remote capabilities. 02:47:44 ... There is no way for the sender page to say that this source element has these bitrate/framerate attributes. 
02:48:27 mfoltzgoogle: When we discussed this in Berlin, we brought up the idea of querying the capabilities of the remote device but that raised questions about fingerprinting. Eric, did you dig into it? 02:48:33 ericc: No. 02:48:55 Peter: We should probably create an issue to address that. 02:49:20 mfoltzgoogle: Right now, the controlling page can only try different source URLs until one works. 02:49:51 Peter: On the question of multiple controllers, the idea is to do the same thing as in the Presentation API, but in the Remote Playback API. 02:50:21 ... When we looked at how we might do this, it seems pretty clear that we need some API change. We could overload the prompt, but that seems confusing. 02:51:01 ... For remote playback, should it require the same URL like the Presentation API does? 02:51:47 takumif: Same URL would mean source attribute has the same URL as the one playing on the receiver? 02:51:54 Peter: Yes. 02:52:42 ericc: If you want to support this, it seems that you want a way to connect to an existing session and then get the state so that you can reflect the state locally. 02:52:49 Peter: What would be the permission model? 02:52:58 ericc: I don't know, that's a great question. 02:53:34 mfoltzgoogle: The use case is that I navigate or close the local page, but remote playback continues, and then I want to reconnect through a different media element. 02:53:46 Peter: Then you will know the URL 02:54:15 ericc: Yes, but even if you do, your local state is not going to be the same as on the remote side. 02:54:44 ... Another use case is something like a group playlist. Originates from one machine but you allow other machines to hook up. 02:55:05 ... It might make sense to think of a read-only instead of read-write. 02:56:23 ... One real use case is sitting on the couch, watching something on my phone, switching to my TV, then I may want to use another remote controller, or I may not want to keep my phone awake, so need something magic to reconnect. 
02:57:09 mfoltzgoogle: If we did allow prompt to pass this unique session, then that would allow the controller to say that it would be ready to reconnect to an existing session. 02:58:08 [more discussion on transfer of control] 02:58:40 RRSAgent, draft minutes v2 02:58:40 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html tidoust 02:59:52 scribeNick: mfoltzgoogle 03:01:15 Peter: There was an action item about HDR. 03:01:17 Topic: HDR 03:01:46 ... The crux of it is an enum describing which HDR capability the device has. 03:02:04 ... The second issue is about width + height, which we already cover. 03:02:35 ericc: The enum discussion will be resolved soon. 03:02:40 tidoust has joined #webscreens 03:03:41 Peter: At that point, we just map those over to capabilities and reference that document. 03:04:31 mfoltzgoogle has joined #webscreens 03:04:48 RRSAgent, draft minutes v2 03:04:48 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html tidoust 03:07:15 msw has joined #webscreens 03:07:24 tidoust has joined #webscreens 03:58:52 ericc has joined #webscreens 04:04:52 takumif has joined #webscreens 04:11:31 scribeNick: anssik 04:11:34 scribe: anssik 04:11:35 mfoltzgoogle has joined #webscreens 04:11:47 ericc has joined #webscreens 04:12:29 Topic: Remote Playback and Streaming 04:14:01 Peter: streaming, since Berlin merged outstanding PR for start session and stats 04:15:47 ... Per-codec max resolution not done, new issues since Berlin include codec switching when remoting (issue #223) and bi-dir streaming (issue #176) 04:17:23 ... problem with changing codec: sender might get stuck if sender says "can send you A or B" and receiver says "would like B", if the source then switches to A the sender has to transcode 04:20:24 ... 
proposal to use capabilities instead of codecs 04:23:00 ericc: enumerateDevices is used much more than getUserMedia, suggesting it may be used for fingerprinting 04:25:34 mfoltzgoogle has joined #webscreens 04:25:51 ... exposing more information before user consent should be avoided 04:26:32 Peter: a possible explosion of things the receiver could support is a concern 04:26:54 ... it might be that receivers just simplify and claim to support a single codec 04:29:44 ericc: if we do not expose this information to script, does this still work with MSE? 04:30:28 ShinyaAbe has joined #webscreens 04:31:02 tidoust has joined #webscreens 04:31:23 ... transcoding is expensive and lossy, so the web app may have other alternatives -- transcoding should not happen in general 04:32:11 ... changes were made to the media specs so that the web app can say upfront "these are the things I can offer" 04:33:33 mfoltzgoogle: we don't want pages to enumerate devices, only an availability bit 04:33:44 ericc: how much info do you include in the description of a device? 04:34:38 Peter: train of thought: the JS provides all the things it might do, e.g. codec and resolution, and asks "find me a device that can do that without transcoding" 04:35:31 mfoltzgoogle: sites can offer multiple URLs and learn there's a device that can present at least one of them; we could have a sender with a list of preferences, then pick the first compatible one from that list 04:36:17 ... to do that matching, we need capabilities on the receiver; this would be an implementation detail not exposed to web apps 04:36:31 Riju has joined #webscreens 04:39:16 Peter: multiple problems to tackle in this issue, breaking it into pieces: 04:41:16 ... - avoid transcoding 04:41:29 ... - avoid picking a device that would fail w/o transcoding 04:41:45 ... - switch codecs w/o transcoding 04:41:54 Q+ 04:41:57 ... - don't expose too many bits of entropy to JS 04:42:17 ... API side, possible solutions: 04:42:53 ... A. 
Media Capabilities-like querying -- JS doesn't know ahead of time; can't be used to select the device 04:43:06 ... B. JS says everything it wants to do up front 04:43:25 ... Protocol side, possible solutions: 04:43:39 ... A. Sender offers; receiver picks once 04:44:00 ... B. Receiver expresses capabilities once; sender picks dynamically 04:44:32 ... looking at possible combined API+protocol solutions 04:44:42 ... AA: not possible 04:44:46 ... AB: possible, by itself does not solve #2 04:44:53 ... BB: possible; solves #2 04:45:44 ... BA: possible, solves #2 04:45:47 msw has joined #webscreens 04:46:36 MasayaIkeo has joined #webscreens 04:46:48 RRSAgent, draft minutes v2 04:46:48 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html anssik 04:48:06 ... an additional problem: do not prompt if nothing is available 04:54:45 Riju: depending on whether H.264 is on the sw or hw path, performance differs 04:55:02 peter: we cannot do that pre-prompt; possibly post-prompt 04:55:14 ... could we say you get to ask once per page load? 
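Protocol option B above (the receiver expresses capabilities once; the sender picks dynamically) can be illustrated with a small sketch. The field names and the capability-list shape are assumptions for illustration, not OSP message syntax:

```javascript
// Illustrative sketch of "receiver expresses capabilities once; sender picks
// dynamically": the sender stores the receiver's decode capabilities and can
// re-pick a codec at any time without transcoding or extra round trips.
// All field names here are hypothetical.
const receiverCapabilities = [
  { codec: "vp9", maxWidth: 3840, maxHeight: 2160 },
  { codec: "h264", maxWidth: 1920, maxHeight: 1080 },
];

// Pick the first sender preference the receiver can decode at the given
// resolution; null means "would require transcoding, so don't pick it"
// (and, per problem #5, don't prompt if nothing matches).
function pickCodec(preferences, capabilities, width, height) {
  for (const codec of preferences) {
    const cap = capabilities.find((c) => c.codec === codec);
    if (cap && width <= cap.maxWidth && height <= cap.maxHeight) return codec;
  }
  return null;
}

// When the source switches codecs mid-stream, the sender simply re-picks
// against the same stored capabilities.
pickCodec(["vp9"], receiverCapabilities, 3840, 2160);  // → "vp9"
pickCodec(["h264"], receiverCapabilities, 1280, 720);  // → "h264"
pickCodec(["av1"], receiverCapabilities, 1280, 720);   // → null
```

This avoids the "sender gets stuck" case described earlier, at the cost of the receiver enumerating what it supports up front.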
04:56:04 ... the question is, if a vp9 hardware decode path is possible and an av1 software path, which one to choose 04:56:44 mfoltzgoogle: we want to provide good APIs and allow implementers to make tradeoffs about how much information to expose 04:57:10 ericc: UX should not differ too much based on privacy decisions of implementations 05:05:18 peter has joined #webscreens 05:05:21 [Slide summary] Problems we're solving: 1. Avoid transcoding. 2. Avoid picking a device that would fail without transcoding. 3. Switch codecs without transcoding. 4. Don't expose too many bits of entropy to JS. 5. Don't prompt if there's nothing available. API side: Q: Post-prompt MediaCapabilities-like querying (JS doesn't know ahead of time; can't be used to select the device). F: JS says everything it wants to do up front. Protocol side: R: Sender offers; receiver picks once. S: Receiver expresses capabilities once; sender picks dynamically. QR: not possible. QS: possible; by itself does not solve #2, #5. FR: possible; solves #2, #5. FS: possible; solves #2, not #4. #4 and #5 seem incompatible unless the info is somehow throttled or very limited. Move ahead with QS. See if FS can be used to give us #5, and debate its importance vs. #4. 05:08:39 Translation: We'll try 1. Receiver capabilities that allow the streaming sender to switch codecs, plus 2. An API on RemotePlayback that allows JS to query post-prompt for capabilities, like the MediaCapabilities API does 05:09:13 PROPOSED RESOLUTION: We'll try 1. Receiver capabilities that allow the streaming sender to switch codecs, plus 2. An API on RemotePlayback that allows JS to query post-prompt for capabilities, like the MediaCapabilities API does (issue #223) 05:10:12 RESOLUTION: We'll try 1. Receiver capabilities that allow the streaming sender to switch codecs, plus 2. 
An API on RemotePlayback that allows JS to query post-prompt for capabilities, like the MediaCapabilities API does (issue #223) 05:10:37 Topic: Bidirectional streaming / stream request 05:11:41 Peter: if I want to receive media from you (a TV pulling up a video doorbell feed), what do I do? 05:11:48 ericc: why not use WebRTC? 05:13:55 mfoltzgoogle: if WebRTC, it needs its own auth step 05:15:08 ... do we want a symmetric protocol, or asymmetric as of today? 05:15:43 keeping it asymmetric still allows bi-dir as of now with two independent sessions, just more work 05:16:42 mfoltzgoogle: is it easier to be symmetric or asymmetric? maybe bi-dir should be an extension 05:17:51 Proposal for bidirectional streaming and stream requests: do nothing for now. 05:18:18 You can always make 2 unidirectional sessions 05:19:00 mfoltzgoogle has joined #webscreens 05:19:14 PROPOSED RESOLUTION: Do nothing for bidirectional streaming and stream requests (issue #176) 05:19:24 RESOLUTION: Do nothing for bidirectional streaming and stream requests (issue #176) 05:19:54 RRSAgent, draft minutes v2 05:19:54 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html anssik 05:22:03 Topic: Extensions and Capabilities 05:22:37 mfoltzgoogle: extending the protocol with new messages not understood by all agents is a new feature fleshed out since the Berlin meeting 05:23:33 ... capabilities in messages are not exposed to JS; an implementation detail 05:24:49 ... extended capabilities use IDs >= 1000, and can add new types of messages or add fields to existing messages 05:26:45 ... added a public registry for capability IDs to avoid conflicts, in the GH repo now 05:27:19 ... maybe in the future migrate the registry to IANA 05:27:56 ... any concerns using this model for documenting extensions? 
05:28:00 [no concerns] 05:29:45 mfoltzgoogle: proposal to merge open PRs except HDR, which has an external dep on the MediaCapabilities spec 05:31:16 Topic: Open Screen Protocol V2 Features 05:31:38 mfoltzgoogle: areas to explore in the future for the protocol; want to finish v1 before advancing here, and also recharter the group 05:32:22 ... v2 issues: Support for DataCue, Attestation, Data Frames use cases, Alternative Discovery, Multi-device timing (joint session topic) 05:33:51 peter: GenericCue / DataCue: proposing to add first-class support for TextFrame 05:34:18 ... similar to DataFrame, but with a payload that matches DataCue/TextTrackCue 05:35:31 ericc: the only reason WebKit has support for both .data and .value is to provide a reasonable error if .data is touched 05:41:35 ericc: a container can contain metadata; media files can have metadata that is not displayed, and ID3 tags can contain e.g. images 05:42:17 ... the initial spec just has .data, and the script had to know everything about it in order to understand it 05:42:35 ... from the script's point of view, the cues show up as part of tracks 05:42:48 ... from script you can make cues and tracks and add them 05:43:52 peter: take DataCue and put it in the existing text-track-cue for remoting, and add explicit text frames for streaming and remoting 05:45:55 ericc: if they can be created from script, we must be able to reconstruct them on the other end, because the payload can be an arbitrary JS object 05:46:29 peter: DataCue should be in both places, right? 05:46:35 ericc: correct 05:48:15 Proposal: add support for DataCue to both streaming/remoting (a new text-frame message) and the existing text-track-cue. 
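The reconstruction constraint ericc raises (cues created from script must be rebuildable on the other end) can be shown with a minimal round-trip sketch. The message shape is hypothetical; the OSP encodes messages in CBOR, and JSON stands in for it here only for brevity:

```javascript
// Sketch of the DataCue round-trip constraint: whatever a text-frame message
// carries must survive serialization so the far side can reconstruct the cue.
// The "text-frame" key and field layout are assumptions, not spec text.
function encodeTextFrame(cue) {
  return JSON.stringify({
    type: "text-frame",
    startTime: cue.startTime,
    endTime: cue.endTime,
    value: cue.value, // must be serialization-friendly data, not an arbitrary object
  });
}

function decodeTextFrame(bytes) {
  const msg = JSON.parse(bytes);
  return { startTime: msg.startTime, endTime: msg.endTime, value: msg.value };
}

const cue = { startTime: 1.5, endTime: 4.0, value: { title: "Chapter 1" } };
const roundTripped = decodeTextFrame(encodeTextFrame(cue));
console.log(roundTripped.value.title); // "Chapter 1"
```

A value holding, say, a function would not survive this round trip, which is why limiting cue payloads to reconstructable data matters.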
05:55:27 PROPOSED RESOLUTION: add support for DataCue to both streaming/remoting (a new text-frame message) and existing text-track-cue 05:58:06 PROPOSED RESOLUTION: add support for DataCue to both streaming/remoting (a new text-frame message) and existing text-track-cue w/o object 05:58:26 RESOLUTION: add support for DataCue to both streaming/remoting (a new text-frame message) and existing text-track-cue w/o object 05:58:47 Topic: Attestation 05:58:49 hyojin has joined #webscreens 05:59:09 mfoltzgoogle: this is more exploratory; looking for feedback on its utility, maybe the M&E IG is interested 06:00:32 ... attestation describes how a UA finds info about another UA: manufacturer, model, s/n, OS, other capabilities, standards compliance e.g. HDCP support 06:01:30 ... the typical means to achieve this is via certificates, NOT related to transport auth certs 06:05:05 ... there are a lot of things to investigate further; precedents with EME, WebAuthn 06:05:48 ... do we expose this to apps? Fingerprinting and privacy implications 06:06:45 PROPOSED ACTION: Start a companion note separately with use cases, requirements, and a draft framework 06:07:14 ACTION mfoltzgoogle to start a companion note separately with use cases, requirements, and a draft framework 06:07:26 RRSAgent, draft minutes v2 06:07:26 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html anssik 06:07:36 Topic: Alternative Discovery 06:07:49 mfoltzgoogle: problem statement: what if mDNS does not work? 06:09:19 ... 
proposal to define an Open Screen Beacon format to be obtained through BTLE, NFC, or a QR code 06:10:43 peter: signaling over a QR code should be doable 06:10:59 mfoltzgoogle: the beacon should have the info needed to prime the ICE state machine 06:11:35 MasayaIkeo has joined #webscreens 06:12:01 PROPOSED ACTION: mfoltzgoogle to write an Open Screen Beacon proposal outside of this CG repo and with an explainer 06:12:01 ACTION: mfoltzgoogle to write an Open Screen Beacon proposal outside of this CG repo along with an explainer 06:14:18 RRSAgent, draft minutes v2 06:14:18 I have made the request to generate https://www.w3.org/2019/09/15-webscreens-minutes.html anssik
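No beacon format has been defined yet; the action above only calls for a proposal. Purely as an illustration of the kind of payload that could prime the ICE state machine, here is a sketch where every field name and the `osb:` prefix are invented for this example:

```javascript
// Hypothetical "Open Screen Beacon" payload sketch: a compact string that
// could fit in a QR code, NFC tag, or BTLE advertisement, carrying enough
// info to reach and authenticate a receiver without mDNS. All names here
// (prefix, fields) are assumptions, not a proposed format.
function encodeBeacon({ name, address, port, fingerprint }) {
  return `osb:${encodeURIComponent(name)};${address};${port};${fingerprint}`;
}

function decodeBeacon(text) {
  if (!text.startsWith("osb:")) return null;
  const [name, address, port, fingerprint] = text.slice(4).split(";");
  return { name: decodeURIComponent(name), address, port: Number(port), fingerprint };
}

const beacon = encodeBeacon({
  name: "Living Room TV",
  address: "192.0.2.7",
  port: 4433,
  // TLS certificate fingerprint, so the sender can authenticate the peer
  // it connects to out-of-band.
  fingerprint: "sha-256:ab12",
});
```

A real proposal would also need to consider payload size limits (QR/BTLE) and how stale beacons are handled.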