W3C

Second Screen WG F2F - Day 2/2

25 May 2016

Agenda

See also: IRC log

Attendees

Present
Anssi_Kostiainen, Mark_Foltz, Mounir_Lamouri, Rick_Smit, Hyojin_Song, Chris_Needham, Shih-Chiang_Chien, Yavor_Goulishev, Francois_Daoust, Jonas_Sicking, Ed_Connor, Eric_Carlson
Chair
Anssi
Scribe
Chris, Francois, Rick, Mounir, Yavor

Contents

See also the minutes of day 1.

NB: Minutes are rough, imprecise, and possibly wrong from time to time. Check issues on GitHub, linked from these minutes, for additional context.


Presentation request URL issue

Anssi: We parked an issue that arose as a spin-off of issue #153
... Problem is with the presentation URL that may contain additional proprietary parameters that get used instead of the URL itself.

Mark: A bit of history. There was strong pushback initially against supporting non-HTTP/HTTPS schemes.
... We found a workaround that allows passing additional parameters.
... I looked at the code. There is very little probability that the extra parameters might collide with existing fragment identifiers.
... Another concern was around what would happen if developers use the URL with other implementations.
... Again, that's not going to do much.
... One use case is to enable capability detection to allow developers to assert that a specific URL will be supported.
... Instead of passing a list, a wrapping library could be used that tries the different URLs in turn.
... We can mix Cast parameters and DIAL parameters. The beginning of the HTTP URL itself will be used for the 1UA case.
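
[Illustration: a minimal sketch of the workaround being discussed, assuming a hypothetical receiver URL; the fragment parameter name follows Jonas's example later in these minutes and is not a specified syntax. The controlling page passes a single HTTP(S) URL whose fragment carries a proprietary Cast application id, while the plain HTTP URL is what a 1-UA implementation would navigate to.]

  // Hypothetical URL; "__castid" is illustrative, not a specified parameter.
  const url = "https://example.com/receiver.html#__castid=abc123";
  const request = new PresentationRequest(url);
  request.start()
    .then(connection => connection.send("hello from the controller"))
    .catch(err => console.error("Presentation failed:", err));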

Anssi: This suggests no change to the spec.
... Based on implementation experience, we can probably revisit this later on.
... If we publish a CR and realize that we need to change this, we can republish another CR. That will add another delay.

Jonas: I think that we have strong evidence that what we have now does not work.
... You're passing a URL but you're treating it as passing parameters.
... You want a Cast identifier, and since the API only takes a URL, you work around that.
... You're forced by the API to hack it.

<anssik> https://w3c.github.io/presentation-api/#user-interface-guidelines

Francois: The security guidelines recommend that the controlling user agent display the requested origin. In this case, the requested origin does not mean anything, so it could be used by an attacker to pretend that the app will present amazon.com content whereas it's not.

Mounir: Yes, as soon as the spec suggests to use the URL, then it's not an opaque string anymore

Jonas: The semantics of this URL parameter is "load this URL onto the second screen", and that's not what you're doing.

Mark: It seems we're discussing implementation internals. What's the difference in practice?

Jonas: It makes a lot of differences. [analogy with fetch]
... If you are interpreting the values differently from what the specification suggests, then you're not following the spec.

Ed: Where's the interop if the URL is interpreted differently by different implementations?

Jonas: I understand what you're trying to do, but you're working against the spec here. The goal is to take the HTTP URL and load that content. This could lead to a redirect of course, but that's not a concern.

Mark: The meaning of the URL depends on the target device.

Jonas: The spec says to fetch the URL.
... You're subverting the spec.
... With "Navigating to an HTTP URL", the expectation is that you'll follow the HTTP protocol to download the resource.
... If it redirects to other schemes, then so be it, but that's not what happens here.

Mark: Do you have a concrete proposal, then?

Jonas: You need an ability to say "please load this Cast app".
... It should be something other than HTTP.

Mark: Your concrete proposal is that we require another scheme? Can the backend redirect to the cast scheme?

Jonas: Yes.

Francois: If the URL is google.com/blah, you may not even need to go to your backend, that's your domain, under your control.

Jonas: If I request amazon.com/#__castid=abc123, you will interpret it as a Cast app. But Amazon.com is not going to do the redirect to the Cast app. So you should not interpret it as a Cast application.
... You can do that for URL spaces that you own, you cannot do it for other domains.
... Two proposals in the end: 1) you should add some kind of "cast" scheme. This can be done under the hood. 2) My understanding is that companies like Netflix have preferences for launching a DIAL app rather than launching an HTML app, I do see value in having a fallback mechanism.

Mark: We can certainly construct these cast URLs behind the scenes. That will probably not impact the spec itself.

Yavor: For Youtube, we'll use whatever answers first. If it's DIAL, then we'll use DIAL. If it's Cast, we'll use Cast.

Jonas: The specific spec change for the first point is to make it clear that the URL that gets passed to the Presentation API should go through Fetch.

Francois: The spec does say to "navigate to the presentation request URL", so not sure we need something on top of that.

Jonas: My argument is that you can only control redirects for domains you control, not for others.

SC: Question is how would this work with Firefox's implementation of the Presentation API.

Mark: This is really a vendor extension that should not be added to other calls.

Jonas: If you guys have that need, I think others will have that need too. I would not be surprised if Mozilla would want to support Cast applications as well.
... We can do that much more easily if it's a cast:// scheme.
... If it's part of the Google namespace, as in google.com/cast internally redirects to a cast:// URL, we can bake that in Gecko as well, but that's much less ideal.

Mark: My last point is that being able to pass a list of URLs will delay things by a couple of months at least.

SC: For Firefox OS, we use app:// scheme to launch native apps. For generic content, it uses HTTP.

Mark: So we have a use case for fallback.

Jonas: Yes.
... If things converge in the future, I can imagine that things converge on HTTP, we'll just ignore the rest.

Anssi: So you want an explicit extension mechanism.

Jonas: Yes.

Anssi: We don't yet have consensus it seems. I'd like not to block the spec today.

Francois: Also wondering about the relationship between URLs in the array. They may not be related.

Ed: Not a problem. That would match the API contract: the ability to pass multiple possibilities.

Francois: OK, same thing as for <video> sources now that I come to think about it.

[discussion on google.com/cast and cast:// scheme]

Mark: Let's try to separate the discussion between separating namespaces which I agree with, and the fallback mechanism.

Jonas: Adding a new protocol is a significantly easier thing to do than changing the semantics of HTTP.
... In the end, my summary is that I think what you're doing right now is not per the specification.

Anssi: I'm hearing a proposal here which is to keep the API surface as is and restrict your interpretation of the URL in your implementation.

SC: A specific scheme will help implementations detect early that the URL won't be supported.

Mark: We need to have clarity on the fallback mechanism. We plan to support the 1UA case soon.

Jonas: APIs that take either a string or a list of strings exist, no problem with that.

Mounir: Do you guys support 1UA?

SC: Yes, planned for September. 1UA is only for HTTP.
... If the controlling app passes a non HTTP URL, it will be discarded.

Mounir: So you need the fallback mechanism too.

Anssi: So concrete spec changes?

Mark: In PresentationRequest, two constructors, one with a DOMString and the other with an Array of DOMString. Then on the PresentationConnection, we need to expose the URL that got presented in a "url" attribute.

Anssi: What is the impact on the implementation?

Mark: Will need to get back to you on that.

Jonas: Some of what is needed already happens anyway.

Mark: Yes, it's more the plumbing in-between.
... And there's also a UX impact.

PROPOSED RESOLUTION: Add a fallback mechanism by adding a new constructor on PresentationRequest that takes an Array of DOMString, and by exposing the presented URL in a "url" attribute on PresentationConnection, pending feedback from implementers that this is doable.
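
[Illustration: a sketch of the API shape the proposed resolution implies, not normative text; the "cast:" URL stands in for the hypothetical vendor scheme discussed earlier.]

  // Proposed shape (sketch): a constructor taking a list of URLs, plus a
  // "url" attribute on PresentationConnection telling which URL was used.
  const request = new PresentationRequest([
    "cast:deadbeef",                        // hypothetical vendor scheme
    "https://example.com/receiver.html"     // HTTP fallback, e.g. for the 1-UA case
  ]);
  request.start().then(connection => {
    console.log("Presented URL:", connection.url);
  });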

Anssi: Good, moving on.

Possible scope for the open protocol CG work

-> Presented slides on open protocol and CG rechartering

mark: prepared a discussion for rechartering the Community Group
... put together a pull request
... interesting discussions about scope and deliverables
... a little bit of history: the CG started in 2013
... input from Netflix, Mozilla, Google
... anssi checked in a boilerplate charter
... thinking about what scope we would like to tackle for this work: 1. network level behaviour, 2. common cases, and get them right, 3. build on stable standards, 4. bandwidth issues, 5. let it be future proof
... personally I think prototyping is helpful; if I have the resources I would like to build open source prototypes
... here's a summary of a reasonable scope: discovery of presentation displays that share the same LAN, multiple connections per controller and per display, secure connections
... data that is passed between UAs should be secure
... one thing we may want to include is to provide a way for vendor extensions. Seems there is some pushback against vendor extensions

Francois: what would the CG define?

Mark: extension points for vendor extensions

yavor: for DIAL there's nothing we can use, so for that we need a proprietary extension that supports playback controls
... that can be in the cloud and only use DIAL to launch the app; if it's standardised it would improve latency and be easier to support.

mark: the existing messaging API is designed to handle this, like volume control
... for now I'd like to keep it really generic and understand the use cases more

Jonas: so for talking to an HTML application there's messaging for remote playback, but for DIAL that doesn't exist?

yavor: so you can have applications that have messaging both ways?

mark: yes

yavor: that's great

mark: I think a lot of these things are not things we don't want to do, they just may not have a high priority
... WebRTC is not great for doing local area screencast, we're working on it

jonas: I share a lot of your concerns. 1-UA mode exists to support old hardware, I don't really see the point. I think it would be really great if we could support the Presentation API with encrypted media (EME)
... how does Chromecast handle that?

mark: they send a URL to the player and the player handles it

jonas: what I'm curious about is whether it would make sense for the protocol to support sending user credentials

mark: I think that's sensible to do
... we want to make it easy for the URL to be fetched

jonas: what I'm thinking about is supporting a Netflix DIAL application where you send user credentials to the app

mark: you can't pass a license from one device to another, that's by design
... also out of scope are interactive presentations, interesting for gaming

jonas: I'm curious how important an unreliable channel is

mark: I think for gaming they want low latency
... a couple of ms
... we have our Chrome desktop team who is working on this and I could check with them how to handle this
... open question: accessibility, to have screen readers work across controller and receiver

anssi: I would leave accessibility out of the scope

mark: require display access control, like a password; right now anyone can access the display

mark: in the past we also discussed propagating the origin information, so the TV knows what origin is sending it

anssi: display capabilities was a controversial part of the api

mark: so exposing those capabilities is out of scope for the Presentation API, maybe the Remote Playback API; do we want support in the protocol for that?

jonas: on capabilities, one use case I'm interested in is something like supporting Chromecast Audio
... I feel like you can kinda use Chromecast, but it would be bad UX if you go to Pandora and it says you have a remote device available and it would start playing on your TV

mark: what we do now is give the user info about the device
... so if it's audio show a little speaker icon

anssi: is it an implementation detail?

jonas: I think if you're on Netflix watching Arrested Development and it shows a fling button, and it would fling to your speakers, it would be bad UX

mark: you would have to tell the app that it's video content

anssi: you still might want to override that

jonas: maybe audio-only vs video-only could be a base-level thing

anssi: what is Chrome's way, is it like a prompt that tells you which devices you have?

mark: the UX seems to have changed

anssi: this is a point that people are struggling with

yavor: does the application know what to expect on the other side?

mark: we decided that the app could feature detect what the device could do

francois: when the user enters a URL the website will detect

mark: the requirements fall out of the scope that I discussed earlier
... I think it's important from a UX point of view to know if it's available or in use

jonas: so when you show the device picker it shows "jonas tv currently playing netflix" or something?

mark: yes exactly
... we had discussions about power saving, this should also go for TVs

yavor: do you want to support wake-on-LAN?

mark: we're discussing requirements so it would be a feature request
... connection is pretty similar to the scope discussions, displays should be able to deny connections

anssi: reconnection is now part of application logic

mark: an important part of the spec would be the connection lifecycle
... it's important to allow presentation from secure context, to have strong encryption, there are a number of approaches to handle this

jonas: it's impossible to guarantee that you connect to the device that you're trying to connect to
... at no point do you have a secure communication channel, you have to decide which parts you trust

mark: the solution is in public keys, I think, and a trust model
... what do we have to support on the protocol level?
... a query command, giving the URL an ID, multiplexing multiple browsing contexts, passing the locale for the rendering display
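
[Illustration: one way to picture the protocol-level items Mark lists; the message names and fields below are invented for illustration and do not come from any draft.]

  // Invented control message shapes, for illustration only.
  type ControlMessage =
    | { type: "availability-query"; urls: string[] }           // query command
    | { type: "launch"; url: string; presentationId: string }  // give the URL an id
    | { type: "app-message"; presentationId: string;           // multiplex several
        data: string }                                         //   browsing contexts
    | { type: "set-locale"; locale: string };                  // rendering locale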

Mozilla's FlyWeb presentation

jonas: this is a presentation that I wrote for people a little less familiar with this topic
... flyweb is a project we're working on at mozilla

mounir: like b2g? ;)

jonas: yes, like b2g ;)
... the problem that we try to fix: we all have these smart devices, and we almost always try to connect them through the cloud. A problem is latency
... the way we do collaborative document sharing, we grant access to another person's account, we have to send a URL and then we can start collaborating
... if we imagine the future of the smart hotel room, and you want to interact with all the smart devices, talking to these smart devices would suck if it requires an application installation and account creation
... so what we want to do is use the fact that the web is really good at on-demand application delivery
... using applications is a very nice thing to do: a room service application can offer a nice UI and show you stuff you could order, and with a TV you could have a nice UI which shows what's playing
... apps are really nice for showing UIs, but the installation and authentication are less nice
... the web is fairly good at cross-platform, so it would be nice if we could enable the web to interact with these things. Currently that doesn't work so well because we have some built-in assumptions on the web: servers must be on the internet, and you must connect to them over TCP/IP
... if we forget these assumptions, servers can be on any device, can talk over other transports, and can be discovered locally
... an example is a phone running a FlyWeb browser and a TV running a FlyWeb server. In this case the phone is scanning and finds the TV remote. The browser establishes a connection with the TV, which acts as a web server, and downloads a web UI onto the smartphone. At that point it's a web application that can do everything a normal web application can do
... another thing we want to do is enabling p2p use cases, like phone to phone or desktop to desktop
... if I have browsed to a website, that website can turn my phone into a FlyWeb server, a discoverable server
... we have this working
... not production quality, it's not like we're working on an actual implementation, but so far it seems to be working well
... we use mDNS to enable discovery, and we can also handle HTTP requests and WebSocket connections over the network
... we have some basic UI to scan for nearby devices and launch them
... our hope is to land within the next couple of weeks
... what we have right now is that it should all be encrypted
... the tricky part with this is encrypting it with a key that bad people don't have access to, so what we do right now during discovery is include a TLS certificate fingerprint

anssi: can I give my friend a FlyWeb URL and he can click that and then get this page?

jonas: I don't know where we're going to end up on that
... I have desktop builds where it should be working

anssi: so you have a spec for flyweb? Do you have plans to move that spec for further incubation?

jonas: there are still things we're figuring out

anssi: so you said you're using mDNS, so that's the protocol of choice?

jonas: it's up to debate but we're trying to reuse as much as possible

mark: are you using Bluetooth for messaging?

jonas: the idea is that we can do messaging over different transports, we haven't done Bluetooth yet
... so this is HTTP right now, but it should also work with HTTPS over TLS
... so this is running the web server; we have a web server API, which is super simple, it's logging all the requests and responding to requests like a web server
... the client API is super simple, you can use XHR or a WebSocket and then you can do the normal stuff

mounir: why not use service workers?

jonas: service workers are for running stuff in the background, at the moment this is not supposed to work in the background
... all we do is fire a fetch event, you can either create a response object from scratch or fetch resources or do whatever
... for sockets we fire a socket event, you can either accept it, in which case you get back a WS object and then it's just a normal WS API
... there are like 1 or 2 interfaces that are new
... we need to define how discovery works, so it could work across browsers
... so the network protocols should be defined, like mDNS and HTTP
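
[Illustration: a rough sketch of the server and client sides as described above; the publishServer entry point and event names approximate the FlyWeb experiment and may differ from the actual code.]

  // Server side: a page publishes a discoverable server and answers
  // fetch and websocket events (names approximate).
  async function startServer() {
    const server = await (navigator as any).publishServer("My Fancy TV");
    server.onfetch = (event: any) => {
      // Build a response from scratch, or fetch some other resource.
      event.respondWith(new Response("<h1>Hello from FlyWeb</h1>",
        { headers: { "Content-Type": "text/html" } }));
    };
    server.onwebsocket = (event: any) => {
      const ws = event.accept();  // behaves like a normal WebSocket afterwards
      ws.onmessage = (msg: MessageEvent) => ws.send("echo: " + msg.data);
    };
  }

  // Client side: once discovered, it is just HTTP (XHR/fetch) and WebSocket.
  async function queryServer(host: string) {
    const reply = await fetch(`http://${host}/status`);
    const socket = new WebSocket(`ws://${host}/control`);
    return { reply, socket };
  }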

yavor: how do you deploy private keys for the certificates?

jonas: for this scenario, at this stage during discovery when the browser sends out an mDNS request, the server will respond with a standard object and a plain text entry which is a certificate key
... we remember the certificate and associate it with a URL name
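
[Illustration: a possible shape for the discovery data described above; the record names and the "fp" key are invented. The point is just that the TLS certificate fingerprint travels with the mDNS answer and is checked when the encrypted connection is opened.]

  // Invented record layout, for illustration only:
  //   _flyweb._tcp.local               PTR  My Fancy TV._flyweb._tcp.local
  //   My Fancy TV._flyweb._tcp.local   SRV  0 0 8443 tv.local
  //   My Fancy TV._flyweb._tcp.local   TXT  "fp=sha-256:AB:CD:EF:..."
  interface DiscoveredServer {
    name: string;            // human-readable service name
    host: string;            // address resolved from the SRV record
    port: number;
    certFingerprint: string; // value of the TXT "fp" entry, remembered by the browser
  }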

francois: from a ux perspective, do you prompt the user?

jonas: this isn't implemented yet
... the idea is that when a website wants to publish a server, we'll ask the user whether they are OK with this website doing that

mark: can the server website find out its URL?

jonas: there might be reasons why the server wants to know its own URL
... but we haven't actually had the need for that, most cases just serve relative URLs
... something we would like to support is where a webpage could do a discovery, connect to an IoT device and say: I want to be the server

mark: it goes both ways, so the publisher could publish a remote control app

yavor: does it matter which side is the server for WS?

jonas: for a UI it matters
... the next thing we're going to look at supporting is Bluetooth, but NFC is also something we're looking at
... there are several areas where we can still expand this

anssi: we could mention this in the CG scope

jonas: this is still in very early stage

anssi: if there's a reasonable overlap it sounds like a good idea, if we don't get a huge scope expansion

jonas: I don't think it would make sense to have FlyWeb go through the Second Screen WG

francois: right now it's not in scope, but the WG will have to recharter at the end of October, so that might be a window of opportunity

jonas: there need to be two interested parties, and I don't know if Google is interested
... one thing that I think would be useful in the Presentation API is the idea that a webpage could be a web server
... instead of giving a URL, say "get back to me", and tell the device to get back to the controller

anssi: so jonas you had a timeline?

jonas: we're landing the code at the end of this quarter
... and you would have to go into the config and turn it on

anssi: when would you be interested to push this to a Community Group?

jonas: I can't give a good estimate because it would be other people doing it
... Physical Web is very related and very similar but it solves problems in a different way, and if Google wants to expand Physical Web to this, then it would be interesting
... we struggled a lot with security on the web vs security on native because it's very different
... but the challenges are very similar
... and I think we can get it as good as native
... relatively easily we can incrementally make it more secure over time

anssi: we can add new things to the CG charter, we have a one month review, we could amend the CG charter with your things when you're ready to get on board

jonas: the main overlap between what you presented and this is the protocol pieces, but for the use cases we're focusing on right now, most immediately I think it's the discovery part where we can share solutions

mark: I know you also want to support BT and potentially WS
... and then if you want to add extensions that would be perfectly acceptable too

anssi: if you have resources to share, could you please add them to the minutes

jonas: there's a github repo but it's very out of date

mark: you do have IDL checked in right?

jonas: yeah it has IDL baked in
... I'll paste the URL of our project page

<sicking> flyweb project page. Often out-of-date: https://wiki.mozilla.org/FlyWeb

jonas: it's mainly for us internally and has stuff that's not really relevant

anssi: so what is your timeline for the CG?

mark: I have some early drafts for some of the protocol stuff, I'd like to start the virtual collaboration around Q3
... we could flesh out a lot of details at TPAC; looking at the resources I have, I can spend maybe 20-30% of my time

anssi: TPAC could be a good time to get CG in a stage that we want

mark: I want the charter to be done relatively quickly; in Q3 I want to have some public work out

ted: I want to know what it means to have the charter done.

anssi: there's a review period of 30 days, where you can take it to the lawyers

ted: what's the standard working group review time?

francois: usually 6 weeks

anssi: for a CG it's one month
... so the proposal is to extend the review period to 6 weeks
... I think this is a good comment, because also on our side the lawyers want to look at the papers
... I think we can take that into consideration
... so that means we would have the CG charter done even earlier
... so let's figure out the timeline
... we have roughly the summer months to come up with a charter, let's say by the end of July

francois: the process says it's at least 4 weeks, so we could do 6 weeks

ted: realistically speaking 1 month is not enough

mark: I think the WG timeline is mostly around getting the requirements for exiting CR figured out
... and then scoping the work and finishing the testing, so I think it will mostly be driven by implementation

anssi: the workload on the spec side often goes down after CR; you do the testing and implementers provide input

mark: as we ship new implementations we might get more feedback, I don't anticipate any new feature requirements

anssi: so no one proposes new features at this time? So the expectation is to stabilise the existing charter
... the plan is to extend it by 3 or 6 months

Mozilla's input to open protocol discussion

-> Mozilla's protocol draft for the Presentation API

schien: it's a proposal for the open protocol that will be used in the Firefox browser and FxOS TV
... the requirements listed in the document are a subset of the requirements listed by Mark this morning
... there are a few more details like name and server address
... there will be information about device capability because the resolution might be a factor in the decision the user will make
... also supported media types for the Remote Playback API
... expose I/O capabilities, which can for example be used for authentication
... idea borrowed from routers

mark: for aspect, resolution and media types, is that for controllers to filter screens more aggressively or is it for UX purpose?

schien: for remote playback, if we try to do the mirroring, it will use this information
... it might use that for filtering but it's not certain
... we want to expose the protocol version so we can figure out if the device is compatible and add new features later
... for service launching, we need to provide some application ID and page URL
... also information about the session such as the presentation ID and bootstrap information to establish the communication channel
... there are some control messages that might be used during the service launching phase
... like connecting/disconnecting the channel between the two endpoints
... for security, we need to do device authentication
... the first approach is passcode verification and also a J-PAKE procedure
... using the passcode, we can create a one-time key on both ends

yavor: would you enter the passcode once or every time?

schien: if the device doesn't recognize the other device, it will show the passcode
... using the passcode and the TLS certificate, it can generate a one-time auth key that will be stored on both devices
... so next time the device connects to the TV, the hash of the key can be provided
... so that the TV will know that the device is known
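
[Illustration: a very rough sketch of the pairing flow as described, using Web Crypto; this is not the actual J-PAKE exchange, and the PBKDF2 derivation below is only a stand-in to show the idea of turning the passcode plus certificate into a stored key whose hash is presented on later connections.]

  // Derive a long-lived auth key from the passcode and the peer's TLS
  // certificate fingerprint, and store it on both ends (sketch only).
  async function deriveAuthKey(passcode: string, certFingerprint: string): Promise<ArrayBuffer> {
    const enc = new TextEncoder();
    const baseKey = await crypto.subtle.importKey(
      "raw", enc.encode(passcode), "PBKDF2", false, ["deriveBits"]);
    return crypto.subtle.deriveBits(
      { name: "PBKDF2", salt: enc.encode(certFingerprint),
        iterations: 100000, hash: "SHA-256" },
      baseKey, 256);
  }

  // On reconnection, send a hash of the stored key so the TV can recognize
  // the device without showing the passcode again.
  function authTokenFor(storedKey: ArrayBuffer): Promise<ArrayBuffer> {
    return crypto.subtle.digest("SHA-256", storedKey);
  }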

sicking: do you try to authenticate the phone to the TV, not the TV to the phone?

schien: authentication is done on both ends
... using the J-PAKE procedure, both sides authenticate each other
... for data encryption, the control channel is using TLS

mark: in this proposal there will be up to two channels? one is TCP and the other is UDP?

schien: the second is the same as webrtc

mark: what's the reliability vs the control channel?

schien: the control channel is for device availability query and [...]
... for data integrity, there is no need to do more because it's already using TLS and DTLS
... [shows diagram of architecture and describes it]
... closing the communication is not trivial
... the control channel can only be established between the controller and the receiving side
... if the presenting context tries to close the communication channel, there will be no mechanism for the receiving side to notify the controlling side
... why the channel is being closed
... in the Presentation API, there are different close reasons: one is a normal close, another one is "went away"

mark: you can make it in two phases, there is closing the connection and if there is no use for the data channel, it can be closed
... if you close the channel without telling the other side why, you have that issue
... but then, it will be seen as a network disconnect
... you should send a message saying why beforehand

yavor: is the control channel open at this point?

mark: it would have to use the communication channel

schien: send the reason on the communication channel

mark: it would have to send a special message to say this is a hang up, not a message for the page
... or alternatively, reopen the control channel, send the message and close it
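
[Illustration: one way to frame Mark's suggestion; the frame format below is invented. Application data and a final "hang up" frame share the communication channel, so the receiving side can report the close reason back through the Presentation API.]

  // Invented framing, for illustration only. "closed" and "wentaway" mirror
  // the close reasons mentioned above.
  type CommunicationFrame =
    | { kind: "data"; payload: string }                  // delivered to the page
    | { kind: "close"; reason: "closed" | "wentaway";    // consumed by the UA
        message?: string };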

sicking: why do we need two channels?

schien: the control channel is for UAs to talk to each other
... while the communication channel is for browsing contexts
... I try to avoid multiplexing but I'm not sure if it's worth it

sicking: can't we set up a new channel when creating a new context on the TV?
... you send a control message first, then that channel is used as a communication channel

schien: in this case, we need to define a message format for delivering the messages

mark: our track record of implementations is that there is essentially one communication channel between Chrome and the Cast device

yavor: is it possible that it is coming from how webrtc works?
... it will require signaling before connection

schien: that's one reason that we need to establish a control channel
... different from the communication channel

mark: our experience is that it's all one channel
... and some messages are treated as communication messages and some as control messages
... I believe there are some benefits of using data channels, for example, if we want to focus on scenarios not in a LAN
... it might behave better when WiFi is flaky
... those are potential reasons but we decided not to go that way

sicking: another approach is to have WebRTC be the low level communication channel and send messages through WebRTC
... the first control message is based on WebRTC
... basically, the Presentation API's protocol is layered on top of WebRTC

schien: unfortunately, webrtc can't be the first connection established
... you need a control channel to bootstrap webrtc

sicking: should we use something other than WebRTC then?

mark: in my opinion, webrtc shouldn't be mandatory
... unless there is a use case that specifically requires webrtc

schien: the reason why I do that is because on the API side, we can pick between plain text and binary data
... it already defines the data format for delivering plain text/binary data over the same channel
... otherwise, we will have to redefine this for our own message format

sicking: can we use WebSockets?

mark: there are alternatives like WebSockets indeed

schien: those are details we can discuss in the CG
... if you want more details on the proposal, you can check the wiki
... I will keep it updated

anssik: are schien and sicking working together? is flyweb using this work?

sicking: we don't use this in flyweb
... we only use http and web sockets
... for discovery, we use mdns
... there is very little to re-use apart from the discovery part
... even that sounds challenging if it's going to use a passcode, because we expect devices without screens to work with FlyWeb
... I'm a little bit uncertain if that code typing is actually practical
... my concern is that device makers optimize for UX, and typing a code is good for security but not practical

mark: there are different modes you can pick
... you can do a visual verification instead of typing the code
... most users will bypass it but that will address the security concern

sicking: even typing the code sounds insecure because one could make the smart TV show anything if you are MITMing the TV
... both of those things are non-starters to me
... Though, the more stuff we have, the harder it is to hack

anssik: I see three sets of requirements: mark, schien and flyweb
... what can we do for standardization?

sicking: I can see an optional security feature that may or may not be used for discovery and it will be up to the device

mark: I anticipate the authentication to be the hardest part
... I believe we can find a good compromise but we might not have an answer soon
... I'm okay moving forward with other pieces in the meantime
... One requirement that isn't clear is whether WebRTC is a hard requirement
... I would prefer to avoid that for reasons stated earlier

yavor: for us, establishing a secure channel is all we need
... it sounds challenging
... whatever secure channel technology works reliably and allows bi-directional communication is good for us

sicking: are there high performance use cases requiring UDP sockets?

yavor: not right now, we will not send videos or key strokes
... there are play controls but no game-level interactions
... we mostly send commands and updates
... it's basically more like a chat
... in this model, what's the receiver page? the TV or a page on the TV?

mark: the charter should be clearer but we are not trying to do app-to-app authentication
... so the only guarantee is that you talk to a server that we trust

yavor: if we establish some trust, is this trust valid for all sessions that are initiated with that tv or only the time frame of this page?

mark: I think the idea is that the keys will have to be regenerated regularly

yavor: with the FlyWeb model it's different because you authenticate with a page
... not with the TV, and it is actually short-lived

sicking: once you've established the identity of the other side, can you guarantee that it will stay true for the session?

mark: yes, unless you navigate

CG rechartering

mark: The charter is work in progress.
... Reiterate what are the goals.
... Pitch to browser vendors to move towards standardization.
... If our work is successful we should discuss what is the proper way to standardize.

tidoust: The IETF is the proper way to go.

anssik: What is the W3C position?
... The websocket is a good example.

<tidoust> ACTION: Mark Foltz to look into software licenses for prototype implementations in the CG [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action07]

mark: Background of the working group.
... Scope - 5 different use cases.
... Developing an open source protocol is in scope.

sicking: Having issues with the working group producing code.
... Prototype-level code is fine, but historically it has been bad.

anssik: Take implementation feedback.
... What happened with WebRTC?

mark: The backend is shared, the presentation layer is separate.

anssik: We need to update the language.

mark: Out of scope is to define what the UserAgent should do.

<anssik> clarification to out of scope: https://github.com/mfoltzgoogle/cg-charter/pull/3/files

mark: The 1-UA protocol is out of scope.

sicking: I didn't consider that this group would do FlyWeb. I'm interested in Bluetooth though.
... The Presentation API would allow not requiring switching networks.
... Defining a Bluetooth protocol may belong to a different group.

mark: We can look at the discovery and contact the relevant groups later.
... We are not adding backwards compatibility.
... There would be ways to implement vendor extensions.
... 1-UA is out of scope, localization and accessibility too.
... Deliverables are first specifications.

anssik: What is the required format?

<tidoust> ACTION: Mark to strike the Deliverables intro text and jump to the specifications list directly in the CG charter [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action08]

mark: 3 main sections: 1) Discovery 2) Protocol for 2-UA mode 3) Protocol for creating and controlling remote playback.

Edward: Don't constrain it.

mark: Is conformance something that we care about now?
... Interesting things: Some ways to use BLE, how to use NFC, having network traversal.

anssik: It is good to have these listed.

Edward: I have some concerns. Can we exclude this section from the scope?

tidoust: Scope helps with patents and keeping the group focused.

Eric: Just leave the first two lines without specifying concrete items.

mark: ACTION: We would remove the items and just keep a generic description.

<anssik> https://github.com/w3c/web-platform-tests/blob/master/LICENSE

Edward: The Test Suites section is not clear.

<hober> http://www.w3.org/Consortium/Legal/2008/04-testsuite-copyright.html

mark: Build a test suite to measure if the prototype matches the spec.

tidoust: I think we should strike the whole section.

anssik: We can use the web platform tests instead.

<scribe> ACTION: Mark to strike the test section. [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action09]

<anssik> CG charter template: http://w3c.github.io/cg-charter/CGCharter.html

mark: Merge pull requests, add notes, and do a new pull request.

anssik: We should figure out the timeline when to get all the feedback in.
... We have feedback from Ed that we may need 6 weeks.

mark: I want TPAC to be a productive f2f.

anssik: Beginning of August we can have the final charter approved.
... We are at the end of the day and we can wrap up.
... Thanks to Mark, the host, and everyone from Google. Thanks to the new arrivals.
... We are getting close, and thanks for the feedback from the implementers.
... Some people may be concerned that there wasn't enough incubation.
... Let's keep up the momentum.
... Next f2f is in Portugal.

<tidoust> [F2F adjourned]

Summary of Action Items

[NEW] ACTION: Mark Foltz to look into software licenses for prototype implementations in the CG [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action07]
[NEW] ACTION: Mark to strike the Deliverables intro text and jump to the specifications list directly in the CG charter [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action08]
[NEW] ACTION: Mark to strike the test section. [recorded in http://www.w3.org/2016/05/25-webscreens-minutes.html#action09]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/06/01 07:42:00 $