W3C

- DRAFT -

Second Screen Presentation WG F2F - Berlin - Day 1/2

19 May 2015

Agenda

See also: IRC log
See also: Minutes of day 2

Attendees

Present
Mark_Watson (mwatson), Anton_Vayvod (whywhat), Mark_Foltz (mfoltzgoogle), Shih-Chiang_Chien (eschien), Hongki_Cha, Anssi_Kostiainen (anssik), Francois_Daoust (tidoust), Louay_Bassbouss, Stephan_Steglich, Christian_Fuhrhop, Matt_Hammond (mh), Michael_Kang_(observer), Mohammed_Dadas (mdadas), Oleg_Beletski (obeletski), Soonbo_Han, Jean-Claude_Dufourd (jcdufourd), Hyojin_Song (hyojin)
Chair
Anssi
Scribe
Anssi, Christian, Francois, Mark_Foltz, Matt

Kick Off

Anssi welcomes participants, thanks host, starts round of introductions.

Anssi: I'm editing different specs in other W3C groups. Excited by this group, it's focused, very good group!
... Looking forward to a productive discussion today. Goal is to reach so-called consensus. F2F can help resolve most critical issues that could perhaps go on and on on the mailing-list
... We can adjust the agenda as needed.

Hyojin_Song: Working for LG, editor of CSS spec.

Soonbo_Han: From LG, as well

<hyojin> Hyojin Song :)

Mark_Watson: from Netflix, would like our web site to be able to connect to TV sets.

Anton_Vayvod: from Google, London office. Extending Chrome for Android to support second screens

Mark_Foltz: From Google, editor of the spec

Hongki: From ETRI, Korea

Oleg_Beletski: Samsung Electronics, Finland. Working with Samsung technologies for mobile in particular

Schien: Mozilla, working in TV in particular, interested to connect devices there

Mohammed_Dadas: from Orange, evolving IPTV services for the future. Interested by this group in that regard.

Francois_Daoust: staff contact

Louay_Bassbouss: from Fraunhofer FOKUS, was involved in Presentation API since the work in the Community Group.

Michael_Kang: from LG Electronics, in charge of broadcasting standards for Europe, DVB and HbbTV. Interested in companion screens for TV. Coming here as an observer this time.

Stephan_Steglich: from Fraunhofer FOKUS, I would like to welcome you to our offices here
... Feel free to ask questions during breaks and free evenings!
... [Stephan reviewing meeting logistics]
... A room is reserved in a nearby restaurant (at your own expense) for this evening. Social dinner tomorrow, part of MWS.
... Room tomorrow won't be the same one, more security involved, there should be some sign at the entrance to guide you to the 6th floor. Louay and myself won't be around tomorrow morning but Christian will be there.

Matt_Hammond: from BBC, we make programs for TV and radio, doing a lot of R&D. I've been working a lot on DVB and HbbTV, companion screens and synchronization in particular. Interesting for us to see that work here. First time in W3C groups.

<cf> Christian Fuhrhop, Fraunhofer FOKUS - proxy for Louay

Christian_Fuhrhop: from Fraunhofer FOKUS, I'm essentially proxying other colleagues here as they will be busy with the Media Web Symposium

Younghun_Song: from LG Electronics, observing the meeting

Agenda review

Agenda for day 1

Anssi: The agenda has been online for a month or so. I propose to look at it and see if there are things that we want to shuffle around, drop, or add.

Anssi: I'd like Mark to walk us through the spec
... Then a Warm up session, to discuss use cases and requirements and see whether they still match the reality. From time to time, new features come in.

Anssi: [demoing Zakim's queue]

<Zakim> anssik, you wanted to ask about agenda

Anssi: After use cases and requirements, Francois should take us through the evaluation of security and privacy that he made against the Presentation API.
... Matt would like to jump in the warm up session to introduce the discussion on the alignment between HbbTV and the Presentation API.

Matt: Yes, I have a short set of slides to present. 10 minutes should be enough to get up to speed. Then we can discuss during the relevant parts of the agenda and get back to it at the end.

Anssi: Any concern to fit that in the warm up session?

Anton: Shouldn't we do that for other technologies as well?

Anssi: That's a good point. We could do a session about the different technologies, indeed.

Louay: We may also do demos in that session, e.g. with Miracast.

Anssi: OK, my only concern is preserving time.
... I'm hearing Matt could introduce HbbTV, Mark or Anton for Google, Chien for Mozilla and Oleg for Samsung.
... Goal is to extract the main issues.

Oleg: I also raised several use cases that could fit here.

Anssi: Let's use the use cases session for that.
... Then warm up becomes: use cases and requirements, security and privacy, review of different technologies
... After warm up session, we enter the core of the discussions. I was happily surprised that we're using GitHub issues so well. For each issue, I would like someone who is familiar with the issue to present it, then proposals to be made.
... We may resolve on some proposal, but note that resolutions are provisional for 10 days after the meeting to allow people who are not at the F2F to react. We will pass around calls for consensus on the mailing-list.

<Zakim> whywhat, you wanted to ask what does it mean to be an observer at the F2F meeting?

Anton: Some people mention that they're observing the meeting. What does that mean?

Anssi: Great question. Meetings are restricted to group participants usually. Chair can invite other people provided they request observer status in advance. I approved requests I received for this meeting.

Mark_Foltz: Question on GitHub issues and possibility to edit the spec before resolutions are taken

Anssi: W3C Process is not meant to block you. Let's be flexible. With Git, it's easy to revert things. Proposals can be integrated unless concerns are raised.

Francois: Right, W3C process is not meant to block you, so feel free to work the way you prefer as long as people in the group are fine with it

Anssi: The editor is the one that manages the spec, so whatever works for you

Hongki: Out of curiosity, our document is an editor's draft, right?

Anssi: Editor's draft is the latest version, published from time to time in TR space

<mdadas> needs to define the "time to time"

Francois: Right, one possibility that I encourage is to automatically publish editor's drafts in TR space. That's easily doable nowadays.

<mdadas> +1 to tidoust proposal

Anssi: Good point. Noting this for later discussion.
... Back to the agenda and the list of priorities. Any need to shuffle things around? [going through the rest of the agenda].
... Focus for day 2 is on new features. In the morning, if we have issues left out from day 1, we should address them on day 2.
... Then new features. Then lunch and after lunch, we have the open session with the 5th Media Web Symposium.
... This gives us an opportunity to engage with the community.
... [reviewing the MWS second screen agenda]

Anssi: After the open session, we'll wrap things up and get to the social dinner.

Stephan: There will be a bus to go to the social dinner.

Anssi: Any question on the agenda?

[none heard]

Walkthrough of the Presentation API

Latest editor's draft of the Presentation API

Mark_Foltz: [introducing the Presentation API]
... The Status of This Work is mostly boilerplate
... Then we list use cases and requirements that we would like to cover, conformance, terminology and examples, and finally we go into technical details about the API, interface definitions and algorithms.
... The security and privacy section is currently a placeholder
... The introduction explains that we want to make sure that we cover a wide range of technologies, using wired and wireless technologies.
... At the core, the page wants to give the browser a URL to some content to present, and then get some way to communicate between the controlling and presenting page.
... We use the terms 1UA and 2UA to reference two different scenarios: in the 1UA case, the user agent streams audio/video content to the remote display. Technologies such as Cast or Miracast are the main target.
... The second use case is when you are talking to another agent. The first UA just gives the URL to the second one.
... This can provide a higher quality video experience because the presenting device's local hardware resources render the video, which is good, especially if the first device is a mobile phone.

Mark_Watson: I would mention support for native apps on the second device in the introduction. That's in scope but not mentioned right now. The target device may not support HTML5.
... I'm not sure there's anything else needed in the spec to address this use case, but it would be good to see that in the introduction.

Mark_Foltz: ok.
... Moving on to use cases. We want to support presentations, multiplayer gaming, and rendering different content across multiple screens.
... One additional detail I did not mention is reconnection to existing sessions.
... Skipping over conformance and terminology sections. Examples show how the API can be used in practice, to start sessions, exchange messages, and also what the presenting page code could look like.

Anssi: We should not underestimate the power of examples. Developers will look at that section first and copy the code that appears there. Please let's maintain this to ensure that the content in that section matches the current API. Think about examples in your pull requests!

Mark_Foltz: [quickly going through the API definition].
... Two main interfaces: the PresentationSession is an object that looks like an RTCDataChannel. That's on purpose.
... It contains the state of that connection, an ID to attach an identifier to that presentation session.
... The other interface, NavigatorPresentation, defines the entry points to the Presentation API, to start and resume presentation sessions.
... One important event is onavailablechange that alerts you when a display is available.
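
[For illustration, a minimal controlling-page sketch of the interfaces just walked through, assuming the onavailablechange / startSession / send shape of the editor's draft at this point in time; navigator.presentation as the entry point, the element id, URL and payload are invented for the example:]

    // Assumed draft shape (not final): NavigatorPresentation with
    // onavailablechange and startSession(), PresentationSession with send()/onmessage.
    interface PresentationSession {
      id: string;
      state: string;
      onmessage: ((ev: { data: string }) => void) | null;
      send(data: string): void;
      close(): void;
    }
    interface NavigatorPresentation {
      onavailablechange: ((ev: { available: boolean }) => void) | null;
      startSession(url: string, presentationId?: string): Promise<PresentationSession>;
    }

    const presentation = (navigator as any).presentation as NavigatorPresentation;

    // Only show the "present" button while at least one display is available.
    presentation.onavailablechange = (ev) => {
      document.getElementById("presentBtn")!.hidden = !ev.available;
    };

    // On a user gesture, ask the UA to start presenting the given URL,
    // then use the returned session for two-way messaging.
    async function present() {
      const session = await presentation.startSession("https://example.com/player.html");
      session.onmessage = (ev) => console.log("from presenting page:", ev.data);
      session.send(JSON.stringify({ cmd: "play", position: 0 }));
    }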

Anssi: I note that the stability of "onavailablechange" is under discussion

Mark_Foltz: We need to do a bit of work to define the algorithm for starting the presenting page on the presenting side.
... We started the process of identifying the security and privacy considerations for this API. To be discussed.
... A logistical note, we try to cross-reference GitHub issues from within the spec

Anssi: Not perfect but pretty useful

Use cases and requirements (#68)

See issue #68 - Use case and requirements

Anssi: When we add something to the spec, it should be motivated by use cases, otherwise people will want to inject their own ideas which could lead to scope creep.

Anssi: Is the list we have still valid? Should we add some? Should we drop some? What about the list of requirements?

Hongki: For the multiplayer gaming use case, there's a good and illustrated definition on GitHub. Why is that not reflected in the editor's draft?

Louay: This use case is multiplayer gaming in general, and this is a more concrete example with a Poker game.

Mark_Foltz: It elaborates on the existing use case.

Anssi: Some groups have a separate use cases and requirements document. We may consider doing the same thing. I think that could fit here.
... Are there people in the room interested in contributing to that spec?
... We can put it in a Wiki to start with, and people can contribute there.

<scribe> ACTION: Louay to initiate the Use cases and Requirements document (probably on a Wiki page) [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action01]

Oleg: We were also thinking about use cases. I posted the example of Pictionary game on the mailing-list. I don't think that it adds a lot of requirements. I agree that it makes sense to create a separate document.

Anssi: So maybe you can work with Louay.

Oleg: Another note, I do not see how joinSession maps to existing use cases right now.

Anton: We may want to refactor the use cases as there is some overlap between audio/video sharing and media flinging

<scribe> ACTION: Anton to refactor use cases to avoid overlap [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action02]

Mark_Foltz: I have two potential use cases to fold in the document. One that does not add new requirements but is useful: working on a shared document. The second use case is about companion screens that can accept input. It could be useful to have that in the list.
... How does a touch-sensitive or input-sensitive display relate to the API?

Anssi: That's interesting.

Matt: Reading the media flinging use case, I'm also thinking about the application lifecycle. You have some model when you initiate the session from a device, then close that session, resume it. Other people in the room may want to continue to interact with the TV as I'm presenting video with it.

Anssi: Is it many-to-one mapping?

Matt: It could be but my primary thought was around the application lifecycle. It would be good to capture that.

Anssi: We have a specific session as part of the multi-screen session.

Oleg: Having clarifications around lifecycle would make the life of developers way easier as well.

Mark_Foltz: Right, if we have a use case, we should have some code example as well and some sort of state diagram that explains the lifecycle.

Anssi: Right.
... Let's move to requirements.

Mark_Foltz: [going through requirements]
... For launching the presentation, it may be up to the Web app, or the UA may provide a menu to launch the presentation.
... Controlling page and presenting page may or may not be on the same user agent, the API abstracts that away and the pages should not know anything about it

Schien: We have use cases for multiple users but the functional requirements kind of miss that part

<scribe> ACTION: Schien to complete the list of requirements with multiple users ones [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action03]

Jean-Claude: My question is about the note on multi-screen enumeration and named identification requirements being removed. The user agent still lists the screens to the user while the Web page only has "present or not", right?
... The user still gets to choose the target screen?

Anssi: Yes, it's similar to the file picker. Same abstraction to address security issues.

Jean-Claude: You mentioned the ability to delegate rendering but also sensing, user inputs.

Anssi: That could be future extensions.

Jean-Claude: probably you can already do it today over the bi-directional communication channel. I'm just wondering whether mentioning it here could be a good idea.

Anssi: I think we have one issue for that.

Mark_Foltz: Yes, we have

Anssi: I agree that's something missing from the spec. We're not really clear on that yet.

Oleg: Another potentially missing requirement. I understand that you may want to support multiple screens. If that's the case, it may impact the API and should appear in the list of requirements in any case.

Mark_Foltz: Yes.

<mfoltzgoogle> jcdufourd: Adding better support for remote sensors could be a good topic to discuss in the webscreens community group. https://www.w3.org/community/webscreens/

Security and privacy evaluation and considerations (#45)

See issue #45 - Security and privacy evaluation and considerations

Francois: do not expect us to do resolutions in this specific slot
... it is important to review the spec from privacy and security perspective as early as possible
... we may hit these at later stages along the Rec Track
... there are no yes-or-no answers in this, finding the right balance is the key
... a bunch of people from WebAppSec have produced a questionnaire other groups can use to evaluate their specs
... this questionnaire is not complete, but is useful nevertheless
... I'll go through the observations I've made while completing the questionnaire
... secure context, I'll get back to it at the end
... Does this specification deal with personally-identifiable information?
... does not expose the list of devices
... only whether there are one or more displays available
... can disclose user's location, e.g. at home vs. in a hotel
... implications on filtering
... if the API conveys information that it is able to render specific content that'd be a bigger privacy issue
... Does this specification deal with high-value data?
... no
... Does this specification introduce new state for an origin that persists across browsing sessions?
... there may be things we should address related to this
... if you know the URL and id, you can attach to a session running regardless of where you (physically) are
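
[Sketch of the observation just made: a page that has kept (or obtained) the presentation URL and id can try to re-attach with joinSession(), wherever the user happens to be; the storage key and URL are invented for the example:]

    // Assumed draft shape: navigator.presentation.joinSession(url, id) resolves
    // if a matching presentation is still running and rejects otherwise.
    async function reattach(url: string, presentationId: string) {
      const presentation = (navigator as any).presentation;
      try {
        const session = await presentation.joinSession(url, presentationId);
        console.log("re-attached to presentation", presentationId);
        return session;
      } catch {
        console.log("no matching presentation to join");
        return null;
      }
    }

    // Any page that learns the URL and id (e.g. from persisted state)
    // can attempt to attach, which is the privacy point raised above.
    reattach("https://example.com/player.html",
             localStorage.getItem("lastPresentationId") ?? "");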

jcdufourd: you said something about filtering, and related risks

Francois: the idea is to allow the web site to show the "display on a second screen" button only if the action could succeed

Anton: proposes to focus on browser initiated presentations
... to solve the issue

Matt: no position on the proposal
... can see the advantages though from the privacy and security perspective
... may work with relatively static web content
... we're talking of a wide range of use cases

Matt: if we have control over the UI widgets then we can customize the UX more

mfoltzgoogle: in cast, can initiate from the browser or the page

<mfoltzgoogle> ACTION: mfoltzgoogle to see if there is any data he can share about user behavior around session initiation from the browser or the page. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action04]

<scribe> ACTION: anton to explore browser initiated presentation via declarative approach [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action05]

markw: Netflix's UX people would prefer to have control over the UX
... they like the current state where they can render the icon at least
... makes it possible to implement consistent UX across browsers

anssik: <output type="screen">

<markw> ... another related, but tangential, issue is how devices that the page knows how to reach via a cloud service, say, can be exposed to the user

Francois: Secure Contexts

<markw> ... in a uniform way with devices on the local network that are discovered and managed by the UA

Francois: WebRTC WG is doing a bunch of specs that have some overlap
... Media Capture and Streams and the Audio Output Devices API allow enumerating output devices
... we may want to reuse that pattern
... currently restricted to audio only, but would be extended to video
... we want to cover the presenting video and audio content
... not the right abstraction to present video control directly
... Audio Output API proposes a sinkId on an HTMLMediaElement
... that associates the HTMLMediaElement with a device
... something we may want to look at too
... perhaps interact with the WebRTC WG
... to filter devices, by splitting the work into 1) presenting HTML content and 2) a part dedicated to media

markw: we do not necessarily mean HTMLMediaElement only
... there's a bit more than just render media content

<scribe> ACTION: tidoust to look into WebRTC WG's Audio Output API and its reuse in the context of the Presentation API [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action06]

[breaking out for lunch, back at 1 PM]

[starting now]

Francois: wrapping up the discussion on security and privacy
... the network service discovery spec attempted to solve discovery
... had issues with security
... now the spec requires CORS support
... which means legacy devices cannot be supported
... we should see if CORS is required for the Presentation API
... the API is meant for devices that are already able to access the Web
... if we allow specific URL schemes, we open it up to a new class of devices
... a minor point: "Does this specification allow an origin some measure of control over a user agent’s native UI?"
... Does this specification expose temporary identifiers to the web?
... that'd be URL and id
... How should this specification work in the context of a user agent’s "incognito" mode?
... implication of the incognito mode on the Presentation API

anssik: what do other specs say about "incognito"?

Francois: yes, some specs impose extra requirements for "incognito"

<scribe> ACTION: tidoust to look into "incognito" requirements on the spec [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action07]

Francois: this ties into the app lifecycle too, the web developer should understand that
... security requirements on the messaging channel
... While the spec will not mandate communication protocols, it should set some guarantees of message confidentiality and authenticity.
... same problem as with the WebRTC
... can learn or adapt from it
... mark has a nice summary at https://github.com/w3c/presentation-api/issues/45#issuecomment-103376106
... no single good answer to security and privacy, we must exercise the right judgement
... in the Rec Track, at Last Call maturity we are expected to seek review from accessibility, internationalization, security and privacy groups
... of course we can initiate discussion with these groups already now
... suggest to initiate the discussion with internationalization soon

<scribe> ACTION: tidoust to seek review from PING and Web Security IG when the spec has adapted F2F changes [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action08]

mfoltzgoogle: do you want to incorporate feedback into the spec's security and privacy section?

Francois: low hanging fruits could be baked in already

<scribe> ACTION: mfoltzgoogle to prime the Security and Privacy Considerations section with content from https://github.com/w3c/presentation-api/issues/45 [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action09]

HbbTV 2.0 and the Presentation API (#67)

See issue #67 - Investigate possible compatibility with HbbTV

Subset of Application lifecycle in HbbTV 2.0 slides to be presented
Short description of HbbTV 2.0
HbbTV - essentially web content on a television
Signalling usually in broadcast stream, content in most cases on a server
Broadcast-related content can only be presented if the broadcaster allows it
Other applications can be broadcast-independent. No access to the broadcast stream, but they can play IP streams.
HbbTV devices can be discovered with DIAL protocol
DIAL can launch apps on TVs (with addressing of specific security concerns)
Devices get paired, communication is via websocket message passing.
Pairing requires URLs with matching <app-endpoint> string

Relation to Presentation API - parameters would be useful to pass HbbTV specific information
Do applications need to be aware that the device they are paired with is HbbTV?
Lifecycle problems may occur if a presentation starts but does not connect on the websocket, or an app is still running when communication disconnects.
Processes can't be remotely controlled/stopped.
Reconnection cannot be guaranteed. Would likely require a restart of the application.
Multiple connections can be made with same URL suffixes, but communication would likely fail on websocket communication establishment.

Slides are attached to github issue, they contain more information...

<mh> slides are here: https://myshare.app.box.com/s/rl2joltd6pyu6hx7flbpc1zu91pbptvu

<mh> also attached to: https://github.com/w3c/presentation-api/issues/67

Presentation API on Cast in Chrome

Mark Foltz: Second screen functionality by browser extension
... Three basic use cases - web page controls video, web page controls slide show, web page mirrored externally
... Chrome architecture - renderers are generally separated from the browser. A media router component allows rendering on external devices.
... It handles the Presentation API and the UI for controlling/linking presentations.
... Communication done with "Mojo IPC" protocol.
... Sites that want to present register with the media router.
... App can detect change and then start presentation session on presentation device.
... The device can render content directly, or use Cast sharing or WebRTC to render on another device
... A ProviderManager provides a common interface to Cast/DIAL/Hangouts

Schien: How do you perform the WebRTC connections?

Mark_Foltz: Basically using the underlying services (in this case Hangouts), i.e. some server on the network, not locally.

Presentation API in Firefox

Schien: Two presentation modes
... [Presents architecture].
... Lifecycle stages: discovery, control channel establishment, presenting page launching, app-to-app transport channel establishment
... control channel closure - on session closure, the app-to-app channel is closed
... Built-in protocols for Firefox-Firefox connection - presenter broadcasts info on port 9876
... requester determines which end point it can talk to and establishes RTC channel
... security model reduces footprint on the presentation device - no access to device storage, IndexedDB or cookies
... Web pages are launched in a sandbox
... Future work - support for Firefox browsers, resume/join facility, binary messaging, HDMI and Wi-Fi Display support, possibly one-to-many sessions

(Staying in agenda order, not re-sorting the order of issues to be discussed)

(Actually we are changing the order of issues right now - see updated wiki page)

Define a bi-directional data channel between opening and presenting browsing contexts (#46)

See issue #46 - Bi-directional data channel

Anssi: The group has been exploring options such as WebRTC. Have been converging.

mfoltzgoogle: need to clarify this is message oriented ... generating events. no fragmentation.
... Do we need to specify message size limits? Or a minimum supported size?

anssik: only specify if an issue emerges.

mfoltzgoogle: Make explicit that the message channel is reliable and in-order

schien: Agree this is reasonable.

<tidoust> PROPOSED RESOLUTION: adopt the "send"-like interface and clarify in the spec that the data channel is an in-order reliable message-based communication channel.
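
[A sketch of the "send"-like usage the proposed resolution implies: messages are delivered whole (no fragmentation), reliably and in order, so a simple command protocol can rely on ordering; the payloads are invented for the example:]

    // Controlling page: each send() becomes exactly one message event on the
    // other side, and delivery order matches send order.
    function sendCommands(session: { send(data: string): void }) {
      session.send(JSON.stringify({ cmd: "load", url: "movie.mp4" }));
      session.send(JSON.stringify({ cmd: "seek", position: 120 }));
      session.send(JSON.stringify({ cmd: "play" })); // guaranteed to arrive after "seek"
    }

    // Presenting page: no reassembly or reordering needed by the application.
    function listen(session: { onmessage: ((ev: { data: string }) => void) | null }) {
      session.onmessage = (ev) => {
        const msg = JSON.parse(ev.data);
        console.log("received in order:", msg.cmd);
      };
    }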

Anton: Chrome silently doesn't send empty messages in WebRTC. Could this be expected to work as a "ping" mechanism?

markw: WebSockets spec says nothing about this. Doesn't indicate that empty messages are a special case.

<scribe> ACTION: tidoust to look at how WebSockets and how WebRTC data channels deal with empty messages [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action10]

Define (cross) origin relationship between opener and presenting page (#63)

See issue #63 - Define (cross) origin relationship between opener and presenting page

schien: Propose we define the origin relationship. Webpages can be easily inspected, so the security authentication judgement would have to be deferred to a separate (e.g. server-side) security agent.

But for the Presentation API, there may not be access to cloud services. Need to define security guarantees.

Propose to learn from window.postMessage design

scribe: For example: If JS inside embedded iframes only wants to talk to outer frame with same origin, it can specify this.
... The browser makes the check

jcdufourd: If I start a Netflix native app from my app, my domain will not match netflix.com domain.

markw: haven't assumed origin associated with two separate UAs
... the two UAs are isolated/independent

mfoltzgoogle: this is an app to app authentication problem ... both sides should use a 3rd party to help with that.
... CORS? Probably doesn't make sense though ... doesn't protect the server
... same origin requirement probably too restrictive and creates obstacle to reusing presentations. Can't see how it can be enforced.

Francois: window can determine origin of its content. 2nd screen (UA2) cannot reliably check the origin of UA1

markw: app cannot trust UA (or the UAs trust in each other) ... app developers must implement this kind of authentication check for themselves

anssik: what are the risks? app on UA1 would have to choose to launch malicious presentation on UA2

schien: Concern is how does the app launched on UA2 know if it can trust the app that launched it on UA1
... UA1 passes origin information to UA2. Will not prevent attacks if UA1 is compromised; but will work if UA1 is uncompromised
... could be done by authenticating a token with a cloud service and passing that across. But does not work if there is a local network but no internet access

jcdufourd: in this situation, the devices on the home network are traditionally considered trusted (e.g. how UPnP works)

Louay: We must also consider situations where one of the two parties is an app without origin (because it is not a UA)
... or because it is a packaged web app

<anssik> http://www.w3.org/TR/app-uri/

mfoltzgoogle: proposed approach sounds Firefox OS specific ... is there scope for defining an extension to the basic spec for this, to avoid impacting other use cases?

anssik: We are targeting browsers. Packaged web apps are perhaps appropriate to consider as an extension.

markw: summary: UA2 and UA1 cannot practically agree to exchange info about target origins without mandating something (new) at the protocol level, which is out of scope. Does not give effective trust information.

anssik: but is possible for certain use cases / implementations as described by schien

schien: will go away and consider proposals from mfoltzgoogle

<scribe> ACTION: schien to look to see if there is a chance to apply CORS origin check approaches to Firefox OS specific packaged app [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action11]

<tidoust> PROPOSED RESOLUTION: drop step 3 in 6.3.2 Receiving a message through PresentationSession, the message origin will not be conveyed.
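
[With the message origin not conveyed by the UA, a sketch of the kind of application-level trust check discussed above: the controlling page presents a token that the presenting page verifies itself; the token scheme and message format are purely illustrative:]

    // Controlling page: include a token the presenting page can verify,
    // e.g. one previously obtained from the application's own server.
    function hello(session: { send(data: string): void }, token: string) {
      session.send(JSON.stringify({ type: "hello", token }));
    }

    // Presenting page: the UA gives no origin guarantee, so the application
    // decides for itself whether to trust the controller.
    function guard(session: { onmessage: any; close(): void }, expectedToken: string) {
      session.onmessage = (ev: { data: string }) => {
        const msg = JSON.parse(ev.data);
        if (msg.type === "hello" && msg.token !== expectedToken) {
          session.close(); // refuse untrusted controllers
        }
      };
    }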

Define security requirements for messaging channel between secure origins (#80)

See issue #80 - Define security requirements for messaging channel between secure origins

mfoltzgoogle: If web apps on both UA1 and UA2 are running in a secure context, can they expect this of the channel?
... But difficult to do between two devices at the browser level.
... Either: encourage web developers to do their own encryption (e.g. using WebCrypto) ... Or we try to provide stronger guarantees.
... suspect it is better to leave it up to the application for now, but leave open to change in the future.
... attacks will likely be within the LAN, so perhaps more limited. This is qualitatively different from the open internet.

What do other people think?

Francois: currently, a web app from a secure origin cannot open an insecure WebSocket
... this feels analogous
... but we must address this issue in the spec in some way

markw: If you cannot open an insecure WebSocket connection then it is not possible to communicate with another device on the local network. We need to allow it (but mediated via the Presentation API, not by using the WebSocket API) ... but is this acceptable to the security people? (Green padlock still showing despite an unencrypted communication channel being present)

Francois: we should raise this issue with other groups

markw: app on UA1 could provide fingerprint of self signed cert provided by app on UA2 (for validation by UA1). Thoughts?

<markw> markw: One option to consider would be for the secure page requesting a presentation session to provide the fingerprint of a self-signed cert at the target device that will be used to secure the point-to-point communication between the devices

mfoltzgoogle: at minimum, some implementation guidance is needed.

anssi: messaging channel security currently out of scope
... but does at some level need addressing

<tidoust> PROPOSED RESOLUTION: keep issue 80 open while we gather more implementation experience. Highlight issue asking for feedback when getting in touch with PING / Web security IG

<markw> markw: Another direction would be for the Presentation API to provide a way for the UA to request the site to help transport key exchange messages, with the resultant keys being used to secure the direct local network communication

<jcdufourd> one of the issues in the above discussion is: in the 2 UA case, with separate implementations on the two sides, there is a need for spec text to have interoperable communication (with or without security)
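
[A sketch of the application-level option mentioned above (developers doing their own encryption, e.g. with WebCrypto): a message is encrypted with a pre-shared AES-GCM key before being handed to send(); how the key reaches both ends is exactly the open problem and is left out here:]

    // Encrypt a message with a pre-shared AES-GCM key before handing it to send().
    async function sendEncrypted(session: { send(data: string): void },
                                 key: CryptoKey, plaintext: string) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv }, key, new TextEncoder().encode(plaintext));
      // The draft channel is string-based, so ship iv + ciphertext as base64.
      session.send(JSON.stringify({
        iv: btoa(String.fromCharCode(...Array.from(iv))),
        data: btoa(String.fromCharCode(...Array.from(new Uint8Array(ciphertext)))),
      }));
    }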

<anssik> [breaking for coffee for 10 minutes, resuming at 3.40 PM]

Refine how to do session teardown/disconnect/closing (#35)

See issue #35 - Refine how to do session teardown/disconnect/closing

Mark_Foltz: 2 scenarios. One where the user requests that the presentation be stopped through the browser in the 1-UA case. That's pretty straightforward.
... The other scenario is when the user agent closes the presenting session.
... [going through bullet points in issue #35]
... choice between adding a disconnectReason to the statechange event or an error event to PresentationSession. Do not feel strongly one way or the other.

Schien: Only two states right now, there should be more.

Anssi: The list of states is indeed open for revision.

Schien: for entering the "terminated" state, it must be an explicit app call. Successful close from an application perspective.
... Otherwise the connection is disconnected but still alive.

Matt: Also useful while initializing the connection as I understand the connection starts in a "disconnected" state and it would be useful to distinguish that from the end of the application lifecycle.

Matt: It would be interesting to understand what "disconnected" means in practice.

Mark_Foltz: It is going to be difficult to distinguish between various situations. You don't know in most cases.
... Using joinSession is a way to reconnect to re-establish current session in case of temporary failure.
... I think we're agreeing to add the "terminated" state.

Francois: what does "disconnected" mean for the application developer? Does it mean that he has to call joinSession to resume the session, or will the user agent continue to try to re-establish the session in the background?

Mark_Foltz: "disconnect" really means the UA surrendered. Then the app can implement whatever logic is needed

Oleg: In Samsung Multiscreens tech, quite close to HbbTV, you may pass a boolean flag to "close" that keeps the presenting side open.
... If you close the session from the originating page, what is happening to the presenting page?

Mark_Foltz: Right now, the behavior is really defined for one connecting session and so "close" will request the presenting side to close as well. When we consider multi-screens scenarios, we should revisit that.
... Good topic for the multiple screens session!

Anton: the spec does not say anything about what happens to the presenting page and should say something.

PROPOSED RESOLUTION: for #35, add a "terminated" state to the list of connection states and clarify in the spec what happens on the presenting side when "close" is called
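
[Sketch of how a controlling page might react to the states discussed in this session, with "terminated" per the proposed resolution and joinSession() used to recover from "disconnected"; the code shape is illustrative, not spec text:]

    function watch(session: { state: string; id: string; onstatechange: (() => void) | null },
                   url: string) {
      session.onstatechange = async () => {
        if (session.state === "disconnected") {
          // The UA gave up on the channel, but the presentation may still be
          // running: try to re-establish it via joinSession().
          try {
            const rejoined = await (navigator as any).presentation.joinSession(url, session.id);
            console.log("re-established session", rejoined.id);
          } catch {
            console.log("could not rejoin; presentation may be gone");
          }
        } else if (session.state === "terminated") {
          // The presenting page itself was closed; nothing left to rejoin.
          console.log("presentation terminated");
        }
      };
    }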

Specify the presentation initialization algorithm (#34)

See Issue #34 - Specify the presentation initialization algorithm

mfoltzgoogle: Need to make sure presentation session is able to know when it's connected.

anssik: From a webdev perspective, better to set session before onload.

obeletski: A one-time async operation calls for a Promise.

<scribe> PROPOSED RESOLUTION: Promise<Session> NavigatorPresentation.getSession()

Matt: Do we need a "connecting" state while the presentation connection is established?

tidoust_: If we leave the Promise unresolved while the connection is established, then it can serve the same purpose.

obeletski: What about leaving Promises unresolved for joinSession()?

mwatson: Concerns about programming model. Developers expect them to resolve in a reasonable amount of time.

jcdufourd, mfoltzgoogle: Concern about garbage collection and implementation complexity.

mfoltzgoogle: PROPOSAL: Strike unresolved Promises proposal from joinSession() algorithm.

<tidoust> PROPOSED RESOLUTION: For #34, keep the "joinSession" algorithm in 6.4.2 deterministic and strike the issue note about unresolved Promises

<tidoust> PROPOSED RESOLUTION: Promise<Session> NavigatorPresentation.getSession() which resolves when the communication channel is established and rejects if the communication channel cannot be established
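
[Sketch of the presenting-page side of the proposed getSession(): the promise resolves once the communication channel is established and rejects otherwise; navigator.presentation as the entry point and the "ready" message are assumptions for the example:]

    // Presenting page: obtain the session for the controller that launched us.
    (navigator as any).presentation.getSession()
      .then((session: { send(data: string): void; onmessage: any }) => {
        session.onmessage = (ev: { data: string }) => handleCommand(JSON.parse(ev.data));
        session.send(JSON.stringify({ type: "ready" }));
      })
      .catch(() => console.log("communication channel could not be established"));

    function handleCommand(msg: { cmd?: string }) {
      // Application-specific handling of commands from the controlling page.
      console.log("command:", msg.cmd);
    }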

Specify behavior when multiple controlling pages are connected to the session (#19)

See issue #19 - Specify behavior when multiple controlling pages are connected to the session

mfoltzgoogle: Added joinSession, need to define behavior when there are multiple connected sessions to the presentation.

schien: PROPOSAL: Promise<Session[]> NavigatorPresentation.getSessions(), onsessionavailable would fire when the list had changed.

tidoust: Do we need to return a Promise to return the list?

Matt: I don't want to know that there are two sessions, don't want to do a diff on the list.

eschien: add an onnewsession handler for additional connected sessions.

Matt: What if you call getSession() after there are multiple sessions available? Don't you want to know about all sessions?

<anssik> https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#navigator-service-worker-getRegistration

<anssik> https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#navigator-service-worker-getRegistrations

anssik: ServiceWorker returns Promises for getRegistration and getRegistrations

Louay: In CG we had onpresent. Why did we reject that proposal?

<anssik> https://www.w3.org/community/webscreens/wiki/API_Discussion#Usage_on_Remote_Screen

mfoltzgoogle: PROPOSAL: NavigatorPresentation.sessions[] initialized with a single PresentationSession for the opening page. NavigatorPresentation.onsessionavailable when array changes.

echein: Don't use array attributes, Web developers can modify prototype

eschien: Replace array with PresentationSession[] NavigatorPresentation.getSessions()

jcdufourd: Not incompatible. For the simple case use a Promise, for advanced cases use getSessions()/onsessionavailable.
... Promise<PresentationSession> NP.getSession(), PresentationSession[] NP.getSessions(), NP.onsessionavailable event when set of sessions changes.

<Louay> discussion in the CG on multiple sessions/channels https://lists.w3.org/Archives/Public/public-secondscreen/2014Nov/0052.html

<tidoust> [Discussion on terminology and the fact that it is confusing to have "session" used on both sides. Mark_Foltz proposes to look at naming alternatives]

<tidoust> ACTION: Mark_Foltz to look at renaming "sessions" for controlling and presenting sides. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action12]

<mh> PROPOSAL: Possible renaming: getSession() --> isPresentation() | startSession() --> startPresentation() | joinSession() --> joinPresentation()

<tidoust> PROPOSED RESOLUTION: starting point for #19 is Promise<PresentationSession> NP.getSession(), PresentationSession[] NP.getSessions(), NP.onsessionavailable event when set of sessions changes (with possible naming changes)
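
[Sketch of the multi-controller shape in the starting-point resolution: getSession() for the initial controller, getSessions() plus an onsessionavailable event for later ones; naming may still change per the renaming action above:]

    const np = (navigator as any).presentation;
    const controllers: any[] = [];

    // Initial controller: the page that started (or first joined) the presentation.
    np.getSession().then((s: any) => controllers.push(s));

    // Additional controllers joining later, e.g. other devices in the room.
    np.onsessionavailable = () => {
      for (const s of np.getSessions()) {
        if (!controllers.includes(s)) {
          controllers.push(s);
          s.onmessage = (ev: { data: string }) => broadcast(ev.data);
        }
      }
    };

    // Relay a message from one controller to every connected controller.
    function broadcast(data: string) {
      for (const s of controllers) s.send(data);
    }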

<anssik> http://w3c.github.io/presentation-api/#common-idioms

<anssik> A presentation is an active connection between a user agent and a presentation display for displaying web content on the latter at the request of the former.

<anssik> A presentation session is an object relating an opening browsing context to its presentation display and enabling two-way-messaging between them. Each such object has a presentation session state and a presentation session identifier to distinguish it from other presentation sessions.

<anssik> https://github.com/w3c/presentation-api/pull/90

<scribe> ACTION: mfoltzgoogle to fix spec to refer to updated Presentation idiom [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action13]

Define user agent context for rendering the presentation (#14)

See Issue #14 - Define user agent context for rendering the presentation

Mark_Foltz: Basically, what we've done on the presenting side is rendering things in "incognito" mode so everything such as local storage, cookies and so on is separate.
... We have questions about some sensor APIs, whether they will be available. We haven't disabled any API right now.
... Permission-granting APIs should probably be discouraged since they would require the user to interact with the presenting screen.

Anssi: Good point, what about touch?

Mark_Foltz: Not a real problem.

Schien: At Mozilla, pretty similar to what Google does, using "private browsing" mode.
... Separate cache as well. Completely isolated.
... In my opinion, because the Web developer wants symmetry on both sides between the 1UA and the 2UA mode, I would propose to disable the APIs that we don't want them to use.

Jean-Claude: Is the 1UA case with multiple connections possible? Does it make sense?

Mark_Foltz: It is possible but we haven't really thought about that for now.
... The way I would summarize the issue at stake is that the presenting user agent should not persist data and should disable the APIs that require user permission.

Anssi: The more problematic APIs are the legacy APIs such as geolocation. You could do feature detection, but nothing would really happen, e.g. listening for deviceOrientation changes would produce no events.

Schien: For packaged application, we only grant permissions to APIs listed in the manifest

<schien> mozilla private browsing: https://wiki.mozilla.org/PrivateBrowsing

Francois: I note initial work in the TAG on defining a Private Browsing mode

<mfoltzgoogle> ACTION: mfoltzgoogle to define browsing context in terms of the upcoming spec for private browsing, perhaps using the Mozilla link as an interim reference. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action14]

<mfoltzgoogle> Two key points:
1. There should not be persistent data kept across presentations for any origin
2. APIs that require user permission should act as if the user canceled/rejected the request
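
[Illustration of point 2 from the presenting page's point of view, using geolocation: the API is still present, but behaves as if the user rejected the permission prompt, so feature detection alone tells the page nothing; this reflects the discussion, not normative text:]

    // On the presenting browsing context, the API object still exists...
    if ("geolocation" in navigator) {
      navigator.geolocation.getCurrentPosition(
        (pos) => console.log("position", pos.coords.latitude, pos.coords.longitude),
        // ...but the UA acts as though the user denied the permission request.
        (err) => console.log("expected on a presentation display:",
                             err.code === err.PERMISSION_DENIED));
    }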

Mark_Foltz: There are things that we may want to address, e.g. what happens when the presenting page calls "window.open".

Jean-Claude: Anything that pops up will be a pain

Mark_Foltz: there should be some informative guidance for implementers, typically.

<anssik> [closing Day 1]

Summary of Action Items

[NEW] ACTION: anton to explore browser initiated presentation via declarative approach [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action05]
[NEW] ACTION: Anton to refactor use cases to avoid overlap [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action02]
[NEW] ACTION: Louay to initiate the Use cases and Requirements document (probably on a Wiki page) [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action01]
[NEW] ACTION: Mark_Foltz to look at renaming "sessions" for controlling and presenting sides. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action12]
[NEW] ACTION: mfoltzgoogle to define browsing context in terms of the upcoming spec for private browsing, perhaps using the Mozilla link as an interim reference. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action14]
[NEW] ACTION: mfoltzgoogle to fix spec to refer to updated Presentation idiom [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action13]
[NEW] ACTION: mfoltzgoogle to prime the Security and Privacy Considerations section with content from https://github.com/w3c/presentation-api/issues/45 [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action09]
[NEW] ACTION: mfoltzgoogle to see if there is any data he can share about user behavior around session initiation from the browser or the page. [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action04]
[NEW] ACTION: Schien to complete the list of requirements with multiple users ones [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action03]
[NEW] ACTION: schien to look to see if there is a chance to apply CORS origin check approaches to Firefox OS specific packaged app [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action11]
[NEW] ACTION: tidoust to look at how WebSockets and how WebRTC data channels deal with empty messages [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action10]
[NEW] ACTION: tidoust to look into "incognito" requirements on the spec [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action07]
[NEW] ACTION: tidoust to look into WebRTC WG's Audio Output API and its reuse in the context of the Presentation API [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action06]
[NEW] ACTION: tidoust to seek review from PING and Web Security IG when the spec has adapted F2F changes [recorded in http://www.w3.org/2015/05/19-webscreens-minutes.html#action08]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.140 (CVS log)
$Date: 2015/05/29 07:29:05 $