W3C

Second Screen WG F2F - Day 1/2

24 May 2016

Agenda

See also: IRC log

Attendees

Present
Louay_Bassbouss, Anssi_Kostiainen, Mark_Foltz, Mounir_Lamouri, Rick_Smit, Hyojin_Song, Chris_Needham, Shih-Chiang_Chien, Yavor_Goulishev, Francois_Daoust, Jonas_Sicking, Jer_Noble, Ed_Connor
Regrets
Chair
Anssi
Scribe
Francois, Mark

Contents

See also the minutes of day 2.

NB: Minutes are rough, imprecise, and possibly wrong from time to time. Check issues on GitHub, linked from these minutes, for additional context.


Welcome and Introductions

Anssi: [welcoming everyone]. Thanks to our host. Glad to see new faces in the group.
... Introducing the Second Screen WG: a very focused group. Two specs are in the pipeline: the Presentation API and the Remote Playback API.
... I work for Intel, I'm chair of this group, also active in other groups and specs in W3C.
... It's been a fun ride to chair this WG, because it's been highly functional.
... My role is to remove impediments along the way so that you can make progress.

Mark: I'm the editor of the Presentation API, working for Google Chrome.
... My team is working on multi-screen experiences.

Mounir: Also working for Google. Together with Anton, we are editors of the Remote Playback API.
... Also involved in implementation of the Presentation API on Android.

Rick: I work for Vigour. We develop multiscreen experiences, heavily building on Web technologies.
... We just joined W3C.

Hyojin: I work for LG Electronics. I've been working on several second screen experiences over the last few years.

Chris: From the BBC, national broadcaster with an online video service.
... We're interested in the Presentation API as a way for users to watch iPlayer content.
... Also been involved with Francois in an EU project called MediaScape where we researched multiscreen use cases.

SC: Working for Mozilla. Doing implementation work for Firefox browser and also Firefox TV.

Yavor: Working for Google.
... YouTube live in Chrome. Running HTML5 apps on millions of TVs. Interested in transport protocols for multi-screen experiences.

Louay: I work for Fraunhofer FOKUS.
... We developed prototypes for the Presentation API.
... We do a lot of prototypes for HbbTV 2.0 standards as well.
... I'm also interested in the discussion of open protocols tomorrow.

<Louay> I need to leave the office now; I will connect in a few minutes via Hangout on mobile

Anton: Working for Google. Implemented the Presentation API. Editor with Mounir of the Remote Playback API.

Jonas: Hi, I'm from Mozilla.

Ed: Hi, from Apple

Jer: Hi, from Apple as well.

Anssi: Thanks for the introductions. Looking at the agenda: we'll start with the Remote Playback API. The spec has much improved lately.
... In the afternoon, we'll look at the Presentation API with a goal to publish a CR soon.
... At the end of the day, we'll talk about testing of the Presentation API.
... [discussing dinner arrangements]

Remote Playback API

Anssi: Thanks for everyone who contributed to this work recently. Feedback from Apple products is great.
... I'm going to ask Anton and Mounir to walk us through the issues and highlight the most important ones.
... Let's start with an overview of the spec.

Anton: [projecting slides]
... Goal is to play media on external (remote) devices, e.g. wireless speakers.
... The API narrows the use cases down to audio/video content, which in turn should help reach more devices and more browsers.
... No real problem with messaging either, since we can just reuse the MediaElement API.
... Most browsers have a similar feature that is only a UI feature so far, and not exposed to the Web app.
... For app developers, it's less work if they only want to playback audio/video content.
... Looking at requirements, one of the important ones is not to break media websites.
... You don't want to throw exceptions
... We'd like to be compatible with Safari's API as well.
... Taking a quick look at the spec [Anton projecting IDL].
... An attribute to disable remote playback and an attribute to access the remote interface.
... The remote interface is similar to that of the Presentation API.
... The call to connect() yields a Promise that gets resolved when the user picks the remote device.
... getAvailability() lets one assess whether there are compatible devices available.
... We're on GitHub.
... Looking at differences with Safari's API, the main one is that availability is implemented as a simple attribute. There is also no Promise returned, so no way to really assess success or failure.
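
[Illustrative sketch, not shown at the meeting: how the projected API shape could be used from script. The member names (disableRemotePlayback, remote, getAvailability(), connect()) follow the slides as minuted; the editor's draft may differ in detail.]

    const video = document.querySelector('video');

    if (!video.disableRemotePlayback) {
      // Watch whether any compatible remote device is around.
      video.remote.getAvailability().then(availability => {
        console.log('Remote device available:', availability.value);
        availability.onchange = () =>
          console.log('Availability changed:', availability.value);
      });

      // On a user gesture, ask the UA to show its device picker.
      document.querySelector('#remote-button').onclick = () => {
        video.remote.connect()
          .then(() => console.log('Remote playback state:', video.remote.state))
          .catch(err => console.log('No device selected:', err));
      };
    }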

Anssi: One question, I think you looked at Windows 10 API, any difference?

Anton: I haven't looked at it, actually.

Anssi: OK, I remember looking at a Wiki page that mentions it.

Mounir: I think that was just a mention.
... Firefox also has some kind of remoting.

SC: In Firefox Android, we're trying to introduce a Cast button in the playback controls interface so that when the browser recognizes supported devices such as Roku, it can propose remote playback.

Anssi: In terms of the spec...

SC: That's media flinging.

Anssi: OK. Let's go through issues then.
... What would be a good start to make this session logical?

Do we need remote.getAvailability()? (#39)

-> Issue #39

Mark: I wanted to probe this API a little bit.
... The 3 issues I wanted us to tackle are: what does it mean to call getAvailability() multiple times on the same media element?
... We tweaked the Presentation API so that it returns the same object.
... The second issue which is different in this case is what happens if the source list changes.
... The third issue is what happens to the availability object when the media element is discarded.

Jer: At least for Safari's initial implementation, I don't think it's necessarily a good idea to have explicit start/stop listening.

Anssi: Not having side effects on event listeners seems to be good practice.

Ed: I would argue that the fact that we don't care until there's an event listener attached is just a side effect in this case.

Jonas: Do we need to stop things?

Jer: When things are garbage collected?

Jonas: It's not a good idea to tie this to garbage collection because it can take minutes.
... It kind of works, but it's not super.
... I think for the APIs we have that are tied to the GC, we're kind of regretting it a little bit.

Jer: Note we have use cases where the media element is detached from the DOM but still playing to the remote device.

Mounir: Right, you could imagine people using MSE, with things outside of the DOM.

Jer: That's a different problem, but yes.

Mounir: More simply, there is audio; no one adds audio elements to the DOM.

Anssi: Is the intersection observer the solution?

<anssik> https://wicg.github.io/IntersectionObserver/

Mounir: observe/unobserve methods.
... With media.remote.observe(), you would start observing availability.

Jer: So there wouldn't be an explicit observer object in this case?

Mounir: No.

Anssi: That sounds like a solution.

<scribe> ACTION: Anton to craft a PR to use observe/unobserve pattern for availability [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action01]

Jonas: Doesn't the observer pattern sometimes return an object that can be canceled?

Mounir: Right.
... It actually even takes a callback, which I feel we don't really need here.
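
[Illustrative sketch of the observe()/unobserve() pattern floated above; the method names, the event name, and the availability member are assumptions pending Anton's pull request.]

    const video = document.querySelector('video');

    function onAvailabilityChange(event) {
      // Fired while availability is being observed for this element.
      console.log('Remote device available:', event.target.availability);
    }

    video.remote.addEventListener('availabilitychange', onAvailabilityChange);
    video.remote.observe();    // explicitly start background monitoring

    // Later, stop monitoring without relying on garbage collection.
    video.remote.unobserve();
    video.remote.removeEventListener('availabilitychange', onAvailabilityChange);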

Mark: Do we want the page to know the difference between "there are no devices" and "the user did not want to pick one"?

Anssi: So how granular we want to be in the API. Question is: what are the use cases for exposing the different cases of failure?

Mark: I think that's mostly a power-saving feature.

Anssi: Working out the differences through polling is also pretty bad.

Jer: 3 levels: 1) No one's monitoring. 2) Someone's interested but no need to show the list. 3) Filling up the selection list.
... This might be a UA call in the end.
... We want to be able to have these 3 cases.

Anssi: I don't see any negative.

Mark: The proposal is not to expose the fact that background monitoring is not available.

Mounir: The reason why we exposed that in the Presentation API is that a Web site will want to know whether there are devices available before showing the button.
... And there are devices where you cannot check for availability.
... We need to tell the apps that we cannot tell whether a device is available.

Jer: How do they use that information?

Mounir: At least they know, and can decide whether they display the picker or not.

Anssi: As a developer, I would know that when the user presses the button, there is some expectation that the list won't be empty.

Mounir: Exactly.

Anssi: How is Safari doing this in the default controls?

Jer: All the controls are implemented in terms of the underlying JavaScript API. The button will appear as soon as a device is detected when the controlling device connects to e.g. Wi-Fi; in other words, level 2.

Anssi: My personal view is that if, as a user, I click a button and get that spinner that ends up with an empty list, that's not perfect but that's acceptable.

Jer: We could have available, not available, or unknown as states. Would adding an unknown state be a solution?

Mounir: I guess the unknown state is a problem because on some platforms, it will always be unknown.
... We probably need to tell the Web page whether we know the state is not going to change.

Jer: OK, so there's a seeking state as well.

Mounir: Also, Android does not recommend doing any background monitoring in some situations, e.g. low battery.

Anton: Another use case comes to mind where you can pair your device without wireless networks.

Anssi: I'm not sure whether we can settle on the unknown state, but we have some direction at least.

Mounir: We can do something that is similar to IndexedDB Observer, where there is an isAlive method that returns a boolean.

Anton: We could add this to the availability object.

<anssik> https://github.com/dmurph/indexed-db-observers/blob/gh-pages/EXPLAINER.md

Mark: I think we have consensus on the Observe/Unobserve pattern. Question is whether we want to add features for devices that do not support background monitoring.

Francois: Does this discussion affect the Presentation API as well?

Mark: I would have to think about similarity of use cases, but it might be a good idea to raise an issue.

Anssi: That's use case based.

Mark: the bar is higher for the Presentation API, because the spec is mostly stable now.

Anssi: I agree.

onstatechange vs. onconnect, onconnecting, etc. (#36)

-> Issue #36

Mark: The proposal is merely to break up the "statechange" handlers into separate events to have consistency with other specs.
... The other question was whether we need "connecting".

Mounir: I think we need "connecting", yes.

Anton: We felt that it would be useful to return this state, yes. The connection is established already so you can send commands.

Jer: Does this duplicate the functionality of the Promise?

Anton: Yes.
... You would only get the event for cases where the user did not use default controls.

PROPOSED RESOLUTION: break up the statechange into 3 different events as described in: https://github.com/w3c/remote-playback/issues/36#issue-156024443
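
[Illustrative sketch of the split proposed above; the three event names are assumptions based on the discussion and the issue, and may change in the spec edit.]

    const remote = document.querySelector('video').remote;

    remote.addEventListener('connecting', () => {
      console.log('Device picked; connection being established.');
    });
    remote.addEventListener('connect', () => {
      console.log('Connected; media commands now drive the remote player.');
    });
    remote.addEventListener('disconnect', () => {
      console.log('Remote playback ended; back to the local player.');
    });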

Allow websites to stop the remote playback (#4)

-> Issue #4

Jer: I've never got a request from people to add a "stop" method.
... It also links to whether the disableRemotePlayback attribute is live or just evaluated at start time. In our implementation, it's live but I don't have any metric on whether it gets used that way.
... I can't imagine a good use case that requires a stop method. At least in our case, we'll always have local playback in the list of devices to choose from.

Mounir: In that case, we could rename the "connect" method to something like "selectDevice", since "connect" suggests there should be a way to disconnect.
... My worry is we might be forcing the UI onto all implementations.

Jer: It's always easier to add a stop method afterwards.
... Would a Web site be able to call "stop" whenever it wants? I don't know, I think the user should always be in charge.

Anssi: If user gesture is required to start, it seems logical to require user gesture to stop it.
... The conclusion seems to me that we don't want a "stop" method at this stage and can revisit that if there are compelling use cases.

Anton: That is fine with me.

Jonas: I feel I don't have enough experience with how the user will interact with the feature.

SC: Right now, we don't have an explicit plan to implement this API yet.

Jer: If every UA is going to have a "stop" button in the picker, then having a method called "connect" is a bit weird, indeed.

PROPOSED RESOLUTION: For issue #4, no "stop" method, add guidance that UA should provide a way to disconnect, and rename "connect" method into something like "showDevicePicker"
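
[Illustrative sketch of the renamed entry point; "showDevicePicker" is the placeholder name from the resolution above, and the final name may differ.]

    const video = document.querySelector('video');

    // Must be called from a user gesture; the UA's picker is expected to offer
    // a way back to local playback, so no separate stop() method is added.
    document.querySelector('#picker-button').onclick = () => {
      video.remote.showDevicePicker()
        .then(() => console.log('User made a selection in the picker'))
        .catch(err => console.log('Picker dismissed or failed:', err));
    };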

<anssik> coffee break, back 10:45am

Define the UA behavior when the disableRemotePlayback attribute is added during the remote playback (#6)

-> Issue #6

Anton: How should disableremoteplayback behave when it changes after playback starts?
... The simple way: if playback is happening, ignore the change.
... The other way is harder: make the attribute live. If it is set, we disconnect, stop remote playback, and reject pending Promises.
... In Safari it's live.

Jer: Not a use case driving that decision. We chose to do it live.

Jonas: Removing the attribute should set it back to false.

Jer: Don't think there's a use case. Fine with simplifying spec.

Edward: An example is the muted content attribute. Changing it later does not have an effect.

Anton: allowfullscreen is only respected when the iframe is created. Changing it afterwards has no effect.

Anssi: For consistency, we should follow the cowpath of allowfullscreen et al.

Jonas: If the attribute is removed, it should revert to default false (live).
... The UI should show up (or be removed) on insertion/removal.
... Live for enabling discovery, but not for shutting down playback.

Anton: How would availability behave? Would it affect the availability state?

Jer: Could make it a UA decision. Anton: If we remove it, re-enable discovery.

PROPOSED RESOLUTION: Use Anton's first proposal (no spec language to address this case). Add a non-normative note that existence of attribute is a hint for discovery.

PROPOSED RESOLUTION: Refine behavior of existing algorithms for the attribute, i.e. Promise rejections would reference it.
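
[Illustrative sketch of the resolved behavior, assuming the content attribute reflects to script in the usual way: its presence is honored as a hint, and no special spec language covers toggling it during remote playback.]

    const video = document.querySelector('video');

    // Opt this element out of remote playback; UAs treat presence as a hint
    // (e.g. to skip discovery and hide their built-in remoting button).
    video.setAttribute('disableremoteplayback', '');

    // Removing it later re-enables discovery as a hint, but per the resolution
    // the spec does not mandate tearing down an ongoing remote session.
    video.removeAttribute('disableremoteplayback');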

Allow the user agent to choose which media element source to play remotely (#7)

-> Issue #7

Anton: How does the user agent choose which source to play remotely? Does the page provide a hint, or does the UA choose the source?

Ideally, pass all sources to the remote device and let it choose.

Jer: YouTube could not remote an MSE URL. When they want to AirPlay, they would have to swap the source, or use a hidden element.

Jonas: Could have a remote source URL

Edward: Remote friendly attribute for src

Jer: If you want it to work naively, hand the entire set of source elements to the device and have it choose which to remote.

Edward: Way to flag elements that are particularly good for remoting.

Anssi: The only reason is if you knew something about the remote device.

Edward: There is a media attribute which could be used for this purpose.

Jonas: Does MSE override any source attributes?

Jer: Create a blob URL from a media source, assign it to source.

Mounir: We do want to support MSE. Anton: Can use mirroring or remoting.

Jonas: Should availability reflect the source list?

Edward: For power saving, can't always determine capabilities of individual devices.

Mounir: Spec does take source list into account. Implementation may vary because of codec reasons.

Jer: If you choose to play, you would get a network or format error if the chosen device isn't compatible.
... Similar to local playback.
... Do we need to spec source selection? Mounir: Not sure, ask Philip.

<scribe> ACTION: Investigate HTML source selection algorithm to decide if it is applicable. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action02]

(Assign to Philip because he's not here)

PROPOSED RESOLUTION: Anton to update spec to be clear that entire source list is considered for availability (if possible) and remote playback
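
[Illustrative sketch of the resolution above: the whole source list is considered for availability and remote playback where possible. The URLs and the MSE/blob combination are made up for illustration.]

    const video = document.createElement('video');

    // An MSE-backed candidate (may require mirroring/remoting on the UA side).
    const mseSource = document.createElement('source');
    mseSource.src = URL.createObjectURL(new MediaSource());

    // A plain progressive URL that a remote device could fetch directly.
    const fileSource = document.createElement('source');
    fileSource.src = 'https://example.com/movie.mp4';
    fileSource.type = 'video/mp4';

    video.append(mseSource, fileSource);
    document.body.appendChild(video);

    // The UA is expected to take both candidates into account when reporting
    // availability and when handing playback over to the chosen device.
    video.remote.getAvailability().then(a => console.log('Available:', a.value));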

Specify the transition between the local and remote playback when changing remote.state (#25)

-> Issue #25

Anton: When does the state need to be synchronized between remote playback and media element state?

Jer: Maybe we don't need to be explicit about this as long as the behavior of the element is defined.

Anton: What about connecting state? How should changes to the element take effect, do we need to buffer them?

Jer: HTML media is async. Can fire events later, after state is synchronized to remote playback.

Mounir: When connecting, what happens if you call connect() then play(). User experience will differ unless specified.

Francois: That may add unknown latency depending on the responsiveness of the device.

Jer: It would be best if we can have all the normal events fire without extra spec language. For example, "playing" would fire when the remote device had buffered enough data.

Mark: "connecting" adds latency for network sensitive events that would normally fire during playback.

Mounir: While you are connecting, remote player becomes active player.

Anton: We don't have these concepts in the spec.

(remote player vs. active player)

Anton: Websites won't change their experience, so we may end up with a broken player if the Promise rejects during connecting.

Mounir: Could say that during remote playback (connected), anything in HTML5 media applies to remote player.

Anton: In connecting state, act as if local playback is happening.

PROPOSED RESOLUTION: Anton to define local and remote player concepts in spec as source of behavior for media element. In connecting/disconnected state, local player is active. In connected state, remote player is active.

[Meta] Guidance for HTMLMediaElement, HTMLAudioElement, HTMLVideoElement behaviors during remoting (#41)

-> Issue #41

Mark: Question is if you are remoting, will there be any visible difference for apps?
... In some cases, devices may not support some of the media features, such as playbackRate, closed captioning, etc.
... How should things behave?
... I don't expect this to be solved today, but that will improve the quality of the spec over time.

Mounir: That's also what my proposal about "local player" and "remote player" is about.
... We should have a list of features that are mandated (such as play, pause) and others that don't need to be, such as fullscreen.
... Do you want to specify everything explicitly?

Mark: There should be a set of defined behaviors, a set of undefined behaviors, and a few things in between.

Mounir: What happens when you set the volume in iOS?

Jer: There is no volume attribute on iOS so you cannot set it.

Mounir: You can definitely imagine the UA connecting to a device without knowing all the capabilities of the device.

Anssi: I think we're moving to this new territory where we have UAs with different capabilities.

Mark: Here we have one element that can be in two different states.

Anssi: We cannot anticipate all the features that media elements will have in the future.

Jer: Would it be enough that UA should make best effort to reflect the remote playback locally?

Mounir: I think that would be fine.

Anton: I think listing all the features in the spec would be a bit of a stretch.

Mark: The more we leave it to implementations, the greater the risk that we break existing Web sites that expect the "local" behavior.

Francois: Right, e.g. sync use cases with TextTracks that would expect currentTime to be accurate.

Anton: We could extend the requirements doc to list the set of minimal features that we would expect devices to support.

Anssi: Guidance or normative assertions?

Anton: Tests for play/pause and seek would be good.

Anssi: You also mentioned setting playbackRate, fast seek.

Mounir: For example, playbackRate was not supported on Android.
... Maybe we can expose a capabilities object if we detect issues with implementations.
... Over-engineering the issue right now is probably not a good idea.

Jer: You could specify the list of capabilities as in getUserMedia to filter the list of remote devices displayed to the user in the picker.
... I don't think we should do it now.

Mounir: Yes, I would expect most sites to just require play/pause and seek.

PROPOSED RESOLUTION: Extend the requirements doc as a start, best effort for UAs to reflect remote state locally otherwise.

Define the interaction with Media Session (#10)

-> Issue #10

Anton: For media sessions, do we want media element to stop playback of the other one.
... Do we want any behaviors to be excluded?

Mounir: Remote playback should have its own MediaSession. When element leaves, it leaves its session.

Anton: Two independent remote playbacks should be possible.

Edward: Global controls work the same way. Can't play locally and remotely
... Could wire up hardware controls to Airplay from desktop.

Mounir: On Android if you Cast to two devices, and locally, they are independent.

Jonas: If you create a session that is remote on one page, and play some content on another page, the first page should not be ducked.
... Hard to do while Media Session is not yet well defined.

Mounir: Media Session got shaved aggressively. Jer: Intern implemented a subset of it behind a compile flag.

Jonas: Not moving forward with implementation at this time given recent state of the spec.

Anssi: A bit premature to think of integration.

<scribe> ACTION: Mounir to update issue with comments about remote and local don't fight over output, and suggestion that remote playback can access keys. No spec changes at this juncture. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action03]

Publication as a FPWD (#12)

-> Issue #12

Anssi: Requirements are met by the changes made up to this juncture.

Edward: Collect edits from this meeting.

Anton: Can publish a FPWD in about 4 weeks.

Anssi: As early as possible, send a 2(?)-week CFC. Target end of June.
... This triggers a call for exclusions, so the scope is important to get right.

<scribe> ACTION: Anssi to send a CFC once edits are done. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action04]

<scribe> ACTION: Anton to collect edits and coordinate timing with Anssi. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action05]

<anssik> https://github.com/w3c/remote-playback/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+-label%3AF2F

Presentation API

Anssi: Goal is to go through issues that block publication as CR.
... The spec has had many iterations already. Next step is Candidate Recommendation.
... If all goes well here, we should be able to publish a CR sometime in June.
... By publishing a CR, we say that all features are in, technical issues are closed, and we signal people that they can look at the spec and start implementations (already started in practice).
... We have processed a long list of issues prior to the F2F
... Summary of recent changes?

Francois: The first one is the horizontal review with the accessibility group. The agreement was not to change anything in the spec.

Mark: Possibility to use Stream interfaces. Streams are still at an early phase today. We discussed possible changes, but we're not going to do it today.
... The spec now recommends the controlling UA to pass on the locale settings to the receiver UA so that it may use these settings to fetch and render content
... Fourth issue was around providing a congestion control mechanism (bufferedAmount property). The conclusion was that without having a concrete messaging protocol, it would be hard to specify

Francois: And no one really asked us to add it at this point.

Mark: The spec now recommends using the UUID algorithm to generate the presentation identifier, which is passed from the controller to the receiver side.
... When you call getAvailability more than once, you now get the same object back.
... Last one was about setting the default value of the closed event.

Anssi: Thanks. That leaves us with four main issues for the F2F.

Is a new Permission type required for presentation display availability (#255)

-> Issue #255

Mark: The conclusion from Mounir is that a new permission type is not required.

Mounir: Exactly. Note I'm an editor of the Permissions spec.

Jonas: The one thing that people want with the Permissions API is to tell whether there will be a prompt.
... If I call start, will there always be a prompt?

Mark: It cannot be 100% guaranteed.
... There may be cases where the UA knows that the user gave permission to use a remote device, it may skip the permission prompt.
... No one has done that though.

Mounir: It's a selection list.

Jonas: As long as no one has implemented it, then we don't need to worry about that.

PROPOSED RESOLUTION: re. #255, no new permission type required

<scribe> ACTION: Mark to close issue #255 with summary of discussions. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action06]

User Data Controls in Web Browsers guidelines (#275)

-> Issue #275

Mark: Travis gave us a report from TAG related to the Privacy mode spec.

Francois: [going through review]
... By having normative prose in the spec, we're forcing a definition of privacy mode onto implementers and we are not future-proof.

Mark: To be clear, this is not about defining a privacy mode, but an interoperability issue.

Francois: Can we say in our spec: "clear site data"
... and have that be the normative step
... including IndexedDB, Cookies, etc., so that we stay open to future extensions of what constitutes site data

Jonas: Private browsing is an area that is evolving; there is no interest in writing a spec for it

Mounir: Can we say "empty all site data", and reference the TAG document that defines what site data is

Francois: The steps already in the spec omit Service Workers and AppCache

Mark: I agree this needs to be future-proof

Francois: We can add informative guidance referencing the TAG document

PROPOSED RESOLUTION: complete the clear site data steps in the spec (add AppCache, Service Workers) and add non-normative note referencing TAG document for more complete list, add note clarifying this is not about defining private mode

<scribe> ACTION: tidoust to inform TAG when the PR re clear site data lands and submit his feedback to TAG [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action07]

Interoperability and presentation request URL fallback mechanism (#153)

-> Issue #153

Mark: I can give an implementation update on interoperability.
... Google Chrome supports Cast devices and DIAL devices as secondary displays.
... as well as other device types that are specific to Google.
... We don't currently support any truly generic display.
... We intend to support wired secondary displays as well as potentially Bluetooth or other wireless devices, but there is no concrete timeline.

SC: In Mozilla, we try to aggregate the protocols for the discovery and device communication. It's only a draft spec for the time being.

Mark: That's for the open protocol discussion of tomorrow, right?

SC: Right.

Mark: Any proprietary support you can talk about?

SC: For 1UA, we try to support HDMI.
... For 2UA, we have one proprietary implementation for our Firefox-only devices but I'm also co-developing an open protocol.
... For 2UA, since we are casting the entire Web page to the device, it does not really specify what type of video you can play.

Mark: For Cast devices, we only support 2UA mode, and only for registered applications.
... The only other 2UA experience we support is YouTube through DIAL.
... We also have plans for 1UA support through WebRTC on Cast, but some plumbing is still required.

Jonas: For 2UA mode, is that an HTML application that gets installed on your Chromecast device?

Mark: You have to register the app, which gives you an ID.
... Through the Presentation API, you just pass the URL, and the ID needs to be added by the developer.

Jonas: If I add my own ID, does it ignore the URL?

Mark: Currently, yes.

Jonas: That seems problematic. If I register a Cast application and get abc123 and call the Presentation API with https://gmail.com/blah#abc123, you'll just be fine and ignore the URL.

Mark: Yes.

Jonas: Generally speaking, if people can pass anything, they will pass anything. So when you start playing with 1UA mode and start to take the URL into account, you'll start to have strange behaviors.
... It seems safer to pass chromecast://[id] URL.
... It seems wrong to pass an ID within a URL that is not designed for that. It may break interoperability.

Mark: Practically speaking, we don't expect the Presentation API to be used outside of the Cast SDK.

Francois: What worries me is that, as a developer, it is as if I have two parameters with one being used by one implementation and another one being used by another implementation.
... So I could have a typo in the URL and that would still work in the Cast case but not in other cases.

Jonas: Using a URL is the right thing to do. But using an HTTP URL seems wrong.

Mark: This is forward-looking for the 1UA case.
... We could switch to "cast://" URL easily.

Anssi: The practical issue that will come up next is with regards to testing.

Mounir: When I suggested using an array of URLs, the feedback was that the developer could simply re-implement the behaviour.
Mark: Both Cast and DIAL have proprietary extensions.
Mark: both Cast and DIAL have proprietary extensions.

Anssi: Would it be better to massage that as a specific argument?

SC: I don't think we should add too many things on the URL, because it would then need to be decoded before it can be used.

Anssi: I think this is a pretty critical issue to solve.

Chris: Is this the same issue as Matt Hammond raised for compatibility with HbbTV?

Francois: Yes, although in the case of HbbTV, you need additional parameters on top of the URL. In the case of Cast, the app ID replaces the URL.

SC: But you will expect other receivers to load content for the HTTP URL, not to resolve the app ID, right?

Mark: Right, we won't require such a resolution.

[Discussion on using specific URL for specific implementations, which would require developers to try alternatives]

Mark: If you have both a Cast device and a DIAL device, there would be no way to offer both devices as choices in the selection list

Mounir: In that case, you need PresentationRequest to take an array of URLs.

Francois: Other implementers may also require a fragment identifier for their own registration mechanism. The developer would have no way to combine the URLs in the same call if they cannot pass an array.

Mark: The concern about the array is what the URLs mean. Is the order significant?

Jonas: Yes, the goal would be to display a union of the devices that can support these URLs.

Anssi: How would the algorithm work? The order would be significant.

Mounir: The UI would show the list of all compatible devices. When the user picks a device, the UA should pick the first URL in the list that is compatible with that device.
... That may not be complicated to specify.

Mark: The one argument that I agree with is that it breaks interoperability with fragment identifiers.
... I need to check internally, I support the switch to an array of URLs if that's the case.

[Further discussion between Mark and Mounir about Cast examples that involve Firefox for Android]

Mark: Generic question is on ways to support proprietary arguments.

Jonas: The situation is similar to what happens to codec, it's a reality of life.
... I would definitely imagine that Netflix might want to use a DIAL app if it's there and then fall back to an HTML5 app if not.

Anssi: Should we park this issue for the time being and get back to it tomorrow morning?
... Two proposals: 1) do nothing, 2) have an array of URLs

[group agrees to get back to issue on day 2]
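
[Illustrative sketch of proposal 2) above (an array of URLs), to be revisited on day 2; the constructor signature reflects the proposal rather than the current spec, and the cast: URL and app ID are made-up placeholders.]

    const request = new PresentationRequest([
      'https://example.com/receiver.html',  // generic receiver page for UAs that load URLs
      'cast:deadbeef'                       // proprietary fallback a Cast-aware UA could resolve
    ]);

    // The picker would show the union of devices compatible with any listed URL;
    // on selection, the UA uses the first URL in the list that the device supports.
    request.getAvailability().then(availability => {
      console.log('At least one URL presentable somewhere:', availability.value);
    });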

Define behavior of Window and Fullscreen APIs in the presentation browsing context (#99)

-> Issue #99

Mark: Several related questions raised in this issue.
... Some of them are much more CSS-oriented
... which is not my area of expertise.
... First issue is for presentations. Do we want to block the use of certain APIs that require user interaction?
... Prompt, confirmation, print, etc.
... We could say nothing, prevent them, change behavior

Jonas: I think no one uses that in real apps. We did a great job at making prompt ugly ;)
... I don't think it's worth spending time on this.
... I do think that it would be nice to support input controls on a TV.
... It would be good to have a way to tell capabilities first, and then a way to get remote control inputs.
... That's a bigger task.

Mark: Generally, in the long term, we're interested in how to design applications that can accept inputs and send outputs from/to multiple devices
... Here, we'd just want to ensure developers do not expect to get input from users.

Anssi: Most of these things are historical things.

Mounir: Yes, and some of them are ignored already in some situations (window resizing, open, etc.)
... We should add a note that alerts UAs that APIs requiring user interaction will not work and should be treated appropriately.

Anssi: Informative guidance.
... There was also the Inception case.

Mark: I don't think it should be forbidden, it's just one of these examples where user input is likely going to be required

Jer: Why should the Fullscreen API be supported?

Mounir: It's not a requirement. It may just not work.

PROPOSED RESOLUTION: For methods that would require user input, add informative guidance to UAs about the need to handle them carefully.

Mark: About ::fullscreen and ::backdrop

<Louay> My Slides for Testing session: https://drive.google.com/file/d/0B5Pu35LKfJOIMXZtOVdkTlVmOVk/view

Ed: I don't think we need to do anything there

Mark: About requirements for media queries

Jonas: It would be nice to be able to style things differently if we ran on the TV.

Ed: The problem is that the types are exclusive.
... A TV is a screen.

Anssi: Historical context is that "handheld" evolved over the years

Mark: I think it's not about TV vs. non-TV, but more about interactive vs. non-interactive.

Jonas: My argument is that you probably need both.
... It's a little fluffy what the definition is.
... We have a spectrum today from laptop to smartphone, and the mobile query is extremely popular.

Ed: I think the argument that the distance from the screen might make you want to use a larger font has the counter-argument that CSS pixels are optical, so that's totally fine.
... Etc.
... What is the case that makes the TV different?

Jonas: What is the case where you want things different on mobiles?
... A perfect developer would want to detect the things that impact them. In practice, it's much more realistic for them to target "mobile".
... Many people parse the User-Agent string to detect "mobile" and use that accordingly.

Ed: In a lot of cases, we've mitigated the need by adding additional media query properties

Mounir: What do you do with TV in Firefox?

SC: there is "TV" in the User-Agent string.

Jonas: Having a way to detect "TV", like the flag in the User-Agent string, through media queries would be great

Francois: Isn't there a proposal to have that in the latest version of CSS Media Queries with scripted criteria?
... Probably not implemented anywhere.

Anssi: So it would be in the CSS spec in any case. We can resolve this issue with a note that if there is a good use case, people should work with the CSS WG.

Jonas: I think we should add a recommendation that "tv" should be in the User-Agent string

Anssi: Would it be a problem to fiddle with the Chromecast user-agent string?

Mounir: It would be, because you can present to things that are not TV.

Jonas: True, but I don't know where else we could put that recommendation other than in the Presentation API.

PROPOSED RESOLUTION: No change to the Presentation API, feature would be addressed by the CSS WG if a strong use case motivates it. Open another issue about recommending to add "TV" to the User-Agent string.
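
[Illustrative sketch of the interim approach discussed above: with no CSS media feature for this yet, a page can only sniff "TV" from the User-Agent string (as Firefox's TV builds expose); this is a heuristic, not a spec mechanism.]

    // Toggle a TV-oriented stylesheet class based on a UA-string heuristic.
    const looksLikeTV = /\bTV\b/.test(navigator.userAgent);
    document.documentElement.classList.toggle('tv-layout', looksLikeTV);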

<Louay> Please use this link https://drive.google.com/file/d/0B5Pu35LKfJOIbGNXeTNOaUIteHM/view

<hyojin> WAVE Project: https://standards.cta.tech/kwspub/wave/

<hyojin> I think it could be referenced for interoperability of the Remote Playback API.

Presentation API testing

Anssi: We'd like an update on where we are with testing, potential issues identified, and next steps

Louay: I've prepared a few slides, the link is in IRC
... 2 weeks ago we published the first test report for the Presentation API
... covering only IDL tests. We'll publish a second report this week
... We plan to publish a 3rd test report by the end of June. This could include messaging functionality
... [going through the slides]
... The working mode slide shows how to set up the test environment and contribute
... When we submit tests, we need them to be reviewed, and also run tests in different browsers
... In the last test report we tested on Google Cast devices, not sure about the Mozilla implementation
... It would be good to involve implementers in the testing activity to get more device coverage
... In the first report we had only IDL tests - in two parts, one for the Controlling UA, other for the Receiving UA
... We should also differentiate the 1-UA from the 2-UA cases
... [WPT slide] The green items are published, those in yellow are under review
... We need to do manual tests for cases where some user intervention is required - these are the files with -manual.html in the name

Mark: Why does the reconnect have to be a manual test?

Louay: Not sure, may be Zhiqiang can comment
... You need to first start a presentation, store the id, and then reconnect - it may be manual due to starting the presentation

Zhiqiang: The test logic can be automatic, I think

Louay: I hope to have a final version of these tests this week or next, and then we'll publish a new version of the test report

Anssi: Mark, do you have any manual tests in your suite?

Mounir: There are some for things that need gestures
... I'll get in touch with Louay and Zhiqiang

<scribe> ACTION: Mounir to contact Louay and Zhiqiang to add to the test suite [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action08]

Louay: [problems and open questions]
... If we need to run tests using the web platform test runner, this raises errors on receiver devices
... The test runner opens a window for each test file, and this won't work on Chromecast and most receiver devices, which have only one window
... We could use the Controlling UA and launch the presentation pages we want to test using the Presentation API

Anssi: We could add a feature to the test runner, they accept pull requests

Louay: We could use iframes in one page, but I'm not sure about the API implementation inside iframes, there may be differences from the main window
... For the IDL tests, I used a native application to start these tests on Chromecast
... Maybe we can use the Presentation API communication channel to report the test results back to the Controlling UA
... and run the test runner in the Controlling UA
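
[Illustrative sketch of Louay's idea above: run the harness in the Controlling UA and have the receiving page send its results back over the presentation connection. runTests(), reportResults() and the receiver URL are made-up placeholders.]

    // Receiving UA: the test page running on the remote display.
    navigator.presentation.receiver.connectionList.then(list => {
      list.connections.forEach(connection => {
        runTests().then(results => connection.send(JSON.stringify(results)));
      });
    });

    // Controlling UA: launches the receiver tests and collects the results.
    const request = new PresentationRequest('receiver-tests.html');
    request.start().then(connection => {
      connection.onmessage = event => reportResults(JSON.parse(event.data));
    });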

Anssi: Mark and Mounir, how is your test set up?

Mounir: We don't have tests for receiving mode

Zhiqiang: In other scenarios we have used
... one iframe and communicated with another
... I've used the WPT test runner to run automated tests, and do the others manually, then combine the results

Anssi: If that's doable, that's the right way forwards I guess

Louay: The next open question needs feedback from the group - should we have different test reports for Controlling UAs and Receiving UAs?
... For the IDL tests, as they are independent of each other, we can split the tests and have two test reports
... For other tests, because the functions rely on both UAs, maybe we can collect the test results into one test report
... We need some identifiers for different combinations of Controlling and Receiving UAs, for devices such as Chromecast and Miracast etc
... We'll know more when we do the next test report

Francois: We have to specify exit criteria to get the spec to CR status
... As part of this we need to provide the test reports to show at least 2 conformance passes
... It would be better to have separate implementation reports
... These could be manually generated from test reports from separate platforms

Anssi: How many different back-ends should we require - Chromecast and Miracast were mentioned, where do we draw the line?

Francois: We specify this in our exit criteria, we want two implementations of each feature
... The exit criteria will be reviewed by the Director when we go to CR
... I can start to draft something based on other recent specs, eg, getUserMedia
... Our case is a bit more complex as we have two conformance classes
... and two modes of operation
... The goal of the exit criteria is to determine interoperability, so we need enough coverage to show that

Mark: The only representative implementations now are 2-UA

Francois: I think the exit criteria should cover both 1-UA and 2-UA

Mark: To demonstrate conformance with the Receiving UA, does this need to be in 1-UA and 2-UA? Do they have to be two different UAs?
... Is the goal of the exit criteria to show conformance of the UA in both 1-UA and 2-UA modes?

Francois: Yes
... The implementation report needs to show that each feature is implemented in at least 2 UAs

Mark: For 1-UA mode, would Chrome and Firefox be required to produce distinct implementations of the Receiving UA?

Anssi: If we have two browsers using Blink, these often don't qualify as two interoperable implementations
... We may not have specific language in the Charter to define the implementations
... [describes charter]
... There's some flexibility for us to decide

Mark: For 1-UA, Chrome and/or Firefox could support wired displays
... For 2-UA mode we have a couple of options, but to open this up to other browser vendors would require the open protocol

Francois: We can have columns with Chromecast and Firefox OS TV as receivers

Mark: Is each vendor required to implement both modes?

Francois: No, that's not a requirement

Anssi: We can always refine the test suite and add more implementation data, adding more columns for different devices as senders and receivers
... It could be even more complex, with a matrix of senders and receivers
... Some vendors could only implement the sender, others only the receiver, this is one of the goals of the open protocol work

Francois: I propose to draft exit criteria

<scribe> ACTION: Francois to craft exit criteria for PR status [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action09]

Louay: To run some presentations on Google Cast we need to register in the Cast console, then use an ID as a hash parameter in the URL
... [describes test] https://github.com/w3c/web-platform-tests/pull/3062/files
... [discussion of Google Cast app ID and client ID values]

Zhiqiang: We should also use the relative URL in our test suite

Louay: Are relative URLs supported in Google Cast?

Mark: I believe so, but you'd have to try it

Louay: Do we need to include some Cast receiver library in the receiver page?

Mark: I believe you need to include the SDK and initialise it

Louay: Is there a specific name used for the communication channel?

Mark: You can make up a name and send it across; I can share some docs on how to do that

Louay: Another question for Mozilla: Is there any way to run these tests on Firefox, what receiver devices can we use?

SC: The receiver part is only on Firefox TV, so not available right now
... Our plan is to ship the 1-UA mode in Firefox for Android, then in September open up the 2-UA mode for Firefox for Android and Firefox TV

Louay: We need at least one implementation to make sure our tests work
... We're currently working on testing in Chrome
... I may need more information when we look at messaging
... For the current features: availability, connect, launch, we're fine testing in Chrome

Anssi: Do the IDL tests currently match the latest editor's draft?

Louay: Our last report wasn't on the latest draft, but I'll take another snapshot for the next test report
... Extracting the IDL from the spec is automated

Anssi: What kind of test suite is needed for CR?

Francois: We need a plan, but there's no requirement for a preliminary test report

Mark: Do you run the tests on both Chrome desktop and Chrome Android?

Louay: Yes
... "CA50" indicates Chrome Android and "CD50" Chrome desktop

Anssi: Maybe add a legend to expand these names

Louay: The WPT has a readme file in each folder, where we describe the names

Anssi: Thanks for all the work you've done
... It looks like we have what we need for PR
... We should keep up the speed and momentum

Louay: We'd like to have more contributors for testing, I currently have two students working on it
... available until the end of July

Anssi: Hyojin, could the WAVE project contribute tests?

Hyojin: I'll check with the person in charge and contact Louay

Zhiqiang: I have another person joining me to contribute tests

Anssi: So there are 5 people now working on testing

Louay: I'm not sure about the dates to target

Anssi: Any other topics for testing?

<scribe> ACTION: Louay to verify the URL issue is fixed in Chrome Canary v53 [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action10]

<scribe> ACTION: Mark to send Louay documentation on how to send messages to receiver apps [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action11]

<scribe> ACTION: Everyone review the tests [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action12]

Anssi: Any other topics?

Remaining issues

Francois: There are still some open issues in GitHub to go through, looking at those tagged as Enhancement
... There are 5 issues, four of which are P3
... Are we OK to address these in a future spec version?

Anssi: I suggest changing the Enhancement tag to V2

Francois: What needs to be done for #242?

Mark: It's not a spec issue, I took an action to do this a while ago

Francois: #283 error handling

Mark: There's detail in step 5, can be put into a pull request

<scribe> ACTION: schien to send a pull request to define error handling on failure to establish presentation connection (#283) [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action13]

Francois: I think #293 has been done?

Mark: It's still open from the previous pull request

<scribe> ACTION: Mounir to resolve #293 [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action14]

Francois: #295 - references - I'm not sure if we can reference the WHATWG spec

Mounir: The Remote Playback API currently only references WHATWG

Francois: That's fine for now
... It's only when we get to CR and PR that we need stable references
... #299 Instead of patching HTML in our spec, we should propose a change to the HTML spec

[ -adjourned- ]

Summary of Action Items

[NEW] ACTION: Anssi to send a CFC once edits are done. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action04]
[NEW] ACTION: Anton to collect edits and coordinate timing with Anssi. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action05]
[NEW] ACTION: Anton to craft a PR to use observe/unobserve pattern for availability [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action01]
[NEW] ACTION: Investigate HTML source selection algorithm to decide if it is applicable. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action02]
[NEW] ACTION: Mark to close issue #255 with summary of discussions. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action06]
[NEW] ACTION: Mounir to contact Louay and Zhiqiang to add to the test suite [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action08]
[NEW] ACTION: Mounir to update issue with comments about remote and local don't fight over output, and suggestion that remote playback can access keys. No spec changes at this juncture. [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action03]
[NEW] ACTION: tidoust to inform TAG when the PR re clear site data lands and submit his feedback to TAG [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action07]
[NEW] ACTION: Everyone review the tests [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action12]
[NEW] ACTION: Francois to craft exit criteria for PR status [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action09]
[NEW] ACTION: Louay to verify the URL issue is fixed in Chrome Canary v53 [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action10]
[NEW] ACTION: Mark to send Louay documentation on how to send messages to receiver apps [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action11]
[NEW] ACTION: Mounir to resolve #293 [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action14]
[NEW] ACTION: schien to send a pull request to define error handling on failure to establish presentation connection (#283) [recorded in http://www.w3.org/2016/05/24-webscreens-minutes.html#action13]
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.147 (CVS log)
$Date: 2016/06/01 07:42:00 $