W3C

Second Screen WG TPAC F2F - Day 1/2

06 November 2017

Meeting Minutes

Welcome

<RachelNabors> Howdy.

anssik: Welcome to the Second Screen WG and CG meeting
… WG is working on two APIs, Presentation API and Remote Playback API
… CG is working on the protocols that underpin these APIs

Introductions

mfoltzgoogle: editor of Presentation API, work at Google and on Chrome

_tomoyuki: Tomoyuki from KDDI, a Japanese telco

schien: Shih-Chiang from Mozilla Taiwan, previously Firefox OS, now desktop Firefox browser, working on Second Screen tech, APIs, and protocols

anssik: Anssi from Intel, Chairing this WG and CG

RachelNabors: Rachel from Microsoft, working on EdgeHTML rendering engine

Geun_hyung: Geunhyung from Dong-Eui University, interested in Second Screen applications as an academic research topic

Louay: Louay from Fraunhofer doing applied research, Presentation API and Open Screen Protocol focus, working with 90 students on experiments

mounir: Mounir, software engineer at Google, working on the Presentation API in Chrome, and on the Remote Playback API

Remote Playback API

Remote Playback API F2F issues

Remote Playback API test automation #92

mounir: we should probably have a test framework to override two values: device capabilities and device connection state

mfoltzgoogle: this area of testing is pretty new, should do this with the simpler API; perhaps Remote Playback API is simpler than Presentation API and could be an easier start

mounir: w-p-t is the main dependency, we do not have simulation of API states in w-p-t
… Philipp is a good contact for w-p-t

anssik: we have invited Philipp to this meeting later today, will touch on this topic then

anssik: we would need something similar to the Presentation Testing API for the Remote Playback API too
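
[Illustration (not from the meeting): the two values mounir mentions map onto today's Remote Playback API surface roughly as follows; castButton is a placeholder element.]

    const video = document.querySelector('video');

    // Device availability: is a compatible remote playback device reachable?
    video.remote.watchAvailability(available => {
      castButton.disabled = !available;
    }).catch(() => {
      // The UA may decline continuous monitoring; a test harness would need to
      // exercise this path as well.
    });

    // Device connection state: "disconnected", "connecting" or "connected".
    video.remote.addEventListener('connect', () => {
      console.log(video.remote.state); // "connected"
    });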

Remote playback device capabilities not always a subset of local playback device's

anssik: There are two classes of interoperability issues discussed in #41
… a feature X is not working locally, but can work remotely
… a feature X is not working remotely, but can work locally
… This issue is about the former. The issue #41 is (mainly) about the latter.

Resolved: close issue #88 since no use cases identified supporting the feature

Should rendering behavior of the remote-d media element be specified? #46

mfoltzgoogle: two distinct issues: what the local user sees when the video is remoted, and what is shown in place of the video element when the playback is stopped
… think it should not render in two places at the same time
… second issue: if there are attributes that reflect the render state, how to reflect that at the remote end

mounir: propose to scope this issue to UI implications

mfoltzgoogle: keep this informative in the spec
… then issue #41 is about normative language

Action: mounir to add informative text to the spec that addresses issue #46

[Meta] Guidance for HTMLMediaElement, HTMLAudioElement, HTMLVideoElement behaviors during remoting #41

mounir: no new information since last F2F
… from our own implementation we have some new ideas
… every single attribute SHOULD be exposed, the spec cannot have any MUSTs

https://w3c.github.io/remote-playback/#media-commands-and-media-playback-state

anssik: anyone interested in improving the media commands and media playback state section?

Action: mfoltzgoogle to add normative language to the spec around local playback state to address issue #41

mfoltzgoogle: the spec should also address the local playback commands
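
[Illustration (not from the meeting): a sketch of the local-element behavior this section and issue #41 try to pin down; prompt() must run in a user gesture, and castButton is a placeholder.]

    const video = document.querySelector('video');

    castButton.onclick = async () => {
      await video.remote.prompt();  // user picks a remote playback device
      video.play();                 // local command expected to act on the remote device
      video.currentTime = 30;       // seek expected to be forwarded as well
    };

    video.addEventListener('timeupdate', () => {
      // Attributes such as currentTime should reflect the remote playback state.
      console.log(video.currentTime);
    });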

<Stephen> Although I've been working in this area for a bit, this is my first TPAC. I work closely with Mark Foltz, on the product side of Chrome, to implement our second screen support in the browser.

<Louay> how to scribe https://www.w3.org/2008/04/scribe.html

Charter 2018

<anssik> Second Screen Working Group Charter extended

<anssik> Group-level advancements since TPAC 2016 in Lisbon

<anssik> 1) Revised CR of Presentation API

<anssik> Presentation API publication history

<anssik> Changelog: https://www.w3.org/TR/2017/CR-presentation-api-20170601/#changes-since-14-july-2016

anssik: Well-documented changelog of what happened in a year

anssik: Improved the algorithms, minimal API changes

anssik: Restricted API to secure contexts

<anssik> 2) CR of Remote Playback API

<anssik> Remote Playback API publication history

anssik: Fast track. May do a revised CR if there are substantial changes.

<anssik> 3) Open Screen Protocol work started

anssik: Editorship handed off from Anton to Mounir.

<anssik> 4) Charter extended by EOY

anssik: Open Screen addresses the need for end-to-end interop. Interest from TV groups.

<anssik> Initial scope proposal:

anssik: 1. Taking existing standards to REC

anssik: 2. Finalizing test suites

anssik: 3. New level 2 features with well defined scope

anssik: 4. Open Screen Protocol

anssik: seeking feedback in particular on Open Screen Protocol relationship

mfoltzgoogle: need feedback from implementers of the web platform and networking protocol experts

<anssik> ... we're in the prototyping phase with the protocols now, would prefer keeping them in the CG going into 2018

<anssik> ... CG has platform implementers providing requirements

<anssik> ... propose having a CG meeting mid-2018 to figure out our path to IETF

anssik: may make sense to IETF-ify the documents in preparation for the expected IETF submission

schien: good idea to use the CG to experiment with the protocols; once there is a more consolidated idea of what to propose to IETF, then migrate

mfoltzgoogle: seconding schien, need to scope down what we have currently, need to come to consensus based on requirements or experiments before advancing to formal standardization

Action: anssik to create a GH repo for the new Second Screen Working Group Charter

Demos

<mfoltzgoogle> First demo is Photowall: https://github.com/GoogleChrome/presentation-api-samples/tree/master/photowall

mfoltzgoogle: the most complete demo app we have
… last year we showed an early 1-UA mode version
… we launched it in Chrome M58 first

[Mark showing a demo]

<mfoltzgoogle> Second demo: Media Remoting

<mfoltzgoogle> Sample URL: https://vimeo.com/239593389

<mfoltzgoogle> Launch tab mirroring to a Chromecast device, then fullscreen video

<mfoltzgoogle> Chrome flag: chrome://flags/#media-remoting

<mfoltzgoogle> Comlink demo: Chrome RPC library that wraps a Presentation Connection.

<mfoltzgoogle> https://github.com/GoogleChromeLabs/comlink/blob/master/docs/examples/presentation-api/index.html

<Louay> Fraunhofer Demo: 360° 24K Video with 4K Field of View on 5 Chromecasts: https://www.youtube.com/watch?v=EIxHlR7I8Ho

Presentation API v1

Presentation API

Revised CR 01 June 2017

Presentation API

Revised CR tracker

schien: 1-UA mode Firefox implementation status
… in Nightly and Aurora channels for Firefox on Android
… the 1-UA mode is restricted to Chromecast, which Mozilla sees as an issue; productizing 1-UA is a lower priority

https://www.w3.org/TR/2017/CR-presentation-api-20170601/#candidate-recommendation-exit-criteria

... a general resource issue for Presentation API

... work on the experimental 2-UA mode implementation has ended

mfoltzgoogle: question to schien, could the experimental 2-UA mode be resurrected at some later point in the future, considering Open Screen Protocol evolutions

schien: our implementation can be used as input for the WebRTC part of it, can share the results from the experiment with the Second Screen CG
… could try to adopt the experimental 2-UA implementation with the Open Screen Protocol

Presentation API v1 testing meta issue

_tomoyuki: web-platform-tests are ~complete; what's missing is the automation API

<Louay> https://tidoust.github.io/presentation-api-testcoverage/

https://www.w3.org/wiki/Second_Screen/Implementation_Status#Tests

anssik: any idea whether the failures in https://w3c.github.io/test-results/presentation-api/controlling-ua/all.html are test issues or implementation issues?

mfoltzgoogle: do w-p-t tests work with cast API?

<mfoltzgoogle> We don't return an answer to getAvailability until we know if devices are available.

Louay: yes

<mfoltzgoogle> The delays may result in a timeout.

<mfoltzgoogle> Also, we don't throw NotFoundError since we do discovery when start() is called
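
[Illustration (not from the meeting): the getAvailability() path being discussed; resolution is deferred until availability is known, which is where the w-p-t timeouts can come from.]

    const request = new PresentationRequest('receiver.html');

    request.getAvailability().then(availability => {
      console.log('display available:', availability.value);
      availability.onchange = () => console.log('now:', availability.value);
    }).catch(err => {
      // e.g. NotSupportedError if the UA cannot monitor availability continuously
      console.log(err.name);
    });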

anssik: looking at https://w3c.github.io/test-results/presentation-api/receiving-ua/all.html, Firefox does not implement the receiving UA

Action: mfoltzgoogle to investigate gaps in the test coverage for Chrome

Testing API

Presentation Testing API

<mfoltzgoogle> testdriver.js: http://web-platform-tests.org/writing-tests/testdriver.html

foolip: let's have a look at one of these tests

<foolip> https://github.com/w3c/web-platform-tests/pull/6897

<foolip> is the PR where this landed

<foolip> https://wpt.fyi/uievents/order-of-events/mouse-events/click-cancel.html

[foolip showing a test case for .click]

foolip: before there was a button that had to be clicked by a human, now there's a programmatic way to click a button
… the way this works is there's a WebDriver API, and this is the w-p-t integration for it
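
[Illustration (not from the meeting): the shape of such a test in a page that includes testharness.js and testdriver.js; the button id is a placeholder.]

    promise_test(async t => {
      const button = document.getElementById('target');   // <button id="target"> in the test page
      const clicked = new Promise(resolve =>
        button.addEventListener('click', resolve, { once: true }));
      await test_driver.click(button);  // routed through WebDriver by the wpt runner
      await clicked;
    }, 'test_driver.click() dispatches a user-initiated click');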

anssik: what is the implementation status of Web Driver API?

foolip: all major browsers implement it

anssik: what features do we need for Presentation API?

Presentation API Testing API

[foolip reviewing the Presentation API Testing API]

foolip: PresentationTest interface looks good
… Web Driver API is not the only option, WebUSB did something different

WebUSB Testing API

foolip: testing WebUSB is more complicated, Test Driver API would not have worked out there

foolip: can web devs use the Presentation API Testing API?

mfoltzgoogle: we probably will have an implementation in a fixed screen and can run test code in that window

foolip: how does that work now, firing synthetic events?

mfoltzgoogle: now it's more like manual tests
… you have integration testing in Chrome, relies on browser mocking

foolip: choosing between Testing API vs. Web Driver API
… pros for Web Driver API: not adding new API surface
… can take inspiration from Permissions

<foolip> https://github.com/w3c/permissions/pull/151 is doing a similar thing, but more complex

foolip: can there be multiple screens?

mfoltzgoogle: yes
… screens have a state associated with them, but useful to have multiple ones
… each screen has an ID

foolip: what does selected attribute mean?

mfoltzgoogle: the user is actually selecting the screen and not canceling it

foolip: what are the inputs and outputs of screen selection

mfoltzgoogle: select a screen or cancel

foolip: how to test if a user declines?

mfoltzgoogle: three outcomes: user selects, user cancels, user sits and waits

mfoltzgoogle: I wanted to use this Testing API for our existing w-p-t tests, rework them as needed

<foolip> https://github.com/w3c/web-platform-tests/issues/7424 is similar for getUserMedia

foolip: what if you postMessage to a fake screen?

mfoltzgoogle: we'd create a browser tab with a receiving browsing context that receives the message
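
[Illustration: an entirely hypothetical sketch of the kind of hook under discussion; setFakePresentationDisplays and the display fields are invented names, not spec'd anywhere.]

    promise_test(async t => {
      await setFakePresentationDisplays([                     // hypothetical testing hook
        { id: 'screen-1', name: 'Fake TV', selected: true },  // the "user selects" outcome
        { id: 'screen-2', name: 'Fake projector' }
      ]);
      const request = new PresentationRequest('receiver.html');
      const connection = await request.start();   // would resolve against the selected fake screen
      connection.send('hello');                   // a mock screen could echo this back for assertions
    }, 'start() resolves when a fake screen is selected');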

mfoltzgoogle: proposing Web Driver extensions, where to add them?

foolip: the path of least resistance is probably to add them to their own Presentation API Testing API spec

<foolip> Jon Kereliuk kereliuk@chromium.org is our ChromeDriver guy (for wpt matters)

<foolip> maybe https://w3c.github.io/webdriver/webdriver-spec.html#capabilities will help

foolip: is anyone running the manual tests?

mfoltzgoogle: infrequently, that's why they broke often

Action: mfoltzgoogle to refine Presentation API Testing API to be a WebDriver compatible spec

Action: mfoltzgoogle to file issue about whether "fake" screen is a "real" screen or can be a mock screen that responds to messages

<foolip> https://foolip.github.io/day-to-day/ is my toy

Presentation API v2

Presentation Testing API

mfoltzgoogle: the plan is to update the API to be based on Web Driver API
… once we have the API in the Web Driver spec, need review from the Web Driver team (in Chromium)
… make sure we can rewrite the existing tests to make use of the new API
… and see how to support both manual and automated tests using the same w-p-t tests

anssik: who would like to contribute to the Web Driver-ification of the current test cases?

[Louay and Tomoyuki volunteered]

Louay: how to test the receiving user agent, if we only have a Chromecast receiver? Do we need a fake sender API for testing?
… we now have separate controlling and receiving UA tests

mfoltzgoogle: if you have a Chromecast on the network, then if you simulate a user gesture, it is possible to launch a presentation on the Chromecast device
… if you have multiple Chromecasts you need an API to force the selection of the actual device

schien: isn't it still the case that we're not able to verify what happens on the receiver side?

mfoltzgoogle: isn't there still the stash functionality?

schien: previously we couldn't test message sending on the receiving side
… cannot get message back to the testing page

mfoltzgoogle: such tests can timeout and thus fail

Geun_Hyung: is the receiver only Chromecast?

mfoltzgoogle: should work correctly in 1-UA mode

Louay: Android implementation?

mfoltzgoogle: correct

Forced 1-UA mode for documents or frames #347

Remote Window API

mfoltzgoogle: 2-UA mode is good for playing video, get it rendered in full fidelity
… 1-UA is most broadly supported
… because we define the receiving browsing context to be separate, it's hard to share resources
… four use cases problematic for 1-UA mode

1) Third party embeds that require cookies

2) Offline or cannot access internet

3) WYSIWYG

4) Interaction, touch or gesture

... to address these issues, proposing a slightly different version of 1-UA mode

... rather, a window.open()-type API that allows a same-origin window to be created

Sample code (Presentation API based)

Sample code (window.open based)

... Chrome security folks prefer separate context

... for v2 would like to talk with Google Slides team and Data Studio team to get their feedback

anssik: isn't this like a spec for incognito mode?

mfoltzgoogle: correct, and we currently implement this in Chrome using incognito, this would be the proper way to do that

schien: do all apps using 1-UA have the same requirements?
… if all 1-UA apps share the same requirements, then we probably want to strip out the 1-UA case from our current spec
… then the current spec would become 2-UA mode only, and the v2 spec would add 1-UA mode via Remote Window API
… if all the 1-UA mode apps share the same profile, the implementation side is close to window.open
… then in 1-UA mode just talk to the same-origin window object via .contentWindow

anssik: a possible problem with using window.open as-is is that it is a synchronous API

schien: another issue in 1-UA mode is that it cannot have multiple connections
… separating the APIs for 1-UA and 2-UA mode is easier for developers, they know the capabilities
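
[Illustration (not from the meeting): the contrast being discussed, in rough code; the window.open() variant is only a proposal sketch, the 'remote' feature string is invented, and goButton is a placeholder.]

    goButton.onclick = async () => {
      // Today's 1-UA mode: a separate receiving browsing context, message passing only.
      const request = new PresentationRequest('wall.html');
      const connection = await request.start();        // user-gesture-gated screen picker
      connection.send(JSON.stringify({ slide: 3 }));

      // Remote Window API idea: a same-origin window rendered on the remote
      // display, scriptable directly and sharing cookies/storage with the opener.
      const remoteWindow = window.open('wall.html', '_blank', 'remote');  // hypothetical
      remoteWindow.document.title = 'Slide 3';          // direct same-origin DOM access
    };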

Action: mfoltzgoogle to refine the Remote Window API proposal and see if people prefer this over current 1-UA mode, evaluate similar Android API

Action: anssik to add this v2 feature provisionally to the Charter 2018

Investigate possible compatibility with HbbTV #67

mfoltzgoogle: should first look at what is in scope
… broadcast-mode not in scope for the API
… media sync mode not in scope for the API, maybe for the Open Screen Protocol

schien: wondering how they do their media sync

Louay: tried to implement DVB-CSS (DVB Companion Screens and Streams)
… a 170-page ETSI spec, clearly out of scope

<Louay> DVB-CSS: https://www.dvb.org/standards/dvb_css

anssik: what HbbTV features could be in scope?

Louay: app launch and control

mfoltzgoogle: in scope: how to launch an HbbTV app via a custom URL scheme, and how to establish the communication channel
… easier option: figure out how to create a secure WebSocket
… second option: add an auth API to the Presentation API and correspondingly to the Open Screen Protocol

Louay: or allow presentations to be launched via the communication channel
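
[Illustration: a purely hypothetical sketch of launching an HbbTV app via a custom URL scheme alongside a regular receiver page; the hbbtv: scheme, its parameters and the URLs are invented for illustration.]

    const request = new PresentationRequest([
      'https://broadcaster.example/companion.html',     // ordinary receiver page
      'hbbtv:?appId=1.1&appUrl=' +                      // invented scheme/parameters
        encodeURIComponent('https://broadcaster.example/hbbtv-app')
    ]);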

Action: Louay to investigate how to do HbbTV scheme with parameters, and look into HbbTV privacy aspects

Action: mfoltzgoogle to look into HbbTV communication channel requirements in general

[all P1 issues discussed]

Consider use cases for Presentation API v2 with VR capable displays #444

mfoltzgoogle: the use case is to browse in VR, and share a 2D view on an external screen for other people to see
… how to show the UI in a VR context is an implementation detail
… second use case: browse on a laptop, find VR content and want to present it on your mobile phone
… this seems like a browser feature

Action: anssik to report back our findings to the WebVR CG

Action: schien to talk to Mozilla's WebVR people in Taiwan and seek their feedback

[all P2 issues discussed]

Presentation display capability detection #348

mfoltzgoogle: no use case identified, furthermore hard to predict which capabilities to cover

anssik: any concerns in closing this issue?

Resolved: close issue #348

Allow page to turn itself into a presentation session #32

schien: propose we take this as a v2 feature for Charter 2018

mfoltzgoogle: support the proposal
… related to FlyWeb, HbbTV, Signage

Louay: also Hybridcast from Japanese TV

Action: anssik to add v2 feature in #32 provisionally to the Charter 2018

Presentations without communication channel #202

mfoltzgoogle: using the Presentation API as a launch-only API
… basically the controlling page knows if it will need the channel or not
… for example, we have multiple URL support: add a URL for Cast and a URL for DIAL, get the comms channel for the former, use a server for the latter

Louay: how can you check in your app that the communication connection is not working?

mfoltzgoogle: if the presentation started but we cannot communicate with it, then close the connection
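
[Illustration (not from the meeting): the launch-only pattern in rough code; the URLs, timeout value and startButton are placeholders.]

    startButton.onclick = async () => {
      const request = new PresentationRequest([
        'https://example.com/receiver.html',  // messaging-capable receiver (e.g. Cast)
        'https://example.com/dial-app'        // launch-only path (e.g. DIAL), no data channel
      ]);
      const connection = await request.start();
      connection.onconnect = () => connection.send('hello');

      // If no 'connect' ever arrives, treat it as launch-only:
      setTimeout(() => {
        if (connection.state !== 'connected') {
          connection.close();                 // "started but cannot communicate, then close"
          // ...and coordinate through a server instead
        }
      }, 5000);
    };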

Action: mfoltzgoogle to check the spec language is aligned with "presentation started but cannot communicate, then close connection"

[Day 1 closed, next up: WG dinner: https://www.w3.org/wiki/Second_Screen/Meetings/Nov_2017_F2F#Group_Dinner]

Summary of Action Items

  1. mounir to add informative text to the spec that addresses issue #46
  2. mfoltzgoogle to add normative language to the spec around local playback state to address issue #41
  3. anssik to create a GH repo for the new Second Screen Working Group Charter
  4. mfoltzgoogle to investigate gaps in the test coverage for Chrome
  5. mfoltzgoogle to refine Presentation API Testing API to be a WebDriver compatible spec
  6. mfoltzgoogle to file issue about whether "fake" screen is a "real" screen or can be a mock screen that responds to messages
  7. mfoltzgoogle to refine the Remote Window API proposal and see if people prefer this over current 1-UA mode, evaluate similar Android API
  8. anssik to add this v2 feature provisionally to the Charter 2018
  9. Louay to investigate how to do HbbTV scheme with parameters, and look into HbbTV privacy aspects
  10. mfoltzgoogle to look into HbbTV communication channel requirements in general
  11. anssik to report back our findings to the WebVR CG
  12. schien to talk to Mozilla's WebVR people in Taiwan and seek their feedback
  13. anssik to add v2 feature in #32 provisionally to the Charter 2018
  14. mfoltzgoogle to check the spec language is aligned with "presentation started but cannot communicate, then close connection"

Summary of Resolutions

  1. close issue #88 since no use cases identified supporting the feature
  2. close issue #348
Minutes formatted by Bert Bos's scribe.perl version 2.37 (2017/11/06 19:13:35), a reimplementation of David Booth's scribe.perl. See CVS log.