See also: IRC log
See also the minutes of day 2.
[Anssi kicking off the meeting]
Anssi: [a few reminders]. All the
decisions we take at this meeting are provisional.
... People who can't attend this meeting will have the
possibility to react to these resolutions.
... Absent comments, the resolutions are considered final two
weeks after the meeting.
... Plan is to close
at 03:00pm to allow people to go to ad-hoc meetings.
... If we consider that we haven't reached our goals, we may
extend this deadline.
... Meeting starts at 09:00 tomorrow as well.
Anssi: Quick round of
intro.
... I'm the chair of this group. From Intel. Interested to make
progress.
Mark: From Google. I'm the editor of the Presentation API. Also working on the implementation in Chrome. My interest is to enable second screen in general.
Tomoyuki: From KDDI. Several use cases that use both the tablet and a large screen.
Takeshi: From Sony. Working on an ink notepad. Interested to share the screen on this device.
... Also interested to control that device remotely.
Chris: From the BBC. Discovery of devices on local networks.
Shih-Chiang: From Mozilla. Firefox OS engine. Open solution for the Presentation API.
Hyojin: From LG. Applying the API to LG products is interesting for us.
Louay: From Fraunhofer FOKUS. Working in the multi-screen domain for 5-6 years.
... Interested to see an interoperable solution.
Ed: From Apple. New participant of the WG since Tuesday.
Anssi: Welcome to the group.
Youngsun: From Samsung. First
time in the Second Screen WG. Working on broadcasting
standardisation (ATSC, HbbTV). Interested to align the spec
with these standards.
... e.g. with HbbTV which has some second screen solutions.
Taejeoung: From Samsung. Interested in applying the Presentation API to products.
Jinwood: From Samsung. Samsung Web engine. Presentation API for Samsung mobile devices and TV browsers. Looking forward to interoperability between devices and browsers.
Mohammed: From Orange. Interested to see how we can use the Presentation API in Orange services.
Francois: From W3C. Interested in standards, of course. Excited about multi-device.
[Looking at the agenda]
Anssi: Before lunch, plan is to
go for P1 issues.
... Then we'll go over P2 issues.
Mark: We're doing the horizontal reviews and someone coming from WebAppSec tomorrow. Would it make sense to move issue #80 to tomorrow so that Mike West can contribute to this discussion?
Anssi: WebAppSec is next door. Let's see if we have confirmation that he can join.
[agenda updated on the Wiki]
Anssi: Any additional item that you'd like to discuss that has been brought up in discussion this week for instance?
-> Issue #153
Anssi: Brought up by Anne as part
of the TAG review
... Anne would have liked to see the whole stack
specified.
... Mark looked at this, Louay worked on a landscape
document.
... We should come up with a group proposal to address the
issue.
... Clearly, under the group's charter, we cannot mandate
protocols. But, this work could happen in other groups or other
organizations, e.g. IETF.
Mark: I think the charter is fine
as it is. The API should work both with open and proprietary
protocols. I do acknowledge that there is no protocol right now
that we can point to that addresses our need.
... I'm happy to collaborate with others, but I need to
identify collaborators to work on that.
... We're having a meeting with Mozilla in a couple of weeks,
and this is on the agenda. If we agree, then we could perhaps
propose something.
... There are different protocols that are of interest for the
different pieces, including WebRTC, TLS, mDNS and the
like.
... Some good discussion in a breakout session yesterday.
... Based on internal discussions, the IETF is probably the
right place to start the work on creating a stack for
that.
... One spec that defines the API and one spec that defines the
protocol stack.
Shih-Chiang: This topic has been discussed. We have not shared all of our protocol stack with Google yet.
Mark: We wanted the spec to stabilize a bit before.
Anssi: It seems that the group's response is that we want to leave the door open for implementations to use their own protocols and also to standardise protocols.
<mfoltzgoogle> Secure communications with LAN devices breakout session: https://www.w3.org/wiki/TPAC/2015/SessionIdeas#Secure_communication_with_local_network_devices
Ed: Presentation API for local display presentations do not require protocols at all, actually.
Anssi: True. AirPlay and Miracast are good examples of 1-UA protocols.
Francois: This issue is indeed scoped to the 2-UA case. Does not apply to the 1-UA case at all.
[Presenting interoperability document]
Louay: Lists the technologies
that are interesting to implement this API.
... Two parts, one for 1-UA cases and one for 2-UA cases.
... As Mark said, there is not a single protocol that can
achieve the whole thing. I split topics into different
steps.
... In the 2-UA case, you have a controlling user agent, with
two parts for the Presentation API, one running on the
controller and the other running on the receiver.
... The controller needs to discover the receiver, launch the
browser and URL on the receiver, and create a communication
channel. Also, for signaling, there needs to be some way to
exchange the state of the presentation session.
... And of course there are security considerations.
Louay: For discovery, SSDP, mDNS,
BLE are just examples here. Other things such as QR code or NFC
would be possible.
... These discovery protocols are often linked to launch
mechanisms. For instance, DIAL uses SSDP. Chromecast uses
mDNS.
... For Launch, DIAL, Google Cast. HbbTV 2.0 is more than just
launch since it also touches on communication.
... In HbbTV, there will be a WebSocket server in the HbbTV
device.
... HbbTV compatibility is discussed in another issue.
... For communication, WebSockets, WebRTC, Raw Socket or Google
Cast are examples of technologies.
... For signaling, this is open. That part is specific to the
Presentation API.
... For security, different solutions are possible, open issue
for the group.
... Now, in the 1-UA case, the question of interoperability is
less of a concern, because everything is done by the
controlling side.
... HDMI, Intel WiDi, Miracast, Airplay, MHL (although I'm not
100% sure).
... I don't think we can use WebRTC directly. You need some
sort of discovery mechanism, as with mDNS in Google Cast.
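The 2-UA steps and candidate technologies Louay walks through can be summarized in a small data structure. This is purely illustrative; the groupings follow the discussion above and are not a normative mapping.

```javascript
// Illustrative summary of the 2-UA protocol steps discussed above and
// the candidate technologies mentioned for each step.
const twoUAStack = {
  discovery: ['SSDP', 'mDNS', 'BLE', 'QR code', 'NFC'],
  launch: ['DIAL', 'Google Cast', 'HbbTV 2.0'],
  communication: ['WebSocket', 'WebRTC', 'raw socket', 'Google Cast'],
  signaling: 'open; specific to the Presentation API',
  security: 'open issue for the group',
};

// A complete stack would pick one option per step, e.g.:
const exampleStack = {
  discovery: twoUAStack.discovery[1],          // mDNS
  launch: twoUAStack.launch[1],                // Google Cast
  communication: twoUAStack.communication[0],  // WebSocket
};
```

As Mark notes below, the missing piece is not any single protocol but the specified glue between the steps.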
Mark: Discovery gives you the
signaling endpoint, and then you use the signaling endpoint to
launch the application, and then establish the communication
channel.
... Back to signaling, a signaling connection will be used for
different presentations. The standard will have to address how
multiplexing is handled on either end.
... You want to share that connection between presentation
sessions.
Shih-Chiang: In our current
implementation, we try to establish individual connections for
each session but only use one entry point for bootstrapping the
communication channel.
... We want to isolate all the communications between the end
points to make it more secure.
Anssi: Thinking about how to move forward with this issue. Anne's concern was that the full stack is not given. What are the gaps for RF standards?
Francois: I think the main gap is around launch and signaling.
Anssi: Would we need to extend the existing protocols?
Shih-Chiang: My understanding is that
we're still missing glue parts between protocols.
... For instance, for discovery, you need to specify which type
of service you're willing to launch.
Mark: I believe that there are
unencumbered technologies, but indeed the link between them is
not necessarily specified.
... Authentication handshake, etc. I'm not concerned about each
of them, but someone has to do the links, indeed.
Anssi: OK, I think that there are
different places where this could be done. I guess we'd want
the place where we'd have the least friction.
... One option would be the Community Group. Another would be
to approach IETF.
... Do group participants have preferences? Would you want to
contribute?
Mark: Most likely interested
Shih-Chiang: we would have something to contribute
Louay: As well.
Francois: I think the CG would provide the right framework to incubate the idea and pin down the gaps. It may be too early to go to the IETF, although it's certainly the right place to do that and I'm sure you have connections there.
Anssi: OK, the CG has been dormant for the time being. I'm chair of the CG as well. It seems a good idea to look into re-chartering that group to incubate the idea of interoperable protocols for the Presentation API.
<scribe> ACTION: Anssi to trigger the re-chartering discussions of the Web Screens Community Group around the idea of interoperable protocols [recorded in http://www.w3.org/2015/10/28-webscreens-minutes.html#action01]
Chris: I think we would be interested as well.
Anssi: It sounds that there is enough interest for creating a group and making progress on it.
Louay: Will interoperability be considered for the 1-UA case?
Anssi: I don't think there is such a hard requirement for interop in this case.
Mark: I'm concerned that if we scope this work to the 2-UA case, it raises the bar on requirements on the second screen. Same thing if we restrict this to the 1-UA case.
Anssi: So are we in agreement to include both the 1-UA and 2-UA cases for the CG discussions?
Mark: I'm not suggesting that we should discuss priorities as part of these discussions.
Anssi: OK.
Francois: For the official response for this issue, I think it should start by mentioning the scope of the WG. It was not an oversight but rather a will to have the API agnostic of the underlying protocols. And then we should of course mention the plan.
Anssi: OK, I'll draft the answer.
<scribe> ACTION: Anssi to draft a reply to issue #153 based on F2F discussions. [recorded in http://www.w3.org/2015/10/28-webscreens-minutes.html#action02]
Mark: We may have more concrete feedback after meeting with Mozilla.
Anssi: OK, let's leave it open
whether it is the CG or some other group enacting the
plan.
... We'll cover the TAG review with Travis tomorrow.
Mark: The 1-UA case with directly attached displays should be supported by all implementations.
Francois: I wonder if the spec should mandate that.
Ed: That use case is the one you would expect everyone to support
Francois: Exactly.
Mark: This does seem like an
important implementation guideline at least. Not sure if it
rises to the level of a requirement.
... The more you mandate, the better interoperability you get,
for sure.
Anssi: A device may not have an HDMI, MHL, or whatever connector, so we may leave these devices out.
Mark: Are you concerned that by mandating this, it would make it too hard to implement?
Anssi: No; rather, would that make some devices unable to conform?
Ed: I don't think that it's a
problem.
... It can still be useful with a single display.
Mark: You could have a virtual
display for instance.
... That should be the absolute minimum.
Anssi: It sounds promising.
Mark: My caveat is that we should offer that but it should be an opt-in for the user, because otherwise you end up with displays that are always available.
Anssi: It sounds that we should have some text in the spec along these lines.
Mark: I'll take a stab at this.
<scribe> ACTION: Mark to craft a PR to investigate requiring (or strongly recommending) support for attached displays [recorded in http://www.w3.org/2015/10/28-webscreens-minutes.html#action03]
Anssi: Shih-Chiang, fine with this?
Shih-Chiang: I think so.
Anssi: I think we're done with this issue. Thanks.
-> Issue #149
Anssi: Issue opened for a couple of months.
Mark: Two questions to
answer:
... 1. How do we notify the sender that this message could not
be delivered?
... Since the communication channel is supposed to be reliable,
it's unclear whether we should keep the channel open.
... If we keep it open, it kind of implies that the
implementation will retry without telling the app.
... In that scenario, one possibility was to have send return a
Promise. Not entirely clear what that gives you. The other
possibility is to throw an exception. That's a bit tricky
because send is asynchronous, so you cannot really tell
beforehand that the send will fail.
Anssi: I agree, throwing an exception does not work.
Mark: With the Promise case, we cannot resolve that promise before the message is delivered. And there will be latency introduced if you have a batch of sends to make.
Anssi: Aren't we reinventing TCP here?
Mark: Sort of.
... The alternative is an error event.
Francois: It seems important to align with existing standards such as WebSocket, which uses an error event.
Mark: That's my conclusion as well.
Anssi: In the issue, that's 1. and 4. options.
Mark: Actually, 1. and 2. (closing the presentation session).
Shih-Chiang: When you send a message, if the user agent finds that there are underlying issues, then it may want to retry under the hood, and only report the error if that fails.
Francois: That sounds pretty reasonable to me. And implementation specific.
Shih-Chiang: The browser needs to somehow queue the messages anyway.
Mark: Right.
PROPOSED RESOLUTION: For issue #149, adopt options 1. and 2.
Mark: What the RTC data channel
does is a bit more complex. Pages that care about knowing that
the connection was closed because of that may listen to that
event.
... Option 4. is under the hood. Implementations will likely do
that. Not for the spec though.
... Mounir would like to get the message that failed to be
delivered.
Francois: So a specific error construct will need to be specified?
Mark: Right.
PROPOSED RESOLUTION: For issue #149, adopt options 1. and 2. and define an error event that reports the message that could not be sent
[No objection heard]
RESOLUTION: For issue #149, adopt options 1. and 2. and define an error event that reports the message that could not be sent
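A minimal sketch of how a page might consume the resolved behaviour. The handler names (`onerror`, `onclose`) and the `message` property on the error event are assumptions here, since the error construct is yet to be specified.

```javascript
// Hedged sketch for the #149 resolution: listen for an error event that
// reports the undelivered message (option 1), and for the connection
// being closed on unrecoverable channel failure (option 2).
// Event and property names are assumptions, not spec text.
function attachChannelHandlers(connection, log) {
  connection.onerror = (event) => {
    // Assumed: the event carries the message that could not be sent.
    log.push('send failed for: ' + event.message);
  };
  connection.onclose = () => {
    // Option 2: the user agent closed the presentation connection.
    log.push('connection closed');
  };
}
```

A user agent may still retry under the hood before surfacing the error, as Shih-Chiang notes; that part stays implementation-specific.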
<anssik> [ overview of the latest Editor's Draft of the Presentation API ]
Mark: The spec defines two
conformance classes: the controller and the receiver
... When the receiver creates a presentation it creates a
connection
... To use the API, construct a PresentationRequest with a url
to be presented
... Then call start(), the user chooses a screen, and gets a
connection
... The connection has an ID, which enables creating a second
connection to that presentation
... PresentationAvailability tells you about availability of
screens
... PresentationConnection is how the controller and receiver
communicate
... Can use various mechanisms: could be a cloud server or a
Bluetooth channel
... PresentationReceiver is currently under debate. The
presentation can communicate back to the controlling
document
... There's an onconnectionavailable event
... The Presentation interface has a defaultRequest - allows us
to put a UI into the browser to initiate
presentations
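The controller-side flow Mark describes can be sketched as follows. This assumes a browser implementing the Presentation API; the URL and message are illustrative, and the constructor is injectable only to make the sketch self-contained.

```javascript
// Sketch of the controller flow: construct a PresentationRequest with
// the URL to present, call start() (the user picks a screen), then
// communicate over the resulting PresentationConnection.
function presentSlides(RequestCtor = globalThis.PresentationRequest) {
  if (typeof RequestCtor === 'undefined') return null; // no API support
  const request = new RequestCtor('https://example.com/slides.html');
  // start() shows the screen picker and resolves with a connection.
  return request.start().then((connection) => {
    connection.onmessage = (event) => console.log(event.data);
    connection.send('hello from the controller');
    return connection;
  });
}
```

Setting `navigator.presentation.defaultRequest = request` before this would additionally let the browser surface its own UI to initiate the presentation, per the overview above.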
<tidoust> ACTION: Mark to move the definition of the Presentation interface to the top of the spec (more logical order) [recorded in http://www.w3.org/2015/10/28-webscreens-minutes.html#action04]
Mark: In future there could be
other mechanisms such as NFC
... Also transitioning automatically from mirroring to
split-screen modes
Walkthrough of code examples in the spec
Francois: Looking at example 5, can we assume the state is connected at the start? If the connection is created in the connected state, then this state change event won't be generated.
Mark: We have a bug filed (Issue #198) already
for this
... Establishing a connection may require the user to
authenticate the screen, e.g. using a pairing code
Francois: The issue is closed,
but it's not clear. Is there a guarantee of the initial state
of the connection object?
... It's the 'theConnection' parameter in example 5
Anssi: Will reopen this issue
<anssik> ACTION: Mark and Louay to rework example 5 and refine spec prose to clarify the initial connection state when the connection is created [recorded in http://www.w3.org/2015/10/28-webscreens-minutes.html#action05]
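Until the initial state is clarified, a page can be written defensively along these lines. This is a sketch: it handles both a connection that is already 'connected' and one that fires the connect event later (the `state` values and `onconnect` handler follow the Editor's Draft).

```javascript
// Defensive pattern for the ambiguity discussed above: don't assume a
// freshly obtained PresentationConnection is already 'connected'.
function whenConnected(connection, onReady) {
  if (connection.state === 'connected') {
    onReady(connection); // already connected: the event won't fire
  } else {
    // Otherwise wait for the state change.
    connection.onconnect = () => onReady(connection);
  }
}
```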
Anssi: And now let's look at the recent spec changes
Mark: After the last F2F we met
with Mozilla to look at the browser-initiated presentation use
cases
... We can handle the default request case more simply
... and we introduced the availability object
... Devices that don't support background monitoring could
reject the availability
... There was a change to requiring a user gesture. In general
we want the user to be aware of presentation requests and the
browsing context
... We renamed the join method to reconnect, for joining
existing sessions
... There are two cases for joining: between tabs in the same
browser, and joining the browser to another screen, so the name
'join' wasn't clear
... Previously there was a 'close' method, responsible for
closing an individual connection to a presentation, and closing
an entire presentation
... We decided to add a terminate method and a terminated
state, which allows the closing and termination cases to be
distinguished
... We renamed some things to clarify
... As a browser maker, you implement the controlling side, the
receiving side, or both, so we added some clarification around
this
... There aren't many new features, aside from terminate and
PresentationAvailability
Anssi: That's a good overview
-> Issue #99
Mark: We believe that the initial
use case is for displaying content on a screen that may not
have input capabilities or window systems. There certainly may
be cases where that is not true, so we should be careful about
restricting too much.
... But we also do not want developers to believe that they can
create presentations that use these features.
Anssi: What currently happens when these features are used (alert, confirm, etc.)?
Mark: I think it's reasonable to
have these features be no-ops.
... Another feature would be permission-enabling APIs.
... There might be a way to modify the behavior of the
Permission API to have it deny permission per default.
Anssi: Either that or some error code that says that "I'm in a presentation mode" or something similar.
Mark: Then APIs that create new
browsing contexts (open, createPopup) are tough to implement,
especially in the 1-UA case.
... Sticking to one viewport makes it much easier.
... Finally, APIs that try to move the window around (moveTo,
moveBy)
Francois: How is that done on mobile devices where this is not possible either?
Mark: Right, I don't know.
Anssi: True, we may not have to do anything for these ones. We should have a look at the spec to see how implementations conform to it.
Mark: I think having the spec function like a mobile-like environment seems a good approach.
Anssi: I don't think there's any
special case for mobile.
... I think we can learn from HTML5 how these things are
specified and reuse that.
Mark: OK. Final issue is how does
the presentation interact with the Fullscreen API?
... It's very likely that things will be presented in
fullscreen. But it's also possible that presentations may not
be fullscreens at all.
... I don't have a strong opinion on it, but we need to address
that.
Anssi: Windows 8.1 and 10 have this split window mode
Francois: I don't think MS Edge supports Fullscreen right now, so we won't be able to experiment here.
Mark: Do you know if the content is aware that it runs fullscreen?
Francois: Fullscreen API is not developed in W3C anymore. Back to WHATWG. I think there are CSS properties defined that let you style the content differently when it runs in fullscreen. Don't know if it's implemented.
Mark: Do we want to require that presented document be set with a media type of TV or projector?
Anssi: That could make sense.
Francois: I think that using a
specific CSS media type would not be considered such a
good idea. Media types such as "tv" or "handheld" are
deprecated and essentially unsupported.
... CSS media queries are a better fit.
Anssi: It sounds that we might benefit from the :fullscreen pseudo-class as in the Fullscreen API, or perhaps our own pseudo-class.
<karl> http://caniuse.com/#feat=fullscreen
<karl> [Partial support refers to not supporting ::backdrop, and supporting the old :full-screen syntax rather than the standard :fullscreen.]
Anssi: I also note the pushback
we received to reuse allowfullscreen for nested iframes.
... so perhaps defining our own pseudo-class would be
useful.
... Not an expert though, no strong opinion.
Mark: Not an expert as well.
Anssi: Maybe we should loop in Anne who edits the Fullscreen API.
Louay: In our case, the
presentation may not always be fullscreen.
... Page could become presentation at any time
Francois: That's a good argument not to reuse :fullscreen pseudo-class but rather to define a new one.
Anssi: Good point. Confusing to call this fullscreen when it may not be.
Louay: Also, on the APIs themselves, in our case, the receiving side is a regular window, so APIs could be used.
Mark: Maybe there is a way to
amend the spec to say that if the receiver does not support
user input, then we specify the behavior, otherwise we leave
that up to the user agent.
... The final point is that for Web signage or private
browsing, we may want to specify things more precisely.
... Maybe longer term.
Anssi: Do you think that this would be its own section?
Mark: I think so, yes.
... Requirements for the receiving user agent.
Anssi: That sounds good. I can
also help with that text.
... Louay, you probably want to work with Mark on this one.
Louay: Yes.
[side discussion on the Permission API]
PROPOSED RESOLUTION: For issue #99, add a section to the spec that defines the behaviour of these APIs in the presenting context, do not reuse the :fullscreen pseudo-class, and investigate a possible :presentation pseudo-class (discussing with CSS people)
[No objection heard]
Ed: Sorry to be late to the discussion, why aren't we reusing the Fullscreen API?
Anssi: It would be confusing to reuse :fullscreen because you may not always be fullscreen.
Ed: I think the :fullscreen pseudo-class is intended to cover both cases, the fullscreen one and the lightbox one.
Anssi: Can you take an action to investigate that? Any reason why the CSS WG has not picked up on that?
Ed: The reason is not technical.
Anssi: So we should amend the proposed resolution, then.
Ed: I think what you want to look at is the top layer definition.
<hober> https://fullscreen.spec.whatwg.org/#new-stacking-layer
PROPOSED RESOLUTION: For issue #99, add a section to the spec that defines the behaviour of these APIs in the presenting context, investigate possible reuse of :fullscreen pseudo-class (discussing with CSS people)
RESOLUTION: For issue #99, add a section to the spec that defines the behaviour of these APIs in the presenting context, investigate possible reuse of :fullscreen pseudo-class (discussing with CSS people)
[Lunch break]
-> Issue #79
Anssi: Presentations within
nested contexts are not special cased in the spec.
... There are attacks where origins you don't like might try to
start a presentation.
... Solutions to mitigate: Fullscreen API defines an attribute,
"allowfullscreen", on the <iframe> to allow it to request
full screen.
... Would the same model work for the Presentation API? There were
reasons not to, so we did not push for it.
... There is interest in allowing embedded pages that have no
access to the page that embeds them, so adding an attribute may
not be a solution.
Mark F.: There are millions of YouTube videos or other content embedded throughout the Web whose markup can't be changed.
Mark F: ... The allowfullscreen attribute caused similar problems. The YouTube team expressed concerns with this approach.
Anssi: Couple of proposals: Patch
allowfullscreen, but does not solve this problem.
... Or leave it to the implementation to ensure the user is
informed that the nested content is allowed to present itself.
It's not the top-level document but an iframe.
... General issue with other APIs, e.g. geolocation.
Jean-Claude: Should we make an
exception for same-origin <iframes>? Then it would be
allowed.
... If I author both pages, they are on the same server; it
would be painful to have to specify anything.
Anssi: The issue is the YouTube use case. Can we clearly communicate to the user that the content is coming from a different origin?
<tidoust> [
Example of embed
code with some random Youtube video:
<iframe width="640" height="360" src="https://www.youtube.com/embed/23X_bPydaEw"
frameborder="0" allowfullscreen></iframe>
]
Jean-Claude: Main page. In an <iframe>, you load a script that injects the iframe.
Mark: User gets a snippet of
<html> with the embed code. It's okay for us to say that
YouTube would add an attribute to this snippet.
... Problem is legacy. Can't upgrade existing code.
Jean-Claude: Whitelist in the browser for YouTube presentations within an iframe but no-one else.
Anssi: Allow frame to request permissions. Implementations must make sure that this is obvious to the user.
Mark: I think the risk is already
mitigated by the user having to do a gesture. You can only call
start in direct response to a user interaction.
... Making it clear to the user which origin is requesting
presentation would be good.
... I think we should leave it to the implementer to either
make it an option or blacklist. I'm not familiar with similar
cases that work differently whether they are top-level or
embedded, but I will investigate.
... A similar approach to Fullscreen where you can revert the
action easily
Anssi: I think there is a risk. The user may not understand the prompt.
Mark: Requiring the user agent to tell the user which origin is requesting presentation.
Anssi: If there is an ad and the
provider of the ad goes evil, adds a login button and starts
something fullscreen.
... You could emulate the regular UI
Kenneth: but you're not used to seeing such UI on your TV anyway
Anssi: Not today
Francois: Trying to summarize. 3
possibilities:
... 1. add a new "allowxx" iframe attribute
... 2. prevent this use case entirely
... 3. Improve the current mitigation mechanism by requiring
user agents to show the origin of the requesting content.
... To be discussed with Travis as part of the TAG review,
perhaps?
Mark: Right, I would prefer not
to add a new iframe attribute.
... I'd like to discuss that with Mike and Travis.
... I can take an action to review the requestFullscreen
mechanism as well, since it goes into details.
Anssi: We chatted with Mike West. He has to attend the WebAppSec meeting all day today and tomorrow but we agreed to have lunch together tomorrow. Anyone is welcome to join.
Shih-Chiang: If we are going to allow an
iframe to use the Presentation API, can it use
"defaultRequest"?
... There could be different ones in that case.
Mark: Right. I thought about this a little bit. We want to make sure that the top-level page is co-operating.
Shih-Chiang: So the top-level would be able to assign which iframe defines the defaultRequest?
Mark: Correct.
... I think that there is a way to solve it with the spec
today.
... The top-level document will request presentation on behalf
of the iframe and pass the ID down to the iframe who will take
control of it.
... through a call to "reconnect".
... This is how Google rentals work right now to interact with
the Youtube player.
... We need to prototype and make sure that it behaves as
expected.
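The workaround Mark outlines can be sketched like this. It is a sketch under the assumption that the iframe constructs its own PresentationRequest for the same URL; the postMessage plumbing is illustrative, and a real page would restrict the target origin.

```javascript
// Top-level page: start the presentation on behalf of the iframe and
// hand the resulting connection id down to it.
function presentOnBehalfOfFrame(request, frameWindow) {
  return request.start().then((connection) => {
    frameWindow.postMessage({ presentationId: connection.id }, '*');
    return connection;
  });
}

// Inside the iframe: reconnect to the presentation using the id
// received from the parent, taking control of the connection.
function adoptPresentation(frameRequest, presentationId) {
  return frameRequest.reconnect(presentationId);
}
```

As noted above, this needs prototyping to confirm it behaves as expected across implementations.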
Shih-Chiang: So we don't want legacy pages to support defaultRequest for embedded content.
Mark: No. For that, we would need
to extend the spec.
... I think it's difficult to define.
Shih-Chiang: Yes, choice between multiple presentations would be hard to explain to the user.
Mark: Yes. I haven't been pushing for this use case personally.
Jean-Claude: If the iframe calls the parent environment to say "open something and then I join", it could also set parameters and choose the right embedded iframe afterwards
Francois: Right, that's not the problem here. The idea was to surface embedded iframes' "defaultRequest" at the top level. And we seem to agree not to do that.
Mark: Right.
... In summary, for the "defaultRequest", we think it should
only come from the top-level request. We'll prototype with
that.
Shih-Chiang: In Firefox for Android, the video sharing feature only works with one video in the page. The sharing button is not displayed when there are multiple videos in the page.
Anssi: That's a clever way to teach people to write good pages if they want to use the feature.
Shih-Chiang: Applied to here, we could do two kinds of things. We could want to display the defaultRequest cast button only when there is one embedded content.
Francois: But the top-level page could set defaultRequest as well, which you cannot know beforehand.
Mark: For the purposes of
figuring out the different actions of the page, all I'm saying
is that the user agent should look at the top-level
defaultRequest property.
... We could eventually look at video tags as well.
PROPOSED RESOLUTION: For #79, for security aspects idea is to improve on existing mitigation mechanism without new iframe attribute and get back to TAG and WebAppSec, for defaultRequest the idea is to postpone resolution while implementers experiment with different scenarios
RESOLUTION: For #79, for security aspects idea is to improve on existing mitigation mechanism without new iframe attribute and get back to TAG and WebAppSec, for defaultRequest the idea is to postpone resolution while implementers experiment with different scenarios
-> Issue #202
Francois: This is more
exploratory.
... When we started, we imagined that the channel was something
simple that did not require an API.
... But it is not necessarily the easiest thing to add. I
wonder if we need it at all in some of these use cases.
... We have discussed support for HBB2.0. There is a WebSocket
server included. The device will not be aware of the
Presentation API.
... We are already in a situation where there are legacy
devices with which we cannot establish a channel, because the
receiving side does not implement it.
Jean-Claude: There is no way for a device to discover the TV, without a contract with the manufacturer to implement the pairing.
Francois: Tapping devices via
NFC. The PresentationRequest could use other mechanisms, but
can't implement with a channel. Could display with a QR
code.
... Could be an option for the receiving context to display a
QR code. Two devices launch the same URL to communicate via a
backend server.
... Or a more invitation based mechanism where the URL is
broadcasted over Bluetooth.
... Different approach from the API. But from a user
perspective very similar.
Demo: https://mediascape.github.io/presentation-api-polyfill/
Supports Cast, DIAL, HbbTV 2.0, QR code, BLE (Physical Web), Window.
Francois: Supports many applications that implement a backend server instead of a communication channel.
... Discuss an option that says that the channel is not
required for the request.
... One way to address security is not to have a communication
channel. The applications can manage their own security.
Anssi: Is this similar to the protocol handler API? If you get
a special URL you can open it on a specific page.
... Like irc://blah.
Francois: Don't see the analogy. The app is willing to present content on a second device. request.start() succeeds, but the app is happy to manage communications.
Anssi: Assuming both sides have
Internet connectivity.
... Two devices that have access to geolocation and
orientation. Can correlate these data and pair them.
... By sending this data to a server. Crazy ways to pair
devices.
Francois: Could allow applications to register cloud screens that they manage on their own. This could be managed by the applications themselves.
Anssi: What is the most realistic scenario?
Francois: HbbTV 2.0 is shipping today.
<Francois describes the HbbTV 2.0 flow for discovery and connectivity>
<Discussion about how this applies to the Hbb2.0 use case>
<Demo of polyfill with optional channel>
Shih-Chiang: Allow the web page to generate a session by itself. For the HbbTV use case, the page knows exactly that it works well on HbbTV 2.0 devices. It probably already knows what protocol it needs to find devices.
Francois: Typically the web app
can't discover the device by itself. Does not have access to
network APIs.
... Let UA do discovery and launch. And let the web app take it
from there.
Jean-Claude: If you have an HbbTV 2.0 TV and the companion screen app of brand X, you can use the complete system, including the WebSocket connection.
Francois: Interested that the
controlling side works in a browser.
... There is a need to expose a rendezvous point.
Mark F.: Would support exposing metadata from discovery to assist rendezvous. Concern: Developers will need polyfills to figure out how to deal with them.
... Users will have a bad experience if they get a presentation
they can't control
... One step forward two steps back.
Mark W: Could push these mechanisms back down into the browser over time. This mechanism would help experimentation in the meantime.