See also: IRC log
Mark: Working with Anssi and
Francois in Second Screen WG on Presentation API for
Google.
... Thinking with my team about how to align different
ongoing works
... around content remoting
... Rough set of slides.
... One model that works well is the Presentation API for
content remoting, where I can send content from my main device
to another screen, either local or remote
... It creates a communication channel.
... One interesting aspect of the API is that we talk about
presenting content to a screen but we haven't defined what a
screen is.
... You can think of a computer connected to a display for
instance.
... Being able to walk up to a public display could be a good
scenario.
... How do you pull a URL from a display and have it move to
your device?
Mathieu: One thing that comes to
mind. You're talking about putting content. Have you considered
anything about inputs?
... If I have a camera and I want to access it.
... You can push an application that can grab input from
another device and transmit it.
Mikko: Several issues. What is the address of this display?
Mark: Going back to the first
comment, if the other device has access to local resources,
then it can pass them along. It's an interesting use case.
... With the current model, it's hard to delegate
permissions.
... Also, being able to use MediaStream to pass this along
would be nice.
Mathieu: Indeed, you don't have permissions for outputs, while you do for inputs.
Mark: Regarding addressing, the API is very high level, but once the application has established a communication channel, it might be possible for the application to discover the local address and push it back to the server if needed.
Mikko: How does the browser do the discovery?
Mark: We have a notion of discovery in the Presentation API that says that there is a device available. If we extend that to filter devices with certain capabilities, the scenario might work better.
Mathieu: Probably.
... Does the user select manually?
... The application asks for a second screen and the user
selects one?
Mark: Yes, the API gives one bit that says whether devices are available. The user then selects the screen in a user-agent-mediated window. The app does not see the list of discovered devices.
Dom: It's a manageable fingerprinting surface. The one bit depends on the context (although it is the same across origins).
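The one-bit availability model discussed above can be sketched as follows. This is a hedged illustration, not a definitive implementation: the receiver URL is a placeholder, and the browser-only wiring is defined but never invoked, so the pure helper showing the only decision an app can derive from the bit also runs outside a browser.

```javascript
// The UA exposes a single boolean ("is any compatible display
// available?"); the app never sees the list of discovered devices.
// Pure helper: the only decision a page can derive from the bit.
function castButtonState(available) {
  return available ? "enabled" : "hidden";
}

// Browser-only wiring (sketch; the receiver URL is a placeholder).
// Defined but not invoked, so this file also loads outside a browser.
async function wireUpPresentation() {
  const request = new PresentationRequest(["https://example.com/receiver.html"]);
  const availability = await request.getAvailability();
  availability.onchange = () => {
    console.log(castButtonState(availability.value));
  };
  // On a user gesture, the UA shows its own picker; the app gets
  // back a connection, never the identity of the chosen screen.
  const connection = await request.start();
  connection.send(JSON.stringify({ type: "hello" }));
}

console.log(castButtonState(true));  // "enabled"
console.log(castButtonState(false)); // "hidden"
```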
Mark: Another scenario. Suppose I
want to edit a photo on my mobile device and I have a wireless
keyboard and screen available, it would be good if I could
temporarily use them to improve my editing experience.
... The application needs to be aware that this is
happening.
Francois: What would you want to expose? The resize event would give you the available viewport. The change of modality for inputs is probably missing.
Mark: Yes, more for inputs than
outputs.
... I could have a Web Component that is good for narrow
screens, and maybe I want to swap to another one for big
inputs.
Jean-Claude: What you describe is not so much a distributed app here, more a migration.
Mark: Right.
... Exploded apps
Youenn: Can the user agent filter
devices that can be picked to ensure compatibility?
... The browser is handling discovery. In some cases, the
browser may want to show only the TV in its selection window.
How does the user agent know that?
Mark: One option would be to add filtering options. Another would be to expose new events for the type of modality that becomes available.
Youenn: Are you willing to tie it in with the Presentation API or something else?
Mark: I don't know if the
Presentation API is the best vehicle for this particular
scenario, because you don't necessarily want to split your
application across different contexts. You're more interested
in discovering and using more inputs/outputs.
... Maybe it requires creating a split mode.
Francois: It seems CSS considered
different views at the beginning, to render different views of
the same DOM tree onto different windows. That did not take off
in the end, but it would address your need here.
... Daniel Glazman would know more.
Jean-Claude: Are you talking about shared DOM or views of the same DOM?
Mark: The former, I think. A
matter of presenting different things depending on the viewport
size.
... There may be different markup, but in the same DOM. You're
not creating another document.
... In the Presentation API, you create different contexts.
Here, you would have only one context. It's a kind of
mirroring.
Dom: If you have both the view on the mobile and on the big screen, where does the logic of reacting to the resize event take place?
Mark: One solution would be to
keep the same DOM and use something such as CSS Media Queries
or similar things.
... Another alternative would be to add another context, but I
would prefer to avoid that.
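The single-DOM approach sketched above (same DOM, CSS Media Queries or similar) could look roughly like this; the breakpoints are hypothetical, and the `matchMedia` wiring is browser-only and left uninvoked.

```javascript
// Hypothetical breakpoints, for illustration only: pick a layout
// class from the available viewport width.
function layoutFor(widthPx) {
  if (widthPx >= 1280) return "two-pane";         // e.g. phone + big screen
  if (widthPx >= 600) return "single-pane-wide";
  return "single-pane-narrow";
}

// Browser-only wiring (defined, not invoked): the DOM stays the
// same; only its presentation changes when a screen is added.
function watchViewport(doc) {
  const query = window.matchMedia("(min-width: 1280px)");
  const apply = () => {
    doc.body.className = layoutFor(window.innerWidth);
  };
  query.addEventListener("change", apply);
  apply();
}

console.log(layoutFor(320));  // "single-pane-narrow"
console.log(layoutFor(1920)); // "two-pane"
```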
Francois: From a CSS perspective, this should work. However, user interaction APIs are not associated with a particular view, so I'm not even sure what click event coordinates could possibly mean.
Mark: Right.
Jean-Claude: I have a PhD student
working on splitting content across screens. You need mutation
events to observe changes.
... You cannot have two rendering trees per document.
Francois: Right, so that's why I mentioned the CSS from 20 years ago, which had that possibility.
Mathieu: I would also assume that you would need to display different content on both screens.
Mark: OK, so I'm hearing that it would be complicated to keep only one DOM. It might be useful to find a way to share resources to reduce the impact on performance and memory.
Youenn: Is it more interesting to share resources upfront or let the user agent decide afterwards?
Dom: If you have a lot of local state, sharing resources is interesting. If you have GBs of data to upload before the application can run, this would not work well if resources need to be duplicated.
Mathieu: I would not want to duplicate content across devices. Content should live on one device or the other.
Youenn: If the smartphone has a large collection of images, it can transfer data when it needs. It would not be the role of the browser. App-specific.
Mark: Bringing it back to the
Presentation API. In the Presentation API, we always start in a
clean, empty context, with no way for the app to distinguish
between a local and a remote context.
... If you duplicate the context, you need to pass the local
state around: authentication, etc.
... What would be interesting with sharing is that you wouldn't
have to do that duplication.
... A solution could be something like a frame that comes into
existence when the screen is added.
... I think it is important to keep things within the same
browsing context.
Mathieu: Trying to share the DOM
seems to be doing too much for Web application
developers.
... That's how it works today with applications.
Youenn: I think we should focus on simple things first. If some common pattern emerges, we might bring it to be supported by browsers.
Mathieu: You might want to have the input and the output directly connected together
Dom: That seems just like an implementation detail.
aalfar: The app is still running on the mobile phone in your example, not on the display
Mark: Yes. It would be a little awkward to do things such as Pointer events.
Dom: Is the communication channel in the Presentation API always reliable?
Mark: Yes. You can pick up different protocols, but reliable.
Akashi: In your example, you're
creating two communications. Aren't you worried about the time
lag that this could introduce?
... I'm concerned about that.
Mark: We've delegated this to the
communication channel.
... You may want to have a more direct proxy on top of
this.
Youenn: Is the discovery restricted to local devices?
Mark: No.
... We don't have anything in the Presentation API that
provides the app with any kind of capabilities. It needs to
discover them itself, using feature detection for instance.
... We have a way to match a URL to compatible devices so that
the user agent may filter displays accordingly.
... A few more slides. Some already discussed. App remoting is
closer to the Presentation API.
... We don't really require that the target be a screen. It
could be a regular computer.
... It's more a user interface extension.
... This can be experimented with via a library on top of the
Presentation API: the notion of synchronizing my DOM tree with
the remote end.
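A library of the kind mentioned above (synchronizing a tree with the remote end over the Presentation connection) might look roughly like this. The sketch operates on a plain object tree rather than a real DOM, and the patch message format (`{op, id, ...}`) is invented for illustration; a real library would serialize MutationObserver records on the controller side.

```javascript
// Apply a list of patch messages to a plain tree keyed by node id.
// The message format is a hypothetical stand-in for serialized
// DOM mutations.
function applyPatches(tree, patches) {
  for (const p of patches) {
    if (p.op === "insert") {
      tree[p.id] = { text: p.text, children: [] };
      tree[p.parent].children.push(p.id);
    } else if (p.op === "setText") {
      tree[p.id].text = p.text;
    } else if (p.op === "remove") {
      const parent = tree[p.parent];
      parent.children = parent.children.filter((c) => c !== p.id);
      delete tree[p.id];
    }
  }
  return tree;
}

// The controller would send patches over the Presentation connection,
// e.g. connection.send(JSON.stringify(patches)); the receiver would
// call applyPatches on its replica.
const replica = { root: { text: "", children: [] } };
applyPatches(replica, [
  { op: "insert", id: "title", parent: "root", text: "Hello" },
  { op: "setText", id: "title", text: "Hello, screen" },
]);
console.log(replica.title.text); // "Hello, screen"
```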
Dom: FYI, Francois and I are
involved in a project called MediaScape that touches on similar
things. It investigated the notion of shared state, and it
works well.
... Another aspect we investigated is around the layout of
components across devices, how you move components from one
device to the other.
Mathieu: You have multiple DOMs, then?
Dom: Yes, the assumption is that
all devices have a browser running.
... For the dumb rendering display, could you do that with the
capturing mode of WebRTC?
Mark: That's basically how we implement the 1-UA mode in the Presentation API.
Mikko: [missed]
Mark: [missed as well]
Mikko: The restrictive AMP framework that was demoed here is probably the best way to try this sync thing.
Dom: That assumes that the state you want to transfer is the DOM, but that's probably not the main aspect of it.
Mikko: You duplicate from point A to point B, and run the whole JavaScript.
Mark: Snapshotting and replication would be useful to pre-populate forms that you're migrating to another computer, for instance.
Mathieu: That sounds app
specific.
... There's all your JavaScript context. There might be things
that you cannot remote, such as a WebRTC connection.
... I think you need to provide capabilities such that the app
can share things but retains the flexibility.
Dom: Specific network communication is not something you can expose.
Mikko: You need to restrict or harmonize the platform.
Mathieu: I don't think the platform should do it all for the app.
Dom: You can do it but you restrict the platform if you do so.
Mathieu: and then you're back to
1998 all over again.
... You want to provide synchronization facilities.
Dom: Indeed, the MediaScape
project I was mentioning did just that.
... It confirms that sharing the state at this level lets you
do device rendering very cheaply and it's also easy to have a
consistent experience across devices.
Mark: I'd like to look a bit more at the MediaScape outcomes.
Dom: Right. I'm not saying that this is THE solution, but it's interesting to see that you can achieve great things with simple basic steps.
Mark: I think there may still be complexity with multiple inputs.
Dom: One way to view it is to see inputs as a way to modify the shared state.
Mark: but it goes to one device.
Dom: The device can push the
update to the shared state.
... Even though all the input is done on one device, the shared
state contains all the updates, so all devices know about
it.
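The shared-state pattern described above (every input is turned into an update to a shared state that all devices observe) can be sketched with plain objects; the bus and replica names are invented for illustration, and a real system would carry updates over a network channel.

```javascript
// A trivial in-memory "shared state" bus: every connected replica
// receives every update, regardless of which device produced it.
class SharedState {
  constructor() {
    this.replicas = [];
  }
  connect() {
    const replica = { state: {} };
    this.replicas.push(replica);
    return replica;
  }
  // An input event on any one device becomes a state update and is
  // broadcast, so all devices converge on the same state.
  publish(update) {
    for (const replica of this.replicas) {
      Object.assign(replica.state, update);
    }
  }
}

const bus = new SharedState();
const phone = bus.connect(); // the device that has the keyboard
const tv = bus.connect();    // a display-only device

// A keypress handled on the phone becomes a shared-state update...
bus.publish({ query: "cat videos" });

// ...and the TV sees it without ever receiving the input itself.
console.log(tv.state.query);                       // "cat videos"
console.log(phone.state.query === tv.state.query); // true
```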
Youenn: Note that there may not be input/output in some cases.
Mark: We're a bit over time. I'll clean up my slides and post them on the Wiki. Thanks for the feedback!
[Breakout adjourned]