See also: IRC log
See also the minutes of day 1.
[Anssi kicking off the meeting]
Anssi: Looking at the agenda.
Travis is here to discuss the TAG review. We'll start by
discussing audio and video rendering, then move on to
horizontal reviews.
... Several of them: Technical Architecture Group (TAG),
Privacy, Security and Accessibility.
Francois: There's also the i18n review, which we actually did. Feedback was roughly that there was no internationalization issue.
Mark: Takeshi and I had a
discussion. There may be an issue with the local character set
used in the presentation. Question about some possibility to
set the requested locale through the Presentation API.
... I think we just need to check the rules for setting the
encoding and locale for the group.
... Encoding of the strings passed through the communication
channel.
Travis: There's a specific USVString type in WebIDL that can be used for such purpose.
Mark: WebSocket probably has something along these lines, we should reuse that.
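For context, USVString differs from DOMString in that unpaired surrogates are scrubbed before the string crosses an API boundary. A minimal sketch of that conversion (the helper name `toUSVString` is ours, for illustration; the real behavior is defined by WebIDL):

```typescript
// Convert a JS string (arbitrary UTF-16 code units) to USVString semantics:
// unpaired surrogates become U+FFFD, everything else is preserved.
function toUSVString(s: string): string {
  let out = "";
  for (let i = 0; i < s.length; i++) {
    const c = s.charCodeAt(i);
    if (c >= 0xd800 && c <= 0xdbff) {          // high surrogate
      const next = s.charCodeAt(i + 1);        // NaN at end of string
      if (next >= 0xdc00 && next <= 0xdfff) {  // valid surrogate pair
        out += s[i] + s[i + 1];
        i++;
      } else {
        out += "\uFFFD";                       // lone high surrogate
      }
    } else if (c >= 0xdc00 && c <= 0xdfff) {
      out += "\uFFFD";                         // lone low surrogate
    } else {
      out += s[i];
    }
  }
  return out;
}
```

This is why USVString is attractive for a cross-device message channel: both ends see only well-formed Unicode scalar values.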
Shih_Chiang: If we cannot pass this kind of locale requirement during the application launching phase, there may be a problem for sites that use the local locale.
Mark: If you, as a controller, want to start a presentation, the question is: would you rather specify the preferred locale of the presentation?
Anssi: OK, it sounds we should add a topic on the agenda to discuss that more thoroughly.
Jean-Claude: Data passed between endpoints could be interpreted differently, is that the problem?
Mark: Yes.
Louay: We need to address this
from an end-user perspective. Smartphone is personal device. TV
may not be.
... You may want your application to have different
languages, so we may need a way to decide on a different
behavior. What is the default setting and is there a way to
change it, basically.
Anssi: OK, agenda updated.
[reviewing the rest of the agenda]
Anssi: Any comment on the agenda?
[none heard]
-> Issue #13
<mfoltzgoogle> Mounir and Anton have a github with the gist of the proposal.
<mfoltzgoogle> https://github.com/mounirlamouri/htmlmediaelement-remoteplayback
Anssi: Mark will report on the proposal from Mounir and Anton
Mark: See GitHub for the proposal.
<mfoltzgoogle> There are two
documents:
Use cases and requirements
WebIDL for extending HTMLMediaElement
Mark: Two documents in there: one lists use cases and requirements, the other describes the proposed interfaces
Mark: Basically, we want to add a
feature to media element that allows for remote playback. You
can also disable this feature.
... We're trying to capture the common use case, where there is
an src that targets a stream that can be played in other
browsers.
... There are proprietary solutions for that: Safari 7 and 9
allow remote playback on AirPlay that the app can detect.
... Once the user picks the device, Safari takes over video
playback.
<Travis> Yes, Microsoft Edge recently announced a casting feature: http://venturebeat.com/2015/10/29/microsofts-newest-windows-10-preview-for-pcs-comes-with-edge-media-casting-ask-cortana-in-pdfs-in-edge/
<Travis> also: http://blogs.windows.com/windowsexperience/2015/10/29/announcing-windows-10-insider-preview-build-10576/
Louay: I think Firefox for Android supports something like this for Chromecast
Shih_Chiang: Right, that's browser UI.
Mark: Chrome has a similar functionality through the Cast button
<mounir> The basic idea of the API is for websites to be able to implement custom controls and still have this feature.
<mounir> The same way they can re-implement play/pause/seek and fullscreen
Mark: The Browser UIs section describes the different implementations so far. Mostly similar from browser to browser.
[going through the Android remote playback workflow]
Mark: Play/Pause/volume are the
key features you would want. Seeking is important as
well.
... It's been a very popular feature for the use of Cast on
Android.
... The basic requirements are that a Web site should be able to
know if there is a display available, know when there is a remote
session already connected, and control the remote playback through
play/pause/... commands
... The ability to know when the remote playback is connecting
is, I think, useful because there will usually be some sort of
buffering on the remote side before the playback actually
starts
... Any comment?
Francois: Judging from Travis' links, MS Edge seems to now offer a similar option
Mark: Working with Miracast?
Travis: Yes, and DLNA.
Mark: The basic idea of the disable feature is that by default, it should work, but the app may want to disable the default browser behavior to provide custom controls.
Anssi: Let's review the interface proposal
Mounir: On the media element
itself, there would be a new attribute adding a new object to
the element, and a flag to disable the remote playback default
behavior.
... The "remote" attribute would always be set. You would call
the getAvailability() method to check that there are available
displays.
... This is similar to how the current Presentation
works.
... "start" allows the custom control to actually start the
playback. I do not know what the Promise returns here.
Jean-Claude: What about controls?
Mark: The controls remain on the
media element. They would just control the remote
playback.
... You cannot send additional messages
<mounir> start() will show a UI to select a device, the Promise<bool> returns whether a session is starting
<mounir> start() is a name that might change in the future for something more about UI (like .showControls())
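The shape described above can be sketched as follows. The names `remote`, `getAvailability()` and `start()` come from the proposal; the mock internals and the `disableRemotePlayback` attribute name are assumptions so the snippet is self-contained:

```typescript
// Sketch of the proposed remote playback extension, with mocked internals.
interface RemotePlaybackAvailability {
  value: boolean;                  // are compatible displays currently available?
  onchange: (() => void) | null;   // fired when availability changes
}

class RemotePlayback {
  private availability: RemotePlaybackAvailability = { value: false, onchange: null };

  // Resolves with an object the page can watch instead of polling.
  getAvailability(): Promise<RemotePlaybackAvailability> {
    return Promise.resolve(this.availability);
  }

  // Shows the device picker; resolves true if a session is starting
  // (per Mounir's note; the name may become something like showControls()).
  start(): Promise<boolean> {
    return Promise.resolve(this.availability.value);
  }

  // Test hook, not part of the proposal: simulate a display appearing.
  _setAvailable(v: boolean): void {
    this.availability.value = v;
    if (this.availability.onchange) this.availability.onchange();
  }
}

// On the media element: a "remote" attribute plus a flag to disable
// the browser's default remote playback UI (attribute name assumed).
class MockMediaElement {
  remote = new RemotePlayback();
  disableRemotePlayback = false;
}
```

A page with custom controls would call `getAvailability()` to decide whether to show its Cast button, then `start()` from a user gesture.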
Jean-Claude: Internally, the implementation would proxy the commands to the remote end.
Mark: Right.
Mark: We can probably find a common set of controls that are implemented everywhere.
<anssik> [ demonstrating remote controls for play/pause/seek using a similar demo http://webscreens.github.io/requestshowmedia/demo/ ]
Shih_Chiang: Firefox OS tries to define a minimal set of commands and status reports for remote video.
Francois: Note that there are media accessibility requirements that might apply and affect this set
Louay: What about DRM content? How do you filter the devices that are available? DRM is just an example, there may be other things such as video quality or streaming format (e.g. HLS)
Mark: If it's 1-UA mode, you
would expect things to be supported. If it's 2-UA mode, you
need to check the codec of course. For EME, the license may
only apply to one local device so remote playback may not be
supported at all.
... Those are questions that need to be addressed
Shih_Chiang: For Firefox, we pick only the
simplest scenario, where the video is trying to play an HTTP
URL.
... And also we are whitelisting the supported codecs, e.g.
sending the MP4 source to the remote end.
[reviewing Mounir's IRC comments]
Shih_Chiang: for "start", it's about custom UI done by the Web App. The showControls approach would more be a way to let the user agent handle things by itself
Anssi: Are you guys interested in moving this forward?
Mark: I think so.
... We want that feature to be better integrated with the Web
page.
... Better user experience with custom controls.
<mounir> We want to ship the attribute to disable default controls as soon as possible
<mounir> the current browser default controls are providing a bad behaviour when used with Presentation API
<mounir> we need this attribute to provide a smooth experience for websites
<mounir> (ie. the idea is that a website using Presentation API and has default controls, the users will end up seeing two "Cast" buttons that will behave slightly differently)
Anssi: Is it a good proposal for something you might implement in the future, Shih-Chiang?
Shih_Chiang: Maybe. It is probably more a priority for things such as Youtube, but that's something we will probably address in the future.
Anssi: It seems it would be good to move this to an Editor's Draft. Do we have editors?
Mark: I can't speak for people who are not around, but I know Anton is willing to work on the feature
Anssi: I note the feature is in scope of the Working Group
<mounir> I think Anton and I will can edit the spec
Anssi: It will be separated from
the Presentation API, because it is different
... Any objection to moving forward?
... Seeing that Mounir and Anton "will can" edit the spec, so
that's good.
<mounir> :)
PROPOSED RESOLUTION: for issue #13, Anton and Mounir will spec an Editor's Draft that follows the proposal shared with the group so far.
Anssi: I think it's good not to have a dependency between the two specs, as implementers may want to implement one or the other spec.
<mounir> Avoiding dependencies is something we did in the current spec.
<mounir> As you can see, it is reusing designs from the Presentation API but not the same interfaces
Mark: One thing to consider. The bad experience now is that if a browser implements the Presentation API and this remote playback, there may be two approaches to provide a similar experience from an end-user perspective, hence the need to be able to disable the remote playback to take control of the experience.
Anssi: It may be a good idea to create the Editor's Draft on GitHub in a dedicated W3C repo
<mfoltzgoogle> Thanks Mounir for joining.
Francois: That's totally doable. I'll see with Mounir and Anton how they prefer to move forward. Happy to create the repo.
RESOLUTION: for issue #13, Anton and Mounir will spec an Editor's Draft that follows the proposal shared with the group so far.
-> TAG review of the Presentation API
Anssi: I'd like to thank Travis
for the TAG review.
... Can you take us through the document?
Travis: Reviewed 1-2 months ago
from the perspective of the TAG. Good patterns, etc.
... High level, the spec is breaking new ground. Idea of remote
controlling a remote browsing context.
... Today Web content is tied to the screen you are physically
interacting with. Interactivity and display live in the same local
sphere.
... Even opening new windows happens within the constraints of the
browser. The browser ensures that communications are
available.
... window.open(), browser creates a new frame. Channel created
to opener window. Multiple constraints applied: Essentially on
the same thread as the rest of the Web platform.
... Web developers expect to forward node trees to other
contexts.
... Now connecting to a completely separate device, and remote
controlling it.
... I found it interesting that the API specifies the high-level
experience in detail, but not much detail on how to create an
interoperable communications channel.
... This prevents a scenario where three browsers are able to
communicate with the same presentation.
... However other specs use a structured cloning algorithm that
is not specified. Assumed to be in same browser.
... send() API is different from postMessage(). Is that
intentional?
... Security and privacy. Reminds me of fullscreen. Iframes may
need additional permission - a separate permission may be more
appropriate.
... API creates a session object that represents the remote
session. It seems that I can, say, start a game, a couple
of friends can join, and then I can leave.
... Lifetime and ownership model of the second screen. Is it a
separate entity, is it tied to the opener, or does it live only
while there is a live connection?
... There is a stop API. What happens if you never call
it?
... Security requirements for communication channel. When the
controlling page is loaded securely, what is the guarantee on
channel. The TAG+WebAppSec has a security questionnaire (where
these questions came from).
... What is your view of the origin of the second screen. Is
the expectation that it becomes its own origin, separate
relationship. Is it client-server or client-client?
... Client-server is well understood, each side makes its own
decisions. Client-client is limited to what capabilities. Each
client has to choose what origins it can accept messages
from.
... postMessage model
Francois: no postMessage, since
the recipient may not be friendly
... back to the same model as in WebRTC
... this was the motivation to switch to send()
primitives
... if the device I'm using wants to connect to another device,
it is unable to check that the connection is not faked
... no way to convey trust
... IOW, what you receive cannot be trusted
... there is no way for the web app requesting presentation to
tell its origin to the second screen device in a secure way, or
the other way around: for the second screen to trust the
sending side origin
[ Travis repeating the use case ]
Francois: identity check would
happen through some sort of signalling channel, that is not
specified
... having postMessage on this origin would set false
expectations
... when you're in a single user agent you can trust the
origin
Mark: Any origin-based model
cannot be trusted if the UA cannot be trusted
... delegating the trust to a third party is one viable option
Travis: there's an existing model in the browser that is similar
to what you're doing
... open PayPal in one tab
... open a commerce site in a second tab in the same UA
... those two origins do not know they're loaded alongside each
other, totally isolated
... still, postMessage lets you post a message to the * world with
some payload
... the UA will faithfully deliver the payload to all origins that
are valid per the filter (*)
... it is a push model, instead of pull model
... it becomes the responsibility of receiving party to do the
validation of the received data
... it is trivial to write a protocol atop this model
... e.g. say "I wasn't expecting to receive a message at
all"
... if A has an origin loaded on my machine, B on another
origin on another machine
Francois: with postMessage, in a
single UA, you trust your UA
... in 2-UA, you have to trust two UAs
... we do not think that it is a good idea to rely on the same
mechanism
... because it does not build the same trust model
... can of course implement one's own auth protocol atop
Travis: I can't do MITM between
two tabs, a new security dimension
... when you initiate a new remote connection, is it handled by
a single UA only?
... or with a protocol that is shared between the two
Francois: we're protocol agnostic, so many implementations are possible
Travis: it is hard to give
security advice given the protocol is not specified
... Private mode browsing for the presenting context
... not sure about the status of the related TAG document
... how would you reference it?
Anssi: At one point, we were wondering whether the second screen browsing context should be required to be in private browsing mode.
Francois: Use case is that the controlling side is likely a private device, while the remote end may be a shared one, so implementations will most likely run the receiving browsing context in incognito-like mode.
Travis: these requirements would
need to be pushed down to respective specs
... there are cases where the user input would be required,
like access to a mic
Mark: true, there are also cases
where there could be interactivity
... we provide developers means to detect whether the content
is interactive or non-interactive
Ed: there are plenty of cases on
the platform that are about user interaction, e.g. alert(), and
expect the user to be able to interact
... we have an assumption you do not need to know if some sort
of input is possible
... examples:
... no keyboard
... no keypresses
... now you just listen to keypresses but never get one
... generic model for eventing
... this is a bigger question, not specific to the Presentation
API
Travis: often APIs are designed
to fail gracefully
... e.g. you just do not get Promise back
... sync behaviour that expects user input, e.g. alert() and
<input type=file> are examples of ones that have problems
with blocking
Mark: we may want the developer
to require an interactive or non-interactive second
screen
... "only show interactive screens"
... try things, if they don't work, try something else
... as the spec editor, I'm asking what to do if the private mode
browsing spec does not materialize
Ed: each browser behaves differently in terms of private browsing mode currently
Travis: I'll take an action to investigate the status of private mode spec
Mark: with this feedback, I can improve the spec in this regard
<mfoltzgoogle> ACTION: Mark to define a presenting browsing context that sets an empty initial state. Including storage and permissions. [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action01]
Travis: 3. Fingerprinting and
screen availability monitoring
... the web page want to know whether there's second screen
before requesting one
... that's what this is about
... TAG has a take on unsanctioned tracking i.e. not all
tracking is bad
... the user has the choice whether to be tracked or not, think
cookie usage
... sanctioned tracking, revealing IP address in WebRTC
ICECandidate negotiation
... in that example, WebRTC has a plan how to fix that without
exposing the IP
... maybe that becomes an option for users to configure their
systems to be in more control over their private data
... the availability of the second screen did not seem a huge
concern
... to draw another parallel, canPlayType already has more
exposure
... adds a bit more tracking data, but not too concerned with
availability
<scribe> ACTION: Mark to reach out to Permission API editors to see how it interacts with Availability [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action02]
Travis: 4. Dealing with legacy
content
... touched this one already a bit
... a device w/o an ability to take input has a dramatic effect
on how you view the allowpresentation attribute
... I wrote that we probably do not want to piggy-back on
allowfullscreen
... however if you are then essentially projecting to a
"readonly browsing context"
... maybe allowfullscreen is ok
... especially if the device that is presenting is just driving
the view, or a mirror copy
... it is like a fullscreen experience
... if it is a separate thing, it may not be appropriate
Mark: do UAs allow persistent permission for fullscreen
Travis: no
... a few APIs use the ask-forgiveness model
... there are certain threats for fullscreen, like spoofing the
OS UI
... all shipping browsers implement the Fullscreen API with ask
for forgiveness permission model
... was the Screen Capture API in WebRTC considered in
designing the Presentation API?
Anssi: it is addressing the mirroring use case, for the Presentation API mirroring is out of scope explicitly
[ skipping 5. Presenting the content of an <audio> or <video> element was already discussed ]
-> Privacy review of the Presentation API
Anssi: We asked the Privacy WG to
look at the Presentation API
... Anssi and Mark attended a call with them some time
ago
... Greg from the Privacy group tested their privacy
questionnaire against the Presentation API
... The results are in the email linked above
... Let's see what actions we should take from these
results
... Item 2, does the specification collect personally derived
data?
... This doesn't look to be actionable
... Item 3: Does the spec generate personally derived
data?
... We seem aligned in our consideration of the incognito
mode
Francois: Should we ask for clarification on the audio/video case?
Shih-Chiang: They seem to be thinking more of video sharing than the Presentation API
<scribe> ACTION: Francois to contact PING to better explain the Presentation API and that audio/video will be done in a separate specification [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action03]
-> Issue #162
Francois: When we created the
Second Screen WG we got early feedback from the Protocols and
Formats WG
... The group is now known as APA
... We changed a few things in the charter before the group
started
... They are working on a media accessibility user requirements
specification
... Some requirements relate to secondary screens and other
devices
... This isn't what the Presentation API is about, not the same
as presenting the video itself
Francois: I've written a response to share with them
[SD-1]
Anssi: Does Google's
implementation distinguish discovered devices when presenting
the list?
... Yes, we have several types, such as Cast devices, DIAL
devices
MarkF: What defines an accessible UA?
Francois: There are the UAAG
guidelines, which have requirements for accessibility-friendly
UAs
... But there aren't precise conformance claims
MarkF: I don't see a simple solution, but could discuss internally
<scribe> ACTION: Francois to amend the SD-1 review (drop the second section) and send back to the APA [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action04]
MarkF: Browser extensions can enable accessibility functions making use of markup in the page
Shih-Chiang: And similar in Firefox, but we have screen reader mobile support also
[SD-2]
Anssi: No actions here
[SD-3]
MarkF: This might relate to the video proposal, if there's a new control it should be made available to assistive features
[SD-4]
Francois: In the 1-UA case how
would we expose the browsing context to accessibility
services?
... The plug-in would need access to the DOM being rendered
Anssi: There could be a use case such as tabbing between elements
MarkF: I think this should be explicitly mentioned in the spec
Shih-Chiang: There should be a clear statement that the platform should allow the screen reader to work on the presenting page rather than the remote page
<scribe> ACTION: Francois to ask the APA for suggested clarifying text (SD-4) [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action05]
[SD-5]
<scribe> ACTION: Francois to initiate the review with the APA WG [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action06]
Anssi: We have done the self-review questionnaire
https://github.com/w3c/presentation-api/issues/45
MarkF: I think the three issues
to focus on are: the cross origin nature of the API, secure and
insecure context, and authenticating the user agents
... allowing the user to extend the trust model across user
agents
[Lunch break]
[Back from lunch; most of the group participants had lunch with Mike West, from the WebAppSec WG]
Mark: First question was: is
there any concern about the API being fundamentally
cross-origin?
... There was a feeling that this was acceptable
... There is no way to force the other side to give you
information that it would not otherwise.
... Regarding the secure context requirement, the feeling is
that the overall risk is relatively low: there is permission
involved, the API can do little harm to users.
... On fingerprinting issue for getAvailability, we have this
ability to perhaps limit that by rejecting availability
monitoring in non-secure contexts to prevent man-in-the-middle
script injection. Not sure this needs to be in the spec.
Anssi: I think there is an interest to require secure contexts for existing APIs that have shipped. Geolocation is an example. The idea could be to leave that open to implementers.
Francois: Not exactly the same thing. If geolocation was re-written, it would require secure contexts. We may choose not to follow the secure requirement because of some rationale, for sure.
Anssi: If you had secure context restriction in an implementation that is not required by the spec, do you still conform to the spec?
Francois: Interesting point. I think so, although it's true that you would not always follow exactly the same steps.
Mark: I think we can be a bit more specific in the spec.
Francois: In the spec, steps typically need to contain some prose that allows the user agent to reject a request for security reasons.
Mark: Right, same wording as for permissions.
[some discussion on user interaction]
Mark: The third item discussed was mixed content rules for the API. If you're in a non-secure context, we probably do not want to allow a presentation in a secure context. There will be no way for the user to know whether the controller is running in a non-secure or secure context.
Jean-Claude: Mixed content is a warning you would get when you mix secure and non-secure content. Or any secure resource access from a non-secure context.
Mark: Yes.
Francois: So there is a requirement to add in the spec to prevent an existing receiving presentation running in a secure context from receiving input from a controller running in a non secure context.
Mark: Correct.
Jean-Claude: it's strange to be able to join an HTTPS session from another unrelated one
Francois: That ties in the discussion we had this morning
<scribe> ACTION: mfoltzgoogle to propose wording to prevent mixing HTTP and HTTPS [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action07]
<schien> this is the current statement in mixed content:
<schien> A resource or request is optionally-blockable
<schien> when the risk of allowing its usage as mixed content is outweighed by the risk of breaking significant portions of the web.
Mark: Finally, we touched the question of behavior of nested frames again.
Anssi: Mike presented the idea of delegating the trust from the top-level context to the nested one, which could perhaps apply long-term.
Mark: Yes, and I have a couple of
names to get in touch with.
... This was more from a permissions perspective. In our
situation, the permission is really just for the presentation,
no long term permission.
Anssi: Related to issue #80, this got discussed as well.
Francois: Yes, follows the
discussion with Travis as well. My understanding is that there
is no real way to spec normative requirements on that topic
since we're not addressing protocols.
... More informative guidelines in the end.
Anssi: In a way, issue #80 is out of scope of the group. At least, that is out of our hands.
PROPOSED RESOLUTION: for issue #80, close issue as out of scope, and focus on informative guidelines in security section.
Mark: Yes, anything we can write today would depend on a not yet existing spec.
RESOLUTION: for issue #80, close issue as out of scope, and focus on informative guidelines in security section.
Francois: The security self-review questionnaire should be sent out to the WebAppSec WG instead of the Web Security IG.
Anssi: Indeed.
<scribe> ACTION: Francois to send self-review security evaluation to WebAppSec WG for review. [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action08]
Francois: i18n group has not raised strong concerns so far, but discussion this morning suggests that there may be i18n use cases
Takeshi: The concern is that most
of the Web content is rendered with the current locale. The
typical problem is that the backslash character may be
interpreted as Yen in Japanese, and as something else in Chinese
and Korean.
... These countries share the same character codes but display
different characters.
Anssi: The default fonts are different.
Francois: What is specific for this API is the controlling side may want to suggest the preferred locale for the receiving side, and then for message passing, strings may be interpreted differently.
Takeshi: We can have some rendering of characters different on both screens even if they are specified in UTF-8 / UTF-16
Jean-Claude: My suggestion is to describe the font that is used on the controlling side.
Francois: I'm surprised that the
same Unicode character could be represented by characters with
different meanings by different fonts.
... Encoding may indeed change the meaning of a character, but
once it's mapped as a Unicode character, it should not be
interpreted directly.
... Worth investigating and finding pointers.
Francois: in Presentation API we
cannot impose locale on the receiving site
... also, the message passing uses the string type
... when you send a string, you're using the JS string
type
... it does not allow you to use the full set of characters in
a JS string
... the strings received on the two ends should not be
re-interpreted, since no encoding is applied
... as Travis noted, WebIDL has a USVString type: http://heycam.github.io/webidl/#idl-USVString
... my question is: I do not understand how this affects the
messaging channel of the Presentation API
Jean-Claude: if the system is different (e.g. Windows, Mac), the glyphs you see will be different
<scribe> ACTION: Francois to investigate the possibility for a JS string to be rendered differently by different glyphs and locales [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action09]
Francois: does specifying the locale when starting a presentation have a use case?
Jean-Claude: should we mandate that the receiver will use the same locale?
Anssi: should this be SHOULD and not MUST?
Mark: support SHOULD
Jean-Claude: support SHOULD too
Francois: if you have examples of
things that break due to this issue
... you are encouraged to open an issue on GH: https://github.com/w3c/presentation-api/issues/new
<mfoltzgoogle> We have two entries. Controlling API: https://www.chromestatus.com/feature/6676265876586496
<mfoltzgoogle> Receiving API: https://www.chromestatus.com/feature/5414209298890752
Mark: Implementation of the receiving side. That ships by default. On the controlling browser, that's still under a flag.
Anssi: You can still do updates of the IDL?
Mark: Yes, we do not expect
developers to pick it up before it is stable.
... Support for Chromecast through the Cast protocol. For TVs,
it supports the DIAL protocol.
... That's the initial target.
... We also have tab mirroring, but that is not part of the
Presentation API
Anssi: done through the Screen Capture API?
Mark: Not familiar with that proposal.
Shih_Chiang: That's what we have in Firefox, related to getUserMedia.
Mark: The WebRTC team would be the right ones to ask.
<anssik> [ Screen Capture API http://w3c.github.io/mediacapture-screen-share/ ]
Mark: The receiving API is still
work in progress. A little bit behind the controlling
API.
... This will enable us to support a wide range of URLs. With
the current implementation, we only support URLs that map onto
Cast or DIAL applications.
Anssi: Does the meta-boxes "controller" and "receiving" map directly to the conformance classes in the spec?
Mark: Sort of. The two boxes have
the controlling and receiving interfaces implemented.
... That may be specific to our use case.
Yukinori: Currently, is the Cast protocol using the DIAL protocol?
Mark: Not exactly, we support
DIAL for discovery purposes; otherwise mDNS is normally
used.
... We don't use the DIAL endpoint to actually launch
applications.
Shih_Chiang: For Mozilla, we're trying to
complete the implementation of the receiving side.
... For the controlling side, we support basic functionality
such as 1:1 connections but no session resumption.
... We have the basic support for controlling and receiving
sides for the 2-UA mode.
... Same code on both ends.
<schien> Slides on discovery and control protocol used in Firefox
Shih_Chiang: For the protocol choices that
we have right now, see the slides
... The protocol side for discovery uses mDNS right now. For
application launch, we did not know which protocol to use, so
we created a very simple one.
... During the application launch, we have a very small control
protocol for each endpoint to provide offer/answer-like
semantics such as TCP endpoints
... For the message transportation, we support raw TCP
socket.
Shih_Chiang: and we plan to support
WebRTC, so there will be two communication channels.
... We're also developing the 1-UA mode for attached displays
and Miracast. First to be deployed on Firefox OS devices.
... And also for the 2-UA mode controlling side, the first
platform that we're going to launch will be Firefox for
Android.
... We'll limit that to a small audience: Firefox
add-on developers.
... That's our current plan.
[Short presentation of the Presentation API polyfill, to investigate different discovery/launch mechanisms, and implement algorithms in the spec to uncover spec bugs: https://github.com/mediascape/presentation-api-polyfill/ ]
[Implementation demo from Mark with Cast SDK on top of the Presentation API]
[Implementation demo from Louay of a multi-player game]
Louay: Happy to share code underlying the demos on a GitHub repo
Anssi: The webscreens organization seems to be a good fit for that
Louay: Great, I just need a gh-pages branch. Static files.
Anssi: My assumption is that you are familiar with the use cases.
... Are there any new use cases that are not covered?
Francois: Mentioning the scenario that would not require a communication channel
Jean-Claude: Already covered by the use cases, although the requirements are not the same.
Louay: Maybe the requirement is to expose more information so that you end up with more devices in the selection box
Anssi: I think you can take a
stab at drafting a "future" section.
... That's good homework.
... I suggest we don't deep-dive in this topic right now.
... Make an offline copy for the flight ;)
-> Web Platform tests repository
Anssi: An important part of the standards process.
Francois: we will need, as part of the W3C Process, an implementation report
... to advance from CR to PR
... what that means in practice is that we need to build a test suite
... so that implementations of the API are covered
... each normative statement in the spec should have test case(s)
... it should be from an external perspective to ensure interoperability, meaning implementations handle the normative statements in the same way
... we've done it differently over the years and are now trying to converge on web-platform-tests and testthewebforward.org
... there's a GH repo under the hood: https://github.com/w3c/web-platform-tests
... one folder per spec that is being tested, and each folder contains the test suite for the corresponding spec
... what I suggested is to merge with that initiative and use that framework to create our test suite
... as part of this framework there's a test runner
... e.g. idlharness.js to test the interfaces
... the harness lets you run unit tests and reftests
... a reftest allows you to compare the rendering with a reference image
... the last fallback is manual testing
... given we're in uncharted territory, testing requires two devices: a controller side and a receiver side
... in other words, there are two conformance classes in the spec
... most of the tests will rely on the existence of the other side, so we must be able to emulate that other side
... there's no readymade solution, we must come up with a solution for how to test that
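One way to emulate the other side is an in-process mock that stands in for the remote endpoint. The sketch below is purely illustrative; `MockConnection` is a hypothetical name, not part of the Presentation API or of web-platform-tests.

```javascript
// Hypothetical in-process emulation of the "other side" for testing.
// A pair of mock connections delivers messages to each other directly,
// so a controller-side test can run without a real second device.
class MockConnection {
  constructor() {
    this.onmessage = null;
    this.peer = null;
  }
  static pair() {
    const a = new MockConnection();
    const b = new MockConnection();
    a.peer = b;
    b.peer = a;
    return [a, b];
  }
  send(data) {
    // Deliver synchronously for test simplicity; a real channel is async.
    if (this.peer.onmessage) this.peer.onmessage({ data });
  }
}

// Usage: the controller-side test drives the emulated receiver.
const [controller, receiver] = MockConnection.pair();
const received = [];
receiver.onmessage = e => received.push(e.data);
controller.send("hello");
// received now holds the single message "hello"
```

A real harness would plug such a mock in at the protocol layer, which is close to the protocol-simulation approach Shih-Chiang describes below.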
Louay: similar issues with HbbTV testing
Francois: the test runner relies on user interaction, so testing the receiving side, where user interaction is not possible, causes challenges
... this suggests it is a good idea to start the testing activity early
... we'd need a testing lead, aka Test Facilitator
... it is a good role to have in a group
... it is good for that person not to be someone who already has a defined role in the WG, such as editor or chair
Louay: I can look into this and see if we could help with testing; I will come back with an answer next week
<scribe> ACTION: Louay to talk to internal QA dept to figure out whether we can commit to work on testing [recorded in http://www.w3.org/2015/10/29-webscreens-minutes.html#action10]
Anssi: thanks!
Francois: an example: the semantics of sending an empty message should be tested, as in WebRTC
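To illustrate the kind of edge case Francois mentions, a test could assert that an empty string survives a round trip unchanged. The `roundTrip` helper below is a hypothetical stand-in for a real message channel, not an API from the spec.

```javascript
// Sketch of the empty-message edge case: an empty string sent over the
// channel should arrive as an empty string, not be dropped or coerced
// to null. JSON serialization stands in for wire transport here.
function roundTrip(message) {
  return JSON.parse(JSON.stringify(message));
}

console.log(roundTrip("") === "");   // true: empty message preserved
console.log(roundTrip("") === null); // false: not coerced to null
```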
Anssi: asking whether browser implementers have some tests to contribute
Mark: happy to upstream relevant tests to w-p-t
... there are issues with platform-specific features, e.g. how to act on browser UI controls
Zhiqiang: we have some tests for
Crosswalk
... Crosswalk project has an implementation of an earlier
version of the spec
Shih-Chiang: we are trying to decouple the tests from the protocol layer
... so that we can use the protocol layer as the trigger point
... using protocol simulation
[showing an overview of the different steps]
Anssi: Starting from an Editor's
Draft, then a First Public Working Draft, which leads to
several iterations as Working Draft, until we believe we are
feature complete.
... At that stage, we switch to Candidate Recommendation.
... To move to Proposed Recommendation, we have to provide an
implementation report that we just talked about.
... The hard part is indeed creating the test suite and having
implementations conform to the spec.
... Roughly speaking, we should publish one more draft after TPAC.
... And being optimistic, we would switch to Candidate
Recommendation after that.
... My suggestion is to start the testing effort as soon as
possible as it may reveal issues with the spec.
... Once you have the implementation report, there is not much
to do to switch to Proposed Recommendation.
... We have been making good progress in this group.
... Any question or concern with that?
Francois: Note the need to have horizontal reviews to switch to Candidate Recommendation, which we already started.
Mark: I think feature-wise we're mostly done. Beyond the open issues, I'd like to make an editorial pass on the spec to reorder it.
... I'd like more concrete feedback from the Privacy group.
Otherwise I don't see major issues to moving forward.
Anssi: That was it. I'd like to thank everyone for participating.
... Thanks for the time at TPAC. Thanks Takeshi for the dinner
organization! It will be long remembered :)
... I'm confident that we can get the spec to the final stage in the near future.
[Meeting adjourned]