See also: IRC log
Mark_Vickers: [presenting Web and
TV IG charter, scope, what the group does, liaisons with
external organizations].
... The group analyzes gaps but does not develop specs.
... 3 primary ways we can influence the standardisation
process:
Mark_Vickers: 1) bug reports on
existing specs
... 2) new spec done in a WG
... 3) draft a spec in a CG, spun out of the IG, then
hopefully transition it to a WG
... I will now review the history of task forces and community
groups that the Web and TV IG has run so far.
... Standardisation is a long road. Some activities started in
2009 are still on-going.
... 5 Task Forces concluded their work, 1 is on hold (Web Media
Profile TF), 1 is active (GGIE), and there is a new proposal for
a Cloud Browser API Task Force (on the agenda for today).
... The Media Pipeline TF was the first task force. Reviewed a
lot of details that the media business had to work with.
... The task force submitted bug reports against the HTML5
spec. The IG contributed to the media architecture of
HTML5.
... Two areas were not addressed by HTML5. The IG drafted
requirements for them: Media Source Extensions (adaptive
bitrate), and Encrypted Media Extensions.
... Both activities are still on-going, although there are
implementations already deployed.
... The TF highlighted a lot of details on how one extracts
metadata from media streams.
... This led to the creation of the Media Resource In-band
Tracks CG.
... The resulting spec is now referenced by the HTML5 spec.
... And so, the main focus for the first few years was on
HTML5.
... We'll talk about MSE and EME today
... Also, the HTML WG F2F on Thursday and Friday is focused on
these specs.
... The Web Media Profile Task Force (on hold) is about
creating a profile of HTML5 specs. Which version of CSS do you
include? Which specs? That's of interest for standards groups
that need to reference such specs.
... We started this work around 2010 which led to a draft Web
Media Profile.
... However, at that time, most of the specs were still being
updated at a rapid pace, so it was hard to maintain the
list.
... In the end, we put this on hold.
... The question for today is: is there a need to re-open
this?
... It might be the right time now that most specs are
stable.
Chris_Needham: This is something
that we see particularly with CSS. The OIPF spec references CSS
specs that were still in draft form. Most have stabilized
even though they may not be at the Candidate Recommendation
stage.
... Inconsistencies arose.
... It would be good to do that work now.
Mark_Vickers: That would be a
good candidate topic for a CG. First, it's a spec. Second, we
probably want people from external orgs doing such work to join
and participate.
... Moving ahead to the Home Network Task Force, which wrote
requirements to enable discovery and control of devices on the
LAN.
... These requirements were sent to the Devices APIs WG.
... This led to the Network Service Discovery API spec, which
stalled after review by the Privacy IG (PING).
... No real solutions to enable discovery.
... On hold for a while.
Glenn_Deen: How does this relate to stuff that got into WebRTC?
Mark_Vickers: These groups ran
independently.
... The solution that seems to work is to let the user agent do
the discovery without exposing the list of discovered services
to the application.
... That's the direction we followed in DLNA. Also see Bonjour.
Part of the browser itself.
... The Web site code does not have access to discovery. That's
safe.
... Moving on to the Testing Task Force.
... In the TV world, devices often need to be certified before
they can hit the market.
... Not something that happens in the Web world, where browsers
just ship new versions.
... The group took input from several external groups (OIPF,
ATSC, DLNA, etc.) and reviewed testing use cases and so on to
create a testing plan.
... Several of these outputs were implemented at W3C in the
testing activity.
... The overall goal of improving Web platform testing
requires a lot of investment.
Yosuke: Web Platform Tests are on
GitHub. However, unfortunately, these tests do not run on TV
sets because of limitations.
... We'll have a presentation of a test runner on TV today
Mark_Vickers: Exactly what I was
about to say.
... Moving on to the Timed Text Task Force.
... Issue is that there are two different formats in use in the
industry, both of them originating from the W3C: TTML and
WebVTT.
... We looked into them, and realized that these two formats
needed to work together.
... We gathered use cases from the industry and from IG
members, wondering what had to happen with the 2 formats.
... Two needs: WebVTT and TTML need to be done in the same
group. And secondly, there should be a mapping doc between TTML
and WebVTT.
... These things happened. There is an Editor's Draft of the
Mapping between TTML and WebVTT.
... Nigel will give an update on the Timed Text TF update and
give demos of new work.
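As a toy illustration of the kind of mapping the Editor's Draft covers, a single simple TTML cue can be rewritten as a WebVTT cue block. This is a sketch only: the function name is invented, it assumes times are already in "HH:MM:SS.mmm" clock form, and the real mapping also covers styling, regions, and TTML's other timing expressions.

```javascript
// Illustrative sketch only: convert one simple TTML <p> cue to a WebVTT cue
// block. Assumes begin/end are already "HH:MM:SS.mmm" clock values; the real
// TTML<->WebVTT mapping also covers styling, regions and timing expressions.
function ttmlParagraphToVttCue(ttml) {
  const m = ttml.match(/<p\s+begin="([^"]+)"\s+end="([^"]+)"\s*>([\s\S]*?)<\/p>/);
  if (!m) throw new Error("unsupported TTML fragment");
  const [, begin, end, body] = m;
  // WebVTT separates start and end times with "-->"; a TTML <br/> becomes a
  // literal newline inside the cue payload.
  return `${begin} --> ${end}\n${body.replace(/<br\s*\/?>/g, "\n").trim()}`;
}
```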
Mark_Vickers: The Media APIs Task
Force looked at recording and downloading media, discovery and
control of device capabilities, exposing TV metadata,
etc.
... All in one Task Force, which did a cross-analysis of use
cases and requirements.
... This triggered the TV Control API CG, which completed a
first version of the TV Control API specification that will be
presented today.
... Currently, we have one Task Force running: the Glass to
Glass Internet Ecosystem (GGIE). It will be presented today.
... One piece of glass captures content; many of these are
covered by specific standards or internal company
architectures. The content then passes through different nodes
on the path to the other glass (the media client), all of these
nodes using different sets of standards.
... The Task Force looks at the possibility of preserving data
across these nodes, from the first glass to the last one.
... This is not only about driving W3C specs but also about
liaising with external groups, since W3C will not write all the
specs used in the chain.
... There are a couple of on-going CGs that are closely related
to the Web and TV IG. The IG did not create them as such, but
keeps an eye on them. They will be presented today.
... Looking at where we are now that HTML5 is out, lots of
things have changed, that's incredible. Worldwide standards
support HTML5 (ATSC 3.0, DLNA VidiPath, HbbTV 2.0, MSIP Smart
TV 2.0 in Korea, IPTV Forum Japan Hybridcast). I may have
missed a couple.
... In addition, there are TV platforms that support HTML5 or
plan to: Android TV, Firefox OS, Opera TV, Tizen, WebOS.
... Several of them have HTML as the primary development
platform.
... All these standards and platforms link to the discussion on
establishing a profile of HTML for media.
... Hardware chips also support HTML5 (AMD, ARM, Broadcom,
Intel, Marvell, MStar, NXP, Sigma, ST)
... It all makes sense that chip vendors support technologies
that platforms and standards want to use. It's still quite
interesting to see this happen.
... Finally, content protection systems support EME: Netflix and
YouTube use HTML5 playback including EME. Most systems support
EME or plan to support EME (Adobe Access, Alticast XCAS, Apple
FairPlay, Google Widevine, Microsoft PlayReady, etc.)
... We have an update on the TV standards today.
... What remains to be done? A lot but that's up to us to
decide.
... We didn't get DataCues in HTML5 for instance.
... I'd like to discuss this today and build the list with
you.
Sean: goal is to enable web apps
to interact with the TV platform to present TV programs
... we are defining APIs to manage native TV modules
... we looked at existing specs as a starting point
... EU webinos, Mozilla APIs, HbbTV, OIPF and
more
... we worked on a spec and now the specification
draft is ready for publication
... also ATSC showed interest in the API and we
will follow up on that.
... we had a one-month public review that just ended
... after TPAC we would like to have an official
release, by the end of November.
... Tuner, source, channel and EPG management are
handled via new APIs and events
... we have methods for channel scanning
... a new interface (TVMediaStream) was defined,
extending MediaStream, to be able to stream content coming
from the TV using a <video>
... also defined a new trigger cue
... also defined API for emergency data
alerts
... about recording, new APIs to schedule and
manage recordings
... we discussed with the Media Stream WG and they
suggested we extend the mediaStream API.
... Parental control also important, so we defined APIs for that at the channel level, recording level and system level.
... new APIs were
defined for handling conditional access modules
... for now the work is part of a CG, we would like
to have a WG created to move this spec to recommendation
... we also want to work on a new version of the
spec,
... so we are looking for more requirements
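To make the shape of the draft API concrete, a consuming app might do something like the following. This is a loose sketch: method and property names such as `getChannels`, `setCurrentChannel`, and `stream` follow the CG draft only approximately and may differ from the published text.

```javascript
// Loose sketch of using the TV Control API draft: tune a source to a channel
// and attach the resulting TVMediaStream (which extends MediaStream) to a
// <video> element. Interface names are approximations of the CG draft.
async function playChannel(tvSource, channelNumber, videoElement) {
  const channels = await tvSource.getChannels();
  const channel = channels.find(c => c.number === channelNumber);
  if (!channel) throw new Error(`channel ${channelNumber} not found`);
  await tvSource.setCurrentChannel(channel);
  // The tuner output is a TVMediaStream, so it attaches like any MediaStream.
  videoElement.srcObject = tvSource.stream;
  return channel;
}
```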
Mark Vickers: I have a concern: how do you
make this part of the one Web
... if there is a dependency
on TV-specific modules/tech
Sean: we tried to abstract as
much as possible and define high level features
... there is no real dependency on hardware
Mark Vickers: OK, I will review the spec and
send comments, and maybe try to have this more
abstracted too
... for example for CAS, why can't we use EME?
Sean: sure we can use EME, we only needed an interface to handle the CA module
Francois: have you considered splitting the spec between what can run on the web and what is more TV specific?
sean: no we didn't think about it, but it is a good idea.
Francois: about the transition to a WG, where should we draft the charter? Maybe in the CG? Maybe it is also a good time to discuss the security model and the issue of web vs platform API?
giuseppe: is there a security model for this API?
sean: for now we don't have anything special
giuseppe: so every application can access the tuner?
sean: so far yes, we hope we can reuse effort in other areas around security model
<Paul_Higgs> During the CG discussions, the "issue" of security and access to resources came up - it was determined to be an implementation-related activity, not something related to the API
giri: have you considered the existing issue about delays in triggering events in HTML5, due to the way these are defined
(Giri, feel free to give me a longer version of your question offline, for later)
<Paul_Higgs> A WG should look at issue beyond just tuner control, i.e. what does TV in 5 years look like?
giri: maybe this could be
handled in the WG, since many people are facing this
issue
... so it could be handled during the charter discussion
<nigel_> giri was pointing out that the TextTrackCue timing model allows for up to 250ms delay in propagation of cues relative to their timing, and that HbbTV has pointed this out to the HTML WG.
<Paul_Higgs> There are many "real-time" aspects of broadcast/television delivery that need to be taken care of, especially synchronization and alignment of events and media!
mark: we should probably make
sure all groups that have this issue talk together, maybe in
the IG, so we can consolidate our input and ask for changes in
HTML5 and related specs
... you need to be aware that sometimes the first reaction of
WGs is rejection
... but with time, if you work with the WGs and
make your case, changes happen
sangwhan: how are multiple apps using the same tuner handled by the spec?
<Paul_Higgs> spec provides
events for addition/removal of tuners and
... allocation of tuner -> channel -> TVMediaSource is in the spec
... yes, I am on the webex, but hard to hear the audience
... mailing list for issues....
sean: I think this is related to the permission model, we have not taken this into account
<Paul_Higgs> I'm happy to
answer in detail there!
... see the mailing list noted in the presentation
xxx: what happens if there is more than one browser instance?
<Paul_Higgs> resources are finite, browser instances are "infinite"
... it's up to the implementation to determine how to allocate a tuner to a
browser - there is no request/release model
mark: I have a suggestion for the
CG
... today there are apps and devices that are streaming linear TV over the internet (e.g.
Sling)
<Paul_Higgs> yes, a WebApp
can construct a hybrid (IP/broadcast) channel list
... streaming linear TV over internet === MSE/DASH.js
mark: so if you can think about
it and build an abstraction that takes both cases (ip and
tuner) into account
... maybe you end up with something more "web
like"
jean-pierre: looking at the slides,
there was a mention of text tracks
... is that related to the sourcing of in-band tracks spec
mentioned before?
<Paul_Higgs> TextTracks are
added into TVMediaStream
... since MediaStream only does audio and video tracks
... Hard to hear the questions from the floor
sean: no they are not related
Chris_Needham: I would like to
encourage people to be more active in the CG list
... and bring their comments there
<Paul_Higgs> Requirements for an update to the TV Control API specification can be made at any time
Yosuke: we break for coffee, back 10:40
Glenn Deen opened session, provided history
GlennD: GGIE was created to look
at digital video on the Web.
Capture-edit-store-package-distribute-find-watch
... Need for bandwidth to address SD->HD; HD->4K;
4K->8K; phones now capturing 4K; 1::many live video
streaming (e.g. Periscope) driving bandwidth
... GGIE Overview - the Smart Edge of Creation (Capture, Store,
Assets) feeds the Core (Discovery, Distribution, Ingest), which
moves the content to the Consumption edge - clients
... Related efforts: SMPTE - open binding of IDs to media;
ATSC 3.0 - ACR watermarking; IETF - NetVC working group
(royalty-free codec); new alliances including the Streaming
Video Alliance (looking at near-term needs) and the Alliance
for Open Media (codec)
... Work in 2015 focused on Use Cases around 5 areas - User
Content Discovery/search/EPG/Media Library; Viewing; Content ID
and Measurement; Network Location and Access; Content
Capture
... GGIE Boundaries: focused on discussion, not implementation,
based on W3C IG rules. IP-safe zone to foster open
discussion
... Requirements from Use Cases #1 - Need persistent identifiers
to enable intelligent management at all stages; assigned at
capture or distribution; associated with content using
metadata, watermark, or fingerprint. Systems should support IDs
from different authorities and schemes. e.g. EIDR, Ad-ID,
ISAN...
GlennD: Requirements from Use
Cases #2: Identifying content for search, EPG, applications is
different than identifying it for streaming, decoding, caching.
Search is about work-level attributes, not bitrates, etc.
... Requirements for Use Cases #4: Content ID for Delivery -
ideally integrated with the network. A single "viewing" of
content may involve multiple linked streams going to different
devices, delivering different codecs and data, e.g. HEVC to a
screen, a different audio codec not in the first container. One
suggestion is a 128-bit identifier carried in the IPv6 address
slot - a Content Address
... Requirement #5 - need to translate between Content URIs
and Content Addresses, the fundamental linkage between finding
content and accessing content. Can be 1::many. Can be used by
the network to locate the optimal cache/source for requested
content. Bi-directional resolution. Content Addresses to
Content URIs is a many::many relationship.
... Requirement #6 - streamed content delivery can be viewed
as a composed flow combining many parts, not a simple file copy
from source to player. Playback can be composed of one or more
component streams, from one or many addresses, to one or many
devices, over one or more networks.
... Back to overview diagram of GGIE. Create the content,
obtain Content ID. Publish it and Core can assign Content
addresses. Client discovers and accesses content from optimal
caches and can provide feedback on performance, etc.
... GGIE in 2016 - want to explore Content Identifier API -
possible new CG. Enable applications to retrieve Content ID via
standard common API using metadata, watermark, fingerprint.
Example: 2 devices in the home watching the same content but
timeshifted. When the second goes to retrieve it, the first can
inform it that it has a copy in progress and send it directly,
rather than pulling a second copy down from external sources.
... 2016 continued - many unexplored topics. Viewer and creator
identity; Privacy issues and mechanisms; metadata and metadata
workflows; others
... GGIE outside W3C - GGIE is holding an informal BOF at IETF 94
in Yokohama to introduce GGIE to the IETF. Issues to cover at
IETF: Content URIs and Content Addresses. Also looking to engage
with other groups - SMPTE, others.
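Requirement #5's bidirectional, many-to-many resolution between Content URIs and Content Addresses could be sketched in application code like this. Illustrative only: GGIE does not define a concrete API, so the class name, method names, and identifier formats are all invented for the example.

```javascript
// Minimal sketch of bidirectional Content URI <-> Content Address resolution
// (Requirement #5). Many::many in both directions. All names are invented;
// GGIE describes the requirement, not an API.
class ContentResolver {
  constructor() {
    this.uriToAddrs = new Map(); // Content URI -> Set of Content Addresses
    this.addrToUris = new Map(); // Content Address -> Set of Content URIs
  }
  link(uri, addr) {
    if (!this.uriToAddrs.has(uri)) this.uriToAddrs.set(uri, new Set());
    if (!this.addrToUris.has(addr)) this.addrToUris.set(addr, new Set());
    this.uriToAddrs.get(uri).add(addr);
    this.addrToUris.get(addr).add(uri);
  }
  addressesFor(uri) { return [...(this.uriToAddrs.get(uri) || [])]; }
  urisFor(addr) { return [...(this.addrToUris.get(addr) || [])]; }
}
```

A network could use `addressesFor` to pick the optimal cache among several addresses holding the same identified content.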
JP Abello: Looks like you are looking to enable media to be assembled from multiple sources?
GlennD: Yes - e.g. looking to create a new content asset assembled from multiple users' camera streams/assets, e.g. Periscope. Could come from tens or hundreds of cameras, with many users publishing their own composite asset.
GlennD (in response to JP Abello's question): Security is a major issue
Mark Vickers: Why do you need to reengineer what is already being done by CDNs?
GlennD: CDNs work well today but each CDN today maintains their own caches with their own Identifiers. When searching today each of these show up on local caches as unique copies. GGIE envisions each of these copies having the same ID allowing discovery of the optimal source.
<ldaigle> @sangwhan — you should ask that question!
Mark Vickers: UV tried this but abandoned since different "copies" may have different formats, added unique content (DVDs), etc.
<ldaigle> @sangwhan — I think it is pretty clear! The answer is, I think, complex. Good for discussion :-)
GlennD: This can be handled (over the long term): you can have different storefronts that deliver content from common delivery services, cached locally. Similar to retail direct-mail delivery from common warehouses, but based on orders taken from different retailers.
markw: Can you discuss the privacy issue of identifying consumers with the content they consume?
GlennD: Diving too deeply into implementation. IPv6 has some mechanisms that might be applied to help maintain privacy.
[End GGIE Discussion]
Mark gives an update on the Sourcing In-band Tracks
spec
... the spec defines a mapping between various
containers and web apis (html, mse)
... status of the work is: no additional work
planned
... last update April 2015,
... to add ISOBMFF 608/708
... there was some support for DVB added, but as there is no
expert in the group, that work was not completed
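The net effect of the spec for a web app is that container tracks surface as TextTrack objects on the media element; a page can watch for them roughly like this. A sketch only: filtering on `kind === "metadata"` and the callback shape are illustrative choices, not requirements of the spec.

```javascript
// Sketch: per the in-band tracks mapping, the UA exposes container tracks
// (e.g. from ISOBMFF or MPEG-2 TS) as TextTracks and fires 'addtrack' on
// video.textTracks. Filtering on kind === "metadata" is illustrative.
function watchInbandMetadataTracks(textTrackList, onTrack) {
  textTrackList.addEventListener("addtrack", event => {
    const track = event.track;
    if (track.kind === "metadata") {
      track.mode = "hidden"; // deliver cues to script without rendering them
      onTrack(track);
    }
  });
}
```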
nigel: the issue with MPEG-2 TS,
as used by DVB, is that there are regional
variations/profiles
... which are not signalled in the container
... that has caused some issues in the
discussion
... on how to address the problem with the current
approach
Mark: if anyone is interested in
working more on this spec, please contact Bob Lund or the Media
Resource In-band Tracks CG
... so far there is no off-the-shelf browser supporting
this
... the document is not at a final publication stage; we should
work to make that happen
[lunch time now, we are back at 13:00]
Giuseppe: [HbbTV Overview]
Nigel: Move TTML to W3C spec!
Giuseppe: Right.
... [HbbTV Updates]
... HbbTV 2.0 is a starting point for these two countries.
... [HbbTV Issues and Challenges]
francois: Is there any ongoing discussion, such as about a 3.0?
... Something this IG can help with?
Giuseppe: HbbTV is gathering issues about HbbTV and some of them are likely to be useful for this IG
kinjim: update about hybridcast
... hybridcast based on html5
... version 2.0 spec published in sept 2014
... 3M receivers deployed
... with v1.0 support
... NHK and commercial broadcasters are offering services
... no major update since last tpac
... major changes around mpeg-dash
... defined some operational guidelines
... the spec relies on MSE as a way to implement a DASH player
... reusing the work done by dash.js
Mark: will give an update on DLNA
Vidipath
... world wide spec
... but first deployment in the US
... you can use a certified DLNA VidiPath device
to run operators' services
... web specs used: HTML5, EME, MSE, WebCrypto
etc
... issues: profile of web specs
... other issue is communication with local
devices over TLS
... as local devices do not have FQDNs
... other issue is certification of W3C tech
... test suite is thin
... and it is not easy to fill that gap; a lot of
effort and money would be needed
... finally, most things in the apps are common to
any app, but we had to add some requirements to the user agent
around UPnP discovery
... energy management
-> RDK slides
mark: now giving an update on RDK
(but not representing RDK)
... RDK is not a spec but a software bundle
... available from the RDK alliance
... include the usual HTML tech (HTML5, WebCrypto,
MSE, etc.)
... there is an architecture to support multiple
DRMs
... reusing Microsoft's CDMi architecture
... one issue is around performance of HTML5 on
embedded platforms
Giridhar_Mandyam: I'm from
Qualcomm and also reporting for the ATSC liaison
... Background: ATSC 1.0 pretty widely deployed and mature
technology.
... ATSC 2.0 was adding interactive technology, leveraging OIPF
technologies
... ATSC 3.0 was started a few years ago without
backwards-compatibility requirements.
... As a result, a new physical layer was defined.
... More robust.
... We also wanted to leverage the audio/video codec
evolution
... and decided to use an IP-based transport layer, with two
options: ROUTE and MPEG Multimedia Transport (MMT)
... We're moving beyond the DAR (Declarative App. Environment)
as we want to leverage more Web technologies.
... Why IP transport? Broadcast is one of the many different
ways to send content.
... We wanted to leverage the benefits of the Internet.
... Different sorts of content. Broadcast and broadband as peer
delivery mechanisms gives you flexibility to maintain or
improve the user experience.
... One of the interests is ad-insertion.
... More work done by the client device, instead of done in
servers.
... [Reviewing ATSC 3.0 organization, see slide], S31, S32,
S33, S34 (on App and presentations that I chair, lots of W3C
technologies referenced there, with S34-5 focused on
accessibility), S35, and S36 about security and potentially in
scope for W3C as well depending on the topic of course.
... It was decided very early to support IP with two modes of
operation. ROUTE leverages DASH.
... MMT uses MPEG-defined MPU
... Non real-time content, which ATSC defines as interactive
apps, targeted ads that are up for local caching, etc. use
ROUTE.
... Both modes use hybrid delivery. Use of DASH gives us some
compatibility with W3C technologies.
... TTML and IMSC1 profiles are used for captions and
subtitles.
... We're looking at extensions for the TTWG, including 3D
disparity.
... Just started our interaction with the group
... The runtime environment includes technology from HbbTV 2.0,
OIPF, HTML5 and ATSC 2.0. We are still based on OIPF DAE, but
with additional Web technologies.
... Some of the additions to the OIPF profile are geolocation,
MSE, EME, and touch events.
... These are roughly mature specs in W3C.
... We're also considering referencing the TV Control API and
the liaison letter sent by ATSC 3.0 to this group was written
with that in mind.
Giuseppe: Extensions are only Web specs.
Giridhar_Mandyam: Mostly, but we may have to define new ones.
Giuseppe: Such as?
Giridhar_Mandyam: Personalisation
could be an example.
... Going forward: we want to continue the collaboration with
the Timed Text WG. We would also like regular communication
between the Web and TV IG and ATSC. One idea could be to form a
task force.
... If the TV tuner API transitions to a WG, this would be a
good way to provide feedback.
... We do not want to wait too long, expectation is to achieve
publication of ATSC 3.0 in 2016.
Mark_Vickers: I welcome this idea
to liaise with an external org. I think we're all set up to do
so. I'm willing to review the gaps that you've identified. Some
of them are surely gaps.
... We found a way to move out of the one app model in DLNA. It
was not easy but we managed to do it. I would be interested to
collaborate on this.
Nigel: What kind of status does ATSC 3.0 need to have to be published in 2016?
Giridhar_Mandyam: We want some stability in the specification to drive the certification process.
Nigel: What if there's something new that needs to be specified?
Giuseppe: When you publish the spec, will it just sit there waiting for devices to implement it?
Giridhar_Mandyam: Not really.
There is no particular requirement for implementations to be
in existence before the spec is published.
... There would be a leap of faith if we want to reference the
TV Control API.
... because it is not a W3C Recommendation.
Yosuke: You proposed to create a task force. Do you have concrete topics to address?
Giridhar_Mandyam: We would want
some more direct way to collaborate than liaison letters. The
IG feels better than a CG to track membership thanks to staff
support.
... We think that there would be some mutual benefit.
Giuseppe: Members from ATSC would
be able to participate directly in the IG?
... The difficulty we face each time is ensuring people from
the external org actually participate in the Web and TV
IG.
... We're certainly willing to liaise with them as well.
Mark_Vickers: I also wonder if reviewing specs counts as technical specification work, which the IG cannot do.
Bill: This group could perhaps
recommend work on the gaps that ATSC identified.
... I wonder if it would make sense to generate a liaison
letter to ATSC 3.0 asking whether requirements documents
that probably exist at ATSC could be shared with W3C, to
identify gaps.
Mark_Vickers: Right. We have two
issues, one is a membership issue. The second is around
requirements and possibility to have discussions among
groups.
... The requirements are indeed very good.
Giuseppe: Liaison letters are
slow. If you really need to discuss issues, then you need to be
involved.
... The expertise is usually not within the IG itself, but the
IG is a good central point to redirect discussions to the right
working group.
[Discussion on coordination mechanisms]
Mark_Vickers: Let's say we create
a task force. We can't have non-members in the call, but
external people could track the emails on the
mailing-list.
... People with W3C membership could be on the calls too.
Bill: It might not hurt to ask
the question to ATSC: what do we need to do to have a more open
discussion?
... One of the reasons for the existence of this group is to
harmonize the TV stuff done in different parts of the
world.
... We have Hybridcast, ATSC, HbbTV. If different people adopt
different APIs and go in different directions, then we don't
achieve any kind of harmonization.
... I don't know how to solve that.
... I wonder if there should be an attempt for a tighter
coordination between the above mentioned orgs and W3C.
... The danger may be to become a broadcasting-only group,
perhaps.
Jeff: I agree with Bill. It's a great idea. We would be happy to contribute to such a discussion. We would definitely like to hear from ATSC what additional requirements you have so that we can spawn the right task forces.
Tatsuya_Igarashi: I missed the morning session on the TV Control API. ATSC has some concern about the status of the TV Control API and willing for it to become a recommendation. What is the expectation?
Yosuke: This morning's discussion
shows some concerns about the abstraction and security model,
but there is general interest to move forward.
... Francois and myself will draft an initial version of a
possible WG charter, which we'd like to discuss with members
here.
Tatsuya_Igarashi: The summary is thus that there are concerns, but that there is consensus to transition to a WG?
Yosuke: A bit early to talk about "consensus", but we'll kick off the work on the charter.
Mark_Vickers: It partly depends
on what the goal is. If you need a spec that has a W3C stamp,
or if, as DLNA, you want a model that sticks to the Web
runtime, this is different.
... Your endpoint is not only to get a W3C Recommendation in
that case.
... If people were willing to adopt the spec as-is, then we
could rush to create a WG, no problem. I cannot represent other
people, but I see that there may be concerns about the runtime
and security issues in the spec.
... I think the CG could review and adapt the spec.
... That's my recommendation, I'm fine if the CG transitions to
a WG.
Tatsuya_Igarashi: The second thing is what we're targeting as well: having most stakeholders agree with the contents of the spec and implement or adapt it.
Yosuke: We'll continue discussing this topic in side discussions at TPAC. Feel free to contact Francois and myself.
Mark_Watson: Working on MSE/EME
for several years now. Hoping to fix all the bugs by the end of
the year. Not impossible although we may be delayed a
bit.
... Then hoping to publish Rec.
... Open issues are on GitHub.
... Good news is that there are multiple implementations.
Firefox, Safari, Internet Explorer (and Edge).
... There are a few glitches here and there from an
interoperability perspective, but being resolved and it's all
going in the right direction.
... Some advanced topics around what combinations are supported
by user agents can be very complex.
... A big issue was around security. This type of session now
persists the information. Some robustness added.
... I raised a discussion with the TAG on that.
... The things that remain are smaller items that we need to
tidy up.
... Requirements around identifiers. Per-origin, clear for
users, etc.
... That should drive consistency across implementations.
... To what extent we mandate DRMs to all work in the same
way is an example of a tension that we need to accommodate.
... A topic that frequently arises is how I discover [scribe
missed that]
... The DRM can provide rules with the key and EME can
report back whether a key can be used. Media key status is part
of the spec.
... If you have different policies for different parts of
content, you really need to have different keys. There is some
support around downscaling.
... This approach is not going to scale well. 4K solutions do
not support this notion of a single key with
resolution-specific handling.
... As for MSE, this is mostly coming along nicely, no big
issue to report here, I think.
Kazuhiro_Hoya: [Presenting
potential implementation issues with MSE/EME]
... Background: as presented during a previous session,
Hybridcast 2.0 supports MPEG-DASH.
... We have implemented a DASH player, similar to DASH.js
but different from it.
-> Slides on potential implementation issues with MSE/EME
Kazuhiro_Hoya: Some requirements.
First, we should be near the broadcasting services. It should
be as good as regular TV.
... Pretty high-level requirement.
... Some content may come from broadcast or from
streaming.
... UHD capability and VoD services are important for us.
... Our DASH player is almost the same thing as DASH.js but
aimed at running on low-power devices.
... Most TV sets have only one video decoding engine, so we
have to use only one <video> tag.
... Seamless period transition for ad-insertion.
... I will present 2 use cases: quasi-seamless switching from
TV to streaming. One content from air being shown and moving to
streaming.
... The other is AD playback control, which is of interest to
advertisers who want to interact with the user.
... For the first use case, the broadcast controls when the UI
needs to switch from broadcasting to streaming, using the same
timeline.
... The other demo is around disabling trick play during an ad,
so that you cannot skip it.
... Most video-on-demand services support this function
[Demos running on the TV]
[Note demos are visible at TPAC in Hall-B]
Satoshi_Nishimura: [Presenting
issues while implementing the demo]
... First issue: buffer-related. There is no API to know how much space is left in a SourceBuffer. Using the buffered attribute is not enough to estimate free space.
... It seems the proper way to handle this is to use readyState and the QuotaExceededError exception raised by appendBuffer().
... But this is not widely implemented.
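The recovery path described above can be sketched as follows; the eviction helper and the margin value are illustrative, not part of the spec:

```javascript
// When appendBuffer() throws QuotaExceededError, evict already-played media
// with SourceBuffer.remove(). `buffered` is modeled as [start, end] pairs
// (seconds); keepBehind is an illustrative safety margin behind the playhead.
function evictionRange(buffered, currentTime, keepBehind = 10) {
  if (buffered.length === 0) return null;
  const start = buffered[0][0];
  const end = currentTime - keepBehind;
  return end > start ? [start, end] : null;
}

// Browser wiring (sketch; toPairs converts a TimeRanges object to pairs):
// try {
//   sourceBuffer.appendBuffer(segment);
// } catch (e) {
//   if (e.name === "QuotaExceededError") {
//     const r = evictionRange(toPairs(sourceBuffer.buffered), video.currentTime);
//     if (r) sourceBuffer.remove(r[0], r[1]);   // then retry the append
//   }
// }
```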
... Second issue: presenting text and graphics in sync with video at low processing load. The interval of the timeupdate event depends on the implementation, so this latency causes trouble.
... Third issue: seamless ad-insertion with a single video tag. Since the program and the ads are encoded from different sources, gaps tend to be created in the SourceBuffer.
... Seamless transition can be achieved by overlapping media
data.
... However, different implementations have different
behavior.
... We look forward to solutions to these issues.
Giuseppe: Did you send these issues to the MSE mailing-list?
Mark_Watson: If people want reaction from the MSE group, they need to file issues on GitHub. Presentations are not enough.
Giuseppe: Do these issues ring a bell?
Mark_Watson: A couple of them have been discussed, but the buffer issue has not been discussed I think. You may approximate the size.
Mark_Watson: The Streams API fills the buffer directly, so that's harder to achieve.
... Time-accurate synchronization is a missing feature, I guess. Test cases would help solve the implementation issues.
Tatsuya_Igarashi: Do you think that specs already address these issues, or do you think that updates are needed?
Kazuhiro_Hoya: Two issues are caused by non-compliant implementations, so the solution is indeed to make the browsers compliant with the specification.
Tatsuya_Igarashi: So testing is the way to address these issues.
Rus(Comcast): Comcast and Adobe are putting together a few cases that are not covered by MSE right now. It would be useful if you could join that discussion.
<rus> https://www.w3.org/wiki/HTML/Media_Task_Force/MSE_Ad_Insertion_Use_Cases (link to the ad-insertion use cases doc)
Bill: Beyond the specifics of buffering, the pragmatic issues that arise when you stress a spec against low-end devices are pretty interesting.
Mark_Watson: [technical details
on MSE and ArrayBuffer]. There is a proposal to integrate MSE
with the Streams API
... That's something that is definitely on the table to be
worked on.
<nigel> ebuttbot playback bbc_news_2015-09-09-1400.txt
<giuseppe> scribe: giuseppe
nigel: update from TTWG
... WebVTT work ongoing
... new editors
... spec is FPWD
... there is work on mapping TTML and WebVTT
... currently an editor draft
... IMSC, a profile of TTML is at candidate
recommendation stage
... TTML 2 is at FPWD stage
... there are many new features,
... e.g. it will support all types of scripts for
all languages in the world.
... another feature is conditional display
(display different things in different situations)
... liaisons: ATSC, MPEG, DVB, Unicode, SMPTE
... work also closely with EBU via common
members
... DECE also a liaison contact in the past
... Main topics: we have worked on profiles and a registry of
profiles
... so that a document can declare which profile it is using
... In WebVTT, work is ongoing on inline styles.
... The TTWG charter ends in March 2016
... so the group is discussing what to do next,
and scope of the new group.
... One idea: a generic TextTrackCue that contains a piece of HTML.
[demos]
Questions?
Jean-Claude: I've heard there is no way in TTML to signal a profile
nigel: that's the opposite of the
truth!
... we have been discussing this and it will be addressed in TTML2
... but you are not required to signal a profile
Jean-Claude: wouldn't it be useful to make it required?
nigel: the semantics of an absent profile are defined, but of course that implies supporting everything
... we could mandate a profile, but a spec cannot force people to do it
... it's more an industry decision than a spec one
Mark_Vickers: was there any synergy found as a result of the mapping work between WebVTT and TTML?
nigel: the group has worked hard
to describe in the mapping document both formats and to
describe the alignment points and where conversion could be
lossy
... for new features, chairs try to make sure that
any new feature can be defined with both formats
Wendy: Some quick report from the
Digital Marketing workshop that we had in September, hosted by
Nielsen.
... From that, we got a strong sense that a piece of work we could take on centers on the integrity of marketing on the Web.
... The security of Web advertising delivery mechanisms,
application and page context, viewability, robust and auditable
data measurement, reliable marketing asset tracking and product
description and user privacy assurances.
... You can find more on the workshop's home page
... There was a great set of participants. Advertisers,
marketing measurements, technical analysts, etc.
... We came up with some possible next steps.
... For the WebAppSec Working Group, we'll take some work on sandboxing ad content to provide a restricted set of JavaScript capabilities, and viewability assurance.
... For Web and TV, asset and product labeling.
... We think that some sort of group would be good to continue
discussions. Not quite sure if that needs to be an
IG/BG/CG.
... Getting more people from marketing to understand the way
W3C works and to get their ideas into working groups is
challenging but important.
... Some things around user-privacy agent could be done, not to
reveal information to advertisers.
... Plenty of the work also happens in other places (IAB, GS1, schema.org, etc.).
... W3C could be useful as a liaison.
... So I think we have a core set of interesting ideas. We'll
share a workshop report very soon, and then we'll spin off some
interesting topics.
Mark_Vickers: For the Web and TV,
we actually have one task force working on content
identification.
... Could you say more about what you mean by "product"? Video
asset?
Wendy: Media asset. Some
presentations about tracking technologies that people
have.
... We wanted identifiers for physical and virtual
products.
... Some of that might tie in with vocabularies done in
schema.org.
Mark_Vickers: If there are papers submitted that talk about that, please send them around. The GGIE folks might want to take a look at them.
Glenn: Yes, note we're connected to some of the people that went there.
-> Second Screen WG update slides
Louay_Bassbouss: Short update on the Presentation API and the progress made over the last year.
... It started as a Community Group, whose final report was published in 2014. That was the starting point of the Second Screen WG.
... Latest Working Draft published two weeks ago.
... The CG still exists to discuss features that are out of
scope for the Working Group.
... There are existing technologies that we could use for discovery (SSDP/UPnP, mDNS/DNS-SD, DIAL), and also Apple AirPlay and Microsoft's PlayTo extensions for the video element.
... Use cases are for example presentations.
... Also gaming which requires the spec to support multiple
connections to the same screen.
... And of course Media "flinging" to cast video to e.g. an
Apple TV or a Chromecast device, as is already possible in some
browsers.
... The control page uses the Presentation API to launch a URL
on the presenting context. After starting the request, the user
will be prompted with the list of screens found by the user
agent.
... Then you have a communication channel to interact with the
presentation running on the screen.
... What is out of scope is defining the protocols used under the hood.
... With the current API, you can monitor the availability of displays, launch presentations, join/reconnect to running presentations, and communicate between devices.
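A minimal sketch of that flow; the presentation URL and the message format are illustrative, the app-to-app protocol being deliberately left app-defined by the spec:

```javascript
// The channel carries app-defined messages; here we serialize a simple
// control command (format is hypothetical).
function makeControlMessage(action, position) {
  return JSON.stringify({ action, position });
}

// Browser wiring (sketch):
// const request = new PresentationRequest(["player.html"]);  // illustrative URL
// request.getAvailability().then(a => console.log("screen available:", a.value));
// request.start().then(connection => {
//   // the user has picked a screen; talk to the presentation page
//   connection.send(makeControlMessage("seek", 42));
// });
```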
... We have some open issues around compatibility with HbbTV
2.0 for instance.
... There is also an open issue on secure local App2App
communication. There will be a breakout session on this,
proposed by Mark Watson.
... The Second Screen WG meets on Thursday and Friday.
... What could come next: we could offer, on top of the Presentation API, some synchronization across devices. There is also some discussion in the Web of Things IG around extending this to connected things.
... We had many presentations in Berlin back in May, with
progress reports on implementations in Firefox and
Chromium.
... That's it, come to the F2F.
Mark_Vickers: Were you there when I reviewed the task forces we initially created? The NSD API and its security review in particular.
... Has there been a review from the security people?
Louay_Bassbouss: Yes, the Presentation API offers a restricted set of interactions with the second screen.
Mark_Vickers: Is the list of discovered devices available to the Web app?
Louay_Bassbouss: No.
Mark_Vickers: OK, that solves the problem, then.
Francois: Note horizontal reviews have already been initiated, with initial feedback from TAG, PING, and WebAppSec.
fwtnb: Tomo-Digi Corporation. We develop applications for TV with Web standards.
... Fumitaka Watanabe, Yusuke Maehama, Shoko Okuma.
fwtnb: Issues of developing apps
for TV
... Debugging device specific issues
... HybridCast based on HTML5, but all TVs have different
hardware specs.
... e.g. one TV required a separate source element inside the audio element and rejected the source being included as a src attribute of the audio element.
... It's hard to debug applications when testing on TVs to
figure out what's going wrong.
... It can take weeks to figure out problems!
... Testing on actual Devices:
... Difficult to set up the testing environment: you need a broadcast signal and the in-band resources to test.
... Some special APIs are needed to manage and control the
inband resources.
... Normal web developers have a hard time with this
... In Japan there are 7 major CE manufacturers making >60
kinds of TV/STB
... most of them are >40 inches. There are about 3,000,000
devices already in existence in Japan.
... We need a physical place to put these large TVs for
testing.
... Mobile phone replacement: once every 3.6 years on average, in Japan.
... TV replacement: once every 7.4 years on average.
... It's getting shorter but still we need to support or think
about supporting a lot of
... old TV sets. 8 years ago, the browsers were:
... IE7, Chrome Beta, Safari 3, Firefox 3.0, Opera 9.6. We
still have to support them.
... We've had the same problems with mobile development.
... [Embedded systems need to be tested before shipping]
... Many Japanese broadcasters require us to provide the same
content and quality for everyone.
... That means developers like us need to make it work on all
the versions that are out there, going back 5-10 years.
... [Web Platform Test Runner]
... To avoid the same problems in the future, W3C has the Web
Platform Test Runner
... It's a test runner designed to run on PC only, for now. We
tried to run it on TV sets...
... [Demos on 3 TV sets running TV customised web platform test
runner]
... We made several changes to the original test runner. [shows
tests running]
... We have some issues
... To make it run on TV sets we have to fix some
problems.
... First, browsers in TVs can only open one window; WPTR opens more than one.
... We changed that to open the tests in iframes.
... Another problem is different input devices, not just the
mouse for the PC. TVs have remote controls.
... For the Test Runner we need to type the name of the test to
start, which is hard with
... a remote control. We changed it to select from a pull-down
menu.
... One test has finished - the different hardware specs show
different speeds.
... Original WPTR can download and save test data with a JSON
file. TV sets cannot do this.
... So we need another server to post the JSON result data back
to from the TV.
... We made WPTR automatically export the result data with the
timestamp.
tidoust: These TVs don't show the same number of tests as each other. It's 278 on one TV and 279 on the other. Is that normal?
fwtnb: That's not what we're expecting! We're running the same tests on both.
... This is another problem.
... Thank you! We need to dig into those kinds of problems.
... We can run some tests but some need user confirmation like
using geolocation.
... The TV sets cannot show dialogs and popups to ask for permission to do geolocation.
... So we can't run those tests.
... Compared to PCs, TVs have lower CPU power and less memory, so we can't run all tests at once. At some point these TV browsers might stop. We need to test one by one.
... TV devices don't support Web Driver. It takes a lot of
human effort to run tests manually.
... For now we need to test the TV sets with this test runner
one by one with people.
... If we could use Web Driver then maybe testing could be
implemented, and run overnight say.
... There are still several problems. However hopefully
developers and manufacturers can use the tests.
... Should WPTRunner consider embedded systems adopting HTML5
not just PCs?
tidoust: It definitely should! Did you push this to the testing WG in W3C?
fwtnb: Not yet. We plan to. We'd
like to share github URLs on IRC.
... We need to figure out how to run more and more tests to
help developers to create TV applications.
mark_vickers: It would be useful
to list the changes that you would like to give them something
concrete to react to. Take the solution to them not the
problem.
... You can call on the Web & TV IG, and CC them in
conversations.
... We want to stay involved because this is a real
problem.
yosuke: Do other people need to run tests on TV sets too, or is this the only example?
giuseppe: In the case of HbbTV
there is a test harness to run HTML apps. They are not
... running these tests of course but the approach is similar.
There's a harness that can
... control the TV, switch it on and off etc and in that way
run the tests. Then you have
... the problem of writing the test reports and storing them.
Even if you don't have support
... for Web Driver you can still control the TV.
... It's important to make sure that the tests can run. The
full harness is more a TV specific thing.
-> Report on Multi-Device Timing CG activities
Francois: The goal of the CG is
API support for time sensitive applications
... The idea comes from a Norwegian company called Motion
Corp
... I've worked with them to turn their ideas into a spec
... I want to see if there's interest here, or what direction
we should take
... I'm doing a breakout session on this, Wednesday
... In the Web&TV context, the home networking TF
identified 2 requirements for sync:
... synchronisation, and accurate synchronisation
... Use cases: playing identical media streams, related media
streams, and clean audio
... Lots of other use cases require cross-device
synchronisation, not all media related, e.g., collaborative
editing
... Even measuring distances and speeds of objects
... Levels of synchronisation:
... over 100 ms for not closely related content
... under 25 ms for lip-sync
... under 10 ms for audio synchronisation without audible
echo
... under 1 microsecond for GPS
... multi-device sync doesn't solve instantaneous
propagation
... What's needed? A synchronised external clock to serve as a
common reference for client devices
... There's a client clock and a server clock, and we measure the skew between them
... This can be done in JavaScript
... But this can't use UDP-based protocols, as they are not available to Web apps
... There are also limitations due to the event loop model
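The skew measurement described here can be sketched NTP-style: the client timestamps a request, the server replies with its own clock, and the client assumes a symmetric round trip. All names are illustrative:

```javascript
// t0: client send time, t1: server clock in the reply, t2: client receive
// time (all in ms). Assuming a symmetric round trip, the server's clock at
// t2 is roughly t1 + rtt / 2; the skew is that estimate minus the local clock.
function estimateSkew(t0, t1, t2) {
  const rtt = t2 - t0;
  return t1 + rtt / 2 - t2;
}

// Over a WebSocket one would repeat this and keep the sample with the
// smallest rtt, since it bounds the measurement error best.
```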
... Also needed is sharing of messages across the devices, to
communicate playback position and rate
... Distribution of the timeline
... Just share changes to the playback rate, then position can
be computed
... Can be done with Websockets and JavaScript
... To achieve cross-device synchronised media playback, we
need to control the media element to use the shared
timeline
... Which clock is used is user-agent defined, but approximates
the user's wall-clock
... This can be done partially in JavaScript, using the
playback rate property of the media element
... Slaving the media element to the clock
... But playback rate isn't really designed for this
... So standardisation may help here
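The slaving technique can be sketched as a small control loop that nudges playbackRate toward the shared timeline; the gain and the clamping bounds are illustrative tuning choices:

```javascript
// targetPos: position on the shared timeline; actualPos: the media element's
// currentTime (seconds). A positive difference means we are behind and must
// speed up. The correction is clamped to stay within an audible comfort zone.
function correctedRate(targetPos, actualPos, gain = 0.5) {
  const diff = targetPos - actualPos;
  return Math.min(1.5, Math.max(0.5, 1.0 + gain * diff));
}

// Applied periodically in the browser (sketch; timing-object API assumed):
// video.playbackRate = correctedRate(timingObject.query().position,
//                                    video.currentTime);
```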
... We have a Timing Object spec
... Timing should be a web resource, somewhere in the cloud or
in the local network
... Clients connect to the timing resource
... Other timing protocols work similarly, but have some
differences: NTP, HbbTV
... We've left this open in the spec, via a timing provider
that gives the skew from the timing resource
... The spec proposes an extension to HTMLMediaElement to set
the timing source
... This acts as the real master
... Buffering causes problems in a multi-device
environment
... A device will try to get back in sync after buffering
... There are use cases that need timed data but not a media element, so we have a DataCue for this
... We add a timing source to the TextTrack interface
... The MediaController doesn't address cross-device
synchronisation
... Clients can update the timeline
... Web Audio is looking at cross-device scenarios, so the same
approach could be used
... Next steps: we've done an exploratory phase, how it could
be included in html5
... We need feedback
... The CG could report bugs, e.g., for playbackrate
... We've found that in the Edge browser, playback skips when changing the playback rate
... We need active participation in the CG
... If the Timed Media WG is created, and this work is in
scope, could be a future WG for this
<yosuke> http://kwz.me/MW
rus: Synchronised sports is a big
use case for this
... For example, when goals are scored
... It would have to work with live broadcast
Francois: To solve that you'd need to introduce delay
Rus: Not necessarily
Yosuke: We discussed synchronisation at the last TPAC, where other SDOs are working on synchronisation
Francois: This approach is
compatible with HbbTV 2.0, which has a wall clock protocol, UDP
based
... and a timeline protocol
Giuseppe: HbbTV 2.0 has a JavaScript API to the synchronisation
Francois: True. I do not know if there is something similar in Hybridcast or ATSC, but get in touch if interested.
Igarashi: Is the proposed API similar to MediaController?
... It's the same as setting the currentTime.
... Devices with a hardware decoder make implementation more critical
Francois: It is. But MediaController is inherently single device. This addresses cross-device synchronization.
Mark_Vickers: Can you send the spec to the IG list, please?
Olivier_Friedrich: We're from the
research and development part of Deutsche Telekom
... HTML5 is now on most TV devices,
... We're following the specs at W3C, OIPF, etc
... But there's a problem with devices that can't run a full
browser environment
... So instead we run the browser in the cloud
... There's a portal page or application, and the image is
streamed to the browser
... There are existing solutions, but with proprietary
technologies
... What other potential gaps are there, with local and
cloud-based functionality
Alexandra_Mikityuk: We were approached by our business units 4 years ago
... Looking to shift the problem upwards; each operator has cloud resources available
... We put the STB hardware into the cloud
... So only a simpler device is needed in the home
... Rendering is done in the cloud, sending a video
stream
... User commands are sent to the cloud
... The cloud browser is an enabler for the cloud user
interface
... There are various companies working on this, all over the
world
... We've found some gaps when moving the browser to the
cloud
... because the browser can't talk to the end user device
... TVs are now using browsers for their portals and
applications
... There are new devices such as HDMI dongles
... also millions of legacy devices that don't have a
browser
... Web development is so fast, that the hardware can't keep
up
... We formulated a problem statement
... There are browser technologies incl. EME, MSE, then
TV-specific technologies, HbbTV
... Work has been done to enable the browser to access the
device capabilities
... We want to tackle the gap in the Cloud Browser TF
... The scope of the group is to start with a reference
architecture
... Define scenarios and use cases
... There are existing standards such as Presentation API, but
also missing APIs
... What would be the timeline to create a WG?
... The TF will look at gaps in standards, eg., canvas, webgl,
websocket, webcrypto
... We have expression of interest from several group
members
... Question about whether we do this as a TF or CG?
... The reference architecture has a zero-client approach, with
everything in the cloud
... The video and UI elements, creating a single stream to push
to the client
... We then separated this into the double stream and single
stream approach
... This impacts the client stack
... There's a trade off with rendering, service execution,
content protection
... Where do these parts run, in the client or in the cloud
browser
... There needs to be a control channel between the client and
browser
... Also multi-device interactions, and session status
... The user interface is on the second video channel
... The security API for content protection
... The signalling API for DVB AIT data
... This is just to give an indication of what's missing
between the consumer device and cloud browser
... Some data comes from the client, mixed with EPG data from
the internet
... The architecture is abstracted
... We need to think about what functionality goes in the
cloud, e.g., the codecs
... Use cases are on our wiki page: tuner, EPG
... red button, multi-device (HbbTV, second screen) but these
need adaptation to the cloud browser approach
... ad insertion, content protection, MSE and EME
... We've identified some adaptations needed to the MSE
spec
... The XHR between client device and browser - the browser
must request from the CDN
... There's an issue with appendBuffer(): no mapping with the HTTP buffer
... Manipulation of media headers in the cloud browser, to
reduce network resource usage
... Also, IP association
... For EME, we identified two approaches
... A trusted channel to the CDMI
... If we leave the player on the client device, there is still
sensitive data exposed on the public internet
... We're working on integration of EME within the trusted
execution environment
... We are prototyping HbbTV scenarios, red button interactions
between client device and browser
Mark_Vickers: I have two
suggestions: We'd focus on isolating the Web part from the
non-Web part - OIPF and HbbTV are not Web APIs
... For this to be a standard we'd want it to run in any
browser
... The other recommendation is to focus on the established
APIs such as EME, and don't start depending on things that
aren't stable such as TV Control API
... The EME gap could be solved separately to the TV control
gap
Giuseppe: How many of the gaps have been addressed in the proprietary solution?
Olivier_Friedrich: Not all of them, there are things from non-W3C members
Harrison: We're working on
implementation of the cloud technology
... The server side of the cloud browser
... Uses DOCSIS or LTE mobile
... Our suggestion is to receive the linear video separately
and combine them in the STB
... We've launched this in Korea with 2 cable operators this
year, with 7 million subscribers
Yosuke: Video: https://www.youtube.com/watch?v=qpbdzQsDQ8I
Mark_Vickers: The client devices
are the same, but in the cloud there's also a server
... So the comparison is between a slow processor in the client
against the fast processor in the server?
Alexandra: We compared the
network latency,
... Each server supports up to 200-300 clients
Harrison: We can get better usability with lower cost devices
Rus: How many sessions can you handle per server?
Olivier: It depends, a rich UI
differs from a basic UI
... There are mechanisms to allow caching of portal pages, so
you can multiply the number of concurrent sessions
???: Do you expect applications to have to be modified to run in the cloud browser?
Olivier: There are some uncertainties that need investigation
Giuseppe: Running the same apps
should be the goal
... One option is to create a TF here, it's OK to discuss
architecture, but a CG is needed to work on specs
... This seems to be a lot of IPR involved here, so a WG would
be good where there are clear rules
Mark_Vickers: I think it makes
sense to do the requirements in a TF
... Starting a CG can be done quickly
Oliver_Friedrich: We're happy to lead the TF
Mark_Vickers: The TF is limited to W3C members; non-members can participate via email, but not on phone calls
-> Action Items from TPAC note (This version is the final one and reflects all the points discussed in this session.)
Mark_Vickers: [showing summary
slides with action items mapped to agenda items]
... TV Control API: based on discussions, there are ways to make it work better for Web compliance. The CG may have its own action item to transition to a WG. From an IG perspective, we have an action item to get a review going.
Tatsuya: If CG and IG have different views, how does the IG contribute to the CG work?
Mark_Vickers: The IG can comment on the spec and point out areas where we could align better with the Web runtime. No direct spec contributions from IG participants.
Tatsuya: I also suggest that people join the CG if they have more technical points.
Mark_Vickers: Maybe a small action item would be to send an email to the IG to review the spec.
Yosuke: We talked about drafting a charter. Should it be included here?
Mark_Vickers: That's a CG issue. Or an AC issue. Not an IG issue.
Mark_Vickers: GGIE. Two action
items, create new CG on GGIE content ID, not sure it's ready
yet. And continue Task Force work on other issues.
... That mostly continues as-is.
... Web Media Profile: restart Task Force or start new CG. We
need to get some sense of who supports that. I'm a strong
supporter of it. If I'm the only one, it's not worth doing, but
if others are interested, it is.
... ATSC 3.0: we talked about level of engagement. We'd like to
receive ATSC's perceived gaps if possible.
... We can probably schedule some calls. The action item is on
ATSC folks.
Bill: My understanding from Giridhar is that Mike Dolan is the official liaison officer, so this needs to go through him.
Mark_Vickers: OK, so we chairs
will take the action to talk with him using the official
channel.
... MSE/EME updates: 3 issues were shown. Hybridcast people
should file them against the MSE Git repo. They should also
review the Wiki pages on use cases.
... Timed Text: we offered to review the new TTML charter. One
action to Nigel is to send us a link to the charter and a
deadline for review when it's ready.
Nigel: Before that, if anybody
has comments on the scope and proposals, just send them
through.
... We haven't started drafting the new charter yet, technically.
Mark_Vickers: So just send an
email to the IG mailing-list to say that you're planning to
work on a new charter.
... Workshop on digital market: I actioned the GGIE TF to
review the workshop report when it's ready and the relevant
papers on media and product IDs.
... Second Screen WG: no action item
... Web platform test runner for TV: Tomo-Digi should send the
list of proposed changes to the Web Platform Test runner to the
testing group.
Francois: Jumping in, for the Second Screen, would it make sense to ask the Web and TV IG to review the Presentation API? Roughly Last-call equivalent
Mark_Vickers: Absolutely, do not forget to put a deadline.
Francois: Will do, perhaps after next round of updates
Mark_Vickers: Multi-Device Timing
CG: IG should review the proposed spec. To be shared with the
IG.
... Cloud Browser APIs TF: start the browser TF.
... Thanks for attending the F2F. Overall, I think we have very
worthwhile sessions, even with demos. Thanks everyone for
working on them. See you next year in Lisboa!
[F2F adjourned]