Workshop Support
The workshop has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n°248687 (Open Media Web) and n°257103 (webinos).
Date: 08 Feb 2011
See also: IRC log
This page contains the minutes of the first day of the W3C Web and TV workshop that took place in Berlin on 8-9 February 2011. The minutes of the second day are available in a separate page.
Contents
See the workshop agenda for details.
- Introduction: Setting the scene
- W3C Overview, by Philipp Hoschka and Francois Daoust (W3C)
- Intro, by Stephan Steglich (Fraunhofer-FOKUS)
- Tokyo workshop, Web and TV Interest Group, by Masahito Kawamori (NTT)
- Web, TV and Open Standards (and testing), by Giuseppe Pascale (Opera)
- Intro, by Jean-Pierre Evain (EBU)
- Session 1: Web&TV: Use Cases and Technologies
- Wealth of use cases from DTV/IPTV in Japan and API suggestions from various viewpoints, by Yosuke Funahashi (Tomo-Digi)
- Requirements for a Web and TV environment, by Jean-Claude Dufour (ParisTech)
- Use of Web Technologies in TV Standards in Europe, by Jon Piesing (Philips)
- Session 2: Second-Screen Scenarios
- A Consideration about "Second Screen Scenario", by Kensaku Komatsu (NTT Communications)
- Technology Defragmentation, by Cedric Monnier (Irdeto)
- Rich User Experience through Multiple Screen Collaboration, by Jaejeung Kim (KAIST)
- Session 3: Panel on HTTP Adaptive Streaming
- Adaptive HTTP Streaming Standard, by John Simmons (Microsoft)
- The Convergence of Video on IP and HTTP - The Grand Unification of Video, by Bruce Davie (Cisco)
- Dynamic Streaming over HTTP - design principles and standards, by Thomas Stockhammer (Qualcomm)
- Advances in HTML5 <video>, by Jeroen Wijering (LongTail Video)
- Matroska, by Steve Lhomme
- MPEG DASH, by Iraj Sodagar (MPEG DASH chair)
- Summary...
- Session 4: Content Protection
- New Strategies for Content and Video‐Centric Networking, by Marie-José Montpetit (MIT)
- TV and Radio Content Protection in an open Web ecosystem, by Olivier Thereaux and George Wright (BBC)
- Adaptive HTTP streaming and HTML5, by Mark Watson (Netflix)
- Digital Rights Management Standardization, by John Simmons (Microsoft)
Attendees
- Present
- See the list of workshop participants.
- Chairs
- Francois Daoust (W3C)
Jean-Pierre Evain (EBU)
Giuseppe Pascale (Opera)
Stephan Steglich (Fraunhofer-FOKUS)
- Scribes
- Francois, Chaals
Introduction: Setting the scene
See description of the session in the agenda for links to papers and slides
W3C Overview, by Philipp Hoschka and Francois Daoust (W3C)
Slides: Introduction to the Second W3C Web and TV Workshop
[Intro by Philipp Hoschka]
ph: welcome! good material to discuss at this workshop
... good representation of the industry
... this workshop is important to learn from each other, to open your minds, ...
... francois will explain the workshop goals, etc.
[François Daoust, workshop co-chair, gives intro]
fd: yes, open your mind!
[fd briefly introduces W3C]
[fd presents the PC members]
Intro by Stephan Steglich (Fraunhofer-FOKUS)
[Intro by Stephan Steglich, FhG FOKUS]
stephan: Web and TV, is it a
special case? Current approaches are often attached to the
past
... the examples that are used are often limited and the same
(e.g. EPG).
... SDOs focus on existing devices
... We should look to what happened in mobile devices.
... The view that TV is primarily for watching videos matches the assessment that mobile phones are for making phone calls. That led to WAP. We should not make the same mistake for TV.
... The comparison goes on, with different existing mobile form
factors, different interaction methods (pens, joysticks, touch
screens, etc)
... The assumption that processing power is missing is also no longer true on mobile devices. This should happen on TV sets as well.
... Our assumption is that we should be looking at the past to predict the future, and we should make use of specific key features that the TV has (social features for instance).
... [possibility to do a lab tour at the end of the first
day!]
Tokyo workshop, Web and TV Interest Group, by Masahito Kawamori (NTT)
Summary of the first Web and TV workshop in Tokyo
masahito: I'm one of the Web and
TV IG co-chairs. Here is a brief introduction.
... Thank you for organizing this workshop.
... First workshop in Tokyo: about 140 participants, discussions on Web and TV, demos from Japanese broadcasters, and different discussions on Web and TV from various viewpoints.
... We had a good representation from different regions and
different stakeholders.
... The summary can be found on the Web page:
masahito: We decided to create a
Web and TV Interest Group. We changed the name from "Web on TV"
to "Web and TV".
... We're trying to review existing work and standards as well as their relationship with Web technologies.
... It's important not to reinvent the wheel.
... Very important to identify requirements and use cases for
Web on TV and TV on Web.
... The IG is starting today.
See: Charter of the Web and TV IG
masahito: The IG provides tools
for collective intelligence (public mailing-list, public wiki,
issue tracker). We're adopting agile methodology such as SCRUM,
to ensure progress.
... [presenting a timeline that shows the relationship between
the workshops, the Interest group, internal W3C groups and
external groups]
... From use cases and requirements, we'll clarify and classify knowledge that will be fed into existing groups or, if necessary, into a new working group.
... Questions?
... We have already identified different groups to liaise or
coordinate with. We do not know yet whether, for a particular
item, we'll need to create a WG or can add the work item to an
existing group
<dcorvoysier> question was how will the Web & TV IG monitor its proposals towards other groups
Web, TV and Open Standards (and testing), by Giuseppe Pascale (Opera)
giuseppe: For me, an open standard is a standard where everyone can contribute, that is widely accepted and that is royalty free to allow more innovation on top of it.
... There is a risk that tomorrow's Web is fragmented with many
devices that do not talk to each other.
... Open standards are not the only thing you need.
... If everyone starts to speak his own "open standard", that's
a problem.
... Profiles, extensions, outdated references and incompatible implementations all create fragmentation
... Solutions: 1) cooperation at or with W3C. 2) Testing
... For testing: main problem is the lack of dialog between
implementers, the spec editors, and the test authors.
... It's important that everything goes in parallel.
... An alternative approach is to write the specification in a
way that is compatible with the extraction of test
assertions.
... [image taken from methodology to write test cases note
published at W3C]
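[For illustration: W3C test suites are typically written against testharness.js; a minimal sketch of a spec assertion expressed as an automated test follows. The harness functions are declared rather than imported, and the particular assertion is an illustrative example, not taken from the talk.]

```typescript
// Minimal test in the style of W3C's testharness.js (assumed to be loaded in the page).
// The declarations stand in for the harness's global functions.
declare function test(fn: () => void, name: string): void;
declare function assert_equals(actual: unknown, expected: unknown, description?: string): void;

// One testable assertion extracted from a spec statement such as
// "canPlayType() returns the empty string, 'maybe' or 'probably'".
test(() => {
  const video = document.createElement("video");
  const answer = video.canPlayType("video/mp4");
  assert_equals(["", "maybe", "probably"].includes(answer), true,
    "canPlayType() must return '', 'maybe' or 'probably'");
}, "canPlayType() returns a valid keyword");
```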
Question: interoperability requires that you don't have incompatible
profiles
... this relies on
having real agreement about key concepts.
Giuseppe: My point comes
before that...
... implementors
look at and primarily develop against the test cases, so those
have to be strongly aligned with the spec too or the spec
becomes meaningless
Intro, by Jean-Pierre Evain (EBU)
jp: The EBU is the largest broadcaster union in the world.
... The EBU joined W3C a few years ago. I have learned a lot of
things about how W3C works. Which will help or not, we'll
see.
... Primary topics:
... Adaptive streaming with lots of SDOs working on it.
... HTML5, when will it come?
... DRM, do we have solutions?
... What about a royalty-free codec? Skype possibly coming to
MPEG with an RF codec
... RDFa, I'm a little puzzled with what gets done. I'm very
much convinced by RDF. What needs to be done?
... Discussions about subtitles: TTML, WebSRT, something else?
EBU agreed on something and then the rest of the community
decided to go on with WebSRT.
... I'm a bit frightened to see a group with a broad focus
since we don't really know what we want.
... How do we define the precise scope of the group? What is the area in which W3C can really bring something?
... I can see too many groups with only few participants
contributing, sometimes with opposite views and goals. No real
coordination. That's also what I see in W3C.
... My expectation for this workshop is to hear more, to try to identify what the strengths of W3C are.
... I'm not taking positions here.
[+1 heard in the room]
Session 1: Web&TV: Use Cases and Technologies
Moderator: Masahito Kawamori (NTT)
See description of the session in the agenda for links to papers and slides
Wealth of use cases from DTV/IPTV in Japan and API suggestions from various viewpoints, by Yosuke Funahashi (Tomo-Digi)
yosuke: I've been working on
broadcasting since 1994.
... I'd like to give you an overview of what we do in
Japan.
... From the viewpoint of devices, there are three kinds of devices (PC, mobile phones, TV set)
... [demo of a video where users can send comments that get
displayed on screen]
... Very popular in Japan
... DTV is now universal in Japan, with Web browsers.
... Lack of APIs meant we had to extend the standards
... All 127 broadcasters in Japan provide Web and TV services
with the browsers.
... A specificity: content is delivered via broadcast, and the browser may be overlaid on top of the video.
... [DTV examples of portals in Japan (NHK, MX Tokyo) with
widgets]
... Let's move to IPTV
... First, content is piped via the Internet (or CDN). There
are several ways to deliver video contents (on demand,
streaming, download).
... For shopping and social network services, both types are
used
... [example of shopping: Tokyo Broadcasting System]
... Final example on Sports and Games shows: browser content is
controlled by broadcast signal. The interaction is enhanced.
The user experience as well.
... [demo of Figure Skate by TV Asahi]
... Finally, hot topics in Japan:
... - DTV and IPTV convergence.
... - active development of technologies on various devices.
... - switching from HTML4.01 to HTML5. Is it a good time?
Masahito: Thank you very much.
Requirements for a Web and TV environment, by Jean-Claude Dufour (ParisTech)
jcd: context I'm thinking about:
the center is a connected TV. Around that, some computing
devices and some non-computing devices.
... For instance, a connected picture frame is not a computing device; a laptop is.
... Apps should work on any device. I'm fairly optimistic on
this.
... Common ground is "very close" to W3C Widgets: HTML + CSS +
EcmaScript. Not much to do from there.
... Second requirement: Apps need to run on a dynamic
network.
... There are various protocols to do this: we need a service discovery protocol. There are many solutions (Bonjour, SIP-based, UPnP, etc).
... When a friend comes to your home, their device should be discovered automatically so it can send images to e.g. a TV.
... Third requirement: Services need to be accessible from all
devices.
... Right now, the program guide on TV (EPG) runs on TV,
because it uses the Web TV API.
... You need to make sure the UI can be moved to run on a remote device, with communicating widgets.
... It's also service adaptation, as a new way to distribute
services.
... Fourth Requirement: Services should be accessible from the
best device at any time.
... You should be able to start a service on TV and continue on
a second device, and so on.
... We need some way to keep the current state of the
service.
... Fifth requirement: whether the app is native, a widget or
hardware should not make a difference.
... In the ecosystem of the services, you should be able to use
any type of app.
... There may be a need for a framework to compile widgets to native code, and vice versa.
... Sixth requirement: There should be no standard
dependency
<chaals> [I don't see how fifth requirement is compatible with an open standard that lets you build across different hardware - in other words, it complicates everything incredibly]
jcd: For instance, widgets should be able to use HTML or SVG, same for discovery mechanism.
<HJLee> I am with Chaals, maybe this issue will be discussed in long term base.
jcd: We've been building on HbbTV, SVG, W3C Widgets, UPnP/DLNA, MPEG-U, RTP/RTSP, with HbbTV that incorporates another set of standards.
<Danbri> [re 5th, key thing is the network protocol; whatever speaks it can play; keeping state across migrations is nice thing for app creators but needn't be core STD ]
jcd: What do we need for
standardization?
... Please look at "smaller" profiles, because TV sets are
constrained.
... Common Device APIs, and then some way to have document
discovery, communication and migration (declarative, and not
just widgets).
<MattH> [play to web's strengths instead maybe - decouple components, have APIs for communicating with TV functionality, rather than exporting UIs]
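[For illustration of the service-discovery requirement above: a minimal sketch of UPnP/SSDP discovery, assuming a Node.js-style runtime with the built-in dgram module. The multicast address, port and search target are the standard SSDP values; everything else is illustrative and not part of the talk.]

```typescript
// Minimal SSDP (UPnP discovery) sketch: multicast an M-SEARCH and log responders.
import * as dgram from "dgram";

const SSDP_ADDR = "239.255.255.250";
const SSDP_PORT = 1900;

// Search for any UPnP media renderer on the local network.
const request = [
  "M-SEARCH * HTTP/1.1",
  `HOST: ${SSDP_ADDR}:${SSDP_PORT}`,
  'MAN: "ssdp:discover"',
  "MX: 2",
  "ST: urn:schemas-upnp-org:device:MediaRenderer:1",
  "", "",
].join("\r\n");

const socket = dgram.createSocket("udp4");

socket.on("message", (msg, rinfo) => {
  // Each responding device answers with an HTTP-like message whose LOCATION
  // header points to its device description XML.
  console.log(`Device at ${rinfo.address}:`, msg.toString().split("\r\n")[0]);
});

socket.send(request, SSDP_PORT, SSDP_ADDR, (err) => {
  if (err) console.error("M-SEARCH failed", err);
});
```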
Use of Web Technologies in TV Standards in Europe, by Jon Piesing (Philips)
jon: Standards is what I do for a
living. Europe is my particular focus where I have expertise.
I'm talking about the use of Web technologies in TV
standards.
... I've been involved in most of these standardization
activities, often as chair.
... Standards we have are mostly a complete system description, including codecs, applications, signaling in the broadcast, security (e.g. content protection)
... There have been talks about what needs to be done: from making existing Web content work on TV to making TV use Web technologies, or something in between.
... DVB-HTML has been developed as an alternative to Java in
2000/2001. It hasn't been widely adopted. Another example is
the Open IPTV Forum DAE, 2008/2009.
... video is integrated through the <object> tag as it
predates HTML5.
... HbbTV basically takes a selection from OIPF specs with a
selection from DVB-HTML.
... Focus is on simplicity and time to market. It is being deployed in Germany, and will be in France in 2011.
... UK DTG Connected TV is a more recent example which has a
lot in common with HbbTV, also with more support from W3C
technologies.
<HJLee> main difference from those stds we have here is this is THE 1st attempt of collaboration between web industry and TV industry.
jon: I thought I'd do a quick
summary of which Web technologies are used in these different
works
... [reviewing the examples, adding Web technologies names each
time]
... All of these works include extensions, e.g. related to
application lifecycle.
... There are other system components, for broadcast (AVC and
MPEG-2, DVB/EBU subtitles, MPEG-2 TS, etc).
... and for broadband (same video, audio, subtitle and container formats as broadcast). MP4 files tend to appear, and we need a broadband video streaming protocol.
... For security, we need trust models for applications. The
network operator may need to be the one who takes the trust
decision. Content protections as well.
... I thought I'd add a slide on non-standard solutions: there are many proprietary solutions as well, e.g. Virgin Media in the UK based on Netscape Navigator 4.
philipp: you've been involved in many standard efforts. Still, there are lots of different solutions worldwide used in different industry sectors. It's a bit different from how the Web works today. What do you think are the chances that TV converges to a single solution today? Is there an opportunity today?
jon: you might get some degree of
convergence at a given time, and then things evolve, but the
products you shipped two years ago are still around and cannot
be upgraded.
... There's a huge legacy.
... The most you can achieve is convergence on a certain point
in time which creates the "new legacy".
chaals: Jean-Claude, you said that we should not rely on any standard. I read it as meaning you need to write things a lot of different times.
jcd: maybe I wasn't clear. I'm thinking in terms of toolbox standards. HbbTV has done good work plugging things together without doing any technical stuff.
... HTML is a toolbox standard. Trying to force a codec into HTML is a mistake in my view.
... What W3C usually does is toolbox standards. HbbTV takes the
standards and builds concrete profiles out of it.
jon: If you look at the way
standards are defined, you have these toolbox standards. They
try to include everyone's requirements. Not really time-based.
More consensus based.
... You need industry standards that take ruthless decisions
for time to market.
giuseppe: I also think that W3C
is the right place to discuss the building blocks.
... When something is missing, when an extension is needed, it
might make sense to push it back to W3C. I think that's
missing.
... How can you do a subset of a standard? That's not really done in W3C right now. How can you rely on standards without breaking things up?
masahito: Thank you all for your presentation.
[coffee break]
Session 2: Second-Screen Scenarios
Moderator: Stephan Steglich (Fraunhofer-FOKUS)
See description of the session in the agenda for links to papers and slides
A Consideration about "Second Screen Scenario", by Kensaku Komatsu (NTT Communications)
kensaku: I'll introduce some use
cases, a proposal and requirements.
... About NTT Communications, a branch of NTT, providing ISP
and IPTV services.
... My target is second screen. That means smartphone, tablet,
PC, portable game console.
... Our objectives are to increase the effectiveness of
broadcast and make everyone happy.
... [example of a family use case on sunday morning]
... Family is watching a TV program together. Only one TV
screen.
... People in the family may have different needs (fun,
shopping, or simply watch TV).
... Impossible to satisfy everyone's need with only one
screen.
... How to solve? We'd like some way to automatically push content that is synchronized with the TV programme. (BTA would also be fine).
... [example of what the user interface might look like on an
iPad]
... Ads should be synchronized with TV commercials for
instance.
... Technical requirements: some push technology (server-sent events, WebSocket). We need to discuss some data format, and of course a protocol to communicate with each screen.
... We also need some storage technology (WebStorage), and some location sensing technology (Geolocation API).
... Widget functionalities: W3C Widgets
... For GUI: some CSS3 would be good
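[For illustration of the push requirement above: a minimal second-screen sketch using WebSocket and Web Storage. The endpoint URL, message shape and element id are assumptions made for the example, not anything specified in the talk.]

```typescript
// Second-screen sketch: receive companion content pushed in sync with the TV programme.
interface SyncMessage {
  programmeId: string;                 // identifies the programme currently on air
  kind: "ad" | "info" | "shopping";    // type of companion content
  url: string;                         // content to render on the tablet/phone
}

const socket = new WebSocket("wss://example.org/companion"); // hypothetical endpoint

socket.onmessage = (event: MessageEvent) => {
  const msg: SyncMessage = JSON.parse(event.data);
  // Remember what was shown so the app can restore its state later (Web Storage).
  localStorage.setItem("lastProgramme", msg.programmeId);
  // Render the pushed content, e.g. an ad synchronized with a TV commercial.
  document.querySelector("#companion")!.innerHTML =
    `<a href="${msg.url}">${msg.kind} for programme ${msg.programmeId}</a>`;
};
```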
Question: it seems to me that you have connections only one way. If I'm watching a video about e.g. Honda cars and switch to a different brand, I'd like some two-way synchronization to occur
scribe: Sometimes triggered by the TV, sometimes by the user
Question (BBC): we have a different approach where we extend the Web browser within the TV sets with APIs that allow control of the TV programme, and so on.
francois: you mention that a lot of technologies are on their way... what is really missing then? the communication protocols or the APIs...
<MattH> [ +1 (Matt Hammond) ]
kensaku: I have no idea about details of the protocol, but it is required, yes.
Technology Defragmentation, by Cedric Monnier (Irdeto)
cedric: We switched from
broadcast to broadband
... Our core business is security.
... How to distribute lots of content to different
devices.
... Customers are regular broadcasters, other content
providers. At the end of the day, the question is how can we
access the video?
... Everybody is moving to a multi-screens experience.
... For a content provider, that means new screens where
content can be distributed.
... As of today, you have more and more different devices (game
consoles, mobile devices, laptop, connected TVs,
automotive)
... The technology is segmented. For a content provider, that's
really a nightmare.
... Typical multi-screen solutions involve lots of different
things. The ecosystem is really complex.
... Example of Foxtel.
... Example of Viasat: a typical web-based service for laptop/PC devices. Now moving to TVs and mobile devices.
... Same metadata to different devices. Developed with thematic
consistency in mind.
... Because that ensures the brand is preserved.
... Example of Maxdome that was Silverlight-based and now runs
on LG connected TV.
... From an end-user perspective, it works, it's possible.
... But you need three components: content management, some way to deliver the content (Microsoft adaptive streaming?), and of course the video player on the client.
... At the end of the day, we learned that it's quite hard to
target different devices because fragmentation is all over the
place.
... Each platform is different.
... I don't even speak of media player.
... How do you handle standard actions such as play/pause/stop,
trick-play modes?
... It's really a jungle.
... There's a huge technology fragmentation. That's an
explosion of costs. People are waiting for new features. In
terms of porting, it costs a lot.
... At the same time, you need to maintain consistency between
the UIs.
... Our needs: make it silly simple!
... we do commit to HTML5 and Flash. There are basic extensions that are needed to facilitate video handling from javascript (trick modes, content discovery)
... Widgets should be simple, no need to redo the same thing
multiple times for different stores.
... So the question is: should we let de-facto standards become
real standards or should we take the lead now?
... At the end of the day, I cannot change everything for a
single player.
... Any volunteer to solve this issue?
philipp: Thanks a lot for the
analysis of what is missing.
... One thing I did not understand about widgets.
cedric: It's not so easy to bind
a widget to a channel for instance. We need some basic
extensions to make it more friendly on TV sets.
... We are talking more about applications, something that has access to TV resources and has access to internal stuff in a secure way.
chaals: follow-up on that. Seems that we're not talking about widgets at all. Rather the APIs that are missing.
cedric: yes.
stephan: some approaches taken by BONDI, etc.
question: Could you elaborate on trick modes?
<danbri> [ I can't get my osx MacBook online here; see nothing in browser when connected to guest network. worked ok from iPad. Has anyone solved this?]
cedric: How can you express the
different modes? We're doing low-level things (security, etc).
We would like to have the application on top of that to be just
HTML5.
... It's a bit about abstract APIs, right. It is highly bound to network protocols. There are some things that already exist in DLNA for instance.
Rich User Experience through Multiple Screen Collaboration, by Jaejeung Kim (KAIST)
jaejeung: KAIST institute is a
research institute within KAIST.
... focus on second-screen in this presentation.
... Some assumptions first to scope things: general large size
display with a browser, complex content including applications.
It's a "public computer" at a certain distance.
... Questions are: how can we control such complex content at a distance?
... Second screen can help. It can perform as a remote
controller or as an additional information display.
... If the user cannot control the content at a distance, the
user experience suffers.
... So the first scenario is to use the second screen as a
controller.
... The usual remote control does not allow searching on e.g. YouTube. Smart TVs that come with a keyboard and a track pad allow for better control, but the control is not so good at a distance.
... One possible approach is to use fragmentation of a Web page (example of a YouTube page).
... The fragmented page structure gets displayed on the second screen. Then you can control zoom in/out from your smartphone to select the fragment you're interested in.
... You can then navigate the content with a direct
manipulation.
... Second scenario: second screen as a content separator, e.g.
for purchase scenarios, not to disturb the main content.
... Third scenario: Reverse context, collaborative content
sharing through the second screen. The TV is the hub for
content sharing.
... [demo of this scenario into action]
... user can take annotations, write memos, control the
position on the large screen, post Web pages, etc.
... Requirements: Device discovery. An open and widely accepted standard protocol is required (i.e. DLNA/UPnP). The Web fragmentation technique requires markup or annotation to introduce more semantics.
... For the UI migration, session management is required for
video streaming but also for Web page / application session.
I'm not sure this second use case has been standardized
anywhere.
... Issues and discussion: Multiple devices and multiple users
may want to control the same object. There needs to be some
selection mechanism.
... Synchronization among screens is important. There's a
trade-off that needs to be taken because of performance.
... Sensitive content could be filtered not to be displayed on
e.g. kids displays.
<chaals> [filtering by fragments sounds like content blockers, as standard in some browsers and a common extension in nearly all, combined with standard filtering]
jan_lindquist: I generally agree
with the presentation. Televisions are not always IP connected.
These scenarios assume IP connectivity to the network. Do we
put a requirement, here?
... It changes the ways the issue is addressed.
... My opinion is we should, but what's your views on this?
jaejeung: yes, network connection was my basic assumption.
cedric: do we need connectivity? Yes. Does the TV need to be connected to the Internet? Not necessarily. It could be behind a home gateway.
kensaku: to provide an interaction model, we need some bi-directional communication model.
... We need some consideration of scaling. How to set up a lot of users with a TV?
... A multicasting model would be helpful, I think.
Question: I think the question is for the entire workshop. My opinion is that it should not be an absolute requirement. Another scenario is broadcast-only scenarios.
scribe: Only push use case
here.
... In our opinion, it should be considered as a profile.
chaals: middle-ground. A high level of connectivity is important. What we can do is think about what we can do with different levels of connectivity.
... Broadcast is one. On lots of networks you still pay by volume.
... How we build applications that work across these networks as well is worth thinking about.
jon: There are two variations. One category is TV sets that could but haven't, for various reasons (wifi available but no external broadband connection).
<chaals> [Overheard: If you happen to live there, it isn't the middle of nowhere]
jon: If you remove Teletext,
people in the middle of nowhere will scream.
... If you need to take that into account, you end up with a
more complicated system.
<chaals> [Thought: A lot of broadcasters are still publicly funded, and have legal obligations to provide services to all kinds of places with low connectivity]
<MattH> [ one-off closed trial : 2nd screen for live. no tv network connectivity : http://www.bbc.co.uk/blogs/researchanddevelopment/2010/11/the-autumnwatch-tv-companion-e.shtml ]
jon: There is a huge difference between designing a system that can do something for people who do not have broadband and for those who have it by default.
GuillaumeBichot: numerous SDOs are working on different things. I think that for W3C, we should take this basic assumption to be able to progress.
MarkVickers: the Web model works
very well for different connectivity models.
... the ability to deliver content over various networks.
... Web pages that link to each other can work just fine. You
can put all things in cache. As long as you stay in the cache
(HTML5, etc), it works.
... The application model is the same.
yosuke: comment. In Germany, only
5% of TV is connected.
... You could use your phone for connectivity when TV is not
connected.
jean-pierre: what will need to be done in W3C to help develop these applications?
<chaals> [Device APIs]
jon: I think that there is some stuff here that could be done, but not sure what.
danbri: one of the things where technical analysis is going to bite us: people don't understand the difference between browsing and search engines; network connectivity doesn't mean a thing to them.
cedric: no connectivity or
low-connectivity reminds me of broadcast (one-way). One
possible solution is storage, e.g. having a NAS to store
content.
... Lots of people are browsing catalogs on tablets. No
connectivity.
... Trade-off between bandwidth and storage.
<yosuke> yosuke: If TV can communicate with smart phones locally, smart phones can compensate for the lack of connectivity of the TV. (Tethering)
[lunch break]
Session 3: Panel on HTTP Adaptive Streaming
Moderator: Francois Daoust (W3C)
See description of the session in the agenda for links to papers and slides
Adaptive HTTP Streaming Standard, by John Simmons (Microsoft)
john: Was reading 19th century
predictions about how people would do TV... but they failed to
anticipate a few things.
... We're a bit in that position today. So what is
required?
... 1. Supply side optimisation. The expense of getting to
different devices causes problems.
... encoding, adaptive streaming, ...
john: (i.e. network optimisation - or optimising to the current state of the network).
... Also related to combinatorial complexity - addressing multiple tracks etc
... and cross-platform support.
... 2. DRM interoperability - content protection, and wanting to minimise the amount and variety of DRM.
... 3. Authentication and authorisation, that is not tied to being a broadcaster.
<danbri> [ ASIDE; * if you have problems connecting to the Network from OSX, try adding 193.174.153.1 under Prefs > Network > Advanced > DNS. ... it worked for me at least ]
<tvdude> DIS of MPEG DASH available: http://dl.dropbox.com/u/1346434/ISO-IEC_23001-6-DIS.doc
john: Apple published an adaptive bitrate streaming technology, and MS their Smooth Streaming and encoding under our "community promise" open license.
... MS contributed encoding tech to ultraviolet (a.k.a DECE), instead of
them supporting multiple DRMs...
... 3GPP published another version of the same idea... which
went to OIPF and became another variant...
john: There was a broad sense
that we should harmonise these in some way. Other organisations
were thinking of adopting these, or rolling their own...
... MS encouraged them to wait and get something
together.
... They came into DASH which is being pushed to International
Standard now...
<tvdude_> ultraviolet is the marketing name
john: with the participation of
the various other players here.
... Key piece needed at the bottom of the stack is protected,
DRM-interoperable adaptive streaming.
... MS Plans to make its necessary patent claims for final DASH
specification available Royalty-Free under MPEG's relevant
licensing option
john: This is a stake in the ground
from Microsoft.
... This is simply for MPEG DASH. We hope others will
contribute as Royalty Free, because this stuff is important to
build an industry.
The Convergence of Video on IP and HTTP - The Grand Unification of Video, by Bruce Davie (Cisco)
BD: This talk has a long
history.
... Converting video to HTTP is going to be very important in
breaking down silos and can create a lot of benefits if we get
standards right to take advantage of them.
BD: video over IP is old hat. New
is adaptive streaming for robust delivery in diverse
environments.
... works for all kinds of delivery.
... IPTV networks today are carefully optimised for a single job. The Web is an organic development that just runs wherever it can. Adaptive streaming lets us sit in that diverse environment and do video.
BD: HTTP has been tried and
tested hard, and we know a lot about optimising it.
... Converging on infrastructure helps to reduce costs. More important is the enabling of greater innovation via cross-pollination...
... We see innovation now phones are also web-capable
application environments
... Standardisation is critical, and so is the tension between
timely and too fast.
... lack of standards holds back deployment, but too-early standards hold back innovation.
... either way, the bad result to avoid is that money is spent on dealing with infrastructure problems instead of on making cool applications.
... An example of what we deliver now is mixing unmanaged and managed networks. We really want all that to run on a common infrastructure.
BD: HTTP can do the job. Can
do brilliantly in nice environments, and as well as possible in
harder ones.
... So you can use it as the transport everywhere as a common
infrastructure, which makes it easier to connect different
systems.
... We need to develop the platform without trying to predict
the next application, because we will mostly get the prediction
wrong.
BD: Once you pick a piece,
everything is tightly coupled. We have to have more modularity
like we get with Ultraviolet.
... Maybe we need a known good baseline reference for adaptive
streaming, so we can build better stuff...
... HTML5 needs to support adaptive streaming but not sure on
the details ...
... Maybe clients pick codecs as well as bitrates. It is a thorny issue at the moment.
<HJLee> Maybe next session Mark from Netflix shows possible HTML5 video tag implementation. let's see
<mark> Thanks for the plug - I can't claim to have all the answers though
CMN: W3C doesn't do reference implementations traditionally, and I think it could be a big challenge structurally.
BD: It is a useful technique that helps people develop.
CMN: Their approach is to write lots of tests. Do you think that is roughly equivalent? (I do)
BD: Probably... I am not an expert in W3C yet
FD: HTML5 allows pointing to a streaming manifest - is there anything else required?
BD: Not sure. The ability to do ff/rew etc is important.
JL: We have looked at doing this, and think that the events that are generated and passed should be looked at in this context by W3C.
BD: Performance metrics might need some attention - how well is the adaptation working?
Dynamic Streaming over HTTP - design principles and standards, by Thomas Stockhammer (Qualcomm)
TS: Streaming is important, but
our specifications are currently developed for controlled
environments.
... lot of the usage is actually video going over HTTP
<yosuke> [note: Adaptive streaming is not the only one that needs ff/rew.]
TS: You can generate profiles from MPEG DASH (beyond the ones there are already)
(Scribe is not copying down stuff that can be read from the slides, assuming they will be published too)
TS: DASH doesn't try to replace HTTP etc, it enables them to be used in an implementation.
TS: doesn't have to be delivered
over HTTP, in principle.
... provides information describing how to access a version of
data from the cloud.
... client is out of scope of the spec. It downloads as well as
it can and delivers to a rendering engine.
... Deployment on CDNs with lots of small files creates
problems. You can use byte-range requests instead.
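[For illustration of the byte-range point above: a client can fetch one segment out of a single large file with an HTTP Range request instead of requesting many small files. The URL and offsets below are assumptions; in DASH they would come from the segment index rather than being hard-coded.]

```typescript
// Fetch one segment of a large media file via an HTTP byte-range request.
async function fetchSegment(url: string, start: number, end: number): Promise<ArrayBuffer> {
  const response = await fetch(url, {
    headers: { Range: `bytes=${start}-${end}` },
  });
  if (response.status !== 206) {
    throw new Error(`Expected partial content, got ${response.status}`);
  }
  return response.arrayBuffer();
}

// Example: grab the first ~1 MB of a representation.
fetchSegment("https://example.org/video_2mbps.mp4", 0, 1048575)
  .then((buf) => console.log(`Received ${buf.byteLength} bytes`));
```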
<tvdude> That was "Turbo Thomas" ;-)
FD: Is there some baseline for formats etc?
TS: DASH doesn't say so, but an industry organisation or company may restrict e.g. codecs...
Advances in HTML5 <video>, by Jeroen Wijering (LongTail Video)
JW: JWPlayer is an open source video player.
... There are a lot of small
companies using it, so we have a good understanding of what
they need and want.
... Adaptive streaming is especially good for live streaming.
... No nice toolsets yet for doing everything off the
shelf.
JW: What to assess in QoS - streams available, enabled, ...
JW: Ease of use is important.
HLS has some headaches, but it is easy to understand how it
works.
... If you can explain it in 500 words, people will get it.
... No additional modules should be required, and ecosystem
(i.e. variety of tools) are important
... There's a lot of interest from developers, so long as MPEG
DASH is an open Royalty-Free format.
JW: FD, your question on HTML - it would be necessary to have the src= able to point to the manifest, at least. That's the simplest.
... people will probably also want APIs for manipulating the manifest.
... signalling availability of tracks should be possible as well.
FD: These are just extension APIs for video, right?
JW: Yes. And extended signalling, and allowing control of switching heuristics - events being created when it's changing, and why it is doing so.
... enabling developers to control heuristics, e.g. setting parameters for different configurations...
... think the implementation is a lot harder than the specification.
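[None of the APIs Jeroen asks for existed in HTML5 at the time; the sketch below only illustrates the kind of extension being discussed. Every property and event name is hypothetical.]

```typescript
// Hypothetical extensions to HTMLVideoElement of the kind discussed above.
interface AdaptiveVideoElement extends HTMLVideoElement {
  manifestLevels: { bitrate: number; width: number; height: number }[]; // levels read from the manifest
  maxBitrate: number;                                                   // cap on the switching heuristics
  onlevelchange: ((ev: CustomEvent<{ level: number; reason: string }>) => void) | null;
}

const video = document.querySelector("video") as AdaptiveVideoElement;
video.src = "https://example.org/show.mpd"; // src points at the manifest itself

// Constrain the heuristics, e.g. for a metered connection.
video.maxBitrate = 1_500_000;

// React to switches: the event would say which level was chosen and why.
video.onlevelchange = (ev) => {
  console.log(`Switched to level ${ev.detail.level} because ${ev.detail.reason}`);
};
```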
CMN: Do you mean something other than being able to extract tracks from the DOM?
JW: Yes, because there are also tracks in the manifest - so if they should keep being in both places, there needs to be an API that handles that. This is not what HTML5 models at the moment.
TS: For stuff like audio, in particular, this makes the timing alignment really important.
JW: yeah, this needs to be dealt with, and is complex
MV (Comcast): Issue 152 in HTML5 - presence of multiplexed text tracks - is something that is important. Can this group make decisions on what we need there?
FD: We can decide what we want but that is not a priori binding on HTML5 - you have to participate there to get the decision made.
JW: Allowing adaptive streaming manifest in HTML would solve a lot of issues with the relative poverty of HTML at the moment.
[chaals thinks that on the other hand the simplicity of current HTML model is a strength too, so trade-offs are implied whatever way we go]
[e.g. having text tracks in manifest and in html elements requires an API to deal with the two, etc...]
Matroska, by Steve Lhomme
SL: WebM is based on Matroska.
SL: They (Google?) have already said that they will pick up new features from Matroska wherever those already exist rather than rolling their own
... we have added stuff over time, and we think it is very
adaptable.
... already have working demos of 3D
... and people want transparency.
FD: You said it uses EBML. There are other compressed formats - is there a chance to switch to EXI?
SL: I don't know the format, it might be possible.
PH: This is your spec for binary XML? Not the MPEG one?
SL: Never heard of the MPEG one. This is one we did.
HJL: Current situation for adaptive streaming - is there an implementation?
SL: I believe there are people
who have done it, they are encoding per user and not using a
manifest file.
... believe that MPEG DASH works like that.
FD: Without wrapping in e.g. DASH?
SL: Right.
MPEG DASH, by Iraj Sodagar (MPEG DASH chair)
IS: We are pretty much complete,
working with other organisations and think we are
converging.
... invite this group and W3C to provide input.
... we have a 5-month review, and the spec is being made available for download.
... It's not a long spec - so please read it.
HJL: Is it likely that other members will declare royalty free as MS did (e.g. today)?
IS: MPEG has a policy allowing that or allowing RAND. I believe (personally) that several companies have the intent to help make a royalty-free DASH profile.
... W3C can provide input of the need for this.
HJL: how do we get information fast enough?
IS: ISO policy is for disclosure in forum, but think you will also hear that outside the forum, as MS did today.
SL: If some companies won't license Royalty-Free, what do we do?
IS: You can make a profile to
avoid the encumbered parts.
... best to do this in MPEG...
SL: If MPEG DASH is RF, does that provide a grant for using the same technology in other things?
[tricky legal question. Default answer is 'probably not']
MW (Netflix): DASH explicitly allows use with different file formats - so in that case it is still DASH.
SL: Question was about codec...
IS: DASH doesn't specify the codec - you can use it with different ones.
Summary...
FD: Seems there is a clear need
for adaptive streaming - it comes up in every discussion.
... We don't know yet if MPEG DASH will be royalty free.
TS: Royalty discussion is important. Being interested to get this broadly deployed, it would be very helpful to get a lot of feedback explaining that Royalty Free is a requirement and that under that condition there is a lot of real interest in deployment.
IS: Say what you are going to use
in DASH - which settings, which features or profiles, as well
as that you want it RF.
... That simplifies what is needed as a client, which
simplifies the question of where patents cover necessary claims
in the first place.
TS: We need clear instructions as to what the expectations are... people to talk to at a technical level.
HJL: From TV makers, we will have
video applications as our core. So we are very sensitive about
royalty-free
... for the time being, video applications will be our
core.
GP: Since there is a broad scope I am not sure why that is so important for royalty-free.
IS: Having the analysis simplifies the process - you are asking participant companies to declare things as royalty-free
Guy Marechal: Audio CD development, mostly between Sony and Philips, had a model where it was almost free - you got it for free, but there was a huge penalty if you made stuff incompatible. Maybe a model to look at?
JJ: Would it be helpful if key people from MPEG DASH participated in this effort?
IS: Certainly.
JS: There are a number of players who are active in both and I am sure that would be something they are eager to provide.
IS: DASH was about 8 meetings / year, 40-50 companies participating.
... quite a lot of collaboration even within MPEG
JJ: If we can do it collaboratively, can that help motivate a royalty free standard?
IS: Also, we've done a bunch of informal work on convergence.
MW: We're in the process of joining W3C. I think it makes sense for the W3C baseline request to be "make the whole thing royalty free"
JCD: I think there is someone else outside with a patent, and so long as they don't speak, we have nothing.
MW: That's always the case.
TS: Major contributors are working towards this direction... but there is no magic bullet. Everyone needs to do the work required
Jean-Baptiste Kempf (VideoLAN): At VideoLAN we get 2 or 3 letters a year from MPEG and same from 3GPP saying they will sue us over patents. What changed?
JS: MPEG-LA has no relationship to MPEG except the four letters
[adjourned]
Session 4: Content Protection
Moderator: Philipp Hoschka (W3C)
See description of the session in the agenda for links to papers and slides
New Strategies for Content and Video‐Centric Networking, by Marie-José Montpetit (MIT)
[scribe missed first few minutes of talk]
marie-jose: TV is a very
immersive experience.
... Users do not want to wait in that case.
... We'd like to leverage peer to peer for community viewing,
not to have to go back to the same server when we're sharing
the same piece of video.
... The elements of our strategy: data are algebraic entities,
which can be added, multiplied by factors, etc. We want to
combine analytical and user measurements for quality of
experience.
... Content protection is often pointed out in the same
sentence as DRM.
... It's not just that.
... We wanted to take into account the fact that devices
collaborate with each other.
... The goals of our research right now is to reduce delay and
minimize interruptions for video and converged
applications.
... That relates to W3C needs I heard today.
... P2P is good. People might want their content to be
protected as well (private, shared with friends).
... Social viewing experience requires filters.
... Example: live streaming. The playback is not just a series
of packets.
... it's the linear combination of these packets.
... We could regularize the output of the buffer to be fairly
constant.
... New research:
about minimizing signaling overhead, using multilayer video
encoding.
... We want to show that network coding can provide video content protection in a social viewing context. There's a demo on that in two weeks from now. The use case is peer to peer distribution with registered users that see the content directly
... For non premium users, ad viewing is mandatory. Again,
everything is done at the edge, locally.
... We're going to build on that demo and protect that
information.
... We favor stateless approaches. P2P often requires to know
what your neighbors have. There's a lot of state.
... Last thing is we'd like to add network combining to improve
performance.
... We submitted this paper to show that there are things that get done
below HTTP to improve QoE.
TV and Radio Content Protection in an open Web ecosystem, by Olivier Thereaux and George Wright (BBC)
olivier: You probably know us, if
you're not living in the UK, for BBC news.
... Our focus on Web and TV is much broader than this. In the UK, the BBC iPlayer allows people in the UK to access content that is broadcast.
... Stuff we produce but also content produced by other
people.
... It's also offline. You can download a programme. That is
something that is fairly important for us.
... We need to consider the un-connected use case.
... The BBC was one of the first broadcasters to be on the Web.
We are renewing our involvement in W3C.
... Our public mission implies: openness, access for all.
... Right now, the technology of the player is
proprietary.
... Could we do the same thing with open Web
technologies?
... Yes for content we produce. However, we have an obligation
for content protection for all other content.
... It's an industry demand, but also consumer demand for
varied, quality programmes.
... Sure enough, there is a cultural evolution happening, but
there is a "meantime".
... In this discussion on content protection, we'd like to stress that there is no need for perfect content protection. Good enough protection is enough.
olivier: In practice, content protection means geographical, time-based, and copy should be difficult.
[ yes, online agenda is up-to-date. ]
olivier: When we're talking about
DRM, we cannot just standardize DRM altogether.
... DRM involves a little bit of secrecy, by definition. For
the good enough effect.
... We'd like to see the rest addressed by W3C. We want to be
able to use HTML5 to interface with DRM-protected
formats.
... There needs to be a way to say: this is HTML5 video and it
is content protected.
... We could perhaps extend canPlayType() to address that use
case.
... But we're open to other ideas.
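[For illustration of the canPlayType() suggestion above: a sketch in which a "protection" parameter is added to the MIME type query. That parameter is a hypothetical convention for the example, not an existing or proposed API.]

```typescript
// Hypothetical use of canPlayType() extended to express content protection.
const video = document.createElement("video");

// Today: ask about container and codecs only.
const clearAnswer = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

// Suggested direction: also ask whether a protected variant can be played.
const protectedAnswer = video.canPlayType(
  'video/mp4; codecs="avc1.42E01E, mp4a.40.2"; protection="cenc"' // illustrative parameter
);

console.log(`clear: ${clearAnswer || "no"}, protected: ${protectedAnswer || "no"}`);
```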
MarkVickers: do you need richer requirements with DRM?
george: it's really geographic and time.
MarkVickers: it's only a bunch of parameters for the can play/cannot play question.
Question (Cisco): geography, time. Do you need to control devices. Example of Google TV.
olivier: as far as I know,
no.
... We're providing the iPlayer to any device that supports
Flash (because that's the technology that we need right
now).
... We do want to spread the content as wide as possible.
GuyMarechal: comment on the Cisco comment. Two-level approach for cryptography. The authentication of the equipment is made by a zero-knowledge approach.
... Then regular protection.
... This makes the control much more efficient. By the way, Cisco's solution is free. You can use it.
Adaptive HTTP streaming and HTML5, by Mark Watson (Netflix)
mark: We're soon to be W3C
Members.
... Netflix is a subscription service in the US and Canada. 20 million subscribers. Both Internet streaming and DVD-by-mail.
... About 200 devices that are Netflix-enabled.
... Today, we need to do a lot of work to get on all these
devices.
... We have to certify those devices one by one.
... HTML5, we announced in the middle of last year, is our UI
platform-of-choice.
... We can really measure the effect that it has on user
experience. It's a major component for us.
... Tomorrow, if there's enough standards, we can stop SDK
integration and certification expense. So we can expand to more
devices.
... The list of requirements we have should be considered as
input to the group.
... Two aspects to adaptive streaming.
... For the first part, we've been working in MPEG DASH.
... We haven't done the analysis on IPR stuff, but if we have,
I expect we'd release this as RF.
... Basic on-demand profile is important.
... What we need for HTML5: multi-track advertisement and
selection. Events and metrics might be useful.
... And obviously content protection.
... The requirements for content protection are imposed by content owners.
... Users agree not to store or re-distribute streamed content
in terms of service.
... We need to make it difficult for users to do so.
... Technical solutions involve encrypting the whole content.
... and everything is secure.
... The DRM black box: content protection functions. Encryption/Decryption. Common solution.
... Secure key exchange, rights expression and enforcement is
primary focus of DRM. Hard to standardize today.
... Authentication/Authorization should be a service function.
... Our proposal is to standardize: common encryption (look at
the MPEG stuff? Is it already done?)
... Enablers for Javascript implementation of secure authentication/authorization protocols.
... We think that the specific key exchange technology should
not be part of it.
... Advantages: we'd stay clear of DRM commercial issues, and
we'd remove controversial functions from the open.
... [diagram of what it would mean, first for unprotected
content, then for protected content]
... [message flow for Javascript hooks]
... We also need secure device identification as authorization
decisions may depend on device type.
... We propose a new Javascript Device API for secure device
identification, but there's some privacy issue (which may not
be worse than giving up your geolocation)
... Strong binding between the code that is accessing the API
and the domain name. You could think it in terms similar to
accessing a smart card.
... These kind of security models exist.
... Content protection is essential for some businesses. Should
be simplified for the Web.
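[The device identification proposal is only described at a high level in the talk; the sketch below is a hypothetical rendering of such an API, with invented names and a promise-based shape.]

```typescript
// Hypothetical secure device identification API of the kind proposed above.
// The property name, method and fields are invented; a real API would be
// origin-bound and gated behind a permission/privacy model.
interface SecureDeviceIdentity {
  deviceType: string;        // e.g. "connected-tv", "tablet"
  hardwareSecurity: boolean; // whether content keys are protected in hardware
  token: string;             // origin-bound, signed identifier usable for revocation checks
}

type SecureDeviceApi = { getIdentity(): Promise<SecureDeviceIdentity> };

async function authorizePlayback(): Promise<void> {
  const secureDevice = (navigator as any).secureDevice as SecureDeviceApi | undefined;
  if (!secureDevice) {
    console.log("No secure device identity available; fall back to user authentication only.");
    return;
  }
  const id = await secureDevice.getIdentity();
  // A service would check device type, simultaneous-device limits and
  // revocation lists before releasing content keys.
  console.log(`Authorizing ${id.deviceType}, hardware security: ${id.hardwareSecurity}`);
}
```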
philipp: thanks for this very detailed presentation and concrete proposal.
GreggKellogg: why do you feel that some device specific ID is required on top of user authentication?
mark: You need to do both. Device identification helps figure out whether the device has hardware-security or not, that kind of stuff.
philipp: Would it be sufficient to identify the device's type?
mark: At a high-level, yes, it
would be sufficient.
... But there are other use cases where it is not, e.g. device revocation and simultaneous device limits.
Comment from the crowd: I would encourage W3C to have a very close look at Netflix' proposal.
Digital Rights Management Standardization, by John Simmons (Microsoft)
john: very similar views as Mark
here.
... Starting with the slide I was showing before, and I would
like to focus on DRM interoperability.
... We're focusing on common encryption, same as Mark, and
industry fora adoption.
... The problem space: we have many non-interoperable
ecosystems. Each DRM technology is using a different
algorithm.
john: So that means a whole media stack that locks content. DRM-free is really not an option for high value video.
... And another problem is that the industry will not settle on
a single DRM.
... There will be a handful of systems.
... Solution attributes: we think that it is very important
that the protection works well with adaptive bitrate
streaming.
... There has to be interoperability, and also a common
multi-screen support. We have a panoply of screens to target,
and we need some protection mechanism that goes through all of
these devices.
... There are four components that are always present.
... First one is the licensing regime. It's always present and
proprietary. There is no exception to that. Even the closest
thing I know of to an open DRM system, OMA DRM, needs some trust authority.
... Next part is the key management system. There has been some
attempts to converge on a solution.
john: One problem is that it is tied to the licensing regime. So standardization is difficult.
... Thirdly, there's a Rights Expression Language. You can
standardize that but it's kind of difficult.
... It's tied to compliance rules of licensing regime, and
that's very specific.
... We end up with a need to standardize the licensing regime
if we go down that road.
... Finally, there's the encryption mechanism.
... That's relatively easy to do. You specify how to
encrypt/decrypt something. That's been done in MPEG.
... The good thing is that it takes most of the DRM stuff to
business stuff.
... In the project I mentioned before, we published PIFF under a community promise (the equivalent of royalty free).
... Including the encryption/decryption mechanism.
... When we published that in 2009, we submitted that to Ultra
Violet. They accepted that, and are to publish the Common File
Format.
... The good thing is that it handles multiple tracks. You just
need to have them as separate files on the server.
<dcorvoysier> [DVB conditional access implementation: http://www.dvb.org/technology/standards/index.xml#conditional]
john: Our attempt was to see some
optimization there.
... The Common File Format could be used outside of video
content.
... Also, DVB issued a call for proposals for an IPTV content scrambler that is software friendly.
... We submitted the same PIFF file as a proposal for the
software friendly encryption (without the IPR hook).
... And then UV through a liaison with MPEG/ISO introduced some
modifications.
... released last week.
... It includes the same encryption algorithm that has been
proposed to DVB.
... Through standardization, we try to converge to a
solution.
... Two take-aways: a standard encryption algorithm is the best
way to achieve DRM-interoperability. This leaves the business
decision of the DRM technology to use outside the
standard.
... Once you take the DRM technology out of the standard, then
you can create content to be distributed over the Internet
without DRMs at all.
... Whatever DRM the content provider uses, you don't have to go back and redo the content.
chaals: PIFF is based on H.264?
john: no, it's codec independent. In the encryption algorithm, there is some specific statements in the spec for H.264, but there's no dependency, no.
chaals: So I could make a video of me with a dog and distribute it to everyone without thinking about DRM.
john: yes, you could have the
first 5 minutes of you and your dog in the clear, and the
remaining 20 minutes available on a premium basis, protected by
DRM.
... Having to produce another version each time is a
pain.
... You could simply give them the same file.
giuseppe: we have been talking about video protection so far. No need for application protection? Not as important?
marie-jose: we're talking about video because it's a hot problem. If you can do it real-time, you'll be able to do a lot of other things.
francois: in the end, where does W3C fit in your presentation?
john: I'm not here to say that
some specific direction is the way forward. We've heard several
issues during the day.
... I'm raising issues, here.
[scribe disconnected during the last 3 minutes]
Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2011/07/27 20:41:25 $