Minutes


Day 1 (Wednesday 12th March)

Session 1 — Welcome
Introduction by Philipp Hoschka

<dsr> scribenick: dsr

The Web is 25 years old, let's make history today!

Let's put the Web and TV together!

Philipp introduces the goals of this workshop.

W3C is also celebrating its 20th year.

W3C is expanding our reach into new areas, e.g. TV and automotive.

Today we will focus on Hybrid TV and multi-screen.

Tomorrow, we will continue on multi-screen, then focus on ongoing work and upcoming topics.

Finally, we will try to prioritize next steps.

The major players are here at the workshop, and they come from all over the world.

This is a great opportunity to exchange ideas. I would like to thank our host IRT and our sponsor NBCUniversal, and lastly our reviewers.

Welcome from IRT

Ralf Neudel welcomes everyone to IRT. We are a research institute working for public broadcasters.

The convergence of the TV world and the Web world is very exciting for us!

We see a lot of excitement about the prospects for the convergence, after an early phase of trepidation.

It took radio 30 years to reach the same audience that the Web reached in 3 years.

Have a great workshop and let me pass you back to Philipp.

Introduction to W3C Process and Terminology

Work at W3C typically starts with a workshop which often leads to a Community Group or a Working Group.

Working Groups drive specifications to standards (W3C Recommendations).

Our working groups are open and welcome feedback from the public.

Community Groups are free to join, and you don't need to belong to a W3C Member organization.

We have Interest Groups e.g. the Web and TV Interest Group -- http://www.w3.org/2011/webtv/

The Working Group process progresses specifications from Working Draft to Last Call Working Draft (when a spec is mature and stable).

This is followed by a Candidate Recommendation where we look for implementation feedback. This is followed by a Proposed Recommendation where we seek review from the W3C Advisory Committee, and the spec then becomes a W3C Recommendation.

Q&A

Question: how does the implementation feedback work?

Answer: we provide a set of tests against which implementations can be assessed.

Web & TV IG Overview

See http://www.w3.org/2011/webtv/

Giuseppe Pascale introduces the goals of the Interest Group as set out in the charter:

http://www.w3.org/2012/11/webTVIGcharter.html

Interest Groups are not the same as Working Groups. The former focuses on requirements and input for working groups.

We work through email with an archived list.

We start with use cases, and extract requirements. We then perform a gap analysis on what's missing from existing standards.

This work is usually done on task forces that address a specific topic.

We may issue bug reports, e.g. against the HTML specification, or we may propose work on a new API.

This may in turn lead to a new working group being set up.

The home network task force (now closed) addressed local discovery and control of devices in local area IP networks.

Opera and Cable Labs then proposed an API to the W3C Device APIs WG.

We got some review by implementers and the Privacy Interest Group and adjusted the API accordingly.

The Media Pipeline task force (now closed) focused on improving the HTML5 media pipeline, e.g. for support for multitrack.

The media source extensions and encrypted media extensions appeared as a result of this task force.

These are being driven along the standards track in the HTML WG.

The testing task force (now closed) focused on testing, use cases and requirements.

The timed text task force (now closed) focused on facilitating the use of TTML and WebVTT for subtitles.

Close collaboration between these two approaches is recommended.

The charter for the Timed Text WG is currently under review.

Finally, the media APIs task force (ongoing), which focuses on recording and downloading of media, discovery and control of device capabilities.

We are discussing what it means to integrate with the tuner.

We have also looked at what changes are needed for other related specs.

All of this depends on people actively engaging with the work! We need your help.

Q&A

Questions?

Question: to what extent has the feedback from the testing task force been adopted?

Philipp: there are two levels of testing at W3C. The first is a test for each feature of a spec. We had also planned component testing -- this looked like it would be quite expensive, and we looked for funders without success.

We will keep this plan on the shelf but are not actively looking for funders anymore.

So whilst we remain focused on spec feature testing, we are not actively working on component testing right now.

Giuseppe: W3C is working on the test framework.

Bryan: W3C takes a contribution-based approach, with encouragement through Test the Web Forward events.

<ddavis> http://www.testthewebforward.org

Question: any idea on the roadmap for the tuner work?

Giuseppe: we're looking for greater involvement to help move this forward. Maybe one year to produce a stable spec?

Depending upon how broad the requirements are, we may proceed in two phases, based upon the priorities for individual features.

Clarke: the speed depends on the people involved -- if this matters to you please join us!
... we break for coffee and demos

<ddavis> Scribe: Daniel

<ddavis> scribenick: ddavis

Session 2 - Hybrid TV
HbbTV - reinventing the broadcast TV UX

kirk: I'm going to talk about HbbTV - reinventing the broadcast TV UX
... I'll give you an overview of what HbbTV is about.
... 5 or 6 minutes.
... HbbTV started in 2009 to provide a more interactive experience for consumers.
... Meant to be a platform across regions around the globe.
... Business drivers are the analog switch off, consumer demand, broadcaster innovation, government initiatives and pay-TV operators.
... Adoption in Europe is shown on the screen.
... We're starting to see international adoption as well, still building in recent weeks.
... ASBU (equivalent of EBU for Arabic countries) is also adding HbbTV to its DTT recommendations.
... As well as interest in French-speaking African countries and Australia.
... We have a 2.0 version which Jon will talk about shortly.
... Millions of units are being shipped in many countries with wide STB support.
... HbbTV 1.0 introduced various features - playback, download, channel lists, etc.
... HbbTV 1.5 adds MPEG DASH, DRM APIs, etc.
... HbbTV 2.0 has features under consideration including improved HTML5 support, companion apps support, improved support for ad insertion, improved synchronization between media and applications, etc.
... Services available include VOD, information, shopping, education, games, advertising, TV portal, companion screen
... So to summarize, we have good momentum and growth, different stakeholders coming together, and an active LinkedIn group.
... Now over to Jon Piesing.

HbbTV v2

jon: HbbTV 2.0 started with a list of 19 requirements.
... Most relevant here are related to W3C specs or companion screen.
... Starting with web specs...
... First version was heavily influenced by Open IPTV Forum, based around a TV profile of W3C specs.
... In HbbTV we have HTML5, some CSS 2.1 and 3, DOM 3, and some other HTML5-related specs such as canvas 2D, XHR, etc.
... For some of the specs only a profile is required. For specs that are not Recommendations it's OK to use newer versions.
... For integration, e.g. video element, you have to specify how it fits with other areas such as hardware decoders, so integration is important.
... An example of this is the "controls" attribute of the video element. This specifies how much of the controls are available to the web page or an app.
... There are a number of bugs that we've submitted to W3C.
... For companion screen, we have discovery and launching a companion app from the TV.
... The TV browser can discover apps that can be connected.
... There's a web socket server on the TV so the app gets the IP address of the server.
... We've gone for something similar to DIAL for app-launching companion apps.
... For strategic issues, we find some gaps in HTML5 for IP-delivered video.
... Some parties would like to get away from using our A/V object.
... There's one web but there isn't one TV. There's regional, country or operator differences.
... There needs to be understanding of the differences between one web and regional TV if you look into the details

Q&A

question: How does companion screen discovery work, when does it happen?

jon: It's complicated. We assume the launcher app is already there. It could be remote control app by the TV manufacturer.
... If you have e.g. a Samsung TV, the user would run the launcher app on the tablet.

HbbTV Certification and Testing

klaus: Next is HbbTV certification and testing

simon: Welcome. I'm Simon Waller
... A typical TV viewer looks like Homer Simpson.
... It's a laid-back experience and the TV remote is not necessarily readily available.
... But these days we have alternative methods - voice and gesture.
... What's the broadcaster expectation? We currently have over 100 apps which for us is a lot.
... Broadcasters expect their apps to look identical on every TV and STB.
... TV manufacturers want this as well.
... In the TV space, roughly 10 new browsers are launched.
... That means a new snapshot of e.g. WebKit
... and no software updates - we tend not to do this in the TV space, except for bug fixing.
... So app developers have to deal with a lot of different browsers and quirks.
... How can the viewer know apps will work?
... What should users look for? We have an HbbTV logo available for license.
... It's based on a test suite, and the manufacturer signs an agreement to use it.
... In this agreement it says "TV has to pass HbbTV Test Suite" - all of them, no exceptions.
... The manufacturer commits to try to resolve interoperability problems.
... If the TV is proven to be non-compliant, the manufacturer must update the TV software

ddavis: Next speaker: Andy Hickman

andy: HbbTV just has to work - that's what testing is for.
... The test harness is basically a PC doing three things:
... 1 web server for TV apps
... 2. Playing a DVB transport stream
... 3. It's also a video server, playing MP4, DASH with and without DRM over IP.
... Each test case in the test suite has an XML file describing the test and the test itself - images, javascript, CSS, etc.
... The test case description XML files are very important.
... They have metadata - title, test ID, formal assertion saying what expected behaviour is, description, steps of the test.
... Within the test suite, metadata is used for management - which chapters and references specs are used.

andy: Also says which spec version the test applies to.
... Finally metadata for licensing, history, etc.
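The metadata described above might be sketched as an XML test-case description like the one below. The element names and values are illustrative assumptions only; the actual HbbTV test suite schema is not reproduced here.

```xml
<!-- Hypothetical test-case description; element names are illustrative,
     not taken from the real HbbTV test suite schema -->
<testCase id="org.example.hbbtv_0001" specVersion="1.5">
  <title>Overlay honours CSS z-index over broadcast video</title>
  <assertion>
    When two overlapping elements have explicit z-index values, the
    terminal renders the element with the higher z-index on top.
  </assertion>
  <description>Loads a page with a video object and an overlapping div.</description>
  <steps>
    <step>Load the test page from the harness web server.</step>
    <step>Verify the div is rendered above the video.</step>
  </steps>
  <history>...</history>
  <licensing>...</licensing>
</testCase>
```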
... In comparing HbbTV with W3C testing, I'm not a W3C expert so may have some things wrong - please let me know.
... W3C has more limited test metadata.
... HbbTV has a precisely defined test case list. In extreme cases, a failed test can cause a manufacturer to stop production.
... You either pass or fail the test suite, whereas in the web world it's rare for a browser to say "I pass 100%"
... Both HbbTV and W3C are looking for automation.
... For W3C you can install the test suites locally.
... For HbbTV you must have a test harness implementation.
... The output of the test harness must be a machine-readable and verifiable report.
... The test must be run on an unmodified, off-the-shelf device.
... HbbTV does not address DRM testing.
... It's an area that the HbbTV community is interested in. I think there's interest in EME testing for W3C.
... Video testing is critical for HbbTV.
... I'm not sure how well that's tested in W3C.

simon: HbbTV is used in many countries around the world. The number of apps is growing.
... The number of devices is growing.
... Broadcasters want compatibility.
... HbbTV builds upon W3C Recommendations. It's important to enable HbbTV to be more closely aligned with W3C.
... There are issues if browsers don't support parts of a test suite.

andy: There's a lot of interest in cooperation between HbbTV and W3C, recognizing that there are differences between the approaches.
... Better specs -> better tests -> better compatibility -> better user experience

Q&A

David Singer: You said the experience should be the same on every device, whereas W3C wants correct experience on every browser. Do you need "the same"?

simon: You don't want there to be errors in the implementation such that the page is displayed wrongly.

Bryan Sullivan: Most of the goals are shared with W3C, but W3C's contribution-driven approach doesn't give it as much room to be exact.

scribe: People pay HbbTV to provide tests which W3C doesn't have. But W3C should take good practices from HbbTV and discuss/follow-up on that.

Overview of IPTV Forum Japan's Hybridcast Technical Specification

Next is Kinji Matsumura (NHK)

"Overview of IPTV Forum Japan's Hybridcast Technical Specification"

kinji: Hybridcast uses HTML5. NHK started the Hybridcast service in September 2013
... Commercial broadcasters are carrying out trial services.
... Major TV manufacturers are adopting Hybridcast.
... Version 1.0 was published in March 2013 - the English translation has just been made available

http://www.iptvforum.jp/en/download

kinji: Technical specs consist of two documents:
... "Intergrated broadcast-broadband system specification" and "HTML5 browser specification."
... Diagram of overall architecture is on the screen.
... Hybridcast spec covers enhanced APIs for applications, app control and management, and companion device connection and messaging.
... 1. Application Control and Management. This is outside the browser so no direct overlap with W3C specs.
... AIT is "Application Information Table"
... AIT can be used in two ways - multiplexed in the broadcast signal or acquired over HTTP.
... This is similar to HbbTV but there are some differences in getting data from the server.
... 2. "Enhancements to HTML5"
... Some key API enhancements include displaying video/audio using an object element with type attribute "video/iptvf-broadcast"
... Like the video tag, this can use CSS to specify coordinates and z-order.
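As a sketch, the object element integration just described might look like the following fragment. Apart from the `video/iptvf-broadcast` type value, all attributes and values are illustrative assumptions.

```html
<!-- Hypothetical Hybridcast page fragment; only the type value comes
     from the spec as presented, the rest is illustrative -->
<object id="broadcast" type="video/iptvf-broadcast"
        style="position: absolute; left: 0; top: 0;
               width: 960px; height: 540px; z-index: 1;"></object>

<!-- Overlay placed above the broadcast video via CSS z-order -->
<div style="position: absolute; left: 40px; top: 40px; z-index: 2;">
  Data overlay
</div>
```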
... There is also a Broadcast Resource Access API - the most frequently used API.
... It has a ReceiverDevice object and a StreamEventTarget object.
... The GeneralEventMessageListener is almost the same as the StreamEvent.
... Some Hybridcast second-screen, or companion-application, screenshots are on the screen.
... We use a native app for companion devices.
... The user launches it and connects to the TV, to control the TV contents.
... The user can answer quiz show questions, for example.
... It works as a browser for HTML5 applications on second-screen device.
... The companion apps are mostly developed and distributed by each TV manufacturer.
... The companion app launch sequence is a bit complicated.
... When the user launches the companion app, it discovers the receiver and receives data.
... The TV connects to the broadcaster's server. Then the app on TV calls setURLForCompanionDevice to instruct the companion device to load an app.
... Then the native companion app loads an HTML app from the instructed URL.
... and maker.js in the HTML app implements the APIs to communicate with the TV App.
... Apps on both ends communicate with each other.
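The launch sequence above implies a message exchange between the TV app and the companion app. The sketch below shows one hypothetical JSON framing for such messages; the function names, field names, and protocol string are illustrative assumptions, not the actual Hybridcast (maker.js) protocol.

```javascript
// Hypothetical message framing between a TV app and a companion app.
// The real Hybridcast protocol is not reproduced here; these function
// and field names are illustrative only.
function makeMessage(type, payload) {
  return JSON.stringify({ protocol: 'example-companion/1.0', type: type, payload: payload });
}

function parseMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.protocol !== 'example-companion/1.0') {
    throw new Error('Unexpected protocol: ' + msg.protocol);
  }
  return msg;
}

// The TV app might instruct the companion device to load an app URL:
const launch = makeMessage('loadApp', { url: 'https://example.com/companion.html' });

// The companion app parses it and navigates accordingly:
const received = parseMessage(launch);
console.log(received.type, received.payload.url);
```

In practice these strings would travel over the WebSocket connection between the two devices.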
... In conclusion, we have 6 months of experience since launching Hybridcast.
... We hope to identify further requirements towards establishing better standards.

Q&A

Turgay Yoo: Is Hybridcast also suitable for rolling out in the education sector for schools?

kinji: Yes, I think so.
... HTML5 content can interact with broadcast content. NHK has an educational channel and we're looking into this.

Clarke Stevens: HbbTV and Hybridcast seem to have overlapping goals. Is there any effort to work together or are they in competition?

kinji: That's one reason why we're here. Ideally we could migrate to one standard, but there are many local requirements to consider, e.g. what remains a regional standard.

yosuke: HbbTV and IPTV Forum Japan are talking together about how to exchange information. Also, talking within W3C is another effort towards cooperation.

Jon Piesing: There are opportunities for learning from each other, but TV is inherently regional. Japanese TV has its own requirements.

scribe: HbbTV came from something fundamentally different to the Japanese system.
... There are opportunities but TV isn't global. If you try to make a global spec for a market that isn't global you're going to end up using a lot of time and energy that won't meet expectations.

Philipp Hoschka: I understand TV is different in Japan but how does that impact web APIs that you want to use? What part of web technology has to be regional because TV isn't global?

Jon Piesing: We can talk in more detail in the panel later, but we have a list of APIs from Hybridcast - HbbTV has a similar list.

scribe: You could push them into a single set of APIs with the same syntax but the actual semantics would end up relating to the broadcast system you're running on.
... E.g. return values could be different depending on region.
... The API could be global but only in appearances.
... Concrete examples I know of are parental access controls, metadata, failure reasons.
... Once you start exposing channels, their description is usually TV-specific.
... The more you try and coordinate, the longer it takes.
... Let's do the best we can and not try to boil the ocean.

Japanese Hybrid TV Presentations

Next presentation is "The 1st Implementation Report on TV Programs on Hybrid Broadcasting System using HTML5 Browser-Hybridcast 2014 Project" by Kunio Numabe and Kazuhiro Hoya (Fuji TV)

kunio: We are all from Japanese commercial broadcasters.
... We've been discussing convergence with W3C specs.
... We're getting feedback from the first implementation of HTML5 browsers.
... We're working with the Ministry of Internal Affairs and Communication in Japan.
... We're producing experimental programmes using HTML5 and companion devices.
... We tried various kinds of programmes - sports, animation, etc.

Keiji Yaniguchi: I'm from TBS.

keiji: We made a football (soccer) programme with Hybridcast technology using a large amount of data in realtime.
... The viewer can put widgets on the screen and see rich play data.
... The viewing rate was 2.6%
... One widget is pass conversion rate. Another one is player running distances.
... The widgets were written in canvas, SVG and CSS
... The last widget is a heatmap.
... We tried to make it with SVG, but the performance of Hybridcast TV sets so far was not yet sufficient, so we used images instead.
... We also have second screen applications using HTML5. You can use the smartphone to see a replay.
... Data arrives from the stadium, goes through the server, and gets shown on TV and smartphones.

Kohei Kawakami: I'm from Nippon TV

scribe: We aired two programmes this winter.
... Users can enjoy not only Hybridcast TV but also legacy TV via smartphones.
... Viewing figures are small but this is one small step for Hybridcast, one giant leap for broadcasting.

kohei: The Hybridcast Service System Chart is shown on the screen. It uses CSS, jQuery, the Hybridcast lib (a library for the Hybridcast APIs NHK introduced in the previous presentation), and an object tag for broadcast video.
... Our service is able to connect with social network services and TV programmes.
... So your friends who are watching the same programme as you can see your photo. You can enjoy programmes together.
... We use web specs for animation.

Yusuke Fujii: I'm from TV Asahi

scribe: Our service has useful information available on the TV screen as a Hybridcast application.
... By connecting to the internet, Hybridcast TV can receive information which can be viewed at any time regardless of the broadcast type.
... Now, we can see a famous cartoon character in Japan. This service links to a smartphone while the programme is on air.
... Viewers are able to use their own phone as a remote and enjoy games during the TV programme.
... The Hybridcast API is used for this, to control the character. The character's abilities change over time.
... So the longer you view, the more skills he gets.
... We used canvas to implement this.
... Please see our demos upstairs.

Kenji Sugihara: I'm from TV Tokyo.

scribe: Mission 001 is a shooter game TV program. It supports multiple devices.
... 140,000 people have played this game. 1% of them had a Hybridcast TV.
... The second screen serves as a game controller and the TV shows information.
... The player taps the phone to easily control the game.
... The synchronization uses WebSockets.
... The information on the screen is shown using HTML5.
... The TV can show hints, score and other animation.
... We used CSS sprites and jQuery Effects API for animation.
... We aired this and enhanced the viewer's sense of participation.

Kazuhiro Hoya: I'm from Fuji TV

scribe: We have some case studies.
... We want to make the TV screen show popups such as tweets or competitions/amusements.
... It could be a banner with sponsored content.
... Advantages include rich expression and communication, as well as interaction.
... Some challenges are that it's very hard and demands a lot of human resources - we need a framework.
... Also compatibility. There are a lot of TVs on the market. We need a better testing method.
... Also, UX is an issue. Pushing a button on the remote is easy.
... But for second screen and pairing there are a lot of steps.
... Using a QR code is still hard for couch potatoes.
... We got some negative feedback - too much information on the screen, both on the screen overlay and on the second screen.

kazuhiro: We have 1.2 million viewers but just 1,064 using Hybridcast. And 76 users of second screen.
... We had 1,890 tweets, 3,790 using our hashtag.

Session 3 — Multi-screen 1

Louis Bassbouss: I'm representing the second screen presentation community group. This paper is written by Intel.

scribe: The group was founded at the end of 2013 and has 36 participants from a variety of areas.

<inserted> scribenick: kaz

<inserted> yosuke: got many papers on multi-screen, so split into two sessions

<inserted> ... not only W3C but also involving other SDOs is important

<inserted> ... 15 mins for each presentation

Enabling Second Display Use Cases on the Web - Louay Bassbouss

<inserted> louay: from Fraunhofer FOKUS

<inserted> ... present the second screen CG as well

<inserted> (slide 1)

louay: Introducing the Second Screen Presentation CG

(slide 2)

louay: Current Status

(slide 3)

louay: Second Display for the Web?

(slide 4)

louay: "Second Display" Clarification
... a large screen such as a TV/projector?

(slide 5)

louay: Use Case: Presentation

(slide 6)

louay: Use Case: Gaming

(slide 7)

louay: Remote Display Technologies
... e.g., Miracast, AirPlay, DLNA-based solutions and Chromecast
... there are differences between Miracast and Chromecast
... but we want to have one unified API

(slide 8)

(slide 9)

(slide 11)

(slide 12)

(slide 13)

louay: API Preview
... (shows example IDL)

(slide 14)

louay: Presentation API Example
... Phone/Laptop vs. TV/Second Screen
... (explains example script)
... communicating commands like "play/pause video"
... similar to Web messaging APIs
... missing close event

(slide 15)

louay: Presentation API Key Features
... Presentation API

(slide 16)

louay: Presentation API Demo
... (shows URL)

(slide 17)

louay: Participate

Q&A

simon: examples included two user agents

louay: just one agent is possible
... open the Web page with a hidden tab
... like Miracast and Apple TV
... Chromecast is a user agent itself
... if Google Chrome hosts Miracast, it would be automatically detected
... but the complexity would be moved into the browser
... at the browser level

A Flexible Multi-Screen Solution Based on UPnP - Clarke Stevens

clarke: from CableLabs
... member consortium of cable companies
... also representing UPnP
... nearing the end of the spec named multi-screen

(slide 1: Content)

clarke: schedule, etc.

(slide 2: Multi-Screen Trends)

clarke: several of them are proprietary
... looking for W3C solution
... UPnP Forum members working on an open interface
... CableLabs, Cisco, Intel, LGE, PacketVideo, TP Vision, ZTE
... Samsung as well

(slide 3: Goal)

clarke: device/service discovery
... description eventing and notification with the UPnP device architecture
... HbbTV uses UPnP
... communication initiated by either end

(slide 4: Terms used for Informative Usage)

clarke: clarify terms
... Multi-Screen Service, Main screen device and Companion screen device

(slide 5: UPnP Components designed by Multi-Screen ...)

(slide 6: Basic Interaction Model)

clarke: "screen control point" and "screen device"

(slide 7: Extended Interaction Model)

clarke: very flexible model
... TV as a main screen device and tablet as a companion screen device
... but could have individual views on the main screen

(slide 8: Services of Multi-Screen DCP)

clarke: phase 1: app management, app-to-app communication management, key-press protocol and synchronization
... three protocols certified by UPnP
... synchronization is phase 2

(slide 9: UPnP Cloud: Overview)

clarke: allows you to set up virtual rooms
... e.g., for chat programs
... anybody from your family can access that room and share realtime communication
... wrapper to a UPnP device

(slide 10: UPnP Cloud Interaction (MUC))

clarke: (shows a diagram)
... user A creates a room (MUC)
... user A invites UCCDs and UCC-CPs
... users A and B meet and share

(slide 11: Web and Virtual Realizations)

clarke: that's it

Q&A

@@@: how does video streaming work with the architecture?

clarke: just provide your URL
... looking at a possible way for remote access protocol

Simon Waller: eventing?

clarke: you can leverage UPnP eventing
... depends on what you want to do

simon: screen devices

clarke: if you only communicate with new devices you're building, that's fine
... the network service protocol is also being discussed within W3C
... if you just think about communication between screen and screen, websocket might be enough
... but if you use TV, we might need authentication

jc: new screen device type?

clarke: new screen device type

yosuke: thanks

Multiscreen Service in Shanghai - Mingmin Wang

mingmin: from Oriental Cable Network
... second time to attend W3C events

(slides 1, 2, 3)

mingmin: cable network in Shanghai
... over 1M HDTV subscribers and 1.3M digital Pay TV users
... started NGB in 2008

(slide 4: NGB - network architecture)

(slide 5: NGB - home access network)

(slide 6: Interactive TV service)

mingmin: 10M digital STBs

(slide 7: Multiscreen Service Deployment)

mingmin: started last year

(slide 8: Multiscreen Service Deployment)

(slide 9: Multiscreen Service Deployment)

mingmin: 60 SD live streaming channels
... 1 HD live streaming channel

(slide 10: Multiple screen service or application)

mingmin: DVB, VOD, SDV, OTT
... are the infrastructure
... Unique content management
... converting metadata
... Unique customer management
... managing IDs
... Unique session management
... session control
... which session comes from which device
... Unique resource management
... QoS control
... identify video stream
... Device management system
... Smart search engine plugins

(slide 11: Multiscreen Service Deployment)

mingmin: typical use case
... video-cloud deployed at cable head-end

(slide 12: Multiscreen Service Deployment)

mingmin: video cloud
... XMPP

(slide 13: Multiscreen Service Deployment)

mingmin: typical application/smart home gateway
... local channels, OTT, internet streaming

(slide 14: Multiscreen Service Deployment - Typical application (1) - DVB+OTT+APP)

mingmin: VOD and TV shopping
... deployed on many STBs
... online chatting as well

(slide 15: Multiscreen Service Deployment - Typical application (2) - OTT+DVB+APP)

mingmin: application for cloud TV

(slide 16: Multiscreen Service Deployment)

mingmin: multi platform
... including iOS and Android
... video streaming, network-based sharing

(slide 17: Multiscreen Service Deployment - smart home gateway)

(slide 18: Multiscreen Service Deployment)

(slide 19: Suggestions)

mingmin: which solution would be suitable for OCN (Oriental Cable Network)?
... would like a unique platform
... XMPP, HTML5 and metadata
... trying to find the best solution

Q&A

ph: already deployed but how popular?

mingmin: deployed our services last year
... OTT, mobile operators and cable operators, starting with Shanghai
... STB for free

not charging for the content either

scribe: we have 300,000 subscribers

ddavis: video messaging, second screen, OTT

mingmin: they use those services a lot
... young people don't watch TV
... prefer tablets
... tablet and mobile phone for streaming
... TV broadcasters are keen to keep their own subscribers

ddavis: Fuji TV said using second screen might be troublesome

mingmin: not troublesome, very popular

Second Screen User... - Dave Raggett

(slide 1: MediaScape)

dsr: new EU project
... mixture of technology of W3C and broadcasters

(slide 2: MediaScape)

dsr: (shows diagram)
... devices- social network-broadcasters

(slide 3: Use Case)

dsr: John is watching a live sports event on TV
... his phone notifies him to start a complementary news service
... he invites Ann...

(slide 4: Requirements)

dsr: social connection, associating TV and phone, allowing phone to know what is being shown on TV, ...

(slide 5: How does it work?)

dsr: connected with WebSocket
... social network server tracks context

(slide 6: What's needed?)

dsr: service workers
... as an agent to be part of the social network
... websocket
... for asynchronous messaging

(slide 7: Synchronization)

dsr: synchronize user experience across devices
... (demo in the break)
... mobile devices don't preload videos

(slide 8: Local vs Remote Messaging)

dsr: local P2P discovery and messaging
... DAP, SysApps, NFC, WebRTC
... lots of choices
... there are problems with using local discovery
... a server-based approach would be better

Q&A

dsr: third-party services
... search broadcasters' metadata, screen, etc.

@@3: would like to have a demo

bryan: comparison with other standards?
... any kind of gap analysis?

dsr: will do
... catch me for demo

ddavis: note that no food/drink is allowed here in the meeting room
... this evening
... IRT will host a Bavarian dinner
... before that we'll take a group photo
... if you need a taxi, please talk with the lady at the reception
... next session will start at 4pm

(break)

Session 4 — Panel: W3C and SDO alignment

SDO alignment panel

<jcverdie> scribenick: jcverdie

Giuseppe: it's not only a panel, we'd like the audience to interact
... the topics are worth discussing with an audience as wide as possible

(Introduce panelists)

Jean-Pierre Evain for EBU

Jon Piesing for HbbTV/OIPF

Kinji Matsumura for IPTV Forum Japan

UPnP / DLNA : Clarke Stevens

W3C: Philipp Hoschka

Short presentation from EBU

Jean-Pierre Evain: largest association of broadcasters worldwide

scribe: production for sports, news
... training for people in the field
... technical innovation, frequency planning
... follow lots of SDO

scribe: Interest in W3C: HTML5 including WAI, annotation, EME, metadata
... Some overlaps identified: Timed Text, EBU-TT-D adopted by MPEG-DASH and HbbTV
... audio modelling
... metadata, bringing semantic web to production
... summary: where's the expertise and who does what?

HbbTV Presentation

(battle plans displayed)

Jon Piesing: this is an illustration of the specification dependencies in HbbTV v2

(Jon describes most important dependencies listed on the slide)

slide is visible here: @@put_url_here

Jon Piesing: much of this information comes from the DVB world, which is not global.

scribe: ISDB has similar standards but the semantics may vary
... HbbTV is about integration of a pile of stuff done by others, not inventing things

IPTV Forum Japan presentation from Kinji Matsumura-san

(slide shows a technology overview of Hybridcast)

Kinji: Hybridcast uses HTML5 Specs for App development
... Extensions for hybrid use
... @@@

UPnP / DLNA introduction by Clarke Stevens

Clarke: I'm going to assume you remember what was said earlier :)
... UPnP and DLNA are not competitors. UPnP is about technology, DLNA is about usage, narrowing down the technology options from UPnP
... Future of TV is about:
... HTML UI: UPnP HTML5 RUI
... Discovery using NSD, XHR or WebSockets
... includes Internet Of Things
... MSE came out of an effort from CableLabs and the Media Pipeline TF
... not directly related to UPnP, but still the cable companies in the US
... Now DLNA
... Recently announced CVP-2 Guidelines
... We don't want to add new APIs to HTML5
... Broke compatibility with CEA-2014 in order to align with HTML5

(slide about cloud scenario in CVP-2)

slides from CableLabs are available here: @@cablelabsslides

Clarke: CVP-2 also handles Live Linear Streaming

Giuseppe: @@

Clarke: our goal is to have a new platform on HTML5

Kinji: HTML5 is attractive and the only option for us to extend to broadcast

Jon: the strategic reason was to align with web standards so that web developers can reuse their knowledge, including libraries
... the second reason is that people wouldn't develop a specific non-HTML5 browser for TV, so it was a fait accompli

Giuseppe: what is the challenge of referencing specs from W3C including HTML5?

Clarke: very productive to co-work with W3C (NSD, MSE, EME, TTML...)

Kinji: TV has specific requirements (no windowing, remote controllers)
... We add some guidelines about using HTML5 on TV

Jon Piesing: two set of issues

scribe: Organisational: what version do we refer to if it's not a Rec?
... how does that impact test & certification if that can change?
... the second set of issues is about functionalities. When they're not completely there, do you define your own properties?
... you end up with a transition phase with two sets of APIs, one nicely shaped and the other temporary

JP Evain: Trolling about MHEG5

Giuseppe: why is the web economy able to handle fragmentation, and not the TV industry?

JP Evain: we no longer have only TV displays, but dozens of devices with various apps

scribe: HTML5 should be finalized soon so we have a version to reference
... But there are still ongoing discussions
... But things like timed text are still not sorted
... So some platforms have to create their own extensions

Clarke: Before you put something in a TV you used to test it for a long time and have a stable standard
... The web has brought a new world where you update everyday
... the TV is not quite there

JP Evain: feedback from broadcasters is even more important

Clarke: cable industry now has versioning plans for upgrade and testing

Simon Waller: there is no commercial model for software updates

scribe: except giving our call centers a break by fixing bugs

Giuseppe: is this the same problem in the mobile area?

Steven: I can't say

Jon: the frequency of Android updates in the mobile industry is hectic

David Singer: my Samsung TV regularly wants me to update

scribe: But I never pressed yes so who knows

@@: @@ ?

Simon: there are updates on the smart TV side. Broadcasters want us to update so that their up-to-date apps can run

Jon: whichever update you give, even for free, that's time & money someone ought to pay for
... is it broadcasters, manufacturers, ...?

Giuseppe: some of you mentioned gaps in your presentations, for instance in the Video element. When you run into these gaps in your SDO, what's the resolution process?

Clarke: we take some instances back to W3C
... W3C kept up with our pace of innovation
... Related to Web&TV but also Internet of Things

Kinji: in IPTV-J we just use existing standards
... only extensions for local broadcasting requirements
... so it wasn't worth bringing back to W3C as it was regional
... there might be some common ground however

Jon: the 1st HbbTV was built on top of specs with fine-grained clarifications. We don't want to repeat this
... in v2 we take W3C specs as they are, not trying to clarify anything
... what you call a gap, I call a feature I need a solution for from somewhere, even if not W3C
... little gaps, such as the Video element lacking some properties, such as different audio streams
... DRM "failure mode" descriptions
... look for "hbbtv" in W3C Bugzilla for details about what I call "little gaps"

Giuseppe: if it's a little gap you hand it to W3C, if the feature is simply lacking you look for it elsewhere
... At one time you used CEA-2014; now with HTML5, would you consider that W3C has covered what you call a "big gap"?

Jon: but CEA-2014 made the same mistake we made about clarifying W3C specs
... one shouldn't fine-grain clarify others' specs, since you don't have publication control

JP Evain: There are different levels; second-screen stuff is close to W3C's core business

scribe: when you deal with video you are at the fringe
... what a broadcaster sees as a video might not be what the W3C calls a video
... re: schema.org we worked with BBC for a while
... The initial metadata was a Californian geek's vision of TV Episode/Series/Season
... Took us ages to sort this out

Giuseppe: there are success and failure stories. Is it related to the method, i.e. becoming a member and working from the inside rather than bringing input as outsiders?

Philipp: the TV community explained its video element requirements quite well
... it was difficult of course
... but we are obviously interested in getting TV Requirements in our specs

Clarke: disappointed by the testing effort
... we supplied some reqs
... we created some tests on our own

Jon: Investment in time and/or money is important
... Will what you want benefit from being global?
... Where are the resources?
... If you can't afford the investment it takes to do this in the W3C process, then what about elsewhere?
... This is obviously an investment
... Fixing a couple of properties in audio elements vs lobby for a new functionality is a different investment
... Unfortunately the TV industry doesn't seem to have that kind of money anymore
... Some functionalities would be generic but how many devices have a Tuner?
... And if you can't afford to do it in W3C then it doesn't really matter whether W3C is the right place or not

(giuseppe and jon teasing each other about tuner api)

(no harm done, everybody's safe so far)

JP Evain: Jon makes a point.

scribe: I'd like to do things such as Sport ontology in W3C
... my dilemma is: I can do my own stuff and I know I'm good enough because I'm facing actual data
... But there'll always be someone who will say "I don't care, I'll use something from W3C because it's W3C"
... So it's risk mitigation and investment

<inserted> scribenick: ddavis

<scribe> Scribe: Daniel

<jcverdie> Bryan Sullivan: A lot can be done through CG

Bryan: Which of these gaps are the most difficult ones?
... Delivery of real-time linear content to devices.
... The access to EPG or metadata.
... Harmonising access today to third-party integration

Clarke: There's an abstraction layer that would be well-addressed by W3C. If we can indicate a URL it wouldn't depend on the hardware. That's a good area for integration.
... Dealing with linear content in general is one of the most critical.

Jon: There's a huge amount to deal with regarding linear content - I don't think W3C can deal with this.
... So you end up with something not fit for purpose.
... With an abstraction layer that doesn't enable a legacy version to be phased out, you just end up with duplication.
... There is a lot of work on MPEG-DASH to do live TV over the internet.
... What do you need at W3C to fit with that?
... You need failure modes that have enough information.
... But I look at the people participating in MPEG-DASH and they are people whose lives depend on getting live TV to work. It's advanced; a lot of work has gone into it.

Mingmin Wang: The topic is aligning standards and what's next.

scribe: We are interested in W3C standards because we use HTML. In China, standards are very hybrid so there is competition between video services and cable operators.
... Do you think it's possible to have a basic profile quickly that focuses on basic things like a video player to create a unique standard that can have mass deployment?
... It's better to have a good solution as quickly as possible because of fast competition.
... We attended this workshop to find a solution.

Jean-Pierre: For more than 30 years we have been living with the fantasy of faster standards.
... If it's a good compromise, it takes time.

Jon: The more people you have in the room, the longer it takes.
... To do it quickly, you need a small group of people and ruthless focus.
... Then you add more people, more ambition and it takes longer.

@@@: I think EME breaks every one of Jon's laws.

Jon: I think EME is actually a good example of a fast standard.

Clarke: The Chinese approach seems practical. It's going to be about whether a standards organisation survives if it moves fast enough.

Giuseppe: The last question I have is about testing.

Clarke: If the testing regimen for HTML5 is not ready then we prefer not to use it. The effort to raise money has not really worked. At the end of the day, companies will only use tech they can rely on.

Kinji: Testing is very important. We're happy to use existing test suites. IPTV Forum Japan has a test suite but just for our extensions.
... Our efforts are still under discussion.

Jon: HbbTV is not interested in duplicating anybody else's work.
... If there's something usable from W3C, even if it's not perfect, we'd be interested in it.
... DLNA seems to have the closest mindset for us.

Giuseppe: The need is still there. Is there anything SDOs can do?

Clarke: Jon probably means that DLNA is similar because tests have to be created at an early stage.

Jean-Pierre: We are developing services using existing web technology such as SOAP and REST. It's just for broadcasting. People don't want to pay for the technology or to be a member.
... All that we can do is to have some members that participate, doing remote testing. The best we can do is facilitate such self-testing.

Jon: What about TTML testing?
... Is there somebody who can talk about that?

@@@: I think there has been a W3C test suite available since the TTML spec was published.

scribe: I'm not sure how up-to-date it is.

Bryan: Coming back to the question of cost. With Test The Web Forward, anybody can participate. Similarly, community groups are free for anyone to create or join - you just need people who know what they're doing.
... Is there a possibility that we could drive this through domain-specific efforts?
... All the SDOs have their HTML5-specific profiles. There could be virtual events that don't cost money - just time.

Giuseppe: About testing, we see each SDO is spending money on tests. Wouldn't it be better to put that money together?

Clarke: Viable companies will want to test their products before shipping. For CableLabs we contributed tests to DLNA and W3C. Everyone benefits and it's shortsighted if you keep them to yourself.

Jon: I agree, but the tests have to be suitable.
... Tests that just test the validity of the spec are less useful.
... Off the top of my head, unofficially: the extra formality needed to make W3C tests useful for certification isn't there, and its absence is a deal-breaker.

Clarke: We contributed tests to WebKit.
... If we have a more formalised test development infrastructure we could encourage more contributions.

Bryan: Are there people in the SDOs who are able to share that diligence with W3C? They're experts - if W3C could adopt some of those practices we could build something more usable.

Andy Hickman: I think yes.

scribe: There's clearly an opportunity for mutual benefit.
... It sounds trivial but I reckon if you look at the voluntary HbbTV effort, less than a quarter of it is writing the tests themselves.
... The remainder is the stuff that takes the most time.
... If we're not organised, we get a load of tests but we can't see the woods for the trees.
... There are 10,000s of good tests out there but working out which are good is non-trivial.

Jon: With MHP we spent millions on creating tests.
... The time spent in peer-reviewing tests is easily double what we spent on creating the tests in the beginning.
... But a test suite that hasn't been peer-reviewed is worthless.
... You can't have confidence in the results. So any test cost has to be doubled to account for peer-review.

<bryan> Some W3C members are also very well aware of the need for diligence in device certification - for example many Mobile Network Operators certify up to 40 different devices each year through multiple software revisions. Exact tests that are efficiently regression tested are essential, and business as usual.

Jon: The strangest example is that we had a rule that a test case had to pass on three implementations. But you end up building test suites biased towards testing just the easy stuff.
... I have the utmost respect for people who deal with this daily.
... We should try to cooperate better on this as an industry.

Clarke: With W3C, we took on a big testing effort that didn't get any funding.
... Maybe we could build a minimal set of that and at least have something.

Philipp Hoschka: There is an infrastructure in place.

scribe: Test The Web Forward explains how tests work and is a repository for tests.
... There needs to be more study. We are working on this and Opera have contributed a lot of tests, as have others.

<bryan> Focusing/phasing the TTWF infrastructure and assets (e.g. tests and assertions) is one of the "plan B" approaches we have proposed to W3C in the absence of sponsorships. All we need to do is form a Web&TV community that applies resources to defining and developing just those priorities/phased resources, rather than just defining what they want.

@@@: We noticed that some people were leaving after getting funding for a spec, if it's not done on time.

scribe: So funding for the spec includes funding for testing.

Jon: Not going to Recommendation is not a good deterrent when people reluctantly refer to Working Drafts.

Jean-Pierre: Going back to MHP, you have manufacturers who are shipping millions of TV sets, which is different to those making web services.

ddavis: What are some quick wins?

Jean-Pierre: I see more people from the broadcast world coming to W3C than vice versa.
... I'd like to make it easier for people to join W3C from broadcasting world.
... Sometimes it's difficult for people to speak up.

Jon: Being cynical, please look at the Bugzilla entries in my presentation this morning.
... Low-hanging fruit - one of Giuseppe's points is... it's a lot easier in a W3C spec to say the return value of a method is something.
... I've seen huge debates where people end up getting shouted down for proposing extensibility methods.
... When you're trying to make a connection between something that's global with things that are not global, if you can work out how they connect then that would enable more commonality.
... Not necessarily abstraction layers but where return values are not defined by W3C but left to other implementers.

Giuseppe: Maybe there are places where a return value can be generic, but there could be the opposite too.
... Also, it's possible to propose extensions to the spec. We have bugzilla but maybe not enough TV people have submitted bugs.
... So if you want an additional return value you can submit a bug.

Jon: If I wanted to submit a bug for failure codes, doing that would take a lot of my time. But such small extensions are low-hanging fruit.

Clarke: I think the easiest problem to solve is where you've got several dedicated people in a room to fix a problem.
... With EME, there weren't necessarily aligned opinions, but with MSE there were a lot of people who wanted to solve the same problem.

David Singer: At least one of the slides showed a bewildering number of standards, which reference testing.

scribe: The bodies are slightly different in how they implement specifications. Is that a problem? Should W3C look at that?

Jean-Pierre: Going back to the services using SOAP and REST I mentioned earlier.
... Somebody suggested calling them examples rather than reference implementations.

Jon: The two words "reference implementation" cause concern. They could give a particular vendor an advantage.
... You could have a big discussion about what reference implementation means.
... It's such a touchy subject about what benefits they give.

Clarke: I think you have to look at each organization. There's also interoperability testing.

JC Dufourd: To give an idea about the level of thinking MPEG had: yes, a reference implementation is a very powerful tool.

scribe: But if we find an ambiguity we have to export it as soon as possible.
... Yes, the company selected gets an advantage, but it has to give the software away for free for conformance.


[ Day 1 ends ]

[ Bavarian-style buffet hosted by IRT]


Day 2 (Thursday 13th March)

Session 5 — Multi-screen 2

<ddavis> scribenick: ddavis

Challenges for enabling targeted multi-screen advertisement for interactive TV services

"Challenges for enabling targeted multi-screen advertisement for interactive TV services" by Louay Bassbouss

<Alan> [link to wiki https://www.w3.org/2011/webtv/wiki/TV_Workshop_Mar_2014/Next_steps]

louay: Our use case is multi-screen advertising
... If we have a multi-screen service like broadcast TV and companion device (e.g. F1)
... the user can select his favourite team or driver on the companion device and can follow the race from the cockpit.
... If we show an advert on the main screen, on the companion screen the user could interact with that.
... So the user launches the multiscreen app, then connects the devices, then sees the advert, then goes back to the TV content.
... Here's a video showing the notification and app launch.

[Video showing]

<Alan> scribenick: ddavis

<Alan> Scribe: Daniel

louay: The second part is interactive TV services.
... We have our own TV lab where we test these services on TV sets.
... Our experience shows that we need a unified video object.
... Please refer to our position paper for more of our interactive services.
... Back to our multi-screen, we identified requirements from our use case.
... For notification and application launch there is W3C Web Notifications, but it concentrates mostly on local notifications.
... The notification is displayed outside the user agent.
... What we want is the notification to be displayed on another device, so we need an API to do that.
... There are a lot of technologies for connecting devices but we need a unified independent API to send notifications.
... We have a suggested API.
... This listens for a DeviceConnected event.
... Then the TV app can send a LaunchRequest to the device.
... For app to app communication there are a lot of W3C APIs. One is the Web Messaging API for communication with web pages running on the same user agent.
... WebRTC is for communication between different devices. Web Sockets is for communication between a device and the server.
... The Web Messaging API is a simple API and you can use different protocols. We think we can use this.
... If both user agents support WebRTC they can send metadata through the messaging channel and then establish a WebRTC connection.
... Similarly with Web Sockets.
... In our API, if a connection is established, the message port is used to send messages to the companion screen app
... There is no requirement for a new companion part of the API because Messaging is already supported by the browser.
... We've been using Dash.js which enables DASH in the browser using MSE.
... We built features upon this library, including synchronization and ad insertion.
... We also have a DRM implementation using EME and PlayReady
... We published a white paper about this two weeks ago.
... Finally I'd like to invite you to our Media Web Symposium in May. Please speak to me.
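Louay's suggested pattern (a connection event, a launch request, then Web Messaging between the two apps) can be sketched in TypeScript. Everything here is illustrative: SimplePort is a stand-in for a real MessagePort pair, and the launch-message shape is an assumption, not from any published spec.

```typescript
// Minimal sketch of TV-to-companion messaging, assuming a connection has
// already been established (e.g. after a hypothetical DeviceConnected event).
type MessageHandler = (data: string) => void;

class SimplePort {
  private peer?: SimplePort;
  onmessage?: MessageHandler;

  // Create two linked ports, one for each end of the channel.
  static pair(): [SimplePort, SimplePort] {
    const a = new SimplePort();
    const b = new SimplePort();
    a.peer = b;
    b.peer = a;
    return [a, b];
  }

  postMessage(data: string): void {
    // Deliver directly to the peer's handler (a real MessagePort is async).
    this.peer?.onmessage?.(data);
  }
}

// TV-side app asks the companion app to launch the advert page.
const [tvPort, companionPort] = SimplePort.pair();
const received: string[] = [];

companionPort.onmessage = (data) => received.push(data);
tvPort.postMessage(JSON.stringify({ type: "launch", url: "https://example.org/ad" }));

console.log(received.length); // the launch request has been delivered
```

Because the apps only see a message port, the underlying transport (WebRTC data channel, WebSocket, or local network) can be swapped without changing application code, which is the point Louay makes above.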

Q&A

Mats Cedervall: ???

louay: I'm not an expert in DRM but if you're interested I can connect you with my colleagues.

Linking Web Content Seamlessly with Broadcast Television

"Linking Web Content Seamlessly with Broadcast Television" by Jan Thomsen

Lyndon: We're coming up with a lot of use cases and finding other issues.
... Firstly, LinkedTV is an EU-funded project. Our objective is seamless integration of the web and TV.
... By this we mean looking at trying to link information to parts (objects) in the TV programme.
... For example, if I want to know what the weather is like in the location shown on the TV.
... I can pull up the weather app. Usually my TV doesn't know the location of the TV programme.
... Gary Myer said "TVs can't be smart"
... We have a news-based demo where people and topics are referred to in a news programme.
... In the companion device you can pull up related information.
... Initially we thought we'd use the TV screen for information but after a while we realised using the mobile screen was more appropriate.
... What's our approach?
... We are fragmenting content - splitting it into sections using object detection, OCR, etc.
... We use RDF.
... Content could be recorded or any audio visual material.
... We also have enrichment and curation.
... We use an ontology-based data model using RDF.
... We have the open annotation model and we can also analyse data results.
... Over to Lyndon for the next part - lessons learned.

Jan Thomsen: There are a lot of aspects to this.

Jan: The fact that we're aiming for broadcast but LinkedTV is not broadcast-ready has a couple of reasons.
... It's very costly and difficult to scale. To make it broadcast-ready you need to annotate content just before broadcast.
... We analyse different tracks and metadata, audio and video objects, so there's plenty of space for configuration and different techniques.
... We're using HTML web technology because it's more open and complete.
... We really need HbbTV 2.0 and more so we're pleased that this is in the works.
... We have some proprietary solutions because there are no standards ready.
... We need a Media Fragments URI-compatible streaming server and dereferenceable HTTP URIs.
... We have our own REST API for accessing media fragments.
... In order to make the LinkedTV approach work we need broadcasts to be ready.
... We support the Media Fragments URI specification.
... We need some kind of locator scheme and extension of TV metadata standards.
... Regarding Media Fragments URI, time is the most important dimension.
... The fragments are mostly relative to the beginning of the video, which means we always have to calculate the beginning.
... We need the clock for synchronisation of content.
... It's good that there will be support of media fragments in HbbTV
... In LinkedTV, URIs are created to identify programs which can link to different broadcasts and locators.
... Annotations of programs are shared across all possible deliveries of broadcasts.
... A problem we identified is that locators often expose communication schemes
... The usage of Media Fragments annotations allows very rich, fine-grained and personalizable relation of web content in single-screen and multiscreen environments.
... LinkedTV sees this as the main way for real TV and web convergence.
... This goes far beyond what is currently available for TV applications (mainly EPG data)
... Broadcast publishing workflow is not really a W3C issue.
... Proposals for standardization activities include support for the Media Fragment URI in TVs and STBs.
... Also consistent implementation of spatial and temporal markers for Broadcast TV.
... Agreement on identification of TV content.
... And also description and addressing scheme for rich TV annotations, e.g. via WebVTT/TTML
... Introducing media fragments annotations into production and metadata exchange standards.
... Finally we'd like to sync these efforts with W3C efforts such as the second screen community group.
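The temporal dimension Jan describes (fragments relative to the start of the video) can be illustrated with a small parser. This is a simplified sketch covering only the plain-seconds form of the W3C Media Fragments syntax, not the full hh:mm:ss clock notation:

```typescript
// Parse the temporal dimension of a media fragment, e.g. "t=10,20",
// "t=10" (to the end), "t=,20" (from the start), optionally "npt:"-prefixed.
interface TemporalFragment {
  start: number; // seconds from the beginning of the resource
  end?: number;  // undefined means "to the end"
}

function parseTemporalFragment(fragment: string): TemporalFragment | null {
  const match = /^t=(?:npt:)?([\d.]*)(?:,([\d.]+))?$/.exec(fragment);
  if (!match) return null;
  const start = match[1] === "" ? 0 : parseFloat(match[1]);
  const end = match[2] !== undefined ? parseFloat(match[2]) : undefined;
  return { start, end };
}

console.log(parseTemporalFragment("t=10,20")); // start 10 s, end 20 s
console.log(parseTemporalFragment("t=,20"));   // start 0 s, end 20 s
```

This also shows the problem Jan raises: because both offsets are relative to the beginning of the video, a client has to know where the broadcast's "beginning" is before the fragment can be resolved.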

Q&A

Simon Waller: You reference the start using the Media Fragments URI. How do you get around the case where adverts are inserted in the middle?

Lyndon: That's the problem - we can't get around it at the moment.
... We have to trust that our application can map it out, but we're not sure this is done. We're looking for a solution.

CRID: Content Reference Identifier

Jan: A bit like URL shorteners
... It's widely used by broadcasters.

Clarke: There seems to be a logical problem space - I'm wondering what solutions there are to match up things in real time.

Jan: In real time it's very difficult.
... In the broadcast workflow some broadcasters don't like it, so real-time annotation is something we have to think about.
... Visual analysis techniques that aren't so time-consuming are possible, like using OCR.

Lyndon: You can't anticipate with things like sports events but in other cases we can get information in advance and prepare before something is shown on the screen so it feels like real-time.

Clarke: Are you looking at fuzzy matching techniques?


Lyndon: There are some different services that can match text so that LA and Los Angeles map to the same thing.

Inter-Device Media Synchronization in Multi-Screen Environment

"Inter-Device Media Synchronization in Multi-Screen Environment" - Geun-Hyung Kim

Kim: Thank you for letting me present our idea.
... I'll be covering media synchronization.
... Inter-media sync on a single device has been investigated intensively.
... Both media delivered along a single delivery path (e.g. OTT) and along different delivery paths (e.g. hybrid broadcasting).
... In a multi-screen environment users own multiple devices
... Here's an example of basic seamless service migration.
... If the user is watching a movie on the smart TV at home, they can keep watching the movie on their smart phone.
... Then they can continue to watch it on their laptop at their destination.
... There are three categories - Device Shift, Cooperative Screen and Screen Sharing
... In the Device Shift, the whole service is shifted across different synchronised devices.
... In Cooperative Screen, there's partial service migration to overcome the limitation of a single screen size.
... Multiple screens interact with each other to provide rich service experience after migration e.g. game control.
... or display area expansion.
... In Screen Sharing, partial service will be migrated to other screens such as information about actors and actresses. Or chatting through SNS.
... Our requirements are as follows.
... 1. Synchronized media presentation on each individual screen in the multi-screen environment.
... 2. Synchronous seamless migration of whole content that runs on one screen to another screen.
... 3. Synchronous seamless replication of whole content.
... (missed points 4 and 5)
... After discovery, messages are exchanged between the TV and tablet. Then the tablet communicates with the server. When it receives the requested content it sends a message to the TV.
... The TV then sends a message to start the delivery of the content.
... As an example, the big screen (TV) can be a sharing screen. Content can be sent from other devices.
... In our simple architecture, time is used to sync the content between devices.
... Issues include cross-browser discovery mechanism and direct communication.
... Also signalling information to migrate service components and a component-based web app authoring mechanism to send components to other screens.
... In summary, we looked at sending multi-screen content and the issues involved.
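The time-based sync mentioned in the architecture above can be sketched as follows. Assuming the devices share a common wall clock, a master device publishes a (media position, wall-clock time) pair and a joining device computes where to seek; the names here are illustrative, not taken from the presentation:

```typescript
// A sync timestamp published by the master device.
interface SyncTimestamp {
  mediaPosition: number;  // seconds into the content on the master device
  wallClockTime: number;  // shared clock reading (ms) when that position held
}

// Position a joining device should seek to at shared-clock time nowMs.
function targetPosition(ts: SyncTimestamp, nowMs: number, playbackRate = 1): number {
  const elapsedSec = (nowMs - ts.wallClockTime) / 1000;
  return ts.mediaPosition + elapsedSec * playbackRate;
}

// A device joining 2.5 s after the master reported position 60 s:
console.log(targetPosition({ mediaPosition: 60, wallClockTime: 1000 }, 3500)); // 62.5
```

The hard parts in practice, as the presentation notes, are establishing the shared clock across browsers and delivering the timestamp, which is where the discovery and peer-to-peer messaging issues come in.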

Q&A

Clarke: What do you think W3C can do to solve some of these issues?

Kim: Peer-to-peer messaging needs to be used to send information.

Three challenges for Web & TV

"Three challenges for Web & TV" by Viktor Klos

Viktor: I work at TNO - the largest research institute in the Netherlands.
... I'm going to focus on one of the things I addressed in my paper.
... We created a social TV experience seven years ago.
... Called ConnecTV, you could see what your friends are watching. The average social group size was about 8.
... Then you could add personal recommendations, which were extremely effective - 90% were followed.
... There was a network DVR functionality so you could record and share content.
... It worked really well. We even got a Deutsche Telekom award.
... But how could we make it now?
... The set up was quite complicated - we had to make our own STB.
... We are talking about Web and TV but are we talking about Web tech in a TV environment?
... Or the web as a playground to leverage great content in new ways?
... Can W3C help by making standards we can reference or a greater thing?
... One way to integrate our project is as a smart TV app.
... They are vendor specific and there's a privacy issue.
... Now, we would use smartphones and mobile devices.
... The easy thing is to connect the mobile device to the TV.
... You always have it at hand so the TV can just show notifications.
... The mobile app can be a native or web app.
... So the TV is the companion screen.
... This is peculiar because so far the companion screen has been the mobile.
... We think you should start with what's closest to the user.
... So the smartphone is the primary screen, not the TV.
... Data - popups and notifications, even content - is sent to the TV.
... Linear TV is declining, especially among young people.
... Extra data can be sent to the TV overlaying the broadcast.
... Also events so you can react to what's going on now, and also as control.
... In other words, data SHALL go to the TV, events SHOULD come from the TV and the phone MAY control the TV.

Q&A

Questions?

Simon Waller: On your last slide, were you talking about the companion screen being able to send data/events to the TV itself rather than the user agent on the TV?

Viktor: Good question - what we want to accomplish is using the TV as a large second screen so it doesn't really matter.

Simon: If you concentrate on the user agent then it refers to Web Messaging, Web Sockets, etc.

Viktor: At this stage, there's no way to get the message to the screen without being the broadcaster.

Simon: Have you spoken to broadcasters about whether they want popups on top of their content?
... We would not be allowed to do that.

Viktor: There are a lot of efforts, e.g. to stop people skipping adverts, which is in the interests of the broadcaster; but it's also in the interests of the broadcaster to increase the attractiveness of linear broadcast.
... Popups, whatever they are, could be quite harmless to implement.

Simon: It's difficult to distinguish innocent popups from others that recommend something different to watch.

Jon: You are underestimating the paranoia and also legitimate concerns of the broadcasters.
... It could be wrong that people are making money from their content without involving them.
... Some of the things proposed would raise business concerns.

Viktor: Yes, but let's be brave and find the border of what's possible.

Jon: In a standards discussion, people tend to be defensive.
... There's the fear that if it's in a standard as an option, it could be mandated later on.

Viktor: Users (my kids) don't differentiate between catch-up TV and linear TV. On tablets you can do overlays - it's my screen - so what's the issue?

Jon: But we should accept people's concerns.

Viktor: I think if you open it up, good things can happen.
... How can we create a compelling social TV experience?

Session 6 — Panel: Key issues in web media, moderated by JC Verdié

<dsr> scribe: dsr

<scribe> scribenick:dsr

Daniel Davis introduces the panelists.

Jan Lindquist presents protected content in browsers (slides)

Seeking feedback from industry, especially from the content provider side.

Today multiple choices for delivering content: Silverlight or other plugin that supports DRM, or native client.

Legacy plugins have poor security and stability and are being dropped.

You may need to use different DRM systems for different browsers; this is expensive. My question is whether this is acceptable to industry.

Alternatively, we need to come together to work on a cross browser solution.

Another question is whether to drop the requirement for DRM - is that viable?

Or you could restrict the number of browsers you support, is that acceptable?

The Chrome browser is ceasing support for the browser plugin API (NPAPI), so plugins will soon not work at all on Chrome.

IE likewise will drop support for plugins.

Users will see a puzzle piece on pages requiring plugins, so developers need to adapt!

Different DRM on different browsers is perhaps anticompetitive!

Andreas Tai presents on web-distribution formats for subtitles and captions.

WebVTT and TTML.

Shows examples of the corresponding markup.
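The slides themselves are not reproduced in the minutes. As a rough illustration (this example cue and the small parser are the editor's, not taken from the talk), the same subtitle cue in the two formats, plus a tiny WebVTT timing parser:

```python
# Illustrative only: one subtitle cue expressed in WebVTT and in TTML,
# plus a minimal parser for WebVTT cue timings. Not from the workshop slides.

WEBVTT_CUE = """WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the workshop.
"""

TTML_CUE = """<tt xmlns="http://www.w3.org/ns/ttml">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:04.000">Welcome to the workshop.</p>
    </div>
  </body>
</tt>
"""

def parse_webvtt_timing(line):
    """Parse 'HH:MM:SS.mmm --> HH:MM:SS.mmm' into (start, end) in seconds."""
    def to_seconds(ts):
        h, m, s = ts.split(":")
        return int(h) * 3600 + int(m) * 60 + float(s)
    start, _, end = line.partition(" --> ")
    return to_seconds(start.strip()), to_seconds(end.strip())

print(parse_webvtt_timing("00:00:01.000 --> 00:00:04.000"))  # (1.0, 4.0)
```

The conceptual similarity Andreas refers to is visible even in this toy case: both formats carry the same begin/end times and text, differing mainly in syntax (plain text cues vs. XML).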

W3C is rechartering the Timed Text WG for work on TTML 2.0.

Which orgs are involved? SMPTE-TT, EBU-TT-D, CFF-TT (UltraViolet)

SMPTE-TT is a superset of TTML. CFF-TT is a subset of TTML plus a few extensions, likewise for EBU-TT-D.

WebVTT started in the WHATWG and doesn't use XML.

Currently a draft community spec; we're expecting it to transition onto the W3C REC track.

Both formats (TTML and WebVTT) will be coordinated by the rechartered Timed Text WG.

Requiring support for 2 formats is a nuisance, but doable. There is work on defining the mapping between the two.

Most browsers only support WebVTT (exception is IE)

A good outcome would be for browsers to start supporting both formats.

Jean-Claude Dufourd presents on discovery in W3C

Web & TV IG Home Network Task Force came up with requirements for local discovery.

Agreement on support for legacy devices with UPnP or Bonjour.

The requirements were picked up by the W3C Device APIs WG. For some time we worked on Web Intents as a basis for discovery, but this stopped along with work on Web Intents.

A separate work item addressed a network service discovery API, but browser vendors have said they don't want to implement it, primarily due to concerns with legacy devices.

Requiring CORS rules out legacy devices, as a result the spec has more or less stalled.
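The CORS point can be made concrete: a CORS-enforcing client only reads a cross-origin response if the responder opts in via an Access-Control-Allow-Origin header, which legacy UPnP/Bonjour devices predate and never send. A toy sketch (function name and headers invented for illustration):

```python
# Toy illustration of why requiring CORS rules out legacy devices: the browser
# only exposes a cross-origin response when the responder opts in via the
# Access-Control-Allow-Origin header, which legacy UPnP/Bonjour devices
# (shipped long before CORS existed) never send.

def cors_allows(response_headers, requesting_origin):
    """Return True if a CORS-enforcing client may read this response."""
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == requesting_origin

legacy_device_response = {"Server": "UPnP/1.0"}  # no CORS headers at all
modern_device_response = {"Access-Control-Allow-Origin": "*"}

print(cors_allows(legacy_device_response, "https://tv.example"))  # False
print(cors_allows(modern_device_response, "https://tv.example"))  # True
```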

What can we do to unblock progress on local discovery?

Perhaps we could focus on discovering browsers which can then talk to each other?

A third option is to incorporate discovery into service-specific APIs such as the web screens API.

A fourth idea is some kind of object sharing. Your ideas are welcomed!

Jean-Charles: we have had quite a few presentations on discovery in this workshop. What do you think the issue is for standardization? Is W3C the right place for this?

Jean-Claude: I am not quite sure that network discovery is what is needed as such. This is a good place for discussion. Native apps are one approach but work in silos. I don't really have a good answer.

Victor: does WebRTC help?

David Singer explains the essence of WebRTC - a means to embed a video telephony client in your web page.

Jan: we could move the security and privacy up to the web layer. In other words server based discovery.

Jean-Charles: DIAL is using nssd and is pretty much what you said.

David: we need to clarify why web pages need to discover devices - cites issues with fingerprinting of the home as well as users!

Jean-Claude: discovery is easy within the Apple or Google silo, I want an open standard that works across vendors.
... I think it is clear the solution needs to be based upon web technologies.

Clarke: The UPnP perspective is to take advantage of the devices that are out there. I fully agree with David on the need to address privacy issues.

From a cable perspective, we want web solutions to find devices as media sources and as media renderers.

Jon: I also agree with David. You certainly don't want wild web pages to scan your environment.

Giuseppe: option 1 (discover with UPnP) doesn't solve all the requirements.

Bryan: local discovery could be via bridge apps and will be solved one way or another -- the question is when it becomes native to the web.

Giuseppe: it isn't just about privacy, there are other issues to be addressed.

Daniel: there is slow progress on the network discovery spec, it hasn't completely stalled. However, perhaps legacy device support may not be achievable.

[scribe wonders why no one has mentioned raw sockets API in sysapps]

Questions to the panel

Jean-Charles: let's switch topics. Andreas you seem optimistic that the Timed Text WG will succeed in finding a compromise.

Andreas: interested in how W3C can address this challenge of brokering an agreement.

Jean-Charles: let's ask Philipp Hoschka for his view on W3C direction.

Philipp: this has a lot of history with plenty of meetings. We have at least brought the two communities into the same group and initiated discussion on mappings. Want to hear from David Singer.

David: the battle is to get subtitles to the end users who need it. The good news is that the formats are conceptually very similar, but we need to look at the features that are not in both. Do we need to address these and if so, how?
... we are succeeding in rolling out subtitles to address accessibility of media streams. I am not pessimistic.

The IPR situation around WebVTT is now much clearer which should help.

Andreas: the next step is to study the two formats and assess the overlapping and disjoint features.

David: there are open source contributors working on TTML.

Giuseppe: Opera's browser is based upon Chromium, but we haven't taken a position on the formats.

Jean-Claude: let's ask the content producers for their views

Andreas: The BBC, for example, has produced TTML and has to deal with archiving requirements.

Simon Waller: does the browser need to support TTML? You could hand over to the media player.

Jean-Claude: embedding subtitles in the stream has its drawbacks, e.g. inability to index it

Andreas: I agree with David that it is not a question of which is a better standard, but rather addressing end user needs for accessibility.

David: disappointed about only addressing hard of hearing, what about addressing descriptive text for people with visual impairments, should we work on that?

Chris Tirpak: apart from the BBC and one US player that produce TTML, everyone else is providing '608

Clarke: confusion in US about how to signal visually impaired audio
... some abuse of language tags, which isn't viable
... we're talking with a range of groups to address this properly.

David: please join the working group to avoid '608 as a 3rd format we have to deal with.

Jon: '608 is a regional format not a global one.

Jean-Charles: let's talk about protected content and the move to EME. Does this solve our problem or move it elsewhere?

Jan: concerned with consistency of user experience, and EME related browser components.

Jean-Charles: I don't see EME as solving the UX issues

David: we should ask industry about alternatives to DRM, as it brings so many costs and complexity
... in the music industry Apple managed to persuade content owners that DRM wasn't worth it.
... maybe we can slice this problem differently

Battery issues pull decoding close to the hardware, which complicates cross device support for DRM

Clarke: EME and DRM is the best we have right now, and we should figure out how to make it work

Jean-Claude: hardware, OS and browser, whose problem is it?

Jan: could we come up with a standard for pluggable downloadable CDM components?

??: I am afraid that we will see device specific DRM which will hurt the users in the end

Tord Persokrud: multiscreen will make this worse

Jan: if we don't make this work on browsers then native apps will win out.

Jean-Charles thanks the panel and we break for lunch.

<Alan> scribenick: Alan

Scribe+ Alan

Session 7 - More Challenges for Web Media

Moderator: Kaz Ashimura

Network-Assistance and Server Management in Adaptive Streaming on the Internet

Speaker: Lukasz Kondrad - Huawei Technologies

DanielDavis: We had Xin Wang on the agenda but he couldn't make it so we have his colleague stepping in.

Lukasz: This presentation was supposed to be presented by Xin Wang from the US.
... The topic is Network-Assistance and Server Management in Adaptive Streaming on the Internet

[slide 2]

Lukasz: Adaptive streaming has been successfully implemented. It was standardized by MPEG and adopted by industry players.

[slide 3]

[architecture diagram of Current: Client Managed DASH]

[slide 4]


Lukasz: It's hard to create a business model to give customers a quality experience.

[slide 5]

Lukasz: We need to think about the tools used to deliver a better experience.

[slide 6]

scribe: To improve this we should think about shifting the adaptation logic to the server side.
... If we had clients with different implementations, the client that tries to have a lower footprint would get worse service.
... By shifting it to the server side we make it more equal.
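The server-side adaptation idea can be sketched as a shared selection policy applied uniformly to all clients. The snippet below is an illustration only (the representation list, safety factor, and names are invented, not from the slides):

```python
# Illustrative sketch of server-side rate adaptation: the server, not each
# client, picks the representation to send, so heterogeneous client
# implementations get comparable treatment under one shared policy.
# All names and numbers here are invented for illustration.

REPRESENTATIONS_KBPS = [400, 800, 1600, 3200]  # available encodings of one stream

def select_representation(measured_throughput_kbps, safety_factor=0.8):
    """Return the highest bitrate that fits within a safety margin of throughput."""
    budget = measured_throughput_kbps * safety_factor
    fitting = [r for r in REPRESENTATIONS_KBPS if r <= budget]
    return fitting[-1] if fitting else REPRESENTATIONS_KBPS[0]

# A low-footprint client and an aggressive client reporting the same
# throughput are given the same quality by the server.
print(select_representation(2500))  # 1600
print(select_representation(300))   # 400 (fall back to lowest)
```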

[slide 7]

[slide 8]

Q&A

<turgay> does that improve net neutrality?

[slide 9]

Kaz: I have some questions.
... On slide 7 I was wondering about the difference between BGR and GRB

Lukasz: that's a typo - they should be the same.

Kaz: On slide 8 can you explain more?

Lukasz: I'll pass that along to get you an answer.

@@@: With this kind of technology allowed for ???

scribe: 2nd question, if anything is allowed on the client side, even if the client is doing things, the server can do what it wants.

Lukasz: I believe the client needs to provide some basic things and the work should be shifted to the server.

Audio Definition Model

Speaker: David Marston

Presentation: Audio Definition Model

Dave: Bear with me, I'm not a Web expert.
... but audio is moving along quickly; with HD TV, you need something that helps the audio.

[Slide 2]

[Slide 3]

[Slide 4]

Dave does a poll of who has 5.1 technology - 20% but only 3 people have them properly positioned

Dave discusses how to get 3D sound from newer systems

Dave: Need the ability to describe where each channel applies

[Slide 5]

[Slide 6]

Terminology list

[Slide 7 Audio Definition Model Diagram]

[Slide 8 Simple Channel Based Example]

[Slide 9 Coded Audio Example]

[Slide 10 Object Based Example]

[Slide 11 XML Representation]

[Slide 12 Standard Configuration File]

[Slide 13 Custom Configuration]

[Slides 14 - 16]

[Slide 17]

Current Status


[Slide 18]

Future Work

The End

Q&A

Kaz: Questions?

Dave Singer: With 5.1 channels I know where to find stuff; in the object-based example you didn't appear to have a label.

Dave Marston: We do.

DaveS: MPEG has labelling as well.

DaveM: We want this to be compatible.

Kaz: I have a question.
... How do we integrate this? Via XML?

DaveM: The two would work together and can be combined.
... There is the use of objects so you can set your screen up to meet specific needs.
... It gives you more flexibility.

PARS - Multiscreen Web App Platform

Speaker: Dong-Young Lee - LG Electronics

Subject: PARS - Multiscreen Web App Platform

[Slide 2]

[Slide 3]

Dong-Young: In this approach each device can provide services.
... The UI may be different across devices.

[added 2nd set of points]

A set of cooperating applications

Dong-Young: The information process is simplified via SDKs.

[adds 3rd set of bullets - A distributed application]

[slide 4]

[slide 5]

Dong-Young: I believe the Web platform is best for applications, for the reasons on screen.
... No pre-installation is needed; it runs immediately, but can be sandboxed for security

[Demo of PARS]

[Slide 7 - Architecture]

Dong-Young: The framework part isn't something we can standardize because we have dependence on local features.

[Slide 8]

<yosuke> Dong-Young: re standards, the Presentation API from the CG is a good candidate because it satisfies all three reqs: local discovery, p2p messaging, and remote execution.

scribe: For interoperability we need standards at the protocol level
... We talk about whether features should be done by the web app or by the UA
... Security, Privacy and Usability are equally important

Dong-Young: For time synchronization we need to do it in JavaScript, but if we need stronger synchronization we'll need something else.
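A script-level synchronizer of the kind Dong-Young describes typically starts from a round-trip offset estimate. The sketch below (Python for illustration; function name and numbers are invented, not from the presentation) shows the basic Cristian-style calculation:

```python
# Illustrative Cristian-style clock-offset estimate, the kind of calculation a
# script-level (e.g. JavaScript) synchronizer can do. Stronger synchronization,
# as noted in the talk, needs support below the application layer.

def estimate_offset(t_request_sent, t_server, t_response_received):
    """Estimate remote-minus-local clock offset from one request/response pair.

    Assumes network delay is roughly symmetric; the error is bounded by
    half the round-trip time.
    """
    round_trip = t_response_received - t_request_sent
    # The server's timestamp is assumed to correspond to the midpoint.
    local_midpoint = t_request_sent + round_trip / 2
    return t_server - local_midpoint

# Local clock sends at t=100.0, server stamps 205.0, reply arrives at t=102.0:
# midpoint is 101.0, so the server clock is ~104.0 ahead.
print(estimate_offset(100.0, 205.0, 102.0))  # 104.0
```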

[applause]

Q&A

Kaz: Questions?

@@@: Can you repeat what the Presentation API is?

Dong-Young: This is work that was done in a CG. It takes care of rendering and P2P messaging.

Bryan_Sullivan: In a previous slide you showed the framework. Is it executing outside the browser?

Dong-Young: In our experiments we have both types.

Bryan: Between these two devices, what's involved?

Dong-Young: Browsers and Systems level services.

[scribe missed that one - sorry]

<Louay> Presentation API is the work of the W3C Second Screen Presentation CG

IPTV using P2PSP and HTML5 + WebRTC

Speaker: Vicente González-Ruiz and Cristóbal Medina-López

Subject: IPTV using P2PSP and HTML5 + WebRTC

Vicente: URL of slides: http://www.p2psp.org/slides/Web-TV-Convergence-2014

[Slide 2]

<Louay> Second Screen Presentation CG: http://www.w3.org/community/webscreens/wiki/Main_Page

Vicente: Interactivity is important - as is security of the application
... It should be possible to implement such a system

[slide 3]

Vicente: The most important thing is using your IP network. On the left you have linkage via ISPs
... You have video you want to send and you can see where it needs to have a supporting business model
... The alternative is to move it to the application and have unicast

[Slide 4]

Vicente explains various models and P2P advantages

[slide 5]

[slide 6]

<ctirpak_> fyi: p2psp slides url returns 404

Vicente: The distribution is shared and done by chunks and there are ways to cover data loss

[slide 7]

<Pecko> octoshape?

[slide 8]

[slide 9]

<ddavis> Slides available here: http://www.p2psp.org/slides/Web-TV-Convergence-2014/

[demo]

demo works!

[Slide 11]

<Pecko> demo always works (when no one is watching)

[Slide 12]

[applause]

Q&A

Kaz: Questions?
... I was wondering about synchronizing the files. You're using P2P. What client is used in implementations?

Vicente: It is in JavaScript, so its implementation is simple.

Kaz: is it possible to provide that as some kind of JavaScript library?

Vicente: right now the source is open.

DanielD: Are you using WebSockets or WebRTC?

Vicente: WebSockets because it's outside the web browser.
... probably WebRTC will be used in the future.

Victor Klos: Have you used it for live TV? Have you measured the delays?

Vicente: We haven't measured that but it will depend on the number of blocks.
... This is a protocol where, if you follow all the rules, you have data recovery.
... So if you have short chunks you will have low latency; if you use bigger ones you will have more latency.
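The chunk-size trade-off Vicente describes is simple arithmetic: buffered chunks must arrive before playback starts. As an illustration (the function name and numbers are invented, not from the talk):

```python
# Illustrative arithmetic for the chunk-size/latency trade-off: latency grows
# with chunk duration and with how many chunks are buffered for loss recovery.
# Numbers are invented for illustration.

def startup_latency_seconds(chunk_duration_s, buffered_chunks):
    """Rough lower bound on latency: buffered chunks must arrive before playback."""
    return chunk_duration_s * buffered_chunks

print(startup_latency_seconds(0.5, 4))  # 2.0  -> short chunks, low latency
print(startup_latency_seconds(4.0, 4))  # 16.0 -> bigger chunks, more latency
```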

Bryan: How is the peer group formed?

Vicente: By registering to a yyy server.

Bryan: Have you considered the difference between browsers in this peering?

Vicente: Right now it's agnostic, but if you have to send it to multiple users you need formats for each set of browsers.

Bryan: How would this work with EME?
... If the media being distributed had EME involved?

Vicente: Probably the media should be encrypted. This complexity should be taken care of by the browsers.

###: When the connector is lost, all behind it are lost. Are there recovery capabilities?

Vicente: In this implementation it knows which targets are where and if they are available.
... We haven't measured this type of problem.

BREAK TIME!!!!!

<davem> For those who want to play with audio definition model and audio rendering: http://data.bbcarp.org.uk/saqas/

Session 8 — Next steps and wrap-up

Wrap-up session

<ddavis> scribenick: ddavis

<scribe> scribe: Daniel

Glenn Deen: I'm Glenn Deen from NBCUniversal

scribe: I'm going to talk about GGIE - a project we're working on.
... It's our vision of one way to solve the problem of how content creators and content providers can better connect with content users.
... Firstly, the internet has changed our relationship with content.
... We used to give content to users. Now everyone is a creator.
... This is something that's new and we have to recognise. It's changed our relationship.
... On the internet, there's one protocol for all content.
... The internet has done a great job of producing a reliable transport network. But there is a bigger end-to-end connectivity problem.
... How do you mesh the end-to-end view of the world with the vertical silos (IETF, ISOC, etc.)?
... Our idea is to get a group of technical experts together - not a big consortium - and say how do we make it work well from content creation to consumption.
... And how do we feed that back to standards organisations?
... We realised it started falling into 5 categories:
... 1. Scalability
... 2. Metadata - everything about the content, inside the container and outside.
... 3. Content Identification - using techniques like fingerprinting and watermarking to map and identify.
... 4. User Identity - today's models are pretty simple
... 5. Privacy - you may want to be able to separate different videos for different audiences.
... Our approach (GGIE) is:
... To not write standards. We'd like to help existing standards organisations.
... To create use cases and gap studies, free for groups to use and modify.
... To be open and allow easy participation.
... To not create or accept IP contributions.

[shows diagram of project workflow]

<scribe> Chair: Giuseppe Pascale

Giuseppe: Looking at gaps...
... There are some things partially covered by existing specs such as Media Fragments, HTML Multitrack API, HTML5 MediaController
... Should this be covered by W3C?

Jon: You can create a list of gaps but they need to be consistent.
... It's not clear - some should be covered by W3C, some less so.
... If they're part of a future HTML5 spec and they're not specific to TV/broadcast, will they end up as a Recommendation and be implemented anyway, without someone like Netflix pushing them?
... There are certainly gaps in those areas. Specs that have developed in other organisations cannot be exposed to web apps.
... The examples you have are valid as gaps.
... Have a look at the issues filed by the BBC - DataQueued, etc. There might be some suggestions there.

David_Singer: If we're looking at gaps, we have to focus.
... For synchronisation, we've done a lot of it already.
... One case is playing something on an audible device synced with something on another device that is not aware. Then you have to do audio fingerprinting.
... I'm not sure where the specs are for that. I think the gaps need to be broken down.

Giuseppe: Another use case is multiple videos on a device.

Jon: Another example is better synchronisation of apps to video. For example enabling something to be shown without going through JavaScript.
... You get a trigger coming out of the video that can be acted on without going up to the JavaScript.
... People want to sync video frame-accurately.
... The synchronisation does not involve JavaScript.

Victor: Would it help if video was measured in milliseconds rather than seconds?

Giuseppe: You're saying the timeline resolution is not precise enough?

Victor: I'm not that familiar with the specs.

Clarke: CableLabs has submitted a lot of stuff for timed text - are there shortcomings in that?

Giuseppe: There are some issues, some bits incomplete. It's important for people working with these specs to submit bug reports when they find gaps.
... I wonder if there should be more of a communal effort to follow the discussion.
... We could put bugs and issues on the Interest Group mailing list and get people to try to support it.
... The TV Interest Group is meant to be a community - not specifically to fix bugs.

Clarke: The only thing worse than creating a new spec is creating a new spec when there's already an existing one.

Victor: I just checked - media currentTime is in seconds but it should be in milliseconds.

Giuseppe: Also we have testing. People feel it's important.
... But there isn't much information or the information is hard to find.
... And there's a need for more test cases.

David_Singer: There's a bigger question there. W3C's way of thinking about testing is different to MPEG which is different to other groups.
... Do you do reference tests, test suites, conformance software, example software?

Giuseppe: I understand W3C was willing to extend its meaning of testing.

David: We have a mismatch between various bodies.
... And testing is much more important in the TV regulatory industry.
... We tend to think of test suites as checking specs are correct rather than if implementations are correct.

Giuseppe: Right, and this was extended a while ago.

Bryan_Sullivan: I added to the wiki page that what we'll do is address what David's talking about.
... What practices can W3C adopt?
... Can we move forward using existing events that are out there?
... The assumption is that W3C wants to write tests for the TV profile.

Giuseppe: Should we provide an activity for TV-related organisations for testing?

Jon: Before looking at more tests there are a lot of existing ones. Even using those requires work.
... They don't have a unique identifier.
... Also it may not be obvious which ones have been properly reviewed.
... So start by making the existing stuff usable.

Simon_Waller: There needs to be better exchange and better understanding of what W3C already has and what groups like HbbTV require.
... Maybe that requires a joint conference call or something.
... Once you have a common understanding, there can be agreement on how to move forward.
... Is W3C going to do anything about that?

Giuseppe: There's a lot of stuff that can be used already.

Simon_Waller: Maybe there is but there's a lack of understanding.

Giuseppe: So we need to get together to discuss what's required.

Bryan: The issues are part of Test The Web Forward getting better organised, e.g. for TV, which is what we'll do to address these types of questions.

Jean-Claude_Dufourd: When you develop software you have unit testing. Some say you should write the tests before you write the function.
... Should W3C do this, writing assertions at the same time as the spec?

Giuseppe: Is that possible today with existing specs?

Bryan: That discussion is taking place in TTWF

Andy_Hickman: From my lurking on W3C lists, there's a level of operational stuff that makes tests work. Procedural challenges, distribution, etc.
... I don't know whether there's an appetite for that discussion in W3C but it's important.
... Also, HbbTV managed to get its first version test suite out without spending any money - 100% voluntarily.
... It's not just about having money, it's about a mindset.

Giuseppe: I agree with the operational part.
... Moving on to rendering and control of linear video using <video>
... We've seen some groups are using differing APIs.
... If you want to use the video element, maybe you've found something missing in the HTML5 spec?

JC_Verdie: One issue is how to access the program guide? This is called the tuner in Europe but doesn't make sense in the US.
... It shows the list of things to show and how to show them.

Jon: Access to EPG is completely the wrong words - it's about a list of services.

JC_Verdie: It's about getting the list of services depending on where you are - broadcast and IPTV.
... Much more than the pure rendering in the video element.

Giuseppe: Could that be a mapping exercise?

Jon: No, that just sweeps it under the carpet.
... Stuff that is missing includes things like parental control.
... If an app blocks video because it needs parental permission, that needs to be defined.
... The error generated when parental access is denied needs to be defined - that's just one example.

Giuseppe: Is there an interest in coming up with such a list and defining it?

JC_Verdie: We have a strong interest in this. So-called side-standards give us what we want.
... TV is not global but I believe we can come up with a standard that can be used globally and this is the right place to do that.

Jon: It is possible to create an API for this, as long as you get buy-in from all the people involved.
... Whether the benefits of one API justify the time or effort involved is a different question.
... The API would have to support the union of the functionality of the existing APIs, otherwise you can't get rid of legacy support.
... We shouldn't be creating a new API just because we need a new one - there needs to be a good reason for it.
... If the first version is lowest-common-denominator and only has basic features, the other APIs won't go away - there's no point.

Giuseppe: If a community group was created, how many people would join?

6 hands go up

David_Singer: Isn't there already a community group for this?

Philipp_Hoschka: We had a discussion within the TV IG and the task force found there is a need for this "Tuner API"

<dsinger> see also http://www.w3.org/community/agelabels/

David_Singer: there's an age labels data model community group
... Maybe that's something relevant.

Giuseppe: So this is something to follow up on.
... Next, we have issues around delivery and rendering of IP video.
... The approach that HbbTV is using is the right one (submitting bugs). Can the TV IG help SDOs to drive this and get other SDOs to do the same?

Jon: Would it be good for the IG to hold conference calls on bug discussions?
... At least giving the bug reporter the chance to talk about it in a phone call.

ddavis: Would mailing list discussions work?

Jon: Fixed conference calls with pre-defined agenda encourage people to prepare and put it higher on their agenda.

Giuseppe: The main issue is time zones.

kaz: we already have liaisons with the other SDOs, so this is kind of expanding W3C liaison mechanism, isn't it?

Giuseppe: OK, but people can use the IG if they don't know where to go, but we want the spec writers to comment.

Philipp: We watch out for bugs but sometimes miss them so please raise the awareness of your bugs in the IG.
... Also, there are two fora where we talk about media - the TV IG and also the HTML Media Task Force, mostly about EME and MSE.
... If you want to get people interested that's also a good option.

Jon: HbbTV was invited and did join a phone conference on MSE. It helped save time by avoiding misunderstanding but didn't end positively.

Giuseppe: So next steps are to get more support for bugs submitted. - we should follow up in the IG.
... Next is discovery and communication issues.
... I didn't feel there was much missing that isn't being worked on.

JC_Dufourd: One way of making discovery available to a web page is as an extension.
... So work on it in sysapps using the raw sockets API.


Giuseppe: That wouldn't address web apps.

JC_Dufourd: And as I did two years ago, if web and TV members care they should participate more in the second screen CG, Sysapps WG and Network Service Discovery API in DAP WG.

Philipp_Hoschka: As a result of this workshop, we've started a discussion with the second screen CG to move it to a working group. They seem to be open to that proposal.
... It's likely that we'll do that.

Giuseppe: Another topic raised is performance measurement.
... There is a Web Performance Working Group. Performance was discussed but was not a priority.

David_Singer: We should be careful - if we're successful this will become really important.
... It's up to a trade association to set performance targets and the W3C to set performance measurement capabilities.

Jon: There is absolutely a performance issue.
... W3C is the right place to have a discussion - what kind of performance issues are there and how can you measure it.
... The UK uses a tech called MHEG-5. There were issues with renderers that made the feature unusable.
... Working out what you can measure and how you can measure is seriously hard work.

Simon: Jon and I have been talking about this for a while.
... It will become an issue in the industry if things are specced with performance requirements.
... People design and optimise for a benchmark.
... And will ignore everything else. They won't bother with anything else.
... If apps are going to go that way, would the benchmarks achieve what they want?
... You could end up with apps being blacklisted even though they pass the performance tests.
... I'd recommend steering clear of it.

Jon: The question is whether to leave it or do something that's not perfect.

Yosuke: As a data point, I joined the Web Performance WG. The browser vendor engineers were doing great work.
... They're working on how to speed up their rendering engine and providing an interface for web app developers to improve the performance of the web UX.
... They were looking at how to measure e.g. framerate.
... At that time, they had to allocate their resources to other things but I think things are now changing, so talking with them is really effective to improve this.

Giuseppe: If there's interest we could raise the issue again.

Philipp: There's increased interest within W3C regarding gaming.

Simon: Graphic performance is an issue and WebKit is trying to optimise it as much as we can.
... I don't know what the aim of the Web Performance WG is

Giuseppe: They're working on how to measure it.
... That's the first step to create a benchmark.

ddavis: It's not just about benchmarks - they create ways for app developers to find bottlenecks in their apps.

JC_Verdie: We need to be able to provide appropriate tools.
... It's not up to the W3C to decide a particular framerate.

+1

Jon: Tools are a step forward.

JC_Verdie: If the W3C sets a target level it will be bypassed by other organisations.

Jon: If there were advice about how to measure certain kind of performance, that could be used by applications and services.

Giuseppe: Next, other accessibility features beyond timed text.

Chris_Tirpak: This should be discussed. It's being regulated in the US.

Philipp: There's some work being done in TV.

Bryan: Specifically the CVAA - the Communications and Video Accessibility Act.

Chris_Tirpak: The reason I say you should continue that, is that when WebVTT and TTML diverged, I'd hate to see a similar split in accessibility for video.

David_Singer: I think it should be on our list to discuss.

JC_Verdie: I think it's the right timing because Judy Brewer is aware of the regulatory issues

Kaz: Mark Sadecki from the W3C accessibility team joined one of our TV IG task force calls so we can ask him to help.

Giuseppe: Finally, CDM and content protection.
... Pluggable CDM for EME - what about this?

Clarke: I'd say it's more an issue for the EME (HTML Media Task Force) group.
... Unless we have specific requirements it's more something for them.

Giuseppe: I think the requirements are clear from our discussion. Maybe it's not even something for W3C to address.

JC_Verdie: We have as an industry a clear requirement - please don't add more mess to the current mess.
... Either the media task force is already taking care of this, or our warning won't change anything.

Stephan_Steglich: I'm not sure where the right place to discuss it is.
... The discussion should take place but where? I don't know.
... We can send a proposal to think about the CDM issue.
... Raise it with different groups and start the discussion.

Giuseppe: That's my list. Did I miss anything?
... There's a lot of talk about metadata but I haven't heard anything specific.

Philipp: A big thank you to our host, IRT. Also to our sponsor, NBCUniversal.
... Finally, thanks to you all for coming. Thank you.

[ adjourned ]