Timing mechanisms allow operations to be executed at the correct time. The Web already has several mechanisms supporting timed operations, including setTimeout and setInterval, as well as controllers for media frameworks and animations. However, the Web lacks support for multi-device timing. A multi-device timing mechanism would allow timed operations across Web pages hosted by different devices. Multi-device timing is particularly important for the broadcasting industry, as it is the key enabler for web-based secondary device offerings. More generally, multi-device timing has wide utility in communication, collaboration and multi-screen presentation. This Community Group aims to define a common, multi-device, timing mechanism and a practical programming model. This will improve the Web as a platform for time-sensitive, multi-device Web applications.
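To make the proposed programming model concrete, here is a minimal, purely illustrative sketch of a timing object: a deterministic clock defined by position, velocity and acceleration. The class name and the update/query methods echo the draft Timing Object spec, but this standalone implementation is our own sketch, not the spec's.

```javascript
// Illustrative sketch of a timing object: a deterministic clock
// described by a (position, velocity, acceleration) vector.
// Names follow the draft spec; the implementation is ours.
class TimingObjectSketch {
  constructor(now = () => Date.now() / 1000) {
    this.now = now; // clock in seconds, injectable for testing
    this.vector = { position: 0, velocity: 0, acceleration: 0,
                    timestamp: this.now() };
  }
  // Compute the current state from the last known vector.
  query() {
    const t = this.now();
    const d = t - this.vector.timestamp;
    const { position: p, velocity: v, acceleration: a } = this.vector;
    return {
      position: p + v * d + 0.5 * a * d * d,
      velocity: v + a * d,
      acceleration: a,
      timestamp: t,
    };
  }
  // Change the motion, e.g. update({ velocity: 1 }) to start "playback".
  update(changes) {
    this.vector = { ...this.query(), ...changes, timestamp: this.now() };
  }
}
```

In a multi-device setting, update() would be forwarded to an online timing provider so that all connected pages compute the same query() result from the same vector.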
Charter: http://webtiming.github.io
Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.
Njål and I are going to IBC this year (four days, Friday to Monday). Please reach out if you want to meet, or just come by our booth (Media City Bergen – 8.D10).
At IBC we will be representing:
Norut (interest: partnership in European research collaboration in media, in particular further research into the vast possibilities opened up by the use of timing objects and shared motion, i.e. global timing, synchronization and media control in media systems)
Motion Corporation (interest: commercial exploitation of shared motion)
W3C Multi-device Timing CG (interest: Web standardization of timing, synchronization and media control, i.e. the timing object)
We will also give a few presentations on commercial opportunities enabled by global timing, synchronization and media control. Here are some teasers:
Digital signage with a sprinkle of magic (8.D10 – Friday 15.00)
In a world where large screens are available in many public areas, a vast number of opportunities arise if we can limit complexity yet provide flexibility. Waves of ads following the conveyor belts at airports? Interaction between people's phones and a set of screens or even physical objects? Getting the audio track of the in-train entertainment on your smartphone? Creating a compelling viewer experience with synchronized audio and video across multiple screens and devices is a seemingly insurmountable challenge. In this talk we discuss how our web-based synchronization mechanism and tools can be used to unleash your creative people without freaking out your accountants.
Accessibility is king (8.D10 – Saturday 13.00)
Making content available for anyone to enjoy can seem difficult, costly and technically complicated. How can we create accessible and highly customizable experiences without interfering with the other viewers? Do we need to watch TV alone to get the correct adaptation? In this session we will discuss an experiment with the Norwegian public broadcaster NRK and how we built the most advanced accessibility demonstrator ever created, in two days. See how we adapted a piece of original content to personal needs using the most personal of devices: people's own mobile phones.
F1TV, the pinnacle of OTT coverage (8.D10 – Sunday 13.00)
Formula 1 is the pinnacle of motor sport – and likely has the most technologically interested viewers in the world. F1TV is a new OTT offering from Formula 1, opening the floodgates of audio, video and statistics to highly engaged fans. In this session we discuss some of the amazing possibilities for the future of sports coverage. Now, ultra personalized experiences, collaborative viewing and incredibly flexible multi-screen solutions can be made available with a minimum of investment and technical complexity.
An experiment with amateur camera sports coverage has found real-world use. In this talk, we show and tell how Fire and Rescue services use our synchronization service to build an ad-hoc online studio with drones, car and body cameras, sensors and the cell phones of the public as input sources. This extreme flexibility opens new ways of communicating very complex situations, harnessing the power of the most available resource there is: people in the vicinity.
I’m happy to announce a new publication from the Multi-device Timing CG. A new handbook on media synchronization has recently been published by Springer. Njål T. Borch, Francois Daoust and I were asked to contribute a chapter based on our research in this domain.
The chapter explains how to do media synchronization on the Web, and how media synchronization done correctly is the key enabler for a new and highly attractive media model for multi-device, timed Web media.
I also think this chapter is the most comprehensive introduction to the ideas and proposals put forward through the Multi-device Timing CG at this point.
The author version of this chapter is available here. Please cite the original chapter published by Springer. You may also request the Springer version of the chapter by emailing the authors directly or by requesting access through ResearchGate.
For the most precise synchronization of HTML5 media, and for the best user experience (avoiding audiovisual artifacts), we depend on dynamically adjusting the variable playbackRate. This works across browsers, but we have identified a subtle bug in the implementation of variable playbackRate in Safari, resulting in a terrible experience.
There seems to be a side effect when playbackRate is modified, causing the value of currentTime to stall for a short interval, about 0.1–0.3 seconds.
We’ve reported the bug to Apple. Hopefully they’ll be able to fix it.
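For context, the kind of rate correction we depend on can be sketched roughly as follows. The function name and thresholds here are our own illustration, not the MediaSync library API; the idea is simply to nudge playbackRate to close small skews, and fall back to seeking for large ones.

```javascript
// Sketch of skew-based rate correction (illustrative names and
// thresholds, not a library API): given the skew between the media
// element and the timing source, pick a playbackRate that gently
// closes the gap without audible artifacts.
function correctedRate(mediaTime, targetTime, baseRate = 1.0) {
  const skew = targetTime - mediaTime;   // positive: media is behind
  if (Math.abs(skew) > 1.0) return null; // too far off: seek instead
  // Close the gap over roughly 2 seconds, capped at +/- 8%.
  const adjust = Math.max(-0.08, Math.min(0.08, skew / 2));
  return baseRate + adjust;
}
```

A sync loop would call this periodically, assign the result to video.playbackRate, and set video.currentTime directly when it returns null. It is exactly this kind of frequent small playbackRate change that triggers the currentTime stall in Safari.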
Njål, Francois and I are publishing a new paper on multi-device timing at IBC 2016. The paper is titled (rather boldly) “Timing: Small step for developers, giant leap for the media industry” and is included as a supporting paper in the paper session “Enhancing the Multi-screen Experience through Synchronisation and Personalisation”. For those present at IBC this year, details may be found via the session link.
This paper isn’t overly technical, focusing instead on how the industry currently deals with timing, as well as pointing out the opportunities that would come from adopting the multi-device timing approach (i.e. timing object + shared motion).
Our setup in the Futures Park booth was fairly simple: four laptops and two smartphones. As you can see in the picture, we used the laptops to present a selection of HTML5 videos being synchronized across the different screens (using Shared Motion and the MediaSync library). Two laptops were cabled, two on WiFi. We used the Firefox and Chrome browsers. One smartphone was used for controls (playing, pausing and time-shifting the timing objects, as well as switching between videos). Another smartphone was used to present the audio of the video. We also brought two pairs of headphones, one connected to a laptop and one connected to the smartphone. This way, by wearing both headphones together, our audience could verify echoless sync between smartphone and laptop. We also made sure to reload the Web browsers to demonstrate how quickly sync is regained – fractions of a second, as long as video data is available. The demos ran in perfect synchrony for four consecutive days, without as much as a glitch. That’s impressive – especially considering the poor networking conditions in the NAB exhibition hall!
Reactions to the demonstrations were overwhelmingly positive. Many people expressed excitement that there was an initiative aiming at improved support for timing on the Web platform. People were also taken aback by the quality of the synchronization as well as the prospect of doing this globally. Some people were curious about use cases, whereas others immediately recognized the need for timing and synchronization in various broadcasting applications, be it live streaming, ad-insertion, tiled screen setups, timed UGC, collaborative viewing, remote control or what not. We mentioned concrete use cases such as secondary device applications, alternative audio tracks on secondary devices (accessibility etc). We also presented more high level value promises such as timing-consistency in UX and the important role of timing with respect to integration and interoperability between heterogeneous media systems. Finally, we had some very concrete interests from very central players. We’ll let you know when interests materialize.
So, a big thanks to Norut, Vicomtech and MediaScape for an excellent show at NAB! The next major event for the Multi-device Timing CG will likely be a F2F in Lisbon at TPAC 2016 in September.
We have just published a paper on sequencing in Web multimedia. Sequencing is about activation and deactivation of media items at the correct time during media playback. The paper highlights the importance of decoupling sequencing logic from data formats, timing/control and UI in Web-based multimedia.
Data-independent sequencing implies broad utility as well as simple integration of different data types and delivery methods in multimedia applications.
UI-independent sequencing simplifies integration of new data types into visual and interactive components.
Integration with the Timing Object ensures that sequencing tasks may trivially be synchronized and remote controlled, both in single-page media presentations as well as global, multi-device media experiences (e.g. through Shared Motion).
In short, we see precise, distributed sequencing as a fundamental building block in multi-device timed multimedia.
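To illustrate what data- and UI-independent sequencing means in practice, here is a minimal sketch (our own illustration, not the library described in the paper): cues are plain interval/data pairs, and the sequencer only computes which cues are active at a timeline position and which enter/exit events a position change produces.

```javascript
// Sketch of data-independent sequencing (illustrative, not the paper's
// actual library): cues are (interval, id) pairs; the sequencer knows
// nothing about data formats or UI.
function activeCues(cues, position) {
  return cues.filter(c => c.start <= position && position < c.end);
}

// Diffing two positions yields enter/exit events, which is what a UI
// component actually consumes.
function diffCues(cues, fromPos, toPos) {
  const before = new Set(activeCues(cues, fromPos).map(c => c.id));
  const after = new Set(activeCues(cues, toPos).map(c => c.id));
  return {
    enter: [...after].filter(id => !before.has(id)),
    exit: [...before].filter(id => !after.has(id)),
  };
}
```

Driven by a timing object, the position would come from query(); because the cues carry opaque data and the sequencer has no UI dependencies, the same logic can drive subtitles, ad slots or slide decks alike.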
The paper will be presented at ACM MMSys’16, in the special session on Media Synchronization, Klagenfurt, Austria, May 10–13.
The paper is available in the ACM library here or from Norut here.
In collaboration with Vicomtech-IK4, we did some tests of HbbTV 1.5 to see if we could exploit Shared Motion to add sync capabilities to existing smart TVs. HbbTV 1.5 does not have any explicit support for synchronization, and while HbbTV 2.0 will bring this, lots of existing smart TVs will not get these upgrades. If you are interested and have knowledge of HbbTV, we welcome any input on this initial experiment.
We quickly discovered that the media element of the TV is unable to provide a good user experience when slaving after a Shared Motion. It lacks variable playback rate, and skip operations are very slow. Our approach was therefore to request the media element to play from a given position. This will not be very accurate, but instead of trying to correct the playback on the TV, we rather adjust the Shared Motion to match what the TV does. In this way, we’ve re-created a master-slave relation, with one master (the TV/Chromecast) and however many slaves you want.
Here is a film we made from our experiment with a Panasonic TV:
Interestingly, we see that the currentTime reported by this TV fluctuates within a window of about 250 ms. We are however able to select the better samples and in that way provide a consistent experience. The TV we tested did need calibration, but this seemed to be hardware-specific and consistent across skips, reloads and other content.
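The sample-selection idea can be sketched as follows (an illustration with names of our own choosing, not our actual test code): each sample pairs a local clock reading with the TV's reported currentTime. Assuming rate-1 playback, the offset between the two should be constant; stale, lagging reports show up as smaller offsets, so keeping the largest offset favors the freshest samples.

```javascript
// Sketch of selecting the "better samples" from a fluctuating
// currentTime (illustrative, not our actual test code). Each sample
// is { localTime, mediaTime }, both in seconds.
function estimateOffset(samples) {
  // Under rate-1 playback, mediaTime - localTime should be constant;
  // lagging reports yield smaller offsets, so take the maximum.
  return Math.max(...samples.map(s => s.mediaTime - s.localTime));
}

// Predicted TV position at any local time, from the selected offset.
function predictPosition(samples, localTime) {
  return localTime + estimateOffset(samples);
}
```

The predicted position is what we then write into the Shared Motion, so that the slaves follow the TV rather than the other way around.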
This test is of course IP-based. We asked for some input on this for broadcast content. It appears that we would only have stream events to provide an estimated time (possibly to within a second), but perhaps even relatively rough estimates of the current position could be extracted and make for user-friendly transitions between broadcast and IP-delivered content?
Of course, HbbTV 2.0 devices should be much better at all of this, and provide local synchronization to boot. However, we believe this experiment points to an interesting transition phase, where current smart TVs can provide at least some additional functionality for a vast number of users.
If there is any interest in testing other HbbTVs, we’re very willing to provide a simple web application for testing, including manual calibration. Please let us know!
Njål and I are going to NAB Show, 16–21 April in Las Vegas [1,2], to promote the timing object and the W3C Multi-device Timing CG. If you, or any of your colleagues, are attending NAB, please come by our booth in the Futures Park.
The booth is hosted by EU FP7 project MediaScape. The invitation came as a result of demos at IBC 2015, where MediaScape project lead partner Vicomtech showed flexible and tightly synced multi-device adaptation in regular Web browsers. The MediaScape project uses the Shared Motion approach to distributed timing and control in Web browsers, and has been central in pushing for standardization of the Timing Object through the Multi-device Timing CG initiative.
Vicomtech, represented by Mikel Zorilla and Esther Novo, will demonstrate the many fruits of the MediaScape approach, with a particular emphasis on multi-device adaptation, while Norut (Njål and I) will focus on distributed control and synchronization for IP-based services.
Also, if you are interested in discussing commercial opportunities implied by Web-based timing, you may set up a meeting through our American partners, Glen Sakata or Chris Lennon at MediAnswers. They have a long track record within the American broadcasting industry, and a keen understanding of what opportunities the Timing Object & Shared Motion can offer to the industry.