Timing mechanisms allow operations to be executed at the correct time. The Web already has several mechanisms supporting timed operations, including setTimeout and setInterval, as well as controllers for media frameworks and animations. However, the Web lacks support for multi-device timing. A multi-device timing mechanism would allow timed operations across Web pages hosted by different devices. Multi-device timing is particularly important for the broadcasting industry, as it is the key enabler for web-based secondary device offerings. More generally, multi-device timing has wide utility in communication, collaboration and multi-screen presentation. This Community Group aims to define a common, multi-device, timing mechanism and a practical programming model. This will improve the Web as a platform for time-sensitive, multi-device Web applications.
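The core idea can be illustrated with the deterministic motion model that underlies the proposed timing object: timed state is a small vector (position, velocity, acceleration, timestamp), and position at any moment is computed from that vector rather than sampled from a local media clock. The sketch below is illustrative only; the function names are not the draft API.

```javascript
// Illustrative sketch of the deterministic motion model behind the
// proposed timing object (names are not the draft spec API).
function createMotion(position, velocity, acceleration, timestamp) {
  return { position, velocity, acceleration, timestamp };
}

// Query the motion at time `now` (seconds): p + v*dt + 0.5*a*dt^2
function queryMotion(motion, now) {
  const dt = now - motion.timestamp;
  return {
    position: motion.position + motion.velocity * dt
            + 0.5 * motion.acceleration * dt * dt,
    velocity: motion.velocity + motion.acceleration * dt,
    acceleration: motion.acceleration,
    timestamp: now
  };
}

// A paused motion (velocity 0) stays put; playing at velocity 1
// advances one position unit per second.
const playing = createMotion(10, 1, 0, 0);
console.log(queryMotion(playing, 5).position); // 15
```

Because the vector is tiny and position is computed rather than streamed, the same motion can be shared by an online service and queried consistently on many devices, which is what makes multi-device timing practical.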
Charter: http://webtiming.github.io
Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.
Fun use of timing objects, sequencers and shared motion to synchronize one thousand mobile phones to become part of the show. Made by high school senior David McAllister for an event at a high school in California. Good work David!
For all we know, this could be the world's largest web-synchronized event to date, which is pretty cool too!
Christoph Guttandin presented a talk about the Timing Object titled “The Timing Object, a pacemaker for the Web” at this year's Web Audio Conf (2018) in Berlin. He has also written an article covering the same theme as the presentation. You'll find the presentation (video) within the article.
Excellent work Christoph.
Christoph has also kindly agreed to share some of the feedback he got from Web developers at the conference in an upcoming post.
Njål and I are going to IBC this year (four days, Friday to Monday). Please reach out if you want to meet, or just come by our booth (Media City Bergen – 8.D10).
At IBC we will be representing
Norut (interest: partnership in European research collaboration in media, in particular further research into the vast possibilities opened up by the use of timing objects and shared motion, i.e. global timing, synchronization and media control in media systems)
Motion Corporation (interest: commercial exploitation of shared motion)
W3C Multi-device Timing CG (interest: Web standardization of timing, synchronization and media control, i.e. the timing object)
We will also give a few presentations on commercial opportunities enabled by global timing, synchronization and media control. Here are some teasers:
Digital signage with a sprinkle of magic (8.D10 – Friday 15.00)
In a world where large screens are available in many public areas, a vast number of opportunities arise if we can limit complexity yet provide flexibility. Waves of ads following the conveyor belts at airports? Interaction between people's phones and a set of screens or even physical objects? Getting the audio track of the in-train entertainment on your smart phone? Creating a compelling viewer experience with synchronized audio and video across multiple screens and devices is a seemingly insurmountable challenge. In this talk we discuss how our web-based synchronization mechanism and tools can be used to unleash your creative people without freaking out your accountants.
Accessibility is king (8.D10 – Saturday 13.00)
Making content available for anyone to enjoy can seem difficult, costly and technically complicated. How can we create accessible and highly customizable experiences without interfering with the other viewers? Do we need to watch TV alone to get the correct adaptation? In this session we will discuss an experiment with the Norwegian public broadcaster NRK and how we built the most advanced accessibility demonstrator ever created, in two days. See how we adapted a piece of original content to personal needs using the most personal of devices: people's own mobile phones.
F1TV, the pinnacle of OTT coverage (8.D10 – Sunday 13.00)
Formula 1 is the pinnacle of motor sport – and likely has the most technologically interested viewers in the world. F1TV is a new OTT offering from Formula 1, opening the floodgates of audio, video and statistics to highly engaged fans. In this session we discuss some of the amazing possibilities for the future of sports coverage. Now, ultra personalized experiences, collaborative viewing and incredibly flexible multi-screen solutions can be made available with a minimum of investment and technical complexity.
An experiment in amateur-camera sports coverage has found real-world use. In this talk, we show and tell how Fire and Rescue services use our synchronization service to build an ad-hoc online studio with drones, car and body cameras, sensors and the cell phones of the public as input sources. The extreme flexibility opens new ways of communicating very complex situations, harnessing the power of the most available resource there is: people in the vicinity.
I’m happy to announce a new publication from the Multi-device Timing CG. A new handbook on media synchronization has recently been published by Springer. Njål T. Borch, Francois Daoust and I were asked to contribute a chapter based on our research in this domain.
The chapter explains how to do media synchronization on the Web, and how media synchronization done correctly is the key enabler for a new and highly attractive media model for multi-device, timed Web media.
I also think this chapter is the most comprehensive introduction to the ideas and proposals put forward through the Multi-device Timing CG at this point.
The author version of this chapter is available here. Please cite the original chapter published by Springer. You may also request the Springer version of the chapter by emailing the authors directly or by requesting access through ResearchGate.
For the most precise synchronization of HTML5 media, and for the best user experience (avoiding audiovisual artifacts), we depend on dynamically adjusting the media element's playbackRate. This works across browsers, but we have identified a subtle bug in Safari's implementation of variable playback rate, resulting in a terrible experience.
There appears to be a side effect when playbackRate is modified, causing the value of currentTime to pause for a short interval, about 0.1–0.3 seconds.
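For context, the rate-adjustment technique itself can be sketched as a small proportional controller: compare the media element's currentTime to the target position and nudge the playback rate, clamped to keep the change inaudible. This is an illustrative sketch, not the MediaSync library; the gain and clamp values are assumptions.

```javascript
// Illustrative sketch (not the MediaSync library) of playbackRate-based
// synchronization: nudge the rate in proportion to the skew between the
// media clock and the target timeline position, clamped to avoid
// audible artifacts.
function computePlaybackRate(currentTime, targetPosition, baseRate) {
  const skew = currentTime < targetPosition
    ? targetPosition - currentTime   // behind: positive skew, speed up
    : targetPosition - currentTime;  // ahead: negative skew, slow down
  const GAIN = 0.5;                  // proportional gain (assumed value)
  const MAX_ADJUST = 0.1;            // cap rate change at +/-10% (assumed)
  const adjust = Math.max(-MAX_ADJUST, Math.min(MAX_ADJUST, skew * GAIN));
  return baseRate + adjust;
}

// In a real page this would run periodically, e.g.:
//   video.playbackRate = computePlaybackRate(
//     video.currentTime, timingObject.query().position, 1.0);
console.log(computePlaybackRate(10.0, 10.0, 1.0)); // 1.0  (in sync)
console.log(computePlaybackRate(10.0, 10.1, 1.0)); // 1.05 (behind: speed up)
```

It is exactly this kind of frequent, small playbackRate update that triggers the Safari side effect described above, since each update stalls currentTime briefly.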
We’ve reported the bug to Apple. Hopefully they’ll be able to fix it.
Njål, Francois and I are publishing a new paper on multi-device timing at IBC 2016. The paper is titled (rather boldly ;)) “Timing: Small step for developers, giant leap for the media industry” and is included as a supporting paper in the paper session “Enhancing the Multi-screen Experience through Synchronisation and Personalisation”. For those present at IBC this year, session details may be found via the session link.
This paper isn’t overly technical, focusing instead on how the industry currently deals with timing, as well as pointing out the opportunities that would come from adopting the multi-device timing approach (i.e. timing object + shared motion).
Our setup in the Futures Park booth was fairly simple: four laptops and two smart phones. As you can see in the picture, we used the laptops to present a selection of HTML5 videos being synchronized across the different screens (using Shared Motion and the MediaSync library). Two laptops were cabled, two on WiFi. We used Firefox and Chrome browsers. One smart phone was used for controls (play, pause and time-shifting of the timing objects, as well as switching between videos). Another smart phone was used to present the audio of the video. We also brought two pairs of headphones, one connected to a laptop and one connected to the smart phone. This way, by using both headphones together, our audience could verify echoless sync between smartphone and laptop. We also made sure to reload the Web browsers to demonstrate how quickly sync is regained – fractions of a second, as long as video data is available. The demos ran in perfect synchrony for four consecutive days, without as much as a glitch. That’s impressive – especially considering the poor networking conditions in the NAB exhibition hall!
Reactions to the demonstrations were overwhelmingly positive. Many people expressed excitement that there was an initiative aiming at improved support for timing on the Web platform. People were also taken aback by the quality of the synchronization as well as the prospect of doing this globally. Some people were curious about use cases, whereas others immediately recognized the need for timing and synchronization in various broadcasting applications, be it live streaming, ad-insertion, tiled screen setups, timed UGC, collaborative viewing, remote control or whatnot. We mentioned concrete use cases such as secondary device applications and alternative audio tracks on secondary devices (accessibility, etc.). We also presented more high-level value promises such as timing consistency in UX and the important role of timing with respect to integration and interoperability between heterogeneous media systems. Finally, we had some very concrete interest from very central players. We’ll let you know when that interest materializes.
So, a big thanks to Norut, Vicomtech and MediaScape for an excellent show at NAB! The next major event for the Multi-device Timing CG will likely be a F2F in Lisbon at TPAC 2016 in September.
We have just published a paper on sequencing in Web multimedia. Sequencing is about activation and deactivation of media items at the correct time during media playback. The paper highlights the importance of decoupling sequencing logic from data formats, timing/control and UI in Web-based multimedia.
Data-independent sequencing implies broad utility as well as simple integration of different data types and delivery methods in multimedia applications.
UI-independent sequencing simplifies integration of new data types into visual and interactive components.
Integration with the Timing Object ensures that sequencing tasks may trivially be synchronized and remote controlled, both in single-page media presentations as well as global, multi-device media experiences (e.g. through Shared Motion).
In short, we see precise, distributed sequencing as a fundamental building block in multi-device timed multimedia.
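A minimal illustration of data-independent sequencing (this is a sketch, not the CG's actual Sequencer API): cues are intervals on a timeline carrying arbitrary data, and the set of active cues at any position is computed independently of data format and UI.

```javascript
// Illustrative sketch of data-independent sequencing (not the CG's
// Sequencer API): a cue is an interval (start, end) with opaque data,
// and the sequencer reports which cues are active at a given timeline
// position, regardless of what the data is or how it is rendered.
function activeCues(cues, position) {
  return cues.filter(cue => cue.start <= position && position < cue.end);
}

const cues = [
  { start: 0, end: 5,  data: 'intro subtitle' },
  { start: 3, end: 10, data: 'map overlay' },
];

console.log(activeCues(cues, 4).map(c => c.data)); // ['intro subtitle', 'map overlay']
console.log(activeCues(cues, 7).map(c => c.data)); // ['map overlay']
```

In a real sequencer the position would come from querying a timing object, and the active set would be diffed on each change to emit enter/exit events to the UI layer; because the cue data is opaque, the same mechanism serves subtitles, overlays, ads or any other timed content.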
The paper will be presented at the ACM MMSys’16, Special section for Media Synchronization, Klagenfurt, Austria, May 10-13.
The paper is available in the ACM library here or from Norut here.