Timing mechanisms allow operations to be executed at the correct time. The Web already has several mechanisms supporting timed operations, including setTimeout and setInterval, as well as controllers for media frameworks and animations. However, the Web lacks support for multi-device timing. A multi-device timing mechanism would allow timed operations across Web pages hosted by different devices. Multi-device timing is particularly important for the broadcasting industry, as it is the key enabler for web-based secondary-device offerings. More generally, multi-device timing has wide utility in communication, collaboration and multi-screen presentation. This Community Group aims to define a common multi-device timing mechanism and a practical programming model. This will improve the Web as a platform for time-sensitive, multi-device Web applications.
Charter: http://webtiming.github.io
In collaboration with Vicomtech-IK4, we ran some tests on HbbTV 1.5 to see whether we could exploit Shared Motions to add sync capabilities to existing smart TVs. HbbTV 1.5 has no explicit support for synchronization, and while HbbTV 2.0 will bring this, many existing smart TVs will never get the upgrade. If you are interested and have knowledge of HbbTV, we welcome any input on this initial experiment.
We quickly discovered that the TV's media element cannot provide a good user experience when slaved to a Shared Motion: it lacks variable playback rate, and skip operations are very slow. Our approach was therefore to ask the media element to play from a given position. This will not be very accurate, but instead of trying to correct playback on the TV, we adjust the Shared Motion to match what the TV actually does. In this way we re-create a master-slave relation, with one master (the TV/Chromecast) and as many slaves as you want.
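The idea can be sketched as follows. This is a minimal, illustrative model, not the actual Shared Motion API: the motion is modelled as a position moving linearly at a velocity, and whenever the TV reports a position that deviates too much, the motion is re-anchored to the TV.

```javascript
// Sketch: adjust a Shared Motion to follow the TV's reported position,
// rather than forcing the TV to follow the motion. All names here are
// illustrative, not the real Shared Motion API.

// Minimal motion model: position moves linearly at a given velocity.
function createMotion(position, velocity, nowMs) {
  return { position, velocity, timestamp: nowMs };
}

function queryMotion(motion, nowMs) {
  const dt = (nowMs - motion.timestamp) / 1000; // seconds since last update
  return motion.position + motion.velocity * dt;
}

// When the TV reports its currentTime, re-anchor the motion to it if the
// difference exceeds a threshold (here 150 ms), so all slaves follow the TV.
function followMaster(motion, tvCurrentTime, nowMs, threshold = 0.15) {
  const predicted = queryMotion(motion, nowMs);
  if (Math.abs(predicted - tvCurrentTime) > threshold) {
    return createMotion(tvCurrentTime, motion.velocity, nowMs);
  }
  return motion; // close enough – leave the motion untouched
}
```

Because other slaves follow the motion precisely, only the TV's own imprecision is visible; everything else stays consistent.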
Here is a film we made from our experiment with a Panasonic TV:
Interestingly, the currentTime reported by this TV fluctuates by around 250 ms. We are, however, able to select the better samples and thereby provide a consistent experience. The TV we tested did need calibration, but the offset seemed to be hardware specific and remained consistent across skips, reloads and other content.
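One way to select the better samples (a sketch, not our exact implementation): assuming playback rate 1.0, the quantity currentTime minus wall-clock time should be constant, and lagging readings make it smaller, so the maximum over recent samples is the freshest estimate.

```javascript
// Sketch: pick the "freshest" reading from fluctuating currentTime samples.
// Assumption: playback rate is 1.0, so (currentTime - clockTime) should be
// constant; stale readings push this offset down, so we keep the maximum.

function bestOffset(samples) {
  // samples: [{ currentTime (s), clockTime (ms) }]
  let best = -Infinity;
  for (const s of samples) {
    const offset = s.currentTime - s.clockTime / 1000;
    if (offset > best) best = offset;
  }
  return best; // media position extrapolated back to clock zero
}

function estimatePosition(samples, nowMs) {
  return bestOffset(samples) + nowMs / 1000;
}
```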
This test is of course IP based, so we asked for input on broadcast content. It appears that we would only have stream events to provide an estimated time (possibly to within a second), but perhaps even relatively rough estimates of the current position could be extracted and used for user-friendly transitions between broadcast and IP-delivered content?
Of course, HbbTV 2.0 devices should be much better at all of this, and provide local synchronization to boot. However, we believe this experiment opens up an interesting transition phase, in which current smart TVs can provide at least some additional functionality for a vast number of users.
If there is any interest in testing other HbbTV devices, we're very willing to provide a simple web application for testing, including manual calibration. Please let us know!
Njål and I are going to the NAB Show, 16-21 April in Las Vegas [1,2], to promote the timing object and the W3C Multi-device Timing CG. If you, or any of your colleagues, are attending NAB, please come by our booth in the Future Park.
The booth is hosted by the EU FP7 project MediaScape. The invitation came as a result of demos at IBC 2015, where MediaScape project lead partner Vicomtech showed flexible and tightly synced multi-device adaptation in regular Web browsers. The MediaScape project uses the Shared Motion approach to distributed timing and control in Web browsers, and has been central in pushing for standardization of the Timing Object through the Multi-device Timing CG initiative.
Vicomtech, represented by Mikel Zorilla and Esther Novo, will demonstrate the many fruits of the MediaScape approach, with a particular emphasis on multi-device adaptation, while Norut (Njål and I) will focus on distributed control and synchronization for IP-based services.
Also, if you are interested in discussing commercial opportunities implied by Web-based timing, you may set up a meeting through our American partners, Glen Sakata or Chris Lennon at MediAnswers. They have a long track record within the American broadcasting industry, and a keen understanding of what opportunities the Timing Object & Shared Motion can offer to the industry.
The Multi-device Timing CG is mentioned in the recent W3C highlights of March 2016, under the entertainment section. Web synchronization is also mentioned as overlapping work in the telecommunications section. Finally, cross-device synchronization is one of three headlight topics for 2016.
Though not explicitly referenced, multi-device temporal control is also relevant for other highlight activities, such as the Second Screen Working Group (synchronization and distributed control), HTML Media (timing provides Web apps with more flexible control over video content – an addition to MSE or an alternative), and Digital Marketing (ads exploiting temporal context, timed multi-screen, etc.).
Hi all, after a bug report about our variable playbackRate detector being too eager, I ran a proper stress test of the new media sync code on Firefox and Chrome on Linux. I made a video if you want to look at it. Modern browsers are amazing. 🙂
I’d like to give a quick update on some developments on Chrome, and some relevant points we have discovered.
The Unified Media Pipeline (which can be enabled on the Android Dev and Beta channels using chrome://flags) gives us variable playback rates on Android, allowing for much tighter sync and a nicer user experience. There's a bug report tracking this development.
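To show why variable playback rate matters for tight sync, here is a minimal sketch (constants and function names are illustrative, not from our actual library): instead of skipping, the rate is nudged in proportion to the offset between the media element and the timing source.

```javascript
// Sketch: rate-based sync. Nudge playbackRate in proportion to the offset
// between the media element and the timing source, instead of skipping.
// The 0.25 clamp is an illustrative choice to keep audio pitch tolerable.

function computePlaybackRate(mediaPos, targetPos, targetVelocity) {
  const diff = targetPos - mediaPos; // positive → media is behind
  const correction = Math.max(-0.25, Math.min(0.25, diff));
  return targetVelocity + correction;
}

// In a real page this would run periodically, e.g.:
// video.playbackRate = computePlaybackRate(video.currentTime,
//                                          to.query().position,
//                                          to.query().velocity);
```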
During testing, we did however discover some discrepancies (which we have communicated), and I think this reflects a need for developers to have good testing systems for timing-sensitive issues. The particular issue we noticed is that currentTime appears slightly off, and it's off by a different amount for audio and video. This gives an audible echo when played back on multiple devices within hearing range. EDIT: This discrepancy was actually caused by some other change that made us detect variable playback rate as broken prematurely. After fixing it, there are still some open issues on some devices, but the difference between media types is gone. Of course, video has a higher bitrate and is therefore also more prone to errors on resource-limited devices.
The reporting of currentTime is quite critical for multi-device playback, with two main points:
consistency – how much does currentTime reporting vary?
correctness – how closely does currentTime reflect the actual player state?
This leads to the question: how do developers measure the correctness of currentTime reporting? Consistency is quite easy to measure; we even have a page to do that. For our sync library, we smooth the values, which seems to work quite well. Correctness is more tricky, and the way we have done it for now is to find a reference player (we use Chrome on a Linux machine) and play back synchronized content on both browsers. An example can be seen in our test of Chrome vs Firefox from about a year ago. Chrome produced some audio artifacts as we synchronized it then – you can hear them in the right ear.
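Measuring consistency can be sketched roughly like this (an illustration, not our actual test page): sample currentTime against a wall clock and look at the spread of the residuals, assuming playback rate 1.0. Correctness, as noted, still needs an external reference.

```javascript
// Sketch: measure currentTime *consistency* from timestamped samples.
// Assumption: playback rate 1.0, so (currentTime - clockTime) should be
// constant; the spread of this residual is the reporting jitter.

function consistencySpread(samples) {
  // samples: [{ currentTime (s), clockTime (ms) }]
  const residuals = samples.map(s => s.currentTime - s.clockTime / 1000);
  return Math.max(...residuals) - Math.min(...residuals); // seconds
}
```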
Possible suggestions for a solution could be:
Add a new currentTimeTS property, providing a tuple of [currentTime, timestamp] to reflect at which time the sample was taken. This would allow us to compensate for the time passed between taking the sample and reading the value.
A requirement that currentTime be updated to a correct value immediately before the timeUpdate event is emitted. This likely means saving [currentTime, timestamp] internally, then estimating the currentTime before reporting it.
Add a property currentTime (or currentTimeTS) to the timeUpdate event, as in suggestion 1 or 2.
Solution 1 could even be writable, providing a way to specify that players should compensate for buffering time, latency of decoding subsystems, etc. This would give us a solid foundation for very nice user experiences, as a skip during playback would behave as if playback started immediately, even if invisibly so (due to a lack of data).
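The compensation in suggestion 1 would be trivial for a reader to apply. A sketch, where currentTimeTS is the hypothetical proposed property, not an existing API:

```javascript
// Sketch of suggestion 1: given a hypothetical currentTimeTS sample
// [currentTime, timestamp], compensate for the time elapsed since the
// sample was taken inside the media pipeline.

function compensatedCurrentTime(sample, nowMs, playbackRate = 1.0) {
  // sample: { currentTime (s), timestamp (ms) }
  const elapsed = (nowMs - sample.timestamp) / 1000; // seconds since sample
  return sample.currentTime + elapsed * playbackRate;
}
```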
I’d love to hear about any experiences or suggestions you might have in this respect.
I announced https://github.com/webtiming/timingsrc a little earlier, a GitHub repository for the Multi-device Timing Community Group. The repository currently includes an implementation of the TimingObject, along with TimingConverters and tools for sequencing and media synchronization.
I also want to announce that the documentation for this code will be available at
Support for timing providers is not yet included (but will be quite soon, thereby opening up multi-device timing).
The timing object, with its associated concepts, effectively defines a new programming model for precisely timed web pages and web applications. Even without support for precise distributed timing and control (through timing providers), it should still represent a significant improvement on the state of the art. Further improvements to synchronization of HTML5 media will likely require timing support from Web browsers.
You are all invited to start playing with this programming model – and to share your feedback with the group. The code is licensed under LGPL.
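To give a flavour of the programming model, here is a minimal, self-contained sketch of the core idea (the real timingsrc implementation adds events, ranges and timing providers on top of this; the class name and clock handling here are illustrative): the state is a vector (position, velocity, acceleration, timestamp), and query() computes the current position deterministically.

```javascript
// Minimal sketch of the timing object model: a motion vector plus a
// deterministic query(). Illustrative only – not the timingsrc API.

class MiniTimingObject {
  constructor() {
    this.vector = { position: 0, velocity: 0, acceleration: 0, timestamp: 0 };
  }
  // Update any subset of the vector; unspecified values carry over from
  // the current (queried) state.
  update(changes, nowMs) {
    const q = this.query(nowMs);
    this.vector = {
      position: changes.position ?? q.position,
      velocity: changes.velocity ?? q.velocity,
      acceleration: changes.acceleration ?? q.acceleration,
      timestamp: nowMs,
    };
  }
  query(nowMs) {
    const v = this.vector;
    const dt = (nowMs - v.timestamp) / 1000;
    return {
      position: v.position + v.velocity * dt + 0.5 * v.acceleration * dt * dt,
      velocity: v.velocity + v.acceleration * dt,
      acceleration: v.acceleration,
    };
  }
}
```

Because query() is a pure function of the vector and the clock, any number of observers (media elements, sequencers, remote pages) can agree on the current position without polling each other.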
The current implementation deviates from the spec on a few minor points. For some of these points I’ll be suggesting changes to the spec. I’ll come back to this later though.
And, not to forget, integration with timing object/shared motion enables HTML5 video and audio to take part in globally synced multi-device playback!
Please note that the API and implementation of the Sequencer are similar, but not identical, to the proposed TimingTextTracks. The differences should be fairly small, though. The Sequencer implementation currently has no known issues and supports all the goals defined in the timing object spec:
data independence (not tied to a specific media format)
UI independence (no predefined UI – custom UI == simple web development)
precise timing (millisecond precision)
expressive controls (suitable for a variety of media products)
dynamic data (supports live data sources and modification of temporal aspects during playback)
And, not to forget, integration with timing object/shared motion enables Sequencers to take part in globally synced multi-device playback!
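The essence of what a Sequencer computes can be sketched in a few lines (the cue shape and function name here are illustrative, not the actual timingsrc API): given cues with intervals, determine which are active at a position, so that "enter"/"exit" events can fire as the timing object moves.

```javascript
// Sketch: the core sequencing question – which cues are active at a given
// position? Cue shape is illustrative, not the timingsrc Sequencer API.

function activeCues(cues, position) {
  // cues: [{ key, start, end }] with half-open [start, end) intervals
  return cues
    .filter(c => c.start <= position && position < c.end)
    .map(c => c.key);
}
```

A real Sequencer additionally schedules the enter/exit transitions precisely from the timing object's vector, instead of polling.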