[openscreenprotocol] Requirements for multi-device timing while streaming. (#195)

mfoltzgoogle has just created a new issue for https://github.com/webscreens/openscreenprotocol:

== Requirements for multi-device timing while streaming. ==
This issue tracks discussion of protocol changes to allow multi-device synchronization of media playback while streaming. This was discussed at the Berlin F2F [1].

It's assumed that, with one sender and one receiver, the current protocol is sufficient to play out audio and video on the receiver with lip sync.
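To illustrate why the single-receiver case is already tractable: both elementary streams carry sender-assigned capture timestamps, so one mapping from sender time to the receiver's local clock keeps them aligned. Below is a minimal TypeScript sketch of that idea; all names are illustrative, not Open Screen Protocol messages.

```ts
// Hypothetical sketch: with a single receiver, lip sync needs only one
// local clock. Audio and video frames carry sender capture timestamps,
// so the receiver schedules both streams the same way.

interface EncodedFrame {
  kind: "audio" | "video";
  captureTimeUs: number; // sender capture timestamp, microseconds
  payload: Uint8Array;
}

class SingleReceiverScheduler {
  // Fixed offset between the sender's timestamp domain and the
  // receiver's local clock, established from the first frame seen.
  private offsetUs: number | null = null;

  presentationTimeUs(frame: EncodedFrame, nowUs: number): number {
    if (this.offsetUs === null) {
      // Map sender time to local time once; a real implementation
      // would also add a jitter-buffer delay here.
      this.offsetUs = nowUs - frame.captureTimeUs;
    }
    return frame.captureTimeUs + this.offsetUs;
  }
}
```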

However, once there are multiple receivers, some timing metadata will need to be exchanged between the sender and the receivers (see the sketch after the list for the kind of metadata involved). Here are a few possible scenarios (not exhaustive):

1. Sending audio to one device and video to another.
2. Sending audio to multiple devices.
3. Sending video to multiple devices.
4. Sending audio and video to multiple devices (possibly with multiple audio and video tracks).
5. Scenarios involving text tracks or metadata cues.
6. Scenarios involving non-1.0-rate playback.
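All of the multi-receiver scenarios share two needs: agreeing on a common wall clock, and telling each receiver when, on that clock, a given media timestamp should be presented. The TypeScript sketch below shows one hypothetical shape such timing metadata could take. None of these messages exist in the protocol today, and the NTP-style offset estimate is a standard technique rather than a proposal from this issue.

```ts
// Hypothetical timing metadata for multi-receiver sync. These types are
// assumptions for illustration only.

// Sender -> receiver: round-trip probe to estimate clock offset,
// in the style of NTP.
interface ClockProbe {
  probeId: number;
  senderSendTimeUs: number;
}

// Receiver -> sender: reply carrying the receiver's timestamps.
interface ClockProbeReply {
  probeId: number;
  receiverReceiveTimeUs: number;
  receiverSendTimeUs: number;
}

// Sender -> all receivers: present media timestamp `mediaTimeUs` at
// shared wall-clock time `presentAtUs`, at `playbackRate`.
interface PresentationCue {
  mediaTimeUs: number;
  presentAtUs: number;
  playbackRate: number; // covers scenario 6 (non-1.0-rate playback)
}

// Standard NTP offset estimate: ((t1 - t0) + (t2 - t3)) / 2, treating
// the sender as the NTP client and the receiver as the server.
function estimateOffsetUs(
  p: ClockProbe,
  r: ClockProbeReply,
  senderReceiveTimeUs: number
): number {
  const t0 = p.senderSendTimeUs;
  const t1 = r.receiverReceiveTimeUs;
  const t2 = r.receiverSendTimeUs;
  const t3 = senderReceiveTimeUs;
  return ((t1 - t0) + (t2 - t3)) / 2;
}
```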

Not all of these may be in scope, however. Items 1 and 4 were pointed out as important in Berlin.

Next steps are to research what's feasible from an implementation point of view, and study the proposals in the following groups:

Multi-Device Timing CG Timing Object: https://webtiming.github.io/timingobject/
Media Timed Events TF: https://github.com/WICG/datacue/blob/master/explainer.md
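For context, the Timing Object draft exposes a motion vector (position, velocity, acceleration, timestamp) through query() and update(). The sketch below shows how a receiver might slave a media element to a shared timing object; the type declarations are hand-written assumptions (the draft is specified in JavaScript and ships no TypeScript types), and the skew threshold and rate-trim constants are arbitrary.

```ts
// Assumed minimal shape of a Timing Object, per the CG draft.
interface TimingVector {
  position: number;     // shared media time, seconds
  velocity: number;     // playback rate
  acceleration: number;
  timestamp: number;    // when the vector was sampled
}

interface TimingObjectLike {
  query(): TimingVector;
  update(vector: Partial<TimingVector>): void;
}

// Periodically nudge a media element toward the shared timeline.
function syncToTiming(video: HTMLVideoElement, to: TimingObjectLike) {
  setInterval(() => {
    const v = to.query();
    const skew = video.currentTime - v.position;
    if (Math.abs(skew) > 0.5) {
      video.currentTime = v.position;   // hard seek on large skew
    } else {
      // Small skew: trim the playback rate to converge smoothly.
      video.playbackRate = v.velocity - skew * 0.5;
    }
  }, 250);
}
```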

[1] https://www.w3.org/2019/05/23-webscreens-minutes.html#x29

Please view or discuss this issue at https://github.com/webscreens/openscreenprotocol/issues/195 using your GitHub account
