This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.

Bug 21300 - lack of clarity around appendStream
Summary: lack of clarity around appendStream
Status: RESOLVED NEEDSINFO
Alias: None
Product: HTML WG
Classification: Unclassified
Component: Media Source Extensions
Version: unspecified
Hardware: PC Linux
Importance: P2 normal
Target Milestone: ---
Assignee: Aaron Colwell (c)
QA Contact: HTML WG Bugzilla archive list
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-03-15 08:39 UTC by Jon Piesing (OIPF)
Modified: 2013-03-26 18:30 UTC
CC List: 4 users

See Also:


Attachments

Description Jon Piesing (OIPF) 2013-03-15 08:39:58 UTC
This issue results from a joint meeting between the Open IPTV Forum, HbbTV and the UK DTG. These organizations originally sent a liaison statement to the W3C Web & TV IG:

https://lists.w3.org/Archives/Member/member-web-and-tv/2013Jan/0000.html (W3C member only link)

We appreciate that appendStream is new; however, we're trying to understand how it would work in some real-world use cases, and details seem to be lacking.
How would you expect the Stream objects to be obtained for use with appendStream? For example, would you expect the extensions to XMLHttpRequest defined in the Streams API specification to be used to create a Stream object referencing an XMLHttpRequest? If not, how would you expect this to be done typically? If there is an assumed dependency on other new W3C specifications, then we suggest this be made more explicit.

In this context, how should xmlHttpRequest.open(GET, ...) behave if insufficient client resources exist to store the result?
Comment 1 Aaron Colwell (c) 2013-03-16 15:56:01 UTC
In general, appendStream() doesn't care what the source of the Stream object is. It could be from an XMLHttpRequest, a File object, a WebSocket, or whatever else happens to be able to create instances of Stream. Technically, MSE only has a dependency on the Stream interface and not on any particular spec that specifies how instances of these objects are created. 

Practically though, the expectation is that the primary use case for this method is with a Stream object created by XMLHttpRequest. The details of how XMLHttpRequest's Stream object behaves are outside the scope of MSE. If I were to speculate though, I'd assume that if the network buffers for the request were full, then it would simply stop reading from the socket until the Source Buffer consumed enough of the data from the Stream to free up space. When more space was available then the UA would start reading from the socket again. If the server closes the connection because the UA hasn't read in a while, then I'd expect the normal error handling to occur and the Source Buffer to trigger an error event because the append was aborted early.
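
For concreteness, here is a minimal sketch of that primary use case, assuming the XMLHttpRequest extension from the Streams API draft (responseType "stream") and the appendStream() method from the MSE draft. The readiness condition, event names, MIME type and URL below are assumptions rather than anything normative:

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');

  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'media.webm');   // hypothetical URL
  xhr.responseType = 'stream';     // Streams API draft extension (assumed)

  var appended = false;
  xhr.onreadystatechange = function () {
    // Once the response starts arriving, xhr.response is assumed to be a
    // Stream object that the UA fills as data comes in on the socket.
    if (!appended && xhr.readyState === XMLHttpRequest.LOADING) {
      appended = true;
      sourceBuffer.appendStream(xhr.response);
    }
  };

  // If the server closes the connection before the stream ends, the append
  // is aborted early and the SourceBuffer is expected to fire an error event.
  sourceBuffer.addEventListener('error', function () {
    // normal error handling goes here
  });

  xhr.send();
});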

If the request is so large that it starts causing evictions in the Source Buffer, then I'd expect the UA to do its best to keep the most important time ranges around to avoid playback disruptions, but it would be highly likely that some of the data in the request would not be in the Source Buffer when the append completes. This behavior should provide the necessary incentive to avoid huge appends and unnecessarily large media segment sizes.
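
As a sketch of acting on that incentive (nothing normative; the segment URLs are hypothetical, and the appendBuffer()/'updateend' shapes are assumptions that may not match the current draft exactly), an application can issue one request per media segment and chain the appends so that no single append is unnecessarily large:

function appendSegmentsSequentially(sourceBuffer, segmentUrls) {
  var index = 0;

  function appendNext() {
    if (index >= segmentUrls.length) return;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', segmentUrls[index++]);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      sourceBuffer.appendBuffer(new Uint8Array(xhr.response));
    };
    xhr.send();
  }

  // Start the next request/append only after the previous append completes.
  sourceBuffer.addEventListener('updateend', appendNext);
  appendNext();
}

// e.g. appendSegmentsSequentially(sourceBuffer, ['seg0.webm', 'seg1.webm']);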

I will make the dependency on the Streams API spec more obvious by putting a biblio link to the spec right next to the Stream link in appendStream()'s description.
Comment 2 Jon Piesing (OIPF) 2013-03-17 10:33:41 UTC
(In reply to comment #1)
<snip>
> Practically though, the expectation is that the primary use case for this
> method is with a Stream object created by XMLHttpRequest. The details of how
> XMLHttpRequest's Stream object behaves are outside the scope of MSE.

I agree that formally they're outside the scope of MSE; however, if MSE is making assumptions about this behaviour, then those assumptions should be documented *somewhere*. Otherwise you will get implementations of XMLHttpRequest's Stream object that are fine as far as the XHR spec is concerned but fail in strange, subtle ways with MSE. 

Also, in most organisations, undocumented assumptions can't have tests written for them, which results in interoperability issues once there is more than one implementation.

> If I
> were to speculate though, I'd assume that if the network buffers for the
> request were full, then it would simply stop reading from the socket until
> the Source Buffer consumed enough of the data from the Stream to free up
> space. When more space was available then the UA would start reading from
> the socket again.

In your view, would this allow use-cases where the data from the stream that has been copied into the Source Buffer starts being copied to the Decoder Buffer before the last data from the stream has been read in?
Comment 3 Adrian Bateman [MSFT] 2013-03-18 15:10:10 UTC
(In reply to comment #2)
> (In reply to comment #1)
> <snip>
> > Practically though, the expectation is that the primary use case for this
> > method is with a Stream object created by XMLHttpRequest. The details of how
> > XMLHttpRequest's Stream object behaves are outside the scope of MSE.
> 
> I agree that formally they're outside the scope of MSE; however, if MSE is
> making assumptions about this behaviour, then those assumptions should be
> documented *somewhere*. Otherwise you will get implementations of
> XMLHttpRequest's Stream object that are fine as far as the XHR spec is
> concerned but fail in strange, subtle ways with MSE. 
> 
> Also, in most organisations, undocumented assumptions can't have tests
> written for them, which results in interoperability issues once there is
> more than one implementation.

I don't believe that MSE is making any such assumptions. Please can you articulate some and propose how they should be described?

> > If I
> > were to speculate though, I'd assume that if the network buffers for the
> > request were full, then it would simply stop reading from the socket until
> > the Source Buffer consumed enough of the data from the Stream to free up
> > space. When more space was available then the UA would start reading from
> > the socket again.
> 
> In your view, would this allow use-cases where the data from the stream that
> has been copied into the Source Buffer starts being copied to the Decoder
> Buffer before the last data from the stream has been read in?

Optimisations like this are deliberately not required, but one would expect higher-quality implementations to include them. The goal of including Stream support was to allow the UA to copy from the network buffer to the media engine buffer without having an intermediate ArrayBuffer available to JavaScript for other purposes (which would imply additional copying). Since the Stream type is designed for situations where data may be processed before it is all received, your scenario is a valid one. However, the implementation of whatever is providing the Stream will determine how and where buffering needs to happen.
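
A rough sketch of that contrast, with the same assumed API shapes as in comment 1 (hypothetical usage, draft responseType "stream"): the ArrayBuffer path necessarily materialises the whole response in script before appending, while the Stream path lets the UA hand data to the media engine as it arrives:

// ArrayBuffer path: the whole response is exposed to script first, which is
// the additional copy described above.
function appendViaArrayBuffer(sourceBuffer, url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    sourceBuffer.appendBuffer(new Uint8Array(xhr.response));
  };
  xhr.send();
}

// Stream path: data can reach the media engine before the response has
// finished arriving, and no intermediate buffer is visible to script.
function appendViaStream(sourceBuffer, url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'stream';   // Streams API draft extension (assumed)
  var appended = false;
  xhr.onreadystatechange = function () {
    if (!appended && xhr.readyState === XMLHttpRequest.LOADING) {
      appended = true;
      sourceBuffer.appendStream(xhr.response);
    }
  };
  xhr.send();
}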
Comment 4 Aaron Colwell (c) 2013-03-26 18:30:59 UTC
Marking as NEEDSINFO since we need more information from Jon before we can determine what needs to be changed in the spec.