Copyright © 2025 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
This specification extends HTMLMediaElement [HTML] to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.
    This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
On top of editorial updates, substantive changes since publication as a W3C Recommendation in November 2016 are:
- Addition of a changeType() method to switch among codecs or bytestreams
- Support for MediaSource objects off the main thread in dedicated workers
- Removal of the createObjectURL() extension to the URL object following its integration in the File API [FILEAPI]
- Addition of ManagedMediaSource, ManagedSourceBuffer, and BufferedChangeEvent interfaces supporting power-efficient streaming and active buffered media cleanup by the user agent
          For a full list of changes made since the previous version, see the commits.
The working group maintains a list of all bug reports that the editors have not yet tried to address.
Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should track the GitHub repository and take part in the discussions.
This document was published by the Media Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 18 August 2025 W3C Process Document.
This section is non-normative.
        This specification allows JavaScript to dynamically construct media streams for
        <audio> and <video>. It defines a MediaSource object that can serve as a source
        of media data for an HTMLMediaElement. MediaSource objects have one or more
        SourceBuffer objects. Applications append data segments to the SourceBuffer
        objects, and can adapt the quality of appended data based on system performance and other
        factors. Data from the SourceBuffer objects is managed as track buffers for audio,
        video and text data that is decoded and played. Byte stream specifications used with these
        extensions are available in the byte stream format registry [MSE-REGISTRY].
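The following non-normative sketch illustrates this flow; the MIME type and segment URLs are hypothetical placeholders and error handling is omitted.

// Non-normative sketch: construct a MediaSource, attach it to a <video>
// element, and append fetched segments to a SourceBuffer.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The MIME type and segment URLs below are assumed examples.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f, mp4a.40.2"');
  for (const url of ['/segments/init.mp4', '/segments/media-0.m4s']) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait for the asynchronous append to complete before starting the next one.
    await new Promise(resolve => sourceBuffer.addEventListener('updateend', resolve, { once: true }));
  }
  mediaSource.endOfStream();
});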
      
This specification was designed with the following goals in mind:
This specification defines:
Active Track Buffers
The track buffers that provide coded frames for the enabled audioTracks, the selected videoTracks, and the "showing" or "hidden" textTracks. All these tracks are associated with SourceBuffer objects in the activeSourceBuffers list.
          
Append Window
A presentation timestamp range used to filter out coded frames while appending. The append window represents a single continuous time range with a single start time and end time. Coded frames with a presentation timestamp within this range are allowed to be appended to the SourceBuffer, while coded frames outside this range are filtered out. The append window start and end times are controlled by the appendWindowStart and appendWindowEnd attributes respectively.
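As a non-normative illustration, an application could restrict appends to the first 30 seconds of the timeline as sketched below; sourceBuffer and segmentData are assumed to already exist.

// Coded frames outside [0, 30) are filtered out by the coded frame processing algorithm.
sourceBuffer.appendWindowStart = 0;
sourceBuffer.appendWindowEnd = 30;
sourceBuffer.appendBuffer(segmentData);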
          
Coded Frame
A unit of media data that has a presentation timestamp, a decode timestamp, and a coded frame duration.
Coded Frame Duration
The duration of a coded frame. For video and text, the duration indicates how long the video frame or text SHOULD be displayed. For audio, the duration represents the sum of the durations of all the samples contained within the coded frame. For example, if an audio frame contained 441 samples at 44100 Hz, the frame duration would be 10 milliseconds.
Coded Frame End Timestamp
The sum of a coded frame presentation timestamp and its coded frame duration. It represents the presentation timestamp that immediately follows the coded frame.
Coded Frame Group
A group of coded frames that are adjacent and have monotonically increasing decode timestamps without any gaps. Discontinuities detected by the coded frame processing algorithm and abort() calls trigger the start of a new coded frame group.
          
Decode Timestamp
The decode timestamp indicates the latest time at which the frame needs to be decoded assuming instantaneous decoding and rendering of this and any dependent frames (this is equal to the presentation timestamp of the earliest frame, in presentation order, that is dependent on this frame). If frames can be decoded out of presentation order, then the decode timestamp MUST be present in or derivable from the byte stream. The user agent MUST run the append error algorithm if this is not the case. If frames cannot be decoded out of presentation order and a decode timestamp is not present in the byte stream, then the decode timestamp is equal to the presentation timestamp.
Initialization Segment
A sequence of bytes that contains all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g., edit lists).
The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.
Media Segment
A sequence of bytes that contains packetized and timestamped media data for a portion of the media timeline. Media segments are always associated with the most recently appended initialization segment.
The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.
MediaSource object URL
A MediaSource object URL is a unique blob URL created by createObjectURL(). It is used to attach a MediaSource object to an HTMLMediaElement.
These URLs are the same as blob URLs, except that anything in the definition of that feature that refers to File and Blob objects is hereby extended to also apply to MediaSource objects.
          
            The origin of the MediaSource object URL is the relevant settings object of
            this during the call to createObjectURL().
          
For example, the origin of the MediaSource object URL affects the way that the media element is consumed by canvas.
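A non-normative sketch of creating and using a MediaSource object URL for a Window-constructed MediaSource:

const mediaSource = new MediaSource();
const url = URL.createObjectURL(mediaSource);      // MediaSource object URL (a blob: URL)
document.querySelector('video').src = url;
// Once attachment has started, the URL is no longer needed and may be revoked.
mediaSource.addEventListener('sourceopen', () => URL.revokeObjectURL(url), { once: true });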
Parent Media Source
The parent media source of a SourceBuffer object is the MediaSource object that created it.
Presentation Start Time
The presentation start time is the earliest time point in the presentation and specifies the initial playback position and earliest possible position. All presentations created using this specification have a presentation start time of 0.
            For the purposes of determining if HTMLMediaElement's
            buffered contains a TimeRanges that includes the current
            playback position, implementations MAY choose to allow a current playback position at
            or after presentation start time and before the first TimeRanges to play the
            first TimeRanges if that TimeRanges starts within a reasonably short time,
            like 1 second, after presentation start time. This allowance accommodates the
            reality that muxed streams commonly do not begin all tracks precisely at
            presentation start time. Implementations MUST report the actual buffered range,
            regardless of this allowance.
          
Presentation Interval
The presentation interval of a coded frame is the time interval from its presentation timestamp to the presentation timestamp plus the coded frame's duration. For example, if a coded frame has a presentation timestamp of 10 seconds and a coded frame duration of 100 milliseconds, then the presentation interval would be [10, 10.1). Note that the start of the range is inclusive, but the end of the range is exclusive.
Presentation Order
The order that coded frames are rendered in the presentation. The presentation order is achieved by ordering coded frames in monotonically increasing order by their presentation timestamps.
Presentation Timestamp
A reference to a specific time in the presentation. The presentation timestamp in a coded frame indicates when the frame SHOULD be rendered.
Random Access Point
A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a more sparse distribution of random access points, the location of these points is usually considered the random access points for multiplexed streams.
SourceBuffer byte stream format specification
The specific byte stream format specification that describes the format of the byte stream accepted by a SourceBuffer instance. The byte stream format specification for a SourceBuffer object is initially selected based on the type passed to the addSourceBuffer() call that created the object, and can be updated by changeType() calls on the object.
SourceBuffer configuration
        
            A specific set of tracks distributed across one or more SourceBuffer objects
            owned by a single MediaSource instance.
          
Implementations MUST support at least 1 MediaSource object with the following configurations:
- A single SourceBuffer with 1 audio track and/or 1 video track
- Two SourceBuffers with one handling a single audio track and the other handling a single video track
MediaSource objects MUST support each of the configurations above, but they are only required to support one configuration at a time. Supporting multiple configurations at once or additional configurations is a quality of implementation issue.
Track Description
A byte stream format specific structure that provides the Track ID, codec configuration, and other metadata for a single track. Each track description inside a single initialization segment has a unique Track ID. The user agent MUST run the append error algorithm if the Track ID is not unique within the initialization segment.
Track ID
A Track ID is a byte stream format specific identifier that marks sections of the byte stream as being part of a specific track. The Track ID in a track description identifies which sections of a media segment belong to that track.
        The MediaSource interface represents a source of media data for an
        HTMLMediaElement. It keeps track of the readyState for this source as
        well as a list of SourceBuffer objects that can be used to add media data to the
        presentation. MediaSource objects are created by the web application and then attached to
        an HTMLMediaElement. The application uses the SourceBuffer objects in
        sourceBuffers to add media data to this source. The HTMLMediaElement
        fetches this media data from the MediaSource object when it is needed during
        playback.
      
        Each MediaSource object has a [[live seekable
        range]] internal slot that stores a normalized TimeRanges object. It
        is initialized to an empty TimeRanges object when the MediaSource object is
        created, is maintained by setLiveSeekableRange() and
clearLiveSeekableRange(), and is used in section 10 (HTMLMediaElement Extensions) to modify HTMLMediaElement's seekable behavior.
      
        Each MediaSource object has a [[has ever been
        attached]] internal slot that stores a boolean. It is initialized to false when
the MediaSource object is created, and is set to true in the extended HTMLMediaElement's resource fetch algorithm as described in the attaching to a media element algorithm. The extended resource fetch algorithm uses this internal slot to conditionally fail attachment of a MediaSource using a MediaSourceHandle set on an HTMLMediaElement's srcObject attribute.
      
enum ReadyState {
  "closed",
  "open",
  "ended",
};
closed
Indicates the source is not currently attached to a media element.
open
The source has been opened by a media element and is ready for data to be appended to the SourceBuffer objects in MediaSource's sourceBuffers.
ended
The source is still attached to a media element, but MediaSource's endOfStream() has been called.
enum EndOfStreamError {
  "network",
  "decode",
};
      network
        Terminates playback and signals that a network error has occurred.
JavaScript applications SHOULD use this status code to terminate playback with a network error. For example, if a network error occurs while fetching media data.
decode
        Terminates playback and signals that a decoding error has occurred.
JavaScript applications SHOULD use this status code to terminate playback with a decode error. For example, if a parsing error occurs while processing out-of-band media data.
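A non-normative sketch of how an application might use these values; the segment URL and surrounding buffering logic are hypothetical.

async function appendNext(mediaSource, sourceBuffer) {
  try {
    const response = await fetch('/segments/next.m4s');   // hypothetical URL
    if (!response.ok) throw new Error('segment fetch failed');
    sourceBuffer.appendBuffer(await response.arrayBuffer());
  } catch (e) {
    // Terminate playback and surface a network error on the media element.
    mediaSource.endOfStream('network');
  }
}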
[Exposed=(Window,DedicatedWorker)]
interface MediaSource : EventTarget {
    constructor();
    [SameObject, Exposed=DedicatedWorker]
    readonly  attribute MediaSourceHandle handle;
    readonly  attribute SourceBufferList sourceBuffers;
    readonly  attribute SourceBufferList activeSourceBuffers;
    readonly  attribute ReadyState readyState;
    attribute unrestricted double duration;
    attribute EventHandler onsourceopen;
    attribute EventHandler onsourceended;
    attribute EventHandler onsourceclose;
    static readonly attribute boolean canConstructInDedicatedWorker;
    SourceBuffer addSourceBuffer(DOMString type);
    undefined removeSourceBuffer(SourceBuffer sourceBuffer);
    undefined endOfStream(optional EndOfStreamError error);
    undefined setLiveSeekableRange(double start, double end);
    undefined clearLiveSeekableRange();
    static boolean isTypeSupported(DOMString type);
};
      
        Contains a handle useful for attachment of a dedicated worker MediaSource object to an
        HTMLMediaElement via srcObject. The handle remains the same object
        for this MediaSource object across accesses of this attribute, but it is distinct for
        each MediaSource object.
      
        This specification may eventually enable visibility of this attribute on MediaSource
        objects on the main Window context. If so, specification care will be necessary to prevent
        potential backwards incompatible changes, such as could happen if exceptions were thrown on
        accesses to this attribute.
      
On getting, run the following steps:
1. If the handle for this MediaSource object has not yet been created, then run the following steps:
   1. Let created handle be the result of creating a new MediaSourceHandle object and associated resources, linked internally to this MediaSource.
   2. Update the attribute to be created handle.
2. Return the MediaSourceHandle object that is this attribute's value.
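A non-normative sketch of the worker side of such an attachment; the surrounding worker script and its buffering logic are assumed:

// In a dedicated worker: construct the MediaSource and transfer its handle
// to the Window context for attachment via srcObject.
const mediaSource = new MediaSource();
const handle = mediaSource.handle;           // same object on repeated access
postMessage({ handle }, [handle]);           // MediaSourceHandle is Transferable

mediaSource.addEventListener('sourceopen', () => {
  // Create SourceBuffers and append media data here once attached.
});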
        
        Contains the list of SourceBuffer objects associated with this MediaSource. When
        MediaSource's readyState equals "closed" this list
        will be empty. Once readyState transitions to "open"
        SourceBuffer objects can be added to this list by using addSourceBuffer().
      
        Contains the subset of sourceBuffers that are providing the
        selected video track, the enabled audio track(s), and the
        "showing" or "hidden" text
        track(s).
      
        SourceBuffer objects in this list MUST appear in the same order as they appear in the
        sourceBuffers attribute; e.g., if only sourceBuffers[0] and
        sourceBuffers[3] are in activeSourceBuffers, then activeSourceBuffers[0]
        MUST equal sourceBuffers[0] and activeSourceBuffers[1] MUST equal sourceBuffers[3].
      
Section 3.15.5 Changes to selected/enabled track state describes how this attribute gets updated.
        Indicates the current state of the MediaSource object. When the MediaSource
        is created readyState MUST be set to "closed".
      
        Allows the web application to set the presentation duration. The duration is initially set
        to NaN when the MediaSource object is created.
      
On getting, run the following steps:
1. If the readyState attribute is "closed" then return NaN and abort these steps.
2. Return the current value of the attribute.
On setting, run the following steps:
1. If the value being set is negative or NaN then throw a TypeError exception and abort these steps.
2. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
4. Run the duration change algorithm with new duration set to the value being assigned to this attribute.
The duration change algorithm will adjust new duration higher if there is any currently buffered coded frame with a higher end time.
            appendBuffer() and endOfStream() can update the
            duration under certain circumstances.
          
Returns true.
        This attribute enables main thread and dedicated worker feature detection of support for
        creating and using a MediaSource object in a dedicated worker, and mitigates the need
        for higher latency detection polyfills like attempting creation of a MediaSource object
        from a dedicated worker, especially if the feature is not supported.
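A non-normative feature-detection sketch:

if (window.MediaSource && MediaSource.canConstructInDedicatedWorker) {
  // A MediaSource can be constructed and used in a dedicated worker.
} else {
  // Fall back to constructing the MediaSource on the main thread.
}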
      
        Adds a new SourceBuffer to sourceBuffers.
      
1. If type is an empty string then throw a TypeError exception and abort these steps.
2. If type contains a MIME type that is not supported, or contains a MIME type that is not supported with the types specified for the other SourceBuffer objects in sourceBuffers, then throw a NotSupportedError exception and abort these steps.
3. If the user agent can't handle any more SourceBuffer objects, or if creating a SourceBuffer based on type would result in an unsupported SourceBuffer configuration, then throw a QuotaExceededError exception and abort these steps.
   For example, a user agent MAY throw a QuotaExceededError exception if the media element has reached the HAVE_METADATA readyState. This can occur if the user agent's media engine does not support adding more tracks during playback.
4. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
5. Let buffer be a new instance of a ManagedSourceBuffer if this is a ManagedMediaSource, or a SourceBuffer otherwise, with their respective associated resources.
6. Set buffer's [[generate timestamps flag]] to the value in the "Generate Timestamps Flag" column of the Media Source Extensions™ Byte Stream Format Registry entry that is associated with type.
7. If [[generate timestamps flag]] is true, set buffer's mode to "sequence". Otherwise, set buffer's mode to "segments".
8. Add buffer to this's sourceBuffers and queue a task to fire an event named addsourcebuffer at this's sourceBuffers.
9. Return buffer.
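A non-normative usage sketch, assuming an existing mediaSource and an example MIME type:

const type = 'video/webm; codecs="vp9, opus"';        // assumed example type
if (!MediaSource.isTypeSupported(type)) {
  throw new Error('type not supported');
}
mediaSource.addEventListener('sourceopen', () => {
  // addSourceBuffer() requires readyState to be "open".
  const sourceBuffer = mediaSource.addSourceBuffer(type);
});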
        
        Removes a SourceBuffer from sourceBuffers.
      
sourceBuffers then throw a NotFoundError exception and abort these
        steps.
        updating attribute equals true, then run the
        following steps:
          updating attribute to false.
            abort at sourceBuffer.
            updateend at sourceBuffer.
            AudioTrackList object
        returned by sourceBuffer.audioTracks.
        AudioTrack object in the SourceBuffer audioTracks list, run the
            following steps:
              sourceBuffer attribute on the AudioTrack object to
                null.
                AudioTrack object from the SourceBuffer audioTracks list.
                  
                    This should trigger AudioTrackList [HTML] logic to queue a task to
                    fire an event named removetrack using TrackEvent
                    with the track attribute initialized to the AudioTrack
                    object, at the SourceBuffer audioTracks list. If the enabled
                    attribute on the AudioTrack object was true at the beginning of this
                    removal step, then this should also trigger AudioTrackList [HTML] logic
                    to queue a task to fire an event named change at the
                    SourceBuffer audioTracks list.
                  
Window, to remove the AudioTrack object (or instead, the Window mirror
                of it if the MediaSource object was constructed in a
                DedicatedWorkerGlobalScope) from the media element:
                  AudioTrackList object returned by the audioTracks
                    attribute on the HTMLMediaElement.
                    AudioTrack object from the HTMLMediaElement audioTracks
                    list.
                      
                        This should trigger AudioTrackList [HTML] logic to queue a task
                        to fire an event named removetrack using
                        TrackEvent with the track attribute initialized to the
                        AudioTrack object, at the HTMLMediaElement audioTracks list. If the
                        enabled attribute on the AudioTrack object was true at
                        the beginning of this removal step, then this should also trigger
                        AudioTrackList [HTML] logic to queue a task to fire an event
                        named change at the HTMLMediaElement audioTracks list.
                      
VideoTrackList object
        returned by sourceBuffer.videoTracks.
        VideoTrack object in the SourceBuffer videoTracks list, run the
            following steps:
              sourceBuffer attribute on the VideoTrack object to
                null.
                VideoTrack object from the SourceBuffer videoTracks list.
                  
                    This should trigger VideoTrackList [HTML] logic to queue a task to
                    fire an event named removetrack using TrackEvent
                    with the track attribute initialized to the VideoTrack
                    object, at the SourceBuffer videoTracks list. If the selected
                    attribute on the VideoTrack object was true at the beginning of this
                    removal step, then this should also trigger VideoTrackList [HTML] logic
                    to queue a task to fire an event named change at the
                    SourceBuffer videoTracks list.
                  
Window, to remove the VideoTrack object (or instead, the Window mirror
                of it if the MediaSource object was constructed in a
                DedicatedWorkerGlobalScope) from the media element:
                  VideoTrackList object returned by the videoTracks
                    attribute on the HTMLMediaElement.
                    VideoTrack object from the HTMLMediaElement videoTracks
                    list.
                      
                        This should trigger VideoTrackList [HTML] logic to queue a task
                        to fire an event named removetrack using
                        TrackEvent with the track attribute initialized to the
                        VideoTrack object, at the HTMLMediaElement videoTracks list. If the
                        selected attribute on the VideoTrack object was true at
                        the beginning of this removal step, then this should also trigger
                        VideoTrackList [HTML] logic to queue a task to fire an event
                        named change at the HTMLMediaElement videoTracks list.
                      
TextTrackList object
        returned by sourceBuffer.textTracks.
        TextTrack object in the SourceBuffer textTracks list, run the
            following steps:
              sourceBuffer attribute on the TextTrack object to
                null.
                TextTrack object from the SourceBuffer textTracks list.
                  
                    This should trigger TextTrackList [HTML] logic to queue a task to
                    fire an event named removetrack using TrackEvent with
                    the track attribute initialized to the TextTrack object, at
                    the SourceBuffer textTracks list. If the mode attribute on the
                    TextTrack object was "showing" or "hidden" at the beginning of this removal step, then this
                    should also trigger TextTrackList [HTML] logic to queue a task to
                    fire an event named change at the SourceBuffer
                    textTracks list.
                  
Window, to remove the TextTrack object (or instead, the Window mirror
                of it if the MediaSource object was constructed in a
                DedicatedWorkerGlobalScope) from the media element:
                  TextTrackList object returned by the textTracks
                    attribute on the HTMLMediaElement.
                    TextTrack object from the HTMLMediaElement textTracks
                    list.
                      
                        This should trigger TextTrackList [HTML] logic to queue a task to
                        fire an event named removetrack using TrackEvent
                        with the track attribute initialized to the TextTrack
                        object, at the HTMLMediaElement textTracks list. If the
                        mode attribute on the TextTrack object was "showing" or "hidden" at
                        the beginning of this removal step, then this should also trigger
                        TextTrackList [HTML] logic to queue a task to fire an event
                        named change at the HTMLMediaElement textTracks list.
                      
activeSourceBuffers, then remove sourceBuffer
        from activeSourceBuffers and queue a task to fire an event named
        removesourcebuffer at the SourceBufferList returned by
        activeSourceBuffers.
        sourceBuffers and queue a task to fire an event named removesourcebuffer at the SourceBufferList returned by
        sourceBuffers.
        Signals the end of the stream.
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
3. Run the end of stream algorithm with the error parameter set to error.
        
Updates [[live seekable range]] that is used in section 10 (HTMLMediaElement Extensions) to modify HTMLMediaElement's seekable behavior.
      
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If start is negative or greater than end, then throw a TypeError exception and abort these steps.
3. Set [[live seekable range]] to be a new normalized TimeRanges object containing a single range whose start position is start and end position is end.
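A non-normative sketch for a live presentation, where liveEdge is a hypothetical media time in seconds maintained by the application:

if (mediaSource.readyState === 'open') {
  // Advertise roughly the last 10 minutes as seekable.
  mediaSource.setLiveSeekableRange(Math.max(0, liveEdge - 600), liveEdge);
}
// Later, revert to the default seekable behavior:
// mediaSource.clearLiveSeekableRange();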
        
Updates [[live seekable range]] that is used in section 10 (HTMLMediaElement Extensions) to modify HTMLMediaElement's seekable behavior.
      
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If [[live seekable range]] contains a range, then set [[live seekable range]] to be a new empty TimeRanges object.
        
        Check to see whether the MediaSource is capable of creating SourceBuffer
        objects for the specified MIME type.
      
        If true is returned from this method, it only indicates that the MediaSource
        implementation is capable of creating SourceBuffer objects for the specified MIME type.
        An addSourceBuffer() call SHOULD still fail if sufficient resources are not
        available to support the addition of a new SourceBuffer.
      
        This method returning true implies that HTMLMediaElement's
        canPlayType() will return "maybe" or "probably" since it does not make
        sense for a MediaSource to support a type the HTMLMediaElement knows it cannot play.
      
When this method is invoked, the user agent must run the following steps:
| Event name | Interface | Dispatched when... |
|---|---|---|
| sourceopen | Event | MediaSource's readyState transitions from "closed" to "open" or from "ended" to "open". |
| sourceended | Event | MediaSource's readyState transitions from "open" to "ended". |
| sourceclose | Event | MediaSource's readyState transitions from "open" to "closed" or from "ended" to "closed". |
          When a Window HTMLMediaElement is attached to a DedicatedWorkerGlobalScope
          MediaSource, each context has algorithms that depend on information from the other.
        
          HTMLMediaElement is exposed only to Window contexts, but MediaSource and
          related objects defined in this specification are exposed in Window and
          DedicatedWorkerGlobalScope contexts. This lets applications construct a
          MediaSource object in either of those types of context and attach it to an
          HTMLMediaElement object in a Window context using a MediaSource object URL or
          a MediaSourceHandle as described in the attaching to a media element algorithm. A
          MediaSource object is not Transferable; it is only visible in the context where
          it was created.
        
          The rest of this section describes a model for bounding information latency for
          attachments of a Window media element to a DedicatedWorkerGlobalScope
          MediaSource. While the model describes communication using message passing,
          implementations MAY choose to communicate in potentially faster ways, such as using
          shared memory and locks. Attachments to a Window MediaSource synchronously have
          the information already without communicating it across contexts.
        
          A MediaSource that is constructed in a DedicatedWorkerGlobalScope has a
          [[port to main]] internal slot that stores a
          MessagePort setup during attachment and nulled during detachment. A Window
          [[port to main]] is always null.
        
          An HTMLMediaElement extended by this specification and attached to a
          DedicatedWorkerGlobalScope MediaSource similarly has a [[port to worker]] internal slot that stores a MessagePort
          and a [[channel with worker]] internal slot
          that stores a MessageChannel, both setup during attachment and nulled during
          detachment. Both [[port to worker]] and [[channel with worker]] are null unless attached to a DedicatedWorkerGlobalScope
          MediaSource.
        
          Algorithms in this specification that need to communicate information from a Window
          HTMLMediaElement to an attached DedicatedWorkerGlobalScope MediaSource, or
          vice versa, will use these internal ports implicitly to post a message to their
          counterpart, where the implicit handler of the message runs steps as described in the
          algorithms.
        
            There are distinct mechanisms for attaching a MediaSource to a media element
            depending on where the MediaSource object was constructed, in a Window versus
            in a DedicatedWorkerGlobalScope:
          
                Attaching a MediaSource that was constructed in a Window can be done by
                assigning a MediaSource object URL for that MediaSource to the media
                element src attribute or the src attribute of a <source>
                inside a media element. A MediaSource object URL is created by passing a
                MediaSource object to createObjectURL().
              
                Though implementations MAY allow MediaSource object URL creation in a
                DedicatedWorkerGlobalScope for a MediaSource constructed in that worker,
                attempting to use that MediaSource object URL to attach to a media element
                using either the src attribute or the src attribute of a
                <source> inside a media element MUST fail in the media element's resource fetch algorithm, as extended below.
              
Extending the object URL attachment mechanism to worker MediaSource object URLs would further propagate this idiom that is less preferred versus using srcObject, and would unnecessarily increase user agent interoperability risk and implementation complexity.
Attaching a MediaSource that was constructed in a
            DedicatedWorkerGlobalScope can only be done by obtaining a handle from it using
            handle, transferring that MediaSourceHandle to the Window
            context and assigning it to the media element srcObject attribute.
            For the purposes of aligning this specification with HTMLMediaElement resource
            loading and fetching algorithms, the underlying DedicatedWorkerGlobalScope
            MediaSource is the MediaSource object mentioned there, and the
            MediaSourceHandle object is the media provider object.
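A non-normative sketch of the Window side of such an attachment; 'mse-worker.js' is a hypothetical worker script that posts its MediaSourceHandle as in the handle example earlier in this specification:

const video = document.querySelector('video');
const worker = new Worker('mse-worker.js');
worker.addEventListener('message', (event) => {
  video.srcObject = event.data.handle;   // attach the transferred MediaSourceHandle
});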
            
            If the resource fetch algorithm was invoked with a media provider object that is a
            MediaSource object, a MediaSourceHandle object or a URL record whose object is
            a MediaSource object, then let mode be local, skip the first step in the resource fetch algorithm (which may otherwise set mode to remote) and continue the execution
            of the resource fetch algorithm.
          
            The first step of the resource fetch algorithm is expected to eventually align with
            selecting local mode for URL records whose objects are media provider objects. The
            intent is that if the HTMLMediaElement's src attribute or
            selected child source's src attribute is a blob: URL matching a
            MediaSource object URL when the respective src attribute was last changed, then
            that MediaSource object is used as the media provider object and current media
            resource in the local mode logic in the resource fetch algorithm. This also means
            that the remote mode logic that includes observance of any preload attribute is skipped
            when a MediaSource object is attached. Even with that eventual change to [HTML], the
            execution of the following steps at the beginning of the local mode logic is still
            required when the current media resource is a MediaSource object.
          
At the beginning of the "Otherwise (mode is local)" section of the resource fetch algorithm, execute the additional steps, below.
Relative to the action which triggered the media element's resource selection algorithm, these steps are asynchronous. The resource fetch algorithm is run after the task that invoked the resource selection algorithm is allowed to continue and a stable state is reached. Implementations may delay the steps in the "Otherwise" clause, below, until the MediaSource object is ready for use.
MediaSource object, a MediaSourceHandle object or a URL record whose
            object is a MediaSource object, then:
              MediaSource
                  that was constructed in a DedicatedWorkerGlobalScope, such as would occur if
                  attempting to use a MediaSource object URL from a
                  DedicatedWorkerGlobalScope MediaSource
                MediaSource's handle from the
                    DedicatedWorker to the Window context and assigning it to the media element's
                    srcObject attribute is the only way to attach such a
                    MediaSource.
                  MediaSourceHandle whose
                  [[Detached]] internal slot is true
                MediaSourceHandle whose underlying
                  MediaSource's [[has ever been attached]] internal slot is
                  true
                MediaSource more than once using a
                    MediaSourceHandle, even if the MediaSource was constructed on
                    Window and had been loaded previously using a MediaSource object URL.
                    This doesn't preclude subsequent use of a MediaSource object URL for a
                    Window MediaSource from succeeding though.
                  readyState is NOT set to "closed"
                MediaSource's [[has ever been attached]]
                    internal slot to true.
                    MediaSource was constructed in a
                          DedicatedWorkerGlobalScope, then setup worker attachment
                          communication and open the MediaSource:
                        [[channel with worker]] to be a new
                            MessageChannel.
                            [[port to worker]] to the
                            port1 value of [[channel with worker]].
                            port2 of [[channel with worker]] as both the value and the sole member of the transferList,
                            and let the result be serialized port2.
                            MediaSource's
                            DedicatedWorkerGlobalScope that will
                              DedicatedWorkerGlobalScope's realm, and set [[port to main]] to be the
                                resulting deserialized clone of the transferred
                                port2 value of [[channel with worker]].
                                readyState attribute to
                                "open".
                                sourceopen at
                                the MediaSource.
                                MediaSource was constructed in a Window:
                        [[channel with worker]] null.
                            [[port to worker]] null.
                            [[port to main]] null.
                            readyState attribute to
                            "open".
                            sourceopen at the
                            MediaSource.
                            appendBuffer().
                        MediaSource is attached.
                        An attached MediaSource does not use the remote mode steps in the resource fetch algorithm, so the media element will not fire "suspend" events. Though future versions of this specification will likely remove "progress" and "stalled" events from a media element with an attached MediaSource, user agents conforming to this version of the specification may still fire these two events as these [HTML] references changed after implementations of this specification stabilized.
            The following steps are run in any case where the media element is going to transition
            to NETWORK_EMPTY and queue a task to fire an event named
            emptied at the media element. These steps SHOULD be run right
            before the transition.
          
MediaSource was constructed in a DedicatedWorkerGlobalScope:
                MediaSource using an internal detach message posted to
                    [[port to worker]].
                    [[port to worker]] null.
                    [[channel with worker]] null.
                    detach notification runs the
                    remainder of these steps in the DedicatedWorkerGlobalScope MediaSource.
                    MediaSource was constructed in a Window:
                Window MediaSource.
                [[port to main]] null.
            readyState attribute to "closed".
            ManagedMediaSource, then set streaming
            attribute to false.
            duration to NaN.
            SourceBuffer objects from activeSourceBuffers.
            removesourcebuffer at
            activeSourceBuffers.
            SourceBuffer objects from sourceBuffers.
            removesourcebuffer at
            sourceBuffers.
            sourceclose at the MediaSource.
            
            Going forward, this algorithm is intended to be externally called and run in any case
            where the attached MediaSource, if any, must be detached from the media element. It
            MAY be called on HTMLMediaElement [HTML] operations like load() and resource fetch algorithm failures in addition to, or in place of, when the media element transitions
            to NETWORK_EMPTY. Resource fetch algorithm failures are those
            which abort either the resource fetch algorithm or the resource selection algorithm,
            with the exception that the "Final step" [HTML] is not considered a failure that
            triggers detachment.
          
Run the following steps as part of the "Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the seek algorithm:
                The media element looks for media segments containing the new playback
                position in each SourceBuffer object in
                activeSourceBuffers. Any position within a TimeRanges in the
                current value of the HTMLMediaElement's buffered attribute
                has all necessary media segments buffered for that position.
              
TimeRanges of HTMLMediaElement's
                  buffered
                HTMLMediaElement's readyState attribute is
                    greater than HAVE_METADATA, then set the
                    HTMLMediaElement's readyState attribute to
                    HAVE_METADATA.
                      
                        Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                        readyState changes may trigger events on the
                        HTMLMediaElement.
                      
appendBuffer() call
                    causes the coded frame processing algorithm to set the
                    HTMLMediaElement's readyState attribute to a value
                    greater than HAVE_METADATA.
                      
                        The web application can use buffered and
                        HTMLMediaElement's buffered to determine what the
                        media element needs to resume playback.
                      
                    If the readyState attribute is "ended" and the
                    new playback position is within a TimeRanges currently in
                    HTMLMediaElement's buffered, then the seek operation
                    must continue to completion here even if one or more currently selected or
                    enabled track buffers' largest range end timestamp is less than new playback
                    position. This condition should only occur due to logic in
                    buffered when readyState is
                    "ended".
                  
            The following steps are periodically run during playback to make sure that all of the
            SourceBuffer objects in activeSourceBuffers have enough data to ensure uninterrupted playback. Changes to activeSourceBuffers also
            cause these steps to run because they affect the conditions that trigger state
            transitions.
          
            Having enough data to ensure uninterrupted playback is an
            implementation specific condition where the user agent determines that it currently has
            enough data to play the presentation without stalling for a meaningful period of time.
            This condition is constantly evaluated to determine when to transition the media
            element into and out of the HAVE_ENOUGH_DATA ready state. These
            transitions indicate when the user agent believes it has enough data buffered or it
            needs more data respectively.
          
            An implementation MAY choose to use bytes buffered, time buffered, the append rate, or
            any other metric it sees fit to determine when it has enough data. The metrics used MAY
            change during playback so web applications SHOULD only rely on the value of
            HTMLMediaElement's readyState to determine whether more data
            is needed or not.
          
            When the media element needs more data, the user agent SHOULD transition it from
            HAVE_ENOUGH_DATA to HAVE_FUTURE_DATA early
            enough for a web application to be able to respond without causing an interruption in
            playback. For example, transitioning when the current playback position is 500ms before
            the end of the buffered data gives the application roughly 500ms to append more data
            before playback stalls.
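A non-normative sketch of an application reacting to these transitions instead of tracking its own buffer metric; fetchAndAppendNextSegment is a hypothetical helper:

video.addEventListener('waiting', () => {
  // readyState dropped below HAVE_FUTURE_DATA: more data is needed.
  fetchAndAppendNextSegment();
});
video.addEventListener('canplaythrough', () => {
  // readyState reached HAVE_ENOUGH_DATA: appending can be throttled.
});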
          
HTMLMediaElement's readyState attribute equals
              HAVE_NOTHING:
            HTMLMediaElement's buffered does not contain a
              TimeRanges for the current playback position:
            HTMLMediaElement's readyState attribute to
                HAVE_METADATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
HTMLMediaElement's buffered contains a TimeRanges
              that includes the current playback position and enough data to ensure uninterrupted playback:
            HTMLMediaElement's readyState attribute to
                HAVE_ENOUGH_DATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
HAVE_CURRENT_DATA.
                HTMLMediaElement's buffered contains a TimeRanges
              that includes the current playback position and some time beyond the current playback
              position, then run the following steps:
            HTMLMediaElement's readyState attribute to
                HAVE_FUTURE_DATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
HAVE_CURRENT_DATA.
                HTMLMediaElement's buffered contains a TimeRanges
              that ends at the current playback position and does not have a range covering the
              time immediately after the current position:
            HTMLMediaElement's readyState attribute to
                HAVE_CURRENT_DATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
            During playback activeSourceBuffers needs to be updated if the
            selected video track, the enabled audio track(s), or a
            text track mode changes. When one or more of these changes occur the
            following steps need to be followed. Also, when MediaSource was constructed in a
            DedicatedWorkerGlobalScope, then each change that occurs to a Window mirror of
            a track created previously by the implicit handler for the internal create track
            mirror message MUST also be made to the corresponding DedicatedWorkerGlobalScope
            track using an internal update track state message posted to
            [[port to worker]] whose implicit handler makes the change and
            runs the following steps. Likewise, each change that occurs to a
            DedicatedWorkerGlobalScope track MUST also be made to the corresponding Window
            mirror of the track using an internal update track state message posted to
            [[port to main]] whose implicit handler makes the change to the mirror.
          
SourceBuffer associated with the previously selected video track is
                not associated with any other enabled tracks, run the following steps:
                  SourceBuffer from activeSourceBuffers.
                    removesourcebuffer at
                    activeSourceBuffers
                    SourceBuffer associated with the newly selected video track is not
                already in activeSourceBuffers, run the following steps:
                  SourceBuffer to activeSourceBuffers.
                    addsourcebuffer at
                    activeSourceBuffers
                    SourceBuffer associated with this
              track is not associated with any other enabled or selected track, then run the
              following steps:
            SourceBuffer associated with the audio track from
                activeSourceBuffers
                removesourcebuffer at
                activeSourceBuffers
                SourceBuffer associated with this track
              is not already in activeSourceBuffers, then run the following steps:
            SourceBuffer associated with the audio track to
                activeSourceBuffers
                addsourcebuffer at
                activeSourceBuffers
                mode becomes "disabled"
              and the SourceBuffer associated with this track is not associated with any other
              enabled or selected track, then run the following steps:
            SourceBuffer associated with the text track from
                activeSourceBuffers
                removesourcebuffer at
                activeSourceBuffers
                mode becomes "showing" or
              "hidden" and the SourceBuffer associated with this
              track is not already in activeSourceBuffers, then run the following
              steps:
            SourceBuffer associated with the text track to
                activeSourceBuffers
                addsourcebuffer at
                activeSourceBuffers
                
            Follow these steps when duration needs to change to a new
            duration.
          
duration is equal to new duration, then
            return.
            SourceBuffer objects in
            sourceBuffers, then throw an InvalidStateError exception and abort
            these steps.
              
            SourceBuffer objects in
            sourceBuffers.
            This condition can occur because the coded frame removal algorithm preserves coded frames that start before the start of the removal range.
duration to new duration.
            Window
            to update the media element's duration:
              duration to new duration.
                
            This algorithm gets called when the application signals the end of stream via an
            endOfStream() call or an algorithm needs to signal a decode error. This
            algorithm takes an error parameter that indicates whether an error
            will be signalled.
          
readyState attribute value to "ended".
            sourceended at the MediaSource.
            SourceBuffer objects in
                    sourceBuffers.
                      This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.
network"
                Window:
                  HTMLMediaElement's readyState attribute
                      equals HAVE_NOTHING
                    HTMLMediaElement's readyState attribute is
                      greater than HAVE_NOTHING
                    decode"
                Window:
                  HTMLMediaElement's readyState attribute
                      equals HAVE_NOTHING
                    HTMLMediaElement's readyState attribute is
                      greater than HAVE_NOTHING
                    
            This algorithm is used to run steps on Window from a MediaSource attached from
            either the same Window or from a DedicatedWorkerGlobalScope, usually to update
            the state of the attached HTMLMediaElement. This algorithm takes a steps
            parameter that lists the steps to run on Window.
          
MediaSource was constructed in a DedicatedWorkerGlobalScope:
            mirror on window message to [[port to main]] whose
              implicit handler in Window will run steps. Return control to the caller without
              awaiting that handler's receipt of the message.
              Window rather than these
                  steps somehow happening in the middle of some other Window task's
                  execution, and
                  DedicatedWorkerGlobalScope.
                  
        The MediaSourceHandle interface represents a proxy for a MediaSource object that is
        useful for attaching a DedicatedWorkerGlobalScope MediaSource to a Window
        HTMLMediaElement using srcObject as described in the attaching to a media element algorithm.
      
        This distinct object is necessary to attach a cross-context MediaSource to a media
        element because MediaSource objects themselves are not transferable since they are
        event targets.
      
        Each MediaSourceHandle object has a [[has ever
        been assigned as srcobject]] internal slot that stores a boolean. It is
        initialized to false when the MediaSourceHandle object is created, is set true in the
        extended HTMLMediaElement's srcObject setter as described in
section 10 (HTMLMediaElement Extensions), and if true, prevents successful transfer of the MediaSourceHandle as described in section 4.1 (Transfer).
      
        MediaSourceHandle objects are Transferable, each having a [[Detached]] internal slot that is used to ensure that once the
        handle object instance has been transferred, that instance cannot be transferred again.
      
[Transferable, Exposed=(Window,DedicatedWorker)]
interface MediaSourceHandle {};
      
          The MediaSourceHandle transfer steps and transfer-receiving steps require the
          implementation to maintain an implicit internal slot referencing the underlying
          MediaSource to enable attaching to a media element using
          srcObject and consequent setup of an attachment's cross-context communication model.
        
          Implementors should be aware that assumption of "move" semantics implied by
          Transferable is not always reality. For example, extensions or internal
          implementations of postMessage using broadcast may cause unintended multiple recipients
          of a transferred MediaSourceHandle. For this reason, implementations are guided to
          not resolve which potential clone of a transferred MediaSourceHandle is still valid
          for attachment until and unless any handle for the underlying MediaSource object is
          used in the asynchronous portion of the media element's resource selection algorithm.
          This is similar to the existing behavior for attachment via MediaSource object URLs,
          which can be cloned easily, where such a URL is valid for at most one attachment start
          (across all of its potentially many clones).
        
          Implementations MUST support at most one attachment (load) via
          srcObject ever for the MediaSource object underlying a
          MediaSourceHandle, regardless of potential cloning of the MediaSourceHandle due
          to varying implementations of Transferable.
        
See attaching to a media element for how this is enforced during the asynchronous portion of the media element's resource selection algorithm.
          MediaSourceHandle is only exposed on Window and DedicatedWorkerGlobalScope
          contexts, and cannot successfully transfer between different agent clusters [ECMASCRIPT]. Transfer of a MediaSourceHandle object can only succeed
          within the same agent cluster.
        
          For example, transfer of a MediaSourceHandle object from either a Window or
          DedicatedWorkerGlobalScope to either a SharedWorker or a ServiceWorker will not
          succeed. Developers should be aware of this difference versus MediaSource object URLs
          which are DOMStrings that can be communicated many ways. Even so, attaching to a media element using a MediaSource object URL can only succeed for a MediaSource
          that was constructed in a Window context. See also the integration of the
          agent and agent cluster formalisms for Web Application APIs
          [HTML] where related concepts such as dedicated worker agents are defined.
        
          Transfer steps for a MediaSourceHandle object MUST include the following step:
        
If the MediaSourceHandle's [[has ever been assigned as srcobject]] internal slot is true, then the transfer steps must fail by throwing a DataCloneError exception.
          WebIDLenum AppendMode {
  "segments",
  "sequence",
};
      segments
          The timestamps in the media segment determine where the coded frames are placed in
          the presentation. Media segments can be appended in any order.
        sequence
          Media segments will be treated as adjacent in time independent of the timestamps in
          the media segment. Coded frames in a new media segment will be placed immediately
          after the coded frames in the previous media segment. The timestampOffset
          attribute will be updated if a new offset is needed to make the new media segments
          adjacent to the previous media segment. Setting the timestampOffset
          attribute in "sequence" mode allows a media segment to be placed at a
          specific position in the timeline without any knowledge of the timestamps in the media
          segment.
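        As a non-normative illustration of "sequence" mode, the sketch below assumes an already
        created sourceBuffer and two hypothetical media segment ArrayBuffers, segmentA and
        segmentB:

// In "sequence" mode, appended media segments are treated as adjacent in time,
// regardless of the timestamps they contain.
sourceBuffer.mode = 'sequence';
sourceBuffer.timestampOffset = 30;   // place the next coded frame group at t = 30 s
sourceBuffer.appendBuffer(segmentA); // lands at 30 s even if its internal timestamps say otherwise
sourceBuffer.addEventListener('updateend', () => {
  sourceBuffer.appendBuffer(segmentB); // lands immediately after segmentA
}, { once: true });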
        WebIDL[Exposed=(Window,DedicatedWorker)]
interface SourceBuffer : EventTarget {
  attribute AppendMode mode;
  readonly  attribute boolean updating;
  readonly  attribute TimeRanges buffered;
  attribute double timestampOffset;
  readonly  attribute AudioTrackList audioTracks;
  readonly  attribute VideoTrackList videoTracks;
  readonly  attribute TextTrackList textTracks;
  attribute double appendWindowStart;
  attribute unrestricted double appendWindowEnd;
  attribute EventHandler onupdatestart;
  attribute EventHandler onupdate;
  attribute EventHandler onupdateend;
  attribute EventHandler onerror;
  attribute EventHandler onabort;
  undefined appendBuffer(BufferSource data);
  undefined abort();
  undefined changeType(DOMString type);
  undefined remove(double start, unrestricted double end);
};
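      A minimal, non-normative sketch of driving a SourceBuffer from a Window context; the MIME
      type, the segmentUrls list, and the video element are assumptions for illustration:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9, opus"');
  for (const url of segmentUrls) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // appendBuffer() completes asynchronously; wait for updateend before appending more.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true }));
  }
  mediaSource.endOfStream();
});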
      mode of type AppendMode
          
              Controls how a sequence of media segments are handled. This attribute is
              initially set by addSourceBuffer() after the object is created, and
              can be updated by changeType() or setting this attribute.
            
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
sourceBuffers attribute
              of the parent media source, then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              [[generate timestamps flag]] equals true and new mode
              equals "segments", then throw a TypeError exception and abort
              these steps.
              
                  If the readyState attribute of the parent media source is in
                  the "ended" state then run the following steps:
                
readyState attribute of the parent media source
                  to "open"
                  sourceopen at the parent media source.
                  [[append state]] equals PARSING_MEDIA_SEGMENT, then
              throw an InvalidStateError and abort these steps.
              sequence", then set the
              [[group start timestamp]] to the [[group end timestamp]].
              updating of type boolean, readonly
          
              Indicates whether the asynchronous continuation of an appendBuffer()
              or remove() operation is still being processed. This attribute is
              initially set to false when the object is created.
            
buffered of type TimeRanges, readonly
          
              Indicates what TimeRanges are buffered in the SourceBuffer. This attribute is
              initially set to an empty TimeRanges object when the object is created.
            
When the attribute is read the following steps MUST occur:
sourceBuffers attribute
              of the parent media source then throw an InvalidStateError exception and
              abort these steps.
              SourceBuffer object.
              TimeRanges object
              containing a single range from 0 to highest end time.
              SourceBuffer, run
              the following steps:
                Text track buffers are included in the calculation of highest end time, above, but excluded from the buffered range calculation here. They are not necessarily continuous, nor should any discontinuity within them trigger playback stall when the other media tracks are continuous over the same time range.
readyState is "ended", then set the end
                  time on the last range in track ranges to highest end time.
                  timestampOffset of type double
          
              Controls the offset applied to timestamps inside subsequent media segments that
              are appended to this SourceBuffer. The timestampOffset is
              initially set to 0 which indicates that no offset is being applied.
            
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
sourceBuffers attribute
              of the parent media source, then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              
                  If the readyState attribute of the parent media source is in
                  the "ended" state then run the following steps:
                
readyState attribute of the parent media source
                  to "open"
                  sourceopen at the parent media source.
                  [[append state]] equals PARSING_MEDIA_SEGMENT, then
              throw an InvalidStateError and abort these steps.
              mode attribute equals "sequence", then
              set the [[group start timestamp]] to new timestamp offset.
              audioTracks of type AudioTrackList, readonly
          AudioTrack objects created by this object.
          videoTracks of type VideoTrackList, readonly
          VideoTrack objects created by this object.
          textTracks of type TextTrackList, readonly
          TextTrack objects created by this object.
          appendWindowStart of type double
          The presentation timestamp for the start of the append window. This attribute is initially set to the presentation start time.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
sourceBuffers attribute
              of the parent media source, then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              appendWindowEnd then throw a TypeError exception and abort these
              steps.
              appendWindowEnd of type unrestricted double
          The presentation timestamp for the end of the append window. This attribute is initially set to positive Infinity.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
sourceBuffers attribute
              of the parent media source, then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              TypeError and abort these steps.
              appendWindowStart then
              throw a TypeError exception and abort these steps.
              onupdatestart of type EventHandler
          
              The event handler for the updatestart event.
            
onupdate of type EventHandler
          
              The event handler for the update event.
            
onupdateend of type EventHandler
          
              The event handler for the updateend event.
            
onerror of type EventHandler
          
              The event handler for the error event.
            
onabort of type EventHandler
          
              The event handler for the abort event.
            
appendBuffer
          
              Appends the segment data in a BufferSource [WEBIDL] to
              the SourceBuffer.
            
When this method is invoked, the user agent must run the following steps:
Run the prepare append algorithm.
              Add data to the end of the [[input buffer]].
              Set the updating attribute to true.
              Queue a task to fire an event named updatestart at this
              SourceBuffer object.
              Asynchronously run the buffer append algorithm.
              abort
          Aborts the current segment and resets the segment parser.
When this method is invoked, the user agent must run the following steps:
sourceBuffers attribute
              of the parent media source then throw an InvalidStateError exception and
              abort these steps.
              readyState attribute of the parent media source is not
              in the "open" state then throw an InvalidStateError exception
              and abort these steps.
              InvalidStateError exception and abort these steps.
              updating attribute equals true, then run the following
              steps:
                updating attribute to false.
                  abort at this
                  SourceBuffer object.
                  updateend at this
                  SourceBuffer object.
                  appendWindowStart to the presentation start time.
              appendWindowEnd to positive Infinity.
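              A non-normative sketch of using abort() to cancel an in-flight append, for example
              after a seek, before appending data for the new playback position
              (segmentForNewPosition is a hypothetical ArrayBuffer):

// abort() fires abort and updateend, resets the segment parser, and resets the
// append window, so the next append can start cleanly with data for the new position.
if (sourceBuffer.updating) {
  sourceBuffer.abort();
}
sourceBuffer.appendBuffer(segmentForNewPosition);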
              changeType
          
              Changes the MIME type associated with this object. Subsequent
              appendBuffer() calls will expect the newly appended bytes to conform
              to the new type.
            
When this method is invoked, the user agent must run the following steps:
TypeError exception and
              abort these steps.
              sourceBuffers attribute
              of the parent media source, then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              SourceBuffer objects in the sourceBuffers attribute of the
              parent media source, then throw a NotSupportedError exception and abort these
              steps.
              
                  If the readyState attribute of the parent media source is in
                  the "ended" state then run the following steps:
                
readyState attribute of the parent media source
                  to "open".
                  sourceopen at the parent media source.
                  [[generate timestamps flag]] on this SourceBuffer
              object to the value in the "Generate Timestamps Flag" column of the byte stream
              format registry [MSE-REGISTRY] entry that is associated with type.
              [[generate timestamps flag]] equals true:
                  mode attribute on this SourceBuffer object to
                    "sequence", including running the associated steps for that
                    attribute being set.
                  mode attribute on this
                    SourceBuffer object, without running any associated steps for that
                    attribute being set.
                  [[pending initialization segment for changeType flag]]
              on this SourceBuffer object to true.
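              A non-normative sketch of switching bytestream formats with changeType(); the MIME
              strings and the lastWebmSegment / mp4InitSegment buffers are assumptions for
              illustration:

// Earlier appends used 'video/webm; codecs="vp9"'.
sourceBuffer.appendBuffer(lastWebmSegment);
sourceBuffer.addEventListener('updateend', () => {
  // Tell the SourceBuffer that subsequent bytes use a different type; the next
  // append is expected to begin with an initialization segment for that type.
  sourceBuffer.changeType('video/mp4; codecs="avc1.640028"');
  sourceBuffer.appendBuffer(mp4InitSegment);
}, { once: true });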
              remove
          Removes media for a specific time range. The start argument gives the start of the removal range, in seconds measured from presentation start time; the end argument gives the end of the removal range, in seconds measured from presentation start time.
When this method is invoked, the user agent must run the following steps:
sourceBuffers attribute
              of the parent media source then throw an InvalidStateError exception and
              abort these steps.
              updating attribute equals true, then throw an
              InvalidStateError exception and abort these steps.
              duration equals NaN, then throw a TypeError exception and
              abort these steps.
              duration, then
              throw a TypeError exception and abort these steps.
              TypeError exception and abort these steps.
              
                  If the readyState attribute of the parent media source is in
                  the "ended" state then run the following steps:
                
readyState attribute of the parent media source
                  to "open"
                  sourceopen at the parent media source.
                  
          A track buffer stores the track descriptions and coded frames for an individual track. The track buffer is updated as initialization segments and media segments are appended to the SourceBuffer.
        
Each track buffer has a last decode timestamp variable that stores the decode timestamp of the last coded frame appended in the current coded frame group. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a last frame duration variable that stores the coded frame duration of the last coded frame appended in the current coded frame group. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a highest end timestamp variable that stores the highest coded frame end timestamp across all coded frames in the current coded frame group that were appended to this track buffer. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a need random access point flag variable that keeps track of whether the track buffer is waiting for a random access point coded frame. The variable is initially set to true to indicate that random access point coded frame is needed before anything can be added to the track buffer.
Each track buffer has a track buffer ranges variable that represents the presentation time ranges occupied by the coded frames currently stored in the track buffer.
          For track buffer ranges, these presentation time ranges are based on presentation timestamps, frame durations, and potentially coded frame group start times for coded
          frame groups across track buffers in a muxed SourceBuffer.
        
          For specification purposes, this information is treated as if it were stored in a
          normalized TimeRanges object. Intersected track buffer ranges are
          used to report HTMLMediaElement's buffered, and MUST therefore
          support uninterrupted playback within each range of HTMLMediaElement's
          buffered.
        
          These coded frame group start times differ slightly from those mentioned in the coded frame processing algorithm in that they are the earliest presentation timestamp
          across all track buffers following a discontinuity. Discontinuities can occur within the
          coded frame processing algorithm or result from the coded frame removal
          algorithm, regardless of mode. The threshold for determining
          disjointness of track buffer ranges is implementation-specific. For example, to
          reduce unexpected playback stalls, implementations MAY approximate the coded frame processing algorithm's discontinuity detection logic by coalescing adjacent ranges
          separated by a gap smaller than 2 times the maximum frame duration buffered so far in
          this track buffer. Implementations MAY also use coded frame group start times as
          range start times across track buffers in a muxed SourceBuffer to further reduce
          unexpected playback stalls.
        
| Event name | Interface | Dispatched when... | 
|---|---|---|
| updatestart | Event | SourceBuffer's updating transitions from false to true. | 
| update | Event | A SourceBuffer's append or remove has successfully completed. SourceBuffer's updating transitions from true to false. | 
| updateend | Event | The append or remove of a SourceBuffer ended. | 
| error | Event | An error occurred during the append to a SourceBuffer. updating transitions from true to false. | 
| abort | Event | The SourceBuffer's append was aborted by an abort() call. updating transitions from true to false. | 
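            For illustration, the corresponding event handler attributes can be used as in the
            following non-normative sketch:

sourceBuffer.onupdatestart = () => console.log('append or remove started');
sourceBuffer.onupdate = () => console.log('append or remove succeeded');
sourceBuffer.onupdateend = () => console.log('append or remove finished');
sourceBuffer.onerror = () => console.log('append failed');
sourceBuffer.onabort = () => console.log('append aborted by abort()');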
            Each SourceBuffer object has an [[append
            state]] internal slot that keeps track of the high-level segment parsing state.
            It is initially set to WAITING_FOR_SEGMENT and can transition to the following
            states as data is appended.
          
| Append state name | Description | 
|---|---|
| WAITING_FOR_SEGMENT | Waiting for the start of an initialization segment or media segment to be appended. | 
| PARSING_INIT_SEGMENT | Currently parsing an initialization segment. | 
| PARSING_MEDIA_SEGMENT | Currently parsing a media segment. | 
            Each SourceBuffer object has an [[input
            buffer]] internal slot that is a byte buffer that holds unparsed bytes across
            appendBuffer() calls. The buffer is empty when the SourceBuffer
            object is created.
          
            Each SourceBuffer object has a [[buffer full
            flag]] internal slot that keeps track of whether appendBuffer()
            is allowed to accept more bytes. It is set to false when the SourceBuffer object is
            created and gets updated as data is appended and removed.
          
            Each SourceBuffer object has a [[group start
            timestamp]] internal slot that keeps track of the starting timestamp for a new
            coded frame group in the "sequence" mode. It is unset when the
            SourceBuffer object is created and gets updated when the mode
            attribute equals "sequence" and the timestampOffset
            attribute is set, or the coded frame processing algorithm runs.
          
            Each SourceBuffer object has a [[group end
            timestamp]] internal slot that stores the highest coded frame end timestamp
            across all coded frames in the current coded frame group. It is set to 0 when
            the SourceBuffer object is created and gets updated by the coded frame processing
            algorithm.
          
            The [[group end timestamp]] stores the highest coded frame end timestamp across all track buffers in a SourceBuffer. Therefore, care should
            be taken in setting the mode attribute when appending multiplexed
            segments in which the timestamps are not aligned across tracks.
          
            Each SourceBuffer object has a [[generate timestamps flag]] internal slot that is a boolean that keeps track
            of whether timestamps need to be generated for the coded frames passed to the
            coded frame processing algorithm. This flag is set by
            addSourceBuffer() when the SourceBuffer object is created and is
            updated by changeType().
          
When the segment parser loop algorithm is invoked, run the following steps:
[[input buffer]] is empty, then jump to the
              need more data step below.
            [[input buffer]] contains bytes that violate the
            SourceBuffer byte stream format specification, then run the append error
            algorithm and abort this algorithm.
            [[input buffer]].
            
                If the [[append state]] equals WAITING_FOR_SEGMENT, then run
                the following steps:
              
[[input buffer]] indicates the start
                of an initialization segment, set the [[append state]] to
                PARSING_INIT_SEGMENT.
                [[input buffer]] indicates the start
                of a media segment, set [[append state]] to
                PARSING_MEDIA_SEGMENT.
                
                If the [[append state]] equals PARSING_INIT_SEGMENT, then run
                the following steps:
              
[[input buffer]] does not contain a complete
                initialization segment yet, then jump to the need more data step below.
                [[input buffer]].
                [[append state]] to WAITING_FOR_SEGMENT.
                
                If the [[append state]] equals PARSING_MEDIA_SEGMENT, then run
                the following steps:
              
[[first initialization segment received flag]] is false
                or the [[pending initialization segment for changeType flag]] is
                true, then run the append error algorithm and abort this algorithm.
                [[input buffer]] contains one or more complete coded frames, then run the coded frame processing algorithm.
                  The frequency at which the coded frame processing algorithm is run is implementation-specific. The coded frame processing algorithm MAY be called when the input buffer contains the complete media segment or it MAY be called multiple times as complete coded frames are added to the input buffer.
SourceBuffer is full and cannot accept more media data, then set
                the [[buffer full flag]] to true.
                [[input buffer]] does not contain a complete media segment, then jump to the need more data step below.
                [[input buffer]].
                [[append state]] to WAITING_FOR_SEGMENT.
                When the parser state needs to be reset, run the following steps:
[[append state]] equals PARSING_MEDIA_SEGMENT and the
            [[input buffer]] contains some complete coded frames, then run the
            coded frame processing algorithm until all of these complete coded frames have
            been processed.
            mode attribute equals "sequence", then set
            the [[group start timestamp]] to the [[group end timestamp]]
            [[input buffer]].
            [[append state]] to WAITING_FOR_SEGMENT.
            This algorithm is called when an error occurs during an append.
Run the reset parser state algorithm.
            Set the updating attribute to false.
            Queue a task to fire an event named error at this SourceBuffer
            object.
            Queue a task to fire an event named updateend at this SourceBuffer
            object.
            Run the end of stream algorithm with the error parameter set to "decode".
            
            When an append operation begins, the following steps are run to validate and prepare
            the SourceBuffer.
          
SourceBuffer has been removed from the sourceBuffers
            attribute of the parent media source then throw an InvalidStateError exception
            and abort these steps.
            updating attribute equals true, then throw an
            InvalidStateError exception and abort these steps.
            MediaSource was constructed in a Window
                HTMLMediaElement's
                  error attribute is not null. If that attribute is null, then
                  let recent element error be false.
                Window case, but run on the Window HTMLMediaElement on any change to
                  its error attribute and communicated by using
                  [[port to worker]] implicit messages. If such a message has
                  not yet been received, then let recent element error be false.
                InvalidStateError exception
            and abort these steps.
            
                If the readyState attribute of the parent media source is in
                the "ended" state then run the following steps:
              
readyState attribute of the parent media source to
                "open"
                sourceopen at the parent media source.
                
                If the [[buffer full flag]] equals true, then throw a
                QuotaExceededError exception and abort these steps.
              
                This is the signal that the implementation was unable to evict enough data to
                accommodate the append or the append is too big. The web application SHOULD use
                remove() to explicitly free up space and/or reduce the size of the
                append.
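                A non-normative sketch of reacting to this signal: catch the exception, evict
                media behind the playback position with remove(), and retry the append (the
                10-second back-buffer and the video element are assumptions for illustration):

function tryAppend(sourceBuffer, data) {
  try {
    sourceBuffer.appendBuffer(data);
  } catch (e) {
    if (e.name !== 'QuotaExceededError') throw e;
    // Free already-played media behind the current position, then retry once
    // the removal has completed.
    const evictEnd = Math.max(0, video.currentTime - 10);
    if (evictEnd <= 0) throw e;
    sourceBuffer.remove(0, evictEnd);
    sourceBuffer.addEventListener('updateend', () => sourceBuffer.appendBuffer(data),
                                  { once: true });
  }
}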
              
            When appendBuffer() is called, the following steps are run to process
            the appended data.
          
Run the segment parser loop algorithm.
            If the segment parser loop algorithm in the previous step was aborted, then abort
            this algorithm.
            Set the updating attribute to false.
            Queue a task to fire an event named update at this SourceBuffer
            object.
            Queue a task to fire an event named updateend at this SourceBuffer
            object.
            Follow these steps when a caller needs to initiate a JavaScript visible range removal operation that blocks other SourceBuffer updates:
Let start equal the starting presentation timestamp for the removal range.
            Let end equal the end presentation timestamp for the removal range.
            Set the updating attribute to true.
            Queue a task to fire an event named updatestart at this
            SourceBuffer object.
            Return control to the caller and run the rest of the steps asynchronously.
            Run the coded frame removal algorithm with start and end as the start and end of
            the removal range.
            Set the updating attribute to false.
            Queue a task to fire an event named update at this SourceBuffer
            object.
            Queue a task to fire an event named updateend at this SourceBuffer
            object.
            The following steps are run when the segment parser loop successfully parses a complete initialization segment:
Each SourceBuffer object has a [[first initialization segment received flag]] internal slot that tracks whether the first initialization segment has been appended and received by this algorithm. This flag is set to false when the SourceBuffer is created and updated by the algorithm below.
            Each SourceBuffer object has a [[pending
            initialization segment for changeType flag]] internal slot that tracks whether an
            initialization segment is needed since the most recent
            changeType(). This flag is set to false when the SourceBuffer is
            created, set to true by changeType() and reset to false by the
            algorithm below.
          
duration attribute if it currently equals NaN:
              [[first initialization segment received flag]] is true,
            then run the following steps:
              
                        User agents MAY consider codecs, that would otherwise be supported, as "not
                        supported" here if the codecs were not specified in type
                        parameter passed to (a) the most recently successful
                        changeType() on this SourceBuffer object, or (b) if no
                        successful changeType() has yet occurred on this object,
                        the addSourceBuffer() that created this SourceBuffer
                        object. For example, if the most recently successful
                        changeType() was called with 'video/webm' or
                        'video/webm; codecs="vp8"', and a video track containing vp9 appears in
                        the initialization segment, then the user agent MAY use this step to
                        trigger a decode error even if the other two properties' checks, above,
                        pass. Implementations are encouraged to trigger error in such cases only
                        when the codec is indeed not supported or the other two properties' checks
                        fail. Web authors are encouraged to use changeType(),
                        addSourceBuffer() and isTypeSupported()
                        with precise codec parameters to more proactively detect user agent
                        support. changeType() is required if the SourceBuffer
                        object's bytestream format is changing.
                      
                If the [[first initialization segment received flag]] is false,
                then run the following steps:
              
                    User agents MAY consider codecs, that would otherwise be supported, as "not
                    supported" here if the codecs were not specified in type parameter
                    passed to (a) the most recently successful changeType() on
                    this SourceBuffer object, or (b) if no successful
                    changeType() has yet occurred on this object, the
                    addSourceBuffer() that created this SourceBuffer object.
                    For example, MediaSource.isTypeSupported('video/webm;codecs="vp8,vorbis"')
                    may return true, but if addSourceBuffer() was called with
                    'video/webm;codecs="vp8"' and a Vorbis track appears in the initialization segment, then the user agent MAY use this step to trigger a decode error.
                    Implementations are encouraged to trigger error in such cases only when the
                    codec is indeed not supported. Web authors are encouraged to use
                    changeType(), addSourceBuffer() and
                    isTypeSupported() with precise codec parameters to more
                    proactively detect user agent support. changeType() is
                    required if the SourceBuffer object's bytestream format is changing.
                  
For each audio track in the initialization segment, run the following steps:
AudioTrack object.
                        id property on
                        new audio track.
                        language property on new
                        audio track.
                        label property on new audio
                        track.
                        kind property on new
                        audio track.
                        
                            If this SourceBuffer object's audioTracks's
                            length equals 0, then run the following steps:
                          
enabled property on new audio track to
                            true.
                            audioTracks attribute on
                        this SourceBuffer object.
                          
                            This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            new audio track, at the AudioTrackList object referenced by the
                            audioTracks attribute on this SourceBuffer object.
                          
DedicatedWorkerGlobalScope:
                            create track mirror message to
                              [[port to main]] whose implicit handler in Window
                              runs the following steps:
                              AudioTrack
                                object.
                                audioTracks attribute on the HTMLMediaElement.
                                audioTracks
                              attribute on the HTMLMediaElement.
                            
                            This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            mirrored audio track or new audio track, at the AudioTrackList
                            object referenced by the audioTracks attribute on
                            the HTMLMediaElement.
                          
For each video track in the initialization segment, run the following steps:
VideoTrack object.
                        id property on
                        new video track.
                        language property on new
                        video track.
                        label property on new video
                        track.
                        kind property on new
                        video track.
                        
                            If this SourceBuffer object's videoTracks's
                            length equals 0, then run the following steps:
                          
selected property on new video track to
                            true.
                            videoTracks attribute on
                        this SourceBuffer object.
                          
                            This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            new video track, at the VideoTrackList object referenced by the
                            videoTracks attribute on this SourceBuffer object.
                          
DedicatedWorkerGlobalScope:
                            create track mirror message to
                              [[port to main]] whose implicit handler in Window
                              runs the following steps:
                              VideoTrack
                                object.
                                videoTracks attribute on the HTMLMediaElement.
                                videoTracks
                              attribute on the HTMLMediaElement.
                            
                            This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            mirrored video track or new video track, at the VideoTrackList
                            object referenced by the videoTracks attribute on
                            the HTMLMediaElement.
                          
For each text track in the initialization segment, run the following steps:
TextTrack object.
                        id property on
                        new text track.
                        language property on new
                        text track.
                        label property on new text
                        track.
                        kind property on new
                        text track.
                        mode property on new text track equals
                          "showing" or "hidden", then set active track flag to true.
                        textTracks attribute on
                        this SourceBuffer object.
                          
                            This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            new text track, at the TextTrackList object referenced by the
                            textTracks attribute on this SourceBuffer object.
                          
DedicatedWorkerGlobalScope:
                            create track mirror message to
                              [[port to main]] whose implicit handler in Window
                              runs the following steps:
                              TextTrack
                                object.
                                textTracks attribute on the HTMLMediaElement.
                                textTracks attribute
                              on the HTMLMediaElement.
                            
                            This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using
                            TrackEvent with the track attribute initialized to
                            mirrored text track or new text track, at the TextTrackList
                            object referenced by the textTracks attribute on
                            the HTMLMediaElement.
                          
SourceBuffer to activeSourceBuffers.
                    addsourcebuffer at
                    activeSourceBuffers
                    [[first initialization segment received flag]] to true.
                [[pending initialization segment for changeType flag]] to
            false.
            Window:
              HTMLMediaElement's readyState attribute is
                greater than HAVE_CURRENT_DATA, then set the
                HTMLMediaElement's readyState attribute to
                HAVE_METADATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
sourceBuffers of the parent media source has
            [[first initialization segment received flag]] equal to true, then use
            the parent media source's mirror if necessary algorithm to run the following
            step in Window:
              HTMLMediaElement's readyState attribute is
                HAVE_NOTHING, then set the HTMLMediaElement's
                readyState attribute to HAVE_METADATA.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement. If transition from HAVE_NOTHING to
                    HAVE_METADATA occurs, it should trigger HTMLMediaElement
                    logic to queue a task to fire an event named
                    loadedmetadata at the media element.
                  
When complete coded frames have been parsed by the segment parser loop then the following steps are run:
For each coded frame in the media segment run the following steps:
[[generate timestamps flag]] equals true:
                    Special processing may be needed to determine the presentation and decode timestamps for timed text frames since this information may not be explicitly present in the underlying format or may be dependent on the order of the frames. Some metadata text tracks, like MPEG2-TS PSI data, may only have implied timestamps. Format specific rules for these situations SHOULD be in the byte stream format specifications or in separate extension specifications.
Implementations don't have to internally store timestamps in a double precision floating point representation. This representation is used here because it is the representation for timestamps in the HTML spec. The intention here is to make the behavior clear without adding unnecessary complexity to the algorithm to deal with the fact that adding a timestampOffset may cause a timestamp rollover in the underlying timestamp representation used by the byte stream format. Implementations can use any internal timestamp representation they wish, but the addition of timestampOffset SHOULD behave in a similar manner to what would happen if a double precision floating point representation was used.
mode equals "sequence" and
                [[group start timestamp]] is set, then run the following steps:
                  timestampOffset equal to [[group start timestamp]] minus presentation timestamp.
                    [[group end timestamp]] equal to
                    [[group start timestamp]].
                    [[group start timestamp]].
                    
                    If timestampOffset is not 0, then run the following steps:
                  
timestampOffset to the presentation timestamp.
                    timestampOffset to the decode timestamp.
                    mode equals "segments":
                            [[group end timestamp]] to presentation
                              timestamp.
                            mode equals "sequence":
                            [[group start timestamp]] equal to the
                              [[group end timestamp]].
                            appendWindowStart,
                then set the need random access point flag to true, drop the coded frame, and
                jump to the top of the loop to start processing the next coded frame.
                  
                    Some implementations MAY choose to collect some of these coded frames with
                    presentation timestamp less than appendWindowStart and use
                    them to generate a splice at the first coded frame that has a presentation timestamp greater than or equal to appendWindowStart even if
                    that frame is not a random access point. Supporting this requires multiple
                    decoders or faster than real-time decoding so for now this behavior will not be
                    a normative requirement.
                  
appendWindowEnd, then
                set the need random access point flag to true, drop the coded frame, and jump
                to the top of the loop to start processing the next coded frame.
                  
                    Some implementations MAY choose to collect coded frames with presentation
                    timestamp less than appendWindowEnd and frame end timestamp
                    greater than appendWindowEnd and use them to generate a splice
                    across the portion of the collected coded frames within the append window at
                    time of collection, and the beginning portion of later processed frames which
                    only partially overlap the end of the collected coded frames. Supporting this
                    requires multiple decoders or faster than real-time decoding so for now this
                    behavior will not be a normative requirement. In conjunction with collecting
                    coded frames that span appendWindowStart, implementations MAY
                    thus support gapless audio splicing.
                  
This is to compensate for minor errors in frame timestamp computations that can appear when converting back and forth between double precision floating point numbers and rationals. This tolerance allows a frame to replace an existing one as long as it is within 1 microsecond of the existing frame's start time. Frames that come slightly before an existing frame are handled by the removal step below.
Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
The greater than check is needed because bidirectional prediction between coded frames can cause presentation timestamp to not be monotonically increasing even though the decode timestamps are monotonically increasing.
[[group end timestamp]], then set [[group end timestamp]] equal to frame
                end timestamp.
                [[generate timestamps flag]] equals true, then set
                timestampOffset equal to frame end timestamp.
                
                If the HTMLMediaElement's readyState attribute is
                HAVE_METADATA and the new coded frames cause
                HTMLMediaElement's buffered to have a TimeRanges for
                the current playback position, then set the HTMLMediaElement's
                readyState attribute to
                HAVE_CURRENT_DATA.
              
                Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                readyState changes may trigger events on the HTMLMediaElement.
              
                If the HTMLMediaElement's readyState attribute is
                HAVE_CURRENT_DATA and the new coded frames cause
                HTMLMediaElement's buffered to have a TimeRanges that
                includes the current playback position and some time beyond the current playback
                position, then set the HTMLMediaElement's readyState
                attribute to HAVE_FUTURE_DATA.
              
                Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                readyState changes may trigger events on the HTMLMediaElement.
              
                If the HTMLMediaElement's readyState attribute is
                HAVE_FUTURE_DATA and the new coded frames cause
                HTMLMediaElement's buffered to have a TimeRanges that
                includes the current playback position and enough data to ensure uninterrupted playback, then set the HTMLMediaElement's readyState
                attribute to HAVE_ENOUGH_DATA.
              
                Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                readyState changes may trigger events on the HTMLMediaElement.
              
duration,
            then run the duration change algorithm with new duration set
            to the maximum of the current duration and the [[group end timestamp]].
            Follow these steps when coded frames for a specific time range need to be removed from the SourceBuffer:
                For each track buffer in this SourceBuffer, run the following steps:
              
duration
                If this track buffer has a random access point timestamp that is greater than or equal to end, then update remove end timestamp to that random access point timestamp.
Random access point timestamps can be different across tracks because the dependencies between coded frames within a track are usually different than the dependencies in another track.
For each removed frame, if the frame has a decode timestamp equal to the last decode timestamp for the frame's track, run the following steps:
mode equals "segments":
                        [[group end timestamp]] to presentation timestamp.
                        mode equals "sequence":
                        [[group start timestamp]] equal to the
                          [[group end timestamp]].
                        Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
                    If this object is in activeSourceBuffers, the current playback position is greater than or equal to start and less
                    than the remove end timestamp, and HTMLMediaElement's
                    readyState is greater than
                    HAVE_METADATA, then set the HTMLMediaElement's
                    readyState attribute to HAVE_METADATA
                    and stall playback.
                  
                    Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's
                    readyState changes may trigger events on the
                    HTMLMediaElement.
                  
This transition occurs because media data for the current position has been removed. Playback cannot progress until media for the current playback position is appended or the selected/enabled track state changes (see 3.15.5 Changes to selected/enabled track state).
[[buffer full flag]] equals true and this object is ready
            to accept more bytes, then set the [[buffer full flag]] to false.
            
            This algorithm is run to free up space in this SourceBuffer when new data is
            appended.
          
                  There is a need to recognize a step here: implementations MAY decide to set the
                  [[buffer full flag]] to true at this point if they predict that processing
                  the new data, in addition to any existing bytes in [[input buffer]],
                  would exceed the capacity of the SourceBuffer. Such a step enables more
                  proactive push-back from implementations before accepting new data that would
                  overflow resources, for example. In practice, at least one implementation already
                  does this.
                
[[buffer full flag]] equals false, then abort these steps.
            
                Implementations MAY use different methods for selecting removal ranges so web
                applications SHOULD NOT depend on a specific behavior. The web application can use
                the buffered attribute to observe whether portions of the buffered
                data have been evicted.
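                A non-normative sketch of observing eviction by comparing buffered before and
                after an append; the timeRangesToArray() helper and the segment ArrayBuffer are
                hypothetical:

function timeRangesToArray(ranges) {
  const result = [];
  for (let i = 0; i < ranges.length; i++) {
    result.push([ranges.start(i), ranges.end(i)]);
  }
  return result;
}

const before = timeRangesToArray(sourceBuffer.buffered);
sourceBuffer.appendBuffer(segment);
sourceBuffer.addEventListener('updateend', () => {
  const after = timeRangesToArray(sourceBuffer.buffered);
  // Ranges present in `before` but missing from `after` were evicted during the append.
}, { once: true });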
              
Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping audio coded frames:
When computing the splice point, timestamps are snapped to the overlapped frame's audio sample grid using floor(x * sample_rate + 0.5) / sample_rate. For example, given an overlapped frame with presentation timestamp 10 and a sample rate of 8000 Hz, and a new coded frame with presentation timestamp 10.01255, the presentation timestamp and decode timestamp are updated to 10.0125, since 10.01255 is closer to 10 + 100/8000 (10.0125) than to 10 + 101/8000 (10.012625).
Some implementations MAY apply fades to/from silence to coded frames on either side of the inserted silence to make the transition less jarring.
This is intended to allow the new coded frame to be added to the track buffer as if the overlapped frame had not been in the track buffer to begin with.
If the new coded frame is less than 5 milliseconds in duration, then coded frames that are appended after the new coded frame will be needed to properly render the splice.
See the audio splice rendering algorithm for details on how this splice frame is rendered.
The following steps are run when a spliced frame, generated by the audio splice frame algorithm, needs to be rendered by the media element:
Here is a graphical representation of this algorithm.
 
          Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping timed text coded frames:
This is intended to allow the new coded frame to be added to the track buffer as if it hadn't overlapped any frames in the track buffer to begin with.
        SourceBufferList is a simple container object for SourceBuffer objects. It
        provides read-only array access and fires events when the list is modified.
      
WebIDL[Exposed=(Window,DedicatedWorker)]
interface SourceBufferList : EventTarget {
  readonly attribute unsigned long length;
  attribute EventHandler onaddsourcebuffer;
  attribute EventHandler onremovesourcebuffer;
  getter SourceBuffer (unsigned long index);
};
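      A non-normative sketch of observing and indexing the list on a MediaSource named
      mediaSource; the MIME type is an assumption for illustration:

mediaSource.sourceBuffers.addEventListener('addsourcebuffer', () => {
  console.log(`list now holds ${mediaSource.sourceBuffers.length} SourceBuffer(s)`);
});
mediaSource.sourceBuffers.addEventListener('removesourcebuffer', () => {
  console.log('a SourceBuffer was removed from the list');
});

const audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
const first = mediaSource.sourceBuffers[0]; // indexed getter access
console.log(first === audioBuffer);         // true when it is the only buffer in the list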
      length of type unsigned long, readonly
          
              Indicates the number of SourceBuffer objects in the list.
            
onaddsourcebuffer of type EventHandler
          
              The event handler for the addsourcebuffer event.
            
onremovesourcebuffer of type EventHandler
          
              The event handler for the removesourcebuffer event.
            
Allows the SourceBuffer objects in the list to be accessed with an array operator (i.e., []).
When this method is invoked, the user agent must run the following steps:
If index is greater than or equal to the length attribute then return undefined and abort these steps.
              Return the index'th SourceBuffer object in the list.
              | Event name | Interface | Dispatched when... | 
|---|---|---|
| addsourcebuffer | Event | When a SourceBuffer is added to the list. | 
| removesourcebuffer | Event | When a SourceBuffer is removed from the list. | 
        A ManagedMediaSource is a MediaSource that actively manages its memory content.
        Unlike a MediaSource, the user agent can evict content through the
        memory cleanup algorithm from its sourceBuffers
        (populated with ManagedSourceBuffer) for any reason.
      
WebIDL[Exposed=(Window,DedicatedWorker)]
interface ManagedMediaSource : MediaSource {
  constructor();
  readonly attribute boolean streaming;
  attribute EventHandler onstartstreaming;
  attribute EventHandler onendstreaming;
};
      streaming
        On getting:
| Event name | Interface | Dispatched when... | 
|---|---|---|
| startstreaming | Event | A ManagedMediaSource's streaming attribute changed from false to true. | 
| endstreaming | Event | A ManagedMediaSource's streaming attribute changed from true to false. | 
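A non-normative sketch of gating network fetches on these events; appendNextSegment() is a hypothetical application-provided function:

const video = document.querySelector('video');
const mediaSource = new ManagedMediaSource();
video.src = URL.createObjectURL(mediaSource);

let shouldFetch = false;
mediaSource.addEventListener('startstreaming', () => {
  shouldFetch = true;        // the user agent wants more data
  appendNextSegment();
});
mediaSource.addEventListener('endstreaming', () => {
  shouldFetch = false;       // enough is buffered; pause fetching to save power
});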
The following steps are run periodically, whenever the SourceBuffer Monitoring algorithm is scheduled to run.
        Having enough managed data to ensure uninterrupted playback is an implementation
        defined condition where the user agent determines that it currently has enough data to play
        the presentation without stalling for a meaningful period of time. This condition is
        constantly evaluated to determine when to transition the value of
        streaming. These transitions indicate when the user agent believes
        it has enough data buffered or it needs more data respectively.
      
Being able to retrieve and buffer data in an efficient way is an implementation defined condition where the user agent determines that it can fetch new data in an energy efficient manner while able to achieve the desired memory usage.
MediaSource SourceBuffer Monitoring algorithm.
        buffered attribute contains a TimeRanges that includes the current
        playback position and enough managed data to ensure uninterrupted playback and is
        able to retrieve and buffer data in an efficient way
          streaming, queue an element task on the media element
              that runs the following steps:
            streaming attribute to can play
                uninterrupted and efficiently.
                startstreaming at the ManagedMediaSource.
                endstreaming at the
                ManagedMediaSource.
                sourceBuffers:
            WebIDL[Exposed=(Window,DedicatedWorker)]
interface BufferedChangeEvent : Event {
  constructor(DOMString type, optional BufferedChangeEventInit eventInitDict = {});
  [SameObject] readonly attribute TimeRanges addedRanges;
  [SameObject] readonly attribute TimeRanges removedRanges;
};
dictionary BufferedChangeEventInit : EventInit {
  TimeRanges addedRanges;
  TimeRanges removedRanges;
};
      addedRanges
          The time ranges added to buffered between the last updatestart and updateend events (which
          would have occurred during the last run of the coded frame processing algorithm).
        removedRanges
          The time ranges removed from buffered between the last updatestart and updateend events (which
          would have occurred during the last run of the coded frame removal or coded frame eviction algorithm, or if the user agent evicted content in response to a
          memory cleanup).
        WebIDL[Exposed=(Window,DedicatedWorker)]
interface ManagedSourceBuffer : SourceBuffer {
  attribute EventHandler onbufferedchange;
};
      onbufferedchange
        
            An event handler IDL attribute whose event handler event type is
            bufferedchange.
          
| Event name | Interface | Dispatched when... | 
|---|---|---|
| bufferedchange | BufferedChangeEvent | The ManagedSourceBuffer's buffered range changed following a call to appendBuffer(), remove(), or endOfStream(), or as a consequence of the user agent running the memory cleanup algorithm. | 
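        A non-normative sketch of listening for this event on a ManagedSourceBuffer named
        sourceBuffer:

sourceBuffer.addEventListener('bufferedchange', (event) => {
  // removedRanges reports what was dropped, whether by remove() or by the user
  // agent's memory cleanup; re-request those ranges later if still needed.
  for (let i = 0; i < event.removedRanges.length; i++) {
    console.log(`evicted ${event.removedRanges.start(i)}–${event.removedRanges.end(i)} s`);
  }
  for (let i = 0; i < event.addedRanges.length; i++) {
    console.log(`added ${event.addedRanges.start(i)}–${event.addedRanges.end(i)} s`);
  }
});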
        The following steps are run at the completion of any operation on a
        ManagedSourceBuffer buffer that would cause the buffer's
        buffered attribute to change, that is, once appendBuffer(),
        remove(), or the memory cleanup algorithm has
        completed.
      
buffered attribute before the changes occurred.
        buffered
        TimeRanges.
        BufferedChangeEventInit dictionary initialized with
        added as its addedRanges and removed as its
        removedRanges
        bufferedchange at buffer using the
        BufferedChangeEvent interface, initialized with eventInitDict.
        ManagedMediaSource parent
              activeSourceBuffers:
            currentTime until such presentation could be retrieved again.
          
            Implementations can use different strategies for selecting removal ranges so web
            applications shouldn't depend on a specific behavior. The web application would listen
            to the bufferedchange event to observe whether portions of the buffered data have
            been evicted.
          
        This section specifies what existing HTMLMediaElement's seekable
        and HTMLMediaElement's buffered attributes on the
        HTMLMediaElement MUST return when a MediaSource is attached to the element, and
        what the existing HTMLMediaElement's srcObject attribute MUST also
        do when it is set to be a MediaSourceHandle object.
      
HTMLMediaElement's seekable
        
          The HTMLMediaElement's seekable attribute returns a new static
          normalized TimeRanges object created based on the following steps:
        
MediaSource was constructed in a DedicatedWorkerGlobalScope that is
          terminated or is closing then return an empty TimeRanges object and abort these
          steps.
            
              This case is intended to handle implementations that may no longer maintain any
              previous information about buffered or seekable media in a MediaSource that was
              constructed in a DedicatedWorkerGlobalScope that has been terminated by
              terminate() or user agent execution of terminate a worker for the
              MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of
              close() execution.
            
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
duration and
          [[live seekable range]], determined as follows:
            MediaSource was constructed in a Window
              duration and set recent live seekable
                range to be [[live seekable range]].
              duration and [[live seekable range]] were recently,
                updated by handling implicit messages posted by the MediaSource to its
                [[port to main]] on every change to duration or
                [[live seekable range]].
              TimeRanges object.
              HTMLMediaElement's buffered
                      attribute.
                      HTMLMediaElement's buffered attribute returns
                  an empty TimeRanges object, then return an empty TimeRanges object and
                  abort these steps.
                  HTMLMediaElement's
                  buffered attribute.
                  HTMLMediaElement's buffered
        
          The HTMLMediaElement's buffered attribute returns a static
          normalized TimeRanges object based on the following steps:
        
1. If the MediaSource was constructed in a DedicatedWorkerGlobalScope that is
           terminated or is closing, then return an empty TimeRanges object and abort these
           steps.
            
              This case is intended to handle implementations that may no longer maintain any
              previous information about buffered or seekable media in a MediaSource that was
              constructed in a DedicatedWorkerGlobalScope that has been terminated by
              terminate() or user agent execution of terminate a worker for the
              MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of
              close() execution.
            
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
2. Let recent intersection ranges be determined as follows:
     If the MediaSource was constructed in a Window, run the following steps:
       1. Let recent intersection ranges equal an empty TimeRanges object.
       2. If activeSourceBuffers.length does not equal 0, then run the
          following steps:
            1. Let active ranges be the ranges returned by
               buffered for each SourceBuffer object in
               activeSourceBuffers.
            2. Let highest end time be the largest range end time in the active ranges.
            3. Let recent intersection ranges equal a
               TimeRanges object containing a single range from 0 to highest end time.
            4. For each SourceBuffer object in activeSourceBuffers
               run the following steps:
                 1. Let source ranges equal the ranges returned by the
                    buffered attribute on the current
                    SourceBuffer.
                 2. If readyState is "ended", then set
                    the end time on the last range in source ranges to highest end time.
                 3. Let new intersection ranges equal the intersection between
                    recent intersection ranges and source ranges.
                 4. Replace the ranges in recent intersection ranges with
                    new intersection ranges.
     Otherwise: let recent intersection ranges be the most recent
               TimeRanges resulting from the steps for
               the Window case, but run with the MediaSource and its SourceBuffer
               objects in their DedicatedWorkerGlobalScope and communicated by using
               [[port to main]] implicit messages on every update to the
               activeSourceBuffers, readyState, or any of the
               buffering state that would change any of the values of each of those
               buffered attributes of the activeSourceBuffers.
3. Return recent intersection ranges.
                The overhead of recalculating and communicating recent intersection ranges so frequently is one reason for allowing implementation flexibility to query this information on-demand using other mechanisms such as shared memory and locks as mentioned in cross-context communication model.
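
                The Window case above amounts to intersecting the buffered ranges of the active
                SourceBuffer objects. A non-normative sketch of that computation over plain
                [start, end] pairs (the readyState "ended" adjustment from the steps above is
                omitted for brevity):

// Convert a TimeRanges object into an array of [start, end] pairs.
function toPairs(timeRanges) {
  const pairs = [];
  for (let i = 0; i < timeRanges.length; i++) {
    pairs.push([timeRanges.start(i), timeRanges.end(i)]);
  }
  return pairs;
}
// Intersect two arrays of [start, end] pairs.
function intersectRanges(a, b) {
  const result = [];
  for (const [aStart, aEnd] of a) {
    for (const [bStart, bEnd] of b) {
      const start = Math.max(aStart, bStart);
      const end = Math.min(aEnd, bEnd);
      if (start < end) result.push([start, end]);
    }
  }
  return result;
}
// Intersection of buffered ranges across all active SourceBuffers.
function intersectActiveBuffered(mediaSource) {
  const buffers = mediaSource.activeSourceBuffers;
  if (buffers.length === 0) return [];
  let ranges = toPairs(buffers[0].buffered);
  for (let i = 1; i < buffers.length; i++) {
    ranges = intersectRanges(ranges, toPairs(buffers[i].buffered));
  }
  return ranges;
}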
HTMLMediaElement's srcObject
        
          If an HTMLMediaElement's srcObject attribute is assigned a
          MediaSourceHandle, then set [[has ever been assigned as srcobject]] for that MediaSourceHandle to true as part of the synchronous steps of
          the extended HTMLMediaElement's srcObject setter that occur
          before invoking the element's load algorithm.
        
          This prevents that MediaSourceHandle object from ever being transferred again,
          enabling a clear synchronous exception if such a transfer is attempted.
        
          MediaSourceHandle needs to be added to HTMLMediaElement's MediaProvider IDL
          typedef and related text involving media provider objects.
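
          As a non-normative illustration, a MediaSource constructed in a dedicated worker is
          attached through its handle; the worker script path "mse-worker.js" is hypothetical:

// main thread — receive the transferred handle and attach it.
const video = document.querySelector("video");
const worker = new Worker("mse-worker.js"); // hypothetical worker script
worker.addEventListener("message", (e) => {
  // Assigning the handle sets [[has ever been assigned as srcobject]],
  // so any later attempt to transfer it throws synchronously.
  video.srcObject = e.data.handle;
});

// mse-worker.js — create the MediaSource and transfer its handle.
const mediaSource = new MediaSource();
const handle = mediaSource.handle;
postMessage({ handle }, [handle]);
mediaSource.addEventListener("sourceopen", () => {
  // Call addSourceBuffer() and append media data here.
});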
        
        This section specifies extensions to the [HTML] AudioTrack definition.
      
[Exposed=(Window,DedicatedWorker)]
partial interface AudioTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
        AudioTrack needs Window+DedicatedWorker exposure.
        sourceBuffer of type SourceBuffer, readonly, nullable
            On getting, run the following step:
              1. If this track was created by a SourceBuffer that was created on the same
                 realm as this track, and that SourceBuffer has not been
                 removed from the sourceBuffers attribute of its parent media source,
                 then return the SourceBuffer that created this track. Otherwise, return null.
              For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its
              internal create track mirror handler in Window to create this track, then the
              Window copy of the track would return null for this attribute.
              
        This section specifies extensions to the [HTML] VideoTrack definition.
      
[Exposed=(Window,DedicatedWorker)]
partial interface VideoTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
        VideoTrack needs Window+DedicatedWorker exposure.
        sourceBuffer of type SourceBuffer, readonly, nullable
            On getting, run the following step:
              1. If this track was created by a SourceBuffer that was created on the same
                 realm as this track, and that SourceBuffer has not been
                 removed from the sourceBuffers attribute of its parent media source,
                 then return the SourceBuffer that created this track. Otherwise, return null.
              For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its
              internal create track mirror handler in Window to create this track, then the
              Window copy of the track would return null for this attribute.
              
        This section specifies extensions to the [HTML] TextTrack definition.
      
[Exposed=(Window,DedicatedWorker)]
partial interface TextTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
        sourceBuffer of type SourceBuffer, readonly, nullable
            On getting, run the following step:
              1. If this track was created by a SourceBuffer that was created on the same
                 realm as this track, and that SourceBuffer has not been
                 removed from the sourceBuffers attribute of its parent media source,
                 then return the SourceBuffer that created this track. Otherwise, return null.
              For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its
              internal create track mirror handler in Window to create this track, then the
              Window copy of the track would return null for this attribute.
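
        As a non-normative illustration, an application can use this attribute to group a media
        element's tracks by the SourceBuffer that created them:

// Group audio and video tracks by their creating SourceBuffer. Tracks
// mirrored from a DedicatedWorkerGlobalScope report a null sourceBuffer.
function tracksBySourceBuffer(video) {
  const groups = new Map();
  for (const list of [video.audioTracks, video.videoTracks]) {
    for (let i = 0; i < list.length; i++) {
      const track = list[i];
      const key = track.sourceBuffer; // a SourceBuffer, or null
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key).push(track);
    }
  }
  return groups;
}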
              
        The bytes provided through appendBuffer() for a SourceBuffer form a
        logical byte stream. The format and semantics of these byte streams are defined in byte stream format specifications. The byte stream format
        registry [MSE-REGISTRY] provides mappings between a MIME type that may be passed to
        addSourceBuffer(), isTypeSupported() or
        changeType() and the byte stream format expected by a SourceBuffer
        using that MIME type for parsing newly appended data. Implementations are encouraged to
        register mappings for byte stream formats they support to facilitate interoperability. The
        byte stream format registry [MSE-REGISTRY] is the authoritative source for these
        mappings. If an implementation claims to support a MIME type listed in the registry, its
        SourceBuffer implementation MUST conform to the byte stream format specification
        listed in the registry entry.
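
        For example, a web application typically probes MIME type support before choosing which
        byte stream format to append; the candidate MIME types listed here are illustrative:

// Pick the first byte stream format / codec combination the user agent
// claims to support. The candidate MIME types are illustrative.
const candidates = [
  'video/webm; codecs="vp9,opus"',
  'video/mp4; codecs="avc1.42E01E,mp4a.40.2"',
];
const type = candidates.find((t) => MediaSource.isTypeSupported(t));
const mediaSource = new MediaSource();
const video = document.createElement("video");
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener("sourceopen", () => {
  if (!type) return; // no supported byte stream format
  // Newly appended data will be parsed per the byte stream format
  // registered for this MIME type.
  const sourceBuffer = mediaSource.addSourceBuffer(type);
});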
      
The byte stream format specifications in the registry are not intended to define new storage formats. They simply outline the subset of existing storage format structures that implementations of this specification will accept.
Byte stream format parsing and validation is implemented in the segment parser loop algorithm.
This section provides general requirements for all byte stream format specifications:
- The user agent MUST be able to derive AudioTrack,
  VideoTrack, and TextTrack attribute values from data in initialization segments.
  If the byte stream format covers a format similar to one covered in the in-band tracks spec [INBANDTRACKS], then it SHOULD try to use the same attribute mappings so that Media Source Extensions playback and non-Media Source Extensions playback provide the same track information.
- The user agent MUST run the append error algorithm when any of the following conditions are met:
  1. The number and type of tracks are not consistent.
     For example, if the first initialization segment has 2 audio tracks and 1 video track, then all initialization segments that follow it in the byte stream MUST describe 2 audio tracks and 1 video track.
  2. Unsupported codec changes occur across initialization segments.
     See the initialization segment received algorithm,
     addSourceBuffer() and changeType() for details and
     examples of codec changes (see the changeType() sketch below).
- The user agent MUST support the following:
  1. Video frame size changes. The user agent MUST support seamless playback.
     This will cause the <video> display region to change size if the web application does not use CSS or HTML attributes (width/height) to constrain the element size.
  2. Audio channel count changes. The user agent MAY support this seamlessly and could trigger downmixing.
     This is a quality of implementation issue because changing the channel count may require reinitializing the audio device, resamplers, and channel mixers which tends to be audible.
- Playback of appended data MUST be seamless across the ranges reported by the buffered attribute.
  This is intended to simplify switching between audio streams where the frame boundaries don't always line up across encodings (e.g., Vorbis).
- The requirements above MUST hold for any initialization segment combined with any of the media segments associated with it in the byte stream.
  For example, if I1 is associated with M1, M2, M3 then the above MUST hold for all the combinations I1+M1, I1+M2, I1+M1+M2, I1+M2+M3, etc.
Byte stream specifications MUST at a minimum define constraints which ensure that the above requirements hold. Additional constraints MAY be defined, for example to simplify implementation.
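Where a supported codec change is needed mid-stream, changeType() lets the application switch byte stream formats or codecs on an existing SourceBuffer, as referenced in the list above. A minimal non-normative sketch; the codec string and nextInitSegment buffer are illustrative:

// Switch a SourceBuffer to a new codec/byte stream format before appending
// the next initialization segment for that format.
function switchCodec(sourceBuffer, nextInitSegment) {
  const newType = 'video/mp4; codecs="avc1.4d401f"'; // illustrative
  if (!MediaSource.isTypeSupported(newType)) {
    throw new Error("Unsupported codec change");
  }
  sourceBuffer.changeType(newType);
  // Data appended from here on is parsed per the new byte stream format.
  sourceBuffer.appendBuffer(nextInitSegment);
}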
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, SHOULD, and SHOULD NOT in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
<video id="v" autoplay></video>
<script>
const video = document.getElementById("v");
const mediaSource = new MediaSource();
mediaSource.addEventListener("sourceopen", onSourceOpen);
video.src = window.URL.createObjectURL(mediaSource);
async function onSourceOpen(e) {
  const mediaSource = e.target;
  if (mediaSource.sourceBuffers.length > 0) return;
  const sourceBuffer = mediaSource.addSourceBuffer(
    'video/webm; codecs="vorbis,vp8"',
  );
  video.addEventListener("seeking", (e) => onSeeking(mediaSource, e.target));
  video.addEventListener("progress", () =>
    appendNextMediaSegment(mediaSource),
  );
  try {
    const initSegment = await getInitializationSegment();
    if (initSegment == null) {
      // Error fetching the initialization segment. Signal end of stream with an error.
      mediaSource.endOfStream("network");
      return;
    }
    // Append the initialization segment.
    sourceBuffer.addEventListener("updateend", function firstAppendHandler() {
      sourceBuffer.removeEventListener("updateend", firstAppendHandler);
      // Append some initial media data.
      appendNextMediaSegment(mediaSource);
    });
    sourceBuffer.appendBuffer(initSegment);
  } catch (error) {
    // Handle errors that might occur during initialization segment fetching.
    console.error("Error fetching initialization segment:", error);
    mediaSource.endOfStream("network");
  }
}
async function appendNextMediaSegment(mediaSource) {
  if (
    mediaSource.readyState === "closed" ||
    mediaSource.sourceBuffers[0].updating
  )
    return;
  // If we have run out of stream data, then signal end of stream.
  if (!haveMoreMediaSegments()) {
    mediaSource.endOfStream();
    return;
  }
  try {
    const mediaSegment = await getNextMediaSegment();
    // NOTE: If mediaSource.readyState == "ended", this appendBuffer() call will
    // cause mediaSource.readyState to transition to "open". The web application
    // should be prepared to handle multiple "sourceopen" events.
    mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
  }
  catch (error) {
    // Handle errors that might occur during media segment fetching.
    console.error("Error fetching media segment:", error);
    mediaSource.endOfStream("network");
  }
}
function onSeeking(mediaSource, video) {
  if (mediaSource.readyState === "open") {
    // Abort current segment append.
    mediaSource.sourceBuffers[0].abort();
  }
  // Notify the media segment loading code to start fetching data at the
  // new playback position.
  seekToMediaSegmentAt(video.currentTime);
  // Append a media segment from the new playback position.
  appendNextMediaSegment(mediaSource);
}
function onProgress(mediaSource, e) {
  appendNextMediaSegment(mediaSource);
}
// Example of async function for getting initialization segment
async function getInitializationSegment() {
  // Implement fetching of the initialization segment
  // This is just a placeholder function
}
// Example function for checking if there are more media segments
function haveMoreMediaSegments() {
  // Implement logic to determine if there are more media segments
  // This is just a placeholder function
}
// Example function for getting the next media segment
async function getNextMediaSegment() {
  // Implement fetching of the next media segment
  // This is just a placeholder function
}
// Example function for seeking to a specific media segment
function seekToMediaSegmentAt(currentTime) {
  // Implement seeking logic
  // This is just a placeholder function
}
</script>

<script>
async function setUpVideoStream() {
  // Specific video format and codec
  const mediaType = 'video/mp4; codecs="mp4a.40.2,avc1.4d4015"';
  // Check if the type of video format / codec is supported.
  if (!window.ManagedMediaSource?.isTypeSupported(mediaType)) {
    return; // Not supported, do something else.
  }
  // Set up video and its managed source.
  const video = document.createElement("video");
  const source = new ManagedMediaSource();
  video.controls = true;
  await new Promise((resolve) => {
    video.src = URL.createObjectURL(source);
    source.addEventListener("sourceopen", resolve, { once: true });
    document.body.appendChild(video);
  });
  const sourceBuffer = source.addSourceBuffer(mediaType);
  // Set up the event handlers
  sourceBuffer.onbufferedchange = (e) => {
    console.log("onbufferedchange event fired.");
    console.log(`Added Ranges: ${timeRangesToString(e.addedRanges)}`);
    console.log(`Removed Ranges: ${timeRangesToString(e.removedRanges)}`);
  };
  source.onstartstreaming = async () => {
    const response = await fetch("./videos/bipbop.mp4");
    const buffer = await response.arrayBuffer();
    await new Promise((resolve) => {
      sourceBuffer.addEventListener("updateend", resolve, { once: true });
      sourceBuffer.appendBuffer(buffer);
    });
  };
  source.onendstreaming = async () => {
    // Stop fetching new segments here
  };
}
// Helper function...
function timeRangesToString(timeRanges) {
  const ranges = [];
  for (let i = 0; i < timeRanges.length; i++) {
    ranges.push([timeRanges.start(i), timeRanges.end(i)]);
  }
  return "[" + ranges.map(([start, end]) => `[${start}, ${end})` ) + "]";     
}
</script>
<body onload="setUpVideoStream()"></body>

The editors would like to thank Alex Giladi, Bob Lund, Chris Needham, Chris Poole, Chris Wilson, Cyril Concolato, Dale Curtis, David Dorwin, David Singer, Duncan Rowden, François Daoust, Frank Galligan, Glenn Adams, Jer Noble, Joe Steele, John Simmons, Kagami Sascha Rosylight, Kevin Streeter, Marcos Cáceres, Mark Vickers, Matt Ward, Matthew Gregan, Michael(tm) Smith, Michael Thornburgh, Mounir Lamouri, Paul Adenot, Philip Jägenstedt, Philippe Le Hegaret, Pierre Lemieux, Ralph Giles, Steven Robertson, and Tatsuya Igarashi for their contributions to this specification.
This section is non-normative.
        The video playback quality metrics described in previous revisions of this specification
        (e.g., sections 5 and 10 of the Candidate Recommendation) are
        now being developed as part of [MEDIA-PLAYBACK-QUALITY]. Some implementations may have
        implemented the earlier draft VideoPlaybackQuality object and the HTMLVideoElement
        extension method getVideoPlaybackQuality() described in those previous
        revisions.
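
        Applications can feature-detect the relocated API before use; a minimal non-normative
        sketch:

// Feature-detect the playback quality API now specified in
// [MEDIA-PLAYBACK-QUALITY].
const video = document.querySelector("video");
if (typeof video.getVideoPlaybackQuality === "function") {
  const quality = video.getVideoPlaybackQuality();
  console.log(quality.totalVideoFrames, quality.droppedVideoFrames);
}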
      