WebCodecs

W3C Working Draft, 4 June 2021

This version:
https://www.w3.org/TR/2021/WD-webcodecs-20210604/
Latest published version:
https://www.w3.org/TR/webcodecs/
Editor's Draft:
https://w3c.github.io/webcodecs/
Previous Versions:
Issue Tracking:
GitHub
Inline In Spec
Editors:
Chris Cunningham (Google Inc.)
Paul Adenot (Mozilla)
Bernard Aboba (Microsoft Corporation)
Participate:
Git Repository.
File an issue.
Version History:
https://github.com/w3c/webcodecs/commits

Abstract

This specification defines interfaces to codecs for encoding and decoding of audio, video, and images.

This specification does not specify or require any particular codec or method of encoding or decoding. The purpose of this specification is to provide JavaScript interfaces to implementations of existing codec technology developed elsewhere. Implementers may support any combination of codecs or none at all.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

Feedback and comments on this specification are welcome. GitHub Issues are preferred for discussion on this specification. Alternatively, you can send comments to the Media Working Group’s mailing-list, public-media-wg@w3.org (archives). This draft highlights some of the pending issues that are still to be discussed in the working group. No decision has been taken on the outcome of these issues including whether they are valid.

This document was published by the Media Working Group as a Working Draft. This document is intended to become a W3C Recommendation.

Publication as a Working Draft does not imply endorsement by the W3C Membership.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 15 September 2020 W3C Process Document.

1. Definitions

Codec

Refers generically to an instance of AudioDecoder, AudioEncoder, VideoDecoder, or VideoEncoder.

Key Chunk

An encoded chunk that does not depend on any other frames for decoding. Also commonly referred to as a "key frame".

Internal Pending Output

Codec outputs such as VideoFrames that currently reside in the internal pipeline of the underlying codec implementation. The underlying codec implementation may emit new outputs only when new inputs are provided. The underlying codec implementation must emit all outputs in response to a flush.

Codec System Resources

Resources including CPU memory, GPU memory, and exclusive handles to specific decoding/encoding hardware that may be allocated by the User Agent as part of codec configuration or generation of AudioData and VideoFrame objects. Such resources may be quickly exhausted and should be released immediately when no longer in use.

Temporal Layer

A grouping of EncodedVideoChunks whose timestamp cadence produces a particular framerate. See scalabilityMode.

Progressive Image

An image that supports decoding to multiple levels of detail, with lower levels becoming available while the encoded data is not yet fully buffered.

Progressive Image Frame Generation

A generational identifier for a given Progressive Image decoded output. Each successive generation adds additional detail to the decoded output. The mechanism for computing a frame’s generation is implementer defined.

Primary Image Track

An image track that is marked by the given image file as being the default track. The mechanism for indicating a primary track is format defined.

2. Codec Processing Model

2.1. Background

This section is non-normative.

The codec interfaces defined by the specification are designed such that new codec tasks may be scheduled while previous tasks are still pending. For example, web authors may call decode() without waiting for a previous decode() to complete. This is achieved by offloading underlying codec tasks to a separate thread for parallel execution.
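
The following non-normative sketch illustrates this pattern; it assumes chunks is an existing array of EncodedVideoChunks, and support for the 'vp8' codec string is not guaranteed:

const decoder = new VideoDecoder({
  output: (frame) => { frame.close(); },
  error: (e) => console.error(e),
});
decoder.configure({ codec: 'vp8' });
for (const chunk of chunks) {
  decoder.decode(chunk); // No need to await; each decode is queued for the codec thread.
}
await decoder.flush();   // Resolves once all queued decodes have emitted their outputs.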

This section describes threading behaviors as they are visible from the perspective of web authors. Implementers may choose to use more or fewer threads, as long as the externally visible behaviors of blocking and sequencing are maintained as follows.

2.2. Control Thread and Codec Thread

All steps in this specification will run on either a control thread or a codec thread.

The control thread is the thread from which authors will construct a codec and invoke its methods. Invoking a codec’s methods will typically result in the creation of control messages which are later executed on the codec thread. Each global object has a separate control thread.

The codec thread is the thread from which a codec will dequeue control messages and execute their steps. Each codec instance has a separate codec thread. The lifetime of a codec thread matches that of its associated codec instance.

The control thread uses a traditional event loop, as described in [HTML].

The codec thread uses a specialized codec processing loop.

Communication from the control thread to the codec thread is done using control message passing. Communication in the other direction is done using regular event loop tasks.

Each codec instance has a single control message queue that is a queue of control messages.

Queuing a control message means enqueueing the message to a codec’s control message queue. Invoking codec methods will often queue a control message to schedule work.

Running a control message means performing a sequence of steps specified by the method that enqueued the message. The steps of a control message may depend on injected state, supplied by the method that enqueued the message.

Resetting the control message queue means performing these steps:

  1. For each control message in the control message queue:

    1. If a control message’s injected state includes a promise, reject that promise.

    2. Remove the message from the queue.

The codec processing loop must run these steps:

  1. While true:

    1. If the control message queue is empty, continue.

    2. Dequeue front message from the control message queue.

    3. Run control message steps described by front message.

3. AudioDecoder Interface

[Exposed=(Window,DedicatedWorker)]
interface AudioDecoder {
  constructor(AudioDecoderInit init);

  readonly attribute CodecState state;
  readonly attribute long decodeQueueSize;

  undefined configure(AudioDecoderConfig config);
  undefined decode(EncodedAudioChunk chunk);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<AudioDecoderSupport> isConfigSupported(AudioDecoderConfig config);
};

dictionary AudioDecoderInit {
  required AudioDataOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback AudioDataOutputCallback = undefined(AudioData output);
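
A non-normative usage sketch follows; the encodedBytes buffer is assumed, and support for the 'opus' codec string is not guaranteed:

const decoder = new AudioDecoder({
  output: (audioData) => {
    // Consume the decoded samples, then release the media resource.
    audioData.close();
  },
  error: (e) => console.error(e),
});
decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 2 });
decoder.decode(new EncodedAudioChunk({
  type: 'key',
  timestamp: 0,       // microseconds
  data: encodedBytes, // BufferSource containing one encoded chunk
}));
await decoder.flush();
decoder.close();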

3.1. Internal Slots

[[codec implementation]]

Underlying decoder implementation provided by the User Agent.

[[output callback]]

Callback given at construction for decoded outputs.

[[error callback]]

Callback given at construction for decode errors.

[[key chunk required]]

A boolean indicating that the next chunk passed to decode() must describe a key chunk as indicated by type.

3.2. Constructors

AudioDecoder(init)
  1. Let d be a new AudioDecoder object.

  2. Assign init.output to [[output callback]].

  3. Assign init.error to [[error callback]].

  4. Assign true to [[key chunk required]].

  5. Assign "unconfigured" to d.state.

  6. Return d.

3.3. Attributes

state, of type CodecState, readonly
Describes the current state of the codec.
decodeQueueSize, of type long, readonly
The number of pending decode requests. This number will decrease as the underlying codec is ready to accept new input.

3.4. Methods

configure(config)
Enqueues a control message to configure the audio decoder for decoding chunks as described by config.

NOTE: This method will trigger a NotSupportedError if the user agent does not support config. Authors should first check support by calling isConfigSupported() with config. User agents are not required to support any particular codec type or configuration.
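
A non-normative sketch of checking support first (decoder is assumed to be a previously constructed AudioDecoder):

const config = { codec: 'opus', sampleRate: 48000, numberOfChannels: 2 };
const { supported } = await AudioDecoder.isConfigSupported(config);
if (supported) {
  decoder.configure(config);
} else {
  // Fall back to a different codec or a software (e.g. WebAssembly) decoder.
}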

When invoked, run these steps:

  1. If config is not a valid AudioDecoderConfig, throw a TypeError.

  2. If state is "closed", throw an InvalidStateError.

  3. Set state to "configured".

  4. Set [[key chunk required]] to true.

  5. Queue a control message to configure the decoder with config.

Running a control message to configure the decoder means running these steps:

  1. Let supported be the result of running the Check Configuration Support algorithm with config.

  2. If supported is true, assign [[codec implementation]] with an implementation supporting config.

  3. Otherwise, run the Close AudioDecoder algorithm with NotSupportedError.

decode(chunk)
Enqueues a control message to decode the given chunk.

When invoked, run these steps:

  1. If state is not "configured", throw an InvalidStateError.

  2. If [[key chunk required]] is true:

    1. If chunk.type is not key, throw a DataError.

    2. Implementers should inspect the chunk’s [[internal data]] to verify that it is truly a key chunk. If a mismatch is detected, throw a DataError.

    3. Otherwise, assign false to [[key chunk required]].

  3. Increment decodeQueueSize.

  4. Queue a control message to decode the chunk.

Running a control message to decode the chunk means performing these steps:

  1. Attempt to use [[codec implementation]] to decode the chunk.

  2. If decoding results in an error, queue a task on the control thread event loop to run the Close AudioDecoder algorithm with EncodingError.

  3. Queue a task on the control thread event loop to decrement decodeQueueSize.

  4. Let decoded outputs be a list of decoded video data outputs emitted by [[codec implementation]].

  5. If decoded outputs is not empty, queue a task on the control thread event loop to run the Output AudioData algorithm with decoded outputs.

flush()
Completes all control messages in the control message queue and emits all outputs.

When invoked, run these steps:

  1. If state is not "configured", return a promise rejected with InvalidStateError DOMException.

  2. Set [[key chunk required]] to true.

  3. Let promise be a new Promise.

  4. Queue a control message to flush the codec with promise.

  5. Return promise.

Running a control message to flush the codec means performing these steps with promise:

  1. Signal [[codec implementation]] to emit all internal pending outputs.

  2. Let decoded outputs be a list of decoded audio data outputs emitted by [[codec implementation]].

  3. If decoded outputs is not empty, queue a task on the control thread event loop to run the Output AudioData algorithm with decoded outputs.

  4. Queue a task on the control thread event loop to resolve promise.

reset()
Immediately resets all state including configuration, control messages in the control message queue, and all pending callbacks.

When invoked, run the Reset AudioDecoder algorithm.

close()
Immediately aborts all pending work and releases system resources. Close is final.

When invoked, run the Close AudioDecoder algorithm.

isConfigSupported(config)
Returns a promise indicating whether the provided config is supported by the user agent.

NOTE: The returned AudioDecoderSupport config will contain only the dictionary members that the user agent recognized. Unrecognized dictionary members will be ignored. Authors may detect unrecognized dictionary members by comparing config to their provided config.

When invoked, run these steps:

  1. If config is not a valid AudioDecoderConfig, return a promise rejected with TypeError.

  2. Let p be a new Promise.

  3. Let checkSupportQueue be the result of starting a new parallel queue.

  4. Enqueue the following steps to checkSupportQueue:

    1. Let decoderSupport be a newly constructed AudioDecoderSupport, initialized as follows:

      1. Set config to the result of running the Clone Configuration algorithm with config.

      2. Set supported to the result of running the Check Configuration Support algorithm with config.

    2. Resolve p with decoderSupport.

  5. Return p.

3.5. Algorithms

Output AudioData (with outputs)
Run these steps:
  1. For each output in outputs:

    1. Let data be an AudioData, initialized as follows:

      1. Assign false to [[detached]].

      2. Let resource be the media resource described by output.

      3. Let resourceReference be a reference to resource.

      4. Assign resourceReference to [[resource reference]].

      5. Let timestamp be the timestamp of the EncodedAudioChunk associated with output.

      6. Assign timestamp to [[timestamp]].

      7. Assign values to [[format]], [[sample rate]], [[number of frames]], and [[number of channels]] as determined by output.

    2. Invoke [[output callback]] with data.

Reset AudioDecoder
Run these steps:
  1. If state is "closed", throw an InvalidStateError.

  2. Set state to "unconfigured".

  3. Signal [[codec implementation]] to cease producing output for the previous configuration.

  4. Reset the control message queue.

  5. Set decodeQueueSize to zero.

Close AudioDecoder (with error)
Run these steps:
  1. Run the Reset AudioDecoder algorithm.

  2. Set state to "closed".

  3. Clear [[codec implementation]] and release associated system resources.

  4. If error is set, queue a task on the control thread event loop to invoke the [[error callback]] with error.

4. VideoDecoder Interface

[Exposed=(Window,DedicatedWorker)]
interface VideoDecoder {
  constructor(VideoDecoderInit init);

  readonly attribute CodecState state;
  readonly attribute long decodeQueueSize;

  undefined configure(VideoDecoderConfig config);
  undefined decode(EncodedVideoChunk chunk);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<VideoDecoderSupport> isConfigSupported(VideoDecoderConfig config);
};

dictionary VideoDecoderInit {
  required VideoFrameOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback VideoFrameOutputCallback = undefined(VideoFrame output);

4.1. Internal Slots

[[codec implementation]]

Underlying decoder implementation provided by the User Agent.

[[output callback]]

Callback given at construction for decoded outputs.

[[error callback]]

Callback given at construction for decode errors.

[[active decoder config]]

The VideoDecoderConfig that is actively applied.

[[key chunk required]]

A boolean indicating that the next chunk passed to decode() must describe a key chunk as indicated by type.

4.2. Constructors

VideoDecoder(init)
  1. Let d be a new VideoDecoder object.

  2. Assign init.output to the [[output callback]] internal slot.

  3. Assign init.error to the [[error callback]] internal slot.

  4. Assign true to [[key chunk required]].

  5. Assign "unconfigured" to d.state.

  6. Return d.

4.3. Attributes

state, of type CodecState, readonly
Describes the current state of the codec.
decodeQueueSize, of type long, readonly
The number of pending decode requests. This number will decrease as the underlying codec is ready to accept new input.

4.4. Methods

configure(config)
Enqueues a control message to configure the video decoder for decoding chunks as described by config.

NOTE: This method will trigger a NotSupportedError if the user agent does not support config. Authors should first check support by calling isConfigSupported() with config. User agents are not required to support any particular codec type or configuration.

When invoked, run these steps:

  1. If config is not a valid VideoDecoderConfig, throw a TypeError.

  2. If state is "closed", throw an InvalidStateError.

  3. Set state to "configured".

  4. Set [[key chunk required]] to true.

  5. Queue a control message to configure the decoder with config.

Running a control message to configure the decoder means running these steps:

  1. Let supported be the result of running the Check Configuration Support algorithm with config.

  2. If supported is true, assign [[codec implementation]] with an implementation supporting config.

  3. Otherwise, run the Close VideoDecoder algorithm with NotSupportedError and abort these steps.

  4. Set [[active decoder config]] to config.

decode(chunk)
Enqueues a control message to decode the given chunk.

NOTE: Authors should call close() on output VideoFrames immediately when frames are no longer needed. The underlying media resources are owned by the VideoDecoder and failing to release them (or waiting for garbage collection) may cause decoding to stall.
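
A non-normative sketch of an output callback that releases frames promptly (rendering to a 2D canvas context is shown purely for illustration):

const canvasContext = document.querySelector('canvas').getContext('2d');
const decoder = new VideoDecoder({
  output: (frame) => {
    canvasContext.drawImage(frame, 0, 0);
    frame.close(); // Release the media resource without waiting for garbage collection.
  },
  error: (e) => console.error(e),
});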

When invoked, run these steps:

  1. If state is not "configured", throw an InvalidStateError.

  2. If [[key chunk required]] is true:

    1. If chunk.type is not key, throw a DataError.

    2. Implementers should inspect the chunk’s [[internal data]] to verify that it is truly a key chunk. If a mismatch is detected, throw a DataError.

    3. Otherwise, assign false to [[key chunk required]].

  3. Increment decodeQueueSize.

  4. Queue a control message to decode the chunk.

Running a control message to decode the chunk means performing these steps:

  1. Attempt to use [[codec implementation]] to decode the chunk.

  2. If decoding results in an error, queue a task on the control thread event loop to run the Close VideoDecoder algorithm with EncodingError.

  3. Queue a task on the control thread event loop to decrement decodeQueueSize.

  4. Let decoded outputs be a list of decoded video data outputs emitted by [[codec implementation]].

  5. If decoded outputs is not empty, queue a task on the control thread event loop to run the Output VideoFrames algorithm with decoded outputs.

flush()
Completes all control messages in the control message queue and emits all outputs.

When invoked, run these steps:

  1. If state is not "configured", return a promise rejected with InvalidStateError DOMException.

  2. Set [[key chunk required]] to true.

  3. Let promise be a new Promise.

  4. Queue a control message to flush the codec with promise.

  5. Return promise.

Running a control message to flush the codec means performing these steps with promise:

  1. Signal [[codec implementation]] to emit all internal pending outputs.

  2. Let decoded outputs be a list of decoded video data outputs emitted by [[codec implementation]].

  3. If decoded outputs is not empty, queue a task on the control thread event loop to run the Output VideoFrames algorithm with decoded outputs.

  4. Queue a task on the control thread event loop to resolve promise.

reset()
Immediately resets all state including configuration, control messages in the control message queue, and all pending callbacks.

When invoked, run the Reset VideoDecoder algorithm.

close()
Immediately aborts all pending work and releases system resources. Close is final.

When invoked, run the Close VideoDecoder algorithm.

isConfigSupported(config)
Returns a promise indicating whether the provided config is supported by the user agent.

NOTE: The returned VideoDecoderSupport config will contain only the dictionary members that the user agent recognized. Unrecognized dictionary members will be ignored. Authors may detect unrecognized dictionary members by comparing config to their provided config.

When invoked, run these steps:

  1. If config is not a valid VideoDecoderConfig, return a promise rejected with TypeError.

  2. Let p be a new Promise.

  3. Let checkSupportQueue be the result of starting a new parallel queue.

  4. Enqueue the following steps to checkSupportQueue:

    1. Let decoderSupport be a newly constructed VideoDecoderSupport, initialized as follows:

      1. Set config to the result of running the Clone Configuration algorithm with config.

      2. Set supported to the result of running the Check Configuration Support algorithm with config.

    2. Resolve p with decoderSupport.

  5. Return p.

4.5. Algorithms

Output VideoFrames (with outputs)
Run these steps:
  1. For each output in outputs:

    1. Let timestamp and duration be the timestamp and duration from the EncodedVideoChunk associated with output.

    2. Let displayAspectWidth and displayAspectHeight be undefined.

    3. If displayAspectWidth and displayAspectHeight exist in the [[active decoder config]], assign their values to displayAspectWidth and displayAspectHeight respectively.

    4. Let frame be the result of running the Create a VideoFrame algorithm with output, timestamp, duration, displayAspectWidth and displayAspectHeight.

    5. Invoke [[output callback]] with frame.

Reset VideoDecoder
Run these steps:
  1. If state is "closed", throw an InvalidStateError.

  2. Set state to "unconfigured".

  3. Signal [[codec implementation]] to cease producing output for the previous configuration.

  4. Reset the control message queue.

  5. Set decodeQueueSize to zero.

Close VideoDecoder (with error)
Run these steps:
  1. Run the Reset VideoDecoder algorithm.

  2. Set state to "closed".

  3. Clear [[codec implementation]] and release associated system resources.

  4. If error is set, queue a task on the control thread event loop to invoke the [[error callback]] with error.

5. AudioEncoder Interface

[Exposed=(Window,DedicatedWorker)]
interface AudioEncoder {
  constructor(AudioEncoderInit init);

  readonly attribute CodecState state;
  readonly attribute long encodeQueueSize;

  undefined configure(AudioEncoderConfig config);
  undefined encode(AudioData data);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<AudioEncoderSupport> isConfigSupported(AudioEncoderConfig config);
};

dictionary AudioEncoderInit {
  required EncodedAudioChunkOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback EncodedAudioChunkOutputCallback =
    undefined (EncodedAudioChunk output,
               optional EncodedAudioChunkMetadata metadata = {});
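
A non-normative usage sketch; audioData is assumed to be an AudioData obtained elsewhere (e.g. from capture or a decoder), and support for the 'opus' codec string is not guaranteed:

const encoder = new AudioEncoder({
  output: (chunk, metadata) => {
    // Forward the chunk to a muxer or network transport.
  },
  error: (e) => console.error(e),
});
encoder.configure({
  codec: 'opus',
  sampleRate: 48000,
  numberOfChannels: 2,
  bitrate: 128000,
});
encoder.encode(audioData); // encode() clones audioData; the caller may close() its copy.
await encoder.flush();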

5.1. Internal Slots

[[codec implementation]]
Underlying encoder implementation provided by the User Agent.
[[output callback]]
Callback given at construction for encoded outputs.
[[error callback]]
Callback given at construction for encode errors.
[[active encoder config]]
The AudioEncoderConfig that is actively applied.
[[active output config]]
The AudioDecoderConfig that describes how to decode the most recently emitted EncodedAudioChunk.

5.2. Constructors

AudioEncoder(init)
  1. Let e be a new AudioEncoder object.

  2. Assign init.output to the [[output callback]] internal slot.

  3. Assign init.error to the [[error callback]] internal slot.

  4. Assign "unconfigured" to e.state.

  5. Assign null to [[active encoder config]].

  6. Assign null to [[active output config]].

  7. Return e.

5.3. Attributes

state, of type CodecState, readonly
Describes the current state of the codec.
encodeQueueSize, of type long, readonly
The number of pending encode requests. This number will decrease as the underlying codec is ready to accept new input.

5.4. Methods

configure(config)
Enqueues a control message to configure the audio encoder for encoding audio data as described by config.

NOTE: This method will trigger a NotSupportedError if the user agent does not support config. Authors should first check support by calling isConfigSupported() with config. User agents are not required to support any particular codec type or configuration.

When invoked, run these steps:

  1. If config is not a valid AudioEncoderConfig, throw a TypeError.

  2. If state is "closed", throw an InvalidStateError.

  3. Set state to "configured".

  4. Queue a control message to configure the encoder using config.

Running a control message to configure the encoder means performing these steps:

  1. Let supported be the result of running the Check Configuration Support algorithm with config.

  2. If supported is true, assign [[codec implementation]] with an implementation supporting config.

  3. Otherwise, run the Close AudioEncoder algorithm with NotSupportedError and abort these steps.

  4. Assign config to [[active encoder config]].

encode(data)
Enqueues a control message to encode the given data.

When invoked, run these steps:

  1. If the value of data’s [[detached]] internal slot is true, throw a TypeError.

  2. If state is not "configured", throw an InvalidStateError.

  3. Let dataClone hold the result of running the Clone AudioData algorithm with data.

  4. Increment encodeQueueSize.

  5. Queue a control message to encode dataClone.

Running a control message to encode the data means performing these steps:

  1. Attempt to use [[codec implementation]] to encode the media resource described by dataClone.

  2. If encoding results in an error, queue a task on the control thread event loop to run the Close AudioEncoder algorithm with EncodingError.

  3. Queue a task on the control thread event loop to decrement encodeQueueSize.

  4. Let encoded outputs be a list of encoded audio data outputs emitted by [[codec implementation]].

  5. If encoded outputs is not empty, queue a task on the control thread event loop to run the Output EncodedAudioChunks algorithm with encoded outputs.

flush()
Completes all control messages in the control message queue and emits all outputs.

When invoked, run these steps:

  1. If state is not "configured", return a promise rejected with InvalidStateError DOMException.

  2. Let promise be a new Promise.

  3. Queue a control message to flush the codec with promise.

  4. Return promise.

Running a control message to flush the codec means performing these steps with promise:

  1. Signal [[codec implementation]] to emit all internal pending outputs.

  2. Let encoded outputs be a list of encoded audio data outputs emitted by [[codec implementation]].

  3. If encoded outputs is not empty, queue a task on the control thread event loop to run the Output EncodedAudioChunks algorithm with encoded outputs.

  4. Queue a task on the control thread event loop to resolve promise.

reset()
Immediately resets all state including configuration, control messages in the control message queue, and all pending callbacks.

When invoked, run the Reset AudioEncoder algorithm.

close()
Immediately aborts all pending work and releases system resources. Close is final.

When invoked, run the Close AudioEncoder algorithm.

isConfigSupported(config)
Returns a promise indicating whether the provided config is supported by the user agent.

NOTE: The returned AudioEncoderSupport config will contain only the dictionary members that the user agent recognized. Unrecognized dictionary members will be ignored. Authors may detect unrecognized dictionary members by comparing config to their provided config.

When invoked, run these steps:

  1. If config is not a valid AudioEncoderConfig, return a promise rejected with TypeError.

  2. Let p be a new Promise.

  3. Let checkSupportQueue be the result of starting a new parallel queue.

  4. Enqueue the following steps to checkSupportQueue:

    1. Let encoderSupport be a newly constructed AudioEncoderSupport, initialized as follows:

      1. Set config to the result of running the Clone Configuration algorithm with config.

      2. Set supported to the result of running the Check Configuration Support algorithm with config.

    2. Resolve p with encoderSupport.

  5. Return p.

5.5. Algorithms

Output EncodedAudioChunks (with outputs)
Run these steps:
  1. For each output in outputs:

    1. Let chunkInit be an EncodedAudioChunkInit with the following keys:

      1. Let data contain the encoded audio data from output.

      2. Let type be the EncodedAudioChunkType of output.

      3. Let timestamp be the timestamp from the AudioData associated with output.

    2. Let chunk be a new EncodedAudioChunk constructed with chunkInit.

    3. Let chunkMetadata be a new EncodedAudioChunkMetadata.

    4. Let encoderConfig be the [[active encoder config]].

    5. Let outputConfig be a new AudioDecoderConfig that describes output. Initialize outputConfig as follows:

      1. Assign encoderConfig.codec to outputConfig.codec.

      2. Assign encoderConfig.sampleRate to outputConfig.sampleRate.

      3. Assign encoderConfig.numberOfChannels to outputConfig.numberOfChannels.

      4. Assign outputConfig.description with a sequence of codec specific bytes as determined by the [[codec implementation]]. The user agent must ensure that the provided description could be used to correctly decode output.

        NOTE: The codec specific requirements for populating the description are described in the [WEBCODECS-CODEC-REGISTRY].

    6. If outputConfig and [[active output config]] are not equal dictionaries:

      1. Assign outputConfig to chunkMetadata.decoderConfig.

      2. Assign outputConfig to [[active output config]].

    7. Invoke [[output callback]] with chunk and chunkMetadata.

Reset AudioEncoder
Run these steps:
  1. If state is "closed", throw an InvalidStateError.

  2. Set state to "unconfigured".

  3. Set [[active encoder config]] to null.

  4. Set [[active output config]] to null.

  5. Signal [[codec implementation]] to cease producing output for the previous configuration.

  6. Reset the control message queue.

  7. Set encodeQueueSize to zero.

Close AudioEncoder (with error)
Run these steps:
  1. Run the Reset AudioEncoder algorithm.

  2. Set state to "closed".

  3. Clear [[codec implementation]] and release associated system resources.

  4. If error is set, queue a task on the control thread event loop to invoke the [[error callback]] with error.

5.6. EncodedAudioChunkMetadata

The following metadata dictionary is emitted by the EncodedAudioChunkOutputCallback alongside an associated EncodedAudioChunk.
dictionary EncodedAudioChunkMetadata {
  AudioDecoderConfig decoderConfig;
};
decoderConfig, of type AudioDecoderConfig

An AudioDecoderConfig that authors may use to decode the associated EncodedAudioChunk.
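
A non-normative transcoding sketch that forwards the emitted decoderConfig to an AudioDecoder (decoder is assumed to be constructed already):

const encoder = new AudioEncoder({
  output: (chunk, metadata) => {
    if (metadata.decoderConfig) {
      // Present only when the output config changes; apply it before decoding.
      decoder.configure(metadata.decoderConfig);
    }
    decoder.decode(chunk);
  },
  error: (e) => console.error(e),
});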

6. VideoEncoder Interface

[Exposed=(Window,DedicatedWorker)]
interface VideoEncoder {
  constructor(VideoEncoderInit init);

  readonly attribute CodecState state;
  readonly attribute long encodeQueueSize;

  undefined configure(VideoEncoderConfig config);
  undefined encode(VideoFrame frame, optional VideoEncoderEncodeOptions options = {});
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<VideoEncoderSupport> isConfigSupported(VideoEncoderConfig config);
};

dictionary VideoEncoderInit {
  required EncodedVideoChunkOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback EncodedVideoChunkOutputCallback =
    undefined (EncodedVideoChunk chunk,
               optional EncodedVideoChunkMetadata metadata = {});
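
A non-normative usage sketch; videoFrame is assumed to be a VideoFrame obtained elsewhere, and support for the 'vp8' codec string is not guaranteed:

const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Forward the chunk to a muxer or network transport.
  },
  error: (e) => console.error(e),
});
encoder.configure({ codec: 'vp8', width: 640, height: 480, bitrate: 1_000_000 });
encoder.encode(videoFrame, { keyFrame: true }); // encode() clones the frame.
videoFrame.close(); // The caller's reference can be released immediately.
await encoder.flush();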

6.1. Internal Slots

[[codec implementation]]
Underlying encoder implementation provided by the User Agent.
[[output callback]]
Callback given at construction for encoded outputs.
[[error callback]]
Callback given at construction for encode errors.
[[active encoder config]]
The VideoEncoderConfig that is actively applied.
[[active output config]]
The VideoDecoderConfig that describes how to decode the most recently emitted EncodedVideoChunk.

6.2. Constructors

VideoEncoder(init)
  1. Let e be a new VideoEncoder object.

  2. Assign init.output to the [[output callback]] internal slot.

  3. Assign init.error to the [[error callback]] internal slot.

  4. Assign "unconfigured" to e.state.

  5. Return e.

6.3. Attributes

state, of type CodecState, readonly
Describes the current state of the codec.
encodeQueueSize, of type long, readonly
The number of pending encode requests. This number will decrease as the underlying codec is ready to accept new input.

6.4. Methods

configure(config)
Enqueues a control message to configure the video encoder for encoding frames as described by config.

NOTE: This method will trigger a NotSupportedError if the user agent does not support config. Authors should first check support by calling isConfigSupported() with config. User agents are not required to support any particular codec type or configuration.

When invoked, run these steps:

  1. If config is not a valid VideoEncoderConfig, throw a TypeError.

  2. If state is "closed", throw an InvalidStateError.

  3. Set state to "configured".

  4. Queue a control message to configure the encoder using config.

Running a control message to configure the encoder means performing these steps:

  1. Let supported be the result of running the Check Configuration Support algorithm with config.

  2. If supported is true, assign [[codec implementation]] with an implementation supporting config.

  3. Otherwise, run the Close VideoEncoder algorithm with NotSupportedError and abort these steps.

  4. Assign config to [[active encoder config]].

encode(frame, options)
Enqueues a control message to encode the given frame.

When invoked, run these steps:

  1. If the value of frame’s [[detached]] internal slot is true, throw a TypeError.

  2. If state is not "configured", throw an InvalidStateError.

  3. Let frameClone hold the result of running the Clone VideoFrame algorithm with frame.

  4. Increment encodeQueueSize.

  5. Queue a control message to encode frameClone.

Running a control message to encode the frame means performing these steps:

  1. Attempt to use [[codec implementation]] to encode frameClone according to options.

  2. If encoding results in an error, queue a task on the control thread event loop to run the Close VideoEncoder algorithm with EncodingError.

  3. Queue a task on the control thread event loop to decrement encodeQueueSize.

  4. Let encoded outputs be a list of encoded video data outputs emitted by [[codec implementation]].

  5. If encoded outputs is not empty, queue a task on the control thread event loop to run the Output EncodedVideoChunks algorithm with encoded outputs.

flush()
Completes all control messages in the control message queue and emits all outputs.

When invoked, run these steps:

  1. If state is not "configured", return a promise rejected with InvalidStateError DOMException.

  2. Let promise be a new Promise.

  3. Queue a control message to flush the codec with promise.

  4. Return promise.

Running a control message to flush the codec means performing these steps with promise:

  1. Signal [[codec implementation]] to emit all internal pending outputs.

  2. Let encoded outputs be a list of encoded video data outputs emitted by [[codec implementation]].

  3. If encoded outputs is not empty, queue a task on the control thread event loop to run the Output EncodedVideoChunks algorithm with encoded outputs.

  4. Queue a task on the control thread event loop to resolve promise.

reset()
Immediately resets all state including configuration, control messages in the control message queue, and all pending callbacks.

When invoked, run the Reset VideoEncoder algorithm.

close()
Immediately aborts all pending work and releases system resources. Close is final.

When invoked, run the Close VideoEncoder algorithm.

isConfigSupported(config)
Returns a promise indicating whether the provided config is supported by the user agent.

NOTE: The returned VideoEncoderSupport config will contain only the dictionary members that the user agent recognized. Unrecognized dictionary members will be ignored. Authors may detect unrecognized dictionary members by comparing config to their provided config.

When invoked, run these steps:

  1. If config is not a valid VideoEncoderConfig, return a promise rejected with TypeError.

  2. Let p be a new Promise.

  3. Let checkSupportQueue be the result of starting a new parallel queue.

  4. Enqueue the following steps to checkSupportQueue:

    1. Let encoderSupport be a newly constructed VideoEncoderSupport, initialized as follows:

      1. Set config to the result of running the Clone Configuration algorithm with config.

      2. Set supported to the result of running the Check Configuration Support algorithm with config.

    2. Resolve p with encoderSupport.

  5. Return p.

6.5. Algorithms

Output EncodedVideoChunks (with outputs)
Run these steps:
  1. For each output in outputs:

    1. Let chunkInit be an EncodedVideoChunkInit with the following keys:

      1. Let data contain the encoded video data from output.

      2. Let type be the EncodedVideoChunkType of output.

      3. Let timestamp be the [[timestamp]] from the VideoFrame associated with output.

      4. Let duration be the [[duration]] from the VideoFrame associated with output.

    2. Let chunk be a new EncodedVideoChunk constructed with chunkInit.

    3. Let chunkMetadata be a new EncodedVideoChunkMetadata.

    4. Let encoderConfig be the [[active encoder config]].

    5. Let outputConfig be a VideoDecoderConfig that describes output. Initialize outputConfig as follows:

      1. Assign encoderConfig.codec to outputConfig.codec.

      2. Assign encoderConfig.width to outputConfig.codedWidth.

      3. Assign encoderConfig.height to outputConfig.codedHeight.

      4. Assign encoderConfig.displayWidth to outputConfig.displayAspectWidth.

      5. Assign encoderConfig.displayHeight to outputConfig.displayAspectHeight.

      6. Assign the remaining keys of outputConfig as determined by [[codec implementation]]. The user agent must ensure that the configuration is completely described such that outputConfig could be used to correctly decode output.

        NOTE: The codec specific requirements for populating the description are described in the [WEBCODECS-CODEC-REGISTRY].

    6. If outputConfig and [[active output config]] are not equal dictionaries:

      1. Assign outputConfig to chunkMetadata.decoderConfig.

      2. Assign outputConfig to [[active output config]].

    7. If encoderConfig.scalabilityMode describes multiple temporal layers:

      1. Let temporal_layer_id be the zero-based index describing the temporal layer for output.

      2. Assign temporal_layer_id to chunkMetadata.temporalLayerId.

    8. Invoke [[output callback]] with chunk and chunkMetadata.

Reset VideoEncoder
Run these steps:
  1. If state is "closed", throw an InvalidStateError.

  2. Set state to "unconfigured".

  3. Set [[active encoder config]] to null.

  4. Set [[active output config]] to null.

  5. Signal [[codec implementation]] to cease producing output for the previous configuration.

  6. Reset the control message queue.

  7. Set encodeQueueSize to zero.

Close VideoEncoder (with error)
Run these steps:
  1. Run the Reset VideoEncoder algorithm.

  2. Set state to "closed".

  3. Clear [[codec implementation]] and release associated system resources.

  4. If error is set, queue a task on the control thread event loop to invoke the [[error callback]] with error.

6.6. EncodedVideoChunkMetadata

The following metadata dictionary is emitted by the EncodedVideoChunkOutputCallback alongside an associated EncodedVideoChunk.
dictionary EncodedVideoChunkMetadata {
  VideoDecoderConfig decoderConfig;
  unsigned long temporalLayerId;
};
decoderConfig, of type VideoDecoderConfig

A VideoDecoderConfig that authors may use to decode the associated EncodedVideoChunk.

temporalLayerId, of type unsigned long

A number that identifies the temporal layer for the associated EncodedVideoChunk.
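
A non-normative sketch of routing chunks by temporal layer ('L1T2' is an illustrative [WebRTC-SVC] mode identifier, and the routing helpers are hypothetical):

const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    if (metadata.temporalLayerId === 0) {
      sendToAllSubscribers(chunk);           // Base layer: required by every receiver.
    } else {
      sendToHighFramerateSubscribers(chunk); // Enhancement layer: optional detail.
    }
  },
  error: (e) => console.error(e),
});
encoder.configure({ codec: 'vp8', width: 640, height: 480, scalabilityMode: 'L1T2' });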

7. Configurations

7.1. Check Configuration Support (with config)

Run these steps:
  1. If the user agent can provide a codec to support all entries of the config, including applicable default values for keys that are not included, return true.

    NOTE: The types AudioDecoderConfig, VideoDecoderConfig, AudioEncoderConfig, and VideoEncoderConfig each define their respective configuration entries and defaults.

    NOTE: Support for a given configuration may change dynamically if the hardware is altered (e.g. external GPU unplugged) or if required hardware resources are exhausted. User agents should describe support on a best-effort basis given the resources that are available at the time of the query.

  2. Otherwise, return false.

7.2. Clone Configuration (with config)

NOTE: This algorithm will copy only the dictionary members that the user agent recognizes as part of the dictionary type.

Run these steps:

  1. Let dictType be the type of dictionary config.

  2. Let clone be a new empty instance of dictType.

  3. For each dictionary member m defined on dictType:

    1. If m does not exist in config, then continue.

    2. If config[m] is a nested dictionary, set clone[m] to the result of recursively running the Clone Configuration algorithm with config[m].

    3. Otherwise, assign the value of config[m] to clone[m].

  4. Return clone.
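
A non-normative sketch of using the cloned config to detect unrecognized members (futureOption is a hypothetical dictionary member):

const myConfig = { codec: 'opus', sampleRate: 48000, numberOfChannels: 2, futureOption: true };
const { supported, config } = await AudioDecoder.isConfigSupported(myConfig);
if (supported && !('futureOption' in config)) {
  // The user agent ignored futureOption; it played no part in determining support.
}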

7.3. Signalling Configuration Support

7.3.1. AudioDecoderSupport

dictionary AudioDecoderSupport {
  boolean supported;
  AudioDecoderConfig config;
};
supported, of type boolean
A boolean indicating whether the corresponding config is supported by the user agent.
config, of type AudioDecoderConfig
An AudioDecoderConfig used by the user agent in determining the value of supported.

7.3.2. VideoDecoderSupport

dictionary VideoDecoderSupport {
  boolean supported;
  VideoDecoderConfig config;
};
supported, of type boolean
A boolean indicating whether the corresponding config is supported by the user agent.
config, of type VideoDecoderConfig
A VideoDecoderConfig used by the user agent in determining the value of supported.

7.3.3. AudioEncoderSupport

dictionary AudioEncoderSupport {
  boolean supported;
  AudioEncoderConfig config;
};
supported, of type boolean
A boolean indicating whether the corresponding config is supported by the user agent.
config, of type AudioEncoderConfig
An AudioEncoderConfig used by the user agent in determining the value of supported.

7.3.4. VideoEncoderSupport

dictionary VideoEncoderSupport {
  boolean supported;
  VideoEncoderConfig config;
};
supported, of type boolean
A boolean indicating whether the corresponding config is supported by the user agent.
config, of type VideoEncoderConfig
A VideoEncoderConfig used by the user agent in determining the value of supported.

7.4. Codec String

A codec string describes a given codec format to be used for encoding or decoding.

A valid codec string must meet the following conditions.

  1. It is valid per the relevant codec specification (see examples below).

  2. It describes a single codec.

  3. It is unambiguous about codec profile and level for codecs that define these concepts.

NOTE: In other media specifications, codec strings historically accompanied a MIME type as the "codecs=" parameter (isTypeSupported(), canPlayType()) [RFC6381]. In this specification, encoded media is not containerized; hence, only the value of the codecs parameter is accepted.

The format and semantics for codec strings are defined by codec registrations listed in the [WEBCODECS-CODEC-REGISTRY]. A compliant implementation may support any combination of codec registrations or none at all.
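
For illustration, a few codec strings drawn from registrations in the registry (non-exhaustive, and support is never guaranteed):

const exampleCodecStrings = [
  'vp8',           // VP8
  'vp09.00.10.08', // VP9, profile 0, level 1.0, 8-bit
  'avc1.42001E',   // H.264 Baseline, Level 3.0
  'opus',          // Opus
  'mp4a.40.2',     // AAC-LC
];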

7.5. AudioDecoderConfig

dictionary AudioDecoderConfig {
  required DOMString codec;
  [EnforceRange] required unsigned long sampleRate;
  [EnforceRange] required unsigned long numberOfChannels;
  BufferSource description;
};

To check if an AudioDecoderConfig is a valid AudioDecoderConfig, run these steps:

  1. If codec is not a valid codec string, return false.

  2. Return true.

codec, of type DOMString
Contains a codec string describing the codec.
sampleRate, of type unsigned long
The number of frame samples per second.
numberOfChannels, of type unsigned long
The number of audio channels.
description, of type BufferSource
A sequence of codec specific bytes, commonly known as extradata.

NOTE: The registrations in the [WEBCODECS-CODEC-REGISTRY] describe whether/how to populate this sequence, corresponding to the provided codec.

7.6. VideoDecoderConfig

dictionary VideoDecoderConfig {
  required DOMString codec;
  BufferSource description;
  [EnforceRange] unsigned long codedWidth;
  [EnforceRange] unsigned long codedHeight;
  [EnforceRange] unsigned long displayAspectWidth;
  [EnforceRange] unsigned long displayAspectHeight;
  HardwareAcceleration hardwareAcceleration = "allow";
};

To check if a VideoDecoderConfig is a valid VideoDecoderConfig, run these steps:

  1. If codec is not a valid codec string, return false.

  2. If one of codedWidth or codedHeight is provided but the other isn’t, return false.

  3. If codedWidth = 0 or codedHeight = 0, return false.

  4. If one of displayAspectWidth or displayAspectHeight is provided but the other isn’t, return false.

  5. If displayAspectWidth = 0 or displayAspectHeight = 0, return false.

  6. Return true.

codec, of type DOMString
Contains a codec string describing the codec.
description, of type BufferSource
A sequence of codec specific bytes, commonly known as extradata.

NOTE: The registrations in the [WEBCODECS-CODEC-REGISTRY] may describe whether/how to populate this sequence, corresponding to the provided codec.

codedWidth, of type unsigned long
Width of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.
codedHeight, of type unsigned long
Height of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.

NOTE: codedWidth and codedHeight are used when selecting a [[codec implementation]].

displayAspectWidth, of type unsigned long
Horizontal dimension of the VideoFrame’s aspect ratio when displayed.
displayAspectHeight, of type unsigned long
Vertical dimension of the VideoFrame’s aspect ratio when displayed.

NOTE: displayWidth and displayHeight can both be different from displayAspectWidth and displayAspectHeight, but they should have identical ratios, after scaling is applied when creating the video frame.

hardwareAcceleration, of type HardwareAcceleration, defaulting to "allow"
Configures hardware acceleration for this codec. See HardwareAcceleration.

7.7. AudioEncoderConfig

dictionary AudioEncoderConfig {
  required DOMString codec;
  [EnforceRange] unsigned long sampleRate;
  [EnforceRange] unsigned long numberOfChannels;
  [EnforceRange] unsigned long long bitrate;
};

NOTE: Codec-specific extensions to AudioEncoderConfig may be defined by the registrations in the [WEBCODECS-CODEC-REGISTRY].

To check if an AudioEncoderConfig is a valid AudioEncoderConfig, run these steps:

  1. If codec is not a valid codec string, return false.

  2. Return true.

codec, of type DOMString
Contains a codec string describing the codec.
sampleRate, of type unsigned long
The number of frame samples per second.
numberOfChannels, of type unsigned long
The number of audio channels.
bitrate, of type unsigned long long
The average bitrate of the encoded audio given in units of bits per second.

7.8. VideoEncoderConfig

dictionary VideoEncoderConfig {
  required DOMString codec;
  [EnforceRange] unsigned long long bitrate;
  [EnforceRange] required unsigned long width;
  [EnforceRange] required unsigned long height;
  [EnforceRange] unsigned long displayWidth;
  [EnforceRange] unsigned long displayHeight;
  HardwareAcceleration hardwareAcceleration = "allow";
  DOMString scalabilityMode;
};

NOTE: Codec-specific extensions to VideoEncoderConfig may be defined by the registrations in the [WEBCODECS-CODEC-REGISTRY].

To check if a VideoEncoderConfig is a valid VideoEncoderConfig, run these steps:

  1. If codec is not a valid codec string, return false.

  2. If width = 0 or height = 0, return false.

  3. If displayWidth = 0 or displayHeight = 0, return false.

  4. Return true.

codec, of type DOMString
Contains a codec string describing the codec.
bitrate, of type unsigned long long
The average bitrate of the encoded video given in units of bits per second.
width, of type unsigned long
The encoded width of output EncodedVideoChunks in pixels, prior to any display aspect ratio adjustments.

The encoder must scale any VideoFrame whose [[crop width]] differs from this value.

height, of type unsigned long
The encoded height of output EncodedVideoChunks in pixels, prior to any display aspect ratio adjustments.

The encoder must scale any VideoFrame whose [[crop height]] differs from this value.

displayWidth, of type unsigned long
The intended display width of output EncodedVideoChunks in pixels. Defaults to width if not present.
displayHeight, of type unsigned long
The intended display height of output EncodedVideoChunks in pixels. Defaults to height if not present.
NOTE: Providing a displayWidth or displayHeight that differs from width and height signals that chunks should be scaled after decoding to arrive at the final display aspect ratio.

For many codecs this is merely pass-through information, but some codecs may optionally include display sizing in the bitstream.

hardwareAcceleration, of type HardwareAcceleration, defaulting to "allow"
Configures hardware acceleration for this codec. See HardwareAcceleration.
scalabilityMode, of type DOMString
An encoding scalability mode identifier as defined by [WebRTC-SVC].

7.9. Hardware Acceleration

enum HardwareAcceleration {
  "allow",
  "deny",
  "require",
};

When supported, hardware acceleration offloads encoding or decoding to specialized hardware.

NOTE: Most authors will be best served by using the default of allow. This gives the user agent flexibility to optimize based on its knowledge of the system and configuration. A common strategy will be to prioritize hardware acceleration at higher resolutions with a fallback to software codecs if hardware acceleration fails.

Authors should carefully weigh the tradeoffs when setting a hardware acceleration preference. The precise tradeoffs are device-specific.

Given these tradeoffs, a good example of using "require" would be if an author intends to provide their own software-based fallback via WebAssembly.

Alternatively, a good example of using "deny" would be if an author is especially sensitive to the higher startup latency or decreased robustness generally associated with hardware acceleration.

allow
Indicates that the user agent may use hardware acceleration if it is available and compatible with other aspects of the codec configuration.
deny
Indicates that the user agent must not use hardware acceleration.

NOTE: This will cause the configuration to be unsupported on platforms where an unaccelerated codec is unavailable or is incompatible with other aspects of the codec configuration.

require
Indicates that the user agent must use hardware acceleration.

NOTE: This will cause the configuration to be unsupported on platforms where an accelerated codec is unavailable or is incompatible with other aspects of the codec configuration.

7.10. Configuration Equivalence

Two dictionaries are equal dictionaries if they contain the same keys and values. For nested dictionaries, apply this definition recursively.

7.11. VideoEncoderEncodeOptions

dictionary VideoEncoderEncodeOptions {
  boolean keyFrame = false;
};
keyFrame, of type boolean, defaulting to false
A value of true indicates that the given frame MUST be encoded as a key frame. A value of false indicates that the user agent has flexibility to decide whether the frame will be encoded as a key frame.
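
A non-normative sketch of a periodic key frame policy (the interval of 150 frames is an arbitrary illustrative choice):

let frameCounter = 0;
function encodeWithPeriodicKeyFrames(encoder, frame) {
  const keyFrame = frameCounter % 150 === 0; // Force a key chunk every 150 frames.
  encoder.encode(frame, { keyFrame });
  frame.close();
  frameCounter++;
}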

7.12. CodecState

enum CodecState {
  "unconfigured",
  "configured",
  "closed"
};
unconfigured
The codec is not configured for encoding or decoding.
configured
A valid configuration has been provided. The codec is ready for encoding or decoding.
closed
The codec is no longer usable and underlying system resources have been released.

7.13. WebCodecsErrorCallback

callback WebCodecsErrorCallback = undefined(DOMException error);

8. Encoded Media Interfaces (Chunks)

These interfaces represent chunks of encoded media.

8.1. EncodedAudioChunk Interface

[Exposed=(Window,DedicatedWorker)]
interface EncodedAudioChunk {
  constructor(EncodedAudioChunkInit init);
  readonly attribute EncodedAudioChunkType type;
  readonly attribute long long timestamp;    // microseconds
  readonly attribute unsigned long byteLength;

  undefined copyTo([AllowShared] BufferSource destination);
};

dictionary EncodedAudioChunkInit {
  required EncodedAudioChunkType type;
  [EnforceRange] required long long timestamp;    // microseconds
  required BufferSource data;
};

enum EncodedAudioChunkType {
    "key",
    "delta",
};

8.1.1. Internal Slots

[[internal data]]

An array of bytes representing the encoded chunk data.

8.1.2. Constructors

EncodedAudioChunk(init)
  1. Let chunk be a new EncodedAudioChunk object, initialized as follows:

    1. Assign init.type to type.

    2. Assign init.timestamp to timestamp.

    3. Assign a copy of init.data to [[internal data]].

    4. Assign init.data.byteLength to byteLength.

  2. Return chunk.

8.1.3. Attributes

type, of type EncodedAudioChunkType, readonly

Describes whether the chunk is a key chunk.

timestamp, of type long long, readonly

The presentation timestamp, given in microseconds.

byteLength, of type unsigned long, readonly

The byte length of [[internal data]].

8.1.4. Methods

copyTo(destination)

When invoked, run these steps:

  1. If byteLength is greater than destination.byteLength, throw a TypeError.

  2. Copy the [[internal data]] into destination.
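
A non-normative sketch (chunk is assumed to be an existing EncodedAudioChunk):

const buffer = new ArrayBuffer(chunk.byteLength);
chunk.copyTo(buffer); // buffer now holds a copy of the chunk's encoded bytes.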

8.2. EncodedVideoChunk Interface

[Exposed=(Window,DedicatedWorker)]
interface EncodedVideoChunk {
  constructor(EncodedVideoChunkInit init);
  readonly attribute EncodedVideoChunkType type;
  readonly attribute long long timestamp;             // microseconds
  readonly attribute unsigned long long? duration;    // microseconds
  readonly attribute unsigned long byteLength;

  undefined copyTo([AllowShared] BufferSource destination);
};

dictionary EncodedVideoChunkInit {
  required EncodedVideoChunkType type;
  [EnforceRange] required long long timestamp;        // microseconds
  [EnforceRange] unsigned long long duration;         // microseconds
  required BufferSource data;
};

enum EncodedVideoChunkType {
    "key",
    "delta",
};
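
A non-normative construction sketch (encodedBytes is an assumed BufferSource holding the encoded payload):

const chunk = new EncodedVideoChunk({
  type: 'key',
  timestamp: 0,    // microseconds
  duration: 33333, // microseconds; optional, roughly one frame at 30 fps
  data: encodedBytes,
});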

8.2.1. Internal Slots

[[internal data]]

An array of bytes representing the encoded chunk data.

8.2.2. Constructors

EncodedVideoChunk(init)
  1. Let chunk be a new EncodedVideoChunk object, initialized as follows:

    1. Assign init.type to type.

    2. Assign init.timestamp to timestamp.

    3. If duration is present in init, assign init.duration to duration. Otherwise, assign null to duration.

    4. Assign a copy of init.data to [[internal data]].

    5. Assign init.data.byteLength to byteLength.

  2. Return chunk.

8.2.3. Attributes

type, of type EncodedVideoChunkType, readonly

Describes whether the chunk is a key chunk.

timestamp, of type long long, readonly

The presentation timestamp, given in microseconds.

duration, of type unsigned long long, readonly, nullable

The presentation duration, given in microseconds.

byteLength, of type unsigned long, readonly

The byte length of [[internal data]].

8.2.4. Methods

copyTo(destination)

When invoked, run these steps:

  1. If byteLength is greater than destination.byteLength, throw a TypeError.

  2. Copy the [[internal data]] into destination.

9. Raw Media Interfaces

These interfaces represent unencoded (raw) media.

9.1. Memory Model

9.1.1. Background

This section is non-normative.

Decoded media data may occupy a large amount of system memory. To minimize the need for expensive copies, this specification defines a scheme for reference counting (clone() and close()).

NOTE: Authors should take care to invoke close() immediately when frames are no longer needed.

9.1.2. Reference Counting

A media resource is storage for the actual pixel data or the audio sample data described by a VideoFrame or AudioData.

The AudioData [[resource reference]] and VideoFrame [[resource reference]] internal slots hold a reference to a media resource.

VideoFrame.clone() and AudioData.clone() return new objects whose [[resource reference]] points to the same media resource as the original object.

VideoFrame.close() and AudioData.close() will clear their [[resource reference]] slot, releasing the reference to their media resource.

A media resource must remain alive at least as long as it continues to be referenced by a [[resource reference]].

NOTE: When a media resource is no longer referenced by a [[resource reference]], the resource may be destroyed. User agents are encouraged to destroy such resources quickly to reduce memory pressure and facilitate resource reuse.
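
A non-normative sketch of retaining a frame beyond the output callback via clone():

let lastFrame = null;
const decoder = new VideoDecoder({
  output: (frame) => {
    if (lastFrame) lastFrame.close();
    lastFrame = frame.clone(); // A new reference to the same media resource.
    frame.close();             // Release the callback's reference immediately.
  },
  error: (e) => console.error(e),
});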

9.2. AudioData Interface

[Exposed=(Window,DedicatedWorker)]
interface AudioData {
  constructor(AudioDataInit init);

  readonly attribute AudioSampleFormat format;
  readonly attribute float sampleRate;
  readonly attribute unsigned long numberOfFrames;
  readonly attribute unsigned long numberOfChannels;
  readonly attribute unsigned long long duration;  // microseconds
  readonly attribute long long timestamp;          // microseconds

  unsigned long allocationSize(AudioDataCopyToOptions options);
  undefined copyTo([AllowShared] BufferSource destination, AudioDataCopyToOptions options);
  AudioData clone();
  undefined close();
};

dictionary AudioDataInit {
  required AudioSampleFormat format;
  [EnforceRange] required float sampleRate;
  [EnforceRange] required unsigned long numberOfFrames;
  [EnforceRange] required unsigned long numberOfChannels;
  [EnforceRange] required long long timestamp;  // microseconds
  required BufferSource data;
};

9.2.1. Internal Slots

[[detached]]

Boolean indicating whether close() was invoked on this AudioData.

[[resource reference]]

A reference to a media resource that stores the audio sample data for this AudioData.

[[format]]

The AudioSampleFormat used by this AudioData.

[[sample rate]]

The sample-rate, in Hz, for this AudioData.

[[number of frames]]

The number of frames for this AudioData.

[[number of channels]]

The number of audio channels for this AudioData.

[[timestamp]]

The presentation timestamp, in microseconds, for this AudioData.

9.2.2. Constructors

AudioData(init)
  1. Let frame be a new AudioData object, initialized as follows:

    1. Assign false to [[detached]].

    2. Assign init.format to [[format]].

    3. Assign init.sampleRate to [[sample rate]].

    4. Assign init.numberOfFrames to [[number of frames]].

    5. Assign init.numberOfChannels to [[number of channels]].

    6. Assign init.timestamp to [[timestamp]].

    7. Let resource be a media resource containing a copy of init.data.

    8. Let resourceReference be a reference to resource.

    9. Assign resourceReference to [[resource reference]].

  2. Return frame.
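For example (non-normative), an author can wrap 10 milliseconds of interleaved silence as follows; all values here are illustrative:

const numberOfFrames = 480;     // 10 ms at 48 kHz
const numberOfChannels = 2;
const samples = new Float32Array(numberOfFrames * numberOfChannels);

const data = new AudioData({
  format: 'FLT',                // 32-bit float, interleaved
  sampleRate: 48000,
  numberOfFrames,
  numberOfChannels,
  timestamp: 0,                 // microseconds
  data: samples,
});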

9.2.3. Attributes

format, of type AudioSampleFormat, readonly

The AudioSampleFormat used by this AudioData.

The format getter steps are to return [[format]].

sampleRate, of type float, readonly

The sample-rate, in Hz, for this AudioData.

The sampleRate getter steps are to return [[sample rate]].

numberOfFrames, of type unsigned long, readonly

The number of frames for this AudioData.

The numberOfFrames getter steps are to return [[number of frames]].

numberOfChannels, of type unsigned long, readonly

The number of audio channels for this AudioData.

The numberOfChannels getter steps are to return [[number of channels]].

timestamp, of type long long, readonly

The presentation timestamp, in microseconds, for this AudioData.

The timestamp getter steps are to return [[timestamp]].

duration, of type unsigned long long, readonly

The duration, in microseconds, for this AudioData.

The duration getter steps are to:

  1. Let microsecondsPerSecond be 1,000,000.

  2. Let durationInSeconds be the result of dividing [[number of frames]] by [[sample rate]].

  3. Return the product of durationInSeconds and microsecondsPerSecond.
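For example (non-normative): an AudioData with [[number of frames]] equal to 480 and [[sample rate]] equal to 48000 Hz has a duration of (480 / 48000) × 1,000,000 = 10,000 microseconds.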

9.2.4. Methods

allocationSize(options)

Returns the number of bytes required to hold the samples as described by options.

When invoked, run these steps:

  1. Let copyElementCount be the result of running the Compute Copy Element Count algorithm with options.

  2. Let bytesPerSample be the number of bytes per sample, as defined by the [[format]].

  3. Return the product of multiplying bytesPerSample by copyElementCount.

copyTo(destination, options)

Copies the samples from the specified plane of the AudioData to the destination buffer.

When invoked, run these steps:

  1. If the value of frame’s [[detached]] internal slot is true, throw an InvalidStateError DOMException.

  2. Let copyElementCount be the result of running the Compute Copy Element Count algorithm with options.

  3. Let bytesPerSample be the number of bytes per sample, as defined by the [[format]].

  4. If the product of multiplying bytesPerSample by copyElementCount is greater than destination.byteLength, throw a RangeError.

  5. Let resource be the media resource referenced by [[resource reference]].

  6. Let planeFrames be the region of resource corresponding to options.planeIndex.

  7. Copy elements of planeFrames into destination, starting with the frame positioned at options.frameOffset and stopping after copyElementCount samples have been copied.
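A non-normative sketch pairing allocationSize() with copyTo(); data is an AudioData (for example, from an AudioDecoder output callback) and planeIndex identifies the plane to extract:

function copyPlane(data, planeIndex) {
  const options = { planeIndex };
  // allocationSize() reports the byte length the copy will require.
  const buffer = new ArrayBuffer(data.allocationSize(options));
  data.copyTo(buffer, options);
  return buffer;
}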

clone()

Creates a new AudioData with a reference to the same media resource.

When invoked, run these steps:

  1. If the value of frame’s [[detached]] internal slot is true, throw an InvalidStateError DOMException.

  2. Return the result of running the Clone AudioData algorithm with this.

close()

Clears all state and releases the reference to the media resource. Close is final.

When invoked, run these steps:

  1. Assign true to the [[detached]] internal slot.

  2. Assign null to [[resource reference]].

9.2.5. Algorithms

Compute Copy Element Count (with options)

Run these steps:

  1. Let frameCount be the number of frames in the plane identified by options.planeIndex.

  2. If options.frameOffset is greater than or equal to frameCount, throw a RangeError.

  3. Let copyFrameCount be the difference of subtracting options.frameOffset from frameCount.

  4. If options.frameCount exists:

    1. If options.frameCount is greater than copyFrameCount, throw a RangeError.

    2. Otherwise, assign options.frameCount to copyFrameCount.

  5. Let elementCount be copyFrameCount.

  6. If [[format]] describes an interleaved AudioSampleFormat, multiply elementCount by [[number of channels]].

  7. Return elementCount.
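The following non-normative JavaScript mirrors these steps; frameCount, interleaved, and numberOfChannels stand in for the selected plane's frame count, the [[format]]'s channel arrangement, and [[number of channels]]:

function computeCopyElementCount(frameCount, options, interleaved, numberOfChannels) {
  if (options.frameOffset >= frameCount)
    throw new RangeError('frameOffset out of range');
  let copyFrameCount = frameCount - options.frameOffset;
  if (options.frameCount !== undefined) {
    if (options.frameCount > copyFrameCount)
      throw new RangeError('frameCount out of range');
    copyFrameCount = options.frameCount;
  }
  let elementCount = copyFrameCount;
  if (interleaved)
    elementCount *= numberOfChannels;
  return elementCount;
}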

Clone AudioData (with data)

Run these steps:

  1. Let clone be a new AudioData initialized as follows:

    1. Let resource be the media resource referenced by data’s [[resource reference]].

    2. Let reference be a new reference to resource.

    3. Assign reference to [[resource reference]].

    4. Assign the values of data’s [[detached]], [[format]], [[sample rate]], [[number of frames]], [[number of channels]], and [[timestamp]] slots to the corresponding slots in clone.

  2. Return clone.

9.2.6. AudioDataCopyToOptions

dictionary AudioDataCopyToOptions {
  required unsigned long planeIndex;
  unsigned long frameOffset = 0;
  unsigned long frameCount;
};
planeIndex, of type unsigned long

The index identifying the plane to copy from.

frameOffset, of type unsigned long, defaulting to 0

An offset into the source plane data indicating which frame to begin copying from. Defaults to 0.

frameCount, of type unsigned long

The number of frames to copy. If not provided, the copy will include all frames in the plane beginning with frameOffset.

9.3. Audio Sample Format

An audio sample format describes the numeric type used to represent a single sample (e.g. 32-bit floating point) and the arrangement of samples from different channels as either interleaved or planar. The audio sample type refers solely to the numeric type and interval used to store the data: U8, S16, S24, S32, or FLT, for unsigned 8-bit, signed 16-bit, signed 24-bit, signed 32-bit, and 32-bit floating point, respectively. The audio buffer arrangement refers solely to the way the samples are laid out in memory (planar or interleaved).

A sample refers to a single value that is the magnitude of a signal at a particular point in time in a particular channel.

A frame (or sample-frame) refers to the set of values of all channels of a multi-channel signal that occur at the exact same time.

Note: Consequently, if an audio signal is mono (has only one channel), a frame and a sample refer to the same thing.

All audio samples in this specification are using linear pulse-code modulation (Linear PCM): quantization levels are uniform between values.

Note: The Web Audio API, which is expected to be used with this specification, also uses Linear PCM.

enum AudioSampleFormat {
  "U8",
  "S16",
  "S24",
  "S32",
  "FLT",
  "U8P",
  "S16P",
  "S24P",
  "S32P",
  "FLTP",
};
U8

8-bit unsigned integer samples with interleaved channel arrangement.

S16

16-bit signed integer samples with interleaved channel arrangement.

S24

32-bit signed integer samples with interleaved channel arrangement, holding values in the 24 bits of lowest significance.

S32

32-bit signed integer samples with interleaved channel arrangement.

FLT

32-bit float samples with interleaved channel arrangement.

U8P

8-bit unsigned integer samples with planar channel arrangement.

S16P

16-bit signed integer samples with planar channel arrangement.

S24P

32-bit signed integer samples with planar channel arrangement, holding values in the 24 bits of lowest significance.

S32P

32-bit signed integer samples with planar channel arrangement.

FLTP

32-bit float samples with planar channel arrangement.

9.3.1. Arrangement of audio buffer

When an AudioData has an AudioSampleFormat that is interleaved, the audio samples from different channels are laid out consecutively in the same buffer, in the order described in the section § 9.3.3 Audio channel ordering. The AudioData has a single plane, which therefore contains numberOfFrames * numberOfChannels elements.

When an AudioData has an AudioSampleFormat that is planar, the audio samples from different channels are laid out in different buffers, themselves arranged in an order described in the section § 9.3.3 Audio channel ordering. The AudioData has a number of planes equal to the AudioData's numberOfChannels. Each plane contains numberOfFrames elements.

Note: The Web Audio API currently uses FLTP exclusively.
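For example (non-normative): an AudioData with 2 channels and 1024 frames in an interleaved format such as FLT has a single plane of 2048 elements, laid out L0 R0 L1 R1 … L1023 R1023; the same data in a planar format such as FLTP has two planes of 1024 elements each, plane 0 holding the left channel and plane 1 the right.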

9.3.2. Magnitude of the audio samples

The minimum value and maximum value of an audio sample, for a particular audio sample type, are the values below which (respectively, above which) audio clipping might occur. They are otherwise regular types that can hold values outside this interval during intermediate processing.

The bias value for an audio sample type is the value that corresponds to the middle of the range (though the range is often not symmetrical). An audio buffer comprised only of values equal to the bias value is silent.

Sample type   IDL type   Minimum value   Bias value   Maximum value
U8            octet      0               128          +255
S16           short      -32768          0            +32767
S24           long       -8388608        0            +8388607
S32           long       -2147483648     0            +2147483647
FLT           float      -1.0            0.0          +1.0

Note: There is no data type that can conveniently hold 24 bits of information, but audio content using 24-bit samples is common, so 32-bit integers are commonly used to hold 24-bit content.
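A non-normative sketch converting S16 samples to FLT using the ranges in the table above; values at the extremes map to approximately ±1.0:

function s16ToFloat(s16Samples /* Int16Array */) {
  const out = new Float32Array(s16Samples.length);
  for (let i = 0; i < s16Samples.length; i++) {
    // Scale -32768..32767 into roughly -1.0..+1.0.
    out[i] = s16Samples[i] / 32768;
  }
  return out;
}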

9.3.3. Audio channel ordering

When decoding, the ordering of the audio channels in the resulting AudioData MUST be the same as what is present in the EncodedAudioChunk.

When encoding, the ordering of the audio channels in the resulting EncodedAudioChunk MUST be the same as what is present in the given AudioData.

In other words, no channel reordering is performed when encoding and decoding.

Note: The container either implies or specifies the channel mapping: the channel attributed to a particular channel index.

9.4. VideoFrame Interface

NOTE: VideoFrame is a CanvasImageSource. A VideoFrame may be passed to any method accepting a CanvasImageSource, including CanvasDrawImage's drawImage().

[Exposed=(Window,DedicatedWorker)]
interface VideoFrame {
  constructor(CanvasImageSource image, optional VideoFrameInit init = {});
  constructor(sequence<(Plane or PlaneInit)> planes,
              VideoFramePlaneInit init);

  readonly attribute PixelFormat format;
  readonly attribute FrozenArray<Plane>? planes;
  readonly attribute unsigned long codedWidth;
  readonly attribute unsigned long codedHeight;
  readonly attribute unsigned long cropLeft;
  readonly attribute unsigned long cropTop;
  readonly attribute unsigned long cropWidth;
  readonly attribute unsigned long cropHeight;
  readonly attribute unsigned long displayWidth;
  readonly attribute unsigned long displayHeight;
  readonly attribute unsigned long long? duration;  // microseconds
  readonly attribute long long? timestamp;          // microseconds

  VideoFrame clone();
  undefined close();
};

dictionary VideoFrameInit {
  unsigned long long duration;  // microseconds
  long long timestamp;          // microseconds
};

dictionary VideoFramePlaneInit {
  required PixelFormat format;
  [EnforceRange] required unsigned long codedWidth;
  [EnforceRange] required unsigned long codedHeight;
  [EnforceRange] unsigned long cropLeft;
  [EnforceRange] unsigned long cropTop;
  [EnforceRange] unsigned long cropWidth;
  [EnforceRange] unsigned long cropHeight;
  [EnforceRange] unsigned long displayWidth;
  [EnforceRange] unsigned long displayHeight;
  [EnforceRange] unsigned long long duration;  // microseconds
  [EnforceRange] long long timestamp;          // microseconds
};
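A non-normative sketch assembling a 320x240 I420 VideoFrame from raw bytes; yBytes, uBytes, and vBytes are hypothetical Uint8Arrays holding each plane:

const frame = new VideoFrame(
    [
      { src: yBytes, stride: 320, rows: 240 },  // Y plane
      { src: uBytes, stride: 160, rows: 120 },  // U plane (4:2:0 subsampling)
      { src: vBytes, stride: 160, rows: 120 },  // V plane
    ],
    {
      format: 'I420',
      codedWidth: 320,
      codedHeight: 240,
      timestamp: 0,  // microseconds
    });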

9.4.1. Internal Slots

[[detached]]

A boolean indicating whether close() was invoked and underlying resources have been released.

[[resource reference]]

A reference to the media resource that stores the pixel data for this frame.

[[format]]

A PixelFormat describing the pixel format of the VideoFrame.

[[planes]]

A list of Planes describing the memory layout of the pixel data in VideoFrame. The number of Planes and their semantics are determined by [[format]].

[[coded width]]

Width of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.

[[coded height]]

Height of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.

[[crop left]]

The number of pixels to remove from the left of the VideoFrame, prior to aspect ratio adjustments.

[[crop top]]

The number of pixels to remove from the top of the VideoFrame, prior to aspect ratio adjustments.

[[crop width]]

The width of pixels to include in the crop, starting from cropLeft.

[[crop height]]

The height of pixels to include in the crop, starting from cropTop.

[[display width]]

Width of the VideoFrame when displayed after applying aspect ratio adjustments.

[[display height]]

Height of the VideoFrame when displayed after applying aspect ratio adjustments.

[[duration]]

The presentation duration, given in microseconds. The duration is copied from the EncodedVideoChunk corresponding to this VideoFrame.

[[timestamp]]

The presentation timestamp, given in microseconds. The timestamp is copied from the EncodedVideoChunk corresponding to this VideoFrame.

9.4.2. Constructors

VideoFrame(image, init)

  1. Check the usability of the image argument. If this throws an exception or returns bad, then throw an InvalidStateError DOMException.

  2. If the origin of image’s image data is not same origin with the entry settings object's origin, then throw a SecurityError DOMException.

  3. Let frame be a new VideoFrame.

  4. Switch on image:

  5. Return frame.

VideoFrame(planes, init)

  1. If init is not a valid VideoFramePlaneInit, throw a TypeError.

  2. If planes is incompatible with the given format (e.g. wrong number of planes), throw a TypeError.

    The spec should list additional format-specific validation steps (e.g. number and order of planes, acceptable sizing, etc.). See #165.

  3. Let resource be a new media resource allocated in accordance with init.

    The spec should define explicit rules for each PixelFormat and reference them in the steps above. See #165.

    NOTE: The user agent may choose to allocate resource with a larger coded size and plane strides to improve memory alignment. Increases will be reflected by codedWidth, codedHeight, and stride.

  4. Let resourceReference be a reference to resource.

  5. Let frame be a new VideoFrame object initialized as follows:

    1. Assign resourceReference to [[resource reference]].

    2. Assign format to [[format]].

    3. Assign a new list to [[planes]].

    4. For each planeInit in planes:

      1. Copy planeInit.src to resource.

        NOTE: The user agent may use cropLeft and cropTop to copy only the crop region. It may also reposition the crop region within resource. The final position will be reflected by cropLeft and cropTop.

      2. Let plane be a new Plane initialized as follows:

        1. Assign frame to [[parent frame]].

        2. Let resourceStride be the stride of the plane corresponding to planeInit in resource.

          The spec should provide a definition (and possibly diagrams) for stride. See #166.

        3. Assign resourceStride to stride.

        4. Assign planeInit.rows to rows.

        5. Assign the product of (stride * rows) to length.

      3. Append plane to [[planes]].

    5. Let resourceCodedWidth be the coded width of resource.

    6. Let resourceCodedHeight be the coded height of resource.

    7. Let resourceCropLeft be the left offset of the crop origin of resource.

    8. Let resourceCropTop be the top offset of the crop origin of resource.

      The spec should provide definitions (and possibly diagrams) for coded size, crop size, and display size. See #166.

    9. Assign resourceCodedWidth, resourceCodedHeight, resourceCropLeft, and resourceCropTop to [[coded width]], [[coded height]], [[crop left]], and [[crop top]] respectively.

    10. If init.cropWidth exists, assign it to [[crop width]]. Otherwise, assign [[coded width]] to [[crop width]].

    11. If init.cropHeight exists, assign it to [[crop height]]. Otherwise, assign [[coded height]] to [[crop height]].

    12. If init.displayWidth exists, assign it to [[display width]]. Otherwise, assign [[crop width]] to [[display width]].

    13. If init.displayHeight exists, assign it to [[display height]]. Otherwise, assign [[crop height]] to [[display height]].

    14. Assign init’s timestamp and duration to [[timestamp]] and [[duration]] respectively.

  6. Return frame.

9.4.3. Attributes

format, of type PixelFormat, readonly

Describes the arrangement of bytes in each plane as well as the number and order of the planes.

The format getter steps are to return [[format]].

planes, of type FrozenArray<Plane>, readonly, nullable

Holds pixel data, laid out as described by format and Plane attributes.

The planes getter steps are to return [[planes]].

codedWidth, of type unsigned long, readonly

Width of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.

The codedWidth getter steps are to return [[coded width]].

codedHeight, of type unsigned long, readonly

Height of the VideoFrame in pixels, prior to any cropping or aspect ratio adjustments.

The codedHeight getter steps are to return [[coded height]].

cropLeft, of type unsigned long, readonly

The number of pixels to remove from the left of the VideoFrame, prior to aspect ratio adjustments.

The cropLeft getter steps are to return [[crop left]].

cropTop, of type unsigned long, readonly

The number of pixels to remove from the top of the VideoFrame, prior to aspect ratio adjustments.

The cropTop getter steps are to return [[crop top]].

cropWidth, of type unsigned long, readonly

The width of pixels to include in the crop, starting from cropLeft.

The cropWidth getter steps are to return [[crop width]].

cropHeight, of type unsigned long, readonly

The height of pixels to include in the crop, starting from cropTop.

The cropHeight getter steps are to return [[crop height]].

displayWidth, of type unsigned long, readonly

Width of the VideoFrame when displayed after applying aspect ratio adjustments.

The displayWidth getter steps are to return [[display width]].

displayHeight, of type unsigned long, readonly

Height of the VideoFrame when displayed after applying aspect ratio adjustments.

The displayHeight getter steps are to return [[display height]].

timestamp, of type long long, readonly, nullable

The presentation timestamp, given in microseconds. The timestamp is copied from the EncodedVideoChunk corresponding to this VideoFrame.

The timestamp getter steps are to return [[timestamp]].

duration, of type unsigned long long, readonly, nullable

The presentation duration, given in microseconds. The duration is copied from the EncodedVideoChunk corresponding to this VideoFrame.

The duration getter steps are to return [[duration]].

9.4.4. Methods

clone()

Creates a new VideoFrame with a reference to the same media resource.

When invoked, run these steps:

  1. If the value of frame’s [[detached]] internal slot is true, throw an InvalidStateError DOMException.

  2. Return the result of running the Clone VideoFrame algorithm with this.

close()

Clears all state and releases the reference to the media resource. Close is final.

When invoked, run these steps:

  1. Assign null to [[resource reference]].

  2. Assign true to [[detached]].

  3. Assign "" to format.

  4. Assign null to planes.

  5. Assign 0 to codedWidth, codedHeight, cropLeft, cropTop, cropWidth, cropHeight, displayWidth, and displayHeight.

  6. Assign null to duration and timestamp.

9.4.5. Algorithms

Create a VideoFrame (with output, timestamp, duration, displayAspectWidth, and displayAspectHeight)
  1. Let planes be a sequence of Planes containing the decoded video frame data from output.

  2. Let pixelFormat be the PixelFormat of planes.

  3. Let init be a VideoFramePlaneInit with the following keys:

    1. Assign timestamp to timestamp.

    2. Assign duration to duration.

    3. Let codedWidth and codedHeight be the width and height of the decoded video frame output in pixels, prior to any cropping or aspect ratio adjustments.

    4. Let cropLeft, cropTop, cropWidth, and cropHeight be the crop region of the decoded video frame output in pixels, prior to any aspect ratio adjustments.

    5. Let displayWidth and displayHeight be the display size of the decoded frame in pixels.

    6. If displayAspectWidth and displayAspectHeight are provided, increase displayWidth or displayHeight until the ratio of displayWidth to displayHeight matches the ratio of displayAspectWidth to displayAspectHeight.

    7. Assign the value of displayWidth and displayHeight to displayWidth and displayHeight respectively.

  4. Return a new VideoFrame, constructed with pixelFormat, planes, and init.

To check if a VideoFramePlaneInit is a valid VideoFramePlaneInit, run these steps:
  1. If codedWidth = 0 or codedHeight = 0, return false.

  2. If cropWidth = 0 or cropHeight = 0, return false.

  3. If cropTop + cropHeight >= codedHeight, return false.

  4. If cropLeft + cropWidth >= codedWidth, return false.

  5. If displayWidth = 0 or displayHeight = 0, return false.

  6. Return true.
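A non-normative JavaScript mirror of these steps, assuming the crop and display members of init have already been defaulted as described in the constructor above:

function isValidVideoFramePlaneInit(init) {
  if (init.codedWidth === 0 || init.codedHeight === 0) return false;
  if (init.cropWidth === 0 || init.cropHeight === 0) return false;
  if (init.cropTop + init.cropHeight >= init.codedHeight) return false;
  if (init.cropLeft + init.cropWidth >= init.codedWidth) return false;
  if (init.displayWidth === 0 || init.displayHeight === 0) return false;
  return true;
}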

Initialize Frame From Other Frame (with init, frame, and otherFrame)
  1. Let resource be the media resource referenced by otherFrame’s [[resource reference]].

  2. Assign a new reference for resource to frame’s [[resource reference]].

  3. Assign the following attributes from otherFrame to frame: format, codedWidth, codedHeight, cropLeft, cropTop, cropWidth, cropHeight, displayWidth, displayHeight.

  4. Let planes be a new list.

  5. For each otherPlane in otherFrame.planes:

    1. Let plane be a new Plane.

    2. Assign a reference for frame to plane’s [[parent frame]].

    3. Assign the following attributes from otherPlane to plane: stride, rows, length.

    4. Append plane to planes.

  6. Assign planes to frame.planes.

  7. If duration exists in init, assign it to frame.duration. Otherwise, assign otherFrame.duration to frame.duration.

  8. If timestamp exists in init, assign it to frame.timestamp. Otherwise, assign otherFrame.timestamp to frame.timestamp.

Initialize Frame With Resource and Size (with init, frame, resource, width and height)
  1. Assign a new reference for resource to frame’s [[resource reference]].

  2. If resource uses a recognized PixelFormat:

    1. Assign the PixelFormat of resource to format.

    2. Let planes be a list of Planes describing the media resource in accordance with the format.

      The spec should define explicit rules for each PixelFormat and reference them in the step above. See #165.

    3. Assign planes to frame.planes.

  3. Otherwise (resource does not use a recognized PixelFormat):

    1. Assign "" to format.

    2. Assign null to planes.

  4. Assign width to the following attributes of frame: codedWidth, cropWidth, displayWidth.

  5. Assign height to the following attributes of frame: codedHeight, cropHeight, displayHeight.

  6. Assign 0 to frame’s cropTop and cropLeft.

  7. Assign init.duration to frame.duration.

  8. Assign init.timestamp to frame.timestamp.

Clone VideoFrame (with frame)
  1. Let clone be a new VideoFrame initialized as follows:

    1. Assign frame.[[resource reference]] to [[resource reference]].

    2. Assign frame.format to format.

    3. Assign a new list to planes.

    4. For each plane in frame.planes:

      1. Let clonePlane be a new Plane initialized as follows:

        1. Assign clone to clonePlane.[[parent frame]].

        2. Assign plane.stride to stride.

        3. Assign plane.rows to rows.

        4. Assign plane.length to length.

      2. Append clonePlane to planes.

    5. Assign all remaining attributes of frame (codedWidth, codedHeight, etc.) to those of the same name in clone.

  2. Return clone.

9.5. Plane Interface

A Plane is solely constructed by its VideoFrame. During construction, the User Agent may use knowledge of the frame’s PixelFormat to add padding to the Plane to improve memory alignment.

A Plane cannot be used after the VideoFrame is destroyed. A new VideoFrame can be assembled from existing Planes, and the new VideoFrame will remain valid when the original is destroyed. This makes it possible to efficiently add an alpha plane to an existing VideoFrame.

[Exposed=(Window,DedicatedWorker)]
interface Plane {
  readonly attribute unsigned long stride;
  readonly attribute unsigned long rows;
  readonly attribute unsigned long length;

  undefined readInto(ArrayBufferView dst);
};

dictionary PlaneInit {
  required BufferSource src;
  [EnforceRange] required unsigned long stride;
  [EnforceRange] required unsigned long rows;
};

9.5.1. Internal Slots

[[parent frame]]
Refers to the VideoFrame that constructed and owns this plane.

9.5.2. Attributes

stride, of type unsigned long, readonly
The width of each row including any padding.
rows, of type unsigned long, readonly
The number of rows.
length, of type unsigned long, readonly
The total byte length of the plane (stride * rows).

9.5.3. Methods

readInto(dst)

Copies the plane data into dst.

When invoked, run these steps:

  1. If [[parent frame]] has been destroyed, throw an InvalidStateError.

  2. If length is greater than dst.byteLength, throw a TypeError.

  3. Let resource be the media resource referenced by [[parent frame]]'s [[resource reference]].

  4. Let plane bytes be the region of bytes in the media resource corresponding to this plane.

  5. Copy the plane bytes into dst.
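A non-normative sketch copying a frame's first plane out for inspection; frame is assumed to be an open VideoFrame with a recognized format:

const plane = frame.planes[0];
const bytes = new Uint8Array(plane.length);  // length is stride * rows
plane.readInto(bytes);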

9.6. Pixel Format

Pixel formats describe the arrangement of bytes in each plane as well as the number and order of the planes.

NOTE: This section needs work. We expect to add more pixel formats and offer much more verbose definitions. For now, please see http://www.fourcc.org/pixel-format/yuv-i420/ for a more complete description.

enum PixelFormat {
  "I420"
};
I420
Planar 4:2:0 YUV.

10. Image Decoding

10.1. Background

This section is non-normative.

Image codec definitions are typically accompanied by a definition for a corresponding file format. Hence image decoders often perform both duties of unpacking (demuxing) as well as decoding the encoded image data. The WebCodecs ImageDecoder follows this pattern, which motivates an interface design that is notably different from that of VideoDecoder and AudioDecoder.

In spite of these differences, ImageDecoder uses the same codec processing model as the other codec interfaces. Additionally, ImageDecoder uses the VideoFrame interface to describe decoded outputs.

10.2. ImageDecoder Interface

[Exposed=(Window,DedicatedWorker)]
interface ImageDecoder {
  constructor(ImageDecoderInit init);

  readonly attribute boolean complete;
  readonly attribute Promise<undefined> completed;
  readonly attribute ImageTrackList tracks;

  Promise<ImageDecodeResult> decode(optional ImageDecodeOptions options = {});
  undefined reset();
  undefined close();

  static Promise<boolean> isTypeSupported(DOMString type);
};

10.2.1. Internal Slots

[[ImageTrackList]]

An ImageTrackList describing the tracks found in [[encoded data]].

[[complete]]

A boolean indicating whether [[encoded data]] is completely buffered.

[[completed promise]]

The promise used to signal when [[complete]] becomes true.

[[codec implementation]]

An underlying image decoder implementation provided by the User Agent.

[[encoded data]]

A byte sequence containing the encoded image data to be decoded.

[[prefer animation]]

A boolean reflecting the value of preferAnimation given at construction.

[[pending decode promises]]

A list of unresolved promises returned by calls to decode().

[[internal selected track index]]

Identifies the image track within [[encoded data]] that is used by decoding algorithms on the codec thread.

[[tracks established]]

A boolean indicating whether the track list has been established in [[ImageTrackList]].

[[closed]]

A boolean indicating that the ImageDecoder is in a permanent closed state and can no longer be used.

[[progressive frame generations]]

A mapping of frame indices to Progressive Image Frame Generations. The values represent the Progressive Image Frame Generation for the VideoFrame which was most recently output by a call to decode() with the given frame index.

10.2.2. Constructor

ImageDecoder(init)

NOTE: Calling decode() on the constructed ImageDecoder will trigger a NotSupportedError if the user agent does not support type. Authors should first check support by calling isTypeSupported() with type. User agents are not required to support any particular type.

When invoked, run these steps:

  1. If init is not a valid ImageDecoderInit, throw a TypeError.

  2. Let d be a new ImageDecoder object. In the steps below, all mentions of ImageDecoder members apply to d unless stated otherwise.

  3. Assign a new ImageTrackList to [[ImageTrackList]], initialized as follows:

    1. Assign a new list to [[track list]].

    2. Assign -1 to [[selected index]].

  4. Assign null to [[codec implementation]].

  5. If init.preferAnimation exists, assign init.preferAnimation to the [[prefer animation]] internal slot. Otherwise, assign null to the [[prefer animation]] internal slot.

  6. Assign a new list to [[pending decode promises]].

  7. Assign -1 to [[internal selected track index]].

  8. Assign false to [[tracks established]].

  9. Assign false to [[closed]].

  10. Assign a new map to [[progressive frame generations]].

  11. If init’s data member is of type ReadableStream:

    1. Assign a new list to [[encoded data]].

    2. Assign false to complete.

    3. Queue a control message to configure the image decoder with init.

    4. Let reader be the result of getting a reader for data.

    5. In parallel, perform the Fetch Stream Data Loop on d with reader.

  12. Otherwise:

    1. Assert that init.data is of type BufferSource.

    2. Assign a copy of init.data to [[encoded data]].

    3. Assign true to complete.

    4. Resolve [[completed promise]].

    5. Queue a control message to configure the image decoder with init.

    6. Queue a control message to decode track metadata.

  13. Return d.

Running a control message to configure the image decoder means running these steps:

  1. Let supported be the result of running the Check Type Support algorithm with init.type.

  2. If supported is false, queue a task on the control thread event loop to run the Close ImageDecoder algorithm with a NotSupportedError DOMException and abort these steps.

  3. If supported is true, assign an implementation supporting init.type to the [[codec implementation]] internal slot.

  4. Configure [[codec implementation]] in accordance with the values given for premultiplyAlpha, colorSpaceConversion, desiredWidth, and desiredHeight.

Running a control message to decode track metadata means running these steps:

  1. Run the Establish Tracks algorithm.

10.2.3. Attributes

complete, of type boolean, readonly

Indicates whether [[encoded data]] is completely buffered.

The complete getter steps are to return [[complete]].

completed, of type Promise<undefined>, readonly

The promise used to signal when complete becomes true.

The completed getter steps are to return [[completed promise]].

tracks, of type ImageTrackList, readonly

Returns a live ImageTrackList, which provides metadata for the available tracks and a mechanism for selecting a track to decode.

The tracks getter steps are to return [[ImageTrackList]].

10.2.4. Methods

decode(options)

Enqueues a control message to decode the frame according to options.

When invoked, run these steps:

  1. If [[closed]] is true, return a Promise rejected with an InvalidStateError DOMException.

  2. If [[ImageTrackList]]'s [[selected index]] is -1, return a Promise rejected with an InvalidStateError DOMException.

  3. If options is undefined, assign a new ImageDecodeOptions to options.

  4. Let promise be a new Promise.

  5. Queue a control message to decode the image with options, and promise.

  6. Append promise to [[pending decode promises]].

  7. Return promise.

Running a control message to decode the image means running these steps:

  1. Wait for [[tracks established]] to become true.

  2. If options.completeFramesOnly is false and the image is a Progressive Image for which the user agent supports progressive decoding, run the Decode Progressive Frame algorithm with options.frameIndex and promise.

  3. Otherwise, run the Decode Complete Frame algorithm with options.frameIndex and promise.

reset()

Immediately aborts all pending work.

When invoked, run the Reset ImageDecoder algorithm with an AbortError DOMException.

close()

Immediately aborts all pending work and releases system resources. Close is final.

When invoked, run the Close ImageDecoder algorithm with an AbortError DOMException.

isTypeSupported(type)

Returns a promise indicating whether the provided type is supported by the user agent.

When invoked, run these steps:

  1. If type is not a valid image MIME type, return a Promise rejected with TypeError.

  2. Let p be a new Promise.

  3. In parallel, resolve p with the result of running the Check Type Support algorithm with type.

  4. Return p.
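A non-normative sketch (assuming an async context) that feature-detects a type before constructing a decoder; the fetched URL is hypothetical:

const type = 'image/gif';
if (await ImageDecoder.isTypeSupported(type)) {
  const response = await fetch('animated.gif');   // hypothetical resource
  const decoder = new ImageDecoder({ type, data: response.body });
  const result = await decoder.decode({ frameIndex: 0 });
  // result.image is a VideoFrame; close it when no longer needed.
  result.image.close();
}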

10.2.5. Algorithms

Fetch Stream Data Loop (with reader)

Run these steps:

  1. Let readRequest be the following read request.

    chunk steps, given chunk
    1. If [[closed]] is true, abort these steps.

    2. If chunk is not a Uint8Array object, queue a task on the control thread event loop to run the Close ImageDecoder algorithm with a DataError DOMException and abort these steps.

    3. Let bytes be the byte sequence represented by the Uint8Array object.

    4. Append bytes to the [[encoded data]] internal slot.

    5. If [[tracks established]] is false, run the Establish Tracks algorithm.

    6. Otherwise, run the Update Tracks algorithm.

    7. Run the Fetch Stream Data Loop algorithm with reader.

    close steps
    1. Assign true to complete.

    2. Resolve [[completed promise]].

    error steps
    1. Queue a task on the control thread event loop to run the Close ImageDecoder algorithm with a NotReadableError DOMException.

  2. Read a chunk from reader given readRequest.

Establish Tracks

Run these steps:

  1. Assert [[tracks established]] is false.

  2. If [[encoded data]] does not contain enough data to determine the number of tracks:

    1. If complete is true, queue a task on the control thread event loop to run the Close ImageDecoder algorithm.

    2. Abort these steps.

  3. If the number of tracks is found to be 0, queue a task on the control thread event loop to run the Close ImageDecoder algorithm and abort these steps.

  4. Let newTrackList be a new list.

  5. For each image track found in [[encoded data]]:

    1. Let newTrack be a new ImageTrack, initialized as follows:

      1. Assign this to [[ImageDecoder]].

      2. Assign tracks to [[ImageTrackList]].

      3. If image track is found to be animated, assign true to newTrack’s [[animated]] internal slot. Otherwise, assign false.

      4. If image track is found to describe a frame count, assign that count to newTrack’s [[frame count]] internal slot. Otherwise, assign 0.

        NOTE: If this was constructed with data as a ReadableStream, the frameCount may change as additional bytes are appended to [[encoded data]]. See the Update Tracks algorithm.

      5. If image track is found to describe a repetition count, assign that count to [[repetition count]] internal slot. Otherwise, assign 0.

        NOTE: A value of Infinity indicates infinite repetitions.

      6. Assign false to newTrack’s [[selected]] internal slot.

    2. Append newTrack to newTrackList.

  6. Let selectedTrackIndex be the result of running the Get Default Selected Track Index algorithm with newTrackList.

  7. Let selectedTrack be the track at position selectedTrackIndex within newTrackList.

  8. Assign true to selectedTrack’s [[selected]] internal slot.

  9. Assign selectedTrackIndex to [[internal selected track index]].

  10. Assign true to [[tracks established]].

  11. Queue a task on the control thread event loop to perform the following steps:

    1. Assign newTrackList to tracks' [[track list]] internal slot.

    2. Assign selectedTrackIndex to tracks' [[selected index]].

    3. Resolve [[ready promise]].

Get Default Selected Track Index (with trackList)

Run these steps:

  1. If [[encoded data]] identifies a Primary Image Track:

    1. Let primaryTrack be the ImageTrack from trackList that describes the Primary Image Track.

    2. Let primaryTrackIndex be the position of primaryTrack within trackList.

    3. If [[prefer animation]] is null, return primaryTrackIndex.

    4. If primaryTrack.animated equals [[prefer animation]], return primaryTrackIndex.

  2. If any ImageTracks in trackList have animated equal to [[prefer animation]], return the position of the earliest such track in trackList.

  3. Return 0.

Update Tracks

A track update struct is a struct that consists of a track index (unsigned long) and a frame count (unsigned long).

Run these steps:

  1. Assert [[tracks established]] is true.

  2. Let trackChanges be a new list.

  3. Let trackList be a copy of tracks' [[track list]].

  4. For each track in trackList:

    1. Let trackIndex be the position of track in trackList.

    2. Let latestFrameCount be the frame count as indicated by [[encoded data]] for the track corresponding to track.

    3. Assert that latestFrameCount is greater than or equal to track.frameCount.

    4. If latestFrameCount is greater than track.frameCount:

      1. Let change be a track update struct whose track index is trackIndex and frame count is latestFrameCount.

      2. Append change to trackChanges.

  5. If trackChanges is empty, abort these steps.

  6. Queue a task on the control thread event loop to perform the following steps:

    1. For each update in trackChanges:

      1. Let updateTrack be the ImageTrack at position update.trackIndex within tracks' [[track list]].

      2. Assign update.frameCount to updateTrack’s [[frame count]].

      3. Fire a simple event named change at the tracks object.

Decode Complete Frame (with frameIndex and promise)
  1. Assert that [[tracks established]] is true.

  2. Assert that [[internal selected track index]] is not -1.

  3. Let encodedFrame be the encoded frame identified by frameIndex and [[internal selected track index]].

  4. Wait for any of the following conditions to be true (whichever happens first):

    1. [[encoded data]] contains enough bytes to completely decode encodedFrame.

    2. [[encoded data]] is found to be malformed.

    3. complete is true.

    4. [[closed]] is true.

  5. If [[encoded data]] is found to be malformed, run the Fatally Reject Bad Data algorithm and abort these steps.

  6. If [[encoded data]] does not contain enough bytes to completely decode encodedFrame, run the Reject Infeasible Decode algorithm with promise and abort these steps.

  7. Attempt to use [[codec implementation]] to decode encodedFrame.

  8. If decoding produces an error, run the Fatally Reject Bad Data algorithm and abort these steps.

  9. If [[progressive frame generations]] contains an entry keyed by frameIndex, remove the entry from the map.

  10. Let output be the decoded image data emitted by [[codec implementation]] corresponding to encodedFrame.

  11. Let decodeResult be a new ImageDecodeResult initialized as follows:

    1. Assign true to complete.

    2. Let timestamp and duration be the presentation timestamp and duration for output as described by encodedFrame. If encodedFrame does not describe a timestamp or duration, assign null to the corresponding variable.

    3. Assign the result of running the Create a VideoFrame algorithm with output, timestamp, and duration to image.

  12. Run the Resolve Decode algorithm with promise and decodeResult.

Decode Progressive Frame (with frameIndex and promise)
  1. Assert that [[tracks established]] is true.

  2. Assert that [[internal selected track index]] is not -1.

  3. Let encodedFrame be the encoded frame identified by frameIndex and [[internal selected track index]].

  4. Let lastFrameGeneration be null.

  5. If [[progressive frame generations]] contains a map entry with the key frameIndex, assign the value of the map entry to lastFrameGeneration.

  6. Wait for any of the following conditions to be true (whichever happens first):

    1. [[encoded data]] contains enough bytes to decode encodedFrame to produce an output whose Progressive Image Frame Generation exceeds lastFrameGeneration.

    2. [[encoded data]] is found to be malformed.

    3. complete is true.

    4. [[closed]] is true.

  7. If [[encoded data]] is found to be malformed, run the Fatally Reject Bad Data algorithm and abort these steps.

  8. Otherwise, if [[encoded data]] does not contain enough bytes to decode encodedFrame to produce an output whose Progressive Image Frame Generation exceeds lastFrameGeneration, run the Reject Infeasible Decode algorithm with promise and abort these steps.

  9. Attempt to use [[codec implementation]] to decode encodedFrame.

  10. If decoding produces an error, run the Fatally Reject Bad Data algorithm and abort these steps.

  11. Let output be the decoded image data emitted by [[codec implementation]] corresponding to encodedFrame.

  12. Let decodeResult be a new ImageDecodeResult.

  13. If output is the final full-detail progressive output corresponding to encodedFrame:

    1. Assign true to decodeResult’s complete.

    2. If [[progressive frame generations]] contains an entry keyed by frameIndex, remove the entry from the map.

  14. Otherwise:

    1. Assign false to decodeResult’s complete.

    2. Let frameGeneration be the Progressive Image Frame Generation for output.

    3. Add a new entry to [[progressive frame generations]] with key frameIndex and value frameGeneration.

  15. Let timestamp and duration be the presentation timestamp and duration for output as described by encodedFrame. If encodedFrame does not describe a timestamp or duration, assign null to the corresponding variable.

  16. Assign the result of running the Create a VideoFrame algorithm with output, timestamp, and duration to decodeResult’s image.

  17. Remove promise from [[pending decode promises]].

  18. Resolve promise with decodeResult.

Resolve Decode (with promise and result)
  1. Queue a task on the control thread event loop to run these steps:

    1. If [[closed]], abort these steps.

    2. Assert that promise is an element of [[pending decode promises]].

    3. Remove promise from [[pending decode promises]].

    4. Resolve promise with result.

Reject Infeasible Decode (with promise)
  1. Assert that complete is true or [[closed]] is true.

  2. If complete is true, let exception be a RangeError. Otherwise, let exception be an InvalidStateError DOMException.

  3. Queue a task on the control thread event loop to run these steps:

    1. If [[closed]], abort these steps.

    2. Assert that promise is an element of [[pending decode promises]].

    3. Remove promise from [[pending decode promises]].

    4. Reject promise with exception.

Fatally Reject Bad Data
  1. Queue a task on the control thread event loop to run these steps:

    1. If [[closed]], abort these steps.

    2. Run the Close ImageDecoder algorithm with an EncodingError DOMException.

Check Type Support (with type)
  1. If the user agent can provide a codec to support decoding type, return true.

  2. Otherwise, return false.

Reset ImageDecoder (with exception)
  1. Signal [[codec implementation]] to abort any active decoding operation.

  2. For each decodePromise in [[pending decode promises]]:

    1. Reject decodePromise with exception.

    2. Remove decodePromise from [[pending decode promises]].

Close ImageDecoder (with exception)
  1. Run the Reset ImageDecoder algorithm with exception.

  2. Assign true to [[closed]].

  3. Clear [[codec implementation]] and release associated system resources.

  4. Remove all entries from [[ImageTrackList]].

  5. Assign -1 to [[ImageTrackList]]'s [[selected index]].

10.3. ImageDecoderInit Interface

typedef (BufferSource or ReadableStream) ImageBufferSource;
dictionary ImageDecoderInit {
  required DOMString type;
  required ImageBufferSource data;
  PremultiplyAlpha premultiplyAlpha = "default";
  ColorSpaceConversion colorSpaceConversion = "default";
  [EnforceRange] unsigned long desiredWidth;
  [EnforceRange] unsigned long desiredHeight;
  boolean preferAnimation;
};

To determine if an ImageDecoderInit is a valid ImageDecoderInit, run these steps:

  1. If type is not a valid image MIME type, return false.

  2. If data is of type ReadableStream and the ReadableStream is disturbed or locked, return false.

  3. If data is of type BufferSource:

    1. If the result of running IsDetachedBuffer (described in [ECMASCRIPT]) on data is true, return false.

    2. If data is empty, return false.

  4. If desiredWidth exists and desiredHeight does not exist, return false.

  5. If desiredHeight exists and desiredWidth does not exist, return false.

  6. Return true.

A valid image MIME type is a string that is a valid MIME type string and for which the type, per Section 3.1.1.1 of [RFC7231], is image.

type, of type DOMString

String containing the MIME type of the image file to be decoded.

data, of type ImageBufferSource

BufferSource or ReadableStream of bytes representing an encoded image file as described by type.

premultiplyAlpha, of type PremultiplyAlpha, defaulting to "default"

Controls whether decoded outputs' color channels are to be premultiplied by their alpha channel, as defined by premultiplyAlpha in ImageBitmapOptions.

colorSpaceConversion, of type ColorSpaceConversion, defaulting to "default"

Controls whether decoded outputs' color space is converted or ignored, as defined by colorSpaceConversion in ImageBitmapOptions.

desiredWidth, of type unsigned long

Indicates a desired width for decoded outputs. Implementation is best effort; decoding to a desired width may not be supported by all formats/decoders.

desiredHeight, of type unsigned long

Indicates a desired height for decoded outputs. Implementation is best effort; decoding to a desired height may not be supported by all formats/decoders.

preferAnimation, of type boolean

For images with multiple tracks, this indicates whether the initial track selection should prefer an animated track.

NOTE: See the Get Default Selected Track Index algorithm.
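A non-normative example (assuming an async context) of a fully buffered init that prefers an animated track; the resource name and MIME type are illustrative:

const data = await (await fetch('logo.png')).arrayBuffer();  // hypothetical resource
const decoder = new ImageDecoder({
  type: 'image/png',
  data,
  preferAnimation: true,
});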

10.4. ImageDecodeOptions Interface

dictionary ImageDecodeOptions {
  [EnforceRange] unsigned long frameIndex = 0;
  boolean completeFramesOnly = true;
};

frameIndex, of type unsigned long, defaulting to 0

The index of the frame to decode.

completeFramesOnly, of type boolean, defaulting to true

For Progressive Images, a value of false indicates that the decoder may output an image with reduced detail. Each subsequent call to decode() for the same frameIndex will resolve to produce an image with a higher Progressive Image Frame Generation (more image detail) than the previous call, until finally the full-detail image is produced.

If completeFramesOnly is assigned true, or if the image is not a Progressive Image, or if the user agent does not support progressive decoding for the given image type, calls to decode() will only resolve once the full detail image is decoded.

NOTE: For Progressive Images, setting completeFramesOnly to false may be used to offer users a preview of an image that is still being buffered from the network (via the data ReadableStream).

Upon decoding the full detail image, the ImageDecodeResult's complete will be set to true.
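A non-normative sketch of progressive decoding; decoder is an ImageDecoder and render() is a hypothetical consumer:

async function progressivePreview(decoder) {
  let complete = false;
  while (!complete) {
    const result = await decoder.decode({ frameIndex: 0, completeFramesOnly: false });
    render(result.image);  // hypothetical renderer
    result.image.close();
    complete = result.complete;
  }
}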

10.5. ImageDecodeResult Interface

dictionary ImageDecodeResult {
  required VideoFrame image;
  required boolean complete;
};

image, of type VideoFrame

The decoded image.

complete, of type boolean

Indicates whether image contains the final full-detail output.

NOTE: complete is always true when decode() is invoked with completeFramesOnly set to true.

10.6. ImageTrackList Interface

[Exposed=(Window,DedicatedWorker)]
interface ImageTrackList {
  getter ImageTrack (unsigned long index);

  readonly attribute Promise<undefined> ready;
  [EnforceRange] readonly attribute unsigned long length;
  [EnforceRange] readonly attribute long selectedIndex;
  readonly attribute ImageTrack? selectedTrack;
};

10.6.1. Internal Slots

[[ready promise]]

The promise used to signal when the ImageTrackList has been populated with ImageTracks.

NOTE: ImageTrack frameCount may receive subsequent updates until complete is true.

[[track list]]

The list of ImageTracks described by this ImageTrackList.

[[selected index]]

The index of the selected track in [[track list]]. A value of -1 indicates that no track is selected.

10.6.2. Attributes

ready, of type Promise<undefined>, readonly

The ready getter steps are to return the [[ready promise]].

length, of type unsigned long, readonly

The length getter steps are to return the length of [[track list]].

selectedIndex, of type long, readonly

The selectedIndex getter steps are to return [[selected index]].

selectedTrack, of type ImageTrack, readonly, nullable

The selectedTrack getter steps are:

  1. If [[selected index]] is -1, return null.

  2. Otherwise, return the ImageTrack from [[track list]] at the position indicated by [[selected index]].
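A non-normative sketch (assuming an async context and an ImageDecoder named decoder) that waits for the track list to be populated, then inspects the selection:

await decoder.tracks.ready;
const track = decoder.tracks.selectedTrack;
if (track !== null) {
  console.log(`animated: ${track.animated}, frames: ${track.frameCount}`);
}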

10.7. ImageTrack Interface

[Exposed=(Window,DedicatedWorker)]
interface ImageTrack : EventTarget {
  readonly attribute boolean animated;
  [EnforceRange] readonly attribute unsigned long frameCount;
  [EnforceRange] readonly attribute unrestricted float repetitionCount;
  attribute EventHandler onchange;
  attribute boolean selected;
};

10.7.1. Internal Slots

[[ImageDecoder]]

The ImageDecoder instance that constructed this ImageTrack.

[[ImageTrackList]]

The ImageTrackList instance that lists this ImageTrack.

[[animated]]

Indicates whether this track contains an animated image with multiple frames.

[[frame count]]

The number of frames in this track.

[[repetition count]]

The number of times the animation is intended to repeat.

[[selected]]

Indicates whether this track is selected for decoding.

10.7.2. Attributes

animated, of type boolean, readonly

The animated getter steps are to return the value of [[animated]].

NOTE: This attribute provides an early indication that frameCount will ultimately exceed 0 for images where the frameCount starts at 0 and later increments as new chunks of the ReadableStream data arrive.

frameCount, of type unsigned long, readonly

The frameCount getter steps are to return the value of [[frame count]].

repetitionCount, of type unrestricted float, readonly

The repetitionCount getter steps are to return the value of [[repetition count]].

onchange, of type EventHandler

An event handler IDL attribute whose event handler event type is change.

selected, of type boolean

The selected getter steps are to return the value of [[selected]].

The selected setter steps are:

  1. If [[ImageDecoder]]'s [[closed]] slot is true, abort these steps.

  2. Let newValue be the given value.

  3. If newValue equals [[selected]], abort these steps.

  4. Assign newValue to [[selected]].

  5. Let parentTrackList be [[ImageTrackList]].

  6. Let oldSelectedIndex be the value of parentTrackList's [[selected index]].

  7. If oldSelectedIndex is not -1:

    1. Let oldSelectedTrack be the ImageTrack in parentTrackList's [[track list]] at the position of oldSelectedIndex.

    2. Assign false to oldSelectedTrack's [[selected]].

  8. If newValue is true, let selectedIndex be the index of this ImageTrack within parentTrackList's [[track list]]. Otherwise, let selectedIndex be -1.

  9. Assign selectedIndex to parentTrackList's [[selected index]].

  10. Run the Reset ImageDecoder algorithm on [[ImageDecoder]].

  11. Queue a control message to [[ImageDecoder]]'s control message queue to update the internal selected track index with selectedIndex.

Running a control message to update the internal selected track index means running these steps:

  1. Assign selectedIndex to [[internal selected track index]].

  2. Remove all entries from [[progressive frame generations]].
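A non-normative sketch (assuming an async context and an ImageDecoder named decoder) selecting the first animated track, if any; per the setter steps above, the assignment resets the decoder and updates the internal selected track index:

await decoder.tracks.ready;
for (let i = 0; i < decoder.tracks.length; i++) {
  if (decoder.tracks[i].animated) {
    decoder.tracks[i].selected = true;
    break;
  }
}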

10.7.3. Event Summary

change

Fired at the ImageTrack when the frameCount is altered.

11. Security Considerations

The primary security impact is that features of this API make it easier for an attacker to exploit vulnerabilities in the underlying platform codecs. Additionally, new abilities to configure and control the codecs may allow for new exploits that rely on a specific configuration and/or sequence of control operations.

Platform codecs are historically an internal detail of APIs like HTMLMediaElement, [WEBAUDIO], and [WebRTC]. In this way, it has always been possible to attack the underlying codecs by using malformed media files/streams and invoking the various API control methods.

For example, you can send any stream to a decoder by first wrapping that stream in a media container (e.g. mp4) and setting that as the src of an HTMLMediaElement. You can then cause the underlying video decoder to be reset() by setting a new value for <video>.currentTime.

WebCodecs makes such attacks easier by exposing low level control when inputs are provided and direct access to invoke the codec control methods. This also affords attackers the ability to invoke sequences of control methods that were not previously possible via the higher level APIs.

User agents should mitigate this risk by extensively fuzzing their implementation with random inputs and control method invocations. Additionally, user agents are encouraged to isolate their underlying codecs in processes with restricted privileges (sandbox) as a barrier against successful exploits being able to read user data.

An additional concern is exposing the underlying codecs to input mutation race conditions. Specifically, it should not be possible for a site to mutate a codec input or output while the underlying codec may still be operating on that data. This concern is mitigated by ensuring that input and output interfaces are immutable.

12. Privacy Considerations

The primary privacy impact is an increased ability to fingerprint users by querying for different codec capabilities to establish a codec feature profile. Much of this profile is already exposed by existing APIs. Such profiles are very unlikely to be uniquely identifying, but may be used with other metrics to create a fingerprint.

An attacker may accumulate a codec feature profile by calling IsConfigSupported() methods with a number of different configuration dictionaries. Similarly, an attacker may attempt to configure() a codec with different configuration dictionaries and observe which configurations are accepted.

Attackers may also use existing APIs to establish much of the codec feature profile. For example, the [media-capabilities] decodingInfo() API describes what types of decoders are supported and its powerEfficient attribute may signal when a decoder uses hardware acceleration. Similarly, the [WebRTC] getCapabilities() API may be used to determine what types of encoders are supported and the getStats() API may be used to determine when an encoder uses hardware acceleration. WebCodecs will expose some additional information in the form of low level codec features.

A codec feature profile alone is unlikely to be uniquely identifying. Underlying codecs are often implemented entirely in software (be it part of the user agent binary or part of the operating system), such that all users who run that software share a common set of capabilities. Additionally, underlying codecs are often implemented with hardware acceleration, but such hardware is mass produced, and devices of a particular class and manufacture date (e.g. flagship phones manufactured in 2020) will often have common capabilities. There will be outliers (some users may run outdated versions of software codecs or use a rare mix of custom-assembled hardware), but most of the time a given codec feature profile is shared by a large group of users.

Segmenting users by codec feature profile still contributes bits of entropy that can be combined with other metrics to uniquely identify a user. User agents may partially mitigate this by returning an error whenever a site attempts to exhaustively probe for codec capabilities. Additionally, user agents may implement a "privacy budget" that depletes as authors use WebCodecs and other identifying APIs. Upon exhaustion of the privacy budget, codec capabilities could be reduced to a common baseline, or the user agent could prompt for user approval.

References

Normative References

[CSS-IMAGES-3]
Tab Atkins Jr.; Elika Etemad; Lea Verou. CSS Images Module Level 3. 17 December 2020. CR. URL: https://www.w3.org/TR/css-images-3/
[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[MEDIA-CAPABILITIES]
Mounir Lamouri; Chris Cunningham; Vi Nguyen. Media Capabilities. 16 March 2021. WD. URL: https://www.w3.org/TR/media-capabilities/
[MIMESNIFF]
Gordon P. Hemsley. MIME Sniffing Standard. Living Standard. URL: https://mimesniff.spec.whatwg.org/
[STREAMS]
Adam Rice; Domenic Denicola; 吉野剛史 (Takeshi Yoshino). Streams Standard. Living Standard. URL: https://streams.spec.whatwg.org/
[SVG2]
Amelia Bellamy-Royds; et al. Scalable Vector Graphics (SVG) 2. 4 October 2018. CR. URL: https://www.w3.org/TR/SVG2/
[WebIDL]
Boris Zbarsky. Web IDL. 15 December 2016. ED. URL: https://heycam.github.io/webidl/
[WebRTC]
Cullen Jennings; Henrik Boström; Jan-Ivar Bruaroey. WebRTC 1.0: Real-Time Communication Between Browsers. 26 January 2021. REC. URL: https://www.w3.org/TR/webrtc/
[WebRTC-SVC]
Bernard Aboba; Peter Thatcher. Scalable Video Coding (SVC) Extension for WebRTC. 15 March 2021. WD. URL: https://www.w3.org/TR/webrtc-svc/
[WEBXR-LAYERS-1]
WebXR Layers API Level 1 URL: https://immersive-web.github.io/layers/

Informative References

[MEDIA-SOURCE]
Matthew Wolenetz; et al. Media Source Extensions™. 17 November 2016. REC. URL: https://www.w3.org/TR/media-source/
[RFC6381]
R. Gellens; D. Singer; P. Frojdh. The 'Codecs' and 'Profiles' Parameters for "Bucket" Media Types. August 2011. Proposed Standard. URL: https://datatracker.ietf.org/doc/html/rfc6381
[RFC7231]
R. Fielding, Ed.; J. Reschke, Ed. Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content. June 2014. Proposed Standard. URL: https://httpwg.org/specs/rfc7231.html
[WEBAUDIO]
Paul Adenot; Hongchan Choi. Web Audio API. 6 May 2021. PR. URL: https://www.w3.org/TR/webaudio/
[WEBCODECS-CODEC-REGISTRY]
Chris Cunningham; Paul Adenot; Bernard Aboba. WebCodecs Codec Registry. 3 June 2021. WD. URL: https://www.w3.org/TR/webcodecs-codec-registry/

IDL Index

[Exposed=(Window,DedicatedWorker)]
interface AudioDecoder {
  constructor(AudioDecoderInit init);

  readonly attribute CodecState state;
  readonly attribute long decodeQueueSize;

  undefined configure(AudioDecoderConfig config);
  undefined decode(EncodedAudioChunk chunk);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<AudioDecoderSupport> isConfigSupported(AudioDecoderConfig config);
};

dictionary AudioDecoderInit {
  required AudioDataOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback AudioDataOutputCallback = undefined(AudioData output);

[Exposed=(Window,DedicatedWorker)]
interface VideoDecoder {
  constructor(VideoDecoderInit init);

  readonly attribute CodecState state;
  readonly attribute long decodeQueueSize;

  undefined configure(VideoDecoderConfig config);
  undefined decode(EncodedVideoChunk chunk);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<VideoDecoderSupport> isConfigSupported(VideoDecoderConfig config);
};

dictionary VideoDecoderInit {
  required VideoFrameOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback VideoFrameOutputCallback = undefined(VideoFrame output);

[Exposed=(Window,DedicatedWorker)]
interface AudioEncoder {
  constructor(AudioEncoderInit init);

  readonly attribute CodecState state;
  readonly attribute long encodeQueueSize;

  undefined configure(AudioEncoderConfig config);
  undefined encode(AudioData data);
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<AudioEncoderSupport> isConfigSupported(AudioEncoderConfig config);
};

dictionary AudioEncoderInit {
  required EncodedAudioChunkOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback EncodedAudioChunkOutputCallback =
    undefined (EncodedAudioChunk output,
               optional EncodedAudioChunkMetadata metadata = {});

dictionary EncodedAudioChunkMetadata {
  AudioDecoderConfig decoderConfig;
};

[Exposed=(Window,DedicatedWorker)]
interface VideoEncoder {
  constructor(VideoEncoderInit init);

  readonly attribute CodecState state;
  readonly attribute long encodeQueueSize;

  undefined configure(VideoEncoderConfig config);
  undefined encode(VideoFrame frame, optional VideoEncoderEncodeOptions options = {});
  Promise<undefined> flush();
  undefined reset();
  undefined close();

  static Promise<VideoEncoderSupport> isConfigSupported(VideoEncoderConfig config);
};

dictionary VideoEncoderInit {
  required EncodedVideoChunkOutputCallback output;
  required WebCodecsErrorCallback error;
};

callback EncodedVideoChunkOutputCallback =
    undefined (EncodedVideoChunk chunk,
               optional EncodedVideoChunkMetadata metadata = {});

dictionary EncodedVideoChunkMetadata {
  VideoDecoderConfig decoderConfig;
  unsigned long temporalLayerId;
};

dictionary AudioDecoderSupport {
  boolean supported;
  AudioDecoderConfig config;
};

dictionary VideoDecoderSupport {
  boolean supported;
  VideoDecoderConfig config;
};

dictionary AudioEncoderSupport {
  boolean supported;
  AudioEncoderConfig config;
};

dictionary VideoEncoderSupport {
  boolean supported;
  VideoEncoderConfig config;
};

dictionary AudioDecoderConfig {
  required DOMString codec;
  [EnforceRange] required unsigned long sampleRate;
  [EnforceRange] required unsigned long numberOfChannels;
  BufferSource description;
};

dictionary VideoDecoderConfig {
  required DOMString codec;
  BufferSource description;
  [EnforceRange] unsigned long codedWidth;
  [EnforceRange] unsigned long codedHeight;
  [EnforceRange] unsigned long displayAspectWidth;
  [EnforceRange] unsigned long displayAspectHeight;
  HardwareAcceleration hardwareAcceleration = "allow";
};

dictionary AudioEncoderConfig {
  required DOMString codec;
  [EnforceRange] unsigned long sampleRate;
  [EnforceRange] unsigned long numberOfChannels;
  [EnforceRange] unsigned long long bitrate;
};

dictionary VideoEncoderConfig {
  required DOMString codec;
  [EnforceRange] unsigned long long bitrate;
  [EnforceRange] required unsigned long width;
  [EnforceRange] required unsigned long height;
  [EnforceRange] unsigned long displayWidth;
  [EnforceRange] unsigned long displayHeight;
  HardwareAcceleration hardwareAcceleration = "allow";
  DOMString scalabilityMode;
};

enum HardwareAcceleration {
  "allow",
  "deny",
  "require",
};

dictionary VideoEncoderEncodeOptions {
  boolean keyFrame = false;
};

enum CodecState {
  "unconfigured",
  "configured",
  "closed"
};

callback WebCodecsErrorCallback = undefined(DOMException error);

[Exposed=(Window,DedicatedWorker)]
interface EncodedAudioChunk {
  constructor(EncodedAudioChunkInit init);
  readonly attribute EncodedAudioChunkType type;
  readonly attribute long long timestamp;    // microseconds
  readonly attribute unsigned long byteLength;

  undefined copyTo([AllowShared] BufferSource destination);
};

dictionary EncodedAudioChunkInit {
  required EncodedAudioChunkType type;
  [EnforceRange] required long long timestamp;    // microseconds
  required BufferSource data;
};

enum EncodedAudioChunkType {
    "key",
    "delta",
};

[Exposed=(Window,DedicatedWorker)]
interface EncodedVideoChunk {
  constructor(EncodedVideoChunkInit init);
  readonly attribute EncodedVideoChunkType type;
  readonly attribute long long timestamp;             // microseconds
  readonly attribute unsigned long long? duration;    // microseconds
  readonly attribute unsigned long byteLength;

  undefined copyTo([AllowShared] BufferSource destination);
};

dictionary EncodedVideoChunkInit {
  required EncodedVideoChunkType type;
  [EnforceRange] required long long timestamp;        // microseconds
  [EnforceRange] unsigned long long duration;         // microseconds
  required BufferSource data;
};

enum EncodedVideoChunkType {
    "key",
    "delta",
};

[Exposed=(Window,DedicatedWorker)]
interface AudioData {
  constructor(AudioDataInit init);

  readonly attribute AudioSampleFormat format;
  readonly attribute float sampleRate;
  readonly attribute unsigned long numberOfFrames;
  readonly attribute unsigned long numberOfChannels;
  readonly attribute unsigned long long duration;  // microseconds
  readonly attribute long long timestamp;          // microseconds

  unsigned long allocationSize(AudioDataCopyToOptions options);
  undefined copyTo([AllowShared] BufferSource destination, AudioDataCopyToOptions options);
  AudioData clone();
  undefined close();
};

dictionary AudioDataInit {
  required AudioSampleFormat format;
  required float sampleRate;
  [EnforceRange] required unsigned long numberOfFrames;
  [EnforceRange] required unsigned long numberOfChannels;
  [EnforceRange] required long long timestamp;  // microseconds
  required BufferSource data;
};

dictionary AudioDataCopyToOptions {
  required unsigned long planeIndex;
  unsigned long frameOffset = 0;
  unsigned long frameCount;
};

enum AudioSampleFormat {
  "U8",
  "S16",
  "S24",
  "S32",
  "FLT",
  "U8P",
  "S16P",
  "S24P",
  "S32P",
  "FLTP",
};

[Exposed=(Window,DedicatedWorker)]
interface VideoFrame {
  constructor(CanvasImageSource image, optional VideoFrameInit init = {});
  constructor(sequence<(Plane or PlaneInit)> planes,
              VideoFramePlaneInit init);

  readonly attribute PixelFormat format;
  readonly attribute FrozenArray<Plane>? planes;
  readonly attribute unsigned long codedWidth;
  readonly attribute unsigned long codedHeight;
  readonly attribute unsigned long cropLeft;
  readonly attribute unsigned long cropTop;
  readonly attribute unsigned long cropWidth;
  readonly attribute unsigned long cropHeight;
  readonly attribute unsigned long displayWidth;
  readonly attribute unsigned long displayHeight;
  readonly attribute unsigned long long? duration;  // microseconds
  readonly attribute long long? timestamp;          // microseconds

  VideoFrame clone();
  undefined close();
};

dictionary VideoFrameInit {
  unsigned long long duration;  // microseconds
  long long timestamp;          // microseconds
};

dictionary VideoFramePlaneInit {
  required PixelFormat format;
  [EnforceRange] required unsigned long codedWidth;
  [EnforceRange] required unsigned long codedHeight;
  [EnforceRange] unsigned long cropLeft;
  [EnforceRange] unsigned long cropTop;
  [EnforceRange] unsigned long cropWidth;
  [EnforceRange] unsigned long cropHeight;
  [EnforceRange] unsigned long displayWidth;
  [EnforceRange] unsigned long displayHeight;
  [EnforceRange] unsigned long long duration;  // microseconds
  [EnforceRange] long long timestamp;          // microseconds
};

[Exposed=(Window,DedicatedWorker)]
interface Plane {
  readonly attribute unsigned long stride;
  readonly attribute unsigned long rows;
  readonly attribute unsigned long length;

  undefined readInto(ArrayBufferView dst);
};

dictionary PlaneInit {
  required BufferSource src;
  [EnforceRange] required unsigned long stride;
  [EnforceRange] required unsigned long rows;
};

enum PixelFormat {
  "I420"
};

[Exposed=(Window,DedicatedWorker)]
interface ImageDecoder {
  constructor(ImageDecoderInit init);

  readonly attribute boolean complete;
  readonly attribute Promise<undefined> completed;
  readonly attribute ImageTrackList tracks;

  Promise<ImageDecodeResult> decode(optional ImageDecodeOptions options = {});
  undefined reset();
  undefined close();

  static Promise<boolean> isTypeSupported(DOMString type);
};

typedef (BufferSource or ReadableStream) ImageBufferSource;

dictionary ImageDecoderInit {
  required DOMString type;
  required ImageBufferSource data;
  PremultiplyAlpha premultiplyAlpha = "default";
  ColorSpaceConversion colorSpaceConversion = "default";
  [EnforceRange] unsigned long desiredWidth;
  [EnforceRange] unsigned long desiredHeight;
  boolean preferAnimation;
};

dictionary ImageDecodeOptions {
  [EnforceRange] unsigned long frameIndex = 0;
  boolean completeFramesOnly = true;
};

dictionary ImageDecodeResult {
  required VideoFrame image;
  required boolean complete;
};

[Exposed=(Window,DedicatedWorker)]
interface ImageTrackList {
  getter ImageTrack (unsigned long index);

  readonly attribute Promise<undefined> ready;
  readonly attribute unsigned long length;
  readonly attribute long selectedIndex;
  readonly attribute ImageTrack? selectedTrack;
};

[Exposed=(Window,DedicatedWorker)]
interface ImageTrack : EventTarget {
  readonly attribute boolean animated;
  readonly attribute unsigned long frameCount;
  readonly attribute unrestricted float repetitionCount;
  attribute EventHandler onchange;
  attribute boolean selected;
};

Issues Index

The spec should list additional format-specific validation steps (e.g. number and order of planes, acceptable sizing, etc.). See #165.
The spec should define explicit rules for each PixelFormat and reference them in the steps above. See #165.
The spec should provide a definition (and possibly diagrams) for stride. See #166.
The spec should provide definitions (and possibly diagrams) for coded size, crop size, and display size. See #166.
The spec should define explicit rules for each PixelFormat and reference them in the step above. See #165.