WebRTC Encoded Transform

W3C Working Draft, 7 December 2021

This version:
https://www.w3.org/TR/2021/WD-webrtc-encoded-transform-20211207/
Latest published version:
https://www.w3.org/TR/webrtc-encoded-transform/
Editor's Draft:
https://w3c.github.io/webrtc-encoded-transform/
History:
https://www.w3.org/standards/history/webrtc-encoded-transform
Feedback:
public-webrtc@w3.org with subject line “[webrtc-encoded-transform] … message topic …” (archives), or via GitHub
Editors:
(Google)
(Google)
(Apple)

Abstract

This specification defines an API surface for manipulating the bits on MediaStreamTracks being sent via an RTCPeerConnection.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Web Real-Time Communications Working Group as a Working Draft using the Recommendation track. This document is intended to become a W3C Recommendation.

If you wish to make comments regarding this document, please send them to public-webrtc@w3.org (subscribe, archives). When sending e-mail, please put the text “webrtc-encoded-transform” in the subject, preferably like this: “[webrtc-encoded-transform] …summary of comment…”. All comments are welcome.

Publication as a Working Draft does not imply endorsement by W3C and its Members. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 2 November 2021 W3C Process Document.

1. Introduction

The [WEBRTC-NV-USE-CASES] document describes several functions that can only be achieved by access to media (requirements N20-N22).

These use cases further require that processing can be done in worker threads (requirements N23-N24).

Furthermore, the "trusted JavaScript cloud conferencing" use case requires such processing to be done on encoded media, not just the raw media.

This specification gives an interface inspired by [WEB-CODECS] to provide access to such functionality while retaining the setup flow of RTCPeerConnection.

This iteration of the specification provides access to encoded media, which is the output of the encoder part of a codec and the input to the decoder part of a codec.

2. Terminology

3. Specification

The Streams definition doesn’t use WebIDL much, but the WebRTC spec does. This specification shows the IDL extensions for WebRTC.

It uses an additional API on RTCRtpSender and RTCRtpReceiver to insert the processing into the pipeline.

// New dictionary
dictionary RTCInsertableStreams {
    ReadableStream readable;
    WritableStream writable;
};

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};
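For example, a transform can be attached to a sender or receiver by assigning the transform attribute, and detached again by assigning null. The following non-normative sketch assumes an existing RTCRtpSender named sender; the SFrameTransform and RTCRtpScriptTransform constructors are defined in § 4 and § 5.

// Attach an SFrame encryption transform to an existing sender (sketch).
sender.transform = new SFrameTransform({ role: "encrypt" });

// Later, remove the transform; encoded frames then flow through unmodified.
sender.transform = null;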

3.1. Extension operation

At the time when a codec is initialized as part of the encoder, and the corresponding flag is set in the RTCPeerConnection's RTCConfiguration argument, ensure that the codec is disabled and produces no output.

3.1.1. Stream creation

At construction of each RTCRtpSender or RTCRtpReceiver, run the following steps:

  1. Initialize this.[[transform]] to null.

  2. Initialize this.[[readable]] to a new ReadableStream.

  3. Set up this.[[readable]]. this.[[readable]] is provided frames using the readEncodedData algorithm given this as parameter.

  4. Set this.[[readable]].[[owner]] to this.

  5. Initialize this.[[writable]] to a new WritableStream.

  6. Set up this.[[writable]] with its writeAlgorithm set to writeEncodedData given this as parameter and its sizeAlgorithm to an algorithm that returns 0.

    Chunk size is set to 0 to explicitly disable streams backpressure on the write side.

  7. Set this.[[writable]].[[owner]] to this.

  8. Initialize this.[[pipeToController]] to null.

  9. Initialize this.[[lastReceivedFrameCounter]] to 0.

  10. Initialize this.[[lastEnqueuedFrameCounter]] to 0.

  11. Queue a task to run the following steps:

    1. If this.[[pipeToController]] is not null, abort these steps.

    2. Set this.[[pipeToController]] to a new AbortController.

    3. Call pipeTo with this.[[readable]], this.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to true and this.[[pipeToController]].signal.
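The net effect of these steps is that, until a transform is attached, encoded frames flow from the internal readable directly into the internal writable. The following non-normative sketch illustrates the equivalent Streams wiring, where readable and writable stand for the [[readable]] and [[writable]] internal slots and the AbortController corresponds to [[pipeToController]]:

// Illustrative only: the default pipe the user agent establishes internally.
const pipeToController = new AbortController();
readable.pipeTo(writable, {
  preventClose: true,
  preventAbort: true,
  preventCancel: true,
  signal: pipeToController.signal
});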

3.1.2. Stream processing

The readEncodedData algorithm is given an rtcObject as parameter. It is defined by running the following steps:

  1. Wait for a frame to be produced by rtcObject’s encoder if it is a RTCRtpSender or rtcObject’s depacketizer if it is a RTCRtpReceiver.

  2. Increment rtcObject.[[lastEnqueuedFrameCounter]] by 1.

  3. Let frame be the newly produced frame.

  4. Set frame.[[owner]] to rtcObject.

  5. Set frame.[[counter]] to rtcObject.[[lastEnqueuedFrameCounter]].

  6. Enqueue frame in rtcObject.[[readable]].

The writeEncodedData algorithm is given an rtcObject as parameter and a frame as input. It is defined by running the following steps:

  1. If frame.[[owner]] is not equal to rtcObject, abort these steps and return a promise resolved with undefined. A processor cannot create frames, or move frames between streams.

  2. If frame.[[counter]] is equal to or smaller than rtcObject.[[lastReceivedFrameCounter]], abort these steps and return a promise resolved with undefined. A processor cannot reorder frames, although it may delay them or drop them.

  3. Set rtcObject.[[lastReceivedFrameCounter]] to frame.[[counter]].

  4. Enqueue the frame for processing as if it came directly from the encoded data source, by running one of the following steps:

    1. If rtcObject is a RTCRtpSender, enqueue frame to rtcObject’s packetizer, to be processed in parallel.

    2. If rtcObject is a RTCRtpReceiver, enqueue frame to rtcObject’s decoder, to be processed in parallel.

  5. Return a promise resolved with undefined.

On the sender side, as part of readEncodedData, frames produced by rtcObject’s encoder MUST be enqueued in rtcObject.[[readable]] in the encoder’s output order. As writeEncodedData ensures that the transform cannot reorder frames, the encoder’s output order is also the order followed by the packetizer to generate RTP packets and assign RTP packet sequence numbers.

On the receiver side, as part of readEncodedData, frames produced by rtcObject’s depacketizer MUST be enqueued in rtcObject.[[readable]] in the original encoder’s output order. To ensure the order is respected, the depacketizer will typically use RTP packet sequence numbers to reorder RTP packets as needed before enqueuing frames in rtcObject.[[readable]]. As writeEncodedData ensures that the transform cannot reorder frames, this will be the order expected by rtcObject’s decoder.
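To illustrate these rules, a transform placed between [[readable]] and [[writable]] (for instance via the worker-side API of § 5) may delay or drop frames, but any frame it forwards must be enqueued in the order it was received. The following non-normative sketch shows a TransformStream body that drops frames whose type is "empty" while forwarding everything else in order; the dropping criterion is purely illustrative:

// Illustrative transform body: frames may be dropped, but never reordered.
const maybeDropFrames = new TransformStream({
  transform(frame, controller) {
    // Only video frames expose a type; audio frames pass through unconditionally.
    if (frame.type === "empty") {
      return; // dropping a frame is allowed
    }
    controller.enqueue(frame); // forwarding preserves arrival order
  }
});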

3.2. Extension attribute

A RTCRtpTransform has two private slots called [[readable]] and [[writable]].

The transform getter steps are:

  1. Return this.[[transform]].

The transform setter steps are:

  1. Let transform be the argument to the setter.

  2. Let checkedTransform be transform if it is not null, or an identity transform stream otherwise.

  3. Let reader be the result of getting a reader for checkedTransform.[[readable]].

  4. Let writer be the result of getting a writer for checkedTransform.[[writable]].

  5. Initialize newPipeToController to a new AbortController.

  6. If this.[[pipeToController]] is not null, run the following steps:

    1. Add the chain transform algorithm steps to this.[[pipeToController]].signal as abort steps.

    2. Signal abort on this.[[pipeToController]].signal.

  7. Else, run the chain transform algorithm steps.

  8. Set this.[[pipeToController]] to newPipeToController.

  9. Set this.[[transform]] to transform.

The chain transform algorithm steps are defined as:

  1. If newPipeToController is aborted, abort these steps.

  2. Release reader.

  3. Release writer.

  4. Assert that newPipeToController is the same object as rtcObject.[[pipeToController]].

  5. Call pipeTo with rtcObject.[[readable]], checkedTransform.[[writable]], preventClose equal to false, preventAbort equal to false, preventCancel equal to true and newPipeToController.signal.

  6. Call pipeTo with checkedTransform.[[readable]], rtcObject.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to false and newPipeToController.signal.

This algorithm is defined so that transforms can be updated dynamically. There is no guarantee as to which frame the switch from the previous transform to the new transform will happen on.

If a web application sets the transform synchronously at creation of the RTCRtpSender (for instance when calling addTrack), the transform will receive the first frame generated by the RTCRtpSender's encoder. Similarly, if a web application sets the transform synchronously at creation of the RTCRtpReceiver (for instance when calling addTrack, or in a track event handler), the transform will receive the first full frame generated by the RTCRtpReceiver's depacketizer.
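For example, the following non-normative sketch sets the transforms at the points described above, so that the very first outgoing and incoming frames are already processed ("transform-worker.js" and the option values are illustrative assumptions):

// Sketch (in a module or other async context).
const worker = new Worker("transform-worker.js");
const pc = new RTCPeerConnection();

// Sender: assign the transform in the same task as addTrack so the first
// encoded frame produced by the encoder goes through it.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const sender = pc.addTrack(stream.getVideoTracks()[0], stream);
sender.transform = new RTCRtpScriptTransform(worker, { side: "send" });

// Receiver: assign the transform synchronously in the track event handler so
// the first full frame from the depacketizer goes through it.
pc.ontrack = (event) => {
  event.receiver.transform = new RTCRtpScriptTransform(worker, { side: "receive" });
};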

4. SFrameTransform

The API presented in this section represents a preliminary proposal based on protocol proposals that have not yet been adopted by an IETF WG. As a result, both the API and underlying protocol are likely to change significantly going forward.

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};

The new SFrameTransform(options) constructor steps are:

  1. Let transformAlgorithm be an algorithm which takes a frame as input and runs the SFrame transform algorithm with this and frame.

  2. Set this.[[transform]] to a new TransformStream.

  3. Set up this.[[transform]] with transformAlgorithm set to transformAlgorithm.

  4. Let options be the method’s first argument.

  5. Set this.[[role]] to options["role"].

  6. Set this.[[readable]] to this.[[transform]].[[readable]].

  7. Set this.[[writable]] to this.[[transform]].[[writable]].

4.1. Algorithm

The SFrame transform algorithm, given sframe as a SFrameTransform object and frame, runs these steps:

  1. Let role be sframe.[[role]].

  2. If frame.[[owner]] is a RTCRtpSender, set role to 'encrypt'.

  3. If frame.[[owner]] is a RTCRtpReceiver, set role to 'decrypt'.

  4. Let data be undefined.

  5. If frame is a BufferSource, set data to frame.

  6. If frame is a RTCEncodedAudioFrame, set data to frame.data

  7. If frame is a RTCEncodedVideoFrame, set data to frame.data

  8. If data is undefined, abort these steps.

  9. Let buffer be the result of running the SFrame algorithm with data and role as parameters. This algorithm is defined by the SFrame specification and returns an ArrayBuffer.

  10. If the SFrame algorithm exits abruptly with an error, queue a task to run the following sub steps:

    1. If the processing fails on decryption side due to data not following the SFrame format, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to syntax and its frame attribute set to frame.

    2. If the processing fails on decryption side due to the key identifier parsed in data being unknown, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to keyID, its frame attribute set to frame and its keyID attribute set to the keyID value parsed in the SFrame header.

    3. If the processing fails on decryption side due to validation of the authentication tag, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to authentication and its frame attribute set to frame.

    4. Abort these steps.

  11. If frame is a BufferSource, set frame to buffer.

  12. If frame is a RTCEncodedAudioFrame, set frame.data to buffer.

  13. If frame is a RTCEncodedVideoFrame, set frame.data to buffer.

  14. Enqueue frame in sframe.[[transform]].
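For example, a receiving application can react to these error events, typically by provisioning the missing key when a keyID error is reported. In the following sketch, fetchKeyFor() is a hypothetical application-defined helper and receiver is an existing RTCRtpReceiver:

const sframeTransform = new SFrameTransform({ role: "decrypt" });
receiver.transform = sframeTransform;

sframeTransform.onerror = async (event) => {
  if (event.errorType === "keyID") {
    // The incoming frame referenced a key this endpoint does not know yet.
    const key = await fetchKeyFor(event.keyID); // hypothetical application helper
    await sframeTransform.setEncryptionKey(key, event.keyID);
  } else {
    // "authentication" or "syntax": the frame could not be decrypted.
    console.warn("SFrame decryption failed:", event.errorType);
  }
};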

4.2. Methods

The setEncryptionKey(key, keyID) method steps are:
  1. Let promise be a new promise.

  2. If keyID is a bigint which cannot be represented as an integer between 0 and 2^64 − 1 inclusive, reject promise with a RangeError exception.

  3. Otherwise, in parallel, run the following steps:

    1. Set key with its optional keyID as key material to use for the SFrame transform algorithm, as defined by the SFrame specification.

    2. If setting the key material fails, reject promise with an InvalidModificationError exception and abort these steps.

    3. Resolve promise with undefined.

  4. Return promise.
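For instance, key material imported through the Web Cryptography API can be installed on the transform as follows. This is a sketch: the raw key bytes are placeholders, and the SFrame specification governs which key types and algorithms are acceptable.

// Sketch: import placeholder key material and install it with key ID 1.
const rawKey = new Uint8Array(16); // application-provided key bytes
const key = await crypto.subtle.importKey("raw", rawKey, "HKDF", false, ["deriveBits"]);

const sframeTransform = new SFrameTransform({ role: "encrypt" });
try {
  await sframeTransform.setEncryptionKey(key, 1);
} catch (e) {
  // RangeError if the keyID does not fit in 64 bits,
  // InvalidModificationError if the key material is rejected.
  console.error("Could not set SFrame key:", e);
}
sender.transform = sframeTransform; // sender is an existing RTCRtpSender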

5. RTCRtpScriptTransform

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};

dictionary RTCEncodedVideoFrameMetadata {
    long long frameId;
    sequence<long long> dependencies;
    unsigned short width;
    unsigned short height;
    long spatialIndex;
    long temporalIndex;
    long synchronizationSource;
    sequence<long> contributingSources;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedVideoFrame {
    readonly attribute RTCEncodedVideoFrameType type;
    readonly attribute unsigned long long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

dictionary RTCEncodedAudioFrameMetadata {
    long synchronizationSource;
    sequence<long> contributingSources;
};

[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedAudioFrame {
    readonly attribute unsigned long long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};


// New interfaces to expose JavaScript-based transforms.

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer {
    readonly attribute ReadableStream readable;
    readonly attribute WritableStream writable;
    readonly attribute any options;
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};
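The frame interfaces above are what a worker-side script transform observes. For example, the following non-normative sketch logs key frames and their dimensions as they pass through, leaving the payload untouched:

// In a worker (sketch): inspect encoded video frames without modifying them.
onrtctransform = (event) => {
  const transformer = event.transformer;
  const inspect = new TransformStream({
    transform(frame, controller) {
      if (frame.type === "key") { // only RTCEncodedVideoFrame exposes a type
        const { frameId, width, height } = frame.getMetadata();
        console.log(`key frame ${frameId}: ${width}x${height}, ${frame.data.byteLength} bytes`);
      }
      controller.enqueue(frame);
    }
  });
  transformer.readable.pipeThrough(inspect).pipeTo(transformer.writable);
};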

5.1. Operations

The new RTCRtpScriptTransform(worker, options, transfer) constructor steps are:

  1. Set t1 to an identity transform stream.

  2. Set t2 to an identity transform stream.

  3. Set this.[[writable]] to t1.[[writable]].

  4. Set this.[[readable]] to t2.[[readable]].

  5. Let serializedOptions be the result of StructuredSerializeWithTransfer(options, transfer).

  6. Let serializedReadable be the result of StructuredSerializeWithTransfer(t1.[[readable]], « t1.[[readable]] »).

  7. Let serializedWritable be the result of StructuredSerializeWithTransfer(t2.[[writable]], « t2.[[writable]] »).

  8. Queue a task on the DOM manipulation task source of worker’s global scope to run the following steps:

    1. Let transformerOptions be the result of StructuredDeserialize(serializedOptions, the current Realm).

    2. Let readable be the result of StructuredDeserialize(serializedReadable, the current Realm).

    3. Let writable be the result of StructuredDeserialize(serializedWritable, the current Realm).

    4. Let transformer be a new RTCRtpScriptTransformer.

    5. Set transformer.[[options]] to transformerOptions.

    6. Set transformer.[[readable]] to readable.

    7. Set transformer.[[writable]] to writable.

    8. Let event be the result of creating an event with RTCTransformEvent.

    9. Set event.type attribute to "rtctransform".

    10. Set event.transformer to transformer.

    11. Dispatch event on worker’s global scope.

// FIXME: Describe error handling (worker closing flag true at RTCRtpScriptTransform creation time. And worker being terminated while transform is processing data).
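For example, a page might construct a transform passing both options and a transfer list, so that a MessagePort is moved to the worker along with the serialized options (a sketch; the worker script, option names, and port usage are illustrative):

// Main thread (sketch): hand the worker a MessagePort through the options.
// sender is an existing RTCRtpSender.
const worker = new Worker("transform-worker.js");
const { port1, port2 } = new MessageChannel();

sender.transform = new RTCRtpScriptTransform(
  worker,
  { name: "senderTransform", port: port2 },
  [port2] // transferred by StructuredSerializeWithTransfer
);

// The page keeps port1 to send control messages to the running transform.
port1.postMessage({ command: "enable" });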

5.2. Attributes

A RTCRtpScriptTransformer has three private slots called [[options]], [[readable]] and [[writable]].

The options getter steps are:

  1. Return this.[[options]].

The readable getter steps are:

  1. Return this.[[readable]].

The writable getter steps are:

  1. Return this.[[writable]].
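In the worker, the corresponding transformer arrives through the rtctransform event; its options, readable and writable are then used to set up the processing pipeline. The following sketch is the worker-side counterpart of the construction example in § 5.1 (the option names and the processing body are placeholders):

// transform-worker.js (sketch)
let enabled = true;

onrtctransform = (event) => {
  const { readable, writable, options } = event.transformer;

  // The MessagePort transferred by the page, if any, can drive the transform.
  if (options.port) {
    options.port.onmessage = ({ data }) => { enabled = (data.command === "enable"); };
  }

  const process = new TransformStream({
    transform(frame, controller) {
      if (enabled) {
        const bytes = new Uint8Array(frame.data);
        // ... modify bytes in place here ...
        frame.data = bytes.buffer;
      }
      controller.enqueue(frame);
    }
  });

  readable.pipeThrough(process).pipeTo(writable);
};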

6. Privacy and security considerations

This API gives JavaScript access to the content of media streams. This is also available from other sources, such as Canvas and WebAudio.

However, streams that are isolated (as specified in [WEBRTC-IDENTITY]) or tainted with another origin, cannot be accessed using this API, since that would break the isolation rule.

The API will allow access to some aspects of timing information that are otherwise unavailable, which allows some fingerprinting surface.

The API will give access to encoded media, which means that the JS application will have full control over what’s delivered to internal components like the packetizer or the decoder. This may require additional care with auditing how data is handled inside these components.

For instance, packetizers may expect to see data only from trusted encoders, and may not be audited for reception of data from untrusted sources.

7. Examples

See the explainer document.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[MEDIACAPTURE-STREAMS]
Cullen Jennings; et al. Media Capture and Streams. 3 December 2021. CR. URL: https://www.w3.org/TR/mediacapture-streams/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[STREAMS]
Adam Rice; et al. Streams Standard. Living Standard. URL: https://streams.spec.whatwg.org/
[WebCryptoAPI]
Mark Watson. Web Cryptography API. 26 January 2017. REC. URL: https://www.w3.org/TR/WebCryptoAPI/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBRTC-1]
WebRTC 1.0: Real-time Communication Between Browsers. URL: https://www.w3.org/TR/webrtc/

Informative References

[WEB-CODECS]
Web Codecs explainer. URL: https://github.com/WICG/web-codecs/blob/master/explainer.md
[WEBRTC-IDENTITY]
Cullen Jennings; Martin Thomson. Identity for WebRTC 1.0. 27 September 2018. CR. URL: https://www.w3.org/TR/webrtc-identity/
[WEBRTC-NV-USE-CASES]
Bernard Aboba. WebRTC Next Version Use Cases. 23 November 2021. NOTE. URL: https://www.w3.org/TR/webrtc-nv-use-cases/

IDL Index

// New dictionary
dictionary RTCInsertableStreams {
    ReadableStream readable;
    WritableStream writable;
};

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};

dictionary RTCEncodedVideoFrameMetadata {
    long long frameId;
    sequence<long long> dependencies;
    unsigned short width;
    unsigned short height;
    long spatialIndex;
    long temporalIndex;
    long synchronizationSource;
    sequence<long> contributingSources;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedVideoFrame {
    readonly attribute RTCEncodedVideoFrameType type;
    readonly attribute unsigned long long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

dictionary RTCEncodedAudioFrameMetadata {
    long synchronizationSource;
    sequence<long> contributingSources;
};

[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedAudioFrame {
    readonly attribute unsigned long long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};


// New interfaces to expose JavaScript-based transforms.

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer {
    readonly attribute ReadableStream readable;
    readonly attribute WritableStream writable;
    readonly attribute any options;
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};