WebRTC Encoded Transform

W3C Working Draft, 31 January 2023

This version:
https://www.w3.org/TR/2023/WD-webrtc-encoded-transform-20230131/
Latest published version:
https://www.w3.org/TR/webrtc-encoded-transform/
Editor's Draft:
https://w3c.github.io/webrtc-encoded-transform/
History:
https://www.w3.org/standards/history/webrtc-encoded-transform
Feedback:
public-webrtc@w3.org with subject line “[webrtc-encoded-transform] … message topic …” (archives)
GitHub
Editors:
(Google)
(Google)
(Apple)

Abstract

This specification defines an API surface for manipulating the bits on MediaStreamTracks being sent via an RTCPeerConnection.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Web Real-Time Communications Working Group as a Working Draft using the Recommendation track. This document is intended to become a W3C Recommendation.

If you wish to make comments regarding this document, please send them to public-webrtc@w3.org (subscribe, archives). When sending e-mail, please put the text “webrtc-encoded-transform” in the subject, preferably like this: “[webrtc-encoded-transform] …summary of comment…”. All comments are welcome.

Publication as a Working Draft does not imply endorsement by W3C and its Members. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 2 November 2021 W3C Process Document.

1. Introduction

The [WEBRTC-NV-USE-CASES] document describes a use case that requires that the conferencing server does not have access to the cleartext media (requirement N27).

This specification provides access to encoded media, which is the output of the encoder part of a codec and the input to the decoder part of a codec. This allows the application to apply encryption locally.

The interface is inspired by [WEB-CODECS] to provide access to such functionality while retaining the setup flow of RTCPeerConnection.

2. Terminology

3. Specification

The Streams definition doesn’t use WebIDL much, but the WebRTC spec does. This specification shows the IDL extensions for WebRTC.

It uses an additional API on RTCRtpSender and RTCRtpReceiver to insert the processing into the pipeline.

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};
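As a non-normative illustration, the `transform` attribute defined above can be set on each sender of a connection. The sketch below assumes a `makeTransform` callback (hypothetical; it would return an SFrameTransform or RTCRtpScriptTransform) and a peer connection set up elsewhere; setting the attribute to null would detach a previously installed transform.

```javascript
// Sketch: attach a transform to every RTCRtpSender of a peer connection.
// `peerConnection` and `makeTransform` are assumed to exist; the only API
// used from this specification is the nullable `transform` attribute.
function attachTransformToSenders(peerConnection, makeTransform) {
  for (const sender of peerConnection.getSenders()) {
    // Per the IDL above, `transform` is a nullable attribute.
    sender.transform = makeTransform(sender);
  }
  return peerConnection.getSenders().map((s) => s.transform);
}
```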

3.1. Extension operation

At the time when a codec is initialized as part of the encoder, and the corresponding flag is set in the RTCPeerConnection's RTCConfiguration argument, ensure that the codec is disabled and produces no output.

3.1.1. Stream creation

At construction of each RTCRtpSender or RTCRtpReceiver, run the following steps:

  1. Initialize this.[[transform]] to null.

  2. Initialize this.[[readable]] to a new ReadableStream.

  3. Set up this.[[readable]]. this.[[readable]] is provided frames using the readEncodedData algorithm given this as parameter.

  4. Set this.[[readable]].[[owner]] to this.

  5. Initialize this.[[writable]] to a new WritableStream.

  6. Set up this.[[writable]] with its writeAlgorithm set to writeEncodedData given this as parameter and its sizeAlgorithm to an algorithm that returns 0.

    Chunk size is set to 0 to explicitly disable streams backpressure on the write side.

  7. Set this.[[writable]].[[owner]] to this.

  8. Initialize this.[[pipeToController]] to null.

  9. Initialize this.[[lastReceivedFrameCounter]] to 0.

  10. Initialize this.[[lastEnqueuedFrameCounter]] to 0.

  11. Queue a task to run the following steps:

    1. If this.[[pipeToController]] is not null, abort these steps.

    2. Set this.[[pipeToController]] to a new AbortController.

    3. Call pipeTo with this.[[readable]], this.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to true and this.[[pipeToController]].signal.

3.1.2. Stream processing

The readEncodedData algorithm is given a rtcObject as parameter. It is defined by running the following steps:

  1. Wait for a frame to be produced by rtcObject’s encoder if it is a RTCRtpSender or rtcObject’s depacketizer if it is a RTCRtpReceiver.

  2. Increment rtcObject.[[lastEnqueuedFrameCounter]] by 1.

  3. Let frame be the newly produced frame.

  4. Set frame.[[owner]] to rtcObject.

  5. Set frame.[[counter]] to rtcObject.[[lastEnqueuedFrameCounter]].

  6. Enqueue frame in rtcObject.[[readable]].

The writeEncodedData algorithm is given a rtcObject as parameter and a frame as input. It is defined by running the following steps:

  1. If frame.[[owner]] is not equal to rtcObject, abort these steps and return a promise resolved with undefined. A processor cannot create frames, or move frames between streams.

  2. If frame.[[counter]] is equal or smaller than rtcObject.[[lastReceivedFrameCounter]], abort these steps and return a promise resolved with undefined. A processor cannot reorder frames, although it may delay them or drop them.

  3. Set rtcObject.[[lastReceivedFrameCounter]] to frame.[[counter]].

  4. Let data be frame.[[data]].

  5. Let serializedFrame be StructuredSerializeWithTransfer(frame, « data »).

  6. Let frameCopy be StructuredDeserialize(serializedFrame, frame’s relevant realm).

  7. Enqueue frameCopy for processing as if it came directly from the encoded data source, by running one of the following steps:

    1. If rtcObject is a RTCRtpSender, enqueue frameCopy to rtcObject’s packetizer, to be processed in parallel.

    2. If rtcObject is a RTCRtpReceiver, enqueue frameCopy to rtcObject’s decoder, to be processed in parallel.

  8. Return a promise resolved with undefined.

On the sender side, as part of readEncodedData, frames produced by rtcObject’s encoder MUST be enqueued in rtcObject.[[readable]] in the encoder’s output order. As writeEncodedData ensures that the transform cannot reorder frames, the encoder’s output order is also the order followed by packetizers to generate RTP packets and assign RTP packet sequence numbers.

On the receiver side, as part of readEncodedData, frames produced by rtcObject’s depacketizer MUST be enqueued in rtcObject.[[readable]] in the same encoder’s output order. To ensure this order is respected, the depacketizer will typically use RTP packet sequence numbers to reorder RTP packets as needed before enqueuing frames in rtcObject.[[readable]]. As writeEncodedData ensures that the transform cannot reorder frames, this will be the order expected by rtcObject’s decoder.
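The ownership and ordering checks of writeEncodedData (steps 1-3 above) can be sketched with plain objects. The property names `owner`, `counter` and `lastReceivedFrameCounter` below stand in for the corresponding internal slots; this is an illustrative model, not the normative algorithm.

```javascript
// Sketch of writeEncodedData's guard logic: a transform may delay or drop
// frames, but cannot inject foreign frames or reorder the ones it received.
// Returns true if the frame would be accepted for further processing.
function acceptFrame(rtcObject, frame) {
  // Step 1: a processor cannot create frames or move frames between streams.
  if (frame.owner !== rtcObject) return false;
  // Step 2: a processor cannot reorder frames.
  if (frame.counter <= rtcObject.lastReceivedFrameCounter) return false;
  // Step 3: remember the highest counter seen so far.
  rtcObject.lastReceivedFrameCounter = frame.counter;
  return true;
}
```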

3.2. Extension attribute

A RTCRtpTransform has two private slots called [[readable]] and [[writable]].

Each RTCRtpTransform has an association steps set, which is empty by default.

The transform getter steps are:

  1. Return this.[[transform]].

The transform setter steps are:

  1. Let transform be the argument to the setter.

  2. Let checkedTransform be transform if it is not null, or an identity transform stream otherwise.

  3. Let reader be the result of getting a reader for checkedTransform.[[readable]].

  4. Let writer be the result of getting a writer for checkedTransform.[[writable]].

  5. Initialize newPipeToController to a new AbortController.

  6. If this.[[pipeToController]] is not null, run the following steps:

    1. Add the chain transform algorithm steps to this.[[pipeToController]].signal.

    2. Signal abort on this.[[pipeToController]].signal.

  7. Else, run the chain transform algorithm steps.

  8. Set this.[[pipeToController]] to newPipeToController.

  9. Set this.[[transform]] to transform.

  10. Run the steps in the set of association steps of transform with this.

The chain transform algorithm steps are defined as:

  1. If newPipeToController is aborted, abort these steps.

  2. Release reader.

  3. Release writer.

  4. Assert that newPipeToController is the same object as rtcObject.[[pipeToController]].

  5. Call pipeTo with rtcObject.[[readable]], checkedTransform.[[writable]], preventClose equal to false, preventAbort equal to false, preventCancel equal to true and newPipeToController.signal.

  6. Call pipeTo with checkedTransform.[[readable]], rtcObject.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to false and newPipeToController.signal.

This algorithm is defined so that transforms can be updated dynamically. There is no guarantee as to which frame the switch from the previous transform to the new transform will happen on.

If a web application sets the transform synchronously at creation of the RTCRtpSender (for instance when calling addTrack), the transform will receive the first frame generated by the RTCRtpSender's encoder. Similarly, if a web application sets the transform synchronously at creation of the RTCRtpReceiver (for instance when calling addTrack, or in a track event handler), the transform will receive the first full frame generated by the RTCRtpReceiver's depacketizer.

4. SFrameTransform

The API presented in this section allows applications to process SFrame data as defined in [SFrame].

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};

The new SFrameTransform(options) constructor steps are:

  1. Let transformAlgorithm be an algorithm which takes a frame as input and runs the SFrame transform algorithm with this and frame.

  2. Set this.[[transform]] to a new TransformStream.

  3. Set up this.[[transform]] with transformAlgorithm set to transformAlgorithm.

  4. Let options be the method’s first argument.

  5. Set this.[[role]] to options["role"].

  6. Set this.[[readable]] to this.[[transform]].[[readable]].

  7. Set this.[[writable]] to this.[[transform]].[[writable]].

4.1. Algorithm

The SFrame transform algorithm, given sframe as a SFrameTransform object and frame, runs these steps:

  1. Let role be sframe.[[role]].

  2. If frame.[[owner]] is a RTCRtpSender, set role to 'encrypt'.

  3. If frame.[[owner]] is a RTCRtpReceiver, set role to 'decrypt'.

  4. Let data be undefined.

  5. If frame is a BufferSource, set data to frame.

  6. If frame is a RTCEncodedAudioFrame, set data to frame.data.

  7. If frame is a RTCEncodedVideoFrame, set data to frame.data.

  8. If data is undefined, abort these steps.

  9. Let buffer be the result of running the SFrame algorithm with data and role as parameters. This algorithm is defined by the SFrame specification and returns an ArrayBuffer.

  10. If the SFrame algorithm exits abruptly with an error, queue a task to run the following substeps:

    1. If the processing fails on the decryption side due to data not following the SFrame format, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to syntax and its frame attribute set to frame.

    2. If the processing fails on the decryption side due to the key identifier parsed in data being unknown, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to keyID, its frame attribute set to frame and its keyID attribute set to the keyID value parsed in the SFrame header.

    3. If the processing fails on the decryption side due to validation of the authentication tag, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to authentication and its frame attribute set to frame.

    4. Abort these steps.

  11. If frame is a BufferSource, set frame to buffer.

  12. If frame is a RTCEncodedAudioFrame, set frame.data to buffer.

  13. If frame is a RTCEncodedVideoFrame, set frame.data to buffer.

  14. Enqueue frame in sframe.[[transform]].
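A non-normative sketch of an application-side error handler for the three SFrameTransformErrorEventType values above. The event shape mirrors SFrameTransformErrorEvent (errorType, keyID, frame); the returned message strings are illustrative, not specified.

```javascript
// Sketch: map an SFrameTransformErrorEvent to a human-readable description.
// In a page this would typically run inside `sframeTransform.onerror`.
function describeSFrameError(event) {
  switch (event.errorType) {
    case "syntax":
      // Decryption-side data did not follow the SFrame format.
      return "received data does not follow the SFrame format";
    case "keyID":
      // The key identifier parsed from the SFrame header is unknown;
      // the application may need to call setEncryptionKey for it.
      return `unknown key identifier ${event.keyID}`;
    case "authentication":
      // The authentication tag failed validation.
      return "authentication tag validation failed";
    default:
      return "unknown SFrame error";
  }
}
```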

4.2. Methods

The setEncryptionKey(key, keyID) method steps are:
  1. Let promise be a new promise.

  2. If keyID is a bigint which cannot be represented as an integer between 0 and 2^64 - 1 inclusive, reject promise with a RangeError exception.

  3. Otherwise, in parallel, run the following steps:

    1. Set key with its optional keyID as key material to use for the SFrame transform algorithm, as defined by the SFrame specification.

    2. If setting the key material fails, reject promise with an InvalidModificationError exception and abort these steps.

    3. Resolve promise with undefined.

  4. Return promise.

5. RTCRtpScriptTransform

5.1. RTCEncodedVideoFrameType dictionary

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};
Enumeration description:

empty
This frame contains no data.

key
This frame can be decoded without reference to any other frames.

delta
This frame references another frame and cannot be decoded without that frame.
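A small, hypothetical helper illustrating the enum semantics: only a "key" frame is decodable in isolation, and an "empty" frame carries no data.

```javascript
// Sketch: interpret an RTCEncodedVideoFrameType value.
function isIndependentlyDecodable(frameType) {
  // "delta" frames need a reference frame; "empty" frames have nothing to decode.
  return frameType === "key";
}
function carriesData(frameType) {
  return frameType !== "empty";
}
```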

5.2. RTCEncodedVideoFrameMetadata dictionary

dictionary RTCEncodedVideoFrameMetadata {
    unsigned long long frameId;
    sequence<unsigned long long> dependencies;
    unsigned short width;
    unsigned short height;
    unsigned long spatialIndex;
    unsigned long temporalIndex;
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
};

5.2.1. Members

synchronizationSource of type unsigned long

The synchronization source (ssrc) identifier is an unsigned integer value per [RFC3550] used to identify the stream of RTP packets that the encoded frame object is describing.

payloadType of type octet

The payload type is an unsigned integer value in the range from 0 to 127 per [RFC3550] that is used to describe the format of the RTP payload.

contributingSources of type sequence<unsigned long>

The list of contributing sources (csrc list) as defined in [RFC3550].

5.3. RTCEncodedVideoFrame interface

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedVideoFrame {
    readonly attribute RTCEncodedVideoFrameType type;
    readonly attribute unsigned long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

5.3.1. Members

type of type RTCEncodedVideoFrameType

The type attribute allows the application to determine when a key frame is being sent or received.

timestamp of type unsigned long

The RTP timestamp identifier is an unsigned integer value per [RFC3550] that reflects the sampling instant of the first octet in the RTP data packet.

data of type ArrayBuffer

The encoded frame data.

5.3.2. Methods

getMetadata()

Returns the metadata associated with the frame.

5.4. RTCEncodedAudioFrameMetadata dictionary

dictionary RTCEncodedAudioFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    short sequenceNumber;
};

5.4.1. Members

synchronizationSource of type unsigned long

The synchronization source (ssrc) identifier is an unsigned integer value per [RFC3550] used to identify the stream of RTP packets that the encoded frame object is describing.

payloadType of type octet

The payload type is an unsigned integer value in the range from 0 to 127 per [RFC3550] that is used to describe the format of the RTP payload.

contributingSources of type sequence<unsigned long>

The list of contributing sources (csrc list) as defined in [RFC3550].

sequenceNumber of type short

The RTP sequence number as defined in [RFC3550]. Only exists for incoming audio frames.

Comparing two sequence numbers requires serial number arithmetic described in [RFC1982].
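As a non-normative illustration, [RFC1982] serial number comparison for 16-bit RTP sequence numbers can be sketched as follows (SERIAL_BITS = 16, so `a` precedes `b` when the wrapped difference `(b - a) mod 2^16` lies strictly between 0 and 2^15):

```javascript
// Sketch: RFC 1982 "less than" for 16-bit sequence numbers.
// Returns true if `a` comes before `b` under serial number arithmetic.
// Note: when the two numbers are exactly 2^15 apart, RFC 1982 leaves the
// comparison undefined; this sketch returns false in both directions.
function seqLessThan(a, b) {
  return a !== b && ((b - a) & 0xffff) < 0x8000;
}
```

This is why naive numeric comparison fails near the wrap-around: sequence number 65535 is immediately followed by 0.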

5.5. RTCEncodedAudioFrame interface

[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedAudioFrame {
    readonly attribute unsigned long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};

5.5.1. Members

timestamp of type unsigned long

The RTP timestamp identifier is an unsigned integer value per [RFC3550] that reflects the sampling instant of the first octet in the RTP data packet.

data of type ArrayBuffer

The encoded frame data.

5.5.2. Methods

getMetadata()

Returns the metadata associated with the frame.

// New interfaces to expose JavaScript-based transforms.

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer {
    readonly attribute ReadableStream readable;
    readonly attribute WritableStream writable;
    readonly attribute any options;
    Promise<unsigned long long> generateKeyFrame(optional DOMString rid);
    Promise<undefined> sendKeyFrameRequest();
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};

5.6. Operations

The new RTCRtpScriptTransform(worker, options, transfer) constructor steps are:

  1. Set t1 to an identity transform stream.

  2. Set t2 to an identity transform stream.

  3. Set this.[[writable]] to t1.[[writable]].

  4. Set this.[[readable]] to t2.[[readable]].

  5. Let serializedOptions be the result of StructuredSerializeWithTransfer(options, transfer).

  6. Let serializedReadable be the result of StructuredSerializeWithTransfer(t1.[[readable]], « t1.[[readable]] »).

  7. Let serializedWritable be the result of StructuredSerializeWithTransfer(t2.[[writable]], « t2.[[writable]] »).

  8. Queue a task on the DOM manipulation task source of worker’s global scope to run the following steps:

    1. Let transformerOptions be the result of StructuredDeserialize(serializedOptions, the current Realm).

    2. Let readable be the result of StructuredDeserialize(serializedReadable, the current Realm).

    3. Let writable be the result of StructuredDeserialize(serializedWritable, the current Realm).

    4. Let transformer be a new RTCRtpScriptTransformer.

    5. Set transformer.[[options]] to transformerOptions.

    6. Set transformer.[[readable]] to readable.

    7. Set transformer.[[writable]] to writable.

    8. Fire an event named rtctransform using RTCTransformEvent with transformer set to transformer on worker’s global scope.
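The stream plumbing of constructor steps 1-4 can be sketched with WHATWG streams: two identity TransformStreams are created, the RTCRtpScriptTransform keeps t1's writable and t2's readable, and their counterparts are what gets transferred to the worker as the RTCRtpScriptTransformer's readable and writable. The helper below is an illustrative model, not part of the API.

```javascript
// Sketch of RTCRtpScriptTransform constructor steps 1-4 using identity
// TransformStreams (available as globals in browsers and Node.js >= 18).
function makeTransformStreamPair() {
  const t1 = new TransformStream(); // frames flow: rtcObject -> worker
  const t2 = new TransformStream(); // frames flow: worker -> rtcObject
  return {
    // Kept by the RTCRtpScriptTransform on the main thread:
    writable: t1.writable,
    readable: t2.readable,
    // Serialized with StructuredSerializeWithTransfer and handed to the
    // worker, where they surface on the RTCRtpScriptTransformer:
    forWorker: { readable: t1.readable, writable: t2.writable },
  };
}
```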

// FIXME: Describe error handling (worker closing flag true at RTCRtpScriptTransform creation time. And worker being terminated while transform is processing data).

Each RTCRtpScriptTransform has the following set of association steps, given rtcObject:

  1. Let transform be the RTCRtpScriptTransform object that owns the association steps.

  2. Let encoder be rtcObject’s encoder if rtcObject is a RTCRtpSender or undefined otherwise.

  3. Let depacketizer be rtcObject’s depacketizer if rtcObject is a RTCRtpReceiver or undefined otherwise.

  4. Queue a task on the DOM manipulation task source of worker’s global scope to run the following steps:

    1. Let transformer be the RTCRtpScriptTransformer object associated to transform.

    2. Set transformer.[[encoder]] to encoder.

    3. Set transformer.[[depacketizer]] to depacketizer.

The generateKeyFrame(rid) method steps are:

  1. Let promise be a new promise.

  2. Run the generate key frame algorithm with promise, this.[[encoder]] and rid.

  3. Return promise.

The sendKeyFrameRequest() method steps are:

  1. Let promise be a new promise.

  2. Run the send request key frame algorithm with promise and this.[[depacketizer]].

  3. Return promise.

5.7. Attributes

A RTCRtpScriptTransformer has the following private slots: [[depacketizer]], [[encoder]], [[options]], [[readable]] and [[writable]]. In addition, a RTCRtpScriptTransformer is always associated with its parent RTCRtpScriptTransform transform. This allows algorithms to go from an RTCRtpScriptTransformer object to its RTCRtpScriptTransform parent and vice versa.

The options getter steps are:

  1. Return this.[[options]].

The readable getter steps are:

  1. Return this.[[readable]].

The writable getter steps are:

  1. Return this.[[writable]].

5.8. KeyFrame Algorithms

The generate key frame algorithm, given promise, encoder and rid, is defined by running these steps:

  1. If encoder is undefined, reject promise with InvalidStateError and abort these steps.

  2. If encoder is not processing video frames, reject promise with InvalidStateError and abort these steps.

  3. If rid is defined, validate its value. If invalid, reject promise with NotAllowedError and abort these steps.

  4. In parallel, run the following steps:

    1. Gather a list of video encoders, named videoEncoders, from encoder, ordered according to their negotiated RIDs, if any.

    2. If rid is defined, remove from videoEncoders any video encoder that does not match rid.

    3. If rid is undefined, remove from videoEncoders all video encoders except the first one.

    4. If videoEncoders is empty, reject promise with NotFoundError and abort these steps. videoEncoders is expected to be empty if the corresponding RTCRtpSender is not active, or the corresponding RTCRtpSender track is ended.

    5. Let videoEncoder be the first encoder in videoEncoders.

    6. If rid is undefined, set rid to the RID value corresponding to videoEncoder.

    7. Create a pending key frame task called task with task.[[rid]] set to rid and task.[[promise]] set to promise.

    8. If encoder.[[pendingKeyFrameTasks]] is undefined, initialize encoder.[[pendingKeyFrameTasks]] to an empty set.

    9. Let shouldTriggerKeyFrame be true if encoder.[[pendingKeyFrameTasks]] does not contain a task whose [[rid]] value is equal to rid, and false otherwise.

    10. Add task to encoder.[[pendingKeyFrameTasks]].

    11. If shouldTriggerKeyFrame is true, instruct videoEncoder to generate a key frame for the next provided video frame.

For any RTCRtpScriptTransformer named transformer, the following steps are run just before any frame is enqueued in transformer.[[readable]]:

  1. Let encoder be transformer.[[encoder]].

  2. If encoder or encoder.[[pendingKeyFrameTasks]] is undefined, abort these steps.

  3. If frame is not a video "key" frame, abort these steps.

  4. For each task in encoder.[[pendingKeyFrameTasks]], run the following steps:

    1. If frame was generated by a video encoder identified by task.[[rid]], run the following steps:

      1. Remove task from encoder.[[pendingKeyFrameTasks]].

      2. Resolve task.[[promise]] with frame’s timestamp.

By resolving the promises just before enqueuing the corresponding key frame in a RTCRtpScriptTransformer's readable, the resolution callbacks of the promises are always executed just before the corresponding key frame is exposed. If the promise is associated with several rid values, it will be resolved when the first key frame corresponding to one of the rid values is enqueued.
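The per-frame bookkeeping above can be sketched as follows. Plain callbacks stand in for the promises of the spec text, and the property names `rid`, `type`, `timestamp` and `resolve` are illustrative stand-ins for the corresponding internal slots and frame attributes.

```javascript
// Sketch: when a key frame is about to be enqueued, complete every pending
// key frame task whose rid matches the frame's originating encoder.
function resolvePendingKeyFrameTasks(pendingTasks, frame) {
  // Only video "key" frames can satisfy a pending request.
  if (frame.type !== "key") return;
  // Copy before iterating, since matching tasks are removed from the set.
  for (const task of [...pendingTasks]) {
    if (task.rid === frame.rid) {
      pendingTasks.delete(task);
      // The spec resolves task.[[promise]] with the frame's timestamp.
      task.resolve(frame.timestamp);
    }
  }
}
```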

The send request key frame algorithm, given promise and depacketizer, is defined by running these steps:

  1. If depacketizer is undefined, reject promise with InvalidStateError and abort these steps.

  2. If depacketizer is not processing video packets, reject promise with InvalidStateError and abort these steps.

  3. In parallel, run the following steps:

    1. If sending a Full Intra Request (FIR) by depacketizer’s receiver is not deemed appropriate, resolve promise with undefined and abort these steps. Section 4.3.1 of [RFC5104] provides guidelines on how and when it is appropriate to send a Full Intra Request.

    2. Generate a Full Intra Request (FIR) packet as defined in section 4.3.1 of [RFC5104] and send it through depacketizer’s receiver.

    3. Resolve promise with undefined.

6. RTCRtpSender extension

An additional API on RTCRtpSender is added to complement the key frame generation capability added to RTCRtpScriptTransformer.

partial interface RTCRtpSender {
    Promise<undefined> generateKeyFrame(optional sequence<DOMString> rids);
};

6.1. Extension operation

The generateKeyFrame(rids) method steps are:

  1. Let promise be a new promise.

  2. In parallel, run the generate key frame algorithm with promise, this’s encoder and rids.

  3. Return promise.

7. Privacy and security considerations

This API gives JavaScript access to the content of media streams. Such access is also available from other sources, such as Canvas and WebAudio.

However, streams that are isolated (as specified in [WEBRTC-IDENTITY]) or tainted with another origin cannot be accessed using this API, since that would break the isolation rule.

The API will allow access to some aspects of timing information that are otherwise unavailable, which adds some fingerprinting surface.

The API will give access to encoded media, which means that the JS application will have full control over what’s delivered to internal components like the packetizer or the decoder. This may require additional care with auditing how data is handled inside these components.

For instance, packetizers may expect to see data only from trusted encoders, and may not be audited for reception of data from untrusted sources.

8. Examples

See the explainer document.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[MEDIACAPTURE-STREAMS]
Cullen Jennings; et al. Media Capture and Streams. 12 January 2023. CR. URL: https://www.w3.org/TR/mediacapture-streams/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[STREAMS]
Adam Rice; et al. Streams Standard. Living Standard. URL: https://streams.spec.whatwg.org/
[WebCryptoAPI]
Mark Watson. Web Cryptography API. 26 January 2017. REC. URL: https://www.w3.org/TR/WebCryptoAPI/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBRTC]
Cullen Jennings; Henrik Boström; Jan-Ivar Bruaroey. WebRTC 1.0: Real-Time Communication Between Browsers. 26 January 2021. REC. URL: https://www.w3.org/TR/webrtc/

Informative References

[RFC1982]
R. Elz; R. Bush. Serial Number Arithmetic. August 1996. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc1982
[RFC3550]
H. Schulzrinne; et al. RTP: A Transport Protocol for Real-Time Applications. July 2003. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc3550
[RFC5104]
S. Wenger; et al. Codec Control Messages in the RTP Audio-Visual Profile with Feedback (AVPF). February 2008. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc5104
[SFrame]
Secure Frame (SFrame). URL: https://www.ietf.org/archive/id/draft-ietf-sframe-enc-00.html
[WEB-CODECS]
Web Codecs explainer. URL: https://github.com/WICG/web-codecs/blob/master/explainer.md
[WEBRTC-IDENTITY]
Cullen Jennings; Martin Thomson. Identity for WebRTC 1.0. 27 September 2018. CR. URL: https://www.w3.org/TR/webrtc-identity/
[WEBRTC-NV-USE-CASES]
Bernard Aboba. WebRTC Next Version Use Cases. 27 January 2023. NOTE. URL: https://www.w3.org/TR/webrtc-nv-use-cases/

IDL Index

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};

dictionary RTCEncodedVideoFrameMetadata {
    unsigned long long frameId;
    sequence<unsigned long long> dependencies;
    unsigned short width;
    unsigned short height;
    unsigned long spatialIndex;
    unsigned long temporalIndex;
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedVideoFrame {
    readonly attribute RTCEncodedVideoFrameType type;
    readonly attribute unsigned long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

dictionary RTCEncodedAudioFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    short sequenceNumber;
};

[Exposed=(Window,DedicatedWorker)]
interface RTCEncodedAudioFrame {
    readonly attribute unsigned long timestamp;
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer {
    readonly attribute ReadableStream readable;
    readonly attribute WritableStream writable;
    readonly attribute any options;
    Promise<unsigned long long> generateKeyFrame(optional DOMString rid);
    Promise<undefined> sendKeyFrameRequest();
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};

partial interface RTCRtpSender {
    Promise<undefined> generateKeyFrame(optional sequence<DOMString> rids);
};