Scalable Video Coding (SVC) Extension for WebRTC

W3C Working Draft 08 April 2020

This version:
https://www.w3.org/TR/2020/WD-webrtc-svc-20200408/
Latest published version:
https://www.w3.org/TR/webrtc-svc/
Latest editor's draft:
https://w3c.github.io/webrtc-svc/
Previous version:
https://www.w3.org/TR/2019/WD-webrtc-svc-20191022/
Editors:
Peter Thatcher (Google)
Bernard Aboba (Microsoft Corporation)
Participate:
Mailing list
Browse open issues
IETF AVTCORE Working Group

Abstract

This document defines a set of ECMAScript APIs in WebIDL to extend the WebRTC 1.0 API to enable user agents to support scalable video coding (SVC).

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

The API is based on preliminary work done in the W3C ORTC Community Group.

This document was published by the Web Real-Time Communications Working Group as a Working Draft. This document is intended to become a W3C Recommendation.

Comments regarding this document are welcome. Please send them to public-webrtc@w3.org (archives).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

1. Introduction

This section is non-normative.

This specification extends the WebRTC specification [WEBRTC] to enable configuration of encoding parameters for scalable video coding (SVC). Since SVC bitstreams are self-describing, and SVC-capable codecs implemented in browsers require that compliant decoders be capable of decoding any legal encoding sent by an encoder, this specification does not support decoder configuration. However, decoders that cannot decode every legal bitstream can describe the scalability modes they do support.

2. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MUST and SHOULD in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

This specification defines conformance criteria that apply to a single product: the user agent that implements the interfaces that it contains.

Conformance requirements phrased as algorithms or specific steps may be implemented in any manner, so long as the end result is equivalent. (In particular, the algorithms defined in this specification are intended to be easy to follow, and not intended to be performant.)

Implementations that use ECMAScript to implement the APIs defined in this specification MUST implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [WEBIDL-1], as this specification uses that specification and terminology.

3. Terminology

The EventHandler interface, representing a callback used for event handlers, and the ErrorEvent interface are defined in [HTML51].

The concepts queue a task, fires a simple event and networking task source are defined in [HTML51].

The terms event, event handlers and event handler event types are defined in [HTML51].

When referring to exceptions, the terms throw and create are defined in [WEBIDL-1].

The term simulcast envelope refers to the maximum number of simulcast streams and the order of the encoding parameters.

The terms fulfilled, rejected, resolved, pending and settled used in the context of Promises are defined in [ECMASCRIPT-6.0].

The terms MediaStream, MediaStreamTrack, and MediaStreamConstraints are defined in [GETUSERMEDIA].

This specification references objects, methods, internal slots and dictionaries defined in [WEBRTC], including the RTCPeerConnection object (defined in Section 4.4), the RTCError object (defined in Section 11.1), the addTrack and addTransceiver methods (defined in Section 5.1), the setCodecPreferences method (defined in Section 5.4), the getParameters and setParameters methods (defined in Section 5.2), the RTCRtpParameters dictionary (defined in Section 5.2.1), the RTCRtpSendParameters dictionary (defined in Section 5.2.2), the RTCRtpReceiveParameters dictionary (defined in Section 5.2.3), the RTCRtpCodingParameters dictionary (defined in Section 5.2.4), the RTCRtpEncodingParameters dictionary (defined in Section 5.2.6), the RTCRtpCodecCapability dictionary (defined in Section 5.2.13), the RTCRtpTransceiver object including its [[Sender]] and [[Receiver]] internal slots (defined in Section 5.4), the RTCRtpSender object, including its [[SendEncodings]] and [[LastReturnedParameters]] internal slots (defined in Section 5.2) and the RTCRtpReceiver object (defined in Section 5.3).

For Scalable Video Coding (SVC), the terms single-session transmission (SST) and multi-session transmission (MST) are defined in [RFC6190]. This specification supports only SST, not MST.

The term Single Real-time Transport Protocol (RTP) stream Single Transport (SRST), defined in [RFC7656] Section 3.7, refers to SVC implementations that transmit all layers within a single transport, using a single RTP stream and synchronization source (SSRC). The term Multiple RTP stream Single Transport (MRST), also defined in [RFC7656] Section 3.7, refers to implementations that transmit all layers within a single transport, using multiple RTP streams with a distinct SSRC for each layer. This specification only supports SRST transport, not MRST. Codecs with RTP payload specifications supporting SRST transport include VP8 [RFC7741], VP9 [VP9-PAYLOAD], AV1 [AV1-RTP] and H.264/SVC [RFC6190].

4. Operational model

This specification extends [WEBRTC] to enable configuration of encoding parameters for Scalable Video Coding (SVC), as well as discovery of the SVC capabilities of both an encoder and decoder, by extending the RTCRtpEncodingParameters and RTCRtpCodecCapability dictionaries.

Since this specification does not change the behavior of WebRTC objects and methods, restrictions relating to Offer/Answer negotiation and encoding parameters remain, as described in [WEBRTC] Section 5.2: "setParameters does not cause SDP renegotiation and can only be used to change what the media stack is sending or receiving within the envelope negotiated by Offer/Answer."

The configuration of SVC-capable codecs implemented in browsers fits within this restriction. Codecs such as VP8 [RFC6386], VP9 [VP9] and AV1 [AV1] mandate support for SVC and require a compliant decoder to be able to decode any compliant encoding that an encoder can send. Therefore, for these codecs there is no need to configure the decoder or to negotiate SVC support within Offer/Answer, enabling encoding parameters to be used for SVC configuration.

4.1 Error handling

[WEBRTC] Section 5.2 describes error handling in setParameters, including use of RTCError to indicate a "hardware-encoder-error" due to an unsupported encoding parameter, as well as OperationError for other errors. Implementations of this specification utilize RTCError and OperationError in the prescribed manner when an invalid scalabilityMode value is provided to setParameters or addTransceiver.
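
The following non-normative sketch illustrates how an application might handle these errors; the helper name applyScalabilityMode is illustrative, and sender is assumed to be an RTCRtpSender obtained from an earlier addTransceiver or addTrack call.

// Non-normative sketch: the helper name and fallback behaviour are illustrative.
// `sender` is assumed to be an RTCRtpSender obtained earlier.
async function applyScalabilityMode(sender, mode) {
  const params = sender.getParameters();
  params.encodings[0].scalabilityMode = mode;
  try {
    await sender.setParameters(params);
  } catch (e) {
    if (e instanceof RTCError && e.errorDetail === 'hardware-encoder-error') {
      console.log(`Hardware encoder could not apply ${mode}: ${e.message}`);
    } else {
      // OperationError (or another error) for other invalid parameters
      console.log(`setParameters failed: ${e.name}: ${e.message}`);
    }
  }
}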

Note that since the addTransceiver and setCodecPreferences methods can be called before the Offer/Answer negotiation has concluded, the negotiated codec and its capabilities may not yet be known, and the scalabilityMode values configured in sendEncodings may turn out to be incompatible with the eventually selected codec. Since this cannot be determined at the time addTransceiver is called, an error can be raised at that point only if the requested scalabilityMode value is not supported by any codec the implementation supports. To determine whether the requested scalabilityMode values have been applied, the application can call the RTCRtpSender.getParameters method after negotiation has completed and the sending codec has been determined. If the configuration is not satisfactory, the application can use setParameters to change it.
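
A non-normative sketch of this pattern is shown below; transceiver is assumed to have been created by addTransceiver with sendEncodings, and the desired and fallback mode values are purely illustrative.

// Non-normative sketch: verify the applied mode once negotiation has completed
// and the sending codec is known, falling back to another mode if necessary.
async function verifyScalabilityMode(transceiver, desiredMode, fallbackMode) {
  const params = transceiver.sender.getParameters();
  if (params.encodings[0].scalabilityMode !== desiredMode) {
    // The requested mode was not applied (e.g. the negotiated codec does not
    // support it); adjust within the envelope negotiated by Offer/Answer.
    params.encodings[0].scalabilityMode = fallbackMode;
    await transceiver.sender.setParameters(params);
  }
}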

To influence the Offer/Answer negotiation so as to make it more likely that the desired scalabilityMode values can be applied, setCodecPreferences can be used to limit the negotiated codecs to those supporting the desired configuration. For example, if it is desired to support temporal scalability as well as spatial adaptation, when addTransceiver is called, sendEncodings can be configured so as to send multiple simulcast streams with different resolutions, with each stream utilizing temporal scalability. Since temporal scalability is supported by the VP8, VP9 and AV1 codecs, such a configuration could be applied if any of these codecs were negotiated. In the event that H.264/AVC was negotiated, temporal scalability would not be available, but simulcast with different resolutions would be applied. If this was unsatisfactory, a subsequent call to setParameters could be used to adjust the parameters within the envelope negotiated in Offer/Answer.
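
The following non-normative sketch combines these techniques; pc and videoTrack are assumed to exist, and the 'L1T3' mode, rid values and scaling factors are examples rather than requirements.

// Non-normative sketch: three simulcast streams, each using temporal
// scalability, with codec preferences limited to codecs whose encoder
// advertises the desired mode (RTX entries are kept so that retransmission
// remains available).
const transceiver = pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    {rid: 'q', scaleResolutionDownBy: 4.0, scalabilityMode: 'L1T3'},
    {rid: 'h', scaleResolutionDownBy: 2.0, scalabilityMode: 'L1T3'},
    {rid: 'f', scalabilityMode: 'L1T3'}
  ]
});
const {codecs} = RTCRtpSender.getCapabilities('video');
const preferred = codecs.filter(c =>
  (c.scalabilityModes || []).includes('L1T3') || c.mimeType === 'video/rtx');
if (preferred.length > 0) {
  transceiver.setCodecPreferences(preferred);
}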

In situations where the decoder cannot necessarily decode anything that an encoder can send (e.g. an H.264/SVC decoder), the getCapabilities method can be used to retrieve the scalability modes supported by the decoder and encoder. By exchanging capabilities, the application can compute the intersection of the scalabilityMode values supported by the local and remote peers, enabling it to configure scalabilityMode values supported by both the local and remote peers using the addTransceiver and setParameters methods. However, in situations where SVC modes are negotiated in SDP Offer/Answer, setParameters can only change scalabilityMode values within the envelope negotiated by Offer/Answer, resulting in an error if the requested scalabilityMode value is outside this envelope.
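
A non-normative sketch of such an intersection computation is shown below; the helper name is illustrative, and remoteReceiverCaps is assumed to have been received from the remote peer over the application's signaling channel.

// Non-normative sketch: compute the scalabilityMode values usable for a given
// codec, given the local sender capabilities and the remote receiver
// capabilities exchanged over signaling.
function usableScalabilityModes(localSenderCaps, remoteReceiverCaps, mimeType) {
  const local = localSenderCaps.codecs.find(c => c.mimeType === mimeType);
  const remote = remoteReceiverCaps.codecs.find(c => c.mimeType === mimeType);
  if (!local || !remote || !local.scalabilityModes) {
    return [];
  }
  if (!remote.scalabilityModes) {
    // The remote decoder does not list modes; for codecs such as VP8, VP9 and
    // AV1 this means any legal bitstream can be decoded (see Section 5.2).
    return local.scalabilityModes;
  }
  return local.scalabilityModes.filter(m => remote.scalabilityModes.includes(m));
}

// Example usage (assuming remoteCaps was received over signaling):
//   usableScalabilityModes(RTCRtpSender.getCapabilities('video'),
//                          remoteCaps, 'video/VP9');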

5. Dictionary extensions

5.1 RTCRtpEncodingParameters Dictionary Extensions

partial dictionary RTCRtpEncodingParameters {
  DOMString scalabilityMode;
};

Dictionary RTCRtpEncodingParameters Members

scalabilityMode of type DOMString

An identifier of the scalability mode to be used for this stream. The scalabilityMode selected MUST be one of the scalability modes supported for the codec, as indicated in RTCRtpCodecCapability. Scalability modes are defined in Section 6.

5.2 RTCRtpCodecCapability Dictionary Extensions

partial dictionary RTCRtpCodecCapability {
  sequence<DOMString> scalabilityModes;
};

Dictionary RTCRtpCodecCapability Members

scalabilityModes of type sequence<DOMString>

A sequence of the scalability modes (defined in Section 6) supported by the encoder implementation.

In response to a call to RTCRtpSender.getCapabilities(kind), conformant implementations of this specification MUST return a sequence of scalability modes supported by each codec of that kind. If a codec does not support encoding of any scalability modes, then the scalabilityModes member is not provided.

In response to a call to RTCRtpReceiver.getCapabilities(kind), decoders that do not support decoding of scalability modes (e.g. an H.264/AVC decoder) or that are required to decode any scalability mode (such as compliant VP8, VP9 and AV1 decoders) omit the scalabilityModes member. However, decoders that only support decoding of a subset of scalability modes MUST return a sequence of the scalability modes supported by that codec.
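
The following non-normative sketch shows how an application might interpret the presence or absence of scalabilityModes in decoder capabilities; the codec chosen here is only an example.

// Non-normative sketch: a missing scalabilityModes member means the decoder
// either does not decode SVC at all (e.g. H.264/AVC) or can decode every
// legal bitstream (e.g. compliant VP8, VP9 and AV1 decoders); when present,
// it lists the subset of modes the decoder supports.
const caps = RTCRtpReceiver.getCapabilities('video');
const vp9 = caps.codecs.find(c => c.mimeType === 'video/VP9');
if (vp9) {
  if ('scalabilityModes' in vp9) {
    console.log(`VP9 decoder supports only: ${vp9.scalabilityModes.join(', ')}`);
  } else {
    console.log('VP9 decoder does not restrict scalability modes');
  }
}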

Note

The scalabilityModes sequence represents the scalability modes supported by a user agent. While the user agent's SVC capabilities are assumed to be static, this may not be the case for a Selective Forwarding Unit (SFU), whose ability to forward SVC modes may depend on the negotiated header extensions. For example, if the SFU cannot parse codec payloads (either because it is not designed to do so, or because the payloads are confidential), then negotiation of an RTP header extension (such as the AV1 Descriptor defined in Appendix A of [AV1-RTP] or frame marking [FRAME-MARKING]) may be required to enable the SFU to forward scalability modes. As a result, an application may choose to exchange scalabilityModes with an SFU after the initial offer/answer negotiation, so that the SFU's supported scalabilityModes can be determined, allowing the application to compute the intersection of supported scalabilityModes.

6. Scalability modes

The scalability modes supported in this specification, as well as their associated identifiers and characteristics, are provided in the table below. The syntax for naming scalability modes is taken from [AV1] Section 6.7.5, which also provides additional information on the modes, including dependency diagrams.

While [AV1] and VP9 [VP9-PAYLOAD] implementations can support all the modes defined in the table, other codecs cannot. For example, VP8 [RFC7741] only supports temporal scalability (e.g. "L1T2", "L1T3"). H.264/SVC [RFC6190], which supports both temporal and spatial scalability, only permits transport of simulcast on distinct SSRCs, so that it does not support the "S" modes (where multiple encodings are transported on a single SSRC).
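
As a non-normative illustration of these differences, an application can check whether a particular encoder advertises a given mode before requesting it; the helper name is illustrative.

// Non-normative sketch: check whether the user agent's encoder for a codec
// advertises a given scalability mode.
function encoderSupportsMode(mimeType, mode) {
  const codec = RTCRtpSender.getCapabilities('video').codecs
      .find(c => c.mimeType === mimeType);
  return !!codec && (codec.scalabilityModes || []).includes(mode);
}

// encoderSupportsMode('video/VP8', 'L1T3')  // temporal scalability only
// encoderSupportsMode('video/VP8', 'L3T3')  // false: no spatial scalability in VP8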

Scalability Mode Identifier   Spatial Layers   Resolution Ratio   Temporal Layers   Inter-layer dependency
"L1T2"                        1                                   2
"L1T3"                        1                                   3
"L2T1"                        2                2:1                1                 Yes
"L2T2"                        2                2:1                2                 Yes
"L2T3"                        2                2:1                3                 Yes
"L3T1"                        3                2:1                1                 Yes
"L3T2"                        3                2:1                2                 Yes
"L3T3"                        3                2:1                3                 Yes
"L2T1h"                       2                1.5:1              1                 Yes
"L2T2h"                       2                1.5:1              2                 Yes
"L2T3h"                       2                1.5:1              3                 Yes
"S2T1"                        2                2:1                1                 No
"S2T2"                        2                2:1                2                 No
"S2T3"                        2                2:1                3                 No
"S2T1h"                       2                1.5:1              1                 No
"S2T2h"                       2                1.5:1              2                 No
"S2T3h"                       2                1.5:1              3                 No
"S3T1"                        3                2:1                1                 No
"S3T2"                        3                2:1                2                 No
"S3T3"                        3                2:1                3                 No
"S3T1h"                       3                1.5:1              1                 No
"S3T2h"                       3                1.5:1              2                 No
"S3T3h"                       3                1.5:1              3                 No
"L3T2_KEY"                    3                2:1                2                 Yes
"L3T3_KEY"                    3                2:1                3                 Yes
"L4T5_KEY"                    4                2:1                5                 Yes
"L4T7_KEY"                    4                2:1                7                 Yes
"L3T2_KEY_SHIFT"              3                2:1                2                 Yes
"L3T3_KEY_SHIFT"              3                2:1                3                 Yes
"L4T5_KEY_SHIFT"              4                2:1                5                 Yes
"L4T7_KEY_SHIFT"              4                2:1                7                 Yes

6.1 Guidelines for addition of scalabilityMode values

When proposing a scalabilityMode value, the following principles should be followed:

  1. The proposed scalabilityMode MUST define an entry in the table in Section 6, including values for the Scalability Mode Identifier, Spatial Layers, Resolution Ratio, Temporal Layers and Inter-layer dependency.
  2. The Scalability Mode Identifier SHOULD be consistent with naming schemes used in existing specifications such as [AV1] Section 6.7.5. The AV1 naming scheme uses "LxTy" to denote a scalabilityMode with x spatial layers using a 2:1 resolution ratio and y temporal layers, and "LxTyh" to denote x spatial layers with a 1.5:1 resolution ratio and y temporal layers. "SxTy" denotes a scalabilityMode with x simulcast encodings with a 2:1 resolution ratio, each simulcast encoding containing y temporal layers; "SxTyh" denotes the same with a 1.5:1 resolution ratio. In addition, [AV1] Section 6.7.5 defines "LxTy_KEY" and "LxTy_KEY_SHIFT" modes. A non-normative sketch of parsing such identifiers follows this list.
  3. If the new scalabilityMode does not reference a dependency diagram defined within an existing specification, then a dependency diagram MUST be supplied.
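
The following non-normative sketch derives the characteristics encoded in identifiers that follow this naming scheme; it is provided for illustration only and is not part of the registration requirements.

// Non-normative sketch: parse an "LxTy"/"SxTy" style identifier into the
// characteristics tabulated in Section 6.
function parseScalabilityMode(mode) {
  const m = /^([LS])(\d+)T(\d+)(h?)(_KEY(_SHIFT)?)?$/.exec(mode);
  if (!m) {
    return null;
  }
  const spatialLayers = Number(m[2]);
  return {
    spatialLayers,
    temporalLayers: Number(m[3]),
    resolutionRatio: spatialLayers > 1 ? (m[4] === 'h' ? '1.5:1' : '2:1') : undefined,
    interLayerDependency: m[1] === 'L' && spatialLayers > 1,
    keyVariant: m[5] !== undefined  // "_KEY" or "_KEY_SHIFT" suffix present
  };
}

// parseScalabilityMode('L3T3_KEY')
//   -> {spatialLayers: 3, temporalLayers: 3, resolutionRatio: '2:1',
//       interLayerDependency: true, keyVariant: true}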

7. Examples

7.1 Spatial Simulcast and Temporal Scalability

This section is non-normative.

const signaling = new SignalingChannel(); // handles JSON.stringify/parse
const constraints = {audio: true, video: true};
const configuration = {'iceServers': [{'urls': 'stun:stun.example.org'}]};
let pc;

// call start() to initiate
async function start() {
  pc = new RTCPeerConnection(configuration);

  // send any ice candidates to the other peer
  pc.onicecandidate = ({candidate}) => signaling.send({candidate});

  // let the "negotiationneeded" event trigger offer generation
  pc.onnegotiationneeded = async () => {
    try {
      await pc.setLocalDescription();
      // send the offer to the other peer
      signaling.send({description: pc.localDescription});
    } catch (err) {
      console.error(err);
    }
  };

  try {
    // get a local stream, show it in a self-view and add it to be sent
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    selfView.srcObject = stream;
    pc.addTransceiver(stream.getAudioTracks()[0], {direction: 'sendonly'});
    pc.addTransceiver(stream.getVideoTracks()[0], {
      direction: 'sendonly',
      // Example of 3 spatial simulcast layers + 3 temporal layers with
      // an SSRC and RID for each simulcast layer
      sendEncodings: [
        {rid: 'q', scaleResolutionDownBy: 4.0, scalabilityMode: 'L1T3'},
        {rid: 'h', scaleResolutionDownBy: 2.0, scalabilityMode: 'L1T3'},
        {rid: 'f', scalabilityMode: 'L1T3'},
      ]
      // Alternative: 3 spatial simulcast layers + 3 temporal layers on a
      // single SSRC:
      //   sendEncodings: [
      //     {scalabilityMode: 'S3T3'}
      //   ]
    });
  } catch (err) {
    console.error(err);
  }
}

signaling.onmessage = async ({data: {description, candidate}}) => {
  try {
    if (description) {
      await pc.setRemoteDescription(description);
      // if we got an offer, we need to reply with an answer
      if (description.type == 'offer') {
        await pc.setLocalDescription();
        signaling.send({description: pc.localDescription});
      }
    } else if (candidate) {
      await pc.addIceCandidate(candidate);
    }
  } catch (err) {
    console.error(err);
  }
};

7.2 Spatial and Temporal Scalability

This section is non-normative.

// Example of 1 spatial layer + 2 temporal layers
var sendEncodings = [
  {scalabilityMode: 'L1T2'}
];

// Example of 2 spatial layers (with 2:1 ratio) + 3 temporal layers
var sendEncodings = [
  {scalabilityMode: 'L2T3'}
];
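
These encodings would typically be passed to addTransceiver when the video track is added (or applied later via setParameters); a minimal non-normative usage sketch, assuming pc and videoTrack already exist:

// Minimal usage sketch; `pc` is an RTCPeerConnection and `videoTrack` a video
// MediaStreamTrack obtained elsewhere.
const transceiver = pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [{scalabilityMode: 'L2T3'}]
});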

7.3 SVC Encoder Capabilities

This section is non-normative.

// Capabilities returned by RTCRtpSender.getCapabilities('video').codecs[]
  "codecs": [
    {
      "clockRate": 90000,
      "mimeType": "video/VP8",
      "scalabilityModes": [
        "L1T2",
        "L1T3"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=96"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/VP9",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L2T1",
        "L2T2",
        "L2T3",
        "L3T1",
        "L3T2",
        "L3T3",
        "L1T2h",
        "L1T3h",
        "L2T1h",
        "L2T2h",
        "L2T3h"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=98"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=1;profile-level-id=42001f;level-asymmetry-allowed=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=100"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=0;profile-level-id=42001f;level-asymmetry-allowed=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=102"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=42e01f;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=104"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=42e01f;packetization-mode=0"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=106"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=4d0032;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=108"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=640032;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=110"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/red"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=112"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/ulpfec"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/AV1",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L2T1",
        "L2T2",
        "L2T3",
        "L3T1",
        "L3T2",
        "L3T3",
        "L1T2h",
        "L1T3h",
        "L2T1h",
        "L2T2h",
        "L2T3h",
        "S2T1",
        "S2T2",
        "S2T3",
        "S3T1",
        "S3T2",
        "S3T3",
        "S2T1h",
        "S2T2h",
        "S2T3h",
        "S3T1h",
        "S3T2h",
        "S3T3h"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=113"
    }
]

7.4 SFU Capabilities

This section is non-normative.

// RTCRtpReceiver.getCapabilities('video').codecs[] returned by 
// SFU that can only forward VP8 and VP9 temporal scalability modes
 "codecs": [
    {
      "clockRate": 90000,
      "mimeType": "video/VP8",
      "scalabilityModes": [
        "L1T2",
        "L1T3"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/VP9",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L1T2h",
        "L1T3h"
      ]
    }
]

7.5 SVC Decoder Capabilities

This section is non-normative.

// RTCRtpReceiver.getCapabilities('video').codecs[] returned by a browser that can
// support all scalability modes within the VP8 and VP9 codecs.
  "codecs": [
    { 
      "clockRate": 90000,
      "mimeType": "video/VP8"
    },
    { 
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=96"
    },
    { 
      "clockRate": 90000,
      "mimeType": "video/VP9"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=98"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=1;profile-level-id=42001f;level-asymmetry-allowed=1"
    },

    ...
]

8. Privacy and Security Considerations

This section is non-normative; it specifies no new behaviour, but instead summarizes information already present in other parts of the specification. The overall security considerations of the APIs and protocols used in WebRTC are described in [RTCWEB-SECURITY-ARCH].

8.1 Impact on same origin policy

This API enables data to be communicated between browsers and other devices, including other browsers.

This means that data can be shared between applications running in different browsers, or between an application running in the same browser and something that is not a browser. This is an extension to the Web model which has had barriers against sending data between entities with different origins.

This specification provides no user prompts or chrome indicators for communication; it assumes that once the Web page has been allowed to access data, it is free to share that data with other entities as it chooses.

8.2 Impact on local network

Since the browser is an active platform executing in a trusted network environment (inside the firewall), it is important to limit the damage that the browser can do to other elements on the local network, and it is important to protect data from interception, manipulation and modification by untrusted participants.

Mitigations against these threats are specified in the relevant IETF documents.

8.3 Confidentiality of Communications

The fact that communication is taking place cannot be hidden from adversaries that can observe the network, so this has to be regarded as public information.

8.4 Persistent information

The WebRTC API exposes information about the underlying media system via the RTCRtpSender.getCapabilities and RTCRtpReceiver.getCapabilities methods, including detailed and ordered information about the codecs that the system is able to produce and consume. The WebRTC-SVC extension adds the supported SVC modes to that information, which is in most cases persistent across time and origins, thereby increasing the fingerprint surface. However, the supported scalabilityModes values do not reveal which codecs are implemented in hardware and which are available only in software.

9. Change Log

This section will be removed before publication.

A. Acknowledgements

The editors wish to thank the Working Group chairs and Team Contact, Harald Alvestrand, Stefan Håkansson and Dominique Hazaël-Massieux, for their support.

B. References

B.1 Normative references

[AV1]
AV1 Bitstream & Decoding Process Specification. Peter de Rivaz; Jack Haughton. Alliance for Open Media. January 8, 2019. URL: https://aomediacodec.github.io/av1-spec/av1-spec.pdf
[AV1-RTP]
RTP Payload Format for AV1. AV1 RTC SG. Alliance for Open Media. January 9, 2020. URL: https://aomediacodec.github.io/av1-rtp-spec/
[ECMASCRIPT-6.0]
ECMA-262 6th Edition, The ECMAScript 2015 Language Specification. Allen Wirfs-Brock. Ecma International. June 2015. Standard. URL: http://www.ecma-international.org/ecma-262/6.0/index.html
[GETUSERMEDIA]
Media Capture and Streams. Daniel Burnett; Adam Bergkvist; Cullen Jennings; Anant Narayanan; Bernard Aboba; Jan-Ivar Bruaroey; Henrik Boström. W3C. 2 July 2019. W3C Candidate Recommendation. URL: https://www.w3.org/TR/mediacapture-streams/
[HTML51]
HTML 5.1 2nd Edition. Steve Faulkner; Arron Eicholz; Travis Leithead; Alex Danilo. W3C. 3 October 2017. W3C Recommendation. URL: https://www.w3.org/TR/html51/
[RFC2119]
Key words for use in RFCs to Indicate Requirement Levels. S. Bradner. IETF. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[RFC6190]
RTP Payload Format for Scalable Video Coding. S. Wenger; Y.-K. Wang; T. Schierl; A. Eleftheriadis. IETF. May 2011. RFC. URL: https://tools.ietf.org/html/rfc6190
[RFC6386]
VP8 Data Format and Decoding Guide. J. Bankoski; J. Koleszar; L. Quillio; J. Salonen; Y. Xu. IETF. November 2011. RFC. URL: https://tools.ietf.org/html/rfc6386
[RFC7656]
A Taxonomy of Semantics and Mechanisms for Real-Time Transport Protocol (RTP) Sources. J. Lennox; K. Gross; S. Nandakumar; G. Salgueiro; B. Burman, Ed.. IETF. November 2015. Informational. URL: https://tools.ietf.org/html/rfc7656
[RFC7675]
Session Traversal Utilities for NAT (STUN) Usage for Consent Freshness. M. Perumal; D. Wing; R. Ravindranath; T. Reddy; M. Thomson. IETF. October 2015. Proposed Standard. URL: https://tools.ietf.org/html/rfc7675
[RFC7741]
RTP Payload Format for VP8 Video. P. Westin; H. Lundin; M. Glover; J. Uberti; F. Galligan. IETF. March 2016. RFC. URL: https://tools.ietf.org/html/rfc7741
[RFC8174]
Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. B. Leiba. IETF. May 2017. Best Current Practice. URL: https://tools.ietf.org/html/rfc8174
[RTCWEB-SECURITY-ARCH]
WebRTC Security Architecture. E. Rescorla. IETF. 21 July 2019. Internet Draft (work in progress). URL: https://tools.ietf.org/html/draft-ietf-rtcweb-security-arch
[VP9]
VP9 Bitstream & Decoding Process Specification. A. Grange; P. de Rivaz; J. Hunt. Google. February 2016. Version 0.6. URL: https://storage.googleapis.com/downloads.webmproject.org/docs/vp9/vp9-bitstream-specification-v0.6-20160331-draft.pdf
[VP9-PAYLOAD]
RTP Payload Format for VP9 Video. J. Uberti; S. Holmer; M. Flodman; J. Lennox; D. Hong. IETF. 24 July 2019. Internet Draft (work in progress). URL: https://tools.ietf.org/html/draft-ietf-payload-vp9
[WebIDL]
Web IDL. Boris Zbarsky. W3C. 15 December 2016. W3C Editor's Draft. URL: https://heycam.github.io/webidl/
[WEBIDL-1]
WebIDL Level 1. Cameron McCormack. W3C. 15 December 2016. W3C Recommendation. URL: https://www.w3.org/TR/2016/REC-WebIDL-1-20161215/
[WEBRTC]
WebRTC 1.0: Real-time Communication Between Browsers. Cullen Jennings; Henrik Boström; Jan-Ivar Bruaroey; Adam Bergkvist; Daniel Burnett; Anant Narayanan; Bernard Aboba; Taylor Brandstetter. W3C. 13 December 2019. W3C Candidate Recommendation. URL: https://www.w3.org/TR/webrtc/

B.2 Informative references

[FRAME-MARKING]
Frame Marking RTP Header Extension. M. Zanaty; E. Berger; S. Nandakumar. IETF. 21 November 2019. URL: https://tools.ietf.org/html/draft-ietf-avtext-framemarking