MediaStream Recording

W3C Working Draft, 31 October 2024

This version:
https://www.w3.org/TR/2024/WD-mediastream-recording-20241031/
Latest published version:
https://www.w3.org/TR/mediastream-recording/
Editor's Draft:
https://w3c.github.io/mediacapture-record/
History:
https://www.w3.org/standards/history/mediastream-recording/
Feedback:
public-webrtc@w3.org with subject line “[mediastream-recording] … message topic …” (archives)
GitHub
Editor:
(Google Inc.)
Former Editors:
Jim Barnett (Genesis)
(Microsoft Corp.)
Participate:
Mailing list
GitHub repo (new issue, open issues)
Implementation:
Can I use Media Recording?
Chromium Encode Acceleration Support

Abstract

This document defines a recording API for use with MediaStreams.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Web Real-Time Communications Working Group as a Working Draft using the Recommendation track. This document is intended to become a W3C Recommendation.

If you wish to make comments regarding this document, please send them to public-webrtc@w3.org (subscribe, archives). When sending e-mail, please put the text “mediastream-recording” in the subject, preferably like this: “[mediastream-recording] …summary of comment…”. All comments are welcome.

Publication as a Working Draft does not imply endorsement by W3C and its Members. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 03 November 2023 W3C Process Document.

1. Overview

This API attempts to make basic recording very simple, while still allowing for more complex use cases. In the simplest case, the application instantiates a MediaRecorder object, calls start() and then calls stop() or waits for the MediaStreamTrack(s) [GETUSERMEDIA] to be ended. The contents of the recording will be made available in the platform’s default encoding via the ondataavailable event. Functions are available to query the platform’s available set of encodings, and to select the desired ones if the author wishes. The application can also choose how much data it wants to receive at one time. By default a Blob containing the entire recording is returned when the recording finishes. However the application can choose to receive smaller buffers of data at regular intervals.
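The simplest case described above can be sketched as follows (non-normative; `recordStream` and `mergeChunks` are illustrative helper names, not part of this API):

```javascript
// Record a MediaStream and resolve with a single Blob once recording
// stops. With no timeslice argument, the UA delivers the entire
// recording in one dataavailable event when recording finishes.
function recordStream(stream, mimeType) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const recorder = new MediaRecorder(stream, { mimeType });
    recorder.ondataavailable = (event) => chunks.push(event.data);
    recorder.onerror = (event) => reject(event.error);
    recorder.onstop = () => resolve(mergeChunks(chunks, recorder.mimeType));
    recorder.start();
  });
}

// Concatenate recorded chunks into one Blob tagged with the
// recording's MIME type.
function mergeChunks(chunks, type) {
  return new Blob(chunks, { type });
}
```

In a page, `recordStream(stream, 'video/webm')` would be called with a stream obtained from e.g. getUserMedia(), and the recording stopped later via `recorder.stop()` or by ending all tracks.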

2. Media Recorder API

[Exposed=Window]
interface MediaRecorder : EventTarget {
  constructor(MediaStream stream, optional MediaRecorderOptions options = {});
  readonly attribute MediaStream stream;
  readonly attribute DOMString mimeType;
  readonly attribute RecordingState state;
  attribute EventHandler onstart;
  attribute EventHandler onstop;
  attribute EventHandler ondataavailable;
  attribute EventHandler onpause;
  attribute EventHandler onresume;
  attribute EventHandler onerror;
  readonly attribute unsigned long videoBitsPerSecond;
  readonly attribute unsigned long audioBitsPerSecond;
  readonly attribute BitrateMode audioBitrateMode;

  undefined start(optional unsigned long timeslice);
  undefined stop();
  undefined pause();
  undefined resume();
  undefined requestData();

  static boolean isTypeSupported(DOMString type);
};

2.1. Constructors

MediaRecorder(MediaStream stream, optional MediaRecorderOptions options = {})
When the MediaRecorder() constructor is invoked, the User Agent MUST run the following steps:
  1. Let stream be the constructor’s first argument.
  2. Let options be the constructor’s second argument.
  3. If invoking is type supported with options' mimeType member as its argument returns false, throw a NotSupportedError DOMException and abort these steps.
  4. Let recorder be a newly constructed MediaRecorder object.
  5. Let recorder have a [[ConstrainedMimeType]] internal slot, initialized to the value of options' mimeType member.
  6. Let recorder have a [[ConstrainedBitsPerSecond]] internal slot, initialized to the value of options' bitsPerSecond member if it is present, otherwise null.
  7. Let recorder have a [[VideoKeyFrameIntervalDuration]] internal slot, initialized to the value of options' videoKeyFrameIntervalDuration member if it is present, otherwise null.
  8. Let recorder have a [[VideoKeyFrameIntervalCount]] internal slot, initialized to the value of options' videoKeyFrameIntervalCount member if it is present, otherwise null.
  9. Initialize recorder’s stream attribute to stream.
  10. Initialize recorder’s mimeType attribute to the value of recorder’s [[ConstrainedMimeType]] slot.
  11. Initialize recorder’s state attribute to inactive.
  12. Initialize recorder’s videoBitsPerSecond attribute to the value of options' videoBitsPerSecond member, if it is present. Otherwise, choose a target value the User Agent deems reasonable for video.
  13. Initialize recorder’s audioBitsPerSecond attribute to the value of options' audioBitsPerSecond member, if it is present. Otherwise, choose a target value the User Agent deems reasonable for audio.
  14. If recorder’s [[ConstrainedBitsPerSecond]] slot is not null, set recorder’s videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder’s [[ConstrainedBitsPerSecond]] slot.
  15. If recorder supports the BitrateMode specified by the value of options' audioBitrateMode member, then initialize recorder’s audioBitrateMode attribute to the value of options' audioBitrateMode member, else initialize recorder’s audioBitrateMode attribute to the value "variable".
  16. Return recorder.
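Since the constructor throws a NotSupportedError DOMException for an unsupported mimeType (step 3 above), applications commonly probe candidates first. A non-normative sketch (`firstSupportedType` and `makeRecorder` are illustrative names):

```javascript
// Pick the first container/codec combination the platform can record.
// The predicate defaults to the real static method; it is injectable
// here only so the helper can be exercised without a browser.
function firstSupportedType(candidates,
                            isSupported = t => MediaRecorder.isTypeSupported(t)) {
  // An empty string defers both container and codec choice to the UA.
  return candidates.find(isSupported) ?? '';
}

function makeRecorder(stream) {
  const mimeType = firstSupportedType([
    'video/webm;codecs=vp9,opus',
    'video/webm;codecs=vp8,opus',
    'video/mp4',
  ]);
  // Throws NotSupportedError if mimeType is non-empty and unsupported.
  return new MediaRecorder(stream, { mimeType, audioBitsPerSecond: 128000 });
}
```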

2.2. Attributes

stream, of type MediaStream, readonly
The MediaStream [GETUSERMEDIA] to be recorded.
mimeType, of type DOMString, readonly
The MIME type [RFC2046] used by the MediaRecorder object. The User Agent SHOULD be able to play back any of the MIME types it supports for recording. For example, it should be able to display a video recording in the HTML <video> tag.
mimeType specifies the media type and container format for the recording via a type/subtype combination, with the codecs and/or profiles parameters [RFC6381] specified where ambiguity might arise. Individual codecs might have further optional specific parameters.
state, of type RecordingState, readonly
The current state of the MediaRecorder object.
onstart, of type EventHandler
Called to handle the start event.
onstop, of type EventHandler
Called to handle the stop event.
ondataavailable, of type EventHandler
Called to handle the dataavailable event. The Blob of recorded data is contained in this event and can be accessed via its data attribute.
onpause, of type EventHandler
Called to handle the pause event.
onresume, of type EventHandler
Called to handle the resume event.
onerror, of type EventHandler
Called to handle an ErrorEvent.
videoBitsPerSecond, of type unsigned long, readonly
The target bitrate used to encode video tracks.
audioBitsPerSecond, of type unsigned long, readonly
The target bitrate used to encode audio tracks.
audioBitrateMode, of type BitrateMode, readonly
The BitrateMode used to encode audio tracks.

2.3. Methods

For historical reasons, the following methods alter state synchronously and fire events asynchronously.
start(optional unsigned long timeslice)
When a MediaRecorder object’s start() method is invoked, the UA MUST run the following steps:
  1. Let recorder be the MediaRecorder object on which the method was invoked.
  2. Let timeslice be the method’s first argument, if provided, or undefined.
  3. Let stream be the value of recorder’s stream attribute.
  4. Let tracks be the set of live tracks in stream’s track set.
  5. If the value of recorder’s state attribute is not inactive, throw an InvalidStateError DOMException and abort these steps.
  6. If the isolation properties of stream disallow access from recorder, throw a SecurityError DOMException and abort these steps.
  7. If stream is inactive, throw a NotSupportedError DOMException and abort these steps.
  8. If the [[ConstrainedMimeType]] slot specifies a media type, container, or codec, then constrain the configuration of recorder to the media type, container, and codec specified in the [[ConstrainedMimeType]] slot.
  9. If recorder’s [[ConstrainedBitsPerSecond]] slot is not null, set recorder’s videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, for recording all tracks in tracks, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder’s [[ConstrainedBitsPerSecond]] slot.
  10. Let videoBitrate be the value of recorder’s videoBitsPerSecond attribute, and constrain the configuration of recorder to target an aggregate bitrate of videoBitrate bits per second for all video tracks recorder will be recording. videoBitrate is a hint for the encoder and the value might be surpassed, not achieved, or only be achieved over a long period of time.
  11. Let audioBitrate be the value of recorder’s audioBitsPerSecond attribute, and constrain the configuration of recorder to target an aggregate bitrate of audioBitrate bits per second for all audio tracks recorder will be recording. audioBitrate is a hint for the encoder and the value might be surpassed, not achieved, or only be achieved over a long period of time.
  12. Let videoKeyFrameIntervalDuration be recorder.[[VideoKeyFrameIntervalDuration]], and let videoKeyFrameIntervalCount be recorder.[[VideoKeyFrameIntervalCount]]. The UA SHOULD constrain the configuration of recorder so that the video encoder follows the below rules:
    • If videoKeyFrameIntervalDuration is not null and videoKeyFrameIntervalCount is null, the video encoder produces a keyframe on the first frame arriving after videoKeyFrameIntervalDuration milliseconds elapsed since the last key frame.
    • If videoKeyFrameIntervalCount is not null and videoKeyFrameIntervalDuration is null, the video encoder produces a keyframe on the first frame arriving after videoKeyFrameIntervalCount frames passed since the last key frame.
    • If both videoKeyFrameIntervalDuration and videoKeyFrameIntervalCount are not null, then throw a NotSupportedError DOMException and abort these steps.
    • If both videoKeyFrameIntervalDuration and videoKeyFrameIntervalCount are null, the User Agent may emit key frames as it deems fit.

    Note that encoders will sometimes make independent decisions about when to emit key frames.

  13. Constrain the configuration of recorder to encode using the BitrateMode specified by the value of recorder’s audioBitrateMode attribute for all audio tracks recorder will be recording.
  14. For each track in tracks, if the User Agent cannot record the track using the current configuration, then throw a NotSupportedError DOMException and abort these steps.
  15. Set recorder’s state to recording, and run the following steps in parallel:
    1. If the container and codecs to use for the recording have not yet been fully specified, the User Agent specifies them in recorder’s current configuration. The User Agent MAY take the sources of the tracks in tracks into account when deciding which container and codecs to use.
      By looking at the sources of the tracks in tracks when recording starts, the User Agent could choose a configuration that avoids re-encoding track content. For example, if the MediaStreamTrack is a remote track sourced from an RTCPeerConnection, the User Agent could pick the same codec as is used for the MediaStreamTrack’s RTP stream, should it be known. However, if the codec of the RTP stream changes while recording, the User Agent has to be prepared to re-encode it to avoid interruptions.
    2. Start recording all tracks in tracks using the recorder’s current configuration and gather the data into a Blob blob. Queue a task, using the DOM manipulation task source, to run the following steps:
      1. Let extendedMimeType be the value of recorder’s [[ConstrainedMimeType]] slot.
      2. Modify extendedMimeType by adding media type, subtype and codecs parameter reflecting the configuration used by the MediaRecorder to record all tracks in tracks, if not already present. This MAY include the profiles parameter [RFC6381] or further codec-specific parameters.
      3. Set recorder’s mimeType attribute to extendedMimeType.
      4. Fire an event named start at recorder.
    3. If at any point stream’s isolation properties change so that MediaRecorder is no longer allowed access to it, the UA MUST stop gathering data, discard any data that it has gathered, and queue a task, using the DOM manipulation task source, that runs the following steps:
      1. Inactivate the recorder with recorder.
      2. Fire an error event named SecurityError at recorder.
      3. Fire a blob event named dataavailable at recorder with blob.
      4. Fire an event named stop at recorder.
    4. If at any point, a track is added to or removed from stream’s track set, the UA MUST stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
      1. Inactivate the recorder with recorder.
      2. Fire an error event named InvalidModificationError at recorder.
      3. Fire a blob event named dataavailable at recorder with blob.
      4. Fire an event named stop at recorder.
    5. If the UA at any point is unable to continue gathering data for reasons other than isolation properties or stream’s track set, it MUST stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
      1. Inactivate the recorder with recorder.
      2. Fire an error event named UnknownError at recorder.
      3. Fire a blob event named dataavailable at recorder with blob.
      4. Fire an event named stop at recorder.
    6. If timeslice is not undefined, then once a minimum of timeslice milliseconds of data have been collected, or some minimum time slice imposed by the UA, whichever is greater, start gathering data into a new Blob blob, and queue a task, using the DOM manipulation task source, that fires a blob event named dataavailable at recorder with blob.

      Note that an undefined value of timeslice will be understood as the largest unsigned long value.

    7. If all recorded tracks become ended, then stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
      1. Inactivate the recorder with recorder.
      2. Fire a blob event named dataavailable at recorder with blob.
      3. Fire an event named stop at recorder.

Note that stop(), requestData(), and pause() also affect the recording behavior.

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

If any Track within the MediaStream is muted or not enabled at any time, the UA will only record black frames or silence since that is the content produced by the Track.
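The chunked delivery that start(timeslice) produces can be sketched as follows (non-normative; `collectChunks` is an illustrative helper). Individual chunks need not be playable on their own; only the concatenation of all chunks from a completed recording is guaranteed playable:

```javascript
// Gather periodic dataavailable payloads into an application-owned
// array. Each event's data attribute carries one Blob chunk.
function collectChunks(recorder, chunks) {
  recorder.addEventListener('dataavailable', (event) => {
    chunks.push(event.data);
  });
}

// Usage (browser):
//   const chunks = [];
//   collectChunks(recorder, chunks);
//   recorder.start(1000);  // request a chunk roughly every second
//   ...after stop: const playable = new Blob(chunks, { type: recorder.mimeType });
```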

The Inactivate the recorder algorithm, given a recorder, is as follows:

  1. Set recorder’s mimeType attribute to the value of the [[ConstrainedMimeType]] slot.
  2. Set recorder’s state attribute to inactive.
  3. If recorder’s [[ConstrainedBitsPerSecond]] slot is not null, set recorder’s videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder’s [[ConstrainedBitsPerSecond]] slot.
stop()
When a MediaRecorder object’s stop() method is invoked, the UA MUST run the following steps:
  1. Let recorder be the MediaRecorder object on which the method was invoked.
  2. If recorder’s state attribute is inactive, abort these steps.
  3. Inactivate the recorder with recorder.
  4. Queue a task, using the DOM manipulation task source, that runs the following steps:
    1. Stop gathering data.
    2. Let blob be the Blob of collected data so far, then fire a blob event named dataavailable at recorder with blob.
    3. Fire an event named stop at recorder.
  5. return undefined.
pause()
When a MediaRecorder object’s pause() method is invoked, the UA MUST run the following steps:
  1. If state is inactive, throw an InvalidStateError DOMException and abort these steps.
  2. If state is paused, abort these steps.
  3. Set state to paused, and queue a task, using the DOM manipulation task source, that runs the following steps:
    1. Stop gathering data into blob (but keep it available so that recording can be resumed in the future).
    2. Let target be the MediaRecorder context object. Fire an event named pause at target.
  4. return undefined.
resume()
When a MediaRecorder object’s resume() method is invoked, the UA MUST run the following steps:
  1. If state is inactive, throw an InvalidStateError DOMException and abort these steps.
  2. If state is recording, abort these steps.
  3. Set state to recording, and queue a task, using the DOM manipulation task source, that runs the following steps:
    1. Resume (or continue) gathering data into the current blob.
    2. Let target be the MediaRecorder context object. Fire an event named resume at target.
  4. return undefined.
requestData()
When a MediaRecorder object’s requestData() method is invoked, the UA MUST run the following steps:
  1. If state is inactive throw an InvalidStateError DOMException and terminate these steps. Otherwise the UA MUST queue a task, using the DOM manipulation task source, that runs the following steps:
    1. Let blob be the Blob of collected data so far and let target be the MediaRecorder context object, then fire a blob event named dataavailable at target with blob. (Note that blob will be empty if no data has been gathered yet.)
    2. Create a new Blob and gather subsequent data into it.
  2. return undefined.
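The state checks the methods above perform can be restated as a small transition table (non-normative; `nextState` is an illustrative model, not an API):

```javascript
// Model of the RecordingState transitions and InvalidStateError checks
// performed by start(), stop(), pause(), resume() and requestData().
function nextState(state, method) {
  switch (method) {
    case 'start':
      if (state !== 'inactive') throw new DOMException('', 'InvalidStateError');
      return 'recording';
    case 'stop':
      return 'inactive';            // aborts silently if already inactive
    case 'pause':
      if (state === 'inactive') throw new DOMException('', 'InvalidStateError');
      return 'paused';              // aborts silently if already paused
    case 'resume':
      if (state === 'inactive') throw new DOMException('', 'InvalidStateError');
      return 'recording';           // aborts silently if already recording
    case 'requestData':
      if (state === 'inactive') throw new DOMException('', 'InvalidStateError');
      return state;                 // state is unchanged
  }
}
```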
isTypeSupported(DOMString type)
Check to see whether a MediaRecorder can record in a specified MIME type. If true is returned from this method, it only indicates that the MediaRecorder implementation is capable of recording Blob objects for the specified MIME type. Recording may still fail if sufficient resources are not available to support the concrete media encoding. When this method is invoked, the User Agent MUST return the result of is type supported, given the method’s first argument.
The is type supported algorithm consists of the following steps.
  1. Let type be the algorithm’s argument.
  2. If type is an empty string, then return true (note that this case is essentially equivalent to leaving up to the UA the choice of both container and codecs).
  3. If type does not contain a valid MIME type string, then return false.
  4. If type contains a media type or media subtype that the MediaRecorder does not support, then return false.
  5. If type contains a media container that the MediaRecorder does not support, then return false.
  6. If type contains more than one audio codec, or more than one video codec, then return false.
  7. If type contains a codec that the MediaRecorder does not support, then return false.
  8. If the MediaRecorder does not support the specified combination of media type/subtype, codecs and container then return false.
  9. Return true.
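The steps above can be illustrated with a toy model over a hypothetical support table (non-normative; `toyIsTypeSupported` and the `SUPPORTED` table are inventions for illustration — real support is UA-specific, so always query MediaRecorder.isTypeSupported() at runtime):

```javascript
// Hypothetical UA support table for illustration only.
const SUPPORTED = {
  containers: ['video/webm', 'audio/webm'],
  videoCodecs: ['vp8', 'vp9'],
  audioCodecs: ['opus'],
};

function toyIsTypeSupported(type) {
  if (type === '') return true;          // step 2: UA picks container/codecs
  const match = /^(\w+\/[\w.+-]+)(?:;\s*codecs=([\w.,\s-]+))?$/.exec(type);
  if (!match) return false;              // step 3: not a valid MIME type string
  const [, container, codecString] = match;
  if (!SUPPORTED.containers.includes(container)) return false;  // steps 4-5
  const codecs = codecString ? codecString.split(',').map(c => c.trim()) : [];
  const video = codecs.filter(c => SUPPORTED.videoCodecs.includes(c));
  const audio = codecs.filter(c => SUPPORTED.audioCodecs.includes(c));
  if (video.length > 1 || audio.length > 1) return false;       // step 6
  // steps 7-8: every listed codec must be recognized and supported
  return codecs.length === video.length + audio.length;
}
```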

2.4. Data handling

To fire a blob event with a Blob blob means to fire an event at target using a BlobEvent with its data attribute initialized to blob.

Usually blob will be the data gathered by the UA after the last transition to recording state.

2.5. MediaRecorderOptions

dictionary MediaRecorderOptions {
  DOMString mimeType = "";
  unsigned long audioBitsPerSecond;
  unsigned long videoBitsPerSecond;
  unsigned long bitsPerSecond;
  BitrateMode audioBitrateMode = "variable";
  DOMHighResTimeStamp videoKeyFrameIntervalDuration;
  unsigned long videoKeyFrameIntervalCount;
};

2.5.1. Members

mimeType, of type DOMString, defaulting to ""
The container and codec format(s) [RFC2046] for the recording, which may include any parameters that are defined for the format.
mimeType specifies the media type and container format for the recording via a type/subtype combination, with the codecs and/or profiles parameters [RFC6381] specified where ambiguity might arise. Individual codecs might have further optional or mandatory specific parameters.
audioBitsPerSecond, of type unsigned long
Aggregate target bits per second for encoding of the Audio track(s), if any.
videoBitsPerSecond, of type unsigned long
Aggregate target bits per second for encoding of the Video track(s), if any.
bitsPerSecond, of type unsigned long
Aggregate target bits per second for encoding of all Video and Audio Track(s) present. This member overrides either audioBitsPerSecond or videoBitsPerSecond if present, and might be distributed among the present track encoders as the UA sees fit.
audioBitrateMode, of type BitrateMode, defaulting to "variable"
Specifies the BitrateMode that should be used to encode the Audio track(s).
videoKeyFrameIntervalDuration, of type DOMHighResTimeStamp
Specifies the nominal interval in time between key frames in the encoded video stream. The UA controls key frame generation considering this dictionary member as well as videoKeyFrameIntervalCount.
videoKeyFrameIntervalCount, of type unsigned long
Specifies the interval in number of frames between key frames in the encoded video stream. The UA controls key frame generation considering this dictionary member as well as videoKeyFrameIntervalDuration.
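Because bitsPerSecond overrides the per-kind members, the UA distributes it among track encoders as it sees fit. One plausible allocation can be sketched as follows (non-normative; `splitBitsPerSecond` and the 10%/128 kbps audio cap are illustrative assumptions, not specified behavior):

```javascript
// One plausible way a UA might honour bitsPerSecond: reserve a modest
// share for audio and give the remainder to video. Purely illustrative.
function splitBitsPerSecond(total, hasVideo, hasAudio) {
  if (hasVideo && hasAudio) {
    const audio = Math.min(128000, Math.floor(total * 0.1));
    return { audioBitsPerSecond: audio, videoBitsPerSecond: total - audio };
  }
  if (hasVideo) return { audioBitsPerSecond: 0, videoBitsPerSecond: total };
  return { audioBitsPerSecond: total, videoBitsPerSecond: 0 };
}

// Example options dictionary: bitsPerSecond takes precedence over
// audioBitsPerSecond / videoBitsPerSecond if present.
const options = {
  mimeType: 'video/webm;codecs=vp8,opus',
  bitsPerSecond: 2500000,
  audioBitrateMode: 'variable',
};
```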

2.6. BitrateMode

enum BitrateMode {
  "constant",
  "variable"
};

2.6.1. Values

constant
Encode at a constant bitrate.
variable
Encode using a variable bitrate, allowing more space to be used for complex signals and less space for less complex signals.

2.7. RecordingState

enum RecordingState {
  "inactive",
  "recording",
  "paused"
};

2.7.1. Values

inactive
Recording is not occurring: Either it has not been started or it has been stopped.
recording
Recording has been started and the UA is capturing data.
paused
Recording has been started, then paused, and not yet stopped or resumed.

3. Blob Event

[Exposed=Window]
interface BlobEvent : Event {
  constructor(DOMString type, BlobEventInit eventInitDict);
  [SameObject] readonly attribute Blob data;
  readonly attribute DOMHighResTimeStamp timecode;
};

3.1. Constructors

BlobEvent(DOMString type, BlobEventInit eventInitDict)

3.2. Attributes

data, of type Blob, readonly
The encoded Blob whose type attribute indicates the encoding of the blob data.
timecode, of type DOMHighResTimeStamp, readonly
For a MediaRecorder instance, the timecode in the first produced BlobEvent MUST contain 0. Each subsequent BlobEvent's timecode MUST contain the difference between the timestamp of the first chunk in that BlobEvent and the timestamp of the first chunk of the first produced BlobEvent, as a DOMHighResTimeStamp [HR-TIME].
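Since timecodes are offsets from the first chunk, mapping them to wall-clock times is a simple addition (non-normative; `chunkStartTimes` and `recordingStartMs` are illustrative names):

```javascript
// Convert relative BlobEvent timecodes (ms offsets from the first
// chunk of the first event) into absolute wall-clock times, given the
// wall-clock time at which the first chunk was produced.
function chunkStartTimes(recordingStartMs, timecodes) {
  return timecodes.map(tc => recordingStartMs + tc);
}
```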

3.3. BlobEventInit

dictionary BlobEventInit {
  required Blob data;
  DOMHighResTimeStamp timecode;
};

3.3.1. Members

data, of type Blob
A Blob object containing the data to deliver via BlobEvent.
timecode, of type DOMHighResTimeStamp
The timecode to be used in initializing BlobEvent.

4. Error handling

4.1. General principles

This section is non-normative.

The UA will throw a DOMException when the error can be detected at the time that the call is made. In all other cases the UA will fire an error event. If recording has been started and not yet stopped when the error occurs, let blob be the Blob of collected data so far; after raising the error, the UA will fire a dataavailable event with blob; immediately after the UA will then fire an event named stop. The UA may set platform-specific limits, such as those for the minimum and maximum Blob size that it will support, or the number of MediaStreamTracks it will record at once. It will signal a fatal error if these limits are exceeded.
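The fatal-error sequence above (error, then dataavailable with the partial recording, then stop) can be observed with ordinary handlers (non-normative; `watchForFailure` and the `result` object are illustrative names):

```javascript
// Capture the error, the partial recording, and the stop notification
// that the UA delivers, in that order, on a fatal recording error.
function watchForFailure(recorder, result) {
  recorder.addEventListener('error', (event) => {
    result.error = event.error;      // ErrorEvent carries the error in .error
  });
  recorder.addEventListener('dataavailable', (event) => {
    result.partial = event.data;     // data gathered before the failure
  });
  recorder.addEventListener('stop', () => {
    result.stopped = true;
  });
}
```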

4.2. Error events

To fire an error event means to fire an event using ErrorEvent as the eventConstructor.

4.3. Exception Summary

Each of the exceptions defined in this document is a DOMException with a specific type.

Name Description
InvalidStateError An operation was called on an object on which it is not allowed or at a time when it is not allowed, or if a request is made on a source object that has been deleted or removed.
NotSupportedError An operation could not be performed because the MIME type was not supported or the set of tracks could not be recorded by the MIME type. User agents should provide as much additional information as possible in the message attribute.
SecurityError The isolation properties of the MediaStream do not allow the MediaRecorder access to it.
InvalidModificationError The set of MediaStreamTracks of the recorded MediaStream has changed, preventing any further recording.

5. Event summary

The following additional events fire on MediaRecorder objects:

Event name Interface Fired when...
start Event The UA has started recording data from the MediaStream.
stop Event The UA has stopped recording data from the MediaStream.
dataavailable BlobEvent The UA generates this event to return data to the application. The data attribute of this event contains a Blob of recorded data.
pause Event The UA has paused recording data from the MediaStream.
resume Event The UA has resumed recording data from the MediaStream.
error ErrorEvent An error has occurred, e.g. out of memory, or a modification to the stream has made it impossible to continue recording (e.g. a Track has been added to or removed from the stream while recording is occurring).

6. Privacy and Security Considerations

This section is non-normative.

Given that the source of data for MediaRecorder is always going to be a MediaStream, a large part of the security is essentially offloaded onto the [GETUSERMEDIA] and its "Privacy and Security Consideration" Section. In particular, the source MediaStream is assumed to be coming from a secure context.

6.1. Resource exhaustion

Video and audio encoding can consume a great deal of resources. A malicious website could try to block or bring down the UA by configuring too large a workload, e.g. encoding large frame resolutions and/or framerates.

MediaRecorder can be configured to hold on to the encoded data for a certain period of time upon start() by means of the timeslice parameter. Too large a timeslice can force the UA to buffer a large amount of data, causing jank or even memory exhaustion.

UAs should take measures to prevent the encoding and buffering process from exhausting resources.

6.2. Fingerprinting

MediaRecorder provides information regarding the supported video and audio MIME types via the isTypeSupported() method. It will also select the most appropriate codec and bandwidth allocation combination when these are not defined in the MediaRecorderOptions, and make this information available via the type attribute of the data Blob received in the dataavailable event. It will also try to honour the MediaRecorderOptions if specified.

A malicious website could try to use this information for active fingerprinting in a number of ways.

UAs should take measures to mitigate this increase in fingerprinting surface, e.g. by implementing broad support for a given codec or MIME type rather than making support dependent on architecture, hardware revision, or OS version, to prevent inference of device/hardware characteristics through browser functionality. The UA should also choose default values that limit the amount and identifiability of exposed UA capabilities.

7. Examples

Slightly modified versions of these examples can be found in e.g. this codepen collection.

7.1. Check for MediaRecorder and content types

This example checks if the implementation supports a few popular codec/container combinations.

The following example can also be found in e.g. this codepen with minimal modifications.
if (window.MediaRecorder === undefined) {
  console.error('MediaRecorder not supported, boo');
} else {
  var contentTypes = ["video/webm",
                      "video/webm;codecs=vp8",
                      "video/x-matroska;codecs=avc1",
                      "audio/webm",
                      "video/mp4;codecs=avc1",
                      "video/invalid"];
  contentTypes.forEach(contentType => {
    console.log(contentType + ' is '
        + (MediaRecorder.isTypeSupported(contentType) ?
            'supported' : 'NOT supported'));
  });
}

7.2. Recording webcam video and audio

This example captures a video+audio MediaStream using getUserMedia(), plugs it into a <video> tag and tries to record it, retrieving the recorded chunks via the ondataavailable event. Note that the recording will go on until either the MediaRecorder is stop()ed or all the MediaStreamTracks of the recorded MediaStream are ended.

The following example can also be found in e.g. this codepen with minimal modifications.
<html>
<body>
<video autoplay></video>
<script>
  var recordedChunks = [];

  function gotMedia(stream) {
    // |video| shows a live view of the captured MediaStream.
    var video = document.querySelector('video');
    video.srcObject = stream;

    var recorder = null;
    try {
      recorder = new MediaRecorder(stream, {mimeType: "video/webm"});
    } catch (e) {
      console.error('Exception while creating MediaRecorder: ' + e);
      return;
    }

    recorder.ondataavailable = (event) => {
      console.log(' Recorded chunk of size ' + event.data.size + "B");
      recordedChunks.push(event.data);
    };

    recorder.start(100);
  }

  navigator.mediaDevices.getUserMedia({video: true, audio: true})
      .then(gotMedia)
      .catch(e => { console.error('getUserMedia() failed: ' + e); });
</script>
</body>
</html>
The recordedChunks can be saved to a file using e.g. the function download() in the MediaRecorder Web Fundamentals article.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[FileAPI]
Marijn Kruisselbrink. File API. 24 October 2024. WD. URL: https://www.w3.org/TR/FileAPI/
[FINGERPRINTING-GUIDANCE]
Nick Doty. Mitigating Browser Fingerprinting in Web Specifications. 28 March 2019. NOTE. URL: https://www.w3.org/TR/fingerprinting-guidance/
[GETUSERMEDIA]
Cullen Jennings; et al. Media Capture and Streams. 3 October 2024. CR. URL: https://www.w3.org/TR/mediacapture-streams/
[HR-TIME]
Yoav Weiss. High Resolution Time. 19 July 2023. WD. URL: https://www.w3.org/TR/hr-time-3/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[RFC2046]
N. Freed; N. Borenstein. Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types. November 1996. Draft Standard. URL: https://www.rfc-editor.org/rfc/rfc2046
[WEBDRIVER-BIDI]
WebDriver BiDi. Editor's Draft. URL: https://w3c.github.io/webdriver-bidi/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/

Informative References

[RFC6381]
R. Gellens; D. Singer; P. Frojdh. The 'Codecs' and 'Profiles' Parameters for "Bucket" Media Types. August 2011. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc6381

IDL Index

[Exposed=Window]
interface MediaRecorder : EventTarget {
  constructor(MediaStream stream, optional MediaRecorderOptions options = {});
  readonly attribute MediaStream stream;
  readonly attribute DOMString mimeType;
  readonly attribute RecordingState state;
  attribute EventHandler onstart;
  attribute EventHandler onstop;
  attribute EventHandler ondataavailable;
  attribute EventHandler onpause;
  attribute EventHandler onresume;
  attribute EventHandler onerror;
  readonly attribute unsigned long videoBitsPerSecond;
  readonly attribute unsigned long audioBitsPerSecond;
  readonly attribute BitrateMode audioBitrateMode;

  undefined start(optional unsigned long timeslice);
  undefined stop();
  undefined pause();
  undefined resume();
  undefined requestData();

  static boolean isTypeSupported(DOMString type);
};

dictionary MediaRecorderOptions {
  DOMString mimeType = "";
  unsigned long audioBitsPerSecond;
  unsigned long videoBitsPerSecond;
  unsigned long bitsPerSecond;
  BitrateMode audioBitrateMode = "variable";
  DOMHighResTimeStamp videoKeyFrameIntervalDuration;
  unsigned long videoKeyFrameIntervalCount;
};

enum BitrateMode {
  "constant",
  "variable"
};

enum RecordingState {
  "inactive",
  "recording",
  "paused"
};

[Exposed=Window]
interface BlobEvent : Event {
  constructor(DOMString type, BlobEventInit eventInitDict);
  [SameObject] readonly attribute Blob data;
  readonly attribute DOMHighResTimeStamp timecode;
};

dictionary BlobEventInit {
  required Blob data;
  DOMHighResTimeStamp timecode;
};