1. Overview
This API attempts to make basic recording very simple, while still allowing for
more complex use cases. In the simplest case, the application instantiates a MediaRecorder object, calls start() and then calls stop() or waits
for the MediaStreamTrack(s) [GETUSERMEDIA] to be ended. The contents of the recording will
be made available in the platform’s default encoding via the ondataavailable event. Functions are available to query the platform’s available set of
encodings, and to select the desired ones if the author wishes. The application
can also choose how much data it wants to receive at one time. By default a Blob containing the entire recording is returned when the recording
finishes. However, the application can choose to receive smaller buffers of data
at regular intervals.
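As a non-normative illustration of this simplest case, the following sketch assumes a MediaStream named stream obtained elsewhere (e.g. via getUserMedia()); all other names are placeholders:

let chunks = [];
const recorder = new MediaRecorder(stream);
// Each dataavailable event delivers a Blob of encoded data.
recorder.ondataavailable = (event) => chunks.push(event.data);
// With no timeslice argument, a single Blob arrives when recording finishes.
recorder.onstop = () => {
  const recording = new Blob(chunks, { type: recorder.mimeType });
  // ... use `recording`, e.g. as the source of a <video> element.
};
recorder.start();
// Later:
recorder.stop();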
2. Media Recorder API
[Exposed=Window]
interface MediaRecorder : EventTarget {
  constructor(MediaStream stream, optional MediaRecorderOptions options = {});
  readonly attribute MediaStream stream;
  readonly attribute DOMString mimeType;
  readonly attribute RecordingState state;
  attribute EventHandler onstart;
  attribute EventHandler onstop;
  attribute EventHandler ondataavailable;
  attribute EventHandler onpause;
  attribute EventHandler onresume;
  attribute EventHandler onerror;
  readonly attribute unsigned long videoBitsPerSecond;
  readonly attribute unsigned long audioBitsPerSecond;
  readonly attribute BitrateMode audioBitrateMode;
  undefined start(optional unsigned long timeslice);
  undefined stop();
  undefined pause();
  undefined resume();
  undefined requestData();
  static boolean isTypeSupported(DOMString type);
};
2.1. Constructors
MediaRecorder(MediaStream stream, optional MediaRecorderOptions options = {})

When the MediaRecorder() constructor is invoked, the User Agent MUST run the following steps:

1. Let stream be the constructor's first argument.
2. Let options be the constructor's second argument.
3. If invoking is type supported with options' mimeType member as its argument returns false, throw a NotSupportedError DOMException and abort these steps.
4. Let recorder be a newly constructed MediaRecorder object.
5. Let recorder have a [[ConstrainedMimeType]] internal slot, initialized to the value of options' mimeType member.
6. Let recorder have a [[ConstrainedBitsPerSecond]] internal slot, initialized to the value of options' bitsPerSecond member, if it is present, otherwise undefined.
7. Initialize recorder's stream attribute to stream.
8. Initialize recorder's mimeType attribute to the value of recorder's [[ConstrainedMimeType]] slot.
9. Initialize recorder's state attribute to inactive.
10. Initialize recorder's videoBitsPerSecond attribute to the value of options' videoBitsPerSecond member, if it is present. Otherwise, choose a target value the User Agent deems reasonable for video.
11. Initialize recorder's audioBitsPerSecond attribute to the value of options' audioBitsPerSecond member, if it is present. Otherwise, choose a target value the User Agent deems reasonable for audio.
12. If recorder's [[ConstrainedBitsPerSecond]] slot is not undefined, set recorder's videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder's [[ConstrainedBitsPerSecond]] slot.
13. If recorder supports the BitrateMode specified by the value of options' audioBitrateMode member, then initialize recorder's audioBitrateMode attribute to the value of options' audioBitrateMode member, else initialize recorder's audioBitrateMode attribute to the value "variable".
14. Return recorder.
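As a non-normative sketch of the constructor steps above, the following constructs a recorder with an explicit mimeType and an overall bitsPerSecond budget; the stream and the chosen values are assumptions for illustration:

// Assume `stream` is an existing MediaStream.
let recorder;
try {
  recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm;codecs=vp8,opus',  // feeds [[ConstrainedMimeType]]
    bitsPerSecond: 1_000_000                 // feeds [[ConstrainedBitsPerSecond]]
  });
} catch (e) {
  // Per step 3 above, an unsupported mimeType results in a NotSupportedError.
  console.error('Cannot record with these options:', e);
}
if (recorder) {
  // The UA has already split the 1 Mb/s budget between audio and video.
  console.log(recorder.videoBitsPerSecond, recorder.audioBitsPerSecond);
}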
2.2. Attributes
stream, of type MediaStream, readonly
    The MediaStream [GETUSERMEDIA] to be recorded.
mimeType, of type DOMString, readonly
    The MIME type [RFC2046] used by the MediaRecorder object. The User Agent SHOULD be able to play back any of the MIME types it supports for recording. For example, it should be able to display a video recording in the HTML <video> tag.
state, of type RecordingState, readonly
    The current state of the MediaRecorder object.
onstart, of type EventHandler
    Called to handle the start event.
onstop, of type EventHandler
    Called to handle the stop event.
ondataavailable, of type EventHandler
    Called to handle the dataavailable event. The Blob of recorded data is contained in this event and can be accessed via its data attribute.
onpause, of type EventHandler
    Called to handle the pause event.
onresume, of type EventHandler
    Called to handle the resume event.
onerror, of type EventHandler
    Called to handle an ErrorEvent.
videoBitsPerSecond, of type unsigned long, readonly
    The target bitrate used to encode video tracks.
audioBitsPerSecond, of type unsigned long, readonly
    The target bitrate used to encode audio tracks.
audioBitrateMode, of type BitrateMode, readonly
    The BitrateMode used to encode audio tracks.
2.3. Methods
The methods in this section update state synchronously and fire events asynchronously.

start(optional unsigned long timeslice)
When a MediaRecorder object's start() method is invoked, the UA MUST run the following steps:

1. Let recorder be the MediaRecorder object on which the method was invoked.
2. Let timeslice be the method's first argument, if provided, or undefined.
3. Let stream be the value of recorder's stream attribute.
4. Let tracks be the set of live tracks in stream's track set.
5. If the value of recorder's state attribute is not inactive, throw an InvalidStateError DOMException and abort these steps.
6. If the isolation properties of stream disallow access from recorder, throw a SecurityError DOMException and abort these steps.
7. If stream is inactive, throw a NotSupportedError DOMException and abort these steps.
8. If the [[ConstrainedMimeType]] slot specifies a media type, container, or codec, then constrain the configuration of recorder to the media type, container, and codec specified in the [[ConstrainedMimeType]] slot.
9. If recorder's [[ConstrainedBitsPerSecond]] slot is not undefined, set recorder's videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, for recording all tracks in tracks, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder's [[ConstrainedBitsPerSecond]] slot.
10. Let videoBitrate be the value of recorder's videoBitsPerSecond attribute, and constrain the configuration of recorder to target an aggregate bitrate of videoBitrate bits per second for all video tracks recorder will be recording. videoBitrate is a hint for the encoder and the value might be surpassed, not achieved, or only be achieved over a long period of time.
11. Let audioBitrate be the value of recorder's audioBitsPerSecond attribute, and constrain the configuration of recorder to target an aggregate bitrate of audioBitrate bits per second for all audio tracks recorder will be recording. audioBitrate is a hint for the encoder and the value might be surpassed, not achieved, or only be achieved over a long period of time.
12. Constrain the configuration of recorder to encode using the BitrateMode specified by the value of recorder's audioBitrateMode attribute for all audio tracks recorder will be recording.
13. For each track in tracks, if the User Agent cannot record the track using the current configuration, then throw a NotSupportedError DOMException and abort these steps.
14. Set recorder's state to recording, and run the following steps in parallel:
    1. If the container and codecs to use for the recording have not yet been fully specified, the User Agent specifies them in recorder's current configuration. The User Agent MAY take the sources of the tracks in tracks into account when deciding which container and codecs to use.
       By looking at the sources of the tracks in tracks when recording starts, the User Agent could choose a configuration that avoids re-encoding track content. For example, if the MediaStreamTrack is a remote track sourced from an RTCPeerConnection, the User Agent could pick the same codec as is used for the MediaStreamTrack's RTP stream, should it be known. However, if the codec of the RTP stream changes while recording, the User Agent has to be prepared to re-encode it to avoid interruptions.
    2. Start recording all tracks in tracks using the recorder's current configuration and gather the data into a Blob blob. Queue a task, using the DOM manipulation task source, to run the following steps:
       1. Let extendedMimeType be the value of recorder's [[ConstrainedMimeType]] slot.
       2. Modify extendedMimeType by adding media type, subtype and codecs parameter reflecting the configuration used by the MediaRecorder to record all tracks in tracks, if not already present. This MAY include the profiles parameter [RFC6381] or further codec-specific parameters.
       3. Set recorder's mimeType attribute to extendedMimeType.
       4. Fire an event named start at recorder.
    3. If at any point stream's isolation properties change so that MediaRecorder is no longer allowed access to it, the UA MUST stop gathering data, discard any data that it has gathered, and queue a task, using the DOM manipulation task source, that runs the following steps:
       1. Inactivate the recorder with recorder.
       2. Fire an error event named SecurityError at recorder.
       3. Fire a blob event named dataavailable at recorder with blob.
       4. Fire an event named stop at recorder.
    4. If at any point a track is added to or removed from stream's track set, the UA MUST stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
       1. Inactivate the recorder with recorder.
       2. Fire an error event named InvalidModificationError at recorder.
       3. Fire a blob event named dataavailable at recorder with blob.
       4. Fire an event named stop at recorder.
    5. If the UA at any point is unable to continue gathering data for reasons other than isolation properties or stream's track set, it MUST stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
       1. Inactivate the recorder with recorder.
       2. Fire an error event named UnknownError at recorder.
       3. Fire a blob event named dataavailable at recorder with blob.
       4. Fire an event named stop at recorder.
    6. If timeslice is not undefined, then once a minimum of timeslice milliseconds of data have been collected, or some minimum time slice imposed by the UA, whichever is greater, start gathering data into a new Blob blob, and queue a task, using the DOM manipulation task source, that fires a blob event named dataavailable at recorder with blob.
       Note that an undefined value of timeslice will be understood as the largest unsigned long value.
    7. If all recorded tracks become ended, then stop gathering data, and queue a task, using the DOM manipulation task source, that runs the following steps:
       1. Inactivate the recorder with recorder.
       2. Fire a blob event named dataavailable at recorder with blob.
       3. Fire an event named stop at recorder.

Note that stop(), requestData(), and pause() also affect the recording behavior.

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

If any Track within the MediaStream is muted or not enabled at any time, the UA will only record black frames or silence since that is the content produced by the Track.

The Inactivate the recorder algorithm, given a recorder, is as follows:

1. Set recorder's mimeType attribute to the value of the [[ConstrainedMimeType]] slot.
2. Set recorder's state attribute to inactive.
3. If recorder's [[ConstrainedBitsPerSecond]] slot is not undefined, set recorder's videoBitsPerSecond and audioBitsPerSecond attributes to values the User Agent deems reasonable for the respective media types, such that the sum of videoBitsPerSecond and audioBitsPerSecond is close to the value of recorder's [[ConstrainedBitsPerSecond]] slot.
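The effect of the timeslice argument can be sketched as follows (non-normative; the recorder is assumed to be inactive and the 1000 ms value is arbitrary):

const chunks = [];
recorder.ondataavailable = (event) => {
  // With a timeslice, a Blob is delivered roughly every 1000 ms;
  // individual chunks need not be playable on their own.
  chunks.push(event.data);
};
recorder.onerror = (event) => console.error('Recording error:', event);
recorder.start(1000);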
stop()

When a MediaRecorder object's stop() method is invoked, the UA MUST run the following steps:

1. Let recorder be the MediaRecorder object on which the method was invoked.
2. If recorder's state attribute is inactive, abort these steps.
3. Inactivate the recorder with recorder.
4. Queue a task, using the DOM manipulation task source, that runs the following steps:
   1. Stop gathering data.
   2. Let blob be the Blob of collected data so far, then fire a blob event named dataavailable at recorder with blob.
   3. Fire an event named stop at recorder.
5. Return undefined.
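Because only the concatenation of all delivered Blobs is guaranteed to be playable, a typical (non-normative) pattern is to assemble them when the stop event fires; `recorder` and `chunks` are assumed as in the earlier sketches:

recorder.onstop = () => {
  const recording = new Blob(chunks, { type: recorder.mimeType });
  // Illustrative playback step: point a <video> element at the result.
  document.querySelector('video').src = URL.createObjectURL(recording);
};
recorder.stop();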
pause()

When a MediaRecorder object's pause() method is invoked, the UA MUST run the following steps:

1. If state is inactive, throw an InvalidStateError DOMException and abort these steps.
2. If state is paused, abort these steps.
3. Set state to paused, and queue a task, using the DOM manipulation task source, that runs the following steps:
   1. Stop gathering data into blob (but keep it available so that recording can be resumed in the future).
   2. Let target be the MediaRecorder context object. Fire an event named pause at target.
4. Return undefined.
resume()

When a MediaRecorder object's resume() method is invoked, the UA MUST run the following steps:

1. If state is inactive, throw an InvalidStateError DOMException and abort these steps.
2. If state is recording, abort these steps.
3. Set state to recording, and queue a task, using the DOM manipulation task source, that runs the following steps:
   1. Resume (or continue) gathering data into the current blob.
   2. Let target be the MediaRecorder context object. Fire an event named resume at target.
4. Return undefined.
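A non-normative sketch of pausing and resuming; the recorder is assumed to be recording and the pause duration is arbitrary:

recorder.onpause = () => console.log('paused, state =', recorder.state);
recorder.onresume = () => console.log('resumed, state =', recorder.state);
recorder.pause();                           // state becomes "paused"; no data is gathered
setTimeout(() => recorder.resume(), 5000);  // state returns to "recording"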
requestData()

When a MediaRecorder object's requestData() method is invoked, the UA MUST run the following steps:

1. If state is inactive, throw an InvalidStateError DOMException and terminate these steps. Otherwise the UA MUST queue a task, using the DOM manipulation task source, that runs the following steps:
   1. Let blob be the Blob of collected data so far and let target be the MediaRecorder context object, then fire a blob event named dataavailable at target with blob. (Note that blob will be empty if no data has been gathered yet.)
   2. Create a new Blob and gather subsequent data into it.
2. Return undefined.
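requestData() can be used to flush the data gathered so far without stopping the recording, as in this non-normative sketch (the recorder is assumed to be recording, a dataavailable handler is assumed to collect the Blobs, and the 10 s period is arbitrary):

setInterval(() => {
  if (recorder.state === 'recording') {
    // Emits a dataavailable event with the data so far,
    // then continues gathering into a fresh Blob.
    recorder.requestData();
  }
}, 10000);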
isTypeSupported(DOMString type)

Check to see whether a MediaRecorder can record in a specified MIME type. If true is returned from this method, it only indicates that the MediaRecorder implementation is capable of recording Blob objects for the specified MIME type. Recording may still fail if sufficient resources are not available to support the concrete media encoding. When this method is invoked, the User Agent must return the result of is type supported, given the method's first argument.

The is type supported algorithm consists of the following steps:

1. Let type be the algorithm's argument.
2. If type is an empty string, then return true (note that this case is essentially equivalent to leaving up to the UA the choice of both container and codecs).
3. If type does not contain a valid MIME type string, then return false.
4. If type contains a media type or media subtype that the MediaRecorder does not support, then return false.
5. If type contains a media container that the MediaRecorder does not support, then return false.
6. If type contains more than one audio codec, or more than one video codec, then return false.
7. If type contains a codec that the MediaRecorder does not support, then return false.
8. If the MediaRecorder does not support the specified combination of media type/subtype, codecs and container, then return false.
9. Return true.
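A common, non-normative use of is type supported is to probe a preference-ordered list of MIME types before constructing the recorder; the candidate list and the stream are assumptions for illustration:

const candidates = [
  'video/webm;codecs=vp9,opus',
  'video/webm;codecs=vp8,opus',
  'video/mp4;codecs=avc1',
  ''  // empty string: let the UA pick both container and codecs
];
// find() returns the first candidate the UA claims to support.
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
// An empty-string match is equivalent to passing no mimeType at all.
const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});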
2.4. Data handling
To fire a blob event named e at target with a Blob blob means to fire an event named e at target using a BlobEvent with
its data attribute initialized to blob.
2.5. MediaRecorderOptions
dictionary MediaRecorderOptions {
  DOMString mimeType = "";
  unsigned long audioBitsPerSecond;
  unsigned long videoBitsPerSecond;
  unsigned long bitsPerSecond;
  BitrateMode audioBitrateMode = "variable";
};
2.5.1. Members
mimeType, of type DOMString, defaulting to ""
    The container and codec format(s) [RFC2046] for the recording, which may include any parameters that are defined for the format.
audioBitsPerSecond, of type unsigned long
    Aggregate target bits per second for encoding of the Audio track(s), if any.
videoBitsPerSecond, of type unsigned long
    Aggregate target bits per second for encoding of the Video track(s), if any.
bitsPerSecond, of type unsigned long
    Aggregate target bits per second for encoding of all Video and Audio Track(s) present. This member overrides either audioBitsPerSecond or videoBitsPerSecond if present, and might be distributed among the present track encoders as the UA sees fit.
audioBitrateMode, of type BitrateMode, defaulting to "variable"
    Specifies the BitrateMode that should be used to encode the Audio track(s).
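The members can be combined as in this non-normative sketch; the stream and the numeric values are assumptions for illustration:

// Explicit per-kind bitrates and a constant-bitrate audio encoding.
const options = {
  mimeType: 'video/webm',
  videoBitsPerSecond: 2_500_000,
  audioBitsPerSecond: 128_000,
  audioBitrateMode: 'constant'   // falls back to "variable" if unsupported
};
const recorder = new MediaRecorder(stream, options);
console.log(recorder.audioBitrateMode);  // "constant" or "variable"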
2.6. BitrateMode
enum BitrateMode {
  "constant",
  "variable"
};
2.6.1. Values
constant
    Encode at a constant bitrate.
variable
    Encode using a variable bitrate, allowing more space to be used for complex signals and less space for less complex signals.
2.7. RecordingState
enum RecordingState {
  "inactive",
  "recording",
  "paused"
};
2.7.1. Values
inactive
    Recording is not occurring: either it has not been started or it has been stopped.
recording
    Recording has been started and the UA is capturing data.
paused
    Recording has been started, then paused, and not yet stopped or resumed.
3. Blob Event
[Exposed=Window]
interface BlobEvent : Event {
  constructor(DOMString type, BlobEventInit eventInitDict);
  [SameObject] readonly attribute Blob data;
  readonly attribute DOMHighResTimeStamp timecode;
};
3.1. Constructors
BlobEvent(DOMString type, BlobEventInit eventInitDict)
3.2. Attributes
data, of type Blob, readonly
    The encoded Blob whose type attribute indicates the encoding of the blob data.
timecode, of type DOMHighResTimeStamp, readonly
    The difference between the timestamp of the first chunk in data and the timestamp of the first chunk in the first BlobEvent produced by this recorder, as a DOMHighResTimeStamp [HR-TIME]. Note that the timecode in the first produced BlobEvent does not need to be zero.
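One way an application might use the timecode attribute is to place each chunk on the recording's own timeline, as in this non-normative sketch (the recorder and its handlers are assumed as in the earlier examples):

let firstTimecode;
recorder.ondataavailable = (event) => {
  if (firstTimecode === undefined) {
    firstTimecode = event.timecode;   // need not be zero
  }
  // Offset of this chunk relative to the first delivered chunk, in ms.
  const offset = event.timecode - firstTimecode;
  console.log(`chunk of ${event.data.size} bytes at +${offset} ms`);
};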
3.3. BlobEventInit
dictionary BlobEventInit {
  required Blob data;
  DOMHighResTimeStamp timecode;
};
3.3.1. Members
data, of type Blob
    A Blob object containing the data to deliver via BlobEvent.
timecode, of type DOMHighResTimeStamp
    The timecode to be used in initializing BlobEvent.
4. Error handling
4.1. General principles
This section is non-normative.

The UA will throw a DOMException when the error can be detected at the time
that the call is made. In all other cases the UA will fire an error event. If recording has been started and not yet stopped
when the error occurs, let blob be the Blob of collected data so
far; after raising the error, the UA will fire a dataavailable event with blob, and immediately afterwards fire an event named stop.
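In practice this means an application can still recover the already-gathered data when recording fails, as in this non-normative sketch; `recorder` and `chunks` are assumed as in the earlier examples:

recorder.onerror = (event) => {
  console.error('recording failed:', event);
  // A dataavailable event with the data gathered so far, followed by a
  // stop event, will arrive after this error event.
};
recorder.onstop = () => {
  // `partial` holds whatever was recorded before the error.
  const partial = new Blob(chunks, { type: recorder.mimeType });
};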
The UA may set platform-specific limits, such as those for the minimum and
maximum Blob size that it will support, or the number of MediaStreamTracks it will record at once.
It will signal a fatal error if these limits are exceeded.
4.2. Error events
To fire an error event named e at target means to fire an event named e at target using ErrorEvent as the event constructor.
4.3. Exception Summary
Each of the exceptions defined in this document is a DOMException with a
specific type.
| Name | Description |
|---|---|
| InvalidStateError | An operation was called on an object on which it is not allowed or at a time when it is not allowed, or if a request is made on a source object that has been deleted or removed. |
| NotSupportedError | An operation could not be performed because the MIME type was not supported or the set of tracks could not be recorded by the MIME type. User agents should provide as much additional information as possible in the message attribute. |
| SecurityError | The isolation properties of the MediaStream do not allow the MediaRecorder access to it. |
| InvalidModificationError | The set of MediaStreamTracks of the recorded MediaStream has changed, preventing any further recording. |
5. Event summary
The following additional events fire on MediaRecorder objects:
| Event name | Interface | Fired when... |
|---|---|---|
| start | Event | The UA has started recording data from the MediaStream. |
| stop | Event | The UA has stopped recording data from the MediaStream. |
| dataavailable | BlobEvent | The UA generates this event to return data to the application. The data attribute of this event contains a Blob of recorded data. |
| pause | Event | The UA has paused recording data from the MediaStream. |
| resume | Event | The UA has resumed recording data from the MediaStream. |
| error | ErrorEvent | An error has occurred, e.g. out of memory, or a modification to the stream has occurred that makes it impossible to continue recording (e.g. a Track has been added to or removed from the stream while recording is occurring). |
6. Privacy and Security Considerations
This section is non-normative.
Given that the source of data for MediaRecorder is always going to be a MediaStream, a large part of the security considerations are essentially offloaded onto [GETUSERMEDIA] and its "Privacy and Security Considerations" section. In
particular, the source MediaStream is assumed to be coming from a secure context.
6.1. Resource exhaustion
Video and audio encoding can consume a great deal of resources. A malicious website could try to block or bring down the UA by configuring too large a workload, e.g. encoding large frame resolutions and/or framerates.
MediaRecorder can be configured to hold on to the encoded data for a certain
period of time upon start() by means of the timeslice parameter. Too
large a time slice parameter can force the UA to buffer a large amount of data,
causing jank or even memory exhaustion.
UAs should take measures to prevent the encoding and buffering process from exhausting resources.
6.2. Fingerprinting
MediaRecorder provides information regarding the supported video and audio
MIME types via the isTypeSupported() method. It will also select the most
appropriate codec and bandwidth allocation combination when these are not
defined in the MediaRecorderOptions, and make this information available via
the type attribute of the data received in the ondataavailable event. It will also try to honour the MediaRecorderOptions if specified.
A malicious website could try to use this information for active fingerprinting in a number of ways, e.g. it might try to
-
Infer the device and hardware characteristics or determine the operating system vendor and/or version differences by means of identifying the user agent capabilities: a UA might provide use of a certain codec and/or hardware encoding accelerator only on a given platform (or generation thereof), or those might have a resolution/frame rate limit, making it vulnerable to fingerprinting.
-
Infer any of the above by statistical measurements of system performance: e.g. the UA might provide different by-default bandwidth allocations depending on the hardware capabilities, or the UA could try measuring the system load when encoding different resolutions of certain input vectors.
The UAs should take measures to mitigate this fingerprinting surface increase by e.g. implementing broad support for a given codec or MIME type and not making it dependent on e.g. architecture, hardware revision, or OS/version support, to prevent device/hardware characteristics inference through browser functionality. The UA should also choose default values that limit the amount and identifiability of the exposed UA capabilities.
7. Examples
7.1. Check for MediaRecorder and content types
This example checks if the implementation supports a few popular codec/container combinations.
if (window.MediaRecorder == undefined) {
  console.error('MediaRecorder not supported, boo');
} else {
  var contentTypes = [
    "video/webm",
    "video/webm;codecs=vp8",
    "video/x-matroska;codecs=avc1",
    "audio/webm",
    "video/mp4;codecs=avc1",
    "video/invalid"
  ];
  contentTypes.forEach(contentType => {
    console.log(contentType + ' is ' +
        (MediaRecorder.isTypeSupported(contentType) ? 'supported' : 'NOT supported'));
  });
}
7.2. Recording webcam video and audio
This example captures a video+audio MediaStream using getUserMedia(),
plugs it into a <video> tag and tries to record it,
retrieving the recorded chunks via the ondataavailable event. Note that the
recording will go on forever until either MediaRecorder is stop()ed or all
the MediaStreamTracks of the recorded MediaStream are ended.
<html>
<body>
<video autoplay></video>
<script>
  var recordedChunks = [];

  function gotMedia(stream) {
    // |video| shows a live view of the captured MediaStream.
    var video = document.querySelector('video');
    video.srcObject = stream;

    var recorder = null;
    try {
      recorder = new MediaRecorder(stream, {mimeType: "video/webm"});
    } catch (e) {
      console.error('Exception while creating MediaRecorder: ' + e);
      return;
    }

    recorder.ondataavailable = (event) => {
      console.log('Recorded chunk of size ' + event.data.size + 'B');
      recordedChunks.push(event.data);
    };

    recorder.start(100);
  }

  navigator.mediaDevices.getUserMedia({video: true, audio: true})
      .then(gotMedia)
      .catch(e => { console.error('getUserMedia() failed: ' + e); });
</script>
</body>
</html>
recordedChunks can be saved to a file using e.g. the function download() in the MediaRecorder Web Fundamentals article.
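A minimal sketch of such a helper (not necessarily identical to the one in the referenced article; the file name is arbitrary) could be:

function download() {
  // Combine the recorded chunks into a single Blob and trigger a download.
  const blob = new Blob(recordedChunks, { type: 'video/webm' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = url;
  a.download = 'recording.webm';
  document.body.appendChild(a);
  a.click();
  URL.revokeObjectURL(url);
  a.remove();
}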