Screen Capture

W3C Working Draft

This version:
https://www.w3.org/TR/2019/WD-screen-capture-20191119/
Latest published version:
https://www.w3.org/TR/screen-capture/
Latest editor's draft:
https://w3c.github.io/mediacapture-screen-share/
Previous version:
https://www.w3.org/TR/2016/WD-screen-capture-20160714/
Editors:
Martin Thomson (Mozilla)
Keith Griffin (Cisco)
Suhas Nandakumar (Cisco)
Henrik Boström (Google)
Jan-Ivar Bruaroey (Mozilla)
Participate:
GitHub w3c/mediacapture-screen-share
File a bug
Commit history
Pull requests
Participate:
Mailing list

Abstract

This document defines how a user's display, or parts thereof, can be used as the source of a media stream using getDisplayMedia, an extension to the Media Capture API [GETUSERMEDIA].

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document is not complete. It is subject to major changes and, while early experimentation is encouraged, it is therefore not intended for implementation.

This document was published by the Web Real-Time Communications Working Group as a Working Draft. This document is intended to become a W3C Recommendation.

GitHub Issues are preferred for discussion of this specification. Alternatively, you can send comments to our mailing list. Please send them to public-webrtc@w3.org (archives).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

1. Introduction

This section is non-normative.

This document describes an extension to the Media Capture API [GETUSERMEDIA] that enables the acquisition of a user's display, or part thereof, in the form of a video track. In some cases system, application or window audio is also captured which is presented in the form of an audio track. This enables a number of applications, including screen sharing using WebRTC [WEBRTC].

This feature has significant security implications. Applications that use this API to access information that is displayed to users could access confidential information from other origins if that information is under the control of the application. This includes content that would otherwise be inaccessible due to the protections offered by the user agent sandbox.

This document concerns itself primarily with the capture of video and audio [GETUSERMEDIA], but the general mechanisms defined here could be extended to other types of media, of which depth [MEDIACAPTURE-DEPTH] is currently defined.

2. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MAY, MUST, MUST NOT, and SHOULD in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

This specification defines conformance criteria that apply to a single product: the user agent that implements the interfaces that it contains.

Implementations that use ECMAScript [ECMA-262] to implement the APIs defined in this specification must implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [WEBIDL], as this specification uses that specification and terminology.

3. Example

The following example demonstrates a request for display capture using the navigator.mediaDevices.getDisplayMedia method defined in this document.

try {
  let mediaStream = await navigator.mediaDevices.getDisplayMedia({video:true});
  videoElement.srcObject = mediaStream;
} catch (e) {
  console.log('Unable to acquire screen capture: ' + e);
}

4. Terminology

This document uses the definition of MediaStream, MediaStreamTrack, MediaStreamConstraints and ConstrainablePattern from [GETUSERMEDIA].

When referring to exceptions, the terms throw and create are defined in [WEBIDL-1].

The term "throw" is used as specified in [INFRA]: it terminates the current processing steps.

The terms fulfilled, rejected, resolved, pending and settled used in the context of Promises are defined in [ECMASCRIPT-6.0].

Screen capture encompasses the capture of several different types of screen-based surfaces. Collectively, these are referred to as display surfaces, of which this document defines the following types:

A monitor display surface represents a physical display, or a collection of physical displays.

A window display surface represents a single application window.

An application display surface represents the entire collection of windows belonging to a single application.

A browser display surface represents a single browser window, that is, content rendered by the user agent.

This document draws a distinction between two variants of each type of display surface:

A logical display surface comprises the entire content of the display surface, including any portions that are obscured or otherwise not currently presented on screen.

A visible display surface comprises only the portion of the logical display surface that is currently presented on screen.

Some operating systems permit windows from different applications to occlude other windows, in whole or part, so the visible display surface is a strict subset of the logical display surface.

The source pixel ratio of a display surface is 1/96th of 1 inch divided by its vertical pixel size.

The terms permission, retrieve the permission state, prompt the user to choose, and create a permission storage entry are defined in [permissions].

The devicechange event is defined in [GETUSERMEDIA] Section 9.2, the MediaTrackSupportedConstraints dictionary is defined in [GETUSERMEDIA] Section 4.3.4, the MediaTrackConstraintSet dictionary is defined in [GETUSERMEDIA] Section 4.3.6, and the MediaTrackSettings dictionary is defined in [GETUSERMEDIA] Section 4.3.7.

5. Capturing Displayed Media

Capture of displayed media is enabled through the addition of a new getDisplayMedia method on the MediaDevices interface, that is similar to getUserMedia [GETUSERMEDIA], except that it acquires media from one display device chosen by the end-user each time.

5.1 MediaDevices Additions

partial interface MediaDevices {
  Promise<MediaStream> getDisplayMedia(optional DisplayMediaStreamConstraints constraints = {});
};
getDisplayMedia

Prompts the user for permission to live-capture their display.

The user agent MUST let the end-user choose which display surface to share out of all available choices every time, and MUST NOT use constraints to limit that choice. Instead, constraints MUST be applied to the media chosen by the user, only after they have made their selection. This prevents an application from influencing the selection of sources; see § 5.3 Unconstrained Display Surface Selection for details.

In the case of audio, the user agent MAY present the end-user with audio sources to share. Which choices are available to choose from is up to the user agent, and the audio source(s) are not necessarily the same as the video source(s). An audio source may be a particular application, window, browser, the entire system audio or any combination thereof. Unlike getUserMedia with regards to audio+video, the user agent is allowed not to return audio even if the audio constraint is present. If the user agent knows no audio will be shared for the lifetime of the stream it MUST NOT include an audio track in the resulting stream. The user agent MAY accept a request for audio and video by only returning a video track in the resulting stream, or it MAY accept the request by returning both an audio track and a video track in the resulting stream. The user agent MUST reject audio-only requests.
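The following non-normative sketch illustrates this behavior: an application that requests audio must be prepared to receive a video-only stream. The videoElement variable is assumed to be an HTMLVideoElement, as in the example in § 3.

try {
  // Requesting both video and audio; the user agent may legitimately
  // satisfy this request with a stream that contains only a video track.
  let mediaStream = await navigator.mediaDevices.getDisplayMedia({video: true, audio: true});
  if (mediaStream.getAudioTracks().length === 0) {
    console.log('Display capture started without audio.');
  }
  videoElement.srcObject = mediaStream;
} catch (e) {
  console.log('Unable to acquire screen capture: ' + e);
}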

In addition to drawing from a different set of sources and requiring user selection, getDisplayMedia also differs from getUserMedia in that "granted" permissions cannot be persisted.

When the getDisplayMedia() method is called, the User Agent MUST run the following steps:

  1. If the method call is not triggered by user activation, return a promise rejected with a DOMException object whose name attribute has the value InvalidStateError.

  2. Let constraints be the method's first argument.

  3. If constraints.video is false, return a promise rejected with a newly created TypeError.

  4. For each member present in constraints whose value, CS, is a dictionary, run the following steps:

    1. If CS contains a member named advanced, return a promise rejected with a newly created TypeError.

    2. If CS contains a member whose name specifies a constrainable property applicable to display surfaces, and whose value in turn is a dictionary containing a member named either min or exact, return a promise rejected with a newly created TypeError.

    3. If CS contains a member whose name specifies a constrainable property applicable to display surfaces, and whose value in turn is a dictionary containing a member named max, and that member's value in turn is less than the constrainable property's floor value, then let failedConstraint be the name of that member, let message be either undefined or an informative human-readable message, and return a promise rejected with a new OverconstrainedError created by calling OverconstrainedError(failedConstraint, message).

  5. Let requestedMediaTypes be the set of media types in constraints with either a dictionary value or a value of true.

  6. If the current settings object's responsible document is NOT fully active, return a promise rejected with a DOMException object whose name attribute has the value InvalidStateError.

  7. Let p be a new promise.

  8. Run the following steps in parallel:

    1. For each media type T in requestedMediaTypes,

      1. If no sources of type T are available, reject p with a new DOMException object whose name attribute has the value NotFoundError.

      2. Retrieve the permission state for obtaining sources of type T in the current browsing context. If the permission state is "denied", jump to the step labeled Permission Failure below.

    2. Optionally, e.g., based on a previously-established user preference, for security reasons, or due to platform limitations, jump to the step labeled Permission Failure below.

    3. Prompt the user to choose a display device, with a PermissionDescriptor named "display-capture", resulting in a set of provided media.

      The provided media MUST include precisely one track of each media type in requestedMediaTypes. The devices chosen MUST be the ones determined by the user. Once selected, the source of a MediaStreamTrack MUST NOT change.

      User Agents are encouraged to warn users against sharing browser display devices as well as monitor display devices where browser windows are visible, or otherwise try to discourage their selection on the basis that these represent a significantly higher risk when shared.

      If the result of the request is "granted", then for each device that is sourcing the provided media, using a stable and private id for the device, deviceId, set [[devicesLiveMap]][deviceId] to true, if it isn’t already true, and set the [[devicesAccessibleMap]][deviceId] to true, if it isn’t already true.

      The User Agent MUST NOT create a permission storage entry with a value of "granted".

      If the result is "denied", jump to the step labeled Permission Failure below. If the user never responds, this algorithm stalls on this step.

      If the user grants permission but a hardware error such as an OS/program/webpage lock prevents access, reject p with a new DOMException object whose name attribute has the value NotReadableError and abort these steps.

      If the result is "granted" but device access fails for any reason other than those listed above, reject p with a new DOMException object whose name attribute has the value AbortError and abort these steps.

    4. Let stream be the MediaStream object for which the user granted permission.

    5. Run the ApplyConstraints algorithm on all tracks in stream with the appropriate constraints. Should this fail, let failedConstraint be the result of the algorithm that failed, and let message be either undefined or an informative human-readable message, and then reject p with a new OverconstrainedError created by calling OverconstrainedError(failedConstraint, message).

    6. Resolve p with stream and abort these steps.

    7. Permission Failure: Reject p with a new DOMException object whose name attribute has the value NotAllowedError.

  9. Return p.
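As a non-normative illustration of steps 3 and 4 of the algorithm above, the following sketch shows a call that is rejected up front and a call that is accepted; the constraint values are illustrative only, and both calls are assumed to run in response to user activation (step 1).

// min, exact and advanced constraints are rejected with a TypeError
// before the user is prompted.
try {
  await navigator.mediaDevices.getDisplayMedia({video: {width: {min: 1280}}});
} catch (e) {
  console.log(e.name); // "TypeError"
}

// ideal and max constraints are accepted; they are applied only after
// the user has chosen which display surface to share.
let mediaStream = await navigator.mediaDevices.getDisplayMedia({
  video: {width: {ideal: 1280, max: 1920}, frameRate: {max: 30}}
});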

5.2 Closed and Minimized Display Surfaces

A display surface that is being shared may temporarily or permanently become inaccessible to the application because of actions taken by the operating system or user agent. What makes a display surface considered inaccessible is outside the scope of this specification, but examples include a monitor disconnecting or an application, window or browser closing or becoming minimized.

When a display surface enters an inaccessible state that is not necessarily permanent, the user agent MUST queue a task that sets the muted state of the corresponding media track to true.

When a display surface exits an inaccessible state and becomes accessible, the user agent MUST queue a task that sets the muted state of the corresponding media track to false.

When a display surface enters an inaccessible state that is permanent (such as the source application terminating), the user agent MUST queue a task that ends the corresponding media track.

A stream that was just returned by getDisplayMedia MAY contain tracks that are muted by default. Audio and video tracks belonging to the same stream MAY be muted/unmuted independently of one another.
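The following non-normative sketch shows how an application might observe these transitions on a captured video track, assuming mediaStream was returned by getDisplayMedia:

const [track] = mediaStream.getVideoTracks();
track.onmute = () => console.log('Captured surface is temporarily inaccessible.');
track.onunmute = () => console.log('Captured surface is accessible again.');
track.onended = () => console.log('Captured surface is permanently gone; capture has ended.');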

5.3 Unconstrained Display Surface Selection

Not accepting constraints for source selection means that the only fingerprinting surface getDisplayMedia exposes is whether audio, video, or audio and video display sources are present.

5.4 Constrainable Properties for Captured Display Surfaces

Constraints serve a different purpose in getDisplayMedia than they do in getUserMedia. They do not aid discovery; instead, they are applied only after the user has made a selection.

This section defines which constraints apply to getDisplayMedia tracks; constraints defined for getUserMedia do not apply unless listed here.

Some of these constraints enable user agent processing, such as downscaling and frame decimation, as well as display-specific features. Others enable observation of inherent properties of a user-selected display surface, exposed as capabilities and settings.

The following new and existing MediaStreamTrack Constrainable Properties are defined to apply to the user-selected video display surface, with the following behavior:

width of type ConstrainULong
The width or width range, in pixels. As a capability, max MUST reflect the display surface's width, and min MUST reflect the width of the smallest aspect-preserving representation available through downscaling by the user agent.

height of type ConstrainULong
The height or height range, in pixels. As a capability, max MUST reflect the display surface's height, and min MUST reflect the height of the smallest aspect-preserving representation available through downscaling by the user agent.

frameRate of type ConstrainDouble
The frame rate (frames per second) or frame rate range. As a capability, max MUST reflect the display surface's frame rate, and min MUST reflect the lowest frame rate available through frame decimation by the user agent.

aspectRatio of type ConstrainDouble
The exact aspect ratio (width in pixels divided by height in pixels, represented as a double rounded to the tenth decimal place) or aspect ratio range. As a setting, represents width / height. As a capability, min and max MUST both be the current setting value, rendering this property immutable from the application viewpoint.

resizeMode of type ConstrainDOMString
This string (or each string, when a list) should be one of the members of VideoResizeModeEnum. As a setting, none means the MediaStreamTrack contains all bits needed to render the display surface in full detail, which, if the source pixel ratio is greater than 1, means width and height will be larger than the display's appearance from an end-user viewpoint would suggest; crop-and-scale means the MediaStreamTrack contains an aspect-preserved representation of the display surface that has been downscaled by the user agent, but not cropped. As a capability, the values none and crop-and-scale MUST both be present.

displaySurface of type ConstrainDOMString
This string (or each string, when a list) should be one of the members of DisplayCaptureSurfaceType. As a setting, indicates the type of display surface that is being captured. As a capability, the setting value MUST be the lone value present, rendering this property immutable from the application viewpoint.

logicalSurface of type ConstrainBoolean
As a setting, a value of true indicates capture of a logical display surface, whereas a value of false indicates capture of a visible display surface. As a capability, this same value MUST be the lone value present, rendering this property immutable from the application viewpoint.

cursor of type ConstrainDOMString
This string (or each string, when a list) should be one of the members of CursorCaptureConstraint. As a setting, indicates if and when the cursor is included in the captured display surface. As a capability, the user agent MUST include only the set of values from CursorCaptureConstraint that it is capable of supporting for this display surface.

The following new and existing MediaStreamTrack Constrainable Properties are defined to apply to the user-selected audio sources, with the following behavior:

restrictOwnAudio of type ConstrainBoolean

As a setting, this value indicates whether or not the user agent is applying own audio restriction to the source.

As a constraint, this property can be constrained resulting in a source with own audio restriction enabled or disabled.

When own audio restriction is applied, the user agent MUST attempt to remove any audio from the audio being captured that was produced by the document that performed getDisplayMedia. If the user agent is not able to remove the audio through processing it SHOULD remove the audio by excluding the document's audio from being captured. If this results in no audio being captured, the user agent MUST keep the track muted until it is able to capture audio again.
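A non-normative sketch of requesting audio capture with own audio restriction; whether and how the user agent honors the constraint is implementation-dependent:

let mediaStream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
  // Ask the user agent to remove this document's own audio output
  // from whatever audio source the user chooses to share.
  audio: {restrictOwnAudio: true}
});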

When inherent properties of the underlying source of a user-selected display surface change, for example in response to the end-user resizing a captured window, and these changes render the capabilities and/or settings of one or more constrainable properties outdated, the user agent MUST queue a task to run the following step:

  1. Update all affected constrainable properties at the same time.

    If this causes an "overconstrained" situation, then the user agent MUST ignore the culprit constraints for as long as they overconstrain. The user agent MUST NOT mute the track, and the user agent MUST NOT fire the overconstrained event.

Note

While min and exact constraints produce TypeError on getDisplayMedia(), this specification does not alter the track.applyConstraints() method. Therefore, they may instead produce OverconstrainedError or succeed depending on values, and therefore potentially be present to cause this "overconstrained" situation. The max constraint may also cause this, e.g. with aspectRatio. This spec considers these to be edge cases that aren't useful.

5.4.1 Downscaling and Frame Decimation

For the purposes of the SelectSettings algorithm, the user agent SHOULD consider all possible combinations of downscaled dimensions that preserve the aspect ratio of the original display surface (to the nearest pixel), and frame rates available through frame decimation, as available settings dictionaries.

The downscaling and decimation effects of constraints are then effectively governed by the fitness distance algorithm.

The intent is for the user agent to produce output that is close to the ideal width, ideal height, and/or ideal frameRate when these are specified, while at all times preserving the aspect ratio of the original display surface.
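For instance, the following non-normative sketch asks for output close to 1280 by 720 pixels at no more than 15 frames per second; the user agent downscales and decimates as needed while preserving the source aspect ratio, so the resulting settings need not match the ideal values exactly:

let mediaStream = await navigator.mediaDevices.getDisplayMedia({
  video: {width: {ideal: 1280}, height: {ideal: 720}, frameRate: {max: 15}}
});
let settings = mediaStream.getVideoTracks()[0].getSettings();
console.log(settings.width + 'x' + settings.height + ' @ ' + settings.frameRate + 'fps');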

The user agent SHOULD downscale by the source pixel ratio by default, unless otherwise directed by applied constraints.

The user agent MUST NOT crop the captured output.

The user agent MUST NOT upscale the captured output, or create additional frames, except as needed to preserve high resolutions and frame rates in an aggregated display surface.

Note

The max constraint type lets a web application provide a maximum envelope for constrainable properties like width and height. This is helpful to limit extreme aspect ratios, should the end-user resize a window or browser surface to such an extreme while it is being captured.

For each constrainable property of positive numeric type in this specification, the user agent MUST establish a floor value, representing the smallest allowable value supported by the user agent regardless of source. This value MUST be constant and MUST be greater than 0. The user agent is encouraged to support all values above the floor value regardless of source.

Note

The purpose of the floor value is to help user agents avoid failing getDisplayMedia() with OverconstrainedError after the user has already been prompted, and avoid leaking information about the user's system.

5.4.2 DisplayMediaStreamConstraints

The DisplayMediaStreamConstraints dictionary is used to instruct the User Agent what sort of MediaStreamTracks may be included in the MediaStream returned by getDisplayMedia.

dictionary DisplayMediaStreamConstraints {
  (boolean or MediaTrackConstraints) video = true;
  (boolean or MediaTrackConstraints) audio = false;
};
Dictionary DisplayMediaStreamConstraints Members
video of type (boolean or MediaTrackConstraints), defaulting to true

If true, it requests that the returned MediaStream contain a video track. If a Constraints structure is provided, it further specifies desired processing options to be applied to the video track rendition of the display surface chosen by the user. If false, the request MUST be rejected with a TypeError.

audio of type (boolean or MediaTrackConstraints), defaulting to false

If true, it signals an interest that the returned MediaStream contain an audio track, if supported and if audio is available for the display surface chosen by the user. If a Constraints structure is provided, it further specifies desired processing options to be applied to the audio track. If false, the MediaStream MUST NOT contain an audio track.

5.4.3 Extensions to MediaTrackSupportedConstraints

MediaTrackSupportedConstraints is extended here with the list of constraints that a User Agent recognizes.

partial dictionary MediaTrackSupportedConstraints {
  boolean displaySurface = true;
  boolean logicalSurface = true;
  boolean cursor = true;
  boolean restrictOwnAudio = true;
};
displaySurface of type boolean, defaulting to true

Whether displaySurface constraint is recognized.

logicalSurface of type boolean, defaulting to true

Whether logicalSurface constraint is recognized.

cursor of type boolean, defaulting to true

Whether cursor constraint is recognized.

restrictOwnAudio of type boolean, defaulting to true

Whether restrictOwnAudio constraint is recognized.

5.4.4 Extensions to MediaTrackConstraintSet

MediaTrackConstraintSet is extended here with the constraints applicable to captured display surfaces.

partial dictionary MediaTrackConstraintSet {
  ConstrainDOMString displaySurface;
  ConstrainBoolean logicalSurface;
  ConstrainDOMString cursor;
  ConstrainBoolean restrictOwnAudio;
};
displaySurface of type ConstrainDOMString

The type of display surface that is being captured. This assumes values from the DisplayCaptureSurfaceType enumeration.

logicalSurface of type ConstrainBoolean

A value of true indicates capture of a logical display surface; a value of false indicates capture of a visible display surface.

cursor of type ConstrainDOMString

Assumes values from the CursorCaptureConstraint enumeration, which determines if and when the cursor is included in the captured display surface.

restrictOwnAudio of type ConstrainBoolean

This constraint is only applicable to audio tracks. See restrictOwnAudio.
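A non-normative sketch of applying one of these constraints after capture has started, assuming the user agent supports omitting the cursor for the selected display surface:

const [videoTrack] = mediaStream.getVideoTracks();
try {
  // Ask for the cursor to be omitted from the capture.
  await videoTrack.applyConstraints({cursor: 'never'});
} catch (e) {
  console.log('Could not change cursor capture: ' + e.name);
}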

5.4.5 Extensions to MediaTrackSettings

When the getSettings() method is invoked on a video stream track, the user agent must return the extended MediaTrackSettings dictionary, representing the current status of the track's underlying source.

partial dictionary MediaTrackSettings {
  DOMString displaySurface;
  boolean logicalSurface;
  DOMString cursor;
};
displaySurface of type DOMString

The type of display surface that is being captured. This assumes values from the DisplayCaptureSurfaceType enumeration.

logicalSurface of type boolean

A value of true indicates capture of a logical display surface; a value of false indicates capture of a visible display surface.

cursor of type DOMString

Assumes values from the CursorCaptureConstraint enumeration, which determines if and when the cursor is included in the captured display surface.
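A non-normative sketch of reading these settings from a captured video track, assuming mediaStream was returned by getDisplayMedia:

const videoSettings = mediaStream.getVideoTracks()[0].getSettings();
console.log('Sharing a ' + videoSettings.displaySurface +
            ' (logical: ' + videoSettings.logicalSurface +
            ', cursor: ' + videoSettings.cursor + ')');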

5.4.6 DisplayCaptureSurfaceType

The DisplayCaptureSurfaceType enumeration describes the different types of display surface.

enum DisplayCaptureSurfaceType {
  "monitor",
  "window",
  "application",
  "browser"
};
Enumeration description
monitor a monitor display surface, physical display, or collection of physical displays
window a window display surface, or single application window
application an application display surface, or entire collection of windows for an application
browser a browser display surface, or single browser window

5.4.7 CursorCaptureConstraint

The CursorCaptureConstraint enumerates the conditions under which the cursor is captured.

enum CursorCaptureConstraint {
  "never",
  "always",
  "motion"
};
Enumeration description
never a never cursor capture constraint omits the cursor from the captured display surface.
always an always cursor capture constraint includes the cursor in the captured display surface.
motion a motion cursor capture constraint includes the cursor in the captured display surface when the cursor/pointer is moved. The captured cursor is removed when there is no further movement of the pointer/cursor for a certain period of time, as determined by the user agent.

5.5 Device Identifiers

Each potential source of capture is treated by this API as a discrete media source. However, display capture sources MUST NOT be enumerated by enumerateDevices, since this would reveal too much information about the host system.

Display capture sources therefore cannot be selected with the deviceId constraint, since their deviceIds are not exposed.

Note

This is not to be confused with the stable and private id of the same name used in algorithms to implement privacy indicators.

6. Feature Policy Integration

This specification defines a policy-controlled feature identified by the string "display-capture". Its default allowlist is "self".

Note

A document's feature policy determines whether any content in that document is allowed to use getDisplayMedia. If disabled in any document, no content in the document will be allowed to use getDisplayMedia. This is enforced by the prompt the user to choose algorithm.

7. Privacy Indicator Requirements

This specification extends the Privacy Indicator Requirements of getUserMedia to include getDisplayMedia.

References in this specification to [[devicesLiveMap]], [[devicesAccessibleMap]], and [[kindsAccessibleMap]] refer to the definitions already created to support Privacy Indicator Requirements for getUserMedia.

For each kind of device that getDisplayMedia exposes, using a stable and private id for the device, deviceId, set kind to "Display" + kind, and do the following:

Then, given the new definitions above, the requirements on the User Agent are those specified in Privacy Indicator Requirements of getUserMedia.

Note

Even though there's a single permission descriptor for getDisplayMedia, the above definitions distinguish by kind to enable user agents to implement privacy indicators that show the end-user the specific kinds of display sources that are being shared at any point.

Note

Since this specification forbids user agents from persisting "granted" permissions, only the "Live" indicators are significant.

The User Agent MUST NOT fire the devicechange event based on changes in the set of available sources from getDisplayMedia.

8. Security and Permissions

This section is informative; however, it notes some serious risks to platform security if the advice it contains is not adhered to.

Issue 1

This is consistent with other documents, but the absence of strong normative language here is a little worrying.

The risks to user privacy and security posed by capture of displayed content are twofold. The immediate and obvious risk is that users inadvertently share content that they did not wish to share, or might not have realized would be shared.

Display capture presents a less obvious risk to the cross-site request forgery protections offered by the browser sandbox. Display and capture of information that is also under the control of an application, even indirectly, can allow that application to access information that would otherwise be inaccessible to it directly. For example, the canvas API does not permit sampling of a canvas, or conversion to an accessible form, if it is not origin-clean [2DCONTEXT].

This issue is discussed in further detail in [RTCWEB-SECURITY-ARCH] and [RTCWEB-SECURITY].

Display capture that includes browser windows, particularly those that are under any form of control by the application, risks violation of these basic security protections. This risk is not entirely contained to browser windows, since control channels can exist between browser applications and other applications, depending on the operating system. The key consideration is whether the captured display surface could be somehow induced to present information that would otherwise be secret from the application that is receiving the resulting media.

8.1 Capturing Logical or Visible Display Surfaces

Capture of logical display surfaces creates the potential for content to be shared that a user is not made aware of. A logical display surface might render information that a user did not intend to expose. This is more easily recognized by the user when the information is visible on screen, but such safeguards are likely ineffectual against a machine: a human recipient is less able to process content that appears only briefly, whereas an application can extract it from the captured media.

Information that is not currently rendered to the screen SHOULD be obscured in captures unless the application has been specifically authorized to access that content (this might require elevated permissions).

How obscured areas of the logical display surface are captured to produce a visible display surface capture MAY vary. Some applications, like presentation software, benefit from having obscured portions of the screen render the image that appeared prior to being obscured. Freezing images can cause visual artifacts for changing content, or hide the fact that content is being obscured. Note that frozen portions of a capture can be incorrectly perceived as a bug. Alternatively, obscured areas might be replaced with content that marks them as being obscured, such as a grey color or hatching.

Some systems MAY only capture the logical display surface. Devices with small screens, for instance, do not typically have the concept of a window, and render applications in full screen modes only. These systems might provide a capture of an application that is not currently visible, which could be unusable without capturing the logical display surface.

An important consideration when capturing a window or other display surface that is partially transparent is that content from the background might be shared. A user agent MUST NOT capture content from the background of a captured display surface.

There is a risk that the user prompt is briefly exposed to the web page through the newly created MediaStreamTrack, for instance if the user selects the screen on which the user prompt is displayed. In the case where the user prompt displays previews of the various surfaces available for selection, the user agent MUST NOT capture, in the newly created MediaStreamTrack, previews that the user did not explicitly intend to share.

Capturing Audio

getDisplayMedia allows capturing audio alongside video. This poses privacy and security concerns, since it may expose additional information about system applications, and the set of shared audio sources is not necessarily the same as the set of shared video sources. For example, a user agent MAY capture the video of a window while capturing the audio of the entire system, including applications unrelated to that window. The user agent MUST NOT share audio without active user consent. It is important that the user is aware of what content will be shared, including any possible audio. It is strongly recommended that the user be allowed to give consent to video but not audio, resulting in a video-only stream. This ensures that the request for audio is always optional and does not restrict the user's choices compared to a video-only request.

8.2 Authorizing Display Capture

This document recommends that implementations provide additional limitations on the mechanisms used to affirm user consent. These limitations are designed to mitigate the security and privacy risks that the API poses.

Two forms of consent interaction are described: active user consent and a range of elevated permissions. These are non-normative recommendations only.

8.2.2 Elevated Permissions

It is strongly advised that elevated permissions be required to access any display surface that might be used to circumvent cross-origin protections for content. The key goal of this consent process is not just to demonstrate that a user intends to share content, but also to determine that the user exhibits an elevated level of trust in the application that is being granted access.

Several different controls might be provided to grant elevated permissions. This section describes several different capabilities that could be independently granted. A user agent might opt to prohibit access to any capability that requires elevated permissions.

If access to these surfaces is supported, it is strongly advised that any mechanism to acquire elevated permissions not rely solely on simple prompts for user consent. Any action needs to ensure that a decision to authorize an application with elevated privileges is deliberate. For instance, a user agent might require a process equivalent to software installation to signify that user consent for elevated permissions is granted.

An elevated permissions experience could allow the user agent to communicate the risks associated with enabling this feature, or at least to convey the need for augmented trust in the application.

Note that elevated permissions are not a substitute for active user consent. It is advised that user agents still present users with the ability to select what is shared, even for applications that have elevated permissions.

8.2.3 Capabilities Depending on Elevated Permissions

Elevated permissions are recommended as a prerequisite for access to capture of monitor or browser display surfaces. Note that capture of a complete monitor is included because this could include a window from the user agent.

Similarly, elevated permissions are a recommended prerequisite for access to logical display surfaces, where that would not ordinarily be provided.

A user agent SHOULD persist any elevated permissions that are granted to an origin. An elevated permissions process in part relies on its novelty to ensure that it correctly captures user intent.

8.3 Feedback and Interface During Capture

Implementations are advised to provide user feedback and control mechanisms similar to those offered users when sharing a camera or microphone, as recommended in [GETUSERMEDIA].

It is important that a user be aware that content is being shared when content is actively being captured. User agents are advised to display a prominent indicator while content is being captured. In addition to an indicator, a user agent is advised to provide a means to learn precisely what is being shared; while this capability is trivially provided by an application by rendering the captured content, this information allows a user to accurately assess what is being shared.

In addition to feedback mechanisms, a means for the user to stop any active capture is advisable.
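While the user agent is expected to provide such controls itself, an application can also end a capture it started by stopping every track of the stream, as in this non-normative sketch:

function stopCapture(mediaStream) {
  // Stopping all tracks ends the capture, allowing the user agent to
  // remove any indicator associated with it.
  for (const track of mediaStream.getTracks()) {
    track.stop();
  }
}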

A. References

A.1 Normative references

[2DCONTEXT]
HTML Canvas 2D Context. Rik Cabanier; Jatinder Mann; Jay Munro; Tom Wiltzius; Ian Hickson. W3C. 19 November 2015. W3C Recommendation. URL: https://www.w3.org/TR/2dcontext/
[ECMA-262]
ECMAScript Language Specification. Ecma International. URL: https://tc39.es/ecma262/
[ECMASCRIPT-6.0]
ECMA-262 6th Edition, The ECMAScript 2015 Language Specification. Allen Wirfs-Brock. Ecma International. June 2015. Standard. URL: http://www.ecma-international.org/ecma-262/6.0/index.html
[GETUSERMEDIA]
Media Capture and Streams. Daniel Burnett; Adam Bergkvist; Cullen Jennings; Anant Narayanan; Bernard Aboba; Jan-Ivar Bruaroey; Henrik Boström. W3C. 2 July 2019. W3C Candidate Recommendation. URL: https://www.w3.org/TR/mediacapture-streams/
[HTML]
HTML Standard. Anne van Kesteren; Domenic Denicola; Ian Hickson; Philip Jägenstedt; Simon Pieters. WHATWG. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[HTML5]
HTML5. Ian Hickson; Robin Berjon; Steve Faulkner; Travis Leithead; Erika Doyle Navara; Theresa O'Connor; Silvia Pfeiffer. W3C. 27 March 2018. W3C Recommendation. URL: https://www.w3.org/TR/html5/
[HTML52]
HTML 5.2. Steve Faulkner; Arron Eicholz; Travis Leithead; Alex Danilo; Sangwhan Moon. W3C. 14 December 2017. W3C Recommendation. URL: https://www.w3.org/TR/html52/
[INFRA]
Infra Standard. Anne van Kesteren; Domenic Denicola. WHATWG. Living Standard. URL: https://infra.spec.whatwg.org/
[permissions]
Permissions. Mounir Lamouri; Marcos Caceres; Jeffrey Yasskin. W3C. 25 September 2017. W3C Working Draft. URL: https://www.w3.org/TR/permissions/
[RFC2119]
Key words for use in RFCs to Indicate Requirement Levels. S. Bradner. IETF. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[RFC8174]
Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. B. Leiba. IETF. May 2017. Best Current Practice. URL: https://tools.ietf.org/html/rfc8174
[RTCWEB-SECURITY]
Security Considerations for WebRTC. Eric Rescorla. IETF. 22 January 2014. Active Internet-Draft. URL: https://tools.ietf.org/html/draft-ietf-rtcweb-security
[RTCWEB-SECURITY-ARCH]
WebRTC Security Architecture. Eric Rescorla. IETF. 10 December 2016. Active Internet-Draft. URL: https://tools.ietf.org/html/draft-ietf-rtcweb-security-arch
[WEBIDL]
Web IDL. Boris Zbarsky. W3C. 15 December 2016. W3C Editor's Draft. URL: https://heycam.github.io/webidl/
[WEBIDL-1]
WebIDL Level 1. Cameron McCormack. W3C. 15 December 2016. W3C Recommendation. URL: https://www.w3.org/TR/2016/REC-WebIDL-1-20161215/

A.2 Informative references

[MEDIACAPTURE-DEPTH]
Media Capture Depth Stream Extensions. Anssi Kostiainen; Ningxin Hu; Aleksandar Stojiljkovic; Rob Manson. W3C. 18 April 2017. W3C Working Draft. URL: https://www.w3.org/TR/mediacapture-depth/
[WEBRTC]
WebRTC 1.0: Real-time Communication Between Browsers. Adam Bergkvist; Daniel Burnett; Cullen Jennings; Anant Narayanan; Bernard Aboba; Taylor Brandstetter; Jan-Ivar Bruaroey. W3C. 27 September 2018. W3C Candidate Recommendation. URL: https://www.w3.org/TR/webrtc/