Copyright © 2012-2016 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply.
This document specifies the takePhoto() and grabFrame() methods, and corresponding camera settings for use with MediaStreamTracks (as defined in Media Capture and Streams [GETUSERMEDIA]).
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
Comments on this document are welcomed.
This document was published by the Device and Sensors Working Group and the Web Real-Time Communications Working Group as a Working Draft. This document is intended to become a W3C Recommendation. If you wish to make comments regarding this document, please send them to public-media-capture@w3.org (subscribe, archives). All comments are welcome.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by groups operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures (Device and Sensors Working Group) and a public list of any patent disclosures (Web Real-Time Communications Working Group) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 September 2015 W3C Process Document.
The API defined in this document captures images from a valid MediaStreamTrack. The produced image can be in the form of a Blob (as defined in [FILE-API]) or an ImageBitmap (as defined in [HTML51]). The source image is provided by the capture device that provides the MediaStreamTrack. Moreover, picture-specific settings can be optionally provided as arguments that can be applied to the device for the capture.
The User Agent must support Promises in order to implement the Image Capture API. Any Promise object is assumed to have a resolver object, with resolve() and reject() methods associated with it.
[Constructor(MediaStreamTrack track)]
interface ImageCapture {
    readonly attribute MediaStreamTrack videoStreamTrack;
    readonly attribute MediaStream      previewStream;
    Promise<PhotoCapabilities> getPhotoCapabilities();
    Promise<void>              setOptions(PhotoSettings? photoSettings);
    Promise<Blob>              takePhoto(PhotoSettings? photoSettings);
    Promise<ImageBitmap>       grabFrame();
};
ImageCapture
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
track | MediaStreamTrack | ✘ | ✘ | The MediaStreamTrack to be used as the source of data. This will be the value of the videoStreamTrack attribute. The MediaStreamTrack passed to the constructor MUST have its kind attribute set to "video"; otherwise a DOMException of type NotSupportedError will be thrown. |
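For illustration, a non-normative sketch of constructing an ImageCapture object from a getUserMedia() stream; the variable names are illustrative only:

navigator.mediaDevices.getUserMedia({video: true, audio: true}).then(function(mediaStream) {
  // A video track is a valid source for ImageCapture.
  var videoTrack = mediaStream.getVideoTracks()[0];
  var imageCapture = new ImageCapture(videoTrack);
  // An audio track is not: its kind is "audio", so the constructor throws.
  try {
    new ImageCapture(mediaStream.getAudioTracks()[0]);
  } catch (e) {
    console.log(e.name); // "NotSupportedError"
  }
});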
videoStreamTrack of type MediaStreamTrack, readonly
previewStream of type MediaStream, readonly

getPhotoCapabilities
When the getPhotoCapabilities() method of an ImageCapture object is invoked, a new Promise is returned. If the UA is unable to execute the getPhotoCapabilities() method for any reason (for example, the MediaStreamTrack being ended asynchronously), then the UA MUST return a promise rejected with a newly created ImageCaptureError with the appropriate errorDescription set. Otherwise it MUST queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the MediaStreamTrack into a PhotoCapabilities object containing the available capabilities of the device, including ranges where appropriate. The resolved PhotoCapabilities will also include the current conditions in which the capabilities of the device are found. The method of doing this will depend on the underlying device.
2. Resolve the promise with the PhotoCapabilities object.
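The following non-normative sketch reads the resolved capabilities; imageCapture is assumed to be an ImageCapture object constructed as above:

imageCapture.getPhotoCapabilities()
  .then(function(capabilities) {
    console.log('White balance mode: ' + capabilities.whiteBalanceMode);
    console.log('Width range: ' + capabilities.imageWidth.min + '-' + capabilities.imageWidth.max);
    console.log('Height range: ' + capabilities.imageHeight.min + '-' + capabilities.imageHeight.max);
  })
  .catch(function(error) {
    console.log('getPhotoCapabilities() failed: ' + error.errorDescription);
  });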
setOptions
When the setOptions() method of an ImageCapture object is invoked, then a valid PhotoSettings object MUST be passed in the method to the ImageCapture object. In addition, a new Promise object is returned. If the UA can successfully apply the settings, then the UA MUST return a resolved promise. If the UA cannot successfully apply the settings, then the UA MUST return a promise rejected with a newly created ImageCaptureError whose errorDescription is set to OPTIONS_ERROR. If the UA can successfully apply the settings, the effect MAY be reflected, if visible at all, in previewStream.
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
photoSettings | PhotoSettings | ✔ | ✘ | The PhotoSettings dictionary to be applied. |
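A non-normative sketch of applying settings and handling the OPTIONS_ERROR rejection; imageCapture is assumed to be an existing ImageCapture object:

imageCapture.setOptions({redEyeReduction: true, exposureMode: "continuous"})
  .then(function() {
    // Any visible effect of the new settings may be reflected in previewStream.
  })
  .catch(function(error) {
    // errorDescription is OPTIONS_ERROR when the settings cannot be applied.
    console.log('setOptions() failed: ' + error.errorDescription);
  });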
takePhoto
When the takePhoto() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the MediaStreamTrack provided in the constructor is not "live", the UA MUST return a promise rejected with a newly created ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the takePhoto() method for any other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA MUST return a promise rejected with a newly created ImageCaptureError object whose errorDescription is set to PHOTO_ERROR. Otherwise it MUST queue a task, using the DOM manipulation task source, that runs the following steps:
1. If a photoSettings argument is provided and is not undefined, apply those settings (as if setOptions() had been called immediately before takePhoto()). If the UA cannot successfully apply the settings, then the UA MUST return a promise rejected with a newly created ImageCaptureError whose errorDescription is set to OPTIONS_ERROR.
2. Gather data from the MediaStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming SHOULD cause mute and unmute events to fire on the Track in question.
3. Resolve the promise with the Blob object.

Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
photoSettings | PhotoSettings | ✔ | ✔ | The PhotoSettings dictionary to be applied. |
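For example, the following non-normative sketch passes the settings directly to takePhoto() (equivalent to calling setOptions() immediately beforehand); imageCapture and the <img> element are assumed to exist:

imageCapture.takePhoto({imageWidth: 1920, imageHeight: 1080})
  .then(function(blob) {
    // The promise resolves with a Blob holding a single still image.
    document.querySelector('img').src = URL.createObjectURL(blob);
  })
  .catch(function(error) {
    // errorDescription is INVALID_TRACK, OPTIONS_ERROR or PHOTO_ERROR.
    console.log('takePhoto() failed: ' + error.errorDescription);
  });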
grabFrame
When the grabFrame() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the MediaStreamTrack provided in the constructor is not "live", the UA MUST return a promise rejected with a newly created ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the grabFrame() method for any other reason, then the UA MUST return a promise rejected with a newly created ImageCaptureError object whose errorDescription is set to FRAME_ERROR. Otherwise it MUST queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the MediaStreamTrack into an ImageBitmap object (as defined in [HTML51]). The width and height of the ImageBitmap object are derived from the constraints of the MediaStreamTrack.
2. Resolve the promise with the ImageBitmap object. (Note: grabFrame() returns data only once upon being invoked.)
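A non-normative sketch; imageCapture is assumed to wrap a live video track:

imageCapture.grabFrame()
  .then(function(imageBitmap) {
    // The ImageBitmap dimensions are derived from the track constraints.
    console.log('Grabbed frame: ' + imageBitmap.width + 'x' + imageBitmap.height);
  })
  .catch(function(error) {
    // errorDescription is INVALID_TRACK or FRAME_ERROR.
    console.log('grabFrame() failed: ' + error.errorDescription);
  });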
ImageCaptureError
[NoInterfaceObject]
interface ImageCaptureError {
    readonly attribute DOMString? errorDescription;
};
errorDescription of type DOMString?, readonly
The errorDescription attribute returns the appropriate DOMString for the error description. Acceptable values are FRAME_ERROR, OPTIONS_ERROR, PHOTO_ERROR, INVALID_TRACK, and ERROR_UNKNOWN.
PhotoCapabilities
interface PhotoCapabilities {
    readonly attribute MeteringMode       whiteBalanceMode;
    readonly attribute unsigned long      colorTemperature;
    readonly attribute MeteringMode       exposureMode;
    readonly attribute MediaSettingsRange exposureCompensation;
    readonly attribute MediaSettingsRange iso;
    readonly attribute boolean            redEyeReduction;
    readonly attribute MeteringMode       focusMode;
    readonly attribute MediaSettingsRange brightness;
    readonly attribute MediaSettingsRange contrast;
    readonly attribute MediaSettingsRange saturation;
    readonly attribute MediaSettingsRange sharpness;
    readonly attribute MediaSettingsRange imageHeight;
    readonly attribute MediaSettingsRange imageWidth;
    readonly attribute MediaSettingsRange zoom;
    readonly attribute FillLightMode      fillLightMode;
};
whiteBalanceMode of type MeteringMode
colorTemperature of type unsigned long, applicable when whiteBalanceMode is manual.
exposureMode of type MeteringMode
exposureCompensation of type MediaSettingsRange
iso of type MediaSettingsRange
redEyeReduction of type boolean
focusMode of type MeteringMode
brightness of type MediaSettingsRange
contrast of type MediaSettingsRange
saturation of type MediaSettingsRange
sharpness of type MediaSettingsRange
imageHeight of type MediaSettingsRange
imageWidth of type MediaSettingsRange
zoom of type MediaSettingsRange
fillLightMode of type FillLightMode, one of the FillLightMode values.

Note: the supported resolutions are presented as segregated imageWidth and imageHeight ranges to prevent increasing the fingerprinting surface and to allow the UA to make a best-effort decision with regards to actual hardware configuration.
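For instance, a non-normative sketch that requests the largest reported still-image resolution; imageCapture is assumed to be an existing ImageCapture object:

imageCapture.getPhotoCapabilities().then(function(capabilities) {
  // Pick the maximum of each segregated range.
  return imageCapture.setOptions({
    imageWidth: capabilities.imageWidth.max,
    imageHeight: capabilities.imageHeight.max
  });
}).then(function() {
  return imageCapture.takePhoto();
}).then(function(blob) {
  console.log('Captured ' + blob.size + ' bytes');
});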
This section is non-normative.
The PhotoCapabilities
interface provides the photo-specific settings options and current settings values. The following definitions are assumed for individual settings and are provided for information purposes:
White balance mode is typically either an automatic mode, in which the implementation estimates the color temperature of the scene illumination, or a manual mode in which the estimated temperature of the scene illumination is hinted to the implementation. Typical temperature ranges for popular modes are provided below:
Mode | Kelvin range |
---|---|
incandescent | 2500-3500 |
fluorescent | 4000-5000 |
warm-fluorescent | 5000-5500 |
daylight | 5500-6500 |
cloudy-daylight | 6500-8000 |
twilight | 8000-9000 |
shade | 9000-10000 |
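As a non-normative illustration, hinting a daylight color temperature; imageCapture is assumed to be an existing ImageCapture object:

imageCapture.setOptions({
  whiteBalanceMode: "manual",
  colorTemperature: 6000  // within the typical daylight range of 5500-6500 K
}).catch(function(error) {
  console.log('Could not set manual white balance: ' + error.errorDescription);
});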
Focus mode describes the focus setting of the capture device (e.g. auto or manual). Fill light mode describes the flash setting of the capture device (e.g. auto, off, on).

PhotoSettings
The PhotoSettings object is optionally passed into the setOptions() method in order to modify capture device settings specific to still imagery. Each of the attributes in this object is optional.
dictionary PhotoSettings {
    MeteringMode      whiteBalanceMode;
    unsigned long     colorTemperature;
    MeteringMode      exposureMode;
    unsigned long     exposureCompensation;
    unsigned long     iso;
    boolean           redEyeReduction;
    MeteringMode      focusMode;
    sequence<Point2D> pointsOfInterest;
    unsigned long     brightness;
    unsigned long     contrast;
    unsigned long     saturation;
    unsigned long     sharpness;
    unsigned long     zoom;
    unsigned long     imageHeight;
    unsigned long     imageWidth;
    FillLightMode     fillLightMode;
};
whiteBalanceMode of type MeteringMode
colorTemperature of type unsigned long, applicable when whiteBalanceMode is manual.
exposureMode of type MeteringMode, one of the MeteringMode values.
exposureCompensation of type unsigned long, multiplied by 100 (to avoid using floating point).
iso of type unsigned long
redEyeReduction of type boolean
focusMode of type MeteringMode, one of the MeteringMode values.
pointsOfInterest of type sequence<Point2D>, a sequence of Point2Ds to be used as metering area centers for other settings, e.g. Focus, Exposure and Auto White Balance.
brightness of type unsigned long
contrast of type unsigned long
saturation of type unsigned long
sharpness of type unsigned long
zoom of type unsigned long
imageHeight of type unsigned long
imageWidth of type unsigned long
fillLightMode of type FillLightMode, one of the FillLightMode values.
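A non-normative sketch of a PhotoSettings dictionary that centers metering on a single point; imageCapture is assumed to be an existing ImageCapture object:

imageCapture.setOptions({
  focusMode: "single-shot",
  pointsOfInterest: [{x: 0.5, y: 0.5}],  // center of the normalized square space
  exposureCompensation: 150              // i.e. 1.5, multiplied by 100
}).catch(function(error) {
  console.log('Could not apply PhotoSettings: ' + error.errorDescription);
});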
MediaSettingsRange
interface MediaSettingsRange {
    readonly attribute long max;
    readonly attribute long min;
    readonly attribute long current;
};
max of type long, readonly
min of type long, readonly
current of type long, readonly
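For example, a non-normative helper that keeps a requested value inside a reported range before applying it; the clampToRange name is illustrative only and imageCapture is assumed to exist:

function clampToRange(value, range) {
  // range is a MediaSettingsRange with min, max and current attributes.
  return Math.min(Math.max(value, range.min), range.max);
}

imageCapture.getPhotoCapabilities().then(function(capabilities) {
  var requestedZoom = clampToRange(capabilities.zoom.current * 2, capabilities.zoom);
  return imageCapture.setOptions({zoom: requestedZoom});
});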
FillLightMode
enum FillLightMode {
    "unavailable",
    "auto",
    "off",
    "flash",
    "on"
};
unavailable
auto
    Use flash to guarantee firing of the flash for the takePhoto() or grabFrame() methods.
off
flash
    Guarantees firing of the flash for the takePhoto() or grabFrame() methods.
on
    The fill light stays on while the source MediaStreamTrack is active.
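For instance, a non-normative sketch that forces the flash for a single capture; imageCapture and the <img> element are assumed to exist:

imageCapture.getPhotoCapabilities().then(function(capabilities) {
  if (capabilities.fillLightMode !== "unavailable") {
    // "flash" guarantees the flash fires for this takePhoto() call.
    return imageCapture.takePhoto({fillLightMode: "flash"});
  }
  return imageCapture.takePhoto();
}).then(function(blob) {
  document.querySelector('img').src = URL.createObjectURL(blob);
});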
MeteringMode
Note that MeteringMode is used both for status enumeration and for setting options for capture(s).
enum MeteringMode {
    "none",
    "manual",
    "single-shot",
    "continuous"
};
none
manual
single-shot
continuous
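A non-normative sketch using MeteringMode both as reported status and as a requested option; imageCapture is assumed to be an existing ImageCapture object:

imageCapture.getPhotoCapabilities().then(function(capabilities) {
  // Status enumeration: the currently reported focus mode.
  console.log('Current focus mode: ' + capabilities.focusMode);
  if (capabilities.focusMode !== "continuous") {
    // Setting option: request continuous autofocus.
    return imageCapture.setOptions({focusMode: "continuous"});
  }
});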
Point2D
A Point2D represents a location in a normalized square space with values in [0.0, 1.0].
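As a non-normative illustration, mapping a click on a preview element to a Point2D used as a point of interest; previewElement and imageCapture are illustrative names:

previewElement.addEventListener('click', function(event) {
  var rect = previewElement.getBoundingClientRect();
  // Normalize the click position into the [0.0, 1.0] square space.
  var point = {
    x: (event.clientX - rect.left) / rect.width,
    y: (event.clientY - rect.top) / rect.height
  };
  imageCapture.setOptions({pointsOfInterest: [point], focusMode: "single-shot"});
});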
x of type float
y of type float

This section is non-normative.
navigator.mediaDevices.getUserMedia({video: true}).then(gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract the video track.
  var videoTrack = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoTrack);
  if (captureDevice) {
    captureDevice.grabFrame().then(processFrame, failedToGetMedia);
  }
}

function processFrame(imageBitmap) {
  // grabFrame() resolves with an ImageBitmap; draw it onto a canvas to access its pixels.
  var canvas = document.createElement('canvas');
  canvas.width = imageBitmap.width;
  canvas.height = imageBitmap.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(imageBitmap, 0, 0);
  var imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // Set all alpha values to medium opacity.
  for (var j = 3; j < imgData.data.length; j += 4) {
    imgData.data[j] = 128;
  }
  // Write the modified pixel values back to the canvas...
  ctx.putImageData(imgData, 0, 0);
  // ... and do something with the modified image ...
}

function failedToGetMedia(e) {
  console.log('Stream failure: ' + e);
}
navigator.mediaDevices.getUserMedia({video: true}).then(gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract the video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.getPhotoCapabilities().then(function(capabilities) {
      if (capabilities.redEyeReduction) {
        captureDevice.setOptions({redEyeReduction: true})
          .then(function() { return captureDevice.takePhoto(); })
          .then(showPicture, function(error) { alert("Failed to take photo"); });
      } else {
        console.log('No red eye reduction');
      }
    });
  }
}

function showPicture(blob) {
  var img = document.querySelector("img");
  img.src = URL.createObjectURL(blob);
}

function failedToGetMedia(e) {
  console.log('Stream failure: ' + e);
}
<html>
<body>
<p><canvas id="frame"></canvas></p>
<button onclick="stopFunction()">Stop frame grab</button>
<script>
var canvas = document.getElementById('frame');
var frameVar;

navigator.mediaDevices.getUserMedia({video: true}).then(gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract the video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    // Grab and render a frame every second.
    frameVar = setInterval(function() {
      captureDevice.grabFrame().then(processFrame, failedToGetMedia);
    }, 1000);
  }
}

function processFrame(imageBitmap) {
  // grabFrame() resolves with an ImageBitmap that can be drawn directly.
  canvas.width = imageBitmap.width;
  canvas.height = imageBitmap.height;
  canvas.getContext('2d').drawImage(imageBitmap, 0, 0, imageBitmap.width, imageBitmap.height);
}

function stopFunction(e) {
  clearInterval(frameVar);
}

function failedToGetMedia(e) {
  console.log('Stream failure: ' + e);
}
</script>
</body>
</html>