Copyright © 2012-2015 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document specifies the takePhoto() and grabFrame() methods, and corresponding camera settings, for use with MediaStreams as defined in Media Capture and Streams [GETUSERMEDIA].
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
Comments on this document are welcomed.
Since its previous publication, the API has evolved to incorporate Promises as a way to manage asynchronous workflows in addition to several other minor or cosmetic fixes.
This document was published by the Device APIs Working Group and Web Real-Time Communications Working Group as a Working Draft. This document is intended to become a W3C Recommendation. If you wish to make comments regarding this document, please send them to public-media-capture@w3.org (subscribe, archives). All comments are welcome.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures (Web Real-Time Communication Working Group, Device APIs Working Group) made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 August 2014 W3C Process Document.
The API defined in this document takes a valid MediaStream and returns an encoded image in the form of a Blob (as defined in [FILE-API]). The image is provided by the capture device that provides the MediaStream. Moreover, picture-specific settings can optionally be provided as arguments to be applied to the image being captured.
The User Agent must support Promises in order to implement the Image Capture API. Any Promise object is assumed to have a resolver object, with resolve() and reject() methods, associated with it.
The MediaStreamTrack passed to the constructor must have its kind attribute set to "video"; otherwise an exception will be thrown.
[Constructor(MediaStreamTrack track)]
interface ImageCapture {
    readonly    attribute PhotoCapabilities photoCapabilities;
    readonly    attribute MediaStreamTrack  videoStreamTrack;
    readonly    attribute MediaStream       previewStream;
    Promise setOptions (PhotoSettings? photoSettings);
    Promise takePhoto ();
    Promise grabFrame ();
};

Attributes

photoCapabilities of type PhotoCapabilities, readonly
previewStream of type MediaStream, readonly
videoStreamTrack of type MediaStreamTrack, readonly

grabFrame

When the grabFrame() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the MediaStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the grabFrame() method for any other reason, then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to FRAME_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:

1. Gather data from the MediaStreamTrack into an ImageData object (as defined in [CANVAS-2D]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the MediaStreamTrack. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings (which may be a subset of the settings provided in photoCapabilities), take the photo (and convert it to an ImageData object), and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Return a FrameGrabEvent event containing the ImageData to the resolver object's resolve() method.

Note: grabFrame() returns data only once upon being invoked.

setOptions

When the setOptions() method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed in the method to the ImageCapture object. In addition, a new Promise object is returned. If the UA can successfully apply the settings, then the UA must return a SettingsChangeEvent event to the resolver object's resolve() method. If the UA cannot successfully apply the settings, then the UA must return an ImageCaptureErrorEvent to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to OPTIONS_ERROR.

| Parameter | Type | Nullable | Optional | Description |
|---|---|---|---|---|
| photoSettings | PhotoSettings | ✔ | ✘ | The PhotoSettings dictionary to apply. |
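Since both setOptions() and takePhoto() return Promises, a caller can sequence the two operations. The helper below is a minimal sketch of that chaining; captureWithSettings is a hypothetical name, not part of this API, and the imageCapture argument is assumed to implement the resolver-based behavior described above.

```javascript
// Hedged sketch: apply photo settings, then take the photo.
// "imageCapture" is assumed to implement setOptions() and takePhoto(),
// each returning a Promise as described in this specification.
function captureWithSettings(imageCapture, photoSettings) {
  return imageCapture.setOptions(photoSettings)
    .then(function () {
      // Settings applied (the resolver received a SettingsChangeEvent);
      // now request the still image.
      return imageCapture.takePhoto();
    });
}
```

A rejection from either step (for example, with errorDescription OPTIONS_ERROR or PHOTO_ERROR) propagates to a single rejection handler attached by the caller.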
takePhoto

When the takePhoto() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the MediaStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the takePhoto() method for any other reason (for example, upon invocation of multiple takePhoto() calls in rapid succession), then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to PHOTO_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:

1. Gather data from the MediaStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Return a BlobEvent event containing the Blob to the resolver object's resolve() method.

FrameGrabEvent

[Constructor(DOMString type, optional FrameGrabEventInit frameGrabInitDict)]
interface FrameGrabEvent : Event {
    readonly    attribute ImageData imageData;
};

imageData of type ImageData, readonly
    An ImageData object whose width and height attributes indicate the dimensions of the captured frame.

FrameGrabEventInit Dictionary

dictionary FrameGrabEventInit : EventInit {
    ImageData imageData;
};

FrameGrabEventInit Members

imageData of type ImageData
    An ImageData object containing the data to deliver via this event.

ImageCaptureErrorEvent

[Constructor(DOMString type, optional ImageCaptureErrorEventInit imageCaptureErrorInitDict)]
interface ImageCaptureErrorEvent : Event {
    readonly    attribute ImageCaptureError imageCaptureError;
};

imageCaptureError of type ImageCaptureError, readonly
    An ImageCaptureError object whose errorDescription attribute indicates the type of error that occurred.

ImageCaptureErrorEventInit Dictionary

dictionary ImageCaptureErrorEventInit : EventInit {
    ImageCaptureError imageCaptureError;
};

ImageCaptureErrorEventInit Members

imageCaptureError of type ImageCaptureError
    An ImageCaptureError object containing the data to deliver via this event.

BlobEvent

[Constructor(DOMString type, optional BlobEventInit blobInitDict)]
interface BlobEvent : Event {
    readonly    attribute Blob data;
};

data of type Blob, readonly
    A Blob object whose type attribute indicates the encoding of the blob data. An implementation must return a Blob in a format that is capable of being viewed in an HTML <img> tag.

BlobEventInit Dictionary

dictionary BlobEventInit : EventInit {
    Blob data;
};

BlobEventInit Members

data of type Blob
    A Blob object containing the data to deliver via this event.

SettingsChangeEvent

[Constructor(DOMString type, optional SettingsChangeEventInit photoSettingsInitDict)]
interface PhotoSettingsEvent : Event {
    readonly    attribute PhotoSettings photoSettings;
};

photoSettings of type PhotoSettings, readonly
    A PhotoSettings object whose attributes indicate the current photo settings.

SettingsChangeEventInit Dictionary

dictionary SettingsChangeEventInit : EventInit {
    PhotoSettings photoSettings;
};

SettingsChangeEventInit Members

photoSettings of type PhotoSettings
    A PhotoSettings object containing the data to deliver via this event.

ImageCaptureError
The ImageCaptureError object is passed to an onerror event handler of an
 ImageCapture object if an error occurred when the object was created or any of its methods were invoked.
[NoInterfaceObject]
interface ImageCaptureError {
    readonly    attribute DOMString? errorDescription;
};

errorDescription of type DOMString, readonly, nullable
    The errorDescription attribute returns the appropriate DOMString for the error event. Acceptable values are FRAME_ERROR, OPTIONS_ERROR, PHOTO_ERROR, and ERROR_UNKNOWN.

MediaSettingsRange

interface MediaSettingsRange {
    readonly    attribute unsigned long max;
    readonly    attribute unsigned long min;
    readonly    attribute unsigned long initial;
};

max of type unsigned long, readonly
min of type unsigned long, readonly
initial of type unsigned long, readonly

MediaSettingsItem

The MediaSettingsItem interface allows a single setting to be managed.
interface MediaSettingsItem {
    readonly    attribute any value;
};

value of type any, readonly

PhotoCapabilities

The photoCapabilities attribute of the ImageCapture object provides the photo-specific settings options and current settings values. The following definitions are assumed for individual settings and are provided for informative purposes:
| Mode | Kelvin range | 
|---|---|
| incandescent | 2500-3500 | 
| fluorescent | 4000-5000 | 
| warm-fluorescent | 5000-5500 | 
| daylight | 5500-6500 | 
| cloudy-daylight | 6500-8000 | 
| twilight | 8000-9000 | 
| shade | 9000-10000 | 
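For illustration only, the informative table above can be encoded as a simple lookup. The mode names and Kelvin bounds come directly from the table; the function name and object shape below are invented for this sketch and are not part of the API.

```javascript
// Informative Kelvin ranges for white balance modes, as tabulated above.
var WHITE_BALANCE_KELVIN = {
  "incandescent":     { min: 2500, max: 3500 },
  "fluorescent":      { min: 4000, max: 5000 },
  "warm-fluorescent": { min: 5000, max: 5500 },
  "daylight":         { min: 5500, max: 6500 },
  "cloudy-daylight":  { min: 6500, max: 8000 },
  "twilight":         { min: 8000, max: 9000 },
  "shade":            { min: 9000, max: 10000 }
};

// Hypothetical helper: the midpoint of a mode's Kelvin range,
// or null if the mode name is unknown.
function kelvinMidpoint(mode) {
  var range = WHITE_BALANCE_KELVIN[mode];
  return range ? (range.min + range.max) / 2 : null;
}
```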
interface PhotoCapabilities {
                attribute MediaSettingsItem  autoWhiteBalanceMode;
                attribute MediaSettingsRange whiteBalanceMode;
                attribute ExposureMode       autoExposureMode;
                attribute MediaSettingsRange exposureCompensation;
                attribute MediaSettingsRange iso;
                attribute MediaSettingsItem  redEyeReduction;
                attribute MediaSettingsRange brightness;
                attribute MediaSettingsRange contrast;
                attribute MediaSettingsRange saturation;
                attribute MediaSettingsRange sharpness;
                attribute MediaSettingsRange imageHeight;
                attribute MediaSettingsRange imageWidth;
                attribute MediaSettingsRange zoom;
                attribute FillLightMode      fillLightMode;
                attribute FocusMode          focusMode;
};

autoExposureMode of type ExposureMode (see ExposureMode)
autoWhiteBalanceMode of type MediaSettingsItem
brightness of type MediaSettingsRange
contrast of type MediaSettingsRange
exposureCompensation of type MediaSettingsRange
fillLightMode of type FillLightMode (see FillLightMode)
focusMode of type FocusMode (see FocusMode)
imageHeight of type MediaSettingsRange
imageWidth of type MediaSettingsRange
iso of type MediaSettingsRange
redEyeReduction of type MediaSettingsItem
saturation of type MediaSettingsRange
sharpness of type MediaSettingsRange
whiteBalanceMode of type MediaSettingsRange (see WhiteBalanceModeEnum)
zoom of type MediaSettingsRange

ExposureMode

enum ExposureMode {
    "frame-average",
    "center-weighted",
    "spot-metering"
};

| Enumeration | Description |
|---|---|
| frame-average | Average of light information from entire scene |
| center-weighted | Sensitivity concentrated towards center of viewfinder |
| spot-metering | Spot-centered weighting |

FillLightMode

enum FillLightMode {
    "unavailable",
    "auto",
    "off",
    "flash",
    "on"
};

| Enumeration | Description |
|---|---|
| unavailable | This source does not have an option to change fill light modes (e.g., the camera does not have a flash) |
| auto | The video device's fill light will be enabled when required (typically low light conditions). Otherwise it will be off. Note that auto does not guarantee that a flash will fire when takePhoto() is called. Use flash to guarantee firing of the flash for the takePhoto() or grabFrame() methods. |
| off | The source's fill light and/or flash will not be used. |
| flash | This value will always cause the flash to fire for the takePhoto() or grabFrame() methods. |
| on | The source's fill light will be turned on (and remain on) while the source MediaStreamTrack is active |
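Per the table, only flash guarantees that the flash fires for a still capture. The sketch below illustrates choosing a mode for photography given a list of modes a device supports; pickFillLightMode and supportedModes are hypothetical names for this illustration, not part of the API.

```javascript
// Hedged sketch: pick a fill light mode for still capture.
// "supportedModes" is an assumed array of FillLightMode strings.
// Only "flash" guarantees the flash fires for takePhoto(), so prefer
// it, then fall back to "auto" (flash only in low light).
function pickFillLightMode(supportedModes) {
  if (supportedModes.indexOf("flash") !== -1) { return "flash"; }
  if (supportedModes.indexOf("auto") !== -1) { return "auto"; }
  return "unavailable";
}
```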
FocusMode

enum FocusMode {
    "unavailable",
    "auto",
    "manual"
};

| Enumeration | Description |
|---|---|
| unavailable | This source does not have an option to change focus modes. |
| auto | Auto-focus mode is enabled. |
| manual | Manual focus mode is enabled. |
PhotoSettings

The PhotoSettings object is optionally passed into the ImageCapture.setOptions() method in order to modify capture device settings specific to still imagery. Each of the attributes in this object is optional.
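Numeric PhotoSettings values should fall within the corresponding MediaSettingsRange advertised in photoCapabilities. The clamping helper below is a sketch of that pattern; clampToRange and requestedZoom are invented names, and the range argument is assumed to carry the min and max attributes defined above.

```javascript
// Hedged sketch: clamp a requested numeric setting into a
// MediaSettingsRange ({min, max, initial}) before calling setOptions().
function clampToRange(value, range) {
  if (value < range.min) { return range.min; }
  if (value > range.max) { return range.max; }
  return value;
}

// Example: build a PhotoSettings zoom request that respects the
// device's advertised zoom range.
function requestedZoom(desired, capabilities) {
  return { zoom: clampToRange(desired, capabilities.zoom) };
}
```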
dictionary PhotoSettings {
    boolean       autoWhiteBalanceMode;
    unsigned long whiteBalanceMode;
    ExposureMode  autoExposureMode;
    unsigned long exposureCompensation;
    unsigned long iso;
    boolean       redEyeReduction;
    unsigned long brightness;
    unsigned long contrast;
    unsigned long saturation;
    unsigned long sharpness;
    unsigned long zoom;
    unsigned long imageHeight;
    unsigned long imageWidth;
    FillLightMode fillLightMode;
    FocusMode     focusMode;
};

PhotoSettings Members

autoExposureMode of type ExposureMode (see ExposureMode)
autoWhiteBalanceMode of type boolean
brightness of type unsigned long
contrast of type unsigned long
exposureCompensation of type unsigned long
fillLightMode of type FillLightMode (see FillLightMode)
focusMode of type FocusMode (see FocusMode)
imageHeight of type unsigned long
imageWidth of type unsigned long
iso of type unsigned long
redEyeReduction of type boolean
saturation of type unsigned long
sharpness of type unsigned long
whiteBalanceMode of type unsigned long
zoom of type unsigned long

Example 1

navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);
function gotMedia(mediastream) {
   // Extract video track.
   var videoDevice = mediastream.getVideoTracks()[0];
   // Check if this device supports a picture mode...
   var captureDevice = new ImageCapture(videoDevice);
   if (captureDevice) {
      captureDevice.grabFrame().then(processFrame);
      }
   }
function processFrame(e) {
   var imgData = e.imageData;
   var width = imgData.width;
   var height = imgData.height;
   // Set all alpha values to medium opacity.
   for (var j = 3; j < imgData.data.length; j += 4) {
      imgData.data[j] = 128;
      }
   // Create a new ImageData object with the modified pixel values.
   var canvas = document.createElement('canvas');
   var ctx = canvas.getContext('2d');
   var newImg = ctx.createImageData(width, height);
   for (var i = 0; i < imgData.data.length; i++) {
      newImg.data[i] = imgData.data[i];
      }
   // ... and do something with the modified image ...
   }
function failedToGetMedia(error) {
   console.log('Stream failure');
   }

Example 2

navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);
function gotMedia(mediastream) {
   // Extract video track.
   var videoDevice = mediastream.getVideoTracks()[0];
   // Check if this device supports a picture mode...
   var captureDevice = new ImageCapture(videoDevice);
   if (captureDevice) {
      if (captureDevice.photoCapabilities.redEyeReduction) {
         captureDevice.setOptions({redEyeReduction: true})
            .then(function () { return captureDevice.takePhoto(); })
            .then(showPicture, function (error) { alert('Failed to take photo'); });
         }
      else
         console.log('No red eye reduction');
      }
   }
function showPicture(e) {
   var img = document.querySelector("img");
   img.src = URL.createObjectURL(e.data);
   }
function failedToGetMedia(error) {
   console.log('Stream failure');
   }

Example 3

<html>
<body>
<p><canvas id="frame"></canvas></p> 
<button onclick="stopFunction()">Stop frame grab</button>
<script>
var canvas = document.getElementById('frame');
var frameVar;
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);
function gotMedia(mediastream) {
   // Extract video track.
   var videoDevice = mediastream.getVideoTracks()[0];
   // Check if this device supports a picture mode...
   var captureDevice = new ImageCapture(videoDevice);
   if (captureDevice) {
      frameVar = setInterval(function () {
         captureDevice.grabFrame().then(processFrame);
         }, 1000);
      }
   }
function processFrame(e) {
   var imgData = e.imageData;
   canvas.width = imgData.width;
   canvas.height = imgData.height;
   canvas.getContext('2d').putImageData(imgData, 0, 0);
   }
function stopFunction(e) {
   clearInterval(frameVar);
   }
function failedToGetMedia(error) {
   console.log('Stream failure');
   }
</script>
</body>
</html>