WebXR Device API

W3C Working Draft, 21 May 2019

This version:
https://www.w3.org/TR/2019/WD-webxr-20190521/
Latest published version:
https://www.w3.org/TR/webxr/
Editor's Draft:
https://immersive-web.github.io/webxr/
Issue Tracking:
GitHub
Inline In Spec
Editors:
Brandon Jones (Google)
Nell Waliczek (Amazon [Microsoft until 2018])
Participate:
File an issue (open issues)
Mailing list archive
W3C’s #immersive-web IRC
Unstable API

Parts of the API represented in this document are incomplete and may change at any time.

While this specification is under development some concepts may be represented better by the WebXR Device API Explainer.


Abstract

This specification describes support for accessing virtual reality (VR) and augmented reality (AR) devices, including sensors and head-mounted displays, on the Web.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

The Immersive Web Working Group maintains a list of all bug reports that the group has not yet addressed. This draft highlights some of the pending issues that are still to be discussed in the working group. No decision has been taken on the outcome of these issues including whether they are valid. Pull requests with proposed specification text for outstanding issues are strongly encouraged.

This document was published by the Immersive Web Working Group as a Working Draft. This document is intended to become a W3C Recommendation.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

1. Introduction

Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges. The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment.

Immersive computing introduces strict requirements for high-precision, low-latency communication in order to deliver an acceptable experience. It also brings unique security concerns for a platform like the web. The WebXR Device API provides the interfaces necessary to enable developers to build compelling, comfortable, and safe immersive applications on the web across a wide variety of hardware form factors.

Other web interfaces, such as the RelativeOrientationSensor and AbsoluteOrientationSensor, can be repurposed to surface input from some devices to polyfill the WebXR Device API in limited situations. These interfaces cannot support multiple features of high-end immersive experiences, however, such as 6DoF tracking, presentation to headset peripherals, or tracked input devices.

1.1. Terminology

This document uses the acronym XR throughout to refer to the spectrum of hardware, applications, and techniques used for Virtual Reality, Augmented Reality, and other related technologies. The important commonality between them is that they offer some degree of spatial tracking with which to simulate a view of virtual content.

Terms like "XR Device", "XR Application", etc. are generally understood to apply to any of the above. Portions of this document that only apply to a subset of these devices will indicate so as appropriate.

The terms 3DoF and 6DoF are used throughout this document to describe the tracking capabilities of XR devices. A 3DoF (three degrees of freedom) device tracks only the user’s orientation, while a 6DoF (six degrees of freedom) device tracks both orientation and position.

1.2. Application flow

Most applications using the WebXR Device API will follow a similar usage pattern, illustrated by the following non-normative sketch.
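The helper logic here is hypothetical, and the { xrCompatible: true } context creation flag is assumed per this API's WebGL compatibility integration; each step is specified in detail in later sections.

async function runXRApplication() {
  // 1. Verify that the desired session mode is supported.
  await navigator.xr.supportsSession('immersive-vr');

  // 2. Request a session. Immersive sessions require user activation, so
  //    this would normally run from within a click handler.
  let session = await navigator.xr.requestSession('immersive-vr');

  // 3. Create an XR compatible WebGL layer and apply it to the session.
  let gl = document.createElement('canvas').getContext('webgl', { xrCompatible: true });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  // 4. Acquire a reference space to relate poses against.
  let refSpace = await session.requestReferenceSpace('eye-level');

  // 5. Run the frame loop until the session is shut down.
  session.requestAnimationFrame(function onXRFrame(time, frame) {
    session.requestAnimationFrame(onXRFrame);
    let viewerPose = frame.getViewerPose(refSpace);
    if (viewerPose) {
      // Render the scene for each view in viewerPose.views.
    }
  });
}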

2. Model

2.1. XR device

An XR device is a physical unit of hardware that can present imagery to the user. On desktop clients, this is usually a headset peripheral. On mobile clients, it may represent the mobile device itself in conjunction with a viewer harness. It may also represent devices without stereo-presentation capabilities but with more advanced tracking.

An XR device has a list of supported modes (a list of strings) that contains "inline" and all the other enumeration values of XRSessionMode that the XR device supports.

3. Initialization

3.1. navigator.xr

partial interface Navigator {
  [SecureContext, SameObject] readonly attribute XR xr;
};

The xr attribute’s getter MUST return the XR object that is associated with the context object.

3.2. XR

[SecureContext, Exposed=Window] interface XR : EventTarget {
  // Methods
  Promise<void> supportsSession(XRSessionMode mode);
  Promise<XRSession> requestSession(XRSessionMode mode);

  // Events
  attribute EventHandler ondevicechange;
};

The user agent MUST create an XR object when a Navigator object is created and associate it with that object.

An XR object is the entry point to the API, used to query for XR features available to the user agent and initiate communication with XR hardware via the creation of XRSessions.

An XR object has a list of XR devices (a list of XR device), which MUST be initially an empty list.

An XR object has an XR device (null or XR device) which is initially null and represents the active XR device from the list of XR devices.

The user agent MUST be able to enumerate XR devices attached to the system, at which time each available device is placed in the list of XR devices. Subsequent algorithms requesting enumeration MUST reuse the cached list of XR devices. Enumerating the devices should not initialize device tracking. After the first enumeration the user agent MUST begin monitoring device connection and disconnection, adding connected devices to the list of XR devices and removing disconnected devices.

Each time the list of XR devices changes the user agent should select an XR device by running the following steps:

  1. Let oldDevice be the XR device.

  2. If the list of XR devices is an empty list, set the XR device to null.

  3. If the list of XR devices's size is one, set the XR device to the list of XR devices[0].

  4. If there are any active XRSessions and the list of XR devices contains oldDevice, set the XR device to oldDevice.

  5. Else, set the XR device to a device of the user agent’s choosing.

  6. If this is the first time devices have been enumerated or oldDevice equals the XR device, abort these steps.

  7. Shut down any active XRSessions.

  8. Set the XR compatible boolean of all WebGLRenderingContextBase instances to `false`.

  9. Queue a task that fires a simple event named devicechange on the context object.

NOTE: The user agent is allowed to use any criteria it wishes to select an XR device when the list of XR devices contains multiple devices. For example, the user agent may always select the first item in the list, or provide settings UI that allows users to manage device priority. Ideally the algorithm used to select the default device is stable and will result in the same device being selected across multiple browsing sessions.

The user agent ensures an XR device is selected by running the following steps:

  1. If the context object's XR device is not null, abort these steps.

  2. Enumerate XR devices.

  3. Select an XR device.

The ondevicechange attribute is an Event handler IDL attribute for the devicechange event type.

When the supportsSession(mode) method is invoked, it MUST return a new Promise promise and run the following steps in parallel:

  1. Ensure an XR device is selected.

  2. If the XR device is null, reject promise with a "NotSupportedError" DOMException and abort these steps.

  3. If the XR device's list of supported modes does not contain mode, reject promise with a "NotSupportedError" DOMException and abort these steps.

  4. Else resolve promise.

Calling supportsSession() MUST NOT trigger device-selection UI as this would cause many sites to display XR-specific dialogs early in the document lifecycle without user activation. Additionally, calling supportsSession() MUST NOT interfere with any running XR applications on the system, and MUST NOT cause XR-related applications to launch such as system trays or storefronts.

The following code checks to see if immersive-vr sessions are supported.
navigator.xr.supportsSession('immersive-vr').then(() => {
  // 'immersive-vr' sessions are supported.
  // Page should advertise support to the user.
});

The XR object has a pending immersive session boolean, which MUST be initially false, an active immersive session, which MUST be initially null, and a list of inline sessions, which MUST be initially empty.

When the requestSession(mode) method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:

  1. Let immersive be true if mode is "immersive-vr" or "immersive-ar", and false otherwise.

  2. If immersive is true:

    1. If pending immersive session is true or active immersive session is not null, reject promise with an "InvalidStateError" DOMException and abort these steps.

    2. Else set pending immersive session to be true.

  3. Ensure an XR device is selected.

  4. If the XR device is null, reject promise with a "NotSupportedError" DOMException.

  5. Else if the XR device's list of supported modes does not contain mode, reject promise with a "NotSupportedError" DOMException.

  6. Else if immersive is true and the algorithm is not triggered by user activation, reject promise with a "SecurityError" DOMException and abort these steps.

  7. If promise was rejected and immersive is true, set pending immersive session to false.

  8. If promise was rejected, abort these steps.

  9. Let session be a new XRSession object.

  10. Initialize the session with session and mode.

  11. If immersive is true, set the active immersive session to session, and set pending immersive session to false.

  12. Else append session to the list of inline sessions.

  13. Resolve promise with session.

The following code attempts to retrieve an immersive-vr XRSession.
let xrSession;

navigator.xr.requestSession("immersive-vr").then((session) => {
  xrSession = session;
});
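Because step 6 above requires user activation for immersive sessions, requests are typically issued from an input event handler. The following non-normative sketch assumes a hypothetical button element on the page:

let xrButton = document.getElementById('enter-xr'); // hypothetical page element

xrButton.addEventListener('click', () => {
  navigator.xr.requestSession('immersive-vr').then((session) => {
    xrSession = session;
    // Configure the render state and begin the frame loop here.
  }).catch((error) => {
    // For example, a "NotSupportedError" DOMException if the mode is unavailable.
    console.error(error);
  });
});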

3.3. XRSessionMode

The XRSessionMode enum defines the modes that an XRSession can operate in.

enum XRSessionMode {
  "inline",
  "immersive-vr",
  "immersive-ar"
};

An immersive session refers to either an immersive-vr or an immersive-ar session. Immersive sessions MUST provide some level of viewer tracking, and content MUST be shown at the proper scale relative to the user and/or the surrounding environment. Additionally, immersive sessions MUST be given exclusive access to the XR device, meaning that while the immersive session is not blurred, the HTML document is not shown on the XR device's display, nor is content from other applications shown on the XR device's display.

NOTE: Examples of ways exclusive access may be presented include stereo content displayed on a virtual reality or augmented reality headset, or augmented reality content displayed fullscreen on a mobile device.

4. Session

4.1. XRSession

Any interaction with XR hardware is done via an XRSession object, which can only be retrieved by calling requestSession() on the XR object. Once a session has been successfully acquired it can be used to poll the device pose, query information about the user’s environment, and present imagery to the user.

The user agent, when possible, SHOULD NOT initialize device tracking or rendering capabilities until an XRSession has been acquired. This is to prevent unwanted side effects of engaging the XR systems when they’re not actively being used, such as increased battery usage or related utility applications appearing when a user first navigates to a page that only wants to test for the presence of XR hardware in order to advertise XR features. Not all XR platforms offer ways to detect the hardware’s presence without initializing tracking, however, so this is only a strong recommendation.

enum XREnvironmentBlendMode {
  "opaque",
  "additive",
  "alpha-blend",
};

[SecureContext, Exposed=Window] interface XRSession : EventTarget {
  // Attributes
  readonly attribute XREnvironmentBlendMode environmentBlendMode;
  [SameObject] readonly attribute XRRenderState renderState;
  [SameObject] readonly attribute XRSpace viewerSpace;

  // Methods
  void updateRenderState(optional XRRenderStateInit state);
  Promise<XRReferenceSpace> requestReferenceSpace(XRReferenceSpaceType type);

  FrozenArray<XRInputSource> getInputSources();

  long requestAnimationFrame(XRFrameRequestCallback callback);
  void cancelAnimationFrame(long handle);

  Promise<void> end();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onend;
  attribute EventHandler onselect;
  attribute EventHandler oninputsourceschange;
  attribute EventHandler onselectstart;
  attribute EventHandler onselectend;
};

Each XRSession has a mode, which is one of the values of XRSessionMode.

The viewerSpace on an XRSession has a list of views, which is a list of views corresponding to the views provided by the XR device.

To initialize the session, given session and mode, the user agent MUST run the following steps:

  1. Set session’s mode to mode.

  2. Initialize the render state.

  3. If no other features of the user agent have done so already, perform the necessary platform-specific steps to initialize the device’s tracking and rendering capabilities.

A number of different circumstances may shut down the session, which is permanent and irreversible. Once a session has been shut down the only way to access the XR device's tracking or rendering capabilities again is to request a new session. Each XRSession has an ended boolean, initially set to false, that indicates if it has been shut down.

When an XRSession is shut down the following steps are run:

  1. Let session be the target XRSession object.

  2. Set session’s ended value to true.

  3. If the active immersive session is equal to session, set the active immersive session to null.

  4. Remove session from the list of inline sessions.

  5. Reject any outstanding promises returned by session with an InvalidStateError.

  6. If no other features of the user agent are actively using them, perform the necessary platform-specific steps to shut down the device’s tracking and rendering capabilities.

The end() method provides a way to manually shut down a session. When invoked, it MUST run the following steps:

  1. Let promise be a new Promise.

  2. Shut down the target XRSession object.

  3. Queue a task to perform the following steps:

    1. Wait until any platform-specific steps related to shutting down the session have completed.

    2. Resolve promise.

  4. Return promise.
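For example, an application could wire an "Exit XR" control to end(), treating the end event as the single cleanup point since it fires for every session shutdown, not only those initiated by the page. A non-normative sketch, reusing xrSession from the earlier example:

xrSession.addEventListener('end', () => {
  // The session is permanently unusable; restore the page's 2D UI.
});

function onExitXRClicked() {
  xrSession.end().then(() => {
    // Platform-specific shutdown steps have completed.
  });
}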

Each XRSession has an active render state, which is a new XRRenderState, and a list of pending render states, which is initially empty.

The renderState attribute returns the XRSession's active render state.

Each XRSession has a minimum inline field of view and a maximum inline field of view, defined in radians. The values MUST be determined by the user agent and MUST fall in the range of 0 to PI.

When the updateRenderState(newState) method is invoked, the user agent MUST run the following steps:

  1. Let session be the target XRSession.

  2. If session’s ended value is true, throw an InvalidStateError and abort these steps.

  3. If newState’s baseLayer was created with an XRSession other than session, throw an InvalidStateError and abort these steps.

  4. If newState’s inlineVerticalFieldOfView is set and session is an immersive session, throw an InvalidStateError and abort these steps.

  5. Append newState to session’s list of pending render states.

When requested, the XRSession MUST apply pending render states by running the following steps:

  1. Let session be the target XRSession.

  2. Let activeState be session’s active render state.

  3. Let pendingStates be session’s list of pending render states.

  4. Set session’s list of pending render states to the empty list.

  5. For each newState in pendingStates:

    1. If newState’s depthNear value is set, set activeState’s depthNear to newState’s depthNear.

    2. If newState’s depthFar value is set, set activeState’s depthFar to newState’s depthFar.

    3. If newState’s inlineVerticalFieldOfView is set, set activeState’s inlineVerticalFieldOfView to newState’s inlineVerticalFieldOfView.

    4. If newState’s baseLayer is set, set activeState’s baseLayer to newState’s baseLayer.

    5. If newState’s outputContext is set, set activeState’s outputContext to newState’s outputContext and update the XRPresentationContext session to session.

  6. If activeState’s inlineVerticalFieldOfView is less than session’s minimum inline field of view, set activeState’s inlineVerticalFieldOfView to session’s minimum inline field of view.

  7. If activeState’s inlineVerticalFieldOfView is greater than session’s maximum inline field of view, set activeState’s inlineVerticalFieldOfView to session’s maximum inline field of view.

When the requestReferenceSpace(options) method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:

  1. Create a reference space, referenceSpace, as described by options.

  2. If referenceSpace is null, reject promise with a NotSupportedError and abort these steps.

  3. Resolve promise with referenceSpace.
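For example, an application can attempt to acquire one reference space type and fall back to another if the returned promise rejects, as in this non-normative sketch:

let xrReferenceSpace;

xrSession.requestReferenceSpace('bounded')
  .then((refSpace) => {
    xrReferenceSpace = refSpace;
  })
  .catch(() => {
    // NotSupportedError: bounded tracking is unavailable on this device,
    // so fall back to an eye-level space.
    return xrSession.requestReferenceSpace('eye-level')
      .then((refSpace) => { xrReferenceSpace = refSpace; });
  });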

When the getInputSources() method is invoked, the user agent MUST run the following steps:

  1. Return the current list of active input sources.

Each XRSession has an environment blending mode value, which is an enum which MUST be set to whichever of the following values best matches the behavior of imagery rendered by the session in relation to the user’s surrounding environment.

The environmentBlendMode attribute returns the XRSession's environment blending mode.

NOTE: Most Virtual Reality devices exhibit opaque blending behavior. Augmented Reality devices that use transparent optical elements frequently exhibit additive blending behavior, and Augmented Reality devices that use passthrough cameras frequently exhibit alpha-blend blending behavior.

The viewerSpace attribute is an XRSpace which tracks the pose of the viewer.

NOTE: For any given XRFrame calling getPose() with the viewerSpace and any XRReferenceSpace will return the same pose (without the views array) as calling getViewerPose() with the same XRReferenceSpace.

The onblur attribute is an Event handler IDL attribute for the blur event type.

The onfocus attribute is an Event handler IDL attribute for the focus event type.

The onend attribute is an Event handler IDL attribute for the end event type.

The oninputsourceschange attribute is an Event handler IDL attribute for the inputsourceschange event type.

The onselectstart attribute is an Event handler IDL attribute for the selectstart event type.

The onselectend attribute is an Event handler IDL attribute for the selectend event type.

The onselect attribute is an Event handler IDL attribute for the select event type.

We still need to document what happens when we end the session. (This is filed.)

We still need to document what happens when we blur all sessions. (This is filed.)

We still need to document what happens when we poll the device pose (This is filed.)

We still need to document how the list of active input sources is maintained. (This is filed.)

4.2. XRRenderState

There are multiple values that developers can configure which affect how the session’s output is composited. These values are tracked by an XRRenderState object.

dictionary XRRenderStateInit {
  double depthNear;
  double depthFar;
  double inlineVerticalFieldOfView;
  XRLayer? baseLayer;
  XRPresentationContext? outputContext;
};

[SecureContext, Exposed=Window] interface XRRenderState {
  readonly attribute double depthNear;
  readonly attribute double depthFar;
  readonly attribute double? inlineVerticalFieldOfView;
  readonly attribute XRLayer? baseLayer;
  readonly attribute XRPresentationContext? outputContext;
};

When an XRRenderState object is created for an XRSession session, the user agent MUST initialize the render state by running the following steps:

  1. Let state be the newly created XRRenderState object.

  2. Initialize state’s depthNear to 0.1.

  3. Initialize state’s depthFar to 1000.0.

  4. If session is an immersive session, initialize state’s inlineVerticalFieldOfView to null.

  5. Else initialize state’s inlineVerticalFieldOfView to PI * 0.5.

  6. Initialize state’s baseLayer to null.

  7. Initialize state’s outputContext to null.

The depthNear attribute defines the distance, in meters, of the near clip plane from the viewer. The depthFar attribute defines the distance, in meters, of the far clip plane from the viewer.

depthNear and depthFar are used in the computation of the projectionMatrix of XRViews and determine how the values of an XRWebGLLayer depth buffer are interpreted. depthNear MAY be greater than depthFar.

The inlineVerticalFieldOfView attribute defines the default vertical field of view in radians used when computing projection matrices for inline XRSessions. The projection matrix calculation also takes into account the aspect ratio of the outputContext's canvas. This value MUST be null for immersive sessions.
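The following non-normative sketch configures these values with updateRenderState(). glLayer is assumed to be an XRWebGLLayer created for xrSession as described in section 11.2, and inlineSession is assumed to be an inline session; the new values take effect when pending render states are next applied.

xrSession.updateRenderState({
  depthNear: 0.25,  // meters
  depthFar: 500.0,  // meters
  baseLayer: glLayer
});

// Only valid for inline sessions; throws InvalidStateError otherwise.
inlineSession.updateRenderState({ inlineVerticalFieldOfView: Math.PI * 0.4 });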

4.3. Animation Frames

The primary way an XRSession provides information about the tracking state of the XR device is via callbacks scheduled by calling requestAnimationFrame() on the XRSession instance.

callback XRFrameRequestCallback = void (DOMHighResTimeStamp time, XRFrame frame);

Each XRFrameRequestCallback object has a cancelled boolean initially set to false.

Each XRSession has a list of animation frame callbacks, which is initially empty, and an animation frame callback identifier, which is a number initially set to zero.

When the requestAnimationFrame(callback) method is invoked, the user agent MUST run the following steps:

  1. Let session be the target XRSession object.

  2. Increment session’s animation frame callback identifier by one.

  3. Append callback to session’s list of animation frame callbacks, associated with session’s animation frame callback identifier’s current value.

  4. Return session’s animation frame callback identifier’s current value.

When the cancelAnimationFrame(handle) method is invoked, the user agent MUST run the following steps:

  1. Let session be the target XRSession object.

  2. Find the entry in session’s list of animation frame callbacks that is associated with the value handle.

  3. If there is such an entry, set its cancelled boolean to true and remove it from session’s list of animation frame callbacks.

When an XRSession session receives updated viewer state from the XR device, it runs an XR animation frame with a timestamp now and an XRFrame frame, which MUST run the following steps regardless of whether the list of animation frame callbacks is empty:

  1. If session’s list of pending render states is not empty, apply pending render states.

  2. If session’s renderState's baseLayer is null, abort these steps.

  3. If session’s mode is "inline" and session’s renderState's outputContext is null, abort these steps.

  4. Let callbacks be a list of the entries in session’s list of animation frame callbacks, in the order in which they were added to the list.

  5. Set session’s list of animation frame callbacks to the empty list.

  6. Set frame’s active boolean to true.

  7. Set frame’s animationFrame boolean to true.

  8. For each entry in callbacks, in order:

    1. If the entry’s cancelled boolean is true, continue to the next entry.

    2. Invoke the Web IDL callback function, passing now and frame as the arguments.

    3. If an exception is thrown, report the exception.

  9. Set frame’s active boolean to false.
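Because the list of animation frame callbacks is cleared before the callbacks are invoked, a persistent render loop must re-register itself every frame, as in this non-normative sketch (xrSession and xrReferenceSpace are reused from earlier examples):

function onXRFrame(time, frame) {
  let session = frame.session;
  // Schedule the next frame before doing any rendering work for this one.
  session.requestAnimationFrame(onXRFrame);

  let viewerPose = frame.getViewerPose(xrReferenceSpace);
  if (viewerPose) {
    // Render each view, as shown in the section 7.2 example.
  }
}

xrSession.requestAnimationFrame(onXRFrame);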

4.4. The XR Compositor

The user agent MUST maintain an XR Compositor which handles presentation to the XR device and frame timing. The compositor MUST use an independent rendering context whose state is isolated from that of any WebGL contexts used as XRWebGLLayer sources to prevent the page from corrupting the compositor state or reading back content from other pages. The compositor MUST also run in a separate thread or process to decouple performance of the page from the ability to present new imagery to the user at the appropriate framerate.

The XR Compositor has a list of layer images, which is initially empty.

5. Frame Loop

5.1. XRFrame

An XRFrame represents a snapshot of the state of all of the tracked objects for an XRSession. Applications can acquire an XRFrame by calling requestAnimationFrame() on an XRSession with an XRFrameRequestCallback. When the callback is called it will be passed an XRFrame. Events which need to communicate tracking state, such as the select event, will also provide an XRFrame.

[SecureContext, Exposed=Window] interface XRFrame {
  [SameObject] readonly attribute XRSession session;

  XRViewerPose? getViewerPose(XRReferenceSpace referenceSpace);
  XRPose? getPose(XRSpace sourceSpace, XRSpace destinationSpace);
};

Each XRFrame has an active boolean which is initially set to false, and an animationFrame boolean which is initially set to false.

The session attribute returns the XRSession that produced the XRFrame.

When the getViewerPose(referenceSpace) method is invoked, the user agent MUST run the following steps:

  1. Let frame be the target XRFrame.

  2. Let session be frame’s session object.

  3. If frame’s animationFrame boolean is false, throw an InvalidStateError and abort these steps.

  4. Let pose be a new XRViewerPose object.

  5. Populate the pose of session’s viewerSpace in referenceSpace at the time represented by frame into pose.

  6. If pose is null, return null.

  7. Let xrviews be an empty list.

  8. For each view view in the list of views on the viewerSpace of session, perform the following steps:

    1. Let xrview be a new XRView object.

    2. Initialize xrview’s eye to view’s eye.

    3. Initialize xrview’s projectionMatrix to view’s projection matrix.

    4. Let offset be an XRRigidTransform equal to the view offset of view.

    5. Set xrview’s transform property to the result of multiplying the offset transform by the XRViewerPose's transform.

    6. Append xrview to xrviews.

  9. Set pose’s views to xrviews.

  10. Return pose.

When the getPose(sourceSpace, destinationSpace) method is invoked, the user agent MUST run the following steps:

  1. Let frame be the target XRFrame.

  2. Let pose be a new XRPose object.

  3. Populate the pose of sourceSpace in destinationSpace at the time represented by frame into pose.

  4. Return pose.

6. Spaces

A core feature of the WebXR Device API is the ability to provide spatial tracking. Spaces are the interface that enable applications to reason about how tracked entities are spatially related to the user’s physical environment and each other.

6.1. XRSpace

An XRSpace represents a virtual coordinate system with an origin that corresponds to a physical location. Spatial data that is requested from the API or given to the API is always expressed in relation to a specific XRSpace at the time of a specific XRFrame. Numeric values such as pose positions are coordinates in that space relative to its origin. The interface is intentionally opaque.

[SecureContext, Exposed=Window] interface XRSpace : EventTarget {
  
};

Each XRSpace has a session which is set to the XRSession that created the XRSpace.

Each XRSpace has a native origin that is tracked by the XR device's underlying tracking system, and an effective origin, which is the basis of the XRSpace's coordinate system. The transform from the effective space to the native origin's space is defined by an origin offset, which is an XRRigidTransform initially set to an identity transform.

The effective origin of an XRSpace can only be observed in the coordinate system of another XRSpace as an XRPose, returned by an XRFrame's getPose() method. The spatial relationship between XRSpaces MAY change between XRFrames.

To populate the pose of an XRSpace sourceSpace in an XRSpace destinationSpace at the time represented by an XRFrame frame into an XRPose pose, the user agent MUST run the following steps:

  1. If frame’s active boolean is false, throw an InvalidStateError and abort these steps.

  2. Let session be frame’s session object.

  3. If sourceSpace’s session does not equal session, throw an InvalidStateError and abort these steps.

  4. If destinationSpace’s session does not equal session, throw an InvalidStateError and abort these steps.

  5. If destinationSpace’s pose cannot be determined relative to sourceSpace at the time represented by frame, set pose to null and abort these steps.

  6. Let transform be pose’s transform.

  7. Set transform’s position to the location of sourceSpace’s effective origin in destinationSpace’s coordinate system.

  8. Set transform’s orientation to the orientation of sourceSpace’s effective origin in destinationSpace’s coordinate system.

6.2. XRReferenceSpace

An XRReferenceSpace is one of several common XRSpaces that applications can use to establish a spatial relationship with the user’s physical environment.

XRReferenceSpaces are generally expected to remain static for the duration of the XRSession, with the most common exception being mid-session reconfiguration by the user. The native origin for every XRReferenceSpace describes a coordinate system where +X is considered "Right", +Y is considered "Up", and -Z is considered "Forward".

enum XRReferenceSpaceType {
  "identity",
  "position-disabled",
  "eye-level",
  "floor-level",
  "bounded",
  "unbounded"
};

[SecureContext, Exposed=Window]
interface XRReferenceSpace : XRSpace {
  XRReferenceSpace getOffsetReferenceSpace(XRRigidTransform originOffset);

  attribute EventHandler onreset;
};

Each XRReferenceSpace has a type, which is an XRReferenceSpaceType.

An XRReferenceSpace is most frequently obtained by calling requestReferenceSpace(), which creates an instance of an XRReferenceSpace or an interface extending it, determined by the XRReferenceSpaceType enum value passed into the call. The type indicates the tracking behavior that the reference space will exhibit:

Devices that support any of the following XRReferenceSpaceTypes MUST support all three: position-disabled, eye-level, floor-level. The minimum sensor data necessary for supporting all three reference spaces is the same.

Note: The position-disabled type is primarily intended for use with pre-rendered media such as panoramic photos or videos. It should not be used for most other media types due to user discomfort associated with the lack of a neck model or full positional tracking.

The onreset attribute is an Event handler IDL attribute for the reset event type.

When an XRReferenceSpace is requested, the user agent MUST create a reference space by running the following steps:

  1. Let session be the XRSession object that requested creation of a reference space.

  2. Let type be set to the XRReferenceSpaceType passed to requestReferenceSpace().

  3. If type is bounded, let referenceSpace be a new XRBoundedReferenceSpace.

  4. Else let referenceSpace be a new XRReferenceSpace.

  5. Initialize referenceSpace’s type to be type.

  6. Initialize referenceSpace’s session to be session.

  7. Return referenceSpace.

The getOffsetReferenceSpace(originOffset) method MUST perform the following steps when invoked:
  1. Let base be the XRReferenceSpace the method was called on.

  2. If base is an instance of XRBoundedReferenceSpace, let offsetSpace be a new XRBoundedReferenceSpace and set offsetSpace’s boundsGeometry to base’s boundsGeometry, with each point multiplied by the inverse of originOffset.

  3. Else let offsetSpace be a new XRReferenceSpace.

  4. Set offsetSpace’s native origin to base’s native origin.

  5. Set offsetSpace’s origin offset to the result of multiplying originOffset by base’s origin offset.

  6. Return offsetSpace.
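For example, an application can implement artificial locomotion by replacing its reference space with an offset version of itself instead of moving every object in the scene. A non-normative sketch:

// Shift the effective origin 2 meters along -Z. Repeated calls accumulate,
// since each new origin offset is multiplied into the previous one.
let offset = new XRRigidTransform({ x: 0, y: 0, z: -2.0 });
xrReferenceSpace = xrReferenceSpace.getOffsetReferenceSpace(offset);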

6.3. XRBoundedReferenceSpace

XRBoundedReferenceSpace extends XRReferenceSpace to include boundsGeometry, indicating the pre-configured boundaries of the user’s space.

[SecureContext, Exposed=Window]
interface XRBoundedReferenceSpace : XRReferenceSpace {
  readonly attribute FrozenArray<DOMPointReadOnly> boundsGeometry;
};    

The origin of a XRBoundedReferenceSpace MUST be positioned at the floor, such that the `y` axis equals `0` at floor level. The `x` and `z` position and orientation are initialized based on the conventions of the underlying platform, typically expected to be near the center of the room facing in a logical forward direction.

Note: Other XR platforms sometimes refer to the type of tracking offered by a bounded reference space as "room scale" tracking. An XRBoundedReferenceSpace is not intended to describe multi-room spaces, areas with uneven floor levels, or very large open areas. Content that needs to handle those scenarios should use an unbounded reference space.

Each XRBoundedReferenceSpace has a native bounds geometry describing the border around the XRBoundedReferenceSpace, which the user can expect to safely move within. The polygonal boundary is given as an array of DOMPointReadOnlys, which represents a loop of points at the edges of the safe space. The points describe offsets from the native origin in meters. Points MUST be given in a clockwise order as viewed from above, looking towards the negative end of the Y axis. The y value of each point MUST be 0 and the w value of each point MUST be 1. The bounds can be considered to originate at the floor and extend infinitely high. The shape it describes MAY be convex or concave.

The boundsGeometry attribute is an array of DOMPointReadOnlys such that each entry is equal to the entry in the XRBoundedReferenceSpace's native bounds geometry premultiplied by the inverse of the origin offset. In other words, it provides the same border in XRBoundedReferenceSpace coordinates relative to the effective origin.

Note: Content should not require the user to move beyond the boundsGeometry. It is possible for the user to move beyond the bounds if their physical surroundings allow for it, resulting in position values outside of the polygon they describe. This is not an error condition and should be handled gracefully by page content.

Note: Content generally should not provide a visualization of the boundsGeometry, as it’s the user agent’s responsibility to ensure that safety critical information is provided to the user.

7. Views

7.1. XRView

An XRView describes a single view into an XR scene for a given frame.

Each view corresponds to a display or portion of a display used by an XR device to present imagery to the user. They are used to retrieve all the information necessary to render content that is well aligned to the view's physical output properties, including the field of view, eye offset, and other optical properties. Views may cover overlapping regions of the user’s vision. No guarantee is made about the number of views any XR device uses or their order, nor is the number of views required to be constant for the duration of an XRSession.

A view has an associated internal view offset, which is an XRRigidTransform describing the position and orientation of the view in the viewerSpace's coordinate system.

A view has an associated projection matrix, which is a matrix describing the projection to be used when rendering the view, provided by the underlying XR device. The projection matrix MAY include transformations such as shearing that prevent the projection from being accurately described by a simple frustum.

A view has an associated eye, which is an XREye describing which eye this view is expected to be shown to. If the view does not have an intrinsically associated eye (the display is monoscopic, for example) this value MUST be set to "left".

NOTE: Many HMDs will request that content render two views, one for the left eye and one for the right, while most magic window devices will only request one view, but applications should never assume a specific view configuration. For example: A magic window device may request two views if it is capable of stereo output, but may revert to requesting a single view for performance reasons if the stereo output mode is turned off. Similarly, HMDs may request more than two views to facilitate a wide field of view or displays of different pixel density.

enum XREye {
  "left",
  "right"
};

[SecureContext, Exposed=Window] interface XRView {
  readonly attribute XREye eye;
  [SameObject] readonly attribute Float32Array projectionMatrix;
  [SameObject] readonly attribute XRRigidTransform transform;
};

The eye attribute describes the eye of the underlying view. This attribute’s primary purpose is to ensure that pre-rendered stereo content can present the correct portion of the content to the correct eye.

The projectionMatrix attribute is the projection matrix of the underlying view. It is strongly recommended that applications use this matrix without modification or decomposition. Failure to use the provided projection matrices when rendering may cause the presented frame to be distorted or badly aligned, resulting in varying degrees of user discomfort.

The transform attribute is the XRRigidTransform of the viewpoint.

NOTE: The transform can be used to position camera objects in many rendering libraries. If a more traditional view matrix is needed by the application one can be retrieved by calling `view.transform.inverse.matrix`.
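The following non-normative sketch uploads a view's values to WebGL; projectionLocation and viewLocation are assumed to be mat4 uniform locations in the application's shader program:

gl.uniformMatrix4fv(projectionLocation, false, xrView.projectionMatrix);
// A traditional view matrix is the inverse of the viewpoint's transform.
gl.uniformMatrix4fv(viewLocation, false, xrView.transform.inverse.matrix);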

7.2. XRViewport

An XRViewport object describes a viewport, or rectangular region, of a graphics surface.

[SecureContext, Exposed=Window] interface XRViewport {
  readonly attribute long x;
  readonly attribute long y;
  readonly attribute long width;
  readonly attribute long height;
};

The x and y attributes define an offset from the surface origin and the width and height attributes define the rectangular dimensions of the viewport.

The exact interpretation of the viewport values depends on the conventions of the graphics API the viewport is associated with:

The following code loops through all of the XRViews of an XRViewerPose, queries an XRViewport from an XRWebGLLayer for each, and uses them to set the appropriate WebGL viewports for rendering.
xrSession.requestAnimationFrame((time, xrFrame) => {
  let viewer = xrFrame.getViewerPose(xrReferenceSpace);
  if (!viewer) return; // Tracking was unavailable for this frame.

  gl.bindFramebuffer(gl.FRAMEBUFFER, xrWebGLLayer.framebuffer);
  for (let xrView of viewer.views) {
    let xrViewport = xrWebGLLayer.getViewport(xrView);
    gl.viewport(xrViewport.x, xrViewport.y, xrViewport.width, xrViewport.height);

    // WebGL draw calls will now be rendered into the appropriate viewport.
  }
});

8. Geometric Primitives

8.1. Matrices

WebXR provides various transforms in the form of matrices. WebXR uses the WebGL conventions when communicating matrices, in which 4x4 matrices are given as 16 element Float32Arrays with column major storage, and are applied to column vectors by premultiplying the matrix from the left. They may be passed directly to WebGL’s uniformMatrix4fv function, used to create an equivalent DOMMatrix, or used with a variety of third party math libraries.

Matrices returned from the WebXR Device API will be a 16 element Float32Array laid out like so:
[a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15]

Applying this matrix as a transform to a column vector specified as a DOMPointReadOnly, like so:

{x:X, y:Y, z:Z, w:1}

Produces the following result:

a0 a4 a8  a12  *  X  =  a0 * X + a4 * Y +  a8 * Z + a12
a1 a5 a9  a13     Y     a1 * X + a5 * Y +  a9 * Z + a13
a2 a6 a10 a14     Z     a2 * X + a6 * Y + a10 * Z + a14
a3 a7 a11 a15     1     a3 * X + a7 * Y + a11 * Z + a15
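The Geometry Interfaces API shares this column-major layout, so the multiplication above can be reproduced with a DOMMatrix, as in this non-normative sketch:

// matrix is a 16 element column-major Float32Array returned by this API.
let m = DOMMatrix.fromFloat32Array(matrix);
let p = m.transformPoint(new DOMPoint(1, 2, 3, 1));
// p.x === a0 * 1 + a4 * 2 + a8 * 3 + a12, and similarly for y, z, and w.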

8.2. Normalization

There are several algorithms which call for a vector or quaternion to be normalized, which means to scale the components to have a collective magnitude of 1.0.

To normalize a list of components the UA MUST perform the following steps:

  1. Let length be the square root of the sum of the squares of each component.

  2. If length is 0, throw an InvalidStateError and abort these steps.

  3. Divide each component by length, setting the component to the result.
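Expressed as code, the normalization steps might look like this non-normative sketch:

function normalize(components) {
  let length = Math.hypot(...components);
  if (length === 0) {
    throw new DOMException('Cannot normalize zero-length input', 'InvalidStateError');
  }
  return components.map((c) => c / length);
}

normalize([0, 3, 4]); // [0, 0.6, 0.8]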

8.3. XRRigidTransform

An XRRigidTransform is a transform described by a position and orientation. When interpreting an XRRigidTransform the orientation is always applied prior to the position.

[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit position, optional DOMPointInit orientation)]
interface XRRigidTransform {
  [SameObject] readonly attribute DOMPointReadOnly position;
  [SameObject] readonly attribute DOMPointReadOnly orientation;
  [SameObject] readonly attribute Float32Array matrix;
  [SameObject] readonly attribute XRRigidTransform inverse;
};

The XRRigidTransform(position, orientation) constructor MUST perform the following steps when invoked:

  1. Let transform be a new XRRigidTransform.

  2. If position is not a DOMPointInit initialize transform’s position to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.

  3. If position’s w value is not 1.0, throw a TypeError.

  4. Else initialize transform’s position’s x value to position’s x dictionary member, y value to position’s y dictionary member, z value to position’s z dictionary member and w to 1.0.

  5. If orientation is not a DOMPointInit initialize transform’s orientation to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.

  6. Else initialize transform’s orientation’s x value to orientation’s x dictionary member, y value to orientation’s y dictionary member, z value to orientation’s z dictionary member and w value to orientation’s w dictionary member.

  7. Normalize x, y, z, and w components of transform’s orientation.

  8. Return transform.

The position attribute is a 3-dimensional point, given in meters, describing the translation component of the transform. The position's w attribute MUST be 1.0.

The orientation attribute is a quaternion describing the rotational component of the transform. The orientation MUST be normalized to have a length of 1.0.

The matrix attribute returns the transform described by the position and orientation attributes as a matrix. This attribute SHOULD be lazily evaluated.

The inverse attribute returns a XRRigidTransform which, if applied to an object that had previously been transformed by the original XRRigidTransform, would undo the transform and return the object to its initial pose. This attribute SHOULD be lazily evaluated. The XRRigidTransform returned by inverse MUST return the originating XRRigidTransform as its inverse.

An XRRigidTransform with a position of { x: 0, y: 0, z: 0, w: 1 } and an orientation of { x: 0, y: 0, z: 0, w: 1 } is known as an identity transform.

To multiply two XRRigidTransforms, B and A, the UA MUST perform the following steps:

  1. Let result be a new XRRigidTransform object.

  2. Set result’s matrix to the result of premultiplying B’s matrix from the left onto A’s matrix.

  3. Set result’s orientation to the quaternion that describes the rotation indicated by the top left 3x3 sub-matrix of result’s matrix.

  4. Set result’s position to the vector given by the fourth column of result’s matrix.

  5. Return result.

result is a transform from A’s source space to B’s destination space.

8.4. XRRay

An XRRay is a geometric ray described by an origin point and direction vector.

[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit origin, optional DOMPointInit direction),
 Constructor(XRRigidTransform transform)]
interface XRRay {
  [SameObject] readonly attribute DOMPointReadOnly origin;
  [SameObject] readonly attribute DOMPointReadOnly direction;
  [SameObject] readonly attribute Float32Array matrix;
};

The XRRay(origin, direction) constructor MUST perform the following steps when invoked:

  1. Let ray be a new XRRay.

  2. If origin is not a DOMPointInit initialize ray’s origin to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.

  3. Else initialize ray’s origin’s x value to origin’s x dictionary member, y value to origin’s y dictionary member, z value to origin’s z dictionary member and w to 1.0.

  4. If direction is not a DOMPointInit initialize ray’s direction to { x: 0.0, y: 0.0, z: -1.0, w: 0.0 }.

  5. Else initialize ray’s direction’s x value to direction’s x dictionary member, y value to direction’s y dictionary member, z value to direction’s z dictionary member and w value to 0.0.

  6. Normalize the x, y, and z components of ray’s direction.

  7. Return ray.

The XRRay(transform) constructor MUST perform the following steps when invoked:

  1. Let ray be a new XRRay.

  2. Initialize ray’s origin to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.

  3. Initialize ray’s direction to { x: 0.0, y: 0.0, z: -1.0, w: 0.0 }.

  4. Multiply ray’s origin by the transform’s matrix and set ray’s origin to the result.

  5. Multiply ray’s direction by the transform’s matrix and set ray’s direction to the result.

  6. Normalize the x, y, and z components of ray’s direction.

  7. Return ray.

The origin attribute defines the 3-dimensional point in space that the ray originates from, given in meters. The origin's w attribute MUST be 1.0.

The direction attribute defines the ray’s 3-dimensional directional vector. The direction's w attribute MUST be 0.0 and the vector MUST be normalized to have a length of 1.0.

The matrix attribute is a matrix which represents the transform from a ray originating at [0, 0, 0] and extending down the negative Z axis to the ray described by the XRRay's origin and direction.

NOTE: The XRRay's matrix can be used to easily position graphical representations of the ray when rendering.

9. Pose

9.1. XRPose

An XRPose describes a position and orientation in space relative to an XRSpace.

[SecureContext, Exposed=Window] interface XRPose {
  [SameObject] readonly attribute XRRigidTransform transform;
  readonly attribute boolean emulatedPosition;
};

The transform attribute describes the XRRigidTransform between two XRSpaces.

The emulatedPosition attribute MUST be false if the transform.position values are based on sensor readings, or true if the position values are software estimations, such as those provided by a neck or arm model.

9.2. XRViewerPose

An XRViewerPose is an XRPose describing the state of a viewer of the XR scene as tracked by the XR device. A viewer may represent a tracked piece of hardware, the observed position of a user’s head relative to the hardware, or some other means of computing a series of viewpoints into the XR scene. XRViewerPoses can only be queried relative to an XRReferenceSpace. In addition to the XRPose values, an XRViewerPose provides an array of views which include rigid transforms to indicate the viewpoint and projection matrices. These values should be used by the application when rendering a frame of an XR scene.

[SecureContext, Exposed=Window] interface XRViewerPose : XRPose {
  [SameObject] readonly attribute FrozenArray<XRView> views;
};

The views array is a sequence of XRViews describing the viewpoints of the XR scene, relative to the XRReferenceSpace the XRViewerPose was queried with. Every view of the XR scene in the array must be rendered in order to display correctly on the XR device. Each XRView includes rigid transforms to indicate the viewpoint and projection matrices, and can be used to query XRViewports from layers when needed.

NOTE: The XRViewerPose's transform can be used to position graphical representations of the viewer for spectator views of the scene or multi-user interaction.

10. Input

10.1. XRInputSource

An XRInputSource represents any input mechanism which allows the user to perform targeted actions in the same virtual space as the viewer.

enum XRHandedness {
  "none",
  "left",
  "right"
};

enum XRTargetRayMode {
  "gaze",
  "tracked-pointer",
  "screen"
};

[SecureContext, Exposed=Window]
interface XRInputSource {
  readonly attribute XRHandedness handedness;
  readonly attribute XRTargetRayMode targetRayMode;
  [SameObject] readonly attribute XRSpace targetRaySpace;
  [SameObject] readonly attribute XRSpace? gripSpace;
  [SameObject] readonly attribute Gamepad? gamepad;
};

Each XRInputSource SHOULD define a primary action. The primary action is a platform-specific action that, when engaged, produces selectstart, selectend, and select events. Examples of possible primary actions are pressing a trigger, touchpad, or button, speaking a command, or making a hand gesture. If the platform guidelines define a recommended primary input then it should be used as the primary action, otherwise the user agent is free to select one.

The handedness attribute describes which hand the input source is associated with, if any. Input sources with no natural handedness (such as headset-mounted controls or standard gamepads) or for which the handedness is not currently known MUST set this attribute to "none".

The targetRayMode attribute describes the method used to produce the target ray, and indicates how the application should present the target ray to the user if desired.

Note: Some input sources, like an XRInputSource with targetRayMode set to screen, will only be added to the session’s list of active input sources immediately before the selectstart event, and removed from the session’s list of active input sources immediately after the selectend event.

The targetRaySpace attribute is an XRSpace that tracks the pose of the preferred pointing ray of the XRInputSource, as defined by the targetRayMode.

The gripSpace attribute is an XRSpace that tracks the pose that should be used to render virtual objects such that they appear to be held in the user’s hand. If the user were to hold a straight rod, this XRSpace places the origin at the centroid of their curled fingers, with the -Z axis pointing along the length of the rod towards their thumb. The X axis is perpendicular to the back of the hand being described, with the back of the user’s right hand pointing towards +X and the back of the user’s left hand pointing towards -X. The Y axis is implied by the relationship between the X and Z axis, with +Y roughly pointing in the direction of the user’s arm.

The gripSpace MUST be null if the input source isn’t inherently trackable such as for input sources with a targetRayMode of gaze or screen.

The gamepad attribute is a Gamepad that describes the state of any buttons and axes on the input device. If the device does not have at least one of the following properties, the gamepad attribute MUST be null:

NOTE: XRInputSources with a null gamepad can still fire select, selectstart, and selectend events to report binary inputs.
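For example, an application can react to any input source's primary action and query the pose of its target ray at the time of the action. A non-normative sketch, reusing xrSession and xrReferenceSpace from earlier examples:

xrSession.addEventListener('select', (event) => {
  let inputSource = event.inputSource;
  let rayPose = event.frame.getPose(inputSource.targetRaySpace, xrReferenceSpace);
  if (rayPose) {
    // Use rayPose.transform to cast a ray into the scene for picking.
  }
  if (inputSource.gripSpace) {
    // Trackable input: a grip pose can position a hand-held model.
  }
});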

10.2. Gamepad API Integration

Gamepad instances returned by an XRInputSource's gamepad attribute behave as described by the Gamepad API, with several additional behavioral restrictions:

The gamepad's id also enforces additional behavioral restrictions, and MUST conform to the following rules:

GitHub #550 - Additional restrictions, still under discussion, will be applied to the formatting of the gamepad's id attribute. Until those restrictions have been identified, all gamepad ids should default to "unknown".

10.3. "xr-standard" Gamepad Mapping

The WebXR Device API extends the GamepadMappingType to describe the mapping of common XR controller devices.

enum GamepadMappingType {
  "",            // Defined in the Gamepad API
  "standard",    // Defined in the Gamepad API
  "xr-standard",
};

The xr-standard mapping indicates that the layout of the buttons and axes of the gamepad corresponds as closely as possible to the tables below.

In order to report a mapping of xr-standard the device MUST report a targetRayMode of tracked-pointer and MUST have a non-null gripSpace. It also MUST have at least one touchpad or joystick and MUST have at least one primary button, often a trigger, separate from the touchpad or joystick. If a device does not meet the requirements for the xr-standard mapping it may still be exposed on an XRInputSource with the "" (empty string) mapping. The xr-standard mapping MUST only be used by Gamepad instances reported by an XRInputSource.

Button      | xr-standard Location                | Required
buttons[0]  | Primary Button/Trigger              | Yes
buttons[1]  | Primary Touchpad/Joystick Button    | Yes
buttons[2]  | Secondary Button/Grip Trigger       | No
buttons[3]  | Secondary Touchpad/Joystick Button  | No

Axis        | xr-standard Location                | Required
axes[0]     | Primary Touchpad/Joystick X         | Yes
axes[1]     | Primary Touchpad/Joystick Y         | Yes
axes[2]     | Secondary Touchpad/Joystick X       | No
axes[3]     | Secondary Touchpad/Joystick Y       | No

Additional buttons or axes may be exposed after these reserved indices, and SHOULD appear in order of decreasing importance. Related axes (such as both axes of a joystick) SHOULD be grouped and, if applicable, SHOULD appear in X, Y, Z order. Devices that lack one of the inputs listed in the tables above MUST still preserve their place in the buttons or axes array. If a device has both a touchpad and a joystick the UA MAY designate whichever it chooses to be the primary axis-based input. Buttons reserved by the UA or platform MUST NOT be exposed.

NOTE: A diagram of a simple `xr-standard` controller and an advanced `xr-standard` controller demonstrates how two potential controllers would be exposed with the `xr-standard` mapping.

11. Layers

11.1. XRLayer

An XRLayer defines a source of bitmap images and a description of how the image is to be rendered to the XR device. Initially only one type of layer, the XRWebGLLayer, is defined but future revisions of the spec may extend the available layer types.

[SecureContext, Exposed=Window] interface XRLayer {};

11.2. XRWebGLLayer

An XRWebGLLayer is a layer which provides a WebGL framebuffer to render into, enabling hardware accelerated rendering of 3D graphics to be presented on the XR device.

typedef (WebGLRenderingContext or
         WebGL2RenderingContext) XRWebGLRenderingContext;

dictionary XRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = true;
  boolean stencil = false;
  boolean alpha = true;
  boolean ignoreDepthValues = false;
  double framebufferScaleFactor = 1.0;
};

[SecureContext, Exposed=Window, Constructor(XRSession session,
             XRWebGLRenderingContext context,
             optional XRWebGLLayerInit layerInit)]
interface XRWebGLLayer : XRLayer {
  // Attributes
  [SameObject] readonly attribute XRWebGLRenderingContext context;

  readonly attribute boolean antialias;
  readonly attribute boolean ignoreDepthValues;

  [SameObject] readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;

  // Methods
  XRViewport? getViewport(XRView view);
  void requestViewportScaling(double viewportScaleFactor);

  // Static Methods
  static double getNativeFramebufferScaleFactor(XRSession session);
};

The XRWebGLLayer(session, context, layerInit) constructor MUST perform the following steps when invoked:

  1. Let layer be a new XRWebGLLayer.

  2. If session’s ended value is true, throw an InvalidStateError and abort these steps.

  3. If context is lost, throw an InvalidStateError and abort these steps.

  4. If context’s XR compatible boolean is false, throw an InvalidStateError and abort these steps.

  5. Initialize layer’s context to context.

  6. Initialize layer’s antialias to layerInit’s antialias value.

  7. If layerInit’s ignoreDepthValues value is false and the XR Compositor will make use of depth values, initialize layer’s ignoreDepthValues to false.

  8. Else initialize layer’s ignoreDepthValues to true.

  9. Initialize layer’s framebuffer to a new opaque framebuffer created with context.

  10. Initialize the layer’s swap chain.

  11. If layer’s swap chain was unable to be created for any reason, throw an OperationError and abort these steps.

  12. Return layer.
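The following non-normative sketch puts the constructor to use; the { xrCompatible: true } context creation flag is assumed per this API's WebGL compatibility integration:

let canvas = document.createElement('canvas');
let gl = canvas.getContext('webgl', { xrCompatible: true });

let glLayer = new XRWebGLLayer(xrSession, gl, { antialias: true });
xrSession.updateRenderState({ baseLayer: glLayer });

// Optionally match the device's physical resolution instead of the default:
// let scale = XRWebGLLayer.getNativeFramebufferScaleFactor(xrSession);
// glLayer = new XRWebGLLayer(xrSession, gl, { framebufferScaleFactor: scale });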

The context attribute is the WebGLRenderingContext the XRWebGLLayer was created with.

The framebuffer attribute of an XRWebGLLayer is an instance of a WebGLFramebuffer which has been marked as opaque. An opaque framebuffer functions identically to a standard WebGLFramebuffer with the following changes that make it behave more like the default framebuffer:

The framebufferWidth and framebufferHeight attributes return the width and height of the framebuffer's attachments, respectively.

The antialias attribute is true if the framebuffer supports antialiasing using a technique of the UA’s choosing, and false if no antialiasing will be performed.

The ignoreDepthValues attribute, if true, indicates the XR Compositor MUST NOT make use of values in the depth buffer attachment when rendering. When the attribute is false it indicates that the content of the depth buffer attachment will be used by the XR Compositor and is expected to be representative of the scene rendered into the layer.

Depth values stored in the buffer are expected to be between 0.0 and 1.0, with 0.0 representing the distance of depthNear and 1.0 representing the distance of depthFar, with intermediate values interpolated linearly. This is the default behavior of WebGL. (See documentation for the depthRange function for additional details.)

NOTE: Making the scene’s depth buffer available to the compositor allows some platforms to provide quality and comfort improvements such as improved reprojection.

Each XRWebGLLayer MUST have a list of viewports which contains one WebGL viewport for each XRView the XRSession currently exposes. The viewports MUST NOT be overlapping. The XRWebGLLayer MUST also have a viewport scale factor, initially set to 1.0, and a minimum viewport scale factor set to a UA-determined value between 0 and 1.

getViewport() queries the XRViewport the given XRView should use when rendering to the layer.

The getViewport(view) method, when invoked, MUST run the following steps:

  1. If layer was created with a different XRSession than the one that produced view, return null.

  2. Let glViewport be the WebGL viewport from the list of viewports associated with view.

  3. Let viewport be a new XRViewport instance.

  4. Initialize viewport’s x to glViewport’s x component.

  5. Initialize viewport’s y to glViewport’s y component.

  6. Initialize viewport’s width to glViewport’s width component multiplied by the viewport scale factor.

  7. Initialize viewport’s height to glViewport’s height component multiplied by the viewport scale factor.

  8. Return viewport.
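The following non-normative example shows a typical render loop that queries the viewport for each view before drawing. It assumes gl is the layer’s WebGL context, xrReferenceSpace is a previously requested XRReferenceSpace, and drawScene() is an application-defined function.
function onXRFrame(time, frame) {
  let session = frame.session;
  session.requestAnimationFrame(onXRFrame);

  let glLayer = session.renderState.baseLayer;
  gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);

  let viewerPose = frame.getViewerPose(xrReferenceSpace);
  if (viewerPose) {
    for (let view of viewerPose.views) {
      // Each view renders to its own non-overlapping region of the framebuffer.
      let viewport = glLayer.getViewport(view);
      gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
      drawScene(view);
    }
  }
}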

The framebuffer size cannot be adjusted by the developer after the XRWebGLLayer has been created, but it can be useful to adjust the resolution at which content is rendered at runtime to aid application performance. To do so, developers can request that the size of the viewports in the list of viewports be changed using the requestViewportScaling() method.

The requestViewportScaling(scaleFactor) method, when invoked, MUST run the following steps:

  1. If scaleFactor is greater than 1.0, set scaleFactor to 1.0.

  2. If scaleFactor is less than the minimum viewport scale factor, set scaleFactor to the minimum viewport scale factor.

  3. If the XR device places additional device-specific restrictions on viewport size, adjust scaleFactor accordingly.

  4. Set the viewport scale factor to scaleFactor.
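The following non-normative example sketches one way an application might use requestViewportScaling() to recover performance; averageFrameTime and targetFrameTime are hypothetical, application-tracked timing measurements.
if (averageFrameTime > targetFrameTime) {
  // Request that subsequent viewports cover 75% of each dimension of the
  // full-sized viewport. The UA may clamp this to its minimum scale factor.
  glLayer.requestViewportScaling(0.75);
}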

Each XRSession MUST identify a native WebGL framebuffer resolution, which is the pixel resolution of a WebGL framebuffer required to match the physical pixel resolution of the XR device.

The native WebGL framebuffer resolution is determined by running the following steps:

  1. Let session be the target XRSession.

  2. If session’s mode value is not "inline", set the native WebGL framebuffer resolution to the resolution required to achieve a 1:1 ratio between the pixels of a framebuffer large enough to contain all of the session’s XRViews and the physical screen pixels in the area of the display under the highest magnification, then abort these steps. If no method exists to determine the native resolution as described, the recommended WebGL framebuffer resolution MAY be used.

  3. If session’s mode value is "inline", set the native WebGL framebuffer resolution to the size of the session’s renderState's outputContext's canvas in physical display pixels and reevaluate these steps every time the size of the canvas changes or the outputContext is changed.

Additionally, the XRSession MUST identify a recommended WebGL framebuffer resolution, which represents a best estimate of the WebGL framebuffer resolution large enough to contain all of the session’s XRViews that provides an average application a good balance between performance and quality. It MAY be smaller than, larger than, or equal to the native WebGL framebuffer resolution.

NOTE: The user agent is free to use any method of its choosing to estimate the recommended WebGL framebuffer resolution. If there are platform-specific methods for querying a recommended size, it is recommended, but not required, that they be used.

The getNativeFramebufferScaleFactor(session) method, when invoked, MUST run the following steps:

  1. Let session be the target XRSession.

  2. If session’s ended value is true, return 0.0 and abort these steps.

  3. Return the value that the session’s recommended WebGL framebuffer resolution must be multiplied by to yield the session’s native WebGL framebuffer resolution.
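The following non-normative example uses the native scale factor to create a layer whose framebuffer matches the XR device’s physical display resolution, again assuming xrSession and an XR compatible context gl.
let nativeScale = XRWebGLLayer.getNativeFramebufferScaleFactor(xrSession);
let layer = new XRWebGLLayer(xrSession, gl, {
  // Rendering at native resolution may be more expensive than the default.
  framebufferScaleFactor: nativeScale
});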

ISSUE: We either need to define the swap chain or remove references to it. (This is filed.)

11.3. WebGL Context Compatibility

In order for a WebGL context to be used as a source for XR imagery it must be created on a compatible graphics adapter for the XR device. What is considered a compatible graphics adapter is platform dependent, but is understood to mean that the graphics adapter can supply imagery to the XR device without undue latency. If a WebGL context was not already created on the compatible graphics adapter, it typically must be re-created on the adapter in question before it can be used with an XRWebGLLayer.

NOTE: On an XR platform with a single GPU, it can safely be assumed that the GPU is compatible with the XR devices advertised by the platform, and thus any hardware accelerated WebGL contexts are compatible as well. On PCs with both an integrated and a discrete GPU, the discrete GPU is often considered the compatible graphics adapter, since it is generally the higher-performance chip. On desktop PCs with multiple graphics adapters installed, the one with the XR device physically connected to it is likely to be considered the compatible graphics adapter.

partial dictionary WebGLContextAttributes {
    boolean xrCompatible = false;
};

partial interface mixin WebGLRenderingContextBase {
    Promise<void> makeXRCompatible();
};

When a user agent implements this specification it MUST set an XR compatible boolean, initially set to false, on every WebGLRenderingContextBase. Once the XR compatible boolean is set to true, the context can be used with layers for any XRSession requested from the current XR device.

The XR compatible boolean can be set either at context creation time or after context creation, potentially incurring a context loss. To set the XR compatible boolean at context creation time, the xrCompatible context creation attribute must be set to true when requesting a WebGL context.

When the HTMLCanvasElement's getContext() method is invoked with a WebGLContextAttributes dictionary with xrCompatible set to true, run the following steps:

  1. Create the WebGL context as usual, ensuring it is created on a compatible graphics adapter for the XR device.

  2. Let context be the newly created WebGL context.

  3. Set context’s XR compatible boolean to true.

  4. Return context.

The following code creates a WebGL context that is compatible with an XR device and then uses it to create an XRWebGLLayer.
function onXRSessionStarted(xrSession) {
  let glCanvas = document.createElement("canvas");
  let gl = glCanvas.getContext("webgl", { xrCompatible: true });

  loadWebGLResources();

  xrSession.updateRenderState({ baseLayer: new XRWebGLLayer(xrSession, gl) });
}

To set the XR compatible boolean after the context has been created, the makeXRCompatible() method is used.

When the makeXRCompatible() method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:

  1. Let context be the target WebGLRenderingContextBase object.

  2. If context’s WebGL context lost flag is set, reject promise with an InvalidStateError and abort these steps.

  3. If context’s XR compatible boolean is true, resolve promise and abort these steps.

  4. If context was created on a compatible graphics adapter for the XR device:

    1. Set context’s XR compatible boolean to true.

    2. Resolve promise and abort these steps.

  5. Queue a task to perform the following steps:

    1. Force context to be lost and handle the context loss as described by the WebGL specification.

    2. If the canceled flag of the "webglcontextlost" event fired in the previous step was not set, reject promise with an AbortError and abort these steps.

    3. Restore the context on a compatible graphics adapter for the XR device.

    4. Set context’s XR compatible boolean to true.

    5. Resolve promise.

Additionally, when any WebGL context is lost run the following steps prior to firing the "webglcontextlost" event:

  1. Set the context’s XR compatible boolean to false.

The following code creates an XRWebGLLayer from a pre-existing WebGL context.
let glCanvas = document.createElement("canvas");
let gl = glCanvas.getContext("webgl");

loadWebGLResources();

glCanvas.addEventListener("webglcontextlost", (event) => {
  // Calling preventDefault() sets the event's canceled flag, indicating
  // that the WebGL context can be restored.
  event.preventDefault();
});

glCanvas.addEventListener("webglcontextrestored", (event) => {
  // WebGL resources need to be re-created after a context loss.
  loadWebGLResources();
});

function onXRSessionStarted(xrSession) {
  // Make sure the canvas context we want to use is compatible with the device.
  // May trigger a context loss.
  return gl.makeXRCompatible().then(() => {
    return xrSession.updateRenderState({
      baseLayer: new XRWebGLLayer(xrSession, gl)
    });
  });
}

12. Canvas Rendering Context

12.1. XRPresentationContext

When the getContext() method of an HTMLCanvasElement canvas is to return a new object for the contextId xrpresent, the user agent must return an XRPresentationContext with canvas set to canvas.

[SecureContext, Exposed=Window] interface XRPresentationContext {
  [SameObject] readonly attribute HTMLCanvasElement canvas;
};

The canvas attribute indicates the HTMLCanvasElement that created this context.

Each XRPresentationContext has a session, which is an XRSession, initially set to null.

When another algorithm indicates it will update the XRPresentationContext session to XRSession session, run the following steps:

  1. Let context be the target XRPresentationContext.

  2. Let prevSession be context’s session.

  3. If prevSession is equal to session, abort these steps.

  4. If prevSession is not null:

    1. If prevSession’s renderState's outputContext is equal to context set prevSession’s renderState's outputContext to null.

  5. Set context’s session to session.
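The following non-normative example acquires an XRPresentationContext and supplies it as the outputContext of an inline session.
let outputCanvas = document.createElement("canvas");
document.body.appendChild(outputCanvas);
let outputContext = outputCanvas.getContext("xrpresent");

navigator.xr.requestSession("inline").then((session) => {
  // Associates the context with the session, so the session's imagery
  // is displayed on the output canvas.
  session.updateRenderState({ outputContext: outputContext });
});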

13. Events

13.1. XRSessionEvent

XRSessionEvents are fired to indicate changes to the state of an XRSession.

[SecureContext, Exposed=Window, Constructor(DOMString type, XRSessionEventInit eventInitDict)]
interface XRSessionEvent : Event {
  [SameObject] readonly attribute XRSession session;
};

dictionary XRSessionEventInit : EventInit {
  required XRSession session;
};

The session attribute indicates the XRSession that generated the event.

13.2. XRInputSourceEvent

XRInputSourceEvents are fired to indicate changes to the state of an XRInputSource.

[SecureContext, Exposed=Window, Constructor(DOMString type, XRInputSourceEventInit eventInitDict)]
interface XRInputSourceEvent : Event {
  [SameObject] readonly attribute XRFrame frame;
  [SameObject] readonly attribute XRInputSource inputSource;
  [SameObject] readonly attribute long? buttonIndex;
};

dictionary XRInputSourceEventInit : EventInit {
  required XRFrame frame;
  required XRInputSource inputSource;
  long? buttonIndex = null;
};

The inputSource attribute indicates the XRInputSource that generated this event.

The frame attribute is an XRFrame that corresponds with the time that the event took place. It may represent historical data. Any XRViewerPose queried from the frame MUST have an empty views array.

The buttonIndex attribute indicates the index of the button in the buttons array of the inputSource's gamepad that caused the event to be generated. If the gamepad is null, or the event was not generated by a button interaction, the value MUST be null.

When the user agent fires an XRInputSourceEvent event it MUST run the following steps:

  1. Let frame be event’s frame.

  2. Set frame’s active boolean to true.

  3. Dispatch event.

  4. Set frame’s active boolean to false.
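The following non-normative example uses the event’s frame to determine where the input source’s target ray was pointing when a selection completed. It assumes xrReferenceSpace was previously requested and handleSelection() is an application-defined function.
xrSession.addEventListener("select", (event) => {
  // The frame is only active during event dispatch, so poses must be
  // queried inside the event handler.
  let targetRayPose = event.frame.getPose(
      event.inputSource.targetRaySpace, xrReferenceSpace);
  if (targetRayPose) {
    handleSelection(targetRayPose.transform);
  }
});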

13.3. XRReferenceSpaceEvent

XRReferenceSpaceEvents are fired to indicate changes to the state of an XRReferenceSpace.

[SecureContext, Exposed=Window, Constructor(DOMString type, XRReferenceSpaceEventInit eventInitDict)]
interface XRReferenceSpaceEvent : Event {
  [SameObject] readonly attribute XRReferenceSpace referenceSpace;
  [SameObject] readonly attribute XRRigidTransform? transform;
};

dictionary XRReferenceSpaceEventInit : EventInit {
  required XRReferenceSpace referenceSpace;
  XRRigidTransform transform;
};

The referenceSpace attribute indicates the XRReferenceSpace that generated this event.

The transform attribute describes the transform the referenceSpace underwent during this event, if applicable.
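The following non-normative example listens for reset events so that content positioned relative to the reference space can be re-anchored; recenterContent() is an application-defined function.
xrReferenceSpace.addEventListener("reset", (event) => {
  if (event.transform) {
    // Apply the reported origin change to any cached content transforms.
    recenterContent(event.transform);
  }
});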

13.4. Event Types

The user agent MUST provide the following new events. Registration for and firing of the events must follow the usual behavior of DOM4 Events.

The user agent MAY fire a devicechange event on the XR object to indicate that the availability of XR devices has changed. The event MUST be of type Event.

A user agent MAY dispatch a blur event on an XRSession to indicate that presentation to the XRSession by the page has been suspended by the user agent, OS, or XR hardware. While an XRSession is blurred it remains active but it may have its frame production throttled. This is to prevent tracking while the user interacts with potentially sensitive UI. For example: The user agent SHOULD blur the presenting application when the user is typing a URL into the browser with a virtual keyboard, otherwise the presenting page may be able to guess the URL the user is entering by tracking their head motions. The event MUST be of type XRSessionEvent.

A user agent MAY dispatch a focus event on an XRSession to indicate that presentation to the XRSession by the page has resumed after being suspended. The event MUST be of type XRSessionEvent.

A user agent MUST dispatch an end event on an XRSession when the session ends, whether ended by the application or the user agent. The event MUST be of type XRSessionEvent.

A user agent MUST dispatch an inputsourceschange event on an XRSession when the session’s list of active input sources has changed. The event MUST be of type XRSessionEvent.

A user agent MUST dispatch a selectstart event on an XRSession when one of its XRInputSources begins its primary action. The event MUST be of type XRInputSourceEvent.

A user agent MUST dispatch a selectend event on an XRSession when one of its XRInputSources ends its primary action or when an XRInputSource that has begun a primary action is disconnected. The event MUST be of type XRInputSourceEvent.

A user agent MUST dispatch a select event on an XRSession when one of its XRInputSources has fully completed a primary action. The event MUST be of type XRInputSourceEvent.

A user agent MUST dispatch a reset event on an XRReferenceSpace when discontinuities of the origin occur. (That is, significant changes in the origin’s position or orientation relative to the user’s environment.) It also fires when the boundsGeometry changes for an XRBoundedReferenceSpace. The event MUST be of type XRReferenceSpaceEvent, and MUST be dispatched prior to the execution of any XR animation frames that make use of the new origin.

14. Security, Privacy, and Comfort Considerations

The WebXR Device API provides powerful new features which bring with them several unique privacy, security, and comfort risks that user agents must take steps to mitigate.

14.1. Gaze Tracking

While the API does not yet expose eye tracking capabilities, a lot can be inferred about where the user is looking by tracking the orientation of their head. This is especially true of XR devices that have limited input capabilities, such as Google Cardboard, which frequently require users to control a "gaze cursor" with their head orientation. This means that it may be possible for a malicious page to infer what a user is typing on a virtual keyboard or how they are interacting with a virtual UI based solely on monitoring their head movements. For example: if not prevented from doing so, a page could estimate what URL a user is entering into the user agent’s URL bar.

To mitigate this risk the user agent MUST blur all sessions when the user is interacting with sensitive, trusted UI such as URL bars or system dialogs. Additionally, to prevent a malicious page from being able to monitor input on other pages, the user agent MUST blur all sessions on non-focused pages.

14.2. Trusted Environment

If the virtual environment does not consistently track the user’s head motion with low latency and at a high frame rate the user may become disoriented or physically ill. Since it is impossible to force pages to produce consistently performant and correct content the user agent MUST provide a tracked, trusted environment and an XR Compositor which runs asynchronously from page content. The compositor is responsible for compositing the trusted and untrusted content. If content is not performant, does not submit frames, or terminates unexpectedly the user agent should be able to continue presenting a responsive, trusted UI.

Additionally, page content has the ability to make users uncomfortable in ways not related to performance. Badly applied tracking, strobing colors, and content intended to offend, frighten, or intimidate are examples of content which may cause the user to want to quickly exit the XR experience. Removing the XR device in these cases may not always be a fast or practical option. To accommodate this the user agent SHOULD provide users with an action, such as pressing a reserved hardware button or performing a gesture, that escapes out of WebXR content and displays the user agent’s trusted UI.

When navigating between pages in XR, the user agent should display trusted UI elements informing the user of the security information of the destination site that is normally presented by 2D browser UI, such as the URL and encryption status.

14.3. Context Isolation

The trusted UI must be drawn by an independent rendering context whose state is isolated from any rendering contexts used by the page. (For example, any WebGL rendering contexts.) This is to prevent the page from corrupting the state of the trusted UI’s context, which may prevent it from properly rendering a tracked environment. It also prevents the possibility of the page being able to capture imagery from the trusted UI, which could lead to private information being leaked.

Also, to prevent CORS-related vulnerabilities, each page will see a new instance of objects returned by the API, such as XRSession. Attributes such as the context set by one page must not be readable by another. Similarly, methods invoked on the API MUST NOT cause an observable state change on other pages. For example: No method will be exposed that enables a system-level orientation reset, as this could be called repeatedly by a malicious page to prevent other pages from tracking properly. The user agent MUST, however, respect system-level orientation resets triggered by a user gesture or system menu.

14.4. Fingerprinting

Given that the API describes hardware available to the user and its capabilities it will inevitably provide additional surface area for fingerprinting. While it’s impossible to completely avoid this, steps can be taken to mitigate the issue. This spec limits reporting of available hardware to only a single device at a time, which prevents using the rare cases of multiple headsets being connected as a fingerprinting signal. Also, the devices that are reported have no string identifiers and expose very little information about the device’s capabilities until an XRSession is created, which may only be triggered via user activation in the most sensitive case.

15. Integrations

15.1. Feature Policy

This specification defines a policy-controlled feature that controls whether the xr attribute is exposed on the Navigator object.

The feature identifier for this feature is "xr".

The default allowlist for this feature is ["self"].
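The following non-normative example delegates the feature to a cross-origin iframe by setting its allow attribute; the URL shown is hypothetical.
let xrFrame = document.createElement("iframe");
xrFrame.src = "https://example.com/xr-content.html";
xrFrame.allow = "xr";  // Reflects the iframe's allow attribute.
document.body.appendChild(xrFrame);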

16. Acknowledgements

The following individuals have contributed to the design of the WebXR Device API specification:

And a special thanks to Vladimir Vukicevic (Unity) for kick-starting this whole adventure!

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[FEATURE-POLICY]
Feature Policy. Living Standard. URL: https://wicg.github.io/feature-policy/
[GAMEPAD]
Scott Graham; Theodore Mielczarek; Brandon Jones. Gamepad. 14 December 2018. WD. URL: https://www.w3.org/TR/gamepad/
[GEOMETRY-1]
Simon Pieters; Chris Harrelson. Geometry Interfaces Module Level 1. 4 December 2018. CR. URL: https://www.w3.org/TR/geometry-1/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[PROMISES-GUIDE]
Domenic Denicola. Writing Promise-Using Specifications. 16 February 2016. TAG Finding. URL: https://www.w3.org/2001/tag/doc/promises-guide
[WebIDL]
Cameron McCormack; Boris Zbarsky; Tobie Langel. Web IDL. 15 December 2016. ED. URL: https://heycam.github.io/webidl/

Informative References

[ORIENTATION-SENSOR]
Mikhail Pozdnyakov; et al. Orientation Sensor. 20 March 2018. CR. URL: https://www.w3.org/TR/orientation-sensor/

IDL Index

partial interface Navigator {
  [SecureContext, SameObject] readonly attribute XR xr;
};

[SecureContext, Exposed=Window] interface XR : EventTarget {
  // Methods
  Promise<void> supportsSession(XRSessionMode mode);
  Promise<XRSession> requestSession(XRSessionMode mode);

  // Events
  attribute EventHandler ondevicechange;
};

enum XRSessionMode {
  "inline",
  "immersive-vr",
  "immersive-ar"
};

enum XREnvironmentBlendMode {
  "opaque",
  "additive",
  "alpha-blend",
};

[SecureContext, Exposed=Window] interface XRSession : EventTarget {
  // Attributes
  readonly attribute XREnvironmentBlendMode environmentBlendMode;
  [SameObject] readonly attribute XRRenderState renderState;
  [SameObject] readonly attribute XRSpace viewerSpace;

  // Methods
  void updateRenderState(optional XRRenderStateInit state);
  Promise<XRReferenceSpace> requestReferenceSpace(XRReferenceSpaceType type);

  FrozenArray<XRInputSource> getInputSources();

  long requestAnimationFrame(XRFrameRequestCallback callback);
  void cancelAnimationFrame(long handle);

  Promise<void> end();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onend;
  attribute EventHandler onselect;
  attribute EventHandler oninputsourceschange;
  attribute EventHandler onselectstart;
  attribute EventHandler onselectend;
};

dictionary XRRenderStateInit {
  double depthNear;
  double depthFar;
  double inlineVerticalFieldOfView;
  XRLayer? baseLayer;
  XRPresentationContext? outputContext;
};

[SecureContext, Exposed=Window] interface XRRenderState {
  readonly attribute double depthNear;
  readonly attribute double depthFar;
  readonly attribute double? inlineVerticalFieldOfView;
  readonly attribute XRLayer? baseLayer;
  readonly attribute XRPresentationContext? outputContext;
};

callback XRFrameRequestCallback = void (DOMHighResTimeStamp time, XRFrame frame);

[SecureContext, Exposed=Window] interface XRFrame {
  [SameObject] readonly attribute XRSession session;

  XRViewerPose? getViewerPose(XRReferenceSpace referenceSpace);
  XRPose? getPose(XRSpace sourceSpace, XRSpace destinationSpace);
};

[SecureContext, Exposed=Window] interface XRSpace : EventTarget {
  
};

enum XRReferenceSpaceType {
  "identity",
  "position-disabled",
  "eye-level",
  "floor-level",
  "bounded",
  "unbounded"
};

[SecureContext, Exposed=Window]
interface XRReferenceSpace : XRSpace {
  XRReferenceSpace getOffsetReferenceSpace(XRRigidTransform originOffset);

  attribute EventHandler onreset;
};

[SecureContext, Exposed=Window]
interface XRBoundedReferenceSpace : XRReferenceSpace {
  readonly attribute FrozenArray<DOMPointReadOnly> boundsGeometry;
};    

enum XREye {
  "left",
  "right"
};

[SecureContext, Exposed=Window] interface XRView {
  readonly attribute XREye eye;
  [SameObject] readonly attribute Float32Array projectionMatrix;
  [SameObject] readonly attribute XRRigidTransform transform;
};

[SecureContext, Exposed=Window] interface XRViewport {
  readonly attribute long x;
  readonly attribute long y;
  readonly attribute long width;
  readonly attribute long height;
};

[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit position, optional DOMPointInit orientation)]
interface XRRigidTransform {
  [SameObject] readonly attribute DOMPointReadOnly position;
  [SameObject] readonly attribute DOMPointReadOnly orientation;
  [SameObject] readonly attribute Float32Array matrix;
  [SameObject] readonly attribute XRRigidTransform inverse;
};

[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit origin, optional DOMPointInit direction),
 Constructor(XRRigidTransform transform)]
interface XRRay {
  [SameObject] readonly attribute DOMPointReadOnly origin;
  [SameObject] readonly attribute DOMPointReadOnly direction;
  [SameObject] readonly attribute Float32Array matrix;
};

[SecureContext, Exposed=Window] interface XRPose {
  [SameObject] readonly attribute XRRigidTransform transform;
  readonly attribute boolean emulatedPosition;
};

[SecureContext, Exposed=Window] interface XRViewerPose : XRPose {
  [SameObject] readonly attribute FrozenArray<XRView> views;
};

enum XRHandedness {
  "none",
  "left",
  "right"
};

enum XRTargetRayMode {
  "gaze",
  "tracked-pointer",
  "screen"
};

[SecureContext, Exposed=Window]
interface XRInputSource {
  readonly attribute XRHandedness handedness;
  readonly attribute XRTargetRayMode targetRayMode;
  [SameObject] readonly attribute XRSpace targetRaySpace;
  [SameObject] readonly attribute XRSpace? gripSpace;
  [SameObject] readonly attribute Gamepad? gamepad;
};

enum GamepadMappingType {
  "",            // Defined in the Gamepad API
  "standard",    // Defined in the Gamepad API
  "xr-standard",
};

[SecureContext, Exposed=Window] interface XRLayer {};

typedef (WebGLRenderingContext or
         WebGL2RenderingContext) XRWebGLRenderingContext;

dictionary XRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = true;
  boolean stencil = false;
  boolean alpha = true;
  boolean ignoreDepthValues = false;
  double framebufferScaleFactor = 1.0;
};

[SecureContext, Exposed=Window, Constructor(XRSession session,
             XRWebGLRenderingContext context,
             optional XRWebGLLayerInit layerInit)]
interface XRWebGLLayer : XRLayer {
  // Attributes
  [SameObject] readonly attribute XRWebGLRenderingContext context;

  readonly attribute boolean antialias;
  readonly attribute boolean ignoreDepthValues;

  [SameObject] readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;

  // Methods
  XRViewport? getViewport(XRView view);
  void requestViewportScaling(double viewportScaleFactor);

  // Static Methods
  static double getNativeFramebufferScaleFactor(XRSession session);
};

partial dictionary WebGLContextAttributes {
    boolean xrCompatible = false;
};

partial interface mixin WebGLRenderingContextBase {
    Promise<void> makeXRCompatible();
};

[SecureContext, Exposed=Window] interface XRPresentationContext {
  [SameObject] readonly attribute HTMLCanvasElement canvas;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, XRSessionEventInit eventInitDict)]
interface XRSessionEvent : Event {
  [SameObject] readonly attribute XRSession session;
};

dictionary XRSessionEventInit : EventInit {
  required XRSession session;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, XRInputSourceEventInit eventInitDict)]
interface XRInputSourceEvent : Event {
  [SameObject] readonly attribute XRFrame frame;
  [SameObject] readonly attribute XRInputSource inputSource;
  [SameObject] readonly attribute long? buttonIndex;
};

dictionary XRInputSourceEventInit : EventInit {
  required XRFrame frame;
  required XRInputSource inputSource;
  long? buttonIndex = null;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, XRReferenceSpaceEventInit eventInitDict)]
interface XRReferenceSpaceEvent : Event {
  [SameObject] readonly attribute XRReferenceSpace referenceSpace;
  [SameObject] readonly attribute XRRigidTransform? transform;
};

dictionary XRReferenceSpaceEventInit : EventInit {
  required XRReferenceSpace referenceSpace;
  XRRigidTransform transform;
};

Issues Index

GitHub #550 - Additional restrictions, still under discussion, will be applied to the formatting of the gamepad's id attribute. Until those restrictions have been identified, all gamepad ids should default to "unknown".