1. Introduction
Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges. The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment.
Immersive computing introduces strict requirements for high-precision, low-latency communication in order to deliver an acceptable experience. It also brings unique security concerns for a platform like the web. The WebXR Device API provides the interfaces necessary to enable developers to build compelling, comfortable, and safe immersive applications on the web across a wide variety of hardware form factors.
Other web interfaces, such as the RelativeOrientationSensor and AbsoluteOrientationSensor, can be repurposed to surface input from some devices to polyfill the WebXR Device API in limited situations. These interfaces cannot support multiple features of high-end immersive experiences, however, such as 6DoF tracking, presentation to headset peripherals, or tracked input devices.
1.1. Terminology
This document uses the acronym XR throughout to refer to the spectrum of hardware, applications, and techniques used for Virtual Reality, Augmented Reality, and other related technologies. Examples include, but are not limited to:
- Head mounted displays, whether they are opaque, transparent, or utilize video passthrough
- Mobile devices with positional tracking
- Fixed displays with head tracking capabilities
The important commonality between them is that they all offer some degree of spatial tracking with which to simulate a view of virtual content.
Terms like "XR Device", "XR Application", etc. are generally understood to apply to any of the above. Portions of this document that only apply to a subset of these devices will indicate so as appropriate.
The terms 3DoF and 6DoF are used throughout this document to describe the tracking capabilities of XR devices.
- A 3DoF device, short for "Three Degrees of Freedom", is one that can only track rotational movement. This is common in devices which rely exclusively on accelerometer and gyroscope readings to provide tracking. 3DoF devices do not respond to translational movements from the user, though they may employ algorithms to estimate translational changes based on modeling of the neck or arms.
- A 6DoF device, short for "Six Degrees of Freedom", is one that can track both rotation and translation, enabling precise 1:1 tracking in space. This typically requires some level of understanding of the user’s environment. That environmental understanding may be achieved via inside-out tracking, where sensors on the tracked device itself (such as cameras or depth sensors) are used to determine the device’s position, or outside-in tracking, where external devices placed in the user’s environment (like a camera or light emitting device) provide a stable point of reference against which the XR device can determine its position.
1.2. Application flow
Most applications using the WebXR Device API will follow a similar usage pattern:
- Query navigator.xr.supportsSession() to determine if the desired type of XR content is supported by the hardware and UA.
- If so, advertise the XR content to the user.
- Wait for the user to trigger a user activation event indicating they want to begin viewing XR content.
- Request an XRSession within the user activation event with navigator.xr.requestSession().
- If the XRSession request succeeds, use it to run a frame loop to respond to XR input and produce images to display on the XR device in response.
- Continue running the frame loop until the UA ends the session or the user indicates they want to exit the XR content.
2. Model
2.1. XR device
An XR device is a physical unit of hardware that can present imagery to the user. On desktop clients, this is usually a headset peripheral. On mobile clients, it may represent the mobile device itself in conjunction with a viewer harness. It may also represent devices without stereo-presentation capabilities but with more advanced tracking.
An XR device has a list of supported modes (a list of strings) that contains "inline" and all the other enumeration values of XRSessionMode that the XR device supports.
3. Initialization
3.1. navigator.xr
partial interface Navigator {
  [SecureContext, SameObject] readonly attribute XR xr;
};
The xr attribute’s getter MUST return the XR object that is associated with the context object.
3.2. XR
[SecureContext, Exposed=Window]
interface XR : EventTarget {
  // Methods
  Promise<void> supportsSession(XRSessionMode mode);
  Promise<XRSession> requestSession(XRSessionMode mode);

  // Events
  attribute EventHandler ondevicechange;
};
The user agent MUST create an XR object when a Navigator object is created and associate it with that object.
An XR object is the entry point to the API, used to query for XR features available to the user agent and initiate communication with XR hardware via the creation of XRSessions.
An XR object has a list of XR devices (a list of XR device), which MUST be initially an empty list.
An XR object has an XR device (null or XR device) which is initially null and represents the active XR device from the list of XR devices.
The user agent MUST be able to enumerate XR devices attached to the system, at which time each available device is placed in the list of XR devices. Subsequent algorithms requesting enumeration MUST reuse the cached list of XR devices. Enumerating the devices should not initialize device tracking. After the first enumeration the user agent MUST begin monitoring device connection and disconnection, adding connected devices to the list of XR devices and removing disconnected devices.
Each time the list of XR devices changes the user agent should select an XR device by running the following steps:
- Let oldDevice be the XR device.
- If the list of XR devices is an empty list, set the XR device to null.
- If the list of XR devices's size is one, set the XR device to the list of XR devices[0].
- If there are any active XRSessions and the list of XR devices contains oldDevice, set the XR device to oldDevice.
- Else, set the XR device to a device of the user agent’s choosing.
- If this is the first time devices have been enumerated or oldDevice equals the XR device, abort these steps.
- Set the XR compatible boolean of all WebGLRenderingContextBase instances to `false`.
- Queue a task that fires a simple event named devicechange on the context object.
NOTE: The user agent is allowed to use any criteria it wishes to select an XR device when the list of XR devices contains multiple devices. For example, the user agent may always select the first item in the list, or provide settings UI that allows users to manage device priority. Ideally the algorithm used to select the default device is stable and will result in the same device being selected across multiple browsing sessions.
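The selection steps above can be sketched as a plain function. This is non-normative; the function and parameter names are illustrative and nothing here is exposed to script.

```javascript
// Non-normative sketch of the "select an XR device" steps.
// `devices` stands in for the list of XR devices, `oldDevice` for the
// currently selected XR device, and `hasActiveSessions` for "there are
// any active XRSessions".
function selectXRDevice(oldDevice, devices, hasActiveSessions) {
  if (devices.length === 0) return null;        // empty list → null
  if (devices.length === 1) return devices[0];  // single device → that device
  if (hasActiveSessions && devices.includes(oldDevice)) {
    return oldDevice;                           // keep the device live sessions use
  }
  return devices[0]; // UA's choosing; a stable pick such as the first entry
}
```

Note how the `hasActiveSessions` branch preserves the current device so that running sessions are not disrupted by a newly connected device.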
The user agent ensures an XR device is selected by running the following steps:
- If the context object's XR device is not null, abort these steps.
The ondevicechange attribute is an Event handler IDL attribute for the devicechange event type.
When the supportsSession(mode) method is invoked, it MUST return a new Promise promise and run the following steps in parallel:
- If the XR device is null, reject promise with a "NotSupportedError" DOMException and abort these steps.
- If the XR device's list of supported modes does not contain mode, reject promise with a "NotSupportedError" DOMException and abort these steps.
- Else resolve promise.
Calling supportsSession()
MUST NOT trigger device-selection UI as this would cause many sites to display XR-specific dialogs early in the document lifecycle without user activation. Additionally, calling supportsSession()
MUST NOT interfere with any running XR applications on the system, and MUST NOT cause XR-related applications to launch such as system trays or storefronts.
The following code checks whether immersive-vr sessions are supported.

navigator.xr.supportsSession('immersive-vr').then(() => {
  // 'immersive-vr' sessions are supported.
  // Page should advertise support to the user.
});
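The resolution logic behind supportsSession() can be sketched as a plain synchronous function. This is non-normative; the device shape and function name are illustrative, not part of the API.

```javascript
// Non-normative sketch of the supportsSession() decision logic.
// `device` mirrors the internal "XR device" concept (null when no device
// is present) with an illustrative `supportedModes` array.
function supportsSessionSync(device, mode) {
  if (device === null) {
    return { supported: false, error: 'NotSupportedError' };
  }
  if (!device.supportedModes.includes(mode)) {
    return { supported: false, error: 'NotSupportedError' };
  }
  return { supported: true };
}
```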
The XR object has a pending immersive session boolean, which MUST be initially false, an active immersive session, which MUST be initially null, and a list of inline sessions, which MUST be initially empty.
When the requestSession(mode) method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:
- Let immersive be true if mode is "immersive-vr" or "immersive-ar", and false otherwise.
- If immersive is true:
  - If pending immersive session is true or active immersive session is not null, reject promise with an "InvalidStateError" DOMException and abort these steps.
  - Else set pending immersive session to true.
- Else if the XR device's list of supported modes does not contain mode, reject promise with a "NotSupportedError" DOMException.
- Else if immersive is true and the algorithm is not triggered by user activation, reject promise with a "SecurityError" DOMException and abort these steps.
- If promise was rejected and immersive is true, set pending immersive session to false.
- If promise was rejected, abort these steps.
- Let session be a new XRSession object.
- Initialize the session with session and mode.
- If immersive is true, set the active immersive session to session, and set pending immersive session to false.
- Else append session to the list of inline sessions.
- Resolve promise with session.
The following code attempts to retrieve an immersive-vr XRSession.

let xrSession;
navigator.xr.requestSession("immersive-vr").then((session) => {
  xrSession = session;
});
3.3. XRSessionMode
The XRSessionMode enum defines the modes that an XRSession can operate in.

enum XRSessionMode {
  "inline",
  "immersive-vr",
  "immersive-ar"
};
- A session mode of inline indicates that the session’s output will be shown as an element in the HTML document. inline session content MAY be displayed in mono or stereo and MAY allow for viewer tracking. User agents MUST allow inline sessions to be created for any XR device.
- A session mode of immersive-vr indicates that the session’s output will be given exclusive access to the XR device display and that content is not intended to be integrated with the user’s environment. The environmentBlendMode for immersive-vr sessions is expected to be opaque when possible, but MAY be additive if the hardware requires it.
- A session mode of immersive-ar indicates that the session’s output will be given exclusive access to the XR device display and that content is intended to be integrated with the user’s environment. The environmentBlendMode MUST NOT be opaque for immersive-ar sessions.
An immersive session refers to either an immersive-vr or an immersive-ar session. Immersive sessions MUST provide some level of viewer tracking, and content MUST be shown at the proper scale relative to the user and/or the surrounding environment. Additionally, immersive sessions MUST be given exclusive access to the XR device, meaning that while the immersive session is not blurred the HTML document is not shown on the XR device's display, nor is content from other applications shown on the XR device's display.
NOTE: Examples of ways exclusive access may be presented include stereo content displayed on a virtual reality or augmented reality headset, or augmented reality content displayed fullscreen on a mobile device.
4. Session
4.1. XRSession
Any interaction with XR hardware is done via an XRSession object, which can only be retrieved by calling requestSession() on the XR object. Once a session has been successfully acquired it can be used to poll the device pose, query information about the user’s environment, and present imagery to the user.
The user agent, when possible, SHOULD NOT initialize device tracking or rendering capabilities until an XRSession has been acquired. This is to prevent unwanted side effects of engaging the XR systems when they’re not actively being used, such as increased battery usage or the launching of related utility applications, when first navigating to a page that only wants to test for the presence of XR hardware in order to advertise XR features. Not all XR platforms offer ways to detect the hardware’s presence without initializing tracking, however, so this is only a strong recommendation.
enum XREnvironmentBlendMode {
  "opaque",
  "additive",
  "alpha-blend",
};

[SecureContext, Exposed=Window]
interface XRSession : EventTarget {
  // Attributes
  readonly attribute XREnvironmentBlendMode environmentBlendMode;
  [SameObject] readonly attribute XRRenderState renderState;
  [SameObject] readonly attribute XRSpace viewerSpace;

  // Methods
  void updateRenderState(optional XRRenderStateInit state);
  Promise<XRReferenceSpace> requestReferenceSpace(XRReferenceSpaceType type);
  FrozenArray<XRInputSource> getInputSources();
  long requestAnimationFrame(XRFrameRequestCallback callback);
  void cancelAnimationFrame(long handle);
  Promise<void> end();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onend;
  attribute EventHandler onselect;
  attribute EventHandler oninputsourceschange;
  attribute EventHandler onselectstart;
  attribute EventHandler onselectend;
};
Each XRSession has a mode, which is one of the values of XRSessionMode.
The viewerSpace on an XRSession has a list of views, which is a list of views corresponding to the views provided by the XR device.
To initialize the session, given session and mode, the user agent MUST run the following steps:
- Set session’s mode to mode.
- If no other features of the user agent have done so already, perform the necessary platform-specific steps to initialize the device’s tracking and rendering capabilities.
A number of different circumstances may shut down the session, which is permanent and irreversible. Once a session has been shut down the only way to access the XR device's tracking or rendering capabilities again is to request a new session. Each XRSession has an ended boolean, initially set to false, that indicates if it has been shut down.
When an XRSession is shut down the following steps are run:
- Let session be the target XRSession object.
- Set session’s ended value to true.
- If the active immersive session is equal to session, set the active immersive session to null.
- Remove session from the list of inline sessions.
- Reject any outstanding promises returned by session with an InvalidStateError.
- If no other features of the user agent are actively using them, perform the necessary platform-specific steps to shut down the device’s tracking and rendering capabilities.
The end() method provides a way to manually shut down a session. When invoked, it MUST run the following steps:
- Let promise be a new Promise.
- Queue a task to perform the following steps:
  - Wait until any platform-specific steps related to shutting down the session have completed.
  - Resolve promise.
- Return promise.
Each XRSession has an active render state, which is a new XRRenderState, and a list of pending render states, which is initially empty.
The renderState attribute returns the XRSession's active render state.
Each XRSession has a minimum inline field of view and a maximum inline field of view, defined in radians. The values MUST be determined by the user agent and MUST fall in the range of 0 to PI.
When the updateRenderState(newState) method is invoked, the user agent MUST run the following steps:
- Let session be the target XRSession.
- If session’s ended value is true, throw an InvalidStateError and abort these steps.
- If newState’s baseLayer was created with an XRSession other than session, throw an InvalidStateError and abort these steps.
- If newState’s inlineVerticalFieldOfView is set and session is an immersive session, throw an InvalidStateError and abort these steps.
- Append newState to session’s list of pending render states.
When requested, the XRSession MUST apply pending render states by running the following steps:
- Let session be the target XRSession.
- Let activeState be session’s active render state.
- Let pendingStates be session’s list of pending render states.
- Set session’s list of pending render states to the empty list.
- For each newState in pendingStates:
  - If newState’s depthNear value is set, set activeState’s depthNear to newState’s depthNear.
  - If newState’s depthFar value is set, set activeState’s depthFar to newState’s depthFar.
  - If newState’s inlineVerticalFieldOfView is set, set activeState’s inlineVerticalFieldOfView to newState’s inlineVerticalFieldOfView.
  - If newState’s baseLayer is set, set activeState’s baseLayer to newState’s baseLayer.
  - If newState’s outputContext is set, set activeState’s outputContext to newState’s outputContext and update the XRPresentationContext session to session.
- If activeState’s inlineVerticalFieldOfView is less than session’s minimum inline field of view, set activeState’s inlineVerticalFieldOfView to session’s minimum inline field of view.
- If activeState’s inlineVerticalFieldOfView is greater than session’s maximum inline field of view, set activeState’s inlineVerticalFieldOfView to session’s maximum inline field of view.
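The apply-pending-render-states steps, including the field-of-view clamping, can be sketched as follows. This is non-normative; the session and state shapes are illustrative, with property names mirroring XRRenderStateInit where possible.

```javascript
// Non-normative sketch of "apply pending render states".
// `session` carries illustrative fields: activeRenderState,
// pendingRenderStates, and min/max inline FOV limits.
function applyPendingRenderStates(session) {
  const active = session.activeRenderState;
  const pending = session.pendingRenderStates;
  session.pendingRenderStates = []; // reset the pending list first
  for (const s of pending) {
    if (s.depthNear !== undefined) active.depthNear = s.depthNear;
    if (s.depthFar !== undefined) active.depthFar = s.depthFar;
    if (s.inlineVerticalFieldOfView !== undefined) {
      active.inlineVerticalFieldOfView = s.inlineVerticalFieldOfView;
    }
    if (s.baseLayer !== undefined) active.baseLayer = s.baseLayer;
    if (s.outputContext !== undefined) active.outputContext = s.outputContext;
  }
  // Clamp the inline FOV to the session's [min, max] range.
  const fov = active.inlineVerticalFieldOfView;
  if (fov !== null) {
    active.inlineVerticalFieldOfView =
      Math.min(session.maxInlineFOV, Math.max(session.minInlineFOV, fov));
  }
  return active;
}
```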
When the requestReferenceSpace(options) method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:
- Create a reference space, referenceSpace, as described by options.
- If referenceSpace is null, reject promise with a NotSupportedError and abort these steps.
- Resolve promise with referenceSpace.
When the getInputSources() method is invoked, the user agent MUST run the following steps:
- Return the current list of active input sources.
Each XRSession has an environment blending mode value, which is an enum that MUST be set to whichever of the following values best matches the behavior of imagery rendered by the session in relation to the user’s surrounding environment.
- A blend mode of opaque indicates that the user’s surrounding environment is not visible at all. Alpha values in the baseLayer will be ignored, with the compositor treating all alpha values as 1.0.
- A blend mode of additive indicates that the user’s surrounding environment is visible and the baseLayer will be shown additively against it. Alpha values in the baseLayer will be ignored, with the compositor treating all alpha values as 1.0. When this blend mode is in use black pixels will appear fully transparent, and there is no way to make a pixel appear fully opaque.
- A blend mode of alpha-blend indicates that the user’s surrounding environment is visible and the baseLayer will be blended with it according to the alpha values of each pixel. Pixels with an alpha value of 1.0 will be fully opaque and pixels with an alpha value of 0.0 will be fully transparent.
The environmentBlendMode attribute returns the XRSession's environment blending mode.
NOTE: Most Virtual Reality devices exhibit opaque
blending behavior. Augmented Reality devices that use transparent optical elements frequently exhibit additive
blending behavior, and Augmented Reality devices that use passthrough cameras frequently exhibit alpha-blend
blending behavior.
The viewerSpace attribute is an XRSpace which tracks the pose of the viewer.
NOTE: For any given XRFrame, calling getPose() with the viewerSpace and any XRReferenceSpace will return the same pose (without the views array) as calling getViewerPose() with the same XRReferenceSpace.
The onblur attribute is an Event handler IDL attribute for the blur event type.
The onfocus
attribute is an Event handler IDL attribute for the focus
event type.
The onend
attribute is an Event handler IDL attribute for the end
event type.
The oninputsourceschange attribute is an Event handler IDL attribute for the inputsourceschange event type.
The onselectstart attribute is an Event handler IDL attribute for the selectstart event type.
The onselectend
attribute is an Event handler IDL attribute for the selectend
event type.
The onselect
attribute is an Event handler IDL attribute for the select
event type.
We still need to document what happens when we end the session. (This is filed.)
We still need to document what happens when we blur all sessions. (This is filed.)
We still need to document what happens when we poll the device pose (This is filed.)
We still need to document how the list of active input sources is maintained. (This is filed.)
4.2. XRRenderState
There are multiple values that developers can configure which affect how the session’s output is composited. These values are tracked by an XRRenderState
object.
dictionary XRRenderStateInit {
  double depthNear;
  double depthFar;
  double inlineVerticalFieldOfView;
  XRLayer? baseLayer;
  XRPresentationContext? outputContext;
};

[SecureContext, Exposed=Window]
interface XRRenderState {
  readonly attribute double depthNear;
  readonly attribute double depthFar;
  readonly attribute double? inlineVerticalFieldOfView;
  readonly attribute XRLayer? baseLayer;
  readonly attribute XRPresentationContext? outputContext;
};
When an XRRenderState object is created for an XRSession session, the user agent MUST initialize the render state by running the following steps:
- Let state be the newly created XRRenderState object.
- Initialize state’s depthNear to 0.1.
- Initialize state’s depthFar to 1000.0.
- If session is an immersive session, initialize state’s inlineVerticalFieldOfView to null.
- Else initialize state’s inlineVerticalFieldOfView to PI * 0.5.
- Initialize state’s baseLayer to null.
- Initialize state’s outputContext to null.
The depthNear attribute defines the distance, in meters, of the near clip plane from the viewer. The depthFar attribute defines the distance, in meters, of the far clip plane from the viewer.
depthNear and depthFar are used in the computation of the projectionMatrix of XRViews and determine how the values of an XRWebGLLayer depth buffer are interpreted. depthNear MAY be greater than depthFar.
The inlineVerticalFieldOfView attribute defines the default vertical field of view in radians used when computing projection matrices for inline XRSessions. The projection matrix calculation also takes into account the aspect ratio of the outputContext's canvas. This value MUST be null for immersive sessions.
4.3. Animation Frames
The primary way an XRSession
provides information about the tracking state of the XR device is via callbacks scheduled by calling requestAnimationFrame()
on the XRSession
instance.
callback XRFrameRequestCallback = void (DOMHighResTimeStamp time, XRFrame frame);
Each XRFrameRequestCallback
object has a cancelled boolean initially set to false
.
Each XRSession has a list of animation frame callbacks, which is initially empty, and an animation frame callback identifier, which is a number initially set to zero.
When the requestAnimationFrame(callback) method is invoked, the user agent MUST run the following steps:
- Let session be the target XRSession object.
- Increment session’s animation frame callback identifier by one.
- Append callback to session’s list of animation frame callbacks, associated with session’s animation frame callback identifier’s current value.
- Return session’s animation frame callback identifier’s current value.
When the cancelAnimationFrame(handle) method is invoked, the user agent MUST run the following steps:
- Let session be the target XRSession object.
- Find the entry in session’s list of animation frame callbacks that is associated with the value handle.
- If there is such an entry, set its cancelled boolean to true and remove it from session’s list of animation frame callbacks.
When an XRSession session receives updated viewer state from the XR device, it runs an XR animation frame with a timestamp now and an XRFrame frame, which MUST run the following steps regardless of whether the list of animation frame callbacks is empty:
- If session’s list of pending render states is not empty, apply pending render states.
- If session’s renderState's baseLayer is null, abort these steps.
- If session’s mode is "inline" and session’s renderState's outputContext is null, abort these steps.
- Let callbacks be a list of the entries in session’s list of animation frame callbacks, in the order in which they were added to the list.
- Set session’s list of animation frame callbacks to the empty list.
- Set frame’s active boolean to true.
- Set frame’s animationFrame boolean to true.
- For each entry in callbacks, in order:
  - If the entry’s cancelled boolean is true, continue to the next entry.
  - Invoke the Web IDL callback function, passing now and frame as the arguments.
  - If an exception is thrown, report the exception.
- Set frame’s active boolean to false.
4.4. The XR Compositor
The user agent MUST maintain an XR Compositor which handles presentation to the XR device and frame timing. The compositor MUST use an independent rendering context whose state is isolated from that of any WebGL contexts used as XRWebGLLayer sources to prevent the page from corrupting the compositor state or reading back content from other pages. The compositor MUST also run in a separate thread or process to decouple performance of the page from the ability to present new imagery to the user at the appropriate framerate.
The XR Compositor has a list of layer images, which is initially empty.
5. Frame Loop
5.1. XRFrame
An XRFrame represents a snapshot of the state of all of the tracked objects for an XRSession. Applications can acquire an XRFrame by calling requestAnimationFrame() on an XRSession with an XRFrameRequestCallback. When the callback is called it will be passed an XRFrame. Events which need to communicate tracking state, such as the select event, will also provide an XRFrame.
[SecureContext, Exposed=Window]
interface XRFrame {
  [SameObject] readonly attribute XRSession session;
  XRViewerPose? getViewerPose(XRReferenceSpace referenceSpace);
  XRPose? getPose(XRSpace sourceSpace, XRSpace destinationSpace);
};
Each XRFrame has an active boolean which is initially set to false, and an animationFrame boolean which is initially set to false.
The session attribute returns the XRSession that produced the XRFrame.
When the getViewerPose(referenceSpace) method is invoked, the user agent MUST run the following steps:
- Let frame be the target XRFrame.
- Let session be frame’s session object.
- If frame’s animationFrame boolean is false, throw an InvalidStateError and abort these steps.
- Let pose be a new XRViewerPose object.
- Populate the pose of session’s viewerSpace in referenceSpace at the time represented by frame into pose.
- If pose is null, return null.
- Let xrviews be an empty list.
- For each view view in the list of views on the viewerSpace of session, perform the following steps:
  - Let xrview be a new XRView object.
  - Initialize xrview’s projectionMatrix to view’s projection matrix.
  - Let offset be an XRRigidTransform equal to the view offset of view.
  - Set xrview’s transform property to the result of multiplying the offset transform by the XRViewerPose's transform.
  - Append xrview to xrviews.
- Set pose’s views to xrviews.
- Return pose.
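The transform-multiplication step above combines each view's offset with the viewer's transform. A minimal sketch of that multiplication, using column-major 4x4 arrays of 16 numbers (the layout XRRigidTransform exposes through its matrix), might look like the following. The helper name is illustrative; this is not spec-defined API.

```javascript
// Non-normative sketch: multiply two column-major 4x4 matrices (a * b).
// This is the operation used to compose a view offset with the
// XRViewerPose's transform when building each XRView's transform.
function multiplyMatrices(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        // Column-major indexing: element (row, col) lives at col * 4 + row.
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}
```

For pure translations this reduces to adding the offsets, which matches the intuition of shifting each eye away from the viewer's head position.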
When the getPose(sourceSpace, destinationSpace) method is invoked, the user agent MUST run the following steps:
- Let frame be the target XRFrame.
- Let pose be a new XRPose object.
- Populate the pose of sourceSpace in destinationSpace at the time represented by frame into pose.
- Return pose.
6. Spaces
A core feature of the WebXR Device API is the ability to provide spatial tracking. Spaces are the interface that enable applications to reason about how tracked entities are spatially related to the user’s physical environment and each other.
6.1. XRSpace
An XRSpace
represents a virtual coordinate system with an origin that corresponds to a physical location. Spatial data that is requested from the API or given to the API is always expressed in relation to a specific XRSpace
at the time of a specific XRFrame
. Numeric values such as pose positions are coordinates in that space relative to its origin. The interface is intentionally opaque.
[SecureContext, Exposed=Window]
interface XRSpace : EventTarget { };
Each XRSpace has a session which is set to the XRSession that created the XRSpace.
Each XRSpace has a native origin that is tracked by the XR device's underlying tracking system, and an effective origin, which is the basis of the XRSpace's coordinate system. The transform from the effective space to the native origin's space is defined by an origin offset, which is an XRRigidTransform initially set to an identity transform.
The effective origin of an XRSpace can only be observed in the coordinate system of another XRSpace as an XRPose, returned by an XRFrame's getPose() method. The spatial relationship between XRSpaces MAY change between XRFrames.
To populate the pose of an XRSpace sourceSpace in an XRSpace destinationSpace at the time represented by an XRFrame frame into an XRPose pose, the user agent MUST run the following steps:
- If frame’s active boolean is false, throw an InvalidStateError and abort these steps.
- Let session be frame’s session object.
- If sourceSpace’s session does not equal session, throw an InvalidStateError and abort these steps.
- If destinationSpace’s session does not equal session, throw an InvalidStateError and abort these steps.
- If destinationSpace’s pose cannot be determined relative to sourceSpace at the time represented by frame, set pose to null.
- Let transform be pose’s transform.
- Set transform’s position to the location of sourceSpace’s effective origin in destinationSpace’s coordinate system.
- Set transform’s orientation to the orientation of sourceSpace’s effective origin in destinationSpace’s coordinate system.
6.2. XRReferenceSpace
An XRReferenceSpace
is one of several common XRSpace
s that applications can use to establish a spatial relationship with the user’s physical environment.
XRReferenceSpace
s are generally expected to remain static for the duration of the XRSession
, with the most common exception being mid-session reconfiguration by the user. The native origin for every XRReferenceSpace
describes a coordinate system where +X
is considered "Right", +Y
is considered "Up", and -Z
is considered "Forward".
enum XRReferenceSpaceType {
  "identity",
  "position-disabled",
  "eye-level",
  "floor-level",
  "bounded",
  "unbounded"
};

[SecureContext, Exposed=Window]
interface XRReferenceSpace : XRSpace {
  XRReferenceSpace getOffsetReferenceSpace(XRRigidTransform originOffset);
  attribute EventHandler onreset;
};
Each XRReferenceSpace
has a type, which is an XRReferenceSpaceType
.
An XRReferenceSpace
is most frequently obtained by calling requestReferenceSpace()
, which creates an instance of an XRReferenceSpace
or an interface extending it, determined by the XRReferenceSpaceType
enum value passed into the call. The type indicates the tracking behavior that the reference space will exhibit:
- Passing a type of identity creates an XRReferenceSpace instance. It represents a reference space where the viewer is always at the native origin. Only the viewerSpace and the targetRaySpaces of XRInputSources with a targetRayMode of screen can be spatially related to an identity reference space.
- Passing a type of position-disabled creates an XRReferenceSpace instance. It represents a tracking space where orientation is tracked but the viewer's position is always at the native origin.
- Passing a type of eye-level creates an XRReferenceSpace instance. It represents a tracking space with a native origin near the user’s head at the time of creation. The exact position and orientation will be initialized based on the conventions of the underlying platform. When using this reference space the user is not expected to move beyond their initial position much, if at all, and tracking is optimized for that purpose. For devices with 6DoF tracking, eye-level reference spaces should emphasize keeping the origin stable relative to the user’s environment.
- Passing a type of floor-level creates an XRReferenceSpace instance. It represents a tracking space with a native origin at the floor in a safe position for the user to stand. The `y` axis equals `0` at floor level, with the `x` and `z` position and orientation initialized based on the conventions of the underlying platform. If the floor level isn’t known it MUST be estimated. When using this reference space the user is not expected to move beyond their initial position much, if at all, and tracking is optimized for that purpose. For devices with 6DoF tracking, floor-level reference spaces should emphasize keeping the origin stable relative to the user’s environment.
- Passing a type of bounded creates an XRBoundedReferenceSpace instance if supported by the XR device and the XRSession. It represents a tracking space with its native origin at the floor, where the user is expected to move within a pre-established boundary, given as the boundsGeometry. Tracking in a bounded reference space is optimized for keeping the native origin and boundsGeometry stable relative to the user’s environment.
- Passing a type of unbounded creates an XRReferenceSpace instance if supported by the XR device and the XRSession. It represents a tracking space where the user is expected to move freely around their environment, potentially even long distances from their starting point. Tracking in an unbounded reference space is optimized for stability around the user’s current position, and as such the native origin may drift over time.
Devices that support any of the following XRReferenceSpaceType
s MUST support all three: position-disabled
, eye-level
, floor-level
. The minimum sensor data necessary for supporting all three reference spaces is the same.
Note: The position-disabled
type is primarily intended for use with pre-rendered media such as panoramic photos or videos. It should not be used for most other media types due to user discomfort associated with the lack of a neck model or full positional tracking.
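Applications that can work with more than one tracking behavior will typically try their preferred XRReferenceSpaceType first and fall back to a less capable one when requestReferenceSpace() rejects. The selection logic can be sketched as a pure function; pickReferenceSpaceType is a hypothetical helper (not part of this specification), and a real application would discover support by awaiting requestReferenceSpace() and catching rejections rather than consulting a set.

```javascript
// Hypothetical helper: returns the first reference space type from an
// ordered preference list that appears in a set of supported types, or
// null when none match. In real code the "supported" information is only
// observable through requestReferenceSpace() promise rejections.
function pickReferenceSpaceType(preferences, supportedTypes) {
  for (const type of preferences) {
    if (supportedTypes.has(type)) {
      return type;
    }
  }
  return null;
}

// Prefer room-scale tracking, then degrade gracefully.
const choice = pickReferenceSpaceType(
  ["bounded", "floor-level", "eye-level"],
  new Set(["eye-level", "floor-level", "position-disabled"]));
// choice === "floor-level"
```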
The onreset
attribute is an Event handler IDL attribute for the reset
event type.
When an XRReferenceSpace
is requested, the user agent MUST create a reference space by running the following steps:
- Let session be the XRSession object that requested creation of a reference space.
- Let type be set to the XRReferenceSpaceType passed to requestReferenceSpace().
- If type is bounded, let referenceSpace be a new XRBoundedReferenceSpace.
- Else let referenceSpace be a new XRReferenceSpace.
- Initialize referenceSpace's type to be type.
- Initialize referenceSpace's session to be session.
- Return referenceSpace.
The getOffsetReferenceSpace(originOffset) method MUST perform the following steps when invoked:
- Let base be the XRReferenceSpace the method was called on.
- If base is an instance of XRBoundedReferenceSpace, let offsetSpace be a new XRBoundedReferenceSpace and set offsetSpace's boundsGeometry to base's boundsGeometry, with each point multiplied by the inverse of originOffset.
- Else let offsetSpace be a new XRReferenceSpace.
- Set offsetSpace's native origin to base's native origin.
- Set offsetSpace's origin offset to the result of multiplying originOffset by base's origin offset.
- Return offsetSpace.
6.3. XRBoundedReferenceSpace
XRBoundedReferenceSpace
extends XRReferenceSpace
to include boundsGeometry
, indicating the pre-configured boundaries of the user's space.
[SecureContext, Exposed=Window]
interface XRBoundedReferenceSpace : XRReferenceSpace {
  readonly attribute FrozenArray<DOMPointReadOnly> boundsGeometry;
};
The origin of an XRBoundedReferenceSpace
MUST be positioned at the floor, such that the `y` axis equals `0` at floor level. The `x` and `z` position and orientation are initialized based on the conventions of the underlying platform, typically expected to be near the center of the room facing in a logical forward direction.
Note: Other XR platforms sometimes refer to the type of tracking offered by a bounded
reference space as "room scale" tracking. An XRBoundedReferenceSpace
is not intended to describe multi-room spaces, areas with uneven floor levels, or very large open areas. Content that needs to handle those scenarios should use an unbounded
reference space.
Each XRBoundedReferenceSpace
has a native bounds geometry describing the border around the XRBoundedReferenceSpace
, which the user can expect to safely move within. The polygonal boundary is given as an array of DOMPointReadOnly
s, which represents a loop of points at the edges of the safe space. The points describe offsets from the native origin in meters. Points MUST be given in a clockwise order as viewed from above, looking towards the negative end of the Y axis. The y
value of each point MUST be 0
and the w
value of each point MUST be 1
. The bounds can be considered to originate at the floor and extend infinitely high. The shape it describes MAY be convex or concave.
The boundsGeometry
attribute is an array of DOMPointReadOnly
s such that each entry is equal to the entry in the XRBoundedReferenceSpace
's native bounds geometry premultiplied by the inverse
of the origin offset. In other words, it provides the same border in XRBoundedReferenceSpace
coordinates relative to the effective origin.
Note: Content should not require the user to move beyond the boundsGeometry
. It is possible for the user to move beyond the bounds if their physical surroundings allow for it, resulting in position values outside of the polygon they describe. This is not an error condition and should be handled gracefully by page content.
Note: Content generally should not provide a visualization of the boundsGeometry
, as it’s the user agent’s responsibility to ensure that safety critical information is provided to the user.
7. Views
7.1. XRView
An XRView
describes a single view into an XR scene for a given frame.
Each view corresponds to a display or portion of a display used by an XR device to present imagery to the user. They are used to retrieve all the information necessary to render content that is well aligned to the view's physical output properties, including the field of view, eye offset, and other optical properties. Views may cover overlapping regions of the user’s vision. No guarantee is made about the number of views any XR device uses or their order, nor is the number of views required to be constant for the duration of an XRSession
.
A view has an associated internal view offset, which is an XRRigidTransform
describing the position and orientation of the view in the viewerSpace
's coordinate system.
A view has an associated projection matrix, which is a matrix describing the projection to be used when rendering the view, provided by the underlying XR device. The projection matrix MAY include transformations such as shearing that prevent the projection from being accurately described by a simple frustum.
A view has an associated eye, which is an XREye
describing which eye this view is expected to be shown to. If the view does not have an intrinsically associated eye (the display is monoscopic, for example) this value MUST be set to "left"
.
NOTE: Many HMDs will request that content render two views, one for the left eye and one for the right, while most magic window devices will only request one view, but applications should never assume a specific view configuration. For example: A magic window device may request two views if it is capable of stereo output, but may revert to requesting a single view for performance reasons if the stereo output mode is turned off. Similarly, HMDs may request more than two views to facilitate a wide field of view or displays of different pixel density.
enum XREye {
  "left",
  "right"
};

[SecureContext, Exposed=Window]
interface XRView {
  readonly attribute XREye eye;
  [SameObject] readonly attribute Float32Array projectionMatrix;
  [SameObject] readonly attribute XRRigidTransform transform;
};
The eye
attribute describes the eye of the underlying view. This attribute's primary purpose is to ensure that pre-rendered stereo content can present the correct portion of the content to the correct eye.
The projectionMatrix
attribute is the projection matrix of the underlying view. It is strongly recommended that applications use this matrix without modification or decomposition. Failure to use the provided projection matrices when rendering may cause the presented frame to be distorted or badly aligned, resulting in varying degrees of user discomfort.
The transform
attribute is the XRRigidTransform
of the viewpoint.
NOTE: The transform
can be used to position camera objects in many rendering libraries. If a more traditional view matrix is needed by the application one can be retrieved by calling `view.transform.inverse.matrix`.
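The relationship between a rigid transform's matrix and the view matrix can be made concrete with a small amount of math. The sketch below, which substitutes plain JavaScript objects for DOMPointReadOnly, builds the column-major matrix for a transform from its position and orientation, then inverts it the way transform.inverse.matrix does (transpose the rotation and rotate the negated translation).

```javascript
// Build a column-major 4x4 matrix from a rigid transform's position
// (a point) and orientation (a unit quaternion). Plain objects stand in
// for DOMPointReadOnly here.
function rigidTransformMatrix({ x: px, y: py, z: pz }, { x, y, z, w }) {
  return [
    1 - 2 * (y * y + z * z), 2 * (x * y + w * z),     2 * (x * z - w * y),     0,
    2 * (x * y - w * z),     1 - 2 * (x * x + z * z), 2 * (y * z + w * x),     0,
    2 * (x * z + w * y),     2 * (y * z - w * x),     1 - 2 * (x * x + y * y), 0,
    px,                      py,                      pz,                      1,
  ];
}

// Invert a rigid transform matrix: the rotation block is transposed and
// the translation becomes -(R^T * t). This mirrors what
// `transform.inverse.matrix` yields for a pure rotation + translation.
function invertRigidMatrix(m) {
  const t = [m[12], m[13], m[14]];
  return [
    m[0], m[4], m[8],  0,
    m[1], m[5], m[9],  0,
    m[2], m[6], m[10], 0,
    -(m[0] * t[0] + m[1] * t[1] + m[2] * t[2]),
    -(m[4] * t[0] + m[5] * t[1] + m[6] * t[2]),
    -(m[8] * t[0] + m[9] * t[1] + m[10] * t[2]),
    1,
  ];
}

// A viewer standing 1.6m up: the view matrix translates the world down.
const m = rigidTransformMatrix({ x: 0, y: 1.6, z: 0 }, { x: 0, y: 0, z: 0, w: 1 });
const view = invertRigidMatrix(m); // translation column is now (0, -1.6, 0)
```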
7.2. XRViewport
An XRViewport
object describes a viewport, or rectangular region, of a graphics surface.
[SecureContext, Exposed=Window]
interface XRViewport {
  readonly attribute long x;
  readonly attribute long y;
  readonly attribute long width;
  readonly attribute long height;
};
The x
and y
attributes define an offset from the surface origin and the width
and height
attributes define the rectangular dimensions of the viewport.
The exact interpretation of the viewport values depends on the conventions of the graphics API the viewport is associated with:
- When used with an XRWebGLLayer the x and y attributes specify the lower left corner of the viewport rectangle, in pixels, with the viewport rectangle extending width pixels to the right of x and height pixels above y. The values can be passed to the WebGL viewport function directly.
This example loops through the XRViews of an XRViewerPose, queries an XRViewport from an XRWebGLLayer for each, and uses them to set the appropriate WebGL viewports for rendering.

xrSession.requestAnimationFrame((time, xrFrame) => {
  let viewer = xrFrame.getViewerPose(xrReferenceSpace);

  gl.bindFramebuffer(gl.FRAMEBUFFER, xrWebGLLayer.framebuffer);
  for (const xrView of viewer.views) {
    let xrViewport = xrWebGLLayer.getViewport(xrView);
    gl.viewport(xrViewport.x, xrViewport.y, xrViewport.width, xrViewport.height);

    // WebGL draw calls will now be rendered into the appropriate viewport.
  }
});
8. Geometric Primitives
8.1. Matrices
WebXR provides various transforms in the form of matrices. WebXR uses the WebGL conventions when communicating matrices, in which 4x4 matrices are given as 16 element Float32Array
s with column major storage, and are applied to column vectors by premultiplying the matrix from the left. They may be passed directly to WebGL’s uniformMatrix4fv
function, used to create an equivalent DOMMatrix
, or used with a variety of third party math libraries.
A 4x4 matrix is given as a Float32Array laid out like so:
[a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15]
Applying this matrix as a transform to a column vector specified as a DOMPointReadOnly like so:
{x:X, y:Y, z:Z, w:1}
Produces the following result:
[a0 a4 a8  a12]   [X]   [a0*X + a4*Y + a8*Z  + a12]
[a1 a5 a9  a13] * [Y] = [a1*X + a5*Y + a9*Z  + a13]
[a2 a6 a10 a14]   [Z]   [a2*X + a6*Y + a10*Z + a14]
[a3 a7 a11 a15]   [1]   [a3*X + a7*Y + a11*Z + a15]
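The product above translates directly into code. This sketch, which uses a plain object in place of DOMPointReadOnly, applies a column-major 16 element matrix to a column vector:

```javascript
// Apply a column-major 4x4 matrix (16 element array) to a column vector
// {x, y, z, w}, premultiplying the matrix from the left as WebXR does.
function transformPoint(a, { x, y, z, w }) {
  return {
    x: a[0] * x + a[4] * y + a[8] * z + a[12] * w,
    y: a[1] * x + a[5] * y + a[9] * z + a[13] * w,
    z: a[2] * x + a[6] * y + a[10] * z + a[14] * w,
    w: a[3] * x + a[7] * y + a[11] * z + a[15] * w,
  };
}

// A translation by (5, 0, 0) moves a point (w = 1) but not a vector (w = 0).
const translate = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 5, 0, 0, 1];
transformPoint(translate, { x: 1, y: 2, z: 3, w: 1 }); // → { x: 6, y: 2, z: 3, w: 1 }
```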
8.2. Normalization
There are several algorithms which call for a vector or quaternion to be normalized, which means to scale the components to have a collective magnitude of 1.0
.
To normalize a list of components the UA MUST perform the following steps:
- Let length be the square root of the sum of the squares of each component.
- If length is 0, throw an InvalidStateError and abort these steps.
- Divide each component by length and set the component.
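The algorithm maps directly onto code; a minimal sketch, with the InvalidStateError approximated here by a plain Error:

```javascript
// Normalize a list of components to a collective magnitude of 1.0,
// throwing when the input has zero length, as the algorithm requires.
function normalize(components) {
  const length = Math.hypot(...components);
  if (length === 0) {
    throw new Error("InvalidStateError: cannot normalize a zero-length value");
  }
  return components.map((c) => c / length);
}

normalize([3, 4]); // → [0.6, 0.8]
```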
8.3. XRRigidTransform
An XRRigidTransform
is a transform described by a position
and orientation
. When interpreting an XRRigidTransform
the orientation
is always applied prior to the position
.
[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit position, optional DOMPointInit orientation)]
interface XRRigidTransform {
  [SameObject] readonly attribute DOMPointReadOnly position;
  [SameObject] readonly attribute DOMPointReadOnly orientation;
  [SameObject] readonly attribute Float32Array matrix;
  [SameObject] readonly attribute XRRigidTransform inverse;
};
The XRRigidTransform(position, orientation)
constructor MUST perform the following steps when invoked:
- Let transform be a new XRRigidTransform.
- If position is not a DOMPointInit, initialize transform's position to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.
- Else initialize transform's position's x value to position's x dictionary member, y value to position's y dictionary member, z value to position's z dictionary member and w to 1.0.
- If orientation is not a DOMPointInit, initialize transform's orientation to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.
- Else initialize transform's orientation's x value to orientation's x dictionary member, y value to orientation's y dictionary member, z value to orientation's z dictionary member and w value to orientation's w dictionary member.
- Normalize the x, y, z, and w components of transform's orientation.
- Return transform.
The position
attribute is a 3-dimensional point, given in meters, describing the translation component of the transform. The position
's w
attribute MUST be 1.0
.
The orientation
attribute is a quaternion describing the rotational component of the transform. The orientation
MUST be normalized to have a length of 1.0
.
The matrix
attribute returns the transform described by the position
and orientation
attributes as a matrix. This attribute SHOULD be lazily evaluated.
The inverse
attribute returns an XRRigidTransform
which, if applied to an object that had previously been transformed by the original XRRigidTransform
, would undo the transform and return the object to its initial pose. This attribute SHOULD be lazily evaluated. The XRRigidTransform
returned by inverse
MUST return the originating XRRigidTransform
as its inverse
.
An XRRigidTransform
with a position
of { x: 0, y: 0, z: 0, w: 1 }
and an orientation
of { x: 0, y: 0, z: 0, w: 1 }
is known as an identity transform.
To multiply two XRRigidTransform
s, B and A, the UA MUST perform the following steps:
- Let result be a new XRRigidTransform object.
- Set result's matrix to the result of premultiplying B's matrix from the left onto A's matrix.
- Set result's orientation to the quaternion that describes the rotation indicated by the top left 3x3 sub-matrix of result's matrix.
- Set result's position to the vector given by the fourth column of result's matrix.
- Return result.
result is a transform from A’s source space to B’s destination space.
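Step 2 of the algorithm above is an ordinary column-major matrix product. A sketch of just that product (the quaternion extraction in step 3 is omitted):

```javascript
// Premultiply column-major 4x4 matrix b from the left onto a: result = b * a.
// Entries are indexed as m[col * 4 + row], matching WebXR's storage order.
function multiplyMatrices(b, a) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; ++col) {
    for (let row = 0; row < 4; ++row) {
      for (let k = 0; k < 4; ++k) {
        out[col * 4 + row] += b[k * 4 + row] * a[col * 4 + k];
      }
    }
  }
  return out;
}

// Composing a translation by (0, 2, 0) with one by (1, 0, 0) yields (1, 2, 0).
const tA = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 2, 0, 1];
const tB = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1];
const tBA = multiplyMatrices(tB, tA); // translation column: [1, 2, 0, 1]
```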
8.4. XRRay
An XRRay
is a geometric ray described by an origin
point and direction
vector.
[SecureContext, Exposed=Window,
 Constructor(optional DOMPointInit origin, optional DOMPointInit direction),
 Constructor(XRRigidTransform transform)]
interface XRRay {
  [SameObject] readonly attribute DOMPointReadOnly origin;
  [SameObject] readonly attribute DOMPointReadOnly direction;
  [SameObject] readonly attribute Float32Array matrix;
};
The XRRay(origin, direction)
constructor MUST perform the following steps when invoked:
- Let ray be a new XRRay.
- If origin is not a DOMPointInit, initialize ray's origin to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.
- Else initialize ray's origin's x value to origin's x dictionary member, y value to origin's y dictionary member, z value to origin's z dictionary member and w to 1.0.
- If direction is not a DOMPointInit, initialize ray's direction to { x: 0.0, y: 0.0, z: -1.0, w: 0.0 }.
- Else initialize ray's direction's x value to direction's x dictionary member, y value to direction's y dictionary member, z value to direction's z dictionary member and w value to 0.0.
- Return ray.
The XRRay(transform)
constructor MUST perform the following steps when invoked:
- Let ray be a new XRRay.
- Initialize ray's origin to { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }.
- Initialize ray's direction to { x: 0.0, y: 0.0, z: -1.0, w: 0.0 }.
- Multiply ray's origin by the transform's matrix and set ray's origin to the result.
- Multiply ray's direction by the transform's matrix and set ray's direction to the result.
- Return ray.
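These steps amount to transforming the default origin (w = 1) and default direction (w = 0) by the transform's column-major matrix. A self-contained sketch, assuming plain objects rather than DOMPointReadOnly:

```javascript
// Compute the origin and direction an XRRay(transform) would produce:
// the default origin {0,0,0,1} and direction {0,0,-1,0} multiplied by
// the transform's column-major 4x4 matrix.
function rayFromMatrix(m) {
  // m * [0, 0, 0, 1]: the matrix's translation column.
  const origin = { x: m[12], y: m[13], z: m[14], w: 1.0 };
  // m * [0, 0, -1, 0]: the negated third rotation column. Since w is 0 the
  // translation does not apply, and a rigid transform keeps it unit length.
  const direction = { x: -m[8], y: -m[9], z: -m[10], w: 0.0 };
  return { origin, direction };
}

// A pure translation moves the origin but leaves the direction at -Z:
// origin becomes (5, 0, 2) while direction stays (0, 0, -1).
const ray = rayFromMatrix([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 5, 0, 2, 1]);
```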
The origin
attribute defines the 3-dimensional point in space that the ray originates from, given in meters. The origin
's w
attribute MUST be 1.0
.
The direction
attribute defines the ray’s 3-dimensional directional vector. The direction
's w
attribute MUST be 0.0
and the vector MUST be normalized to have a length of 1.0
.
The matrix
attribute is a matrix which represents the transform from a ray originating at [0, 0, 0]
and extending down the negative Z axis to the ray described by the XRRay
's origin
and direction
.
NOTE: The XRRay
's matrix
can be used to easily position graphical representations of the ray when rendering.
9. Pose
9.1. XRPose
An XRPose
describes a position and orientation in space relative to an XRSpace
.
[SecureContext, Exposed=Window]
interface XRPose {
  [SameObject] readonly attribute XRRigidTransform transform;
  readonly attribute boolean emulatedPosition;
};
The transform
attribute describes the XRRigidTransform
between two XRSpace
.
The emulatedPosition attribute MUST be false if the transform.position values are based on sensor readings, or true if the position values are software estimations, such as those provided by a neck or arm model.

9.2. XRViewerPose
An XRViewerPose is an XRPose describing the state of a viewer of the XR scene as tracked by the XR device. A viewer may represent a tracked piece of hardware, the observed position of a user's head relative to the hardware, or some other means of computing a series of viewpoints into the XR scene. XRViewerPoses can only be queried relative to an XRReferenceSpace. It provides, in addition to the XRPose values, an array of views which include rigid transforms to indicate the viewpoint and projection matrices. These values should be used by the application when rendering a frame of an XR scene.
[SecureContext, Exposed=Window]
interface XRViewerPose : XRPose {
  [SameObject] readonly attribute FrozenArray<XRView> views;
};
The views
array is a sequence of XRView
s describing the viewpoints of the XR scene, relative to the XRReferenceSpace
the XRViewerPose
was queried with. Every view of the XR scene in the array must be rendered in order to display correctly on the XR device. Each XRView
includes rigid transforms to indicate the viewpoint and projection matrices, and can be used to query XRViewport
s from layers when needed.
NOTE: The XRViewerPose
's transform
can be used to position graphical representations of the viewer for spectator views of the scene or multi-user interaction.
10. Input
10.1. XRInputSource
An XRInputSource
represents any input mechanism which allows the user to perform targeted actions in the same virtual space as the viewer.
enum XRHandedness {
  "none",
  "left",
  "right"
};

enum XRTargetRayMode {
  "gaze",
  "tracked-pointer",
  "screen"
};

[SecureContext, Exposed=Window]
interface XRInputSource {
  readonly attribute XRHandedness handedness;
  readonly attribute XRTargetRayMode targetRayMode;
  [SameObject] readonly attribute XRSpace targetRaySpace;
  [SameObject] readonly attribute XRSpace? gripSpace;
  [SameObject] readonly attribute Gamepad? gamepad;
};
Each XRInputSource
SHOULD define a primary action. The primary action is a platform-specific action that, when engaged, produces selectstart
, selectend
, and select
events. Examples of possible primary actions are pressing a trigger, touchpad, or button, speaking a command, or making a hand gesture. If the platform guidelines define a recommended primary input then it should be used as the primary action, otherwise the user agent is free to select one.
The handedness
attribute describes which hand the input source is associated with, if any. Input sources with no natural handedness (such as headset-mounted controls or standard gamepads) or for which the handedness is not currently known MUST set this attribute to none.
The targetRayMode
attribute describes the method used to produce the target ray, and indicates how the application should present the target ray to the user if desired.
- gaze indicates the target ray will originate at the user's head and follow the direction they are looking (this is commonly referred to as a "gaze input" device).
- tracked-pointer indicates that the target ray originates from either a handheld device or other hand-tracking mechanism and represents that the user is using their hands or the held device for pointing. The orientation of the target ray relative to the tracked object MUST follow platform-specific ergonomics guidelines when available. In the absence of platform-specific guidance, the target ray SHOULD point in the same direction as the user's index finger if it was outstretched.
- screen indicates that the input source was an interaction with the canvas element associated with an inline session's output context, such as a mouse click or touch event.
Note: Some input sources, like an XRInputSource
with targetRayMode
set to screen
, will only be added to the session’s list of active input sources immediately before the selectstart
event, and removed from the session’s list of active input sources immediately after the selectend
event.
The targetRaySpace
attribute is an XRSpace
that tracks the pose of the preferred pointing ray of the XRInputSource
, as defined by the targetRayMode
.
The gripSpace attribute is an XRSpace that tracks the pose that should be used to render virtual objects such that they appear to be held in the user's hand. If the user were to hold a straight rod, this XRSpace places the origin at the centroid of their curled fingers, with the -Z axis pointing along the length of the rod towards their thumb. The X axis is perpendicular to the back of the hand being described, with the back of the user's right hand pointing towards +X and the back of the user's left hand pointing towards -X. The Y axis is implied by the relationship between the X and Z axes, with +Y roughly pointing in the direction of the user's arm.
The gripSpace MUST be null if the input source isn't inherently trackable, such as for input sources with a targetRayMode of gaze or screen.
The gamepad
attribute is a Gamepad
that describes the state of any buttons and axes on the input device. If the device does not have at least one of the following properties, the gamepad
attribute MUST be null
:
-
Multiple buttons
-
A button which reports analog values
-
An axis
NOTE: XRInputSource
s with a null
gamepad
can still fire select
, selectstart
, and selectend
events to report binary inputs.
10.2. Gamepad API Integration
Gamepad
instances returned by an XRInputSource
's gamepad
attribute behave as described by the Gamepad API, with several additional behavioral restrictions:
- gamepad MUST NOT be included in the array returned by navigator.getGamepads().
- gamepad's connected attribute MUST be true until the XRInputSource is removed from the array returned by getInputSources() or the XRSession is ended.
The gamepad's id is also subject to additional behavioral restrictions, and MUST conform to the following rules:
- If the XRSession's mode is inline the gamepad's id MUST be "unknown".
- If the input device cannot be reliably identified the gamepad's id MUST be "unknown".
- If the UA masks the input device type for any reason the gamepad's id MUST be "unknown".
GitHub #550 - Additional restrictions, still under discussion, will be applied to the formatting of the gamepad's id attribute. Until those restrictions have been identified, all gamepad ids should default to "unknown".
10.3. "xr-standard" Gamepad Mapping
The WebXR Device API extends the GamepadMappingType
to describe the mapping of common XR controller devices.
enum GamepadMappingType {
  "",            // Defined in the Gamepad API
  "standard",    // Defined in the Gamepad API
  "xr-standard"
};
The xr-standard
mapping indicates that the layout of the buttons and axes of the gamepad
corresponds as closely as possible to the tables below.
In order to report a mapping
of xr-standard
the device MUST report a targetRayMode
of tracked-pointer
and MUST have a non-null
gripSpace
. It also MUST have at least one touchpad or joystick and MUST have at least one primary button, often a trigger, separate from the touchpad or joystick. If a device does not meet the requirements for the xr-standard
mapping it may still be exposed on an XRInputSource
with the ""
(empty string) mapping. The xr-standard
mapping MUST only be used by Gamepad
instances reported by an XRInputSource
.
Button | xr-standard Location | Required
---|---|---
buttons[0] | Primary Button/Trigger | Yes
buttons[1] | Primary Touchpad/Joystick Button | Yes
buttons[2] | Secondary Button/Grip Trigger | No
buttons[3] | Secondary Touchpad/Joystick Button | No

Axis | xr-standard Location | Required
---|---|---
axes[0] | Primary Touchpad/Joystick X | Yes
axes[1] | Primary Touchpad/Joystick Y | Yes
axes[2] | Secondary Touchpad/Joystick X | No
axes[3] | Secondary Touchpad/Joystick Y | No
Additional buttons or axes may be exposed after these reserved indices, and SHOULD appear in order of decreasing importance. Related axes (such as both axes of a joystick) SHOULD be grouped and, if applicable, SHOULD appear in X, Y, Z order. Devices that lack one of the inputs listed in the tables above MUST still preserve their place in the buttons or axes array. If a device has both a touchpad and a joystick the UA MAY designate whichever it chooses to be the primary axis-based input. Buttons reserved by the UA or platform MUST NOT be exposed.
NOTE: This diagram demonstrates how two potential controllers would be exposed with the `xr-standard` mapping.
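In application code the reserved indices above can be read defensively, since optional inputs keep their place in the arrays but may be absent or inert. The helper below is a hypothetical sketch, not part of this specification, shown against a mock gamepad-shaped object rather than a live device:

```javascript
// Hypothetical helper that reads the xr-standard reserved slots from a
// Gamepad-shaped object; devices lacking an optional input still reserve
// its index, so missing entries are treated as neutral.
function readXRStandardGamepad(gamepad) {
  if (!gamepad || gamepad.mapping !== "xr-standard") return null;
  const button = (i) => gamepad.buttons[i] || { pressed: false, value: 0 };
  return {
    trigger: button(0).value,           // buttons[0]: primary button/trigger
    touchpadPressed: button(1).pressed, // buttons[1]: primary touchpad/joystick button
    primaryAxes: [gamepad.axes[0] || 0, gamepad.axes[1] || 0],
  };
}

// Mock input source gamepad, for illustration only.
const mock = {
  mapping: "xr-standard",
  buttons: [{ pressed: true, value: 0.75 }, { pressed: false, value: 0 }],
  axes: [0.5, -0.25],
};
readXRStandardGamepad(mock);
// → { trigger: 0.75, touchpadPressed: false, primaryAxes: [0.5, -0.25] }
```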
11. Layers
11.1. XRLayer
An XRLayer
defines a source of bitmap images and a description of how the image is to be rendered to the XR device. Initially only one type of layer, the XRWebGLLayer
, is defined but future revisions of the spec may extend the available layer types.
[SecureContext, Exposed=Window]
interface XRLayer {};
11.2. XRWebGLLayer
An XRWebGLLayer
is a layer which provides a WebGL framebuffer to render into, enabling hardware accelerated rendering of 3D graphics to be presented on the XR device.
typedef (WebGLRenderingContext or WebGL2RenderingContext) XRWebGLRenderingContext;

dictionary XRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = true;
  boolean stencil = false;
  boolean alpha = true;
  boolean ignoreDepthValues = false;
  double framebufferScaleFactor = 1.0;
};

[SecureContext, Exposed=Window,
 Constructor(XRSession session, XRWebGLRenderingContext context, optional XRWebGLLayerInit layerInit)]
interface XRWebGLLayer : XRLayer {
  // Attributes
  [SameObject] readonly attribute XRWebGLRenderingContext context;
  readonly attribute boolean antialias;
  readonly attribute boolean ignoreDepthValues;
  [SameObject] readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;

  // Methods
  XRViewport? getViewport(XRView view);
  void requestViewportScaling(double viewportScaleFactor);

  // Static Methods
  static double getNativeFramebufferScaleFactor(XRSession session);
};
The XRWebGLLayer(session, context, layerInit)
constructor MUST perform the following steps when invoked:
- Let layer be a new XRWebGLLayer.
- If session's ended value is true, throw an InvalidStateError and abort these steps.
- If context is lost, throw an InvalidStateError and abort these steps.
- If context's XR compatible boolean is false, throw an InvalidStateError and abort these steps.
- Initialize layer's context to context.
- Initialize layer's antialias to layerInit's antialias value.
- If layerInit's ignoreDepthValues value is false and the XR Compositor will make use of depth values, initialize layer's ignoreDepthValues to false.
- Else initialize layer's ignoreDepthValues to true.
- Initialize layer's framebuffer to a new opaque framebuffer created with context.
- Initialize the layer's swap chain.
- If layer's swap chain was unable to be created for any reason, throw an OperationError and abort these steps.
- Return layer.
The context
attribute is the WebGLRenderingContext
the XRWebGLLayer
was created with.
The framebuffer
attribute of an XRWebGLLayer
is an instance of a WebGLFramebuffer
which has been marked as opaque. An opaque framebuffer functions identically to a standard WebGLFramebuffer
with the following changes that make it behave more like the default framebuffer:
- An opaque framebuffer MAY support antialiasing, even in WebGL 1.0.
- An opaque framebuffer's attachments cannot be inspected or changed. Calling framebufferTexture2D, framebufferRenderbuffer, getFramebufferAttachmentParameter, or getRenderbufferParameter with an opaque framebuffer MUST generate an INVALID_OPERATION error.
- An opaque framebuffer is considered incomplete outside of a requestAnimationFrame() callback. When not in a requestAnimationFrame() callback, calls to checkFramebufferStatus MUST generate a FRAMEBUFFER_UNSUPPORTED error, and attempts to clear, draw to, or read from the opaque framebuffer MUST generate an INVALID_FRAMEBUFFER_OPERATION error.
The framebufferWidth
and framebufferHeight
attributes return the width and height of the framebuffer
's attachments, respectively.
The antialias
attribute is true
if the framebuffer
supports antialiasing using a technique of the UAs choosing, and false
if no antialiasing will be performed.
The ignoreDepthValues
attribute, if true
, indicates the XR Compositor MUST NOT make use of values in the depth buffer attachment when rendering. When the attribute is false
it indicates that the content of the depth buffer attachment will be used by the XR Compositor and is expected to be representative of the scene rendered into the layer.
Depth values stored in the buffer are expected to be between 0.0
and 1.0
, with 0.0
representing the distance of depthNear
and 1.0
representing the distance of depthFar
, with intermediate values interpolated linearly. This is the default behavior of WebGL. (See documentation for the depthRange function for additional details.))
NOTE: Making the scene’s depth buffer available to the compositor allows some platforms to provide quality and comfort improvements such as improved reprojection.
Each XRWebGLLayer
MUST have a list of viewports which contains one WebGL viewport for each XRView
the XRSession
currently exposes. The viewports MUST NOT be overlapping. The XRWebGLLayer
MUST also have a viewport scale factor, initially set to 1.0, and a minimum viewport scale factor set to a UA-determined value between 0 and 1.
The getViewport() method queries the XRViewport the given XRView should use when rendering to the layer.
The getViewport(view)
method, when invoked, MUST run the following steps:
- If layer was created with a different XRSession than the one that produced view, return null.
- Let glViewport be the WebGL viewport from the list of viewports associated with view.
- Let viewport be a new XRViewport instance.
- Initialize viewport's x to glViewport's x component.
- Initialize viewport's y to glViewport's y component.
- Initialize viewport's width to glViewport's width component multiplied by the viewport scale factor.
- Initialize viewport's height to glViewport's height component multiplied by the viewport scale factor.
- Return viewport.
The framebuffer size cannot be adjusted by the developer after the XRWebGLLayer has been created, but it can be useful to adjust the resolution content is rendered at during runtime to aid application performance. To do so, developers can request that the size of the viewports in the list of viewports be changed using the requestViewportScaling() method.
The requestViewportScaling(scaleFactor)
method, when invoked, MUST run the following steps:
1. If scaleFactor is greater than 1.0, set scaleFactor to 1.0.
2. If scaleFactor is less than the minimum viewport scale factor, set scaleFactor to the minimum viewport scale factor.
3. If the XR device places additional device-specific restrictions on viewport size, adjust scaleFactor accordingly.
4. Set the viewport scale factor to scaleFactor.
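As a sketch of how an application might drive this method, the heuristic below lowers the requested scale when frame times exceed a budget and raises it again when there is headroom. The thresholds and step sizes are illustrative assumptions, not part of the API; the user agent still clamps any request between the minimum viewport scale factor and 1.0.

```javascript
// Illustrative dynamic-resolution heuristic (thresholds are assumptions).
function chooseViewportScale(currentScale, frameTimeMs, budgetMs) {
  if (frameTimeMs > budgetMs * 1.1) {
    // Over budget: drop resolution, but not below half scale.
    return Math.max(0.5, currentScale - 0.25);
  }
  if (frameTimeMs < budgetMs * 0.8) {
    // Comfortable headroom: creep back toward full resolution.
    return Math.min(1.0, currentScale + 0.125);
  }
  return currentScale;
}

function adaptResolution(layer, currentScale, frameTimeMs, budgetMs) {
  const next = chooseViewportScale(currentScale, frameTimeMs, budgetMs);
  if (next !== currentScale) {
    // The UA clamps this request per the steps above.
    layer.requestViewportScaling(next);
  }
  return next;
}
```

An application would call adaptResolution once per frame with a measured frame time and a budget derived from the display's refresh rate.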
Each XRSession
MUST identify a native WebGL framebuffer resolution, which is the pixel resolution of a WebGL framebuffer required to match the physical pixel resolution of the XR device.
The native WebGL framebuffer resolution is determined by running the following steps:
1. Let session be the target XRSession.
2. If session’s mode value is not "inline", set the native WebGL framebuffer resolution to the resolution required to have a 1:1 ratio between the pixels of a framebuffer large enough to contain all of the session’s XRViews and the physical screen pixels in the area of the display under the highest magnification, then abort these steps. If no method exists to determine the native resolution as described, the recommended WebGL framebuffer resolution MAY be used.
3. If session’s mode value is "inline", set the native WebGL framebuffer resolution to the size of the session’s renderState's outputContext's canvas in physical display pixels, and reevaluate these steps every time the size of the canvas changes or the outputContext is changed.
Additionally, the XRSession
MUST identify a recommended WebGL framebuffer resolution, which represents a best estimate of the WebGL framebuffer resolution large enough to contain all of the session’s XRView
s that provides an average application a good balance between performance and quality. It MAY be smaller than, larger than, or equal to the native WebGL framebuffer resolution.
NOTE: The user agent is free to use any method of its choosing to estimate the recommended WebGL framebuffer resolution. If there are platform-specific methods for querying a recommended size, it is recommended, but not required, that they be used.
The getNativeFramebufferScaleFactor(session)
method, when invoked, MUST run the following steps:
1. Let session be the target XRSession.
2. If session’s ended value is true, return 0.0 and abort these steps.
3. Return the value that the session’s recommended WebGL framebuffer resolution must be multiplied by to yield the session’s native WebGL framebuffer resolution.
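For instance, an application that wants to render at the device's native resolution rather than the recommended one could apply this scale factor when creating its layer. This sketch assumes the framebufferScaleFactor option of the XRWebGLLayer constructor described earlier in this specification:

```javascript
// Sketch: create a layer whose framebuffer matches the native WebGL
// framebuffer resolution. framebufferScaleFactor is applied on top of
// the recommended resolution, so the native scale factor maps the
// recommended size to the native size.
function createNativeResolutionLayer(session, gl) {
  const scale = XRWebGLLayer.getNativeFramebufferScaleFactor(session);
  if (scale === 0.0) {
    // Per the steps above, 0.0 means the session has already ended.
    return null;
  }
  return new XRWebGLLayer(session, gl, { framebufferScaleFactor: scale });
}
```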
Issue: We either need to define the swap chain or remove references to it. (This is filed.)
11.3. WebGL Context Compatibility
In order for a WebGL context to be used as a source for XR imagery it must be created on a compatible graphics adapter for the XR device. What is considered a compatible graphics adapter is platform dependent, but is understood to mean that the graphics adapter can supply imagery to the XR device without undue latency. If a WebGL context was not already created on the compatible graphics adapter, it typically must be re-created on the adapter in question before it can be used with an XRWebGLLayer
.
Note: On an XR platform with a single GPU, it can safely be assumed that the GPU is compatible with the XR devices advertised by the platform, and thus any hardware-accelerated WebGL contexts are compatible as well. On PCs with both an integrated and a discrete GPU, the discrete GPU is often considered the compatible graphics adapter since it is generally the higher-performance chip. On desktop PCs with multiple graphics adapters installed, the one with the XR device physically connected to it is likely to be considered the compatible graphics adapter.
partial dictionary WebGLContextAttributes {
    boolean xrCompatible = false;
};

partial interface mixin WebGLRenderingContextBase {
    Promise<void> makeXRCompatible();
};
When a user agent implements this specification it MUST set a XR compatible boolean, initially set to false
, on every WebGLRenderingContextBase
. Once the XR compatible boolean is set to true
, the context can be used with layers for any XRSession
requested from the current XR device.
The XR compatible boolean can be set either at context creation time or after context creation, potentially incurring a context loss. To set the XR compatible boolean at context creation time, the xrCompatible
context creation attribute must be set to true
when requesting a WebGL context.
When the HTMLCanvasElement
's getContext() method is invoked with a WebGLContextAttributes
dictionary with xrCompatible
set to true
, run the following steps:
1. Create the WebGL context as usual, ensuring it is created on a compatible graphics adapter for the XR device.
2. Let context be the newly created WebGL context.
3. Set context’s XR compatible boolean to true.
4. Return context.
The following example creates a WebGL context that is compatible with an XR device and then uses it to create an XRWebGLLayer.

function onXRSessionStarted(xrSession) {
  let glCanvas = document.createElement("canvas");
  let gl = glCanvas.getContext("webgl", { xrCompatible: true });

  loadWebGLResources();

  xrSession.updateRenderState({ baseLayer: new XRWebGLLayer(xrSession, gl) });
}
To set the XR compatible boolean after the context has been created, the makeXRCompatible()
method is used.
When the makeXRCompatible()
method is invoked, the user agent MUST return a new Promise promise and run the following steps in parallel:
1. Let context be the target WebGLRenderingContextBase object.
2. If context’s WebGL context lost flag is set, reject promise with an InvalidStateError and abort these steps.
3. If context’s XR compatible boolean is true, resolve promise and abort these steps.
4. If context was created on a compatible graphics adapter for the XR device:
   1. Set context’s XR compatible boolean to true.
   2. Resolve promise and abort these steps.
5. Queue a task to perform the following steps:
   1. Force context to be lost and handle the context loss as described by the WebGL specification.
   2. If the canceled flag of the "webglcontextlost" event fired in the previous step was not set, reject promise with an AbortError and abort these steps.
   3. Restore the context on a compatible graphics adapter for the XR device.
   4. Set context’s XR compatible boolean to true.
   5. Resolve promise.
Additionally, when any WebGL context is lost run the following steps prior to firing the "webglcontextlost" event:
1. Set the context’s XR compatible boolean to false.
The following example demonstrates creating an XRWebGLLayer from a pre-existing WebGL context.

let glCanvas = document.createElement("canvas");
let gl = glCanvas.getContext("webgl");

loadWebGLResources();

glCanvas.addEventListener("webglcontextlost", (event) => {
  // Indicates that the WebGL context can be restored.
  event.preventDefault();
});

glCanvas.addEventListener("webglcontextrestored", (event) => {
  // WebGL resources need to be re-created after a context loss.
  loadWebGLResources();
});

function onXRSessionStarted(xrSession) {
  // Make sure the canvas context we want to use is compatible with the device.
  // May trigger a context loss.
  return gl.makeXRCompatible().then(() => {
    return xrSession.updateRenderState({
      baseLayer: new XRWebGLLayer(xrSession, gl)
    });
  });
}
12. Canvas Rendering Context
12.1. XRPresentationContext
When the getContext()
method of an HTMLCanvasElement
canvas is to return a new object for the contextId present-xr
, the user agent must return an XRPresentationContext
with canvas
set to canvas.
[SecureContext, Exposed=Window]
interface XRPresentationContext {
  [SameObject] readonly attribute HTMLCanvasElement canvas;
};
The canvas
attribute indicates the HTMLCanvasElement
that created this context.
Each XRPresentationContext has a session, which is an XRSession, initially set to null.
When another algorithm indicates it will update the XRPresentationContext session to XRSession
session, run the following steps:
1. Let context be the target XRPresentationContext.
2. Let prevSession be context’s session.
3. If prevSession is equal to session, abort these steps.
4. If prevSession is not null:
   1. If prevSession’s renderState's outputContext is equal to context, set prevSession’s renderState's outputContext to null.
5. Set context’s session to session.
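Putting this together, a page might obtain a "present-xr" context from a canvas and hand it to a session as its outputContext so the session's output is mirrored to the page. This sketch relies on updateRenderState and the renderState's outputContext defined earlier in this specification:

```javascript
// Sketch: obtain a "present-xr" context from a canvas and attach it to
// a session so its output is mirrored to the page.
function setupMirrorCanvas(session, canvas) {
  const mirrorContext = canvas.getContext("present-xr");
  // The algorithm above then updates the context's session.
  return session.updateRenderState({ outputContext: mirrorContext });
}
```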
13. Events
13.1. XRSessionEvent
XRSessionEvent
s are fired to indicate changes to the state of an XRSession
.
[SecureContext, Exposed=Window,
 Constructor(DOMString type, XRSessionEventInit eventInitDict)]
interface XRSessionEvent : Event {
  [SameObject] readonly attribute XRSession session;
};

dictionary XRSessionEventInit : EventInit {
  required XRSession session;
};
The session
attribute indicates the XRSession
that generated the event.
13.2. XRInputSourceEvent
XRInputSourceEvent
s are fired to indicate changes to the state of an XRInputSource
.
[SecureContext, Exposed=Window,
 Constructor(DOMString type, XRInputSourceEventInit eventInitDict)]
interface XRInputSourceEvent : Event {
  [SameObject] readonly attribute XRFrame frame;
  [SameObject] readonly attribute XRInputSource inputSource;
  [SameObject] readonly attribute long? buttonIndex;
};

dictionary XRInputSourceEventInit : EventInit {
  required XRFrame frame;
  required XRInputSource inputSource;
  long? buttonIndex = null;
};
The inputSource
attribute indicates the XRInputSource
that generated this event.
The frame
attribute is an XRFrame
that corresponds with the time that the event took place. It may represent historical data. Any XRViewerPose
queried from the frame
MUST have an empty views
array.
The buttonIndex attribute indicates the index of the button in the buttons array of the inputSource's gamepad that caused the event to be generated. If the gamepad is null or the event was not generated by a button interaction, the value MUST be null.
When the user agent fires an XRInputSourceEvent
event it MUST run the following steps:
13.3. XRReferenceSpaceEvent
XRReferenceSpaceEvent
s are fired to indicate changes to the state of an XRReferenceSpace
.
[SecureContext, Exposed=Window,
 Constructor(DOMString type, XRReferenceSpaceEventInit eventInitDict)]
interface XRReferenceSpaceEvent : Event {
  [SameObject] readonly attribute XRReferenceSpace referenceSpace;
  [SameObject] readonly attribute XRRigidTransform? transform;
};

dictionary XRReferenceSpaceEventInit : EventInit {
  required XRReferenceSpace referenceSpace;
  XRRigidTransform transform;
};
The referenceSpace
attribute indicates the XRReferenceSpace
that generated this event.
The transform
attribute describes the transform the referenceSpace
underwent during this event, if applicable.
13.4. Event Types
The user agent MUST provide the following new events. Registration for and firing of the events must follow the usual behavior of DOM4 Events.
The user agent MAY fire a devicechange
event on the XR
object to indicate that the availability of XR devices has been changed. The event MUST be of type Event
.
A user agent MAY dispatch a blur event on an XRSession to indicate that presentation to the XRSession by the page has been suspended by the user agent, OS, or XR hardware. While an XRSession is blurred it remains active, but it may have its frame production throttled. This is to prevent tracking while the user interacts with potentially sensitive UI. For example: the user agent SHOULD blur the presenting application when the user is typing a URL into the browser with a virtual keyboard, otherwise the presenting page may be able to guess the URL the user is entering by tracking their head motions. The event MUST be of type XRSessionEvent.
A user agent MAY dispatch a focus
event on an XRSession
to indicate that presentation to the XRSession
by the page has resumed after being suspended. The event MUST be of type XRSessionEvent
.
A user agent MUST dispatch an end event on an XRSession when the session ends, either by the application or the user agent. The event MUST be of type XRSessionEvent.
A user agent MUST dispatch an inputsourceschange event on an XRSession when the session’s list of active input sources has changed. The event MUST be of type XRSessionEvent.

A user agent MUST dispatch a selectstart event on an XRSession when one of its XRInputSources begins its primary action. The event MUST be of type XRInputSourceEvent.
A user agent MUST dispatch a selectend
event on an XRSession
when one of its XRInputSource
s ends its primary action or when an XRInputSource
that has begun a primary action is disconnected. The event MUST be of type XRInputSourceEvent
.
A user agent MUST dispatch a select
event on an XRSession
when one of its XRInputSource
s has fully completed a primary action. The event MUST be of type XRInputSourceEvent
.
A user agent MUST dispatch a reset event on an XRReferenceSpace when discontinuities of the origin occur. (That is, significant changes in the origin’s position or orientation relative to the user’s environment.) It also fires when the boundsGeometry changes for an XRBoundedReferenceSpace. The event MUST be of type XRReferenceSpaceEvent, and MUST be dispatched prior to the execution of any XR animation frames that make use of the new origin.
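The event types above can be wired up in application code along these lines; the handler bodies are illustrative application logic, not part of the API:

```javascript
// Sketch: registering for the session and reference space events above.
function wireXREvents(session, refSpace, handlers) {
  // A primary action completed (e.g. a controller trigger was pulled).
  session.addEventListener("select", (ev) => handlers.onSelect(ev.inputSource, ev.frame));
  // Input sources were added or removed; re-query the session's inputSources.
  session.addEventListener("inputsourceschange", () => handlers.onInputsChanged());
  // Presentation suspended (e.g. trusted UI is up): pause or throttle.
  session.addEventListener("blur", () => handlers.onBlur());
  // The session ended: release rendering resources.
  session.addEventListener("end", () => handlers.onEnd());
  // The origin moved discontinuously: recompute any cached poses.
  refSpace.addEventListener("reset", (ev) => handlers.onReset(ev.transform));
}
```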
14. Security, Privacy, and Comfort Considerations
The WebXR Device API provides powerful new features which bring with them several unique privacy, security, and comfort risks that user agents must take steps to mitigate.
14.1. Gaze Tracking
While the API does not yet expose eye tracking capabilities, much can be inferred about where the user is looking by tracking the orientation of their head. This is especially true of XR devices that have limited input capabilities, such as Google Cardboard, which frequently require users to control a "gaze cursor" with their head orientation. This means it may be possible for a malicious page to infer what a user is typing on a virtual keyboard or how they are interacting with a virtual UI based solely on monitoring their head movements. For example: if not prevented from doing so, a page could estimate what URL a user is entering into the user agent’s URL bar.
To mitigate this risk the user agent MUST blur all sessions when the user is interacting with sensitive, trusted UI such as URL bars or system dialogs. Additionally, to prevent a malicious page from being able to monitor input on other pages, the user agent MUST blur all sessions on non-focused pages.
14.2. Trusted Environment
If the virtual environment does not consistently track the user’s head motion with low latency and at a high frame rate the user may become disoriented or physically ill. Since it is impossible to force pages to produce consistently performant and correct content the user agent MUST provide a tracked, trusted environment and an XR Compositor which runs asynchronously from page content. The compositor is responsible for compositing the trusted and untrusted content. If content is not performant, does not submit frames, or terminates unexpectedly the user agent should be able to continue presenting a responsive, trusted UI.
Additionally, page content has the ability to make users uncomfortable in ways not related to performance. Badly applied tracking, strobing colors, and content intended to offend, frighten, or intimidate are examples of content which may cause the user to want to quickly exit the XR experience. Removing the XR device in these cases may not always be a fast or practical option. To accommodate this the user agent SHOULD provide users with an action, such as pressing a reserved hardware button or performing a gesture, that escapes out of WebXR content and displays the user agent’s trusted UI.
When navigating between pages in XR the user agent should display trusted UI elements informing the user of the security information of the site they are navigating to which is normally presented by the 2D UI, such as the URL and encryption status.
14.3. Context Isolation
The trusted UI must be drawn by an independent rendering context whose state is isolated from any rendering contexts used by the page. (For example, any WebGL rendering contexts.) This is to prevent the page from corrupting the state of the trusted UI’s context, which may prevent it from properly rendering a tracked environment. It also prevents the possibility of the page being able to capture imagery from the trusted UI, which could lead to private information being leaked.
Also, to prevent CORS-related vulnerabilities, each page will see a new instance of objects returned by the API, such as XRSession. Attributes such as the context set by one page must not be readable by another. Similarly, methods invoked on the API MUST NOT cause an observable state change on other pages. For example: no method will be exposed that enables a system-level orientation reset, as this could be called repeatedly by a malicious page to prevent other pages from tracking properly. The user agent MUST, however, respect system-level orientation resets triggered by a user gesture or system menu.
14.4. Fingerprinting
Given that the API describes hardware available to the user and its capabilities, it will inevitably provide additional surface area for fingerprinting. While it’s impossible to completely avoid this, steps can be taken to mitigate the issue. This spec limits reporting of available hardware to only a single device at a time, which prevents using the rare case of multiple headsets being connected as a fingerprinting signal. Also, the devices that are reported have no string identifiers and expose very little information about the device’s capabilities until an XRSession is created, which may only be triggered via user activation in the most sensitive cases.
15. Integrations
15.1. Feature Policy
This specification defines a policy-controlled feature that controls whether the xr attribute is exposed on the Navigator object.
The feature identifier for this feature is "xr"
.
The default allowlist for this feature is ["self"]
.
16. Acknowledgements
The following individuals have contributed to the design of the WebXR Device API specification:
-
Sebastian Sylvan (Formerly Microsoft)
And a special thanks to Vladimir Vukicevic (Unity) for kick-starting this whole adventure!