XR-Semantics-Module

XR Object Semantics Module

The following document outlines proposed accessibility requirements that may be used in a modular way in Immersive, Augmented or Mixed Reality (XR). A modular approach may help us to define clear accessibility requirements that support XR Accessibility User Requirements, as they relate to the immersive environment, objects, movement, and interaction with assistive technologies such as screen readers. Such a modular approach may help the development of clear semantics, designed to describe specific parts of the immersive eco-system.

In immersive environments it is imperative that the user can understand what objects are and understand their purpose, as well as other qualities and properties including interaction affordance, size, form, shape, and other inherent properties or attributes.

APIs/systems supporting accessibility

The following outlines what should be supported by any given API or Immersive Environment in order to meet various XR Accessibility User requirements.

Taxonomy of Objects

Descriptions of overall object types in any given environment.

Object related qualities that may vary within a taxonomy

Autonomous Objects
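
Taken together, the object qualities outlined in this module (type within a taxonomy, purpose, interaction affordance, size, shape, and whether an object is autonomous) could be expressed to assistive technologies along the lines of the sketch below. All of the names here are hypothetical assumptions for illustration, not part of any existing API.

```typescript
// Hypothetical sketch of the semantics an XR object might expose to
// assistive technologies. None of these names come from an existing API.

/** Interaction affordances a user agent or AT could announce. */
type Affordance = 'grabbable' | 'activatable' | 'openable' | 'readable' | 'none';

/** Overall object type drawn from an environment-level taxonomy. */
type ObjectKind = 'scenery' | 'tool' | 'container' | 'character' | 'control';

interface XRObjectSemantics {
  id: string;
  kind: ObjectKind;            // place in the taxonomy of object types
  name: string;                // short accessible name, e.g. "wooden door"
  purpose?: string;            // what the object is for, e.g. "exit to courtyard"
  affordances: Affordance[];   // how the user may interact with it
  size?: { width: number; height: number; depth: number }; // metres
  shape?: string;              // free-text description of form/shape
  autonomous?: boolean;        // true if the object acts on its own (NPCs, vehicles)
}

// Example: a door an AT could describe as "wooden door, openable, exit to courtyard".
const door: XRObjectSemantics = {
  id: 'door-01',
  kind: 'scenery',
  name: 'wooden door',
  purpose: 'exit to courtyard',
  affordances: ['openable'],
  size: { width: 1, height: 2.1, depth: 0.05 },
  shape: 'rectangular panel',
  autonomous: false,
};
```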

XR Navigation Module

In immersive environments it is imperative that controls be device independent, so that control may be achieved by keyboard, pointing device, speech, etc.
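
One way to picture device independence is to declare every control as an abstract action that any input method can trigger. The sketch below is illustrative only; the action names and types are assumptions rather than part of WebXR or any platform API.

```typescript
// Hypothetical sketch: controls are declared as abstract actions, and any
// input method (keyboard, pointer, speech, switch) can bind to them.

type XRAction = 'move-forward' | 'select' | 'open-menu' | 'undo';
type InputSource = 'keyboard' | 'pointer' | 'speech' | 'switch';
type ActionHandler = (source: InputSource) => void;

const handlers = new Map<XRAction, ActionHandler>();

function onAction(action: XRAction, handler: ActionHandler): void {
  handlers.set(action, handler);
}

/** Called by each input backend; the handler never sees device specifics. */
function trigger(action: XRAction, source: InputSource): void {
  handlers.get(action)?.(source);
}

// The same 'select' action can come from the Enter key, a controller trigger,
// or the word "select" spoken aloud.
onAction('select', (source) => console.log(`select via ${source}`));
trigger('select', 'keyboard');
trigger('select', 'speech');
```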

APIs/Systems supporting accessibility (must support)

Movement Model

Some movement models may be pleasing to a user and others may induce sickness or discomfort. The ability is needed to fine tune and control movement models including rate of motion, direction, angle of ascent/descent, motion tracking and view.

This may include the 'degrees of freedom' relating to movement such as 3 degrees (forward, back, up/down) or 6 degrees for more immersive experiences (+X, -X, +Y, -Y, +Z, -Z). The preferred model may be defined in a user preference.
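
Such a user preference might look something like the following sketch; the field names, units and defaults are assumptions for illustration, not a defined preference format.

```typescript
// Hypothetical user preference describing a movement model.

interface MovementModelPreference {
  degreesOfFreedom: 3 | 6;        // 3DoF, or full 6DoF (+X, -X, +Y, -Y, +Z, -Z)
  rateOfMotion: number;           // metres per second
  ascentDescentAngle: number;     // maximum comfortable angle, in degrees
  motionTracking: boolean;        // whether head/motion tracking is enabled
  viewRestriction: 'none' | 'vignette' | 'fixed-horizon'; // comfort option
}

// A comfort-first profile: limited freedom of movement, slow motion, vignetting.
const comfortFirst: MovementModelPreference = {
  degreesOfFreedom: 3,
  rateOfMotion: 1.2,
  ascentDescentAngle: 15,
  motionTracking: true,
  viewRestriction: 'vignette',
};
```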

Poor movement models can induce sickness, which often comes from a disconnect between what the user sees visually and what they sense in their body. Quick movements may be preferred, and camera and user movement must be kept in alignment. The most comfortable experiences in XR involve very little, if any, movement. Head/motion tracking should generally be enabled when 'on' in immersive XR, but it may need to be restricted depending on what needs focus. If a view is to be restricted, the user should be informed of this by auditory or visual cues. [3]

Use of sounds

Sonic maps - Aural feedback, input.

Haptics

Location

Orientation

Field of View

Display Field of View and Camera FOV.

XR Environment Module

In immersive environments it is imperative that controls be device independent, so that control may be achieved by keyboard, pointing device, speech, etc.

APIs/Systems supporting accessibility (must support)

Abstracting environments into alternatives

Environmental relationships between objects.

Distance; whether an object is moving or still.

Environmental Object States

On fire, exploding (physics modelling stuff).
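
As a rough illustration, environmental relationships and object states such as these could be surfaced in a machine-readable form that assistive technologies subscribe to. The shape below is hypothetical, not an existing API.

```typescript
// Hypothetical sketch of environmental state an AT bridge could subscribe to.

type EnvironmentalState = 'idle' | 'on-fire' | 'exploding' | 'moving' | 'still';

interface EnvironmentalObject {
  id: string;
  state: EnvironmentalState;
  distanceFromUser: number;   // metres
  moving: boolean;
}

type StateListener = (obj: EnvironmentalObject) => void;

const listeners: StateListener[] = [];

/** Register a listener, e.g. a screen-reader bridge that announces changes. */
function onStateChange(listener: StateListener): void {
  listeners.push(listener);
}

function updateState(obj: EnvironmentalObject): void {
  listeners.forEach((l) => l(obj));
}

onStateChange((obj) =>
  console.log(`${obj.id} is now ${obj.state}, ${obj.distanceFromUser} m away`));
updateState({ id: 'barrel-7', state: 'on-fire', distanceFromUser: 4.5, moving: false });
```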

Guides

Guides for the user in the environment. May also be linked to Status.

Content Maps

Content maps help to guide the user and can take the form of instructions, hints, sonic cues etc.
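
A content map entry might pair a place or task in the scene with the guidance offered there. The following is a hedged sketch of one possible shape; none of the names are taken from an existing format.

```typescript
// Hypothetical content map entry: guidance tied to a place or task in the scene.

interface ContentMapEntry {
  target: string;       // object or area the guidance refers to
  instruction?: string; // explicit instruction text
  hint?: string;        // lighter-touch hint
  sonicCue?: string;    // identifier of an aural cue to play
}

const entranceHelp: ContentMapEntry = {
  target: 'main-entrance',
  instruction: 'Use the door ahead to enter the gallery.',
  hint: 'The door handle glows when you are close enough.',
  sonicCue: 'door-chime',
};
```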

XR Interaction Module

Personal Interaction/Communication

Picking things up

Making selections / Deselecting

Undoing Actions

Gaming Interactions

Menus in XR

Different menu types (see the sketch after this list):

  • Linear Menus
  • Idle Call and Point
  • Box Menus
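
Whatever the visual menu style, the same items could be exposed to assistive technologies in a device-independent structure such as the sketch below. The type and property names are hypothetical, not drawn from any existing API.

```typescript
// Hypothetical sketch: an XR menu described independently of its visual style.

type MenuStyle = 'linear' | 'idle-call-and-point' | 'box';

interface XRMenuItem {
  label: string;       // accessible name announced for the item
  action: string;      // abstract action triggered on selection
  disabled?: boolean;
}

interface XRMenu {
  style: MenuStyle;    // visual presentation only; the semantics stay the same
  items: XRMenuItem[];
}

const pauseMenu: XRMenu = {
  style: 'box',
  items: [
    { label: 'Resume', action: 'resume' },
    { label: 'Settings', action: 'open-settings' },
    { label: 'Quit', action: 'quit' },
  ],
};
```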

XR Time Module

APIs for understanding current time, lapse of time, and time-based control when within an XR environment.
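
Such APIs might surface time along the lines of the sketch below; the interface and method names are assumptions for illustration only.

```typescript
// Hypothetical time API for an XR session.

interface XRTimeInfo {
  currentTime: Date;       // wall-clock time
  elapsedSeconds: number;  // time spent in the current session or scene
}

interface XRTimeControl {
  getTime(): XRTimeInfo;
  pauseTimers(): void;                 // pause any time-based challenge or limit
  extendLimit(seconds: number): void;  // give the user more time on a limit
}
```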

XR Status Module

XR status APIs for loading/scene changes, understanding current user status, changes of user status (online, offline, alive/dead), and changes of aspects of user status (connected/disconnected, on a call, muted, etc.).
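
A status API of this kind could be event-based, along the lines of the following sketch; the names and values are hypothetical.

```typescript
// Hypothetical status events an XR runtime could emit for assistive technologies.

type StatusKind = 'loading' | 'scene-change' | 'user-status' | 'call-status';

interface StatusEvent {
  kind: StatusKind;
  detail: string;                      // e.g. 'online', 'muted', 'loading 60%'
  severity: 'info' | 'alert' | 'warning';
}

type StatusListener = (event: StatusEvent) => void;

const statusListeners: StatusListener[] = [];

function onStatus(listener: StatusListener): void {
  statusListeners.push(listener);
}

function emitStatus(event: StatusEvent): void {
  statusListeners.forEach((l) => l(event));
}

// A screen-reader bridge could announce only the severities the user opted into.
onStatus((e) => {
  if (e.severity !== 'info') console.log(`${e.kind}: ${e.detail}`);
});
emitStatus({ kind: 'user-status', detail: 'muted', severity: 'alert' });
```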

Loading content

Any XR application state changes, such as loading content or scene changes, should be in the cone of focus for sighted users and have accessible alternatives for non-sighted users. Users may choose a preference for a 'type' of status message, alert or warning. The cone of focus should be customisable to suit user requirements: colour contrast, depth of field requirements, angle of view, etc. This could be particularly important for users with macular degeneration or tunnel vision.

Known Issues

WebXR Device API, WebGL and canvas

The WebXR Device API spec, Section 11 ("Layers"), introduces the mechanisms used for visual rendering of 3-dimensional XR content. Only one type of layer is presently defined, but it is made clear that additional layer types may be defined in future revisions of the specification. The presently defined layer type relies on WebGL for the rendering of the XR content and, it appears (Section 12), on an HTML canvas to host it.

Only Canvas 2D supports integration with ARIA (hit regions, etc.), and hence with assistive technologies that depend on accessibility APIs. WebGL does not offer such support. It therefore does not appear possible to associate accessibility API objects directly with components of the 3D scene as rendered within the canvas.
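
One mitigation that is sometimes discussed, rather than anything defined in the WebXR or WebGL specifications, is to maintain a parallel, visually hidden DOM structure that mirrors the interactive objects in the 3D scene, so that standard ARIA semantics and accessibility APIs can still be used. A minimal sketch, assuming a hypothetical SceneObject shape and mirrorToDOM helper:

```typescript
// Sketch of a commonly discussed workaround (not part of WebXR/WebGL):
// mirror interactive scene objects as hidden DOM elements carrying standard semantics.

interface SceneObject {
  id: string;
  name: string;         // accessible name to expose
  activate: () => void; // action performed in the 3D scene
}

function mirrorToDOM(objects: SceneObject[], canvas: HTMLCanvasElement): void {
  const mirror = document.createElement('div');
  // Visually hidden, but still exposed to assistive technologies.
  mirror.style.position = 'absolute';
  mirror.style.width = '1px';
  mirror.style.height = '1px';
  mirror.style.overflow = 'hidden';

  for (const obj of objects) {
    const button = document.createElement('button');
    button.textContent = obj.name;
    button.addEventListener('click', obj.activate); // AT activation drives the scene
    mirror.appendChild(button);
  }
  canvas.insertAdjacentElement('afterend', mirror);
}
```

This keeps keyboard and screen-reader interaction on the standard web stack, at the cost of keeping the mirror in sync with the 3D scene by hand.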

Performance and Web Assembly

WebAssembly is also promising as a runtime, as there are current proposals to improve memory management and speed up API calls, which would be very good for accessibility in this space. The current WASM proposal aims to move away from instantiating 'memory instance' variables; by removing this 'JS glue' it would be able to reference binding expressions from linear memory, which is more efficient and potentially better/faster for accessibility-related API calls. While not directly accessibility related, this proposed approach would make the WASM platform much faster and better able to handle AT requirements via accessibility APIs and platform APIs.

Tracking and Consistency

There are known issues with tracking in XR environments. Gestures and motion controllers are not always accurate. To support the accessibility needs of a range of users, there will need to be support for redundancy, fallback, and alternative input mechanisms if a user needs to switch between input methods.

What extra protocols and processes would be needed to ensure consistent tracking, fallback, and a smooth transition between input devices?
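
To illustrate the kind of fallback implied above, an input manager might monitor tracking confidence and switch to an alternative input method when a device becomes unreliable. The sketch below is a hypothetical pattern, not an existing protocol; the method names and confidence values are assumptions.

```typescript
// Hypothetical sketch: pick a fallback input method when tracking degrades.

type InputMethod = 'hand-tracking' | 'motion-controller' | 'gaze' | 'keyboard';

interface TrackedInput {
  method: InputMethod;
  confidence: number; // 0..1, as reported by an assumed tracking layer
}

const fallbackOrder: InputMethod[] = ['hand-tracking', 'motion-controller', 'gaze', 'keyboard'];

function chooseInput(available: TrackedInput[], minConfidence = 0.6): InputMethod {
  for (const method of fallbackOrder) {
    const input = available.find((i) => i.method === method);
    if (input && input.confidence >= minConfidence) return method;
  }
  return 'keyboard'; // always-available fallback
}

// Hand tracking is unreliable here, so the motion controller is chosen instead.
console.log(chooseInput([
  { method: 'hand-tracking', confidence: 0.3 },
  { method: 'motion-controller', confidence: 0.9 },
]));
```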

Ease of Detection

Are the start and end of motion obvious for the user?

Occlusion

Where something gets in the way during an interaction (Originally a dental term).

Bi-manual Ambiguity

When the way two hands interact creates confusion in an interaction, for example when one hand makes a selection while the other is performing an action and the two should be focussed on only one task. [3]

Acknowledgements