W3C held a two-day workshop on bringing Inclusive Design to Immersive Web standards in Seattle on November 5 and 6, 2019, hosted by PlutoVR, Maveron, We Make Reality, Virtual World Society and the Seattle Immersive Technology Association. The workshop brought together over 50 participants from a very diverse set of backgrounds: browser developers, XR (extended reality, the union of augmented and virtual reality) ecosystem providers, standardization experts, Web developers, accessibility experts, XR developers and producers, governmental agencies, and XR users with disabilities.
Workshop participants learned from existing approaches to making XR experiences (on and off the Web) accessible, before looking at what lessons could be drawn from that research and experimentation in the context of the Immersive Web architecture.
These lessons brought forward four aspects of accessible XR experiences:
- For visual interactions, the need to standardize semantics for scenes and models (e.g. for the glTF format) was identified as a low-hanging fruit, while browsers and 3D engines could provide built-in accommodations for low-vision impairments (e.g. high contrast, magnification)
- For motricity considerations, ensuring the WebXR input mechanism can be applied to accessible controllers and enable accessible re-mappings of default controllers would provide a first level of improvement, while making real-world movement detection accessible is likely to require more substantive standardization work (interaction semantics, users' capabilities and preferences, and development best practices)
- For auditory aspects, workshop participants reviewed existing research in how to integrate and position accessible accommodations in 3D environments (e.g. sign language interpretation, captions). Discussions on captions are expected to continue in the Immersive Captions Community Group.
- Participants also reviewed the landscape of Assistive Technologies to understand how current tools can provide support for XR and agreed that new tools and approaches (e.g. AI) would be needed to bring full XR support, including using Web and XR technologies themselves as the basis for building assistive technologies.
The relevant work in W3C spans at least 6 standardization Working Groups and 6 pre-standardization and incubation Community Groups, and also intersects at least 3 Khronos Working Groups - pointing toward the need for a strong coordination effort to ensure systematic and consistent progress for the Web platform. We propose to host this coordination in the Inclusive Design for the Immersive Web Community Group via a dedicated GitHub repository.
W3C holds workshops to discuss a specific space from various perspectives, identify needs that could warrant standardization efforts at W3C or elsewhere, and assess support and priorities among relevant communities. The Workshop on Inclusive Design for Immersive Web Standards was held in Seattle on November 5 and 6, 2019, hosted by PlutoVR, Maveron, We Make Reality, Virtual World Society and the Seattle Immersive Technology Association, with the generous sponsorship of Google, Twitch and Samsung Internet.
Building on the W3C Workshop on Web and Virtual Reality (VR) in 2016, and the W3C Workshop on VR Authoring in 2017, and on the 2019 XR Access Symposium, this W3C Workshop looked at applying Inclusive Design to the development of Virtual and Augmented Reality (XR) standards for the Web - where Inclusive Design is meant as “designing for the needs of people with permanent, temporary, situational, or variable disabilities”.
The specific goals of the workshop were to:
- Share existing inclusive XR solutions to help create new standards for inclusive XR on the web.
- Identify accessibility gaps in existing web XR technologies, and consider solutions for closing those gaps.
- Explore ways to use existing technologies and standards to create innovative solutions for inclusive XR on the web.
The workshop brought together over 50 participants from a very diverse set of backgrounds: browser developers, XR (extended reality, the union of augmented and virtual reality) ecosystem providers, standardization experts, Web developers, accessibility experts, XR developers and producers, governmental agencies, and XR users with disabilities.
Setting the context
Given the diversity of profiles of the workshop participants, the workshop started with setting up a shared understanding of the challenges and opportunities around making the Immersive Web accessible to all. An informal lexicon had been shared ahead of the event to give all participants a chance to get familiar with the jargon from the various relevant fields.
Leonie Watson (Tetralogical), the workshop chair, reminded everyone in her presentation of the definition and goals of inclusive design: “designing for the needs of people with permanent, temporary, situational, or changing disabilities - all of us really”.
Matt May (Adobe) summarized the work conducted over the past 20+ years to make the Web accessible, in particular through the development of the Web Content Accessibility Guidelines.
Kip Gilbert (Mozilla) shared an overview of the architecture of the Immersive Web, highlighting possible opportunities to incorporate accessibility into it.
Josh O'Connor (W3C) presented the work done by the Accessible Platform Architectures Working Group on XR User Needs and Requirements to help all participants understand how and why people with disabilities can benefit from an accessible immersive Web.
Accessibility hooks for graphical 3D Environments
To make the visual component of XR experiences, based on immersive 3D graphics, accessible, the need to associate stronger semantics with 3D environments and objects was identified as a clear line of improvement, one that would apply both to immersive and non-immersive contexts.
While the semantics provided by ARIA can help in the context of 2D interfaces projected in 3D, it emerged that ARIA was unlikely to be a scalable approach to annotate full 3D environments.
Several models to annotate 3D scenes and models were presented and discussed:
- Zohar Gan (Accessible Realities) presented his work on enabling XR Accessibility with Semantic XR Data Model
- Chris Joel (Google) shared the lessons learned from making the Web-based 3D viewer component (model-viewer) accessible, and related work in the Khronos Group
- Liv Erikson (Mozilla) shed light on the authoring aspects of using glTF in an accessible and adaptive way.
Interest was expressed in bringing native accessible support in 3D formats such as Khronos glTF.
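While no accessibility extension for glTF has been standardized yet, glTF 2.0 already allows application-specific data on each node through its `extras` property. As an illustration of the kind of annotation discussed at the workshop, here is a minimal sketch of an engine collecting such labels; the `accessibility` block and its `label` field are hypothetical, not part of any specification.

```javascript
// Hedged sketch: walk the nodes of a parsed glTF document and collect
// a hypothetical extras.accessibility.label annotation, which an engine
// could then surface to assistive technologies.
function collectAccessibleLabels(gltf) {
  const labels = [];
  for (const node of gltf.nodes || []) {
    const a11y = node.extras && node.extras.accessibility;
    if (a11y && a11y.label) {
      labels.push({ node: node.name, label: a11y.label });
    }
  }
  return labels;
}
```

A node annotated as `{ "name": "chair", "extras": { "accessibility": { "label": "A wooden chair" } } }` would then be exposable to a screen reader without any out-of-band metadata.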
The opportunity for browsers and 3D engines to provide built-in support to accommodate vision impairments (e.g. high-contrast rendering, magnification) was identified as an encouraging path to explore in Meredith Ringel Morris's presentation on making VR inclusive for people with low vision.
Making Motricity Accessible in XR
Considering the motricity aspects of XR experiences, which encompass both the use of input mechanisms (e.g. controllers) and the reliance on real-world movement detection, the workshop participants reviewed existing approaches to making XR experiences accessible, led by Roland Dubois's review of accessible XR controllers and John Akers's demo of adapting a WebVR experience to binary-input controllers.
They identified opportunities to build in support for accessible controllers and accessible re-mapping of default controllers.
They recognized that making real-world movement detection accessible is likely to require a combination of best practices (for both app and engine developers), standard semantics to represent the motor interaction possibilities in an experience, and a way to represent which movements a given user needs accommodations for.
Auditory accessibility in 3D environments
Wendy Dannels (Rochester Institute of Technology) gave an overview of the opportunities and challenges for sign-language users in an XR environment: XR provides for instance unique opportunities to make real-life settings more accessible by including a sign-language interpreter in the direct field of view of the user, but a number of technical and operational challenges remain to make that vision widely available.
Melina Möhlne (IRT) presented the findings from the ImAc project on how to incorporate captions in 360° media based on a series of user testing sessions.
Chris Wilson (Google) gave an overview of how the Web Audio API can already be used in the context of WebXR to create spatialized immersive audio experiences.
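As an illustration of that capability, here is a minimal sketch of how a WebXR application might place a sound in 3D space using the Web Audio API's PannerNode; the coordinates and distance settings are illustrative, and a real application would update the position each frame from the object's pose.

```javascript
// Sketch: route an audio source through a PannerNode so it is heard
// at a given position relative to the listener.
function attachSpatialSource(audioCtx, sourceNode, { x, y, z }) {
  const panner = audioCtx.createPanner();
  panner.panningModel = 'HRTF';      // head-related transfer function rendering
  panner.distanceModel = 'inverse';  // volume falls off with distance
  panner.positionX.value = x;
  panner.positionY.value = y;
  panner.positionZ.value = z;
  sourceNode.connect(panner);
  panner.connect(audioCtx.destination);
  return panner;
}
```

In a browser this would be called with a real `AudioContext` and a source such as an `AudioBufferSourceNode`; the returned panner can then be repositioned as the scene changes.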
Assistive Technologies for XR
Led by a panel with Markku Hakkinen (Educational Testing Service), Meredith Ringel Morris (Microsoft), Leonie Watson (Tetralogical) and Jason White (Educational Testing Service), furthered by a breakout on the topic, the workshop participants explored how assistive technologies themselves need to evolve to work in immersive environments.
There was wide recognition that current AT tools are likely to struggle with the additional requirements brought by XR, and that the technologies used to feed them (e.g. ARIA) are likely too limited to provide a fully accessible immersive experience.
The need to expose a wider set of annotations, built not just by content authors but also by other users and possibly by artificial intelligence, was identified as an area worth further investigation.
The value of personalization and semantics (e.g. describing possible interactions rather than specific movements) was discussed as a key enabler for either immersive assistive technologies or for immersive technologies being their own assistive tools.
Terminology for XR
In complement to the informal lexicon, one of the breakout sessions of the workshop focused on the terminology used in XR, to help tease out how these terms may influence (for better or for worse) how we approach solutions in this space. The relationships and differences between Virtual Reality, Augmented Reality, Mixed Reality, eXtended Reality, spatial computing, immersive environments and immersive experiences were discussed. The session participants also questioned where and how an experience starts to be immersive: with a simple digital overlay? With a big enough screen? With a headset? Or is it mostly about the state of mind that gets induced?
Current practices in making XR accessible
Thomas Logan and Roland Dubois shared some of the lessons they learned when making Web-based XR experiences accessible using a variety of techniques: using ARIA annotations in declarative XR frameworks (e.g. a-frame), using the Web Speech API to build an in-app screen reader and a voice-based command interpreter. Among the identified lessons:
- keeping track of semantics throughout the content production chain provides a big opportunity to make content accessible: for instance, 3D models composed from well-identified items (e.g. clothes) and animated using pre-defined animation sequences (e.g. dancing) could be more easily exposed with the proper semantics if that information is surfaced when rendered in an immersive environment;
- adapting to each user's need would be facilitated by a shared language to describe these needs and import them to configure a given immersive experience: for instance, the user's desire to have captions set by default (rather than having to manually select them) should be something immersive experiences could use without having to ask the user;
- the need for personalization extends not just to interaction preferences, but also to how a user may want to be represented in the experience - there again, the possibility of an easily importable avatar would help make XR experiences more inclusive.
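The ARIA-annotation technique mentioned above can be sketched as follows, assuming an A-Frame-style entity that exposes the standard DOM `setAttribute` interface; the role and label values are illustrative, not taken from the demos presented.

```javascript
// Hedged sketch: annotate a declarative 3D entity with ARIA attributes
// so that, where the underlying DOM remains reachable, a screen reader
// can announce and focus the object.
function exposeEntitySemantics(entity, { role, label }) {
  entity.setAttribute('role', role);
  entity.setAttribute('aria-label', label);
  entity.setAttribute('tabindex', '0'); // make the entity keyboard-focusable
}
```

In a real page this would be applied to an element obtained with `document.querySelector(...)`; it illustrates why this approach works for discrete interactive objects but scales poorly to annotating a full 3D environment.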
In a dedicated breakout session, a subset of the workshop participants then took an existing WebXR experience (a-blast) and reviewed how the guidance provided by the draft XAUR checkpoints helped identify limitations in an existing app, and when they needed clarifications or changes to make that analysis clearer.
Conclusions and Next Steps
In reviewing the outcomes of the discussions, the workshop participants identified several layers where W3C and other organizations could have impact, as summarized in the following table.
| Layer | Key actors | Possible levers |
|-------|------------|-----------------|
| Immersive Web Applications | App developers | Examples, Best Practices, Training |
| Immersive Web Engine/Framework/Libraries | Framework developers | Best Practices, Prototypes, Patches |
| Web browsers | Browser developers, Standards developers | Additional APIs and formats, standardized best practices |
| XR platform/SDK | Not in scope for the workshop | |
| Operating System | Not in scope for the workshop | |
Given the technological complexity of XR, a great variety of groups (in and outside of W3C) need to be involved in bringing inclusive design to the Immersive Web. The following groups were identified as key stakeholders:
- the W3C Immersive Web Working Group
- Responsible for standardizing the WebXR API and its associated modules (incl. gamepad), providing the core features to bring XR to the Web
- the Immersive Web Community Group (W3C)
- Pre-standardization group where new features for future Immersive Web standards are incubated.
- the W3C Audio Working Group
- Responsible for standardizing the Web Audio API, including its spatialized audio capabilities.
- the Immersive Captions Community Group (W3C)
- Pre-standardization group to explore how to bring captions to 360 and immersive content.
- the W3C Timed Text Working Group
- Responsible for standardizing caption formats for the Web (TTML, WebVTT).
- the Web Platform Incubator Community Group (WICG) (W3C)
- Pre-standardization group where a number of browser technologies are incubated; particularly relevant to the topic, the Accessible Object Model provides an opportunity to interact in more ways with assistive technologies, and the Web Speech API brings voice synthesis and recognition capabilities to Web browsers.
- the W3C ARIA Working Group
- Responsible for standardizing the ARIA suite of specifications
- the W3C Accessibility Guidelines Working Group
- Responsible for standardizing guidelines to make the Web accessible, including the Web Content Accessibility Guidelines.
- the W3C Accessible Platform Architectures Working Group
- Responsible for the ongoing work on XR User Needs and Requirements; also hosts the Research and Question Task Force and the Personalization Task Force where relevant work is happening.
- the Khronos glTF Working Group
- glTF is a key interchange format for 3D scenes and models, which might usefully host annotations to make its content accessible.
- the Khronos 3D Commerce Working Group
- Responsible for streamlining 3D content creation in a commerce context, with a particular focus on metadata, a key element of accessible experiences.
Beyond these key groups, the work in the GPU on the Web Community Group (W3C), Machine Learning for the Web Community Group (W3C) and the related Web & Machine Learning W3C Workshop scheduled in March 2020 are likely to bring new challenges and opportunities to the field.
This wide set of groups and relevant work points to the need for a strong coordination program to ensure these different efforts develop into a consistent platform for an accessible Immersive Web. We invite all parties interested in helping with that coordination to join the Inclusive Design for the Immersive Web Community Group, which will track and monitor progress using a dedicated GitHub repository.
In terms of short-term technical developments that can serve as enablers of this vision:
- ensuring that the work on WebXR input controllers allows re-mapping to take less common or less well-known controllers into account falls under the responsibility of the Immersive Web Working Group.
- the Immersive Captions Community Group is developing an understanding of how captions need to be presented and delivered in an immersive environment.
- W3C and Khronos could collaborate on bringing accessibility support to glTF (e.g. based on the existing XMP extension), bringing improved accessibility not only to immersive environments, but also to more contexts on and off the Web (e.g. for retail use cases as explored in the Khronos 3D Commerce Working Group).
- best practices in how to develop XR content to make it compatible with existing assistive technologies provide a useful bridge until better AT and stronger built-in accessible XR features become available - as paved by the work on XR User Needs and Requirements.
Longer term work will require research and exploration in the field of metadata to describe 3D environments and their affordances, and a broad rethinking of how Assistive Technologies need to interpret content and experiences on their user's behalf.