W3C

- DRAFT -

XR Accessibility

18 Sep 2019

Attendees

Present
JohnRochford, dom, boaz, CharlesHall, ada, CharlesL, cabanier, achraf, LocMDao, Roy
Regrets
Chair
Joshue108
Scribe
dom

Contents


<scribe> ScribeNick: dom

Joshue108: the goal is to grow an accessibility community in XR, i.e. Augmented/Virtual reality
... we want to bring together the accessibility community to identify gaps and challenges in making XR accessible to people with disabilities
... I've been working in accessibility for quite a long time
... a strong background in usability for people with disabilities
... in particular with user testing with people with disabilities
... we're looking at all sorts of things beyond XR
... Real-Time Communications
... this is funded by the EU WAI-Guide project
... Most of the work happening in this space on my side is in APA WG and the Research Questions Task Force
... that's where most of the XR work is happening
... Few topics for today:
... Overview of my existing work
... and would like to collect ideas on post-it notes
... or IRC
... want to brainstorm with people in the room
... I'm currently drafting user requirements for XR - particular user needs for XR
... "use cases" is a bit loaded of a term
... which varies on a group by group basis
... here wanted to focus on user needs
... What kind of technical architecture is needed to support accessibility?
... Accessibility is complicated in a 2D environment, never mind in 3D, surround sound, etc.
... It's hard to communicate these requirements and how to put them at an abstract level in an architecture
... we've been discussing this in APA

<Joshue108> https://www.w3.org/WAI/APA/wiki/Xaur_draft

Joshue108: The document we've been working on is at https://www.w3.org/WAI/APA/wiki/Xaur_draft
... we drafted user needs and requirements for XR with a modular approach
... as they relate to navigation, object semantics, interactions
... I figure that was an interesting approach
... there has been a lot of existing research in this space
... this modular approach would help an XR author understand what they need to do for their app
... we came up with a draft checklist of things to do - inspired by the Game Accessibility Guidelines
... a big crossover between gaming and XR

<aboxhall_> http://gameaccessibilityguidelines.com/

Joshue108: APA aims to publish XAUR as a non-normative WG Note
... we hope it will be useful as a jumping-off point to other specifications
... We also started a series of media checkpoints
... not yet reviewed by APA

<Joshue108> https://www.w3.org/WAI/APA/wiki/Media_in_XR

Joshue108: based on MAUR
... (user requirements that relate to audio/video in HTML)
... I've drafted a similar list for XR
... What are the current challenges? which will lead to our brainstorming sessions
... In the general sense - lack of declarative semantics for XR context
... we're looking at the accessibility object model primarily designed for Web components but may be useful for XR too (or not)
... there are issues with rendering environment & performance
... we've looked at something called "modal muting" - if a user isn't primarily visual, can we mute the visual modality and still keep the rest of the experience?
... likewise for audio-based modality?
... this reduces bandwidth, CPU
... where will these needs be addressed?
... how does this work relate to WCAG and the Silver TF?
... likewise wrt the Framework for Accessible Specification of Technologies (FAST)
... how do we bring people together?
... we need people coming to our accessibility work from many backgrounds
... including people with expertise on every slice of that topic
... we have the APA WG, the RQTF
... there is the Immersive Web WG
... the work on AOM & Web Components
... we need broader engagement from accessibility in this space
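
The "modal muting" idea mentioned above could be sketched roughly as follows. This is purely illustrative - every name here is invented, and no such API exists in WebXR today; it only shows how skipping a muted modality's rendering pass is where the bandwidth/CPU savings would come from.

```javascript
// Hypothetical sketch of "modal muting": all names are invented for
// illustration; this is not an existing WebXR or browser API.
class ModalityManager {
  constructor() {
    // All modalities start enabled.
    this.enabled = { visual: true, audio: true, haptic: true };
  }
  mute(modality) { this.enabled[modality] = false; }
  unmute(modality) { this.enabled[modality] = true; }

  // Render one frame, skipping the pipelines for muted modalities.
  // Skipping the visual pass is where the savings would come from.
  renderFrame(pipelines) {
    const ran = [];
    for (const [modality, run] of Object.entries(pipelines)) {
      if (this.enabled[modality]) {
        run();
        ran.push(modality);
      }
    }
    return ran;
  }
}

// Example: a screen-reader user mutes the visual modality but keeps
// the audio and haptic parts of the experience.
const mm = new ModalityManager();
mm.mute("visual");
const ran = mm.renderFrame({
  visual: () => { /* expensive WebGL pass would go here */ },
  audio: () => { /* spatial audio pass */ },
  haptic: () => { /* controller rumble pass */ },
});
console.log(ran); // ["audio", "haptic"]
```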

<Zakim> CharlesHall, you wanted to discuss extent of scope

CharlesHall: question about scope: there are very peculiar emerging technologies like Helio from Magic Leap
... how far do we go?

Joshue108: my guess is that the needs of users with disabilities don't change all that much across that range
... the goal of WCAG and accessibility work is to enable people with disabilities across all the spectrum

Janina: the answer is we don't know, but we want to figure it out
... we have a 9am Thursday session with Immersive Web WG
... I want to know what they plan to achieve in the short term and what their longer-term vision is
... which will help us scope the work
... will be in APA WG

<aboxhall_> Schedule actually says Kashi for APA on Friday

Janina: Thu at 11am will be on AOM and how it can help in this space
... and one more conversation on Friday at 11am in a discussion with the TAG - one of the topics is rendering for XR
... we need semantics for accessibility

DavidF: I'm in the APA & AG WGs
... we have on the one hand XR & on the other Immersive Web
... XR feels visual vs Immersive Web feels like more holistic experiences (audio, haptic, ...)
... I'm working on an immersive platform to integrate all these modalities to help with learning

Brandon: Editor of WebXR spec
... we have a fair chunk of the Immersive Web WG here today and we will be at the Thursday meeting
... happy to have questions today as well
... my goal is to listen and absorb the needs, hear what work has been done, in guidelines and more
... and how we can better engage in this for developing standards
... we understand the challenge of relying on WebGL from an accessibility perspective
... any ideas that can help with that challenge are of interest - I'm interested to hear about how AOM can help here
... One specific question related to "modal muting" - this is very interesting esp as XR tends to be very processing-intensive
... muting visual output can help reduce hardware requirements
... a previous session on aria-virtualcontent was not related to VR (contrary to what I expected)
... but one of the topics they mentioned was fingerprinting
... I'm curious how much of a concern this may be

Alice: in particular AT detection

Joshue108: this is a fairly controversial question indeed

<sushrajaMSFT> +q to talk about how native support for glTF in the browser could give 3D content a DOM for current accessibility APIs to function in XR and general 2D content

Joshue108: in terms of where we are now: with a regular page, the DOM-based rendering enables screen-readers interaction
... we're in a discovery phase - we can't tell you how to do it yet

<Zakim> joanie, you wanted to encourage reaching out to AT/API devs with specific needs and questions as they come up.

Joanie: I'm the co-chair of the ARIA WG
... with AT/API hat on
... there are only so many AT people that cover all platforms, so we're very busy, but please come to us with questions

cabanier: work for Magic Leap on the Helio browser
... there is WebXR & Immersive Web
... when the browser is part of your environment
... how can we improve accessibility given the new sensors (eye tracking, spatial tracking)
... want to make immersive browser a first class citizen for accessibility

Joshue108: this ties in to questions around Web of Things - how a person with disabilities could navigate a network of sensors, helped by e.g. AR

boaz: I was hoping to hear about some accessibility experience design patterns or ideas for immersive experiences

<aboxhall_> https://www.w3.org/WAI/APA/wiki/Xaur_draft#XR_User_Needs

boaz: is there a collection of those?

DavidF: absolutely

Joshue108: the XAUR draft contains our first cut at this
... there are questions of new laws of UX in XR

DavidF: for immersive education, there are many ways you can help people learn with the various modalities of immersive (haptics, stereoscopic sound, etc)

sushraja: on the Web, 3D suffers from accessibility problems
... we heard about embedding accessibility in glTF
... we've explored bringing glTF to HTML - a breakout coming up on this
... I think we need to look at the overall question on accessible 3D
... glTF is a format for 3D models

<Zakim> bajones, you wanted to add one more note about considering mobility challenges

brandon: One positive example worth bringing up - Google has a project called model-viewer, a Web component to expose glTF models to the Web
... it has some accessibility support to expose an alt tag for the model extracted from glTF
... as you explore the model, the aria-label updates to express the perspective of the viewer
... it's an interesting demo for exploration
... Separately, I often hear about the challenges of XR in visual context
... but I don't hear as much discussed about mobility challenges
... these challenges are unparalleled on the Web (WebGL has already been challenging)
... e.g. if you put an item on a high shelf, or requires a lot of movements in your XR experience
... which can be challenging both in the context of a disability or lack of available space
... I think this challenge needs more thought put into it
... how can we make these types of physical-world interactions accessible?
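
The model-viewer behaviour described above - updating the accessible label as the user orbits a 3D model - could be sketched as a pure function like the one below. This is only in the spirit of what model-viewer does; the real component computes its labels differently, and all names and buckets here are invented.

```javascript
// Hedged sketch: map a camera yaw angle to a viewpoint description that a
// host page could feed into aria-label as the user orbits a model.
// Illustrative only - not model-viewer's actual implementation.
function describeViewpoint(modelAlt, yawDegrees) {
  // Normalize yaw into [0, 360) and bucket it into compass-style views.
  const yaw = ((yawDegrees % 360) + 360) % 360;
  let view;
  if (yaw < 45 || yaw >= 315) view = "front";
  else if (yaw < 135) view = "right side";
  else if (yaw < 225) view = "back";
  else view = "left side";
  return `${modelAlt}, viewed from the ${view}`;
}

// As the camera moves, the label tracks the viewer's perspective:
const label = describeViewpoint("A vintage red bicycle", 180);
console.log(label); // "A vintage red bicycle, viewed from the back"
```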

Joshue108: this is typically something we would put in an accessibility guideline
... it could be emulated via an assistive technology tool to e.g. emulate crouching, running
... Clearly screen-reader accessibility is not the whole of accessibility
... based on my experience though, what I want to focus on is where we can get the greatest RoI

DavidF: that's a reason we need user testing - the assistive technologies used in the real world may apply in the virtual world as well

<Joshue108> +1 to that

<Zakim> klausw, you wanted to say opportunities for using spatial navigation

klaus: for WebXR specifically, graphics is one aspect
... but WebXR can also be used for pose and spatial tracking
... e.g. it could be used to help track where someone is pointing
... We've also looked at different layers of content: graphics rendering + DOM (where accessibility could get plugged in)
... it's a toolbox which, used correctly, we hope can help build accessible experiences, sometimes even ignoring the graphical aspects
... We lack accessibility expertise in the WG
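
The pointing use case klaus raises could work roughly like the sketch below: given a controller's position and forward direction (the kind of data an XR frame's pose exposes), work out which labelled object the user is pointing at so it could be announced non-visually. This is not the WebXR API itself - the function, the scene, and the tolerance are all invented for illustration.

```javascript
// Hypothetical sketch: resolve a pointing ray from a controller pose to a
// named target, within a small angular tolerance. Names are invented.
function findPointedTarget(origin, direction, targets, tolerance = 0.2) {
  for (const t of targets) {
    // Vector from the controller to the target.
    const to = t.position.map((v, i) => v - origin[i]);
    const dist = Math.hypot(...to);
    if (dist === 0) continue;
    // Cosine of the angle between the pointing ray and the target
    // (direction is assumed to be unit-length).
    const dot = to.reduce((s, v, i) => s + v * direction[i], 0);
    const cos = dot / dist;
    // Accept targets within the tolerance; picking the nearest of several
    // matches is left out for brevity.
    if (cos > 1 - tolerance) return t.name;
  }
  return null;
}

// Pointing straight ahead (+z) from the origin at a door 3m away:
const hit = findPointedTarget(
  [0, 0, 0], [0, 0, 1],
  [{ name: "door", position: [0.1, 0, 3] },
   { name: "lamp", position: [2, 1, 0] }]
);
console.log(hit); // "door"
```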

Charles: I want to bring up XR experience personalization to meet users' specific needs
... e.g. for a wheelchair user put items lower down
... people with brightness-sensitivity with adjusted lighting, etc

Joshue108: in XAUR, we've started a series of checkpoints - informative ideas to make your XR experience more accessible, grouped in categories
... based on the type of user needs

<aboxhall_> interesting to think about how media queries like prefers-dark-mode might apply

<Zakim> kip, you wanted to say that it may be interesting to explore ways that XR technology can increase accessibility of content not native to the medium

Kip: Mozilla, in Immersive Web team
... I find it interesting that the commoditization of VR hardware enables people to have access to devices & sensors that would otherwise be too expensive
... I would like to see how XR platforms and browsers could be used to increase the accessibility of the platform
... e.g. through the lower cost of eye tracking
... or with problems linked to shaking - low-pass filtering with VR sensors could help
... accelerometers & gyroscopes used to be expensive - mobile changed it all, as an analogy

aboxhall_: regarding use of AOM

<aboxhall_> https://github.com/WICG/aom/blob/gh-pages/explainer.md#virtual-accessibility-nodes

aboxhall_: the best fit would be for the virtual accessibility nodes

<bajones> https://github.com/WICG/aom/blob/gh-pages/explainer.md#virtual-accessibility-nodes

aboxhall_: on native platforms, you can build accessible trees independent of the rest of the app
... we want to enable that for the Web as well
... it does raise questions about AT detection
... which probably means prompting for user consent
... there are challenges: the vocabulary that can be expressed with AOM is based on ARIA, which isn't fit for XR
... it would be interesting to see how you would even consume these semantics
... a screen reader assumes something that can be linearized
... that is very challenging in the XR content
... you need a way to navigate XR nodes

Joshue108: wrt linearization (i.e. reading one thing at a time, and discovering its context)
... for an HTML table, a screen reader doesn't linearize so much as allow the user to interrogate the content and its context
... it sounds like this model could apply to XR as well

aboxhall_: +1
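
The interrogation model discussed above could be sketched like this: instead of linearizing an XR scene, represent it as a tree of virtual accessibility nodes that an AT could query node by node, the way a screen reader moves through a table. Plain objects stand in for AOM virtual nodes here, since those are only an explainer-stage proposal; the scene and function names are invented.

```javascript
// Hypothetical sketch: a tree of virtual accessibility nodes for an XR
// scene, queried non-linearly. Plain objects stand in for AOM virtual
// nodes, which have not shipped in any browser.
function makeNode(role, label, children = []) {
  const node = { role, label, children, parent: null };
  for (const c of children) c.parent = node;
  return node;
}

// Announce a node together with its context, rather than reading the
// whole scene in a fixed linear order.
function interrogate(node) {
  const path = [];
  for (let n = node.parent; n; n = n.parent) path.unshift(n.label);
  return `${node.label} (${node.role}), inside: ${path.join(" > ") || "(root)"}`;
}

// A toy scene: a room containing a table with a lamp on it.
const lamp = makeNode("object", "brass lamp");
const table = makeNode("surface", "wooden table", [lamp]);
const room = makeNode("region", "living room", [table]);

console.log(interrogate(lamp)); // "brass lamp (object), inside: living room > wooden table"
```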

Joshue108: happy to get follow up by email; do we need to set up a CG for some of these follow-up?

<CharlesHall> brainstorm / request: please add a design principle to the effect of “do not place people at risk”

Janina: I think it will be clearer at the end of the week after Thursday and Friday meeting

<Joshue108> joconnor@w3.org

Joshue108: the APA and the RQTF lists are good forums to track

Janina: I'm conflicted between 2 views: the better virtual reality gets, the closer to reality it gets, and thus the rules of reality apply
... and then, I wouldn't want the limitations of reality to keep applying in virtual worlds

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/09/18 06:29:17 $
