W3C

– DRAFT –
Accessible Platform Architectures Working Group Teleconference

19 May 2021

Attendees

Present
janina, jasonjgw, John_Paton, Joshue, Joshue108, scott_h, SteveNoble, SuzanneTaylor
Regrets
-
Chair
jasonjgw
Scribe
janina, joconnor

Meeting minutes

<Joshue108> <Judy works on logistics with captioning>

Introductions.

<Joshue108> JB: Over to Jason

<Joshue108> <Judy suggests people raise hand in Zoom>

<Joshue108> JS: Works on phone, I'm scribing also.

<Joshue108> JW: Judy's efforts on planning and logistics are appreciated

<Joshue108> <Jason gives intro and overview>

<Joshue108> <Identifies issues of joint interest - XR etc>

<Joshue108> JS: <Welcomes all and gives intro>

<Joshue108> Glad that there is an active community of deaf folks at W3C

<Joshue108> Great to have more direct interaction.

<Joshue108> APA is the group that oversees accessibility work at W3C

<Joshue108> JB: Mentions the current group is ~ 20.

<Joshue108> Wendy D: Hello, this is Wendy. I work at RIT - one of the deaf community research members

<Joshue108> JOC: <Gives intro>

<Joshue108> SH: <Gives Intro>

<Joshue108> <Won't list any more intros..>

Overview of current Research Questions Task Force activities.

<Joshue108> JW: We want here to provide an overview of the RQTF work that would be of interest to the Immersive Captions Community Group

<Joshue108> JW: I'd like to mention the RTC Accessibility User Requirements doc.

<Joshue108> It has received wide review and is soon to be published

<Joshue108> It addresses a range of issues of interest to the CG

<Joshue108> https://raw.githack.com/w3c/apa/74f3a865bc80548eb45add85f2c2561db23c0c60/raur/index.html

<Joshue108> There is also the XR A11y User Requirements doc

<Joshue108> https://raw.githack.com/w3c/apa/6d5bf713d9d7c65ecda104c213ad47b0e98cfbe1/xaur/index.html

<Joshue108> Another document is the Natural Language A11y User Requirements

<Joshue108> It covers UIs that use natural language interaction - speech and text input

<Joshue108> It is currently at an earlier stage of development - it would be of interest to the Immersive Captions CG for feedback

<Joshue108> The RQTF is currently also looking at media synchronisation issues around audio and video - captions and textual descriptions

<Joshue108> The impetus comes from a common interest across groups in achieving synchronisation of various forms of media

<Joshue108> Steve Noble and Scott H are working a lot on this.

<Joshue108> It may become a more formal publication soon.

<Joshue108> Those are the main activities

<Joshue108> JS: You did good!

<Joshue108> JS: Questions?

<Joshue108> WD: Can we post something in the chat?

<Joshue108> JB: Which link?

<Joshue108> WD: XAUR?

<Joshue108> JOC: Those links are all on that master page

<Joshue108> JS: Note that the RAUR URI will change as it is being published tomorrow

<Joshue108> JW: I will update wiki page tomorrow

Overview of Immersive Captions Community Group activities.

<Joshue108> JS: Can anyone provide a summary?

<Joshue108> JB: Q from Wendy..

<Joshue108> WD: When you use the word "user", it means a person that uses something. What about other kinds of authority?

<Joshue108> A host etc?

<Joshue108> WD: What if others are managing technology for their own use?

<Joshue108> JS: That is a good question Wendy

<Joshue108> A host is just a user playing a different role, or performing a function.

<Joshue108> We may describe requirements for participation, but we don't want to overlook any roles that may need to be fulfilled.

<Joshue108> We intend to cover all the roles a user may have.

<Joshue108> JW: The RQTF is also developing a doc on Remote meetings and hybrid meetings.

<Joshue108> RAUR addresses particular parts of the technology stack - this new one is a higher-level overview

<Joshue108> JW: Update on IC group activity.

<Joshue108> JB: We've not prearranged this.

<Joshue108> <Mentions maybe Wendy or Chris H could give overview>

<Joshue108> WD: I think Chris - could you walk through the platforms?

<Joshue108> CH: You mean our prototypes?

<Joshue108> CH: In our group we are interested in how users use captions in 360 - as well as XR at some stage

<Joshue108> We are looking at video using head-mounted displays

<Joshue108> Once we have worked out 360 we will look at Immersive Environments etc

<Joshue108> Our group is passionate about the topic

<Joshue108> We found existing advice to be lacking; we're not interested in default solutions, and there are more interesting things we can do.

<Joshue108> I've been building JavaScript implementations of our ideas

<Joshue108> Allows us to experiment

<Joshue108> <Shares link to prototype>

<Joshue108> https://www.chxr.org/immersive_subs2_5/?pn=Barbershop

<Joshue108> We are building tools with interesting parameters

<Joshue108> It is an HTML and JS implementation - no fancy UI, but useful as a POC.
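
For illustration only: a minimal JavaScript sketch of one way a caption panel could be kept in front of the viewer in a 360 scene. It assumes a Three.js-style scene graph (not confirmed to be what the prototype uses), and the function and variable names below are hypothetical, not taken from Chris's code.

    // Keep a caption mesh a fixed distance along the camera's view direction,
    // billboarded so it always faces the viewer. THREE, camera and captionMesh
    // are assumed to be set up elsewhere (hypothetical names).
    function updateCaption(captionMesh, camera, distance = 2) {
      const dir = new THREE.Vector3();
      camera.getWorldDirection(dir);            // direction the viewer is facing
      captionMesh.position
        .copy(camera.position)
        .addScaledVector(dir, distance);        // place the caption ahead of the viewer
      captionMesh.lookAt(camera.position);      // rotate it to face the viewer
    }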

<Joshue108> As a group we want to be proactive about building stuff

<Joshue108> WD: In that 360 view the sound varies..

<Joshue108> <Wendy gives overview of user preferences for placing captions etc>

<Joshue108> WD: Some people with hearing can identify things by sound..

<Joshue108> For deaf people, we have annotations that are built and added at the beginning, so the deaf user can identify characters as they move through the game.

<Joshue108> We also have directional cues

<Joshue108> WD: Frances?

<Joshue108> FB: Chris's prototype is amazing..

<Joshue108> Wendy has also worked on things that show speed variability for users

<Joshue108> So these options are really important.

<Joshue108> We have started to draft high level requirements etc, that we can make available.

<Joshue108> Like, understanding who is speaking..

<Joshue108> There are different perceptions - we need to make sure that when we fix one problem, we don't cause others.

<Joshue108> Lots of energy in the group.

<Joshue108> JW: That was very helpful

<Joshue108> WD: I've another comment - yes, it varies on a case-by-case basis per video..

<Joshue108> For example if people are talking fast, they may want to use a screen etc

<Joshue108> Options are good, transcript or captioning etc

<Joshue108> And they need to be able to make a choice.

<Joshue108> Also, curved captions are very helpful

<Joshue108> Like Frances was saying, speed is critical - slowing audio down can cause issues..

<Joshue108> Or looking up - if people switch positions, captions need to follow etc

<Joshue108> JB: We should talk about XAUR, and Bill Curtis would like to mention something

Issues of joint interest.

<Joshue108> JW: Josh, to discuss XAUR things

jo: Appreciates feedback from Wendy and others in the CG on our XAUR work. Some questions ...

jo: Should we be discussing signing avatars?

jo: Might there be some contexts where they're acceptable?

jo: Wondering about associated situations ...

wendy: Depends on the developer ...

wendy: The language itself is important, but avatars don't have facial expression and may have artificial language semantics

wendy: I've seen unacceptable and more acceptable avatars. Sometimes developers think they've got things right, but really don't

Howard: Right now they're not that great - 2D, too much missing data.

Howard: Recording a human signing would be better.

Howard: ASL, where one has words in a specific grammatical order.

Howard: ASL might change that based on (facial) expression, vocal emphasis, etc.

Howard: Turning that into an algorithm is not likely soon.

Howard: NTID and GU have put effort into this - but we're not yet there.

Howard: Notes we appreciate having human interpretation on this call.

jo: Asks about the potential -- and it's an opportunity to discuss

<Joshue108> WD: Maybe yeah, but right now - as far as signing avatars go, there is a group of people reviewing feedback etc

<Joshue108> Howard: The problem is objective measurements for VR/XR

<Joshue108> Based on current science, a live interpreter is what is required

<Joshue108> Currently WCAG mentions that automated captioning is not sufficient

<Joshue108> We could mention that automated avatars are not sufficient, in a similar vein.

<Joshue108> Some PIP (Picture in Picture) is tiny

<Joshue108> The WFD has guidelines for minimum size etc

<Joshue108> ~25%

<Joshue108> We suggest 33%

Actions to take and coordination of future work.

<Joshue108> JS: Can we co-ordinate further?

<Joshue108> There can be a follow-up meeting.

<Joshue108> Future work on Natural Language Interfaces etc would be of use also

<Joshue108> Suggestions on co-ordination?

<SuzanneTaylor> +1 to more meetings like this; this was so helpful

<Joshue108> JB: People interested in a follow-up?

<Joshue108> XR Access Symposium: bit.ly/xraccess21

<SuzanneTaylor> (Notice also that Chris' prototype allows you to save a set of settings to share as a URL for further discussion.)
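
For illustration only: a minimal sketch of how a set of caption settings could be encoded into a shareable URL, assuming hypothetical setting names and values rather than the prototype's actual parameters.

    // Serialise the current settings into query parameters so the resulting URL
    // can be shared; a page can read them back via new URLSearchParams(location.search).
    const settings = { captionSize: "1.2", captionDistance: "2.0" };  // hypothetical names/values
    const shareURL = `${location.origin}${location.pathname}?${new URLSearchParams(settings)}`;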

Minutes manually created (not a transcript), formatted by scribe.perl version 131 (Sat Apr 24 15:23:43 2021 UTC).

Diagnostics

Maybe present: Howard, jo, wendy