Silver Conformance Subgroup

22 Jun 2020


jeanne, bruce_bailey, michaelcrabb, CharlesHall


<jeanne> Meeting: Silver XR Subgroup

<jeanne> chair: MikeCrabb

<scribe> scribe:bruce_bailey

<michaelcrabb> User Needs: https://github.com/w3c/silver/projects/2

mcrabb: reminder that for the last two weeks we have been looking at user needs from Josh
... a top-down approach of looking at user needs
... which are relevant for XR captioning
... which are relevant to traditional captioning (and therefore, also XR captioning)
... and those which might not be so relevant to captioning.
... I would like to propose we do a bottom-up approach.
... before that, check in GitHub to see that we have assigned user need labels

<michaelcrabb> https://github.com/w3c/silver/issues/115

mcrabb: first one is support irc chat

Many blind users may not benefit directly from RTT type interfaces due to issues with synthesised speech output. Therefore traditional IRC type chat interfaces should be available.

1st activity is to go through GitHub issues and determine which FPC to check

2nd is to look at FPC separately, and decide if we are missing anything

MCrabb: did have a mini-site, but that is not behaving

CharlesHall: for those not familiar with GitHub features, is it possible to have a checklist of FPC decoupled from cards? If we add FPC, will they be populated to new cards?

MC: Yes, the FPC checkmark list is a separate GitHub issue. As the list is expanded, it will populate to other uses of the list.

CH: Great, I am working on other use in user needs.

Jeanne: Vestibular disorders should be added.

MC: Last of Us 2 has been getting lots of press for customization
... some buzz as most accessible game ever

Jeanne: TPG and Evan Hamilton have been working on this.

MC: First time accessible news has landed on front page of BBC site

MCrabb: want to spend a little time going through each card, checking each card for FPC

Jeanne: we want to add vestibular, break down cognitive, and address intersectional needs so that we, for example, address deaf-blind users

<michaelcrabb> https://github.com/w3c/silver/issues/114

MC: 2nd issue in list

Deaf and hard of hearing users need to be able to tell the difference between sent and received messages when communicating in XR environments with RTT.

MC: Is this just a hearing issue?
... Okay, leave as-is.

<michaelcrabb> https://github.com/w3c/silver/issues/130

Users with physical disabilities or cognitive and learning disabilities may find some interactions too fast to keep up with or maintain.

MC: currently not linked to any fpc
... Is this just vision?

CharlesHall: There are other vision areas, like limited depth perception
... WRT the previous card, if the method of distinguishing sent/received is color, then that affects low vision

MC: Will the speed of interaction have an effect on users with hearing impairment?

CH: Yes, if the feedback is an audio cue / earcon that is too fast; otherwise it is hard to imagine that speed is a limiting factor

MC: next one, limited reach and limited manipulation

Jeanne: yes

CH: yes
... some of those scenarios apply to XR generally, but not captioning per se

MCrabb: This is where things get tricky. Is captioning strictly from the computer / environment, or does the user need to interact with the captions?

Jeanne: can we park the idea for a later guideline

MC: Okay, let's focus on the impact for just viewing captions

Bruce: focus should be just on viewing captions; it will still get complicated

MCrabb: all the ones we have ticked will apply to captions in some way
... Do we need to talk about audio and visual descriptions?

Bruce: the objects in XR need to self-caption, so there is interaction

Jeanne: We need to give platform owners the choice and ability to make many of these decisions
... we will need flexibility since audio description is an art

MCrabb: Agreed, and there have been research projects that support an artistic approach to AD
... My impression is that we can take a quick run at this activity and see if some trends fall out
... for users without vision, captions would be text output.

Jeanne: Janika talked about overlap between Audio Description and captioning

Bruce: Agreed, quite a bit of overlap

MCrabb: Users without vision will need cues to orient themselves
... while captioning is not traditionally thought of as useful to people who cannot see, in XR blind users will probably need to make use of captions for orientation
... How about indication of sounds and who is speaking?
... In the U.S., captioning indicates the speaker by name (in text), then >> whenever the speaker changes.
... In the UK, captions are always in color but the name never shows up in the text

Bruce: Asked about off-screen speakers, and maybe missing the color association

Jeanne: Currently playing a game that uses color caption text, and the colors are very confusable.

MCrabb: Can anyone think of additional implications for no vision or limited vision?
... What about without perception of color for XR space?
... Obviously, using only color for speaker names needs to be a customizable option.
... What about Radar Map type widgets?

Charles: Exactly what you described, but I wanted to ask, are we including Augmented Reality?
... AR captioning will need so much consideration, like color, if I am trying to position the captions in the real world.

MCrabb: I recently tried to create an AR captioning experiment, and color was a real problem...
... ended up just going with as bright white as I could manage because nothing else worked

CH: ambient light would be a huge factor.

MC: explains why white-on-black is so popular with traditional captioning, since it provides the contrast

CharlesHall: Do we want to include AR in this exercise?

Consensus that AR is covered, so we may have to circle back where AR versus XR has significant differences

Bruce complains that "without perception of color" is not well formed because it is trying to address both blindness and low contrast

MCrabb: Okay, making a note on some of these cards; we are running out of time
... Usage without hearing will be the largest challenge
... have to convert other users' speech into text

CharlesHall: And sound effects!

MC: We need to know what the sound is and what direction it is coming from.

Jeanne: Especially sounds coming from outside the current view screen

MC: What is the difference for users with limited hearing?

Bruce gives the example of deafness in one ear: captioning of directional cues is critical, but maybe captioning of speech is not needed at all

Jeanne: i think we need some outside expertise on limited hearing
... there are unique needs

CH: There are some good opportunities now since the Hearing Aid companies have standardized on Bluetooth API, so HA can interface directly with XR platforms.

MCrabb: So how can we hook in with some of those communities? There is a meeting I will invite myself to.

Jeanne: I will ask Jannina from the APA group to see if they have leads.

MCrabb: Wrapping up, I will expand into some user need groups, and contact a BBC colleague and the immersive captioning meeting.

MC: What else for next week? I still feel like we are okay for timing so far.
... We want GL by end of August.

If we can finish with functional outcomes in the next two weeks


<jeanne> +1

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/06/22 13:57:15 $
