W3C

- DRAFT -

Silver XR Subgroup

06 Jul 2020

Attendees

Present
Joshue108, CharlesHall, michaelcrabb, jeanne, bruce_bailey
Regrets
ChrisP
Chair
MikeCrabb
Scribe
jeanne

Contents



Coordination with other WAI groups

<scribe> scribenick: jeanne

<michaelcrabb> User Needs: https://w3c.github.io/silver/subgroups/xr/captioning/xr-captioning-user-needs.html

JS: I have connected with the Immersive Captioning Community Group chair. He is passing on the User Needs analysis and our request for expertise with H/H.
... We should add Jason White to the invite list.

User Needs

<bruce_bailey> very nice

MC: I put together a draft which I sent around Silver, shared with expert collaborators, and Jeanne shared with Immersive Captioning.

<michaelcrabb> Usage with limited vision should be extended to talk about placement of subtitles within users' Field of View (FoV).

<michaelcrabb> For example, people with macular degeneration may wish for caption placement to be personalised within their FoV.

<michaelcrabb> https://github.com/w3c/silver/issues/139

<michaelcrabb> What plans exist to deal with situations where the boundaries between traditional captions and immersive captions merge due to traditional content being displayed in a 3D environment?

<michaelcrabb> Illustration is at: https://github.com/w3c/silver/issues/139

MC: This issue is likely to occur in virtual conferences where multiple virtual screens are in the 3D environment

CH: Magic Leap has an embedded browser that is addressing this. They handle it by letting people toggle which ones they want to hear.

JS: I know people who are used to handling multiple audio streams, at least in short bursts. I think we should allow them to choose what they want.
... HTML5 supports multiple streams -- originally intended to support multiple languages.
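
A minimal sketch of the mechanism JS mentions, assuming a player built on the standard HTMLMediaElement text-track API (TypeScript; the element id "#talk" and the language code are hypothetical): each <track> child of a <video> is exposed as a TextTrack, and a player can show only the stream the user toggles on.

    // Toggle which caption stream is shown on an HTML5 <video> that
    // carries one <track kind="captions"> child per language.
    // The id "#talk" and chosen language 'en' are illustrative only.
    const video = document.querySelector<HTMLVideoElement>('#talk');
    const chosen = 'en'; // the language the user toggled on

    if (video) {
      for (const track of Array.from(video.textTracks)) {
        // TextTrack.mode per the HTML spec: 'disabled' | 'hidden' | 'showing'
        track.mode = track.language === chosen ? 'showing' : 'disabled';
      }
    }

The same pattern would extend to multiple audio streams via audioTracks (AudioTrack.enabled), where browsers support it.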

MC: Jeanne and I have a meeting with a former Magic Leap accessibility expert

JS: We should include Intersectional Needs, particularly deaf-blind.
... There is some history around programmatically associated transcripts.
... This could be re-introduced because the need exists, even though it was rejected at a F2F meeting seven years ago.

BB: Was this around the time that Apple introduced a ?? (lost track)

<CharlesHall> not to derail, but for reference, in the Functional Needs work, I am now looking at intersectional needs that include race from these studies: https://docs.google.com/document/d/18eRZ6fy766oOyqKmRidvAx9daIfEG955XmJ9obqrV34/edit

JS: It was around that time. There is a difference between Deaf-blind and blind-deaf.
... The transcript usually addresses the need to speed read -- to not have to watch the video at all.
... Re-submitting that would be an APA battle.

[discussion of differences between deaf-blind and blind-deaf]

JS: Is there a digital equivalent of contact signing?

BB: There is an assistive technology that uses plates on the chest and back to broadcast the digital signal so that the person can feel it.

JS: The difference is largely the socialization and when the sense was lost. Someone who is profoundly deaf and signs moves to contact signing when they lose sight; someone who starts blind and loses hearing usually moves to braille.

<bruce_bailey> +1

MC: Let's start with writing Functional Outcomes knowing that we may have to add or change the user needs based on more public feedback. This will be an agile process.

JS: Note that both this group (through AGWG) and RQTF (through APA) are working in this space. I think we need a joint meeting with Immersive Web.
... Last year we had a very useful meeting with Immersive Web; they talked us through its architecture.

<CharlesHall> any news yet on TPAC in person?

JS: We could get their input and their feedback.
... We should develop an agenda -- I like to send out a joint meeting request together with "and we want to talk about X".
... TPAC will be virtual, and it may run more than one week -- you can't be as intensive in virtual meetings, plus there are the time zone issues.

<CharlesHall> yes meet.

<CharlesHall> +1

<CharlesHall> Immersive Web is also very active on the W3C Slack Community

JO: We should include Immersive Captioning Community Group in the invite

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/07/06 13:53:22 $
