Silver XR Subgroup

15 Jun 2020


jeanne, CharlesHall, Joshue108, bruce_bailey


MC: I made contact with Chris Hughes, who is in the Immersive Captioning Community Group

<michaelcrabb> https://github.com/w3c/silver/projects/2

<scribe> scribe: jeanne

User ID and Status

CH: That applies to live captioning.
... their identity would have to be inserted into the captioning

<michaelcrabb> https://github.com/w3c/silver/issues/112

JO: We were thinking about being able to query or ping an individual's status: Whether they were muted or talking, [and other examples the scribe didn't catch]

MC: If you are trying to live caption a group of people in an immersive environment, we would want to be able to do this.

JO: Where and when they are speaking

<michaelcrabb> https://github.com/w3c/silver/issues/113

Routing and Communication Channel Control

MC: quotes XAUR: Users of immersive augmented environments need to be able to route and configure their audio inputs and outputs to multiple devices. In that many users can choose to turn off images and use only text if they choose, or modify or zoom an interface, users should be able to route inputs and outputs to where they choose in formats that they need. It is arguable that being able to customise audio routing preferences is the same.

CH: This applies to the output of captions.

BB: What's the difference between this one and Second Screen?

MC: It seems to be a creation in time. There is a lot of overlap.

CH: The routing to second screens is specific to RTC.

JO: This isn't just RTC.

[discussion of the column assignments]

BB: ID should be in the Relevant to Traditional column

MC: Is routing specific to XR?

<Joshue108> +1 to Charles - second screen === any other output device

JSp: Yes, it helps users with cognitive disabilities by moving the distraction of captioning to a second screen. As long as we define braille output as a second screen, everything will need it.

MC: There is a note that we may need deeper access to the browser. It is difficult to move the captioning window around.

<michaelcrabb> https://github.com/w3c/silver/issues/114

Distinguishing sent and received messages in XR

MC: It's from RTC Accessibility User Requirements: Deaf and hard of hearing users need to be able to tell the difference between sent and received messages when communicating in XR environments with RTT.
... This seems to be a duplicate

BB: Traditional captioning isn't two way.

MC: I think it belongs in Relevant for XR captioning column.

<michaelcrabb> https://github.com/w3c/silver/issues/115

JSp: Add to the notes that the user needs the option to filter out their own comments. It's a customization option.

Support Internet relay chat (IRC) interfaces in XR

RAUR: Many blind users may not benefit directly from RTT type interfaces due to issues with synthesised speech output. Therefore traditional IRC type chat interfaces should be available.

JSp: Aren't many of these topics a subset of Support IRC?

MC: There are so many different user needs, and they are all related. It will be a challenge to pick a subset.
... even looking at captioning itself in XR, there is a lot to look at.

BB: I think we need an additional column between relevant to XR and Traditional.
... some take the same solution in XR and Traditional, but others are much more complicated.

MC: Added a new column: Relevant for Traditional but XR requires additional thought.

[MC starts moving issues into the new column]

<michaelcrabb> Looking through Relevant for Traditional Captioning and deciding what should be moved

<michaelcrabb> https://github.com/w3c/silver/issues/113

Creating a new column for sorting Issues that require more thought for XR

MC: If people are routing their output to external devices, they will want to do it for XR. I don't see that it is different, other than the hardware issues.

BB: I could see captioning in a large theatre, library, or broadcast being routed to a separate screen.

JO: We have to be careful that we understand the core use case.

... they seem to be different from Second Screen.

BB: We can reference the related issue in the comment.

MC: We will need additional thought in XR for both the routing issues.

BB: I think we should keep Second Screen simpler.

JSp: As an aside, I'm very excited that we are including a requirement that is specific to the needs of deaf/blind who I think have been underserved by the standards.

MC: Is the need to provide directionality enough to put it in the needs additional thought column?

BB: I think so. If it's not in your viewscreen, you need to know where the voice is coming from, especially if it is behind you.

MC: Color changes and the need to customize colors. We thought about it to distinguish between different speakers in XR. That seems like no additional thought is needed.
... the only difference seems to be the customization interface.

JSp: I think that's an interface issue, not a captioning issue.

CH: It goes beyond personalization to being context-sensitive and reflowing in response to depth-of-field changes. Zoom or shrink.

MC: That makes this very difficult.

CH: Contextual captioning may still be limited to preference settings instead of being fully context aware.

MC: In traditional captioning, even limited personalization is very difficult.

JO: Spatial awareness and direction, and how to map that.

MC: Some customization is straightforward, but contextual captioning becomes a lot more difficult.

JO: That may be a good approach: Have baseline requirements and then XR specific requirements

MC: Subtitling customization is moved to the "requires additional thought for XR" column.
... Routing to second screen?

BB: It's only output, so I think it stays in Second Screen.

Functional Outcomes: https://docs.google.com/document/d/1gfYAiV2Z-FA_kEHYlLV32J8ClNEGPxRgSIohu3gUHEA/edit#heading=h.rbn6yq3f7i4b

<Zakim> bruce_bailey, you wanted to ask about issue labels?

MC: I think it makes sense to make a document that puts it all together. Then we can look at the labels that go to the functional needs.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/06/15 14:00:06 $
