W3C

- DRAFT -

Accessible Platform Architectures Working Group Teleconference

22 Jul 2020

Attendees

Present
jasonjgw, janina, nicolocarp_, SteveNoble, Joshue, nicolocarp, scott_h, Judy, Joshue108
Regrets
Chair
jasonjgw
Scribe
Joshue108, janina

Contents


<jasonjgw> agenda+ Miscellaneous topics.

RTC Accessibility User requirements (RAUR) - open issues.

<Joshue108> scribe: Joshue108

JW: We want to get some actions from the Machine Learning and Maps workshops

There are things relating to bias etc - APA stance?

Nicolo has done some work.

Identifying issues, etc., and he has made some suggestions on how to present the idea.

Updates?

JS: Correct.

JB: There is co-ordination going on with Dom etc

https://www.w3.org/2020/06/machine-learning-workshop/presentations.html

JS: Josh and I should touch base on that

Want to hear from Nicolo.

Josh: I'm on holiday very soon

JS: We need to get it done soon

Nicolo has done work on AR etc

Am interested in understanding what is new.

Need to turn these into actions - so there is a presence at the workshop

NC: Firstly, me too - I will be away next week.

So I need to do this before next Monday

I have reviewed the Maps for the Web abstract

I found some points of overlap

https://lists.w3.org/Archives/Public/public-rqtf/2020Jul/0022.html

The idea is to help those who have problems with 2D maps, and we need an additional way to see AR geo data

using old web technologies

This can be the first topic.

We can also talk about communications through non-visual technology

I've worked on creating place labels that can be run via speech

In my opinion, we can use this tech on 2D maps

JOC: Sounds great!

NC: I think this could be of interest and could be presented in detail.

Thoughts?

JS: Can you give examples of labels?

NC: Placenames - we can say title, name, distance in meters etc.

Useful in geolocation, AR, and 2D maps.

They are trying to understand how to create a native HTML element for maps.

We can think of a design to navigate 2D and think of metadata

JS: So if I understand, the reason for a native element is that currently we are using JavaScript and stuffing it with co-ordinates -

not native, but it does work in the browser. Is that correct?

NC: Yes - I think this is one of the points they want to make.

JS: Makes a lot of sense - there is an a11y case here.

Users will have different filters.

Blind users need info x, wheelchair users need info z, etc.

We need a standard way to get to these values based on user needs.

Make sense?

NC: Yes

JS: We don't need to figure out the details - just the use cases and the requirements

We may need an attribute with key values that can be filtered and others can be ignored.
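The filtering idea above can be sketched in code. This is purely illustrative - no such attribute or data model exists yet, and the feature fields and profile names are invented for the example.

```python
# Illustrative sketch (not part of any spec): filtering map feature
# metadata against a user-need profile, as discussed above.
# All field names here are invented for the example.

def filter_features(features, needed_keys):
    """Return each feature reduced to the metadata keys a user needs."""
    result = []
    for feature in features:
        kept = {k: v for k, v in feature.items() if k in needed_keys}
        if kept:
            result.append(kept)
    return result

features = [
    {"name": "Main Library", "distance_m": 120,
     "audio_description": "Library, 120 meters ahead",
     "step_free_access": True},
    {"name": "Old Bridge", "distance_m": 340,
     "audio_description": "Bridge, 340 meters ahead",
     "step_free_access": False},
]

# A blind user might keep speech-friendly labels ...
blind_profile = {"name", "distance_m", "audio_description"}
# ... while a wheelchair user keeps physical access info.
wheelchair_profile = {"name", "step_free_access"}

print(filter_features(features, blind_profile))
print(filter_features(features, wheelchair_profile))
```

The point is only the shape of the mechanism: one set of annotations on the map data, with each user agent selecting the key-value pairs relevant to that user's needs and ignoring the rest.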

JW: So if other parties were wishing to use this, then that would be useful.

So actions?

Schedule?

Here are the dates https://www.w3.org/2020/maps/call-for-participation#important-dates

JOC: We should get this proposal in this week if possible.

JS: Nicolo can you draft something?

NC: A draft talk - on ARIA attributes and similar approaches for 2D maps.

JS: Yes, the needs for annotations to support a11y

NC: Ok - I will try to do that before this weekend.

JW: Timecheck..
... We won't get to all of them

Here is the list

https://github.com/w3c/apa/issues?q=is%3Aissue+is%3Aopen+label%3ARAUR

https://github.com/w3c/apa/issues/102

Capture of on record/off record captions in RTC #102

JS: This relates to what we need to do if we need a conversation off the record.

JB: Good one!
... I've more info - the non-adjustable setting means that anything said in chat gets packed in with everything else.

Private and public get muxed up.

JOC: I believe that is the case with Zoom.

So there are issues with private captioning.

JS: I may have a solution, albeit platform-specific...

We need a pause function in the recording of the conference.

So recording of brief off-camera conversations and notes could be stopped.

There could still be screenshots or recordings made, but it is helpful for groups to be able to go off the record.

SH: We did talk about this on the list - audio doesn't have to be treated differently.

Pausing idea is good.

JOC: +1 to Pausing.

JW: Looks like an application-level requirement - including allowing captions to be paused.
... Josh, do you think this is a good use case?

Josh: Yes

JB: I'm nervous about this, read it differently..

Need to be careful about use case..

JS: <discusses global pause and global resume>

JW: Judy is right - there is an issue around accessible support for side conversations.

<Mozilla hubs example - of proximity based side conversations>

JS: That is different.

<Zakim> Joshue, you wanted to ask is this Zoom specific?

JS: It is not just Zoom doing this.

JB: <mentions XR Access>

https://github.com/w3c/apa/issues/97

In a video conference session in which sign language interpretation is provided, both the signer and the speaker should be visible #97

JS: Seems self-explanatory
... I wouldn't care whether they are visible or not, but those who need them need them associated with who is talking

We need to be careful how we do this.

JW: Josh is that clear?

Josh: Yes this is important and could be available via a user preference

SH: It may be the case that, as an a11y requirement, it must be on screen for that meeting.
... So is it a preference for the view on the screen or an a11y window that is always there?

JS: I wouldn't want to accept a limited stream world here.

We can have multiple options to configure video displays. For sighted users we have gallery view etc

SH: Varies with the product.

JS: Don't want to standardise based on restricted implementations

We also have second screen etc - we can do this with video.

SH: But if there are many users in a call, how can we specify the one we want visible?

JS: Good point!

JB: Windows anchoring.

Immersive captions group are looking at this - so you may need to anchor various speakers and various captions etc

JS: Needs to be cross-platform
... <discusses platforms with presence of sign language that naturally adjust>
... We did this for HTML5 labels etc

We can identify language and media type

We could do the same thing

JOC: Yes - we have a track element that could take a new attribute that support this.
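As a sketch of the association being discussed - selecting a sign-language stream tied to whoever is currently speaking - the following is illustrative only. The metadata fields (`kind`, `for_participant`, `language`) are invented for the example and do not correspond to any existing attribute of the track element.

```python
# Hypothetical sketch of the association discussed above: given track
# metadata (media kind, language, and the participant it interprets for),
# pick the sign-language track for whoever is currently speaking.
# The field names are invented; no existing attribute is implied.

def sign_track_for_speaker(tracks, active_speaker):
    """Return the sign-language track interpreting for the active speaker."""
    for track in tracks:
        if (track.get("kind") == "sign-language"
                and track.get("for_participant") == active_speaker):
            return track
    return None

tracks = [
    {"kind": "camera", "for_participant": "alice"},
    {"kind": "sign-language", "language": "ase", "for_participant": "alice"},
    {"kind": "sign-language", "language": "ase", "for_participant": "bob"},
]

print(sign_track_for_speaker(tracks, "bob"))
```

The design point is that user agents need machine-readable metadata linking an interpretation stream to a speaker, so the display can follow the conversation rather than relying on a fixed window layout.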

JS: There are ISO specs that are related.

JOC: On a high level we just need to articulate the user need.

JW: Josh, have you enough background?

All really promising.

https://github.com/w3c/apa/issues/46

Audio and Video Quality #46

JS: G722 is hot stuff but Opus is hotter.

JOC: I'm impressed with the quality of audio these days.

Lossless compression algorithms, etc.

JS: We can get past 48 => 196

https://www.w3.org/TR/raur/#quality-of-service-scenarios

JS: We can only encourage UAs and providers to do the best.

JOC: I'll close

https://github.com/w3c/apa/issues/45

Clarification needed on Req 13a, 13b, 14a #45

<janina> scribe: janina

<Joshue108> https://www.w3.org/TR/raur/#assistance-for-users-with-cognitive-disabilities

jo: Looking at COGA issues ...
... Some of this may relate to earlier drafts ...
... May have done this? ... Asks jgw whether COGA was the context?

jgw: Not sure, will need to review

jo: Notion of support channels -- a mechanism for a third party support vector
... User Need 13, and reqs 13a+b, also 14a; ...
... Believe we've got these?

<Joshue108> JOC: Could we add the personalisation spec?

<Joshue108> JS: Should be in CR by Sept.

<Joshue108> JOC: URI.

Personalization Module 1:

https://raw.githack.com/w3c/personalization-semantics/JF-Edits/content/index.html

<Joshue108> JS: There is related Media Query work

<Joshue108> So I suggest the Module 1 personalisation reference, and also that we reference CSS Media Queries 5

<Joshue108> https://github.com/w3c/apa/issues/44

<Joshue108> Section 2.8: REQ 10a and 15a: Seem UI-related, and not WebRTC-related. #44

Media Queries V5 related email from CSS here:

https://lists.w3.org/Archives/Public/public-review-announce/2020Jul/0006.html

<Joshue108> https://www.w3.org/TR/raur/#internet-relay-chat-irc-style-interfaces-required-by-blind-users

<Joshue108> JS: Should that be generalised?

<Joshue108> Can we build one for a rolling right to all? Don't turn off the TTY?

<Joshue108> We have the distinction that RTT doesn't work.

<Joshue108> Also important to consider Braille translation.

<Joshue108> JS: Lets come back

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/07/22 14:01:05 $
