W3C

– DRAFT –
Accessible Platform Architectures Working Group Teleconference

17 March 2021

Attendees

Present
janina, jasonjgw, joconnor, John_Paton, Judy, scott-H, SteveNoble
Regrets
-
Chair
jasonjgw
Scribe
joconnor

Meeting minutes

RAUR and XAUR: any updates.

<SteveNoble> janina: What about the 2 questions that came in of late?

..from WCAG 3 regarding RAUR

<SteveNoble> jasonjgw: Use case of caption for braille display use was another question

Should captions be full transcripts is a part of the question

<SteveNoble> janina: Problem of slowing down the meeting conversation rate for the sake of the captions

<SteveNoble> joconnor: Should we delay RAUR while we deal with this?

<SteveNoble> janina: Probably so

RAUR to be delayed, pending Content Usable

https://lists.w3.org/Archives/Public/public-rqtf/2021Mar/0033.html

<janina> https://lists.w3.org/Archives/Public/public-rqtf/2021Mar/0033.html

<SteveNoble> janina: It is not workable to limit communications due to limitations in reading rates

<SteveNoble> janina: Two weeks seems safe

RAUR and XAUR to be delayed until Content Usable is out

<SteveNoble> joconnor: To blog or not to blog?

<SteveNoble> janina: Blogging is not needed

Media synchronization research.

<jasonjgw> https://www.w3.org/WAI/APA/task-forces/research-questions/wiki/Media_Synchronization_Requirements

JW: We are hoping to develop this in the right direction

SN: I've been very busy with other things last week

I hope to get back to this this week.

The main areas are sign language interpretation and synchronisation.

Am looking at peer reviewed docs etc at the moment

The most interesting stuff is around signing avatars - cutting edge work on automatic speech recognition

We don't have the body of recorded speech that has been converted into sign language

We have speech/voice activated stuff - for text but there is a gap for speech => sign transformation

There are issues about interpretation - around context and avatar synchronisation

Other comments?

JS: I found out something interesting relating to timing - a friend who is an SL interpreter said that a natively deaf person who re-signs tends to be more understandable to the deaf community than a hearing person who signs

So native signing vs learned

That has implications

SN: Yeah, that's interesting - I also work in the area of audio description and gaming a11y

This point came up recently

<describes signer => sign => Audio description of signer, and the interplay between them>

This was with recorded video, will be challenging in live scenarios

Will look at literature on this.

JW: The question is, what are preferences under certain conditions?

JP: Could be a cultural thing

There are levels of quality of signing.

Big D, first language signing vs learned etc

Some users may be deaf and read English very well.

There are variances in what is culturally acceptable

JS: There are variations of range, depending on real time vs pre-recorded

JP: The main issue will be in live meeting environments

There is a latency issue

JP: It's not direct transcription but interpretation

SN: The signer has to hear the whole sentence before signing it

JW: So this is the issue for investigation?

SN: I can spend more time on this.

We can follow on next week.

JS: If we can pull this together we have two groups waiting - Silver and Timed Text want input from us

This is useful to promote better synchronisation of media.

JW: Thank you

Accessibility of natural language interfaces.

<SteveNoble> jasonjgw: Thanks to Josh for spending some time on this and making some improvements

https://raw.githack.com/w3c/apa/naur/naur/index.html

https://github.com/w3c/apa/issues?q=is%3Aissue+is%3Aopen+label%3Anaur

<SteveNoble> jasonjgw: Thanks to John for his comments as well

<SteveNoble> joconnor: Jason's initial framing was very useful

<SteveNoble> joconnor: Added a section on services and agents

<SteveNoble> joconnor: In a good place now for everyone to look at this and make sure we are good with the framework

<SteveNoble> jasonjgw: Can spend more time on the framework if that is helpful

<SteveNoble> jasonjgw: There are cognitive issues we may need to cover and the accessibility of the underlying services

<SteveNoble> jasonjgw: Not all the requirements we have identified are in the document

<SteveNoble> janina: Google added functionality to analyze individuals' sleep patterns

<SteveNoble> janina: Privacy and security is an area we commonly look at

<SteveNoble> jasonjgw: Could add more to the framework - I have the time

<SteveNoble> joconnor: Sounds good

<SteveNoble> joconnor: We need a tight review of the user needs

<SteveNoble> joconnor: We need a better understanding of the modality and the services

<SteveNoble> jasonjgw: Will look at the literature on natural language

<SteveNoble> jasonjgw: Will dedicate some writing time to formalize the writing

<SteveNoble> jasonjgw: Will bring back for discussion

Miscellaneous updates and topics relevant to Task Force work.

<janina> https://www.w3.org/TR/coga-usable/

<SteveNoble> janina: Need to look at Content usable

<SteveNoble> janina: Do a quick review next week

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).

Diagnostics

Succeeded: s/scetion/section

No scribenick or scribe found. Guessed: joconnor

Maybe present: JP, JS, JW, SN