W3C

- DRAFT -

Accessible Platform Architectures Working Group Teleconference

02 Dec 2020

Attendees

Present
jasonjgw, SteveNoble, JPaton, janina
Regrets
Chair
jasonjgw
Scribe
SteveNoble

Contents


RTC Accessibility User requirements (RAUR) open issues.

<scribe> scribe: SteveNoble

Some open issues on RAUR which have recently come in

Josh: we understand that these new comments will push back our work some, but will help improve the document

Jason: should we solicit additional comments from particular groups?

Janina: may want to tap people from Zoom, Google and others
... we may owe ourselves a wider review if we believe some of our points are groundbreaking
... we have not yet missed any opportunities for APA specs coming up

Jason: the need for metadata on information streams coming in and going out
... currently, it does not appear these multiple pieces of the stream are getting separated in such ways
... suggest that we provisionally propose that we complete the open issues we are aware of and then republish and deal with additional comments

Janina: we should also include a check-in with the WebRTC folks to make sure we are in sync with them

Jason: when should we engage WebRTC?

Janina: before we have an editor's draft

Jason: so, basically when we have an editor's draft almost ready?

Josh: will we need a new editor's working draft?

Janina: common to version a working draft and continue the work
... rather than import into master

Jason: next matter to address open issues...how to proceed with this?

Josh: have added some reply comments on security based on feedback
... suggest we wait until next week for further discussion, plus address comments on the mailing list before then

Janina: question for John on comment about audio description

John: focus was prescripted description...of course for deaf-blind users we would want that through the braille display

<janina> http://www.w3.org/TR/media-accessibility-reqs/

Janina: examples of images which would take much longer to describe than the video has natively

John: example of on-demand audio description as a proof of concept, but not commercially adopted

Media synchronization.

<jasonjgw> Steve has continued to identify relevant publications in the literature. Audio/video synchronization is well covered in the research identified so far.

<jasonjgw> Steve: caption synchronization is constrained by the capabilities of the relevant technologies. He notes long-term opportunities for artificial intelligence to improve the efficiency of caption authoring, including synchronization.

<jasonjgw> He notes the contribution of cloud computing to advancing caption authoring capability, and the low latency of automatic caption generation.

<jasonjgw> This makes it similar to human abilities with regard to synchronization.

<jasonjgw> The word error rate is also improving, thanks to AI (e.g., Google's APIs).

<jasonjgw> The speech recognition error rate can be within the range of error rates of typical human captioning.

<jasonjgw> Steve notes the need for good synchronization especially in remote meetings.

<jasonjgw> Informal observations reinforce these findings, differing between providers.

<jasonjgw> Responding to a question by John, Steve notes that the chorded keyboards used by trained professionals generate the lower word error rates.

<jasonjgw> He also notes the need to internationalize these observations and the lack of multilingual research in this area.

<jasonjgw> He also notes the respeaking approach in which the captionist narrates the text to a speech recognition system trained appropriately to recognize this individual's speech.

<jasonjgw> Steve notes the importance of live environments as raising synchronization issues.

<Judy> https://www.captel.com/knowledgebase/how-voice-recognition-errors-affect-captions/

<jasonjgw> For purposes of remote meetings, error rate and synchronization are both important. He acknowledges the disadvantage at which a user is placed if substantial delays occur in a live interaction.

<jasonjgw> John notes the inaccuracies that can be compounded by translation of (inaccurate) captions into contracted braille.

<jasonjgw> Janina notes concerns about lack of synchronization (of captions and of audio/video tracks) in prerecorded video situations.

<jasonjgw> This is among the motivations for taking relevant research to the Timed Text Working Group per their request/interest.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/12/02 14:59:59 $
