W3C

- DRAFT -

Accessible Platform Architectures Working Group Teleconference

07 Oct 2020

Attendees

Present
jasonjgw, scott_h, nicolocarp, SteveNoble, janina, Joshue, Joshue108
Regrets
Chair
jasonjgw
Scribe
Joshue108

Contents


TPAC planning and cross-group meetings

<scribe> scribe: Joshue108

JW: We have a schedule

On the wiki

It covers a lot

JS: There will be additions

Useful to front-load what we need

I'd like to use ID refs for all meetings

There are specific Zoom call details to be added

JW: Password protected?

JS: Not sure yet - please do register, everyone

There is a bug

SH: It's a long list

<discussion on TPAC list>

SH: It will be my first TPAC

JW: Mark H will attend the AC meeting

JS: Our TPAC videos are up.

JOC: Great stuff

JS: We are queued up

We have some interesting material on what we have learned from COVID as it relates to RTC.

Pinning of important people in Zoom meetings, for example.

JW: Anything else you want to flag here?

JS: No.

JW: Good sign - everything under control :-)

SH: Am looking forward to it.

Media Synchronization

JS: Steve reporting

JW: Yes, he has made a substantial contribution

SN: I've been able to work on this, capturing references from journals

I've been researching them and have put the notes into narrative text in the wiki

SN: What is the plan? To plug this info into other resources?

I looked at other docs that Jason worked on and used them as a template.

I've set out the arguments about what we know around human speech recognition, etc.

A lot of these resources come from tests undertaken with users in normal environments.

There are a lot of aspects at play that aid cognition.

<if background noise is ~30 dB louder than the speech being focused on, speech recognition drops to around 20%>

Accuracy rates drop when the speaker can't be seen.

If the speaker can be seen, recognition is much higher, around 90%

Looking at existing standards - they exist for digital TV broadcasting

There are thresholds used

Signal quality needs to be high, and audio can be up to 45 ms late

So a small amount of latency is acceptable

But if the audio is 15 ms early, that impacts cognition negatively

There is a relationship between distance and acceptable latency that determines whether or not understanding is impacted
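
<illustrative sketch - not from the meeting - encoding the thresholds cited above (audio up to 45 ms late is acceptable, 15 ms early is not) and one possible reading of the distance/latency relationship via the speed of sound; the names and any numbers beyond those cited are assumptions>

    # A minimal sketch of the audio/video sync window described above.
    # Positive offsets mean audio lags video; negative means audio leads.

    SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at ~20 C

    AUDIO_LEAD_LIMIT_MS = -15.0  # audio earlier than this harms cognition
    AUDIO_LAG_LIMIT_MS = 45.0    # audio later than this harms cognition

    def offset_is_acceptable(av_offset_ms: float) -> bool:
        """True if the audio/video offset falls inside the cited window."""
        return AUDIO_LEAD_LIMIT_MS <= av_offset_ms <= AUDIO_LAG_LIMIT_MS

    def natural_lag_ms(distance_m: float) -> float:
        """Acoustic delay a listener experiences at a given distance.

        At ~343 m/s, each metre adds roughly 2.9 ms of audio lag, so the
        45 ms threshold corresponds to a speaker about 15 m away - a lag
        people already experience naturally.
        """
        return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

    print(offset_is_acceptable(30.0))    # True: 30 ms lag is tolerable
    print(offset_is_acceptable(-20.0))   # False: 20 ms lead harms cognition
    print(round(natural_lag_ms(15.0)))   # ~44 ms, near the lag threshold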

Sight is used more to identify and verify the speech you are hearing.

I'm going to dig into more studies with people who regularly lip read or are hard of hearing

SN: Found a ref from the gaming world - that one of the use cases driving innovation is the ability to sync animated characters with the voice

The more realistic it is, the less distracting it is.

These can help make immersive experiences more engaging and accessible

SN: I need some guidance on what we will do with this.

JS: This is awesome Steve

We will work that out next week.

This is great but I'm concerned about what happens for users with hearing and vision deficiencies.

SN: Yes, there are resources that relate to that

Next step is to make sure we focus on the disability side

JS: Useful foundation

SN: Yes - there are comparable data points which are useful

Being able to focus and filter sounds can be difficult

JS: It will be interesting - the envelope that Steve found jumped out at us.

It is more restricted - and I'm curious what Timed Text will think is acceptable.

SN: The TV people are dealing with high quality signals

So they can demand more.

The more restrictive window is the one for digital broadcasting

SH: Thanks so much Steve

Just to add, I've been able to start reading more on Captions and their timing - and related issues

SN: It sounds like we can also add more subheadings.

Synchronisation and captions, for example, or translation

There are synchronisation windows

JW: Q, maybe for Josh

Is the case of media on the web for video different from RTC?

If you have real-time captions, that will be a challenge in its own right?

JOC: We'd have to look at individual cases

JS: There is delay - so the question is how well you can keep these things synchronised

<discussion on how to use latency and available bandwidths to an accessibility advantage>

Getting video early is good :-)
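
<illustrative sketch - not from the meeting - of one way latency can work to an accessibility advantage: if live captions arrive with a known pipeline delay, video can be buffered by the same amount so captions land in sync rather than late; the class and the 300 ms figure are assumptions>

    from collections import deque

    class DelayBuffer:
        """Hold video frames for delay_ms so slower captions catch up."""

        def __init__(self, delay_ms: float):
            self.delay_ms = delay_ms
            self.frames = deque()  # (timestamp_ms, frame) pairs

        def push(self, timestamp_ms: float, frame) -> None:
            self.frames.append((timestamp_ms, frame))

        def pop_ready(self, now_ms: float):
            """Release frames whose delay has elapsed."""
            ready = []
            while self.frames and now_ms - self.frames[0][0] >= self.delay_ms:
                ready.append(self.frames.popleft()[1])
            return ready

    # Usage: with a caption pipeline measured at ~300 ms behind the video,
    # buffering video by 300 ms makes captions and video appear together.
    buf = DelayBuffer(delay_ms=300.0)
    buf.push(0.0, "frame-0")
    print(buf.pop_ready(100.0))  # [] - not yet released
    print(buf.pop_ready(300.0))  # ['frame-0'] - now in sync with its caption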

JB: Reflecting on this, we seem to be referring to this emphatically

Want to make sure we differentiate between the theoretical and that which is well researched

JS: We can ask this in our cross-group meeting

JB: Let's talk, but be cautious in our conclusions

SN: I'm assembling references, etc.

JW: We will continue to review the research findings.

JS: Judy did you get any more input on studies from deaf or hard of hearing folks?

JB: no

JW: There is time

JS: I should note our wiki here

RAUR update

JW: Just in case we have RAUR discussion

Josh is working on that this week

JOC: Yes, I've started

I reviewed the doc this morning and will start a new branch with updated content tomorrow

Hope to have a production-ready branch for Michael by EOB Friday

JS: Please have the URL ready for the next meeting

JW: Any update?

JB: I'll have an update soon

There is work on hybrid meetings underway on the Team side

I'm thinking about it

Other topics

JW: Not so many at the moment

JS: We may have some new recruits soon.

SH: Yes, will update after TPAC

JS: We should look at future meetings - this call isn't on next week

JB: On the WAI co-ordination call later..

During TPAC, people look at what other groups are doing - if they were to look at the RQTF pages, would they find up-to-date material?

Are we advertising well?

JS: I'm glad you brought this up - we're sort of halfway there

We have an APA meetings page, and we should link to that

JS: I'm doing my best to list meetings and their items, as well as key reference points

So preparatory docs are referenced

JOC: That's great

JS: We should have these things on a page.

All relevant resources.

We are experienced doing this, but not so much virtually

Web-based Remote Sign Language Interpretation

https://www.itu.int/dms_pub/itu-t/opb/tut/T-TUT-FSTP-2019-ACC.RCS-PDF-E.pdf

ITU-T SG16 (Question 26)

JS: Should we invite Masahito Kawamori to our Timed Text or RTC meetings?

JOC: Sounds good to me.

JB: It's good that they are looking at this

It may make sense to draw him into those meetings

JB: We need to co-ordinate our site first as well
... It would also be good to link in with the Immersive Captioning CG

<janina> https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2020

Schedule of next meeting.

Masahito is also interested in our work on Cognitive Accessibility and ML

JS: We can meet on the 21st and then again in Nov

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/10/07 13:58:44 $
