W3C

- DRAFT -

Accessible Platform Architectures Working Group Teleconference

15 Oct 2020

Attendees

Present
janina, plh, CharlesHall, Chris_Needham, jeanne, Francis_Storr, Lauriat, Nigel_Megitt, becky, mikecrabb, jasonjgw, MelanieP, wendyreid, George, MattChan, Matthew_Atkinson, paul_grenier, Joshue108, gpellegrino, PeterKorn, mgarrish, Yanni, Dee_Dyer, ivan, CharlesL, Avneesh, duga, SuzanneTaylor, tzviya, LisaSeemanKest, marisa, DanielWeck, KimD, Bill_Kasdorf, JustineP, Garth, CharlesHall_, bruce_bailey, ada, kirkwood, yonet__
Regrets
Chair
SV_MEETING_CHAIR
Scribe
Joshue108, becky, Matthew_Atkinson

Contents


<Joshue108> scribe: Joshue108

JS: We have some agenda items from APA

additions from Silver

Let's look at the agenda overall first

WR: Good idea

Maybe an update from Avneesh and Marisa on media sync, SMIL, etc.

JS: great stuff

PK: Where can I find the agenda?

JS: Jeanne anything from Silver?

JSpel: We are interested in what ePub is working on and how we can include it in Silver

Emerging APA specs of interest; Personalization; Pronunciation

JS: A couple of interesting specs to help ePub create more accessible content

This is of interest where there are legal mandates etc

I want to talk about Personalization and Pronunciation

<becky> personalization video: https://www.w3.org/2020/10/TPAC/apa-personalization.html

Personalization is more advanced at the moment

We have a video..

Acts like ARIA in that it allows the author to overlay additional semantics

These allow the UA to address the needs of users with cognitive and learning disabilities

Such as symbol sets

JS: <gives background to education in this space and the use of proprietary symbolsets>
... Lisa - what is it, more deeply?

<PeterKorn> Can someone post the video into IRC?

<becky> personalization video: https://www.w3.org/2020/10/TPAC/apa-personalization.html

LS: The idea is to add additional semantic info to the content

This is info that the author knows - enables adaptation

e.g. with a help icon, if you can say semantically that this is a help item and this is its purpose

then at the user end they can add an icon that means help to them and that they are used to

e.g. older users may want the older floppy disc icon - this may be meaningful to them

for a younger user it may be a USB stick - so comprehension is dependent on context etc

e.g. For users with cognitive overload - when shopping, being offered 'extra' stuff and forgetting what you originally wanted.

So some users only want the basic core stuff

We need clear definitions, and these are things that are going into Module 1 of the Personalization Spec

Then we will want to work on Module 2 and 3

Module 2 will be about alternative content.

Someone on the spectrum may like the extra stuff

Someone with dyscalculia wouldn't

Depending on where the impairment is, they need to enable different layers of help etc. - this is for Module 2
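
For illustration, a rough sketch of how such author-supplied semantics could be expressed with the draft Module 1 data-* attributes. The attribute names and values below are based loosely on the editors' draft and should be read as illustrative assumptions, not normative markup:

    <!-- A help link annotated so the user agent can swap in a symbol the user knows;
         the Bliss symbol reference number here is a placeholder value -->
    <a href="/support" data-destination="help" data-symbol="12345">Help</a>

    <!-- Core purchase action kept; "extra" offers marked so they can be hidden
         for users who only want the basic core content -->
    <button type="submit" data-simplification="critical">Buy now</button>
    <aside data-distraction="offer">Customers also bought...</aside>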

JS: Thanks for the overview

Av: My understanding is that this is more for the ePub reading system to support

Difficult for a publisher to support, issues with geographies

We are working on reading system guidelines

So a link to this would be good.

Do you have a library of symbols?

JS: Let's not dive too deeply
... We are using one by Bliss as an index to translate
... Lisa knows more about that.

<becky> latest editors draft of the explainer: https://w3c.github.io/personalization-semantics/

I agree, and think of ePub as an early adopter of personalization

LS: To add, these are extreme cases of different learning styles.

Adaptation is a win for educational ePub

JS: Status update - we are ready to go to CR

Hope to schedule meeting with the TAG

We may have to do CR twice due to an HTML 5 spec prototyping issue - we need a formally assigned reserved prefix

We need to request that from the WHATWG, so we need to talk with the TAG.

<CharlesL> Here is the link to Module 1 Personalization Semantics https://w3c.github.io/personalization-semantics/content/index.html

JS: We will provide background, and then re-issue with the permanent prefix

You can implement today, but you will need to update when the prefix changes.

<becky> pronunciation video: https://www.w3.org/2020/10/TPAC/apa-pronunciation.html

JS: Now, Pronunciation..
... The poor pronunciation of items in text to speech wastes users' time

We are presenting to the TAG and WHATWG..

We have two directions for this - one will solve the a11y problem, the other will do that but also provide an industry-wide approach for all text to speech engines

JS: Mentions the A lady, S lady etc

We are looking for SSML to be allowed into the spec

Older UAs will just ignore it - others will use it for specific hinting on pronunciation
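
For illustration, a sketch of what inline SSML hinting in HTML might look like. Exactly how SSML would be integrated into HTML is what the group wants to discuss with the TAG and WHATWG, so treat this as an assumption rather than agreed syntax:

    <p>The assignment is due on
      <speak xmlns="http://www.w3.org/2001/10/synthesis">
        <say-as interpret-as="date" format="mdy">10/15/2020</say-as>
      </speak>
      <!-- Older user agents that do not understand speak/say-as
           would simply read the text content as before -->
    </p>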

<give examples of TTS blunders>

JS: We need a wider implementation agreement - this may take time.

There is a demo for this - the meeting with the TAG did come through

JS: You are welcome

Major impact with ePub

PK: Are you thinking of subsets of SSML?

JS: Yes.

PK: Being developed in the Pronunciation TF

<becky> Meeting details for Pronunciation meeting: https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2020#Meeting_with_TAG_for_discussion_of_Pronunciation_issues

BG: Posts meeting details

<becky> 15:00 UTC Friday, Oct 16

CL: What about the links between Pronunciation and the CSS Speech module?

JS: Revisited this year with Leonie as editor
... We now understand we are addressing different aspects of controlling speech

so we are not clashing

Anyone from Pronunciation?

PG: I've just posted - in our gap analysis Mark H has written this up

<paul_grenier> https://www.w3.org/TR/pronunciation-gap-analysis-and-use-cases/#gap-analysis

JS: What SSML does vs CSS speech?

PG: Yup

<paul_grenier> https://www.w3.org/TR/pronunciation-explainer/
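
Roughly, the distinction drawn in the gap analysis is that CSS Speech controls how content is voiced (which voice, how fast, where pauses go), while SSML-style markup controls how specific content is pronounced. A small sketch assuming the CSS Speech Module properties (these have little browser support today, so this is illustrative only):

    <style>
      /* CSS Speech: delivery of a whole region - voice, rate, pause after it */
      nav {
        voice-family: female;
        voice-rate: slow;
        pause-after: strong;
      }
    </style>

The inline SSML sketch earlier, by contrast, changes how one specific piece of content is spoken, which is why the two pieces of work are not clashing.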

LS: I suggest this is looked at by screen reader users, esp Arabic and Hebrew
... Very hard
... Arabic has a lot of ambiguity in language

JS: We will look at this

Can we recruit some?

JS: We have a spec for markup but need to work out how it will be parsed.

And then translated via A11y APIs


Vendors may not want to do that.

JS: That model may work for a11y or not
... We need to convince the WHATWG on SSML in HTML

IH: In ePub we need to work out testing methodologies and testing of all kinds

There is future work here, so the tests used for SSML.. should be reused

It is different from testing HTML in the browser - we should build a bridge and reuse these tests

JS: Appreciated Ivan

CL: The emphasis is interesting - publishers use an emphasis tag for bolding

and not used generically

We need to get that emphasis in screen readers etc
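
For comparison, SSML has an emphasis element; one open question is how HTML emphasis (or an SSML subset) would be mapped through the accessibility tree so screen readers actually convey it. A small SSML-only sketch (the HTML integration is the part still to be worked out):

    <speak xmlns="http://www.w3.org/2001/10/synthesis">
      Submit the form <emphasis level="strong">before</emphasis> closing the window.
    </speak>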

JS: Maybe in there already

PG: We need to work out how this will go to the AX tree

There may need to be a preference order, also backwards compatibility issues

We need to discuss more -

JS: OK, there is opportunity here


Publishing Group requirements for Silver

<Avneesh> https://github.com/w3c/publ-a11y/wiki/Publishing-issues-for-Silver

Av: I've sent the link

Here is the list from three years ago

We are working through the list

Should I walk through it?

JS: Jeanne?

JeanS: Please give us the most important or difficult

Av: The a11y metadata

We need a way to bind

There are various options

We need a generic way to do this.
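
For context, EPUB accessibility metadata currently sits in the package document using schema.org properties plus a conformsTo link; the open question is how a WCAG 3 / Silver conformance claim would bind to this in a generic way. A minimal example (values illustrative; the schema: prefix is declared on the package element):

    <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Example Title</dc:title>
      <meta property="schema:accessMode">textual</meta>
      <meta property="schema:accessMode">auditory</meta>
      <meta property="schema:accessModeSufficient">textual</meta>
      <meta property="schema:accessibilityFeature">synchronizedAudioText</meta>
      <meta property="schema:accessibilitySummary">Full text with synchronized audio narration.</meta>
      <link rel="dcterms:conformsTo"
            href="http://www.idpf.org/epub/a11y/accessibility-20170105.html#wcag-aa"/>
    </metadata>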

Av: There are many pages and audio files, brought together by a manifest

You can point to the manifest that indicates resources

Or you can just point to the resources

How does this fit together?

There will be requirements for publishing

There can be specialised subdivisions - without getting everything into the general requirements

In WCAG each thing is evaluated

A modular approach makes assessment and validation easier

Then there is generalised content

and media overlays

WCAG doesn't touch on synchronised media

JeanS: That's a good start

We have addressed some of this and would like your feedback

<jeanne> https://w3c.github.io/silver/guidelines/#processes

JeanS: Latest editors draft

We want to look at conformance via types or in terms of views and by processes

I hope processes would meet ePub needs

JS: To define - first do this, then that, then the other

These are things you need to walk through to achieve something

Each node counts

Av: I will discuss this with you in Silver

PK: As we work to use higher level language and not tie things just to HTML

Would ePub consider not totally aligning with WCAG 3, but saying an accessible ePub is this

And here are the things we tweak, as a book is not a webpage

So we don't just use WCAG 3 for everything.

Av: We expect to use WCAG 3 for conformance

We should not need additional specs for conformance in ePub

<Zakim> Lauriat, you wanted to clarify that process can also mean "read through a page"

<ivan> EPUB A11y Draft for EPUB 3.3. : https://w3c.github.io/publ-epub-revision/epub33/a11y/

SL: A process can also be as simple as reading through a page

JeanS: I agree with Avneesh - we want to include ePub in Silver conformance

Look at the captions guideline section

We have worked on this for XR

Immersive environment etc

We used generalised guidelines for Captions and then some specialised items

we need to test

JS: Also note that while Silver is an FPWD - it is exemplary of where the spec is headed but not complete.

<wendyreid> https://w3c.github.io/publ-epub-revision/epub33/a11y/

WR: The ePub group is working on a new revision

the idea is to take the WCAG guidelines and, where clarification is needed -

Av: If you need that, it's fine

JS: We are closing on the hour.

<becky> scribe: becky

Media Synchronization Update

JS: The Research Questions Task Force got interested in media synchronization
... there are limits on how far audio and video can get out of sync before comprehension suffers for people with a hearing disability
... affects people relying on lip reading; but research indicates that all people rely on lip reading (without realizing it)
... had a meeting with the Timed Text WG and the Media & Entertainment Interest Group to discuss how to control / limit this; discussed different options

Marisa: I am with the DAISY Consortium; in a community group discussing this for publishing
... hear and highlight what is being spoken within audio books or publications
... media overlays are already available; the idea is to group fragments - match this audio with this chunk of text
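
For context, a minimal EPUB 3 Media Overlay document pairing one audio clip with one fragment of text, so the reading system can highlight the text as it is spoken (file names and timings are made up):

    <smil xmlns="http://www.w3.org/ns/SMIL"
          xmlns:epub="http://www.idpf.org/2007/ops" version="3.0">
      <body>
        <seq epub:textref="chapter1.xhtml" epub:type="bodymatter chapter">
          <par id="s1-audio">
            <text src="chapter1.xhtml#s1"/>
            <audio src="audio/chapter1.mp3" clipBegin="0:00:00.000" clipEnd="0:00:04.250"/>
          </par>
        </seq>
      </body>
    </smil>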

<marisa> https://github.com/w3c/sync-media-pub/

<marisa> Synchronized Media for Publications CG

Marisa: can enhance with additional data about the type of content - gives AT users the option to leave / bypass the content; identify narration, background audio, etc.

<marisa> * https://raw.githack.com/w3c/sync-media-pub/wip/docs/new/index.html

Marisa: have just released a new draft - see link above

<marisa> Demo: https://marisademeglio.github.io/worlds-best-audiobook/web/library/

Marisa: want to include more types of media and improve mechanisms; content types represented - audio overlay to HTML - see demo
... audio narration to SVG content; add structuring to audio books; book with background music track and more;
... haven't looked at latency yet

JS: need to make sure Pronunciation works with this WG

Paul: seems like the opposite of Pronunciation - map the audio to the content

JS: not clashing with each other;
... probably not critical if audio and content are within 100-200 milliseconds

PK: there are going to be downstream user agent challenges - for example Bluetooth latencies; and latencies can vary over time

JS: good heads up

https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2020

<scribe> scribe: becky

Activities Update: XAUR; Silver XR Specifications

<Matthew_Atkinson> scribe: Matthew_Atkinson

<Joshue108> https://www.w3.org/TR/xaur/

<Joshue108> https://www.w3.org/TR/xaur/#c-change-log

Josh: We've had quite a bit of engagement with the document via GitHub; many user needs and requirements added (per above link).

Janina: We have an updated working draft; it's nearing completion. Getting/got through the feedback. Should be finalised as a W3C Note soon.
... ...then to work on implementations. Looking for any further comments/input very soon (next few months).

<Zakim> ada, you wanted to comment

Ada: *Will feed back comments arising from Immersive Web WG meetings following TPAC*

*Thanks from APA*

Josh: ACK Ada's suggestion of semantic scene graphs.

Silver XR

Janina: Note that Web Content Accessibility Guidelines (WCAG) 3.0 is in development; WCAG 2.2 is currently being finalised; WCAG 2.1 is current. The Accessibility Guidelines (AG) WG (AG ~= Silver) is working on guidelines related to making XR accessible.

<jeanne> https://w3c.github.io/silver/guidelines/#captions

<mikecrabb> https://w3c.github.io/silver/subgroups/xr/captioning/functional-outcomes.html

Michael Crabb: Working on developing a mixed reality (XR) accessibility guideline. Started with captioning (per above link). Looked at the user requirements (from XAUR) and now working on the expectations users have wrt outcomes for successful captioning in XR.

Michael Crabb: 5 outcomes were identified (check out the link directly above).

Michael/Janina/Josh: Note that "second screen" in this case may be another device such as a Braille display.

Janina: i.e. an auxiliary device synchronized to the primary media.

<becky> Outcome 1: We Need Captions

<becky> Outcome 2: We need meta data of sound effects

<becky> Outcome 3: Second screen adaptions should be possible

<becky> Outcome 4: Customization of captions

<becky> Outcome 5: Time alterations for caption viewing

Michael Crabb: Work is ongoing on temporal customization of media.

Lisa: Use cases for people with ASD or cognitive awareness disabilities. Awareness can be an issue. [scribe note: of gestures?] Reactions to cues such as expressions may be too little/too much. Some help with interpreting this would be good.

Janina: *suggests semantic annotations to help users interpret such events/meaning*

Lisa: Suggest a mapping with a meaning token, which maps to something a particular user can understand.

Janina: It would need to be mapped from a particular location/event.

Josh: XAUR could be expanded to articulate specific requirements like these—need to work out which specific situations need to be catered for due to user needs. After publishing, this may become more apparent.

Lisa: *COGA to check out XAUR*

<mikecrabb> https://www.w3.org/WAI/GL/task-forces/silver/wiki/XR_Subgroup

Michael Crabb: Having defined outcomes, now working on content that can go into WCAG 3.0 to provide for these. Link directly above contains some drafts—comments and feedback sought. Current stage is only wanting to add things when there are methods that can be used to achieve outcomes.

Michael Crabb: One challenge is how to provide captions in a 360 environment. For authors creating content: tools for ensuring the captions appear in the right place—where are they coming from in physical space; where should they be rendered. What if, as Janina mentioned, the speech is coming from behind you. How can we get this info to users in the most appropriate way?

Janina: There are W3C Community Groups looking into captions; aware that Silver is liaising with them.
... Existing standards such as TTML and WebVTT could be built upon to provide the captions. There could be third-party authors that add this overlay data and it be interleaved on the fly (e.g. accessibility offices in universities).
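
For reference, on the 2D web captions are usually attached as WebVTT via a track element; in XR the cue data could come from the same kind of file with the rendering handled differently. A minimal sketch, not an agreed approach:

    <video src="scene.mp4" controls>
      <track kind="captions" src="scene.vtt" srclang="en" label="English captions" default>
    </video>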

Michael Crabb: Aware of work that is being done at the University of Salford and organisations such as Google. [scribe note: didn't catch names of those involved]

<Lauriat> Chris Patnoe

Michael Crabb: One approach is to use automated captions in general, but if a student declares a need for captions then a human specialist is brought in to provide/check them.

Ada: There's a layers API that can be used to provide text in XR. Is aware of different means of projection and presentation. However it is designed for pre-rendered content, so on-the-fly is difficult. Being able to render general HTML & CSS to a layer is a perennial request, but it still some way off (though work is ongoing).
... Right now the main way to render text is to render to a texture and display via a canvas. So not necessarily the most comfortable reading experience for the long-term.
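
A rough sketch of the render-to-texture approach Ada describes: draw the caption text onto a 2D canvas, then upload that canvas as a texture onto a quad in the scene (element id and text are made up; the WebGL upload step is indicated in a comment):

    <canvas id="captionCanvas" width="1024" height="128" hidden></canvas>
    <script>
      const canvas = document.getElementById('captionCanvas');
      const ctx = canvas.getContext('2d');
      // Draw the caption into the 2D canvas
      ctx.fillStyle = 'black';
      ctx.fillRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = 'white';
      ctx.font = '48px sans-serif';
      ctx.fillText('Caption text rendered to a texture', 20, 80);
      // The canvas is then uploaded as a WebGL texture, e.g.
      //   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
      // and mapped onto a quad positioned in front of the viewer.
    </script>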

<Zakim> ada, you wanted to raise a potential difficulty

Michael Crabb: ACK layers API; had been researching it. Good to know work's being done on it.

Michael Crabb: The more feedback, the better, due to this being such a new area. Any comments, questions, suggestions are actively sought and appreciated.

Jeanne: *Posted the link to the editor's draft earlier*
Jeanne: This will include methods specific to XR and specific UAs.
Jeane: This will include methods specific to XR and specific UAs.

Janina: the First Public Working Draft of the accessibility guidelines is exemplary of where we want to go—there's a lot more content to add.

Additional Topics? eg. Immersive Annotated 360 Mapping

Ada: The DOM Overlay API is very useful for AR; allows a layer of HTML to be added on top of the display. This should work with standard mobile screen readers.

<mikecrabb> +1 to usefulness of DOM Overlay API - really great work there

Janina: ACK; Recent workshops on Mapping and Machine Learning have highlighted this too.
... Good to know that its development is progressing well.

Josh: How does the DOM Overlay API relate to the Accessibility Object Model?

Janina: suggest we meet with AOM after checking out the DOM Overlay API.

<ada> DOM Overlays:

<ada> https://github.com/immersive-web/dom-overlays
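
For reference, a minimal sketch of the WebXR DOM Overlays module: an ordinary HTML element is nominated as the overlay root when requesting an immersive AR session, so its content stays in the regular DOM and keeps its accessibility semantics (element id and content are made up):

    <div id="overlay">
      <button id="exit-ar">Exit AR</button>
    </div>
    <script>
      navigator.xr.requestSession('immersive-ar', {
        requiredFeatures: ['dom-overlay'],
        domOverlay: { root: document.getElementById('overlay') }
      }).then((session) => {
        // The #overlay element is composited on top of the AR view;
        // because it is normal DOM, mobile screen readers can reach it.
      });
    </script>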

<Joshue108> thanks

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/10/15 16:41:24 $
