W3C

- DRAFT -

APA at TPAC

19 Sep 2019

Agenda

Attendees

Present
Irfan, CharlesL, Roy, Joshue108_, Manishearth, kip, cabanier, Matt_King, NellWaliczek, Joanmarie_Diggs, ZoeBijl, Léonie, (tink), zcorpan, Avneesh, romain, marisa, LisaSeemanKest_, Joshue108, Janina, achraf, addison, stevelee
Regrets
Chair
Janina
Scribe
Irfan, Joshue108, ZoeBijl, CharlesL, MichaelC

Contents


<Roy> Meeting: APA WG Meeting at TPAC 2019

<Irfan> Chair: Janina

<Irfan> Agenda: APA Task Forces & Next Steps Today and Tomorrow

<Irfan> Scribe: Irfan


Janina: A question that can help us focus on a11y: to hear from all of us what kind of applications, what kind of immersive environments you are thinking of in your working group.

<CharlesL> Agenda: XR

Where are we going? Is that a sensible question?

APA Task Forces & Next Steps Today and Tomorrow

*Introduction*

ada: If you are looking at the steady improvement of current hardware... for the VR side, a massive technology shift.

Stuff like ML... it didn't happen overnight.

<Joshue108_> +q to ask if people understand some of the challenges around XR A11y

Software-wise, standards-wise, people are interested in WebXR.

We are building the foundation at the moment.

For the work that's been done today... hopefully we will see a lot more capability towards voice interfaces.

Speech synthesis and recognition... a long way to go.

Those are some of the thoughts.

bajones: Going for a11y, there are a couple of paths that are clear.

There are some that are not very clear.

One area which is clear is mobility concerns.

Job Simulator game example.

The kind of adjustment where you make the user bigger and the environment smaller, where you allow the user to manipulate the space.

Those are things that can be done in a way that is hands-off from the application's point of view.

You could have all sorts of a11y gestures.

These are things you could have within the browser. That kind of a11y is going to emerge very well.

It is going to have huge impact.

It relates to other forms of a11y, e.g. visual.

A-Frame, which is declarative by nature, where the base-level API doesn't have any hooks. Lots of possibilities there.

Going further, having things like descriptive audio.

I don't personally have a clear idea, and I don't use any a11y tools. This may be an area where a lot of research is required.

If you want to tab through to determine the content, or you want to navigate through the objects...

janina: any one on the phone?

*no one*

Joshua: The term XR covers many things.

Broadly speaking, it makes a lot of sense.

Essentially, it is visual rendering in 2D, which makes for a semantic information architecture in the DOM.

This gets a little fuzzy.

<Zakim> Joshue108_, you wanted to ask if people understand some of the challenges around XR A11y

In the current model, when a SR interacts with a web page (the forms-mode thing), the user bypasses the a11y layer if they are interacting with or navigating something.

<Joshue108_> What do we understand are the issues for existing AT users, where the AT is essentially outside the simulation?

What are the issues in immersive environments?

What are the issues with AT outside of the simulation?

In the future, AT could be inside the simulation.

AT could be embedded in the environment.

<Joshue108_> What does the architecture of tomorrow look like?

<Joshue108_> # Could AT be embedded in these future environments?

Another question: what does the architecture of tomorrow look like?

bajones: Are there any a11y tools that apply to a situation similar to the one we discussed?

I don't think there are many parallels to the environment we are talking about.

janina: history about it.. would like to explain

nell: We start with short-term options, where we start with the entry point. We could encourage UAs; for example, with something like Job Simulator, a browser-level setting can make it easier.

Input devices and target rays: you have to reach the thing you are trying to reach. It turns out that is not a pleasant experience.

There may be opportunity in lower-level APIs to enable alternate input devices that could accomplish a similar feature.

There were two different discussions; we need to split them out.

Perhaps the bigger benefit might not be in the web-specific way; the user agent can do something like Job Simulator does.

Existing user interfaces: often the experience is that the thing you are trying to access is behind you. Not sure how to think about making that easier.

Localizing sound is not useful to me.

It seems there is some opportunity to dig in there.

Things to consider and take action on today that aren't necessarily web-specific: how can we propose a change to the glTF (GL Transmission Format) file format?

That's a declarative format; it relies on extensions, and a11y could be integrated via an extension.
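For illustration only: glTF 2.0 already defines a generic extensions mechanism on nodes, so a hypothetical a11y extension might look like the sketch below. The name EXT_node_accessibility and its fields are invented here to show the shape of the idea; no such extension exists or has been proposed.

    {
      "asset": { "version": "2.0" },
      "extensionsUsed": ["EXT_node_accessibility"],
      "nodes": [{
        "name": "exit_door",
        "mesh": 0,
        "extensions": {
          "EXT_node_accessibility": {
            "role": "button",
            "label": "Exit door"
          }
        }
      }]
    }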

You are asking: what's the future?

Two interesting things.

One: there is a fair amount of interest in eye-tracking APIs at the platform level.

This could either be super helpful or super problematic as far as a11y is concerned.

If you can't see the content and can't detect objects, you get false positives.

It's related to input sources: having a target ray, and under the hood changing the targeting ray. There is an opportunity there; that's again a few years out.

When we look out 5-10 years in the future, we are likely to see more declarative hybrid interfaces.

An immersive shell UI has the ability to place 3D objects all around the world.

That would allow users to have a more semantic approach.

Walking down the street, you could query the menu that is digitally advertised.

joshua: We need to start with what people actually need.

nell: I am talking about 5-10 years of a11y work.
... People are going to wear those gadgets 24x7, like they have phones now.

As those things start to be more widely accepted and available, there is interesting potential to get at the information.

That could be helpful in the context of a11y.

joshua: It's great as long as it is not vendor-specific.

nell: It's very different from our imperative approach.

<Zakim> kip, you wanted to mention that at the UA level, we can implement low hanging fruit quickly that don't require spec changes. Eg, leanback mode. Maybe later add things such as

kip: I can speak to what happened in Firefox Reality.

We spent time with users to understand what they need.

We discovered things, and which of them are actionable sooner.

Some people are sensitive about some behaviors of browsers. Watching videos: we don't project the video in the proper way if you produce the video and add the subtitles into the video itself.

When we show 360-degree video, there is a different presentation to the left and right eye.

nell: It's like map projection.

kip: It is unreadable.

You may want to have that text around you.

It needs to be sensitive to where you are looking at one particular point. That's mid-term work that we want to look at.

We reviewed the document for actionable things that can be done quickly,

such as allowing mixing audio to both ears.

<Joshue108_> XAUR draft https://www.w3.org/WAI/APA/wiki/Xaur_draft

We can discover things that can be handled quickly.

nell: A-Frame

<Joshue108_> https://aframe.io/

*Thanks Joshua*

nell: The Supermedium browser is based upon A-Frame.

<NellWaliczek> to clarify, it's not built on aframe

<NellWaliczek> just the same people working on both

matt: Inside or outside the AT experience; 30 years of context on how AT technology works.

AT would live inside the app; it would load with the program, with all the functions.

You can imagine the problem for end users of having to find new ways to experience each app.

Applications rely on a third party to try to build an a11y tree. I wonder if the XR space is an opportunity to get the best of both worlds,

where you can have AT built inside the app, but a standardized API.

For building a SR that tries to read the world around you, we do not have that concept today in any SR tech.

It's a linear world, not even 2D.

I would love for us, when we think about that API, to consider what is possible beyond the linear world; that could be more ideal as a general-purpose feature.

cabanier: Everything is declarative; there is no reason the a11y DOM cannot be used.

In the short term, we do have a set of strict recommendations for applications to use.

<Zakim> ada, you wanted to ask about developers generating something akin to the AOM from the scenegraph?

ada: One of the interesting things about A-Frame is the concept of a scene graph.


There is a kind of scene graph format that can be easily generated using a JS library.

<klausw> (FYI, assume AOM = https://github.com/WICG/aom/blob/gh-pages/explainer.md)

cabanier: We have 3D declarative frameworks; they could become part of a11y.

ada: There was an API that would let you submit a JS object in a particular format.

<Joshue108_> +1 to Ada

That might go a long way to providing a method.
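As a thought experiment on Ada's suggestion only: the entry point and object shape below are invented purely to illustrate the idea, and nothing like this is specified anywhere.

    // Hypothetical, for illustration only: the page hands the UA a plain
    // JS object describing scene semantics, generated from whatever scene
    // graph the framework (e.g. A-Frame) already maintains.
    navigator.xr.submitSemanticScene({   // invented name, not a real API
      nodes: [
        { id: "door-1", role: "button", label: "Exit door",
          position: [2, 0, -3] }
      ]
    });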

joshua: great idea

would like to explore more with you folks

janina: One of the things in a11y is archaeological digging.

Stuff can change live; we put up more booting, more inconvenience, making a calculation.

My mentor was a guy who would track his eyes on the keyboard...

There is a background there; we need to dig into that.

<NellWaliczek> Anyone who is interested, I'd be happy to schedule time this week to explain a bit about how the underlying 3D and XR platforms work

There is a history of attempts to use early implementations that became more robust.

One of the most compelling presentations at CSUN in 1994:

an example of a wheelchair in the presentation.

Skills we would rather learn in a good, controlled environment.

<Zakim> Joshue108_, you wanted to mention our current draft XR user needs

We need to dig something out of the archive.

joshua: In history there have been many initiatives; we can learn from the things that didn't work well and determine why.

Exploring what we can do in the authoring environment is a brilliant idea.

Semantic scene graphs and AOM are part of the equation; what can we do for user needs?

<Joshue108_> https://www.w3.org/WAI/APA/wiki/Xaur_draft

a11y means different things for different people

<klausw> WebXR flexibility could be used for near-term a11y improvements.

<klausw> Input uses abstractions, and a custom controller should be usable in existing applications by supplying "select" events and targeting rays which are decoupled from physical movement.

<klausw> Tuning outputs: fully disabling rendering may confuse applications, but a user agent could reduce framerate, set monocular mode, and/or set an extremely low resolution to decrease rendering cost.

<klausw> Reference spaces and poses are UA controlled, the UA could have mobility options such as adjusting floor height, teleportation, or arm extension, even without specific application support.

<klausw> There are ongoing discussions about DOM layers in WebXR, important to ensure that existing a11y mechanisms can remain functional. For example, avoid a "DOM to texture" approach where this information may get lost.
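A minimal sketch of the input abstraction Klaus describes, using the WebXR "select" event and target-ray pose. It assumes a session and reference space are already set up, and handleSelect is an app-defined placeholder.

    // Applications listen for the abstract "select" event rather than a
    // physical trigger, so a UA or alternate input device could synthesize
    // selection and supply its own targeting ray.
    session.addEventListener('select', (event) => {
      const pose = event.frame.getPose(event.inputSource.targetRaySpace,
                                       refSpace);
      if (pose) {
        handleSelect(pose.transform); // app-defined handler
      }
    });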

<Zakim> kip, you wanted to say that as aframe is based on custom elements, perhaps authors could start adding aria attributes

klaus: Ask me if you have any questions about what I have added here.

kip: A-Frame is based upon custom elements.

<Zakim> Judy, you wanted to comment on "single switch access"

That could be one potential avenue to action.
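A sketch of the kind of experiment Kip hints at: since A-Frame entities are custom elements living in the DOM, an author can put ARIA attributes directly on them. Whether any current AT surfaces these inside an immersive session is untested; treat this as an exploration, not an established feature.

    <a-scene>
      <!-- ARIA attributes on entities are an author experiment here,
           not an established A-Frame or AT feature. -->
      <a-box position="-1 0.5 -3" color="#4CC3D9"
             role="button" aria-label="Exit door" tabindex="0"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"
                aria-label="Beach ball"></a-sphere>
    </a-scene>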

judy: I want to reflect back, with regards to single-switch access.

Second: a use case for mobility training.

there is very interesting development in virtual environment.

nell: I can make myself available to give you more information if you need.

joshua: We have a set of declarative semantics; we tell people how to mark up stuff. Now we are in a situation in XR where we need declarative semantics.

The only other thing that I saw recently is AOM, which could be used as a bridge between document semantics and application semantics.

What I am hearing from the feedback is that it could be possible to populate the a11y tree; we need to agree on what is needed.

The object-oriented case, where you have properties inherited or encapsulated, and the ability to understand that.

In terms of AOM, it seems an interim solution.

<Zakim> bajones, you wanted to discuss AOM

bajones: To talk about AOM: this recently came up. It is something like canvas.

A linear stream of data: what is the most logical ordering of the data?

It is relatively trivial for us to produce some markup that has got some volume and description in it; you need one intelligent way to mark it up: <div> <div>..<div>

AOM seems a reasonable example.

<Zakim> Joshue108_, you wanted to say its not really about linearisation

joshua: Brilliant topic.

If you take regular HTML, the example of the data table, where you interrogate the data table: users go where they want to go.

That's a little bit about matching your understanding. What you need is a description where you can read that a particular heading belongs to a particular field.

We need to work on what that kind of architecture looks like.

matt: As a SR user, you still have a linear view, even if you think in 3D.

If you move item by item on the webpage, you do need an order that makes sense.

*example of discovering objects in a room*

You don't have easy ways to control scanning in different ways.

<Zakim> NellWaliczek, you wanted to talk about input profiles

nell: There is one other emerging API area that will be available in the short term: eye tracking.

You will see that within a couple of years at the platform level.

There is an open source library that I am working on, "input profiles".

<NellWaliczek> https://github.com/immersive-web/webxr-input-profiles is the library's github

bajones: It is part of our input story, called select events.

It's how users do primary input.

*APA room*

<NellWaliczek> This is the link to the test page I've been using to ensure the motion controllers behave consistently. Apologies that it is very barebones (and probably very poorly built because i'm not really a webdev...), but i'd be happy to take guidance on how to make it more usable

<NellWaliczek> https://immersive-web.github.io/webxr-input-profiles/packages/viewer/dist/index.html

*Thanks Nell*

<Zakim> Joshue108_, you wanted to ask Ada more about her view of standardisation of semantic scene graphs

AOM and XR

<Joshue108_> scribe: Joshue108

Pronunciation Approach

<Joshue108_> MH: I've described the issue.

https://github.com/w3c/pronunciation/wiki

<CharlesL> scribe+ Joshue108

<Joshue108_> In the education space, we have requirements for students to be exposed to text-to-speech.

<Joshue108_> There are issues with things not being pronounced correctly, based on teaching style..

<Joshue108_> e.g. certain pauses and emphasis etc.

https://w3c.github.io/pronunciation/gap-analysis/

<Joshue108_> There are no real solutions for this, and we have done a gap analysis.

<Joshue108_> There are various hacks, such as the use of speech cues and the misuse of aria-label; rather fragile hacks.

User scenarios document https://w3c.github.io/pronunciation/user-scenarios/

<Joshue108_> We have done a gap analysis.

Use case document https://w3c.github.io/pronunciation/use-cases/

<Joshue108_> SSML is a W3C Recommendation; we don't have a way for authors to bring it into HTML.

<Joshue108_> There were solutions such as inlining into HTML, or an attribute model which may work well for AT vendors.

<Joshue108_> We also have another attribute-based model.

<Joshue108_> The question for TAG is which of these could be the most successful?

<Joshue108_> Talking to AT vendors, inlining is not so attractive etc; standardising the attribute model could work.

<Joshue108_> Also scraping content could work.
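Roughly, the two candidate shapes under discussion: the inline form uses SSML's phoneme element directly in HTML, while the attribute form reflects the task force's draft direction at the time. The attribute name and JSON shape were still being vetted, so read both as sketches.

    <!-- Inline SSML elements in HTML (one candidate; namespace issues apply): -->
    <span><phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme></span>

    <!-- Attribute model (the other candidate; details still being vetted): -->
    <span data-ssml='{"phoneme":{"ph":"təˈmɑːtoʊ","alphabet":"ipa"}}'>tomato</span>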

<Joshue108_> Irf: We have also provided these use case documents; see the URIs.

<Zakim> jcraig, you wanted to get a refresher on IPA attr?

<Joshue108_> JC: When we talked about an aria attribute for IPA pronunciation, does this do enough?

<Joshue108_> MH: Pronunciation is a key aspect, but there are issues with handling numeric values and other peculiar lexical values.

<Joshue108_> Not handled by IPA pronunciation.

<Joshue108_> LW: There are issues with classical English iambic pentameter, prosody etc.

<Joshue108_> JC: IPA allows this.

<Joshue108_> JS: The problem with loading this into ARIA is that we don't get the uptake we want.

<Joshue108_> JC: Could we do a combo of IPA and parts of CSS Speech, speak-as digits etc.?

<Joshue108_> MH: Right, a combination. Not everything is supported.
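The CSS Speech piece mentioned here does exist as a module (css3-speech), though implementation support is scarce; speak-as covers the digit and spelled-out cases that IPA alone does not.

    /* CSS Speech Module; support is scarce, so treat as aspirational */
    .phone-number { speak-as: digits; }    /* "5551212" read digit by digit */
    .initialism   { speak-as: spell-out; } /* "NASA" read letter by letter */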

https://w3c.github.io/pronunciation/gap-analysis/#gap-analysis

<Joshue108_> Janina's point is that, with the range of voice assistants, SSML-type content could be beneficial to a growing number of users.

<Joshue108_> This is not just an AT issue.

<Joshue108_> There are other potential use cases.

<Joshue108_> JS: We want to eventually make this a part of HTML.

<Joshue108_> AB: Looking through the use case doc, it does seem like a problem that goes beyond AT.

<Joshue108_> Seems like a good problem to solve.

<Joshue108_> What was the feedback you needed?

<Joshue108_> MH: We have surveys etc out to the AT vendor community.

<Joshue108_> Irf: Posts survey.

<Joshue108_> JS: We want to finish the gap analysis, etc then lock it into HTML, as the way to solve these issues.

https://www.w3.org/2002/09/wbs/110437/SurveyforAT/

<Joshue108_> JS: HTML is now not just W3C, we've talked with WHATWG etc.

<Joshue108_> Happy Leonie is here.

<Joshue108_> <discussion on namespace solutions>

<Joshue108_> MH: For some background: ETS, Pearson and others are offering solutions where things are captured.

<Joshue108_> MH: We know we have to author pronunciation to get content to students..

<Joshue108_> We are missing a mechanism to bring it into HTML.

<Joshue108_> There is a legal imperative.

<Joshue108_> MH: This is a real problem.

<Joshue108_> For language learners with the read-aloud tool, for example, if pronunciation is inconsistent with general usage:

<Joshue108_> Totally confusing for language learners.

<Joshue108_> SP: Simon Pieters from Bocoup, editor of HTML.


<Joshue108_> If you want to propose this, please file an issue on the HTML repo.

<Joshue108_> JS: Yup.

<Joshue108_> SP: You can start by presenting the problem; that's a good way to get discussion going.

<Joshue108_> I can talk about issues with the namespace.

<Joshue108_> MH: No-one we have talked to really wants to go there.

<Joshue108_> SP: Those are technical implementation issues; the problem statement is the crucial bit.

<Joshue108_> AB: Was going to suggest filing a review.

<aboxhall_> https://github.com/w3ctag/design-reviews/issues/new/choose

<dbaron> https://w3ctag.github.io/explainers

<Joshue108_> SW: We require an explainer.

<aboxhall_> Choose "Specification Review"

<Joshue108_> AB: If there is an issue filed on HTML we can bring those issues together, any preference?

<Joshue108_> SP: No, I need to know more about the space first.

<Joshue108_> SP: Process-wise: file an issue, explain the use case, point to existing work.

<Joshue108_> Will send a link.

<Joshue108_> MH: We are vetting approaches.

<sangwhan> SW: https://github.com/w3ctag/design-reviews/issues is where we take reviews. We require an explainer - effectively an elevator pitch in markdown. Here is an explainer for what an explainer is: https://w3ctag.github.io/explainers

<Joshue108_> JS: The use case doc is past FPWD.

<Joshue108_> There are directions that are apparent.

<zcorpan> https://whatwg.org/faq#how-does-the-whatwg-work - whatwg process

<Joshue108_> AB: We have a different definition of an explainer for TAG review.

<Joshue108_> <gives overview>

<Joshue108_> We like to understand the problem space and options you have considered.

<Joshue108_> And discuss.

<Joshue108_> JS: Sounds good?

<Joshue108_> <yup>

XR and AOM

<mhakkinen> IMS QTI Spec (Question Test interoperability) https://www.imsglobal.org/question/index.html

<ZoeBijl> scribe: ZoeBijl

<mhakkinen> IMS QTI usage of SSML defined here: https://www.imsglobal.org/apip/apipv1p0/APIP_QTI_v1p0.html

Josh: we had a very useful meeting with some folks from immersive web

there was a general need to give this some attention

general need to understand user needs

?? semantics

DOM generation

accessibility tree

and getting that to AT

There was also an acknowledgement of ??

things could be described declaratively

it’s not moved(?) into an accessibility API

there was an interesting discussion around making accessible ???

<CharlesL> scribe+

<zcorpan> scribenick: CharlesL

JC: AOM is not yet ready to be used as a temp solution today/tomorrow

virtual trees may be a while off

<Irfan> scribe: CharlesL

ARIA reflected attributes

Josh: We are making assumptions; if we took an agile approach, what does good look like?

Janina: what is practical.

Josh: if that's a blocker

<aboxhall_> https://github.com/WICG/aom/blob/gh-pages/caniuse.md

Josh: semantics for XR … can we?

Allison: what might be possible. what AT would be consuming this, really cool if they were developing AT

Josh: AT could be embedded in the environment.
... AT would be looking at an abstraction.
... core things the user needs to know (role / state / property)

<aboxhall_> https://github.com/WICG/aom/blob/gh-pages/explainer.md#virtual-accessibility-nodes

James: On the roadmap are virtual trees; for canvas, a JS API to expose the virtual tree under it.

Alice: We could create the accessibility tree like DOM nodes; could you create a DOM tree that represents the XR environment?

How would existing ATs interact with it, with different user interfaces?
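A rough sketch of the virtual accessibility nodes idea from the AOM explainer linked above. AOM was an early-stage proposal, so every name here is provisional and may differ from (or never reach) any shipped API; canvas is assumed to be an existing rendering-surface element.

    // Provisional, after the AOM explainer's virtual-node sketch (not
    // shipped anywhere): hang a virtual a11y subtree off a rendering
    // surface such as a WebGL canvas.
    const root = canvas.attachAccessibleRoot(); // name per explainer draft
    const door = new AccessibleNode();          // provisional constructor
    door.role = "button";
    door.label = "Exit door";
    root.appendChild(door);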

Josh: JS calls on the DOM window object; an environment object could have children objects as separate nodes.
... I would see visiting them sequentially: linearization. In web page markup there are semantics; if marked up correctly, users can navigate it.

Leonie: suggested an API to expose those things.

Josh: create a blob semantics that user can interact with it.

<tink> Proposal for an API for immersive web https://github.com/immersive-web/proposals/issues/54#issuecomment-522341968

Alice: new AT based on the APIs

jcraig: 3D space VR/AR could be a new accessibility primitive, and use cases for AR/VR are not yet settled.

AR - utilitarian, VR - games etc.

some primitives we can put together but a solution is very early

Leonie: ARIA has web component UI controls, but in VR we can have anything, from a lab to a dragon, so how can we figure out what we are dealing with?

Josh: A DOM tree that emulates this room: a tree can be generated. We had issues with documents rendered via async calls (the AJAX case), but an immersive environment changes as a function of time, backwards and forwards depending on where the user is. Nodes will change depending on the user, via API calls, as a function of time.
... Moving beyond the document object model: states as a function of time.

Alice: Scope it to a new vocabulary: fundamentally a tree, and a node you would interact with, sequentially or in 2D space. How do you pick which node in 3D space?

Josh: we need a vocabulary in AOM, a lexicon of terms

Matt: agrees with josh

Interaction: how do we read this tree? We don't know yet.

surfacing the info so AT could interact with it.

Alice: Say there is a tree; but how does the tree map to the immersive environment?

Matt: where you are standing in that tree is something we would need to know.

Josh: no

Alice: we need to know how the interaction would work.

Josh: I don't think we need to worry about that; the interaction could be mediated by the AT.
... Some things are AT responsibilities. Matt, in a different environment, updating that tree sequentially would give you that concept of movement.
... Various nodes within that environment could have different sound effects.

<Zakim> ZoeBijl, you wanted to say that I don’t see how flattening the 3D space would give the same experience

zoe: I am not sure flattening a 3D space would give an AT user the same thing

<Zakim> jcraig, you wanted to say I am not sure a “tree” is the right solution for 3D space and to say some of the vocab may be solved in an XR ARIA module (similar to DPUB’s)

zoe: A website is essentially a linear document. It might have branches in 2D which you can move about in. But all of the branches are connected. This doesn’t work the same way in 3D space. Things aren’t connected to each other—they’re not linear.

James: I am not sure a tree is good for 3D space, as Zoe points out.
... Obscuring, moving behind objects, etc.
... Not convinced. Josh's idea of a vocabulary that you can work on, like the DPUB module in ARIA; not sure how far that would get you.

<Zakim> zcorpan, you wanted to discuss analogy with scrolling content into view

In different environments it could be that just saying "boardroom" is enough, but these ideas are not worked out yet.

Simon: You can scroll in two dimensions; similarly, seeing in one direction vs. moving your head is potentially like a scroll bar.

Josh: Google is working on a large JSON model populated as needed; a nascent thing, virtual scrolling.
... Modal muting is the idea of cutting out the stuff you don't use, i.e. without visual rendering it would be much more responsive, etc.

<chrishall> https://en.wikipedia.org/wiki/Octree

Matt: Per a previous meeting: in every 3D library there is a concept like an octree.

<chrishall> https://en.wikipedia.org/wiki/Binary_space_partitioning

Josh: A user within an immersive space has a view from within that space. A scene graph is the representation used for expressing relationships; octrees are an optimization reducing the load on the output device.

Logic is captured in the form of a graph; a spanning tree can be deduced from that graph.

I don't believe the octree is the right representation with semantic value;

an octree only subdivides space.
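To make the contrast concrete (both shapes below are illustrative only, not any engine's actual types): a scene graph node carries authored relationships and could carry semantics, while an octree node is a purely spatial index.

    // Illustrative only: a semantic scene-graph node vs. an octree node.
    const sceneGraphNode = {
      semantics: { role: "furniture", label: "Boardroom table" }, // hypothetical
      transform: [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1], // identity
      children: []                    // chairs, laptop, ... authored structure
    };
    const octreeNode = {
      bounds: { min: [0, 0, 0], max: [4, 4, 4] }, // an axis-aligned box only
      octants: new Array(8),          // eight spatial subdivisions
      objects: []                     // whatever happens to fall inside
    };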

Rossen - made previous comments

Rossen: An octree reduces down to a quadtree.

Matt: Strictly spatial is an octree.

Simon: What do we want to represent to the user? Is a tree or a graph the best way to do this?

Janina: Nell mentioned that as you pass restaurants you may get the entire menu, or a way to enter that virtual environment to eat there.

Rossen: Current AT observes one element at a time, which is fair on a web page; but in a 3D space you are observing a multitude of things happening, which doesn't fit the current single-observability model.

Simplest thing: how do you convey multiple things to the user at the same time?

Matt: If a person is coming down the street in the real world, I hear the footsteps; if there are cars in the street, I hear that. But if it is Janina walking towards me, the AT could say who is coming towards me, or that the vehicle on the street is bus #102. That is the information we could expose via AT.

Josh: Cherry-picking certain portions, like a scrolled window pane, we could map and time-sync with sound effects, iterated over time.

Simon: Describing virtual reality is similar to an actual person helping a blind person on the street: you would talk about one thing at a time, and similarly a screen reader would do the same.

<Joshue108_> CharlesLeP: I used to work on GPS systems for blind users; when walking through the street you will hear announcements.

<Joshue108_> These could be personalised and narrowed down to what was needed.

<Joshue108_> You can also ping to find out what is around you.

Leonie: Microsoft Soundscape does this, with different pings and distances to where those objects are in reality.

Josh: a semantic scene graph and tree representations could be beneficial

<tink> W3C workshop on inclusive design for immersive web standards https://w3c.github.io/inclusive-xr-workshop/

Leonie: there is a w3c workshop on Nov 5/6 in Seattle

Digital Publishing / APA


<Avneesh> audio books: https://www.w3.org/TR/2019/WD-audiobooks-20190911/

Janina: I did not get this review done

<Avneesh> publication manifest: https://www.w3.org/TR/2019/WD-audiobooks-20190911/#audio-accessibility

Avneesh: The basic dpub manifest; audiobooks is a JSON structure with a default playlist, a TOC in HTML, and page numbers, and it uses media fragments: file name, chapter2.mp3, and the time sync. Good for a11y, but not accessible for the hearing impaired.

<LisaSeemanKest_> trying to join the webex

<LisaSeemanKest_> i can join after the host joins

Avneesh: a pointer to the media file, synced with a text representation
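Simplified from the audiobooks draft linked above, the sync described here hangs off Media Fragments time offsets (#t=start,end) in the reading order; the title and timings below are made up.

    {
      "@context": ["https://schema.org", "https://www.w3.org/ns/pub-context"],
      "conformsTo": "https://www.w3.org/TR/audiobooks/",
      "name": "Example Audiobook",
      "readingOrder": [
        { "url": "chapter2.mp3#t=0,328", "duration": "PT328S" }
      ]
    }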

Marisa: We are exploring and prototyping sign language video sync. We restricted sync media to text/audio, but there is room to grow into sign language.

<LisaSeemanKest_> we are on the webex, but the host needs to join

Janina: the APA review should take us a week.

Avneesh: End of September would be fine; we want to go to CR by early October. i18n review is already done, and privacy is going on right now and looks good.

Janina: I will make sure APA review is done by the end of Sept.

<LisaSeemanKest_> roy, no audio

<LisaSeemanKest_> taking a break

<LisaSeemanKest_> no

<LisaSeemanKest_> hanging up. will try audio again

<LisaSeemanKest_> ok, will call back after the break

<LisaSeemanKest_> will try a different audio

<LisaSeemanKest_> i am trying to join

<LisaSeemanKest_> i can not join without michael joining

<LisaSeemanKest_> it needs a host

<LisaSeemanKest_> waiting for michael to join webex

FAST

<LisaSeemanKest_> waiting for Michael to join the webex

<Joshue108> JOC: I would like to understand how the FAST architecture relates to other specs and work that we have going on.

<Joshue108> So what does good look like for the FAST, how do we need to change it?

<Joshue108> JSON-LD used the FAST; their horizontal review request used it.

<Joshue108> FAST is a big list of user needs and a description of how they could be met.

<Joshue108> scribe: Joshue108

MC: That was a bigger issue than I thought.
... It's there as a POC.

Around this time, checklists started to get traction, so all groups were starting to get requests for checklists.

<LisaSeemanKest_> You can't start the meeting right now because we're having problems connecting to the WebEx service. Try again later.

<LisaSeemanKest_> Error code: 0xa0010003

<LisaSeemanKest_> Help us improve Cisco Webex Meetings by sending a problem report. Your report will be confidential.

<LisaSeemanKest_> error message

It does have some good ideas, with filtering related to the tech you are developing.

There is a short and long form version of the checklist.

There are placeholders for links to relevant specs.

MC: It's a CSS-styled thing but not really a functioning spec.

It is hard to tell how this is applicable tbh.

We did try the WASM thing.

We should regroup, recode it.

Should be Yes/No/NA for example.

And can be used as a list of relevant questions.

This would be easy to do with a DB, and output the checkboxes.

MattK: Why do you need to do that? Are there not other groups doing this?

MC: Because other groups do this differently.

<LisaSeemanKest_> joined on my phone. Thanks all

A better way to edit it, and output it etc would be good.

There is talk about a common infrastructure; not going to happen quickly.

MK: What happens to the output?

MC: There is a GH feature where you can store some data etc.

MK: Why not make an issue template and put them in there? GH has this out of the box.

MC: i18n does this.

<discussion on GH pros and cons>

https://w3c.github.io/apa/fast/

LS: There is another issue, may not be the right time.

My concern is that it is difficult to get things from COGA into WCAG.

This is possibly more important than WCAG, so this could have the hooks to make stuff happen.

So rather than focussing on WCAG etc for user needs, and with other specs.

They could be moved here, and could include more COGA issues.

This could help to not perpetuate a catch 22 situation.

There could also be more flexible technologies etc outside of COGA as well.

So it could be a way of addressing accessibility use cases.

As speech interaction is more prominent this will be more relevant.

So instead of UAAG 2.0 etc they could be moved here.

MC: On user needs we should be migrating towards that.

Longer term vision for sure..

FAST could be the repo of user needs with other specs in parallel.

We won't get there quickly or easily.

Silver is also moving in that direction.

Will take time to do something meaningful, our focus now is on the checklist for self review.

We need to do the checklist first.

LS: There are problems from my perspective, I'm not seeing that the COGA patterns are being included here.

MC: Yes, it is incomplete. We also need to make it manageable.
... I'm not so clear on self-review checklists etc, but we need to help groups get meaningful review of their specs.

The idea is that it should raise questions also.

With the relevant group, here APA.

<LisaSeemanKest_> https://w3c.github.io/coga/content-usable/#appendix1

JS: I'd rather we help other groups raise issues here rather than muddle things.

It seems we should help them build a correct UI, and then help them with specifics as they relate to COGA etc.

JS: Not asking them in very deep level of detail at this point.

MC: So yes, I was poking around the i18n checklist <Michael reads>

These are checklists but they are not easily maintained.

MK: There is an API for it.

<discussion of GH again>

MC: I'm not sure how robust this is.

They are rather detailed with many links

The question is how much focus do we want, how detailed it should be etc.

<LisaSeemanKest_> https://w3c.github.io/coga/content-usable/#appendix1

LS: I've linked to the COGA patterns.


We can move this up to our things to do, can go on checklist.

Good for self review.

There needs to be a way for the things that are not in WCAG to still be supported.

LS: User testing could also help, for SR users, low vision etc.

MC: This is for technology spec developers, your link relates to authors etc.

Some may be relevant but this is mostly relevant for spec people.

JS: What would you expect from JSON-LD?

LS: Don't know really.

JS: They are the ones who filled out the survey.

What about Immersive Web etc? We need to know what they are doing.

MC: JSON-LD is an abstract framework. We need to know what they are doing, we are being asked to produce generic user requirements.

It can be difficult to know how to provide checklist for some specs.

LS: How does this relate to WCAG?

<LisaSeemanKest_> https://w3c.github.io/coga/content-usable/#objective-adapt-and-personalize

JS: It doesn't..

LS: I've looked at these slides.

MC: If you have looked at this from FAST, it should be possible to create stuff that relates to WCAG.

JOC: So how do these FAST requirements bubble into and impact a spec? That's something I'd like to know.

LS: These questions will need to be revised from a COGA perspective.

JS: We hear you, but dont see how that analysis fits in here.
... We can come back later to this.

MC: Something that would fit in is for users to indicate personalisation preferences.

We could reasonably add that.

Some of the other things could relate to the FAST checklist.

<LisaSeemanKest_> - i was looking at the intro of the document. my mistake

So what parts of the COGA requirements could be fixed by FAST, at the spec level?

MC: Right.

JS: So that's not user testing etc.
... Asking these questions does make sense, but not diving into details.
... This is a semaphore

MC: Checklist for best practices could point to resources and what to do, outline impacts etc.

The full framework could cover these things.

The full framework is the user needs, and a breakdown, best practices etc.

<LisaSeemanKest_> fading in and out - want to hear this...

JS: Could be a lot like Silver.

MC: we are distilling a framework which we will un-distill in the full framework.

JS: <riffs on how spec review may work>
... We need something for a group, say second screen, who is writing an API, keeping devices in sync.

JOC: So these are like my approach to the XAUR, separating technical use cases from user needs and requirements.

MC: I need to think about that.

<LisaSeemanKest_> i really cannot hear well. just mumbles

So if we can capture these at a high level, then this would make the author's job easier.

MC: So I struggle with capturing it at a high level. It's an OK start.

<MichaelC> Draft FAST checklist

Ahh..

http://w3c.github.io/apa/fast/checklist


JOC: This checklist is really good.

Very useful for specs doing technical stuff, use cases, that can fix things at the spec level.

LS: Some of this from our patterns could be supported by this.

MC: If we can break them down to technology features then yes.

LS: Something that provides direct, navigable access to specific points in a media file.

MC: Right.

JS: I'd like to see a hierarchical list.

LS: Yes.

MC: One bit at a time.

LS: time based media etc.
... I need to read it.

<Zakim> Joshue, you wanted to ask if Lisa could review the checklist against the COGA patterns she is suggesting.

MC: Please do!
... It would be great if you could come up with some.

I want to identify checklist items that are missing etc, and want to identify categories that are missing.

<Michael reviews categories>

They feel a little weird but I find things I could group under them.

I'd like input on how useful they are, and what are missing, especially as we are including emerging tech.

JOC: I'll also review.

JS: Is Media XR a time based medium?

JOC: Interesting!

MC: I'd like to look at WoT also.

JOC: Content is aggregated in WoT via sensors etc.
... The stuff Lisa could feed into this would be really useful.

MC: We could do a bigger call for review.

Needs an explainer!

MC: Shall we take the checklist to the note track?

<LisaSeemanKest_> i can't hear well.

I say no to either the framework or checklist.

<LisaSeemanKest_> but i think i get it if i focus on the checklist

The Framework is on hold, and the checklist needs attention.

MC: Implications of XR and other related tech.

We need accessibility people to have a look at this.

Nell will be in tomorrow to demo how 3D is authored etc today.

MC: There was a demo I saw with 3D type captions etc that was interesting.
... Next step is to request review of what is missing from the accessibility people we know.

What it is and is not could be written up quickly - after TPAC, for two weeks say?

Could we ask?

MC: Yes.

JS: We can ask for it on the call Weds, and say we'd like feedback.

Then we can look at the i18n thing and borrow their code; either me or Josh.

MC: They have a generator - static doc, generator GH, scraping etc.

We could have a checklist by the end of the year.

JOC: Yes.

JS: I think we have a plan.

<LisaSeemanKest_> im back

Correct identification of signed and symbolic (AAC)

<MichaelC> scribe: MichaelC

Bliss symbols being referenced from Personalization spec

the question was raised whether we should be referencing Unicode

means getting the Bliss symbols into Unicode

that's apparently been explored before; unsure of the outcome

Bliss people are ok with the usage in Personalization; we want to discuss the Unicode thing with them
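The Personalization Content draft's direction at the time was attribute-based symbol references, roughly as below; the attribute name follows the draft, while the numeric ID is illustrative, not a real Bliss lookup.

    <!-- A UA or extension could swap the referenced concept for the
         user's preferred symbol set (Bliss, ARASAAC, Tawasol, ...). -->
    <span data-symbol="14885">cup of tea</span>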

they were invited to this meeting but nobody seems present

Lisa was at AAC conference

people with certain kinds of brain damage benefit from symbols

there are libraries

js: would somebody use symbols to express?

lsk: they could

js: in media work we worked on supporting multiple alternate representations of media

lsk: challenge with sign languages

ag: sign languages are regional

used to fudge a region code

ISO 639-3 has 3-letter codes that cover many sign languages

lsk: you could have both a symbol set and a language

there was need to be able to identify both spoken regional language and sign regional language

ag: sounds like two separate things to tag

js: appear to be supporting that in media formats

ag: sounds like we might need to register additional subtags

where there are modalities beyond text
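For example, ISO 639-3 sign-language codes are already valid BCP 47 primary language subtags, so signed and spoken renditions can be tagged independently ("ase" is American Sign Language; the file name is illustrative).

    <p lang="en">Welcome</p>
    <video lang="ase" src="welcome-asl.mp4" controls></video>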

lsk: symbols sometimes work for a given language

<stevelee> what is the meeting URL please? the one on the usual page says the meeting has ended

and cultural representation of symbols

<Roy> /me https://www.w3.org/2017/08/telecon-info_apa-tpac

<stevelee> i don't have that either - just a verbal that it was happening; looking

in some languages there can be symbol overlap

or other cases different symbol sets within same language based on AT use

there can be copyrights on symbol sets, which is actually copyrighting someone's language

so we're using a more neutral set

<stevelee> http://www.arasaac.org/

<LisaSeemanKest_> i can not hear

achraf: localized for Qatar

use case of an eye-tracker user using symbols to construct a phrase

cultural issues mean they can't use all symbols from other regions

need local versions

exploring whether there could be abstract ones suitable for all cultures

js: there's a demo

using Bliss IDs to translate among sets

ag: these are glyph variations, not semantic variations?

achraf: yes

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview

deaf community says sign language is its own language with grammar etc

looking at finding mappings between sign languages


ag: sign language codes not related to spoken language of region

<stevelee> please speak louder or closer to the mic - thanks

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview

<LisaSeemanKest_> https://mycult-5c18a.firebaseapp.com/

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview

<achraf> https://youtu.be/68TbCVNQ3Z8?t=25

<achraf> Library: http://madaportal.org/tawasol/en/symbols/

<demos of customizing symbol sets>

<hawkinsw> If you could point me to the source of the plugin, that would be great

<hawkinsw> I was able to see the zip file, but I was hoping that I could actually see the source code.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/09/19 09:03:25 $
