W3C

- DRAFT -

Accessible Platform Architectures Working Group Teleconference

07 Aug 2019

Attendees

Present
jasonjgw, janina, scott_h, Joshue108
Regrets
Chair
jasonjgw
Scribe
Joshue108

Contents


Document review and planning.

<scribe> scribe: Joshue108

JW: We have all had a chance to review some of Josh's drafts.
... There are a few observations and I'd like us to clarify what docs we want to produce and their audience.

So we can work out what to create.

So what are the priorities?

XR, Multimedia, RTC, not excluding WoT and others.

I support Judy's thought that we need to be focused on the accessibility of whatever apps are developed etc.

As we have reviewed XR related material, we are in a position to bring these reviews into the discussion.

So what are the challenges and opportunities etc?

I think our previous reviews and research gives us a good start.

<Zakim> Judy, you wanted to clarify my comment from last time

JB: Thanks for lead in, I want to clarify..
... My main concern was with the overall framing of the XAUR doc, and the approach.

There are a number of things we need to provide a framework for: use cases, user needs and requirements, accessibility support and interoperability.

This may not be the best way, but we need to think across the whole range of things that humans want to do in XR space and highlight these use cases - education, employment, commerce, health etc.

Want to make sure they are not primarily Rehab related.

JW: Brief comment, that Janina mentioned a degree of convergence between XR and RTC applications.

Especially where XR may have RTC-type components.

Use cases and requirements are a part of it but there are challenges with different aspects.

So how do we want to shape the deliverables?

<Zakim> janina, you wanted to suggest we want docs that can inform TPAC cross group work, and possibly become formal Notes

JS: I agree with the long term goal, and looking at the XAUR doc - I still think there is a lot of cross-fertilisation.

This is a good list but not exhaustive.

I'm worried about primitives, the building blocks that build XR.

Visual, Audio, Multiple Video, Audio and textual type comms..

There is the chrome and controls - subtitles and captions etc - where you would need Object Oriented semantics.

We could possibly do this with multi-dimensional audio - there are high-end magazines that talk about 9.2.2 sound configurations with lots of metadata to provide a sense of left and right and up and down.

So I'd like us to get a grip on these things - these are the primitives.

So who can we talk with at TPAC? Thursday and Friday especially will be a big part of it.

The XAUR and RTC use cases - we are nearly at the point for a FPWD.

This will help the longer term process.

JW: That gives us the beginning of a plan.

<jasonjgw> Josh: agrees with Judy's and Janina's observations.

<jasonjgw> Josh notes the modular structure of the XAUR draft, which can be applied to a variety of settings (education, health, and other fields in which XR is expected to be deployed).

<jasonjgw> Josh would like to achieve agreement on the structure and design of a document.

<Zakim> Judy, you wanted to mention an FYI on the scope of our potential work in XR and to also comment on the standards and architectural issues

JB: We need to be careful on the scope of XR work in this space.
... +1 to Janina and thanks Josh for agreement
... Thinking of standards coming up from other spaces, we need to look at how they can help us and where we need them. GL etc
... We need to pay attention.
... I hear about Josh's bias towards Rehab and XR, and mine is towards helping PwDs in not being excluded, a la learning about STEM etc.

That would be an enormous breakthrough.

+1 to that.

Also recreational activities etc.

Goes beyond Rehab etc.

JW: One way to approach this is by looking at the properties that devices have, a la the WebXR API.

The use of 3D graphics, and movement responsiveness etc and alternate inputs.

Would one way to approach this be to understand and explain what makes XR interesting and unique?

+1 to Jason, sounds great.

Same with RTC; we could try to help people understand we are talking about peer to peer - there are audio, text, multiparty aspects, shared devices and displays.

Questions of approach..

SH: Sorry to backtrack, a few chats back, in terms of some info from use cases, for Josh's benefit. I think the structure overall is good.

Everything that is addressed seems to be covered, but there is a question around which domains should be added.

Controllers etc need to be added. The current draft doesn't go into depth, would like more detail there.

The doc could be opened up, and the definitions and the how and why of this need expansion.

+1 to Scott - very useful feedback.

Will add something on controllers.

To Judy's point we do need more discussion.

<jasonjgw> Josh: notes that Scott's observations are valuable and will be taken into account in further development.

<jasonjgw> Josh agrees that TPAC offers an excellent opportunity for dialogue. We need clarification from those actively involved in these areas (e.g., role of WebGL, WebGPU, Web Assembly, object-oriented authoring/rendering environments, opportunities for broader applications of semantic annotations than accessibility).

<Zakim> Joshue108_, you wanted to plus one the mainstreaming of semantics and OO approaches being important.

JW: Going back to some of the concrete suggestions..

I heard from Janina that there may be a chance to formalise the RTC doc as it stands?

JS: Yes.

JW: There is a proposal to refine the scope and nature of the XR doc, in tandem with our existing review work, and improved definitions etc.

As we continue to work on requirements?

Is there agreement?

Anything I've missed?

<jasonjgw> Josh: agrees with the priorities articulated by Jason.

<jasonjgw> Josh is concerned not to replicate work undertaken elsewhere. Josh will work on the XAUR document, bringing it to a call next week to discuss this and the literature review.

<jasonjgw> This could lead to editorial tasks to make it fit for review.

JB: Sounds good.

JW: For the RTC Janina you would like to see that becoming formalised.

Are there APA steps we need to take?

JS: I like the cross referencing Josh, useful.

We can edit as needed and share this with WebRTC group.

Will there be aspects that create issues for them in future versions?

Thats a la WebRTC.

In the XAUR there are some gating concerns that help us scope what we can expect in our review etc.

There are considerations of settings, and getting XR experiences off different devices..

So there are limits.

To what extent is access to the biological experiment etc dependent on goggles?

JS: I meant we won't get 3D audio or 3D video off a smartphone.

JB: I will beg to differ.

While not true 3D there are some news channels running a 360 approach type thing.

JS: Whats that?

<gives overview of perception>

Differences of experience, convergence - 3D objects following established lines of perspective etc.

Timed Text Charter are using this for caption space.

360 is a panorama type thing but is not exactly immersive.

JS: So like a low vision experience where you move a frame?

JB: Not exactly, is this useful?

JS: Think so, helps to frame discussion.

USA Today has news vids that they call VR, though they are not really, but you can see this on mobile.

Diff from regular vids.

For a user with binaural hearing and access to headphones, I think we can simulate 3D audio.

JS: We do need to understand this all better

JW: Agree with that, so knowing about device characteristics etc does help with what we need to say about XR.

JS: We need to discuss and be in agreement around terms.

+1 to agreement of terms.

<jasonjgw> Josh notes known issues of disorientation in immersive environments for some users.

So in discussion of these we will hit a lot of useful accessibility user needs etc.

<discussion on XR frog biology>

JW: Its a good example, anatomical detail etc.

We will find more use of that in VR type settings.

Surgery is also one of the applications, similar to the example Judy was giving.

Looking at how this can be done for general audiences - Judy is right on the role of semantics and descriptions here.

There is work being done on queries, a la natural language dialogues.

Convergence of things that we need to think about.

JW: I agree regarding the limits of human vision, and the bucket problem is being overcome by computing speed and hardware.

<jasonjgw> Josh notes that educational use cases make perspicuous the need for rich and well structured semantics in annotations/descriptions.

JB: Mentions visual object rendering in 3D.. appearances and objects can be generated from databases - or by painting surfaces, or annotating descriptions.

Gregg is saying this visual rendering will eventually be able to be generated from the photosphere.

RECAPTCHA has taken this kind of approach.

JW: That takes us to challenges articulated in the literature.

It can be overwhelming, so e.g. how can you tell what is known to the user and how they can interact?

Lets do what Josh was suggesting about reviewing the past literature review as well as XAUR draft.

Can we do this next week?

JS: Works for me

JB: +1

What about the RTC stuff?

JS: All of us who have voted on CAPTCHA have done so.

Michael will have a look at this on Monday.

Regarding RTC, do we need to talk with them?

JS: What clarity are we missing?

JW: Regarding the formal communication, how do we want to do it?

JS: Lets discuss that in APA and here..

Getting quorum can be hard.

JB: APA is relevant to publication.
... We should co-ord with Shawn.

JS: CAPTCHA looking good.

JB: Updated blog on the way

<Zakim> Joshue108_, you wanted to say I'll be talking with Dom on Monday about the Accessible RTC doc

Agenda for next week..

JOC: I'll be talking with Dom

JS: Ask Dom if they will find our user doc useful

JOC: Will do.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/08/07 14:02:49 $
