W3C

- DRAFT -

Immersive Web Working Group face-to-face meeting

17 Sep 2019

Agenda

Attendees

Present
cwilso, joshmarinacci, ada, trevor, atsushi, cabanier, samdrazin, alexturn, trevorfsmith, bajones, LocMDao, mounir, Manishearth, dom, Diane_Hosfelt_(remote), Laszlo_Gombos, plamb_mozilla, alice_boxhall, dan_appelquist, Sangwhan, kip, flaki, jeff_
Regrets
Chair
ChrisWilson, AdaRoseCannon
Scribe
cabanier, cwilso, dino, alexturn, Manishearth, Locmdao, nellwaliczek, kip, plamb_mozilla

Contents


<ada> https://github.com/immersive-web/administrivia/tree/master/TPAC-2019

<cabanier> scribenick: cabanier

<NellWaliczek> https://www.irccloud.com/irc/w3.org/channel/immersive-web

<NellWaliczek> oops

ada: we posted the agenda but it might change

<NellWaliczek> https://github.com/immersive-web/administrivia/tree/master/TPAC-2019

cwilso: ada or I will resolve issues with the code of conduct, or talk to dom if you want to talk to the W3C

ada: (going over the agenda)

<Manishearth> https://github.com/immersive-web/webxr-input-profiles

<Manishearth> https://github.com/immersive-web/webxr-input-profiles/tree/master/packages/registry

<Zakim> klausw, you wanted to say do we want a TPAC channel on webvr slack? and to say who's the audience for demos today vs demos on Wed?

NellWaliczek: please take a look and we will go over it tomorrow

ada: on the schedule for Wednesday, I marked some sessions that you might want to attend
... there are a lot of sessions on immersive web

Overview of recent changes/current status

bajones: this should be a short topic and I will list the changes since the oregon face-to-face
... the biggest thing is that we're trying to scale back our deliverable
... we're about to cut the working draft by the end of the week
... we are VR complete so it should be useful as a replacement for WebVR
... there were no large issues raised
... we had a few commits
... removing redundant attributes
... we removed xr layer type so we only support webgl
... we removed xr presentation context which made it easier to understand
... we removed blur events and replaced it with a change event
... we fixed issues with typed arrays
... we added a bunch of security and privacy text: mitigations, etc
... we developed a system for input profiles
... (going over what input profiles do)
... we got stricter on gamepad mappings
... we added optional and required features during session creation
... and then a whole slew of little changes
... so, there were a lot of commits but not really a change in how the API was intended to be used
... the big thing we did was our module split
... we have the core webxr spec which has everything for a VR application
... and we moved AR and gamepad out into different modules
... the modules let us iterate on that piece of the spec
... it serves as a way to speed up development
... and now Manishearth is helping out as an editor on the AR spec
... a large part has been laying the framework for how the modules will interact
... the gamepad module should ship alongside WebXR core
... the AR module should follow shortly after
... outside the spec, there was a lot of progress on the webxr polyfill
... there's still work to do on updating the tests
... the samples page will run with the polyfill
... for instance, the oculus quest that only does webvr runs the webxr samples
... the input profiles library has multiple packages and will hopefully help authors determine which input device the Gamepad API data came from
... showing the controller that user had was an unsolved problem
... and we got a new LOGO! and STICKERS!
... there's also an animated version

<Jared> https://toji.github.io/webxr-logo/

bajones: that's all. Hopefully we can flesh out the remainder over the next couple of days

<Zakim> klausw, you wanted to say implementations and polyfills should enforce feature restrictions to avoid compat issues, i.e. local-floor use

klausw: webxr implementations or polyfills might not enforce the restriction
... if you want to use local-floor, you have to ask it during requestsession
... this is new behavior and new implementations should be aware of it

NellWaliczek: we should update the tests to enforce compliance

bajones: the polyfill currently doesn't do any validation. That is one thing that's on the issue list
... that is indeed a breaking change
... but we all knew it was coming :-)
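A minimal sketch of the breaking change discussed above: 'local-floor' must be requested at requestSession() time or later use of it should fail. The mock object below stands in for navigator.xr purely for illustration; it is not the polyfill's actual implementation.

```javascript
// Mock of the validation behavior klausw describes: reference spaces
// beyond 'viewer'/'local' are only usable if requested at session creation.
const mockXr = {
  requestSession(mode, init = {}) {
    const granted = new Set([
      ...(init.requiredFeatures || []),
      ...(init.optionalFeatures || []),
    ]);
    return Promise.resolve({
      requestReferenceSpace(type) {
        // 'viewer' and 'local' are implied for immersive sessions;
        // everything else must have been asked for up front.
        if (type !== 'viewer' && type !== 'local' && !granted.has(type)) {
          return Promise.reject(new Error('NotSupportedError'));
        }
        return Promise.resolve({ type });
      },
    });
  },
};

async function demo() {
  // Asking for 'local-floor' during requestSession: works.
  const s1 = await mockXr.requestSession('immersive-vr', {
    requiredFeatures: ['local-floor'],
  });
  const space = await s1.requestReferenceSpace('local-floor');

  // Forgetting to ask: the later requestReferenceSpace call rejects.
  const s2 = await mockXr.requestSession('immersive-vr');
  const failed = await s2.requestReferenceSpace('local-floor')
    .then(() => false, () => true);
  return { spaceType: space.type, failed };
}
```

Pages written against older behavior, which never declared features, are exactly the ones this validation would break.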

LocMDao: what is the status of audio? Should that become its own module?

cwilso: we certainly could
... the web audio API will likely need some extra pieces
... today the frameworks solve the issue for you and nothing's needed from us
... I think we'll have extra modules, e.g. 360 video mapping

bajones: in general every time we interface with another API, we will probably want to do it as another module
... so people that are experts can work on that particular problem
... we don't want to revise the core spec when another API updates

<ada> ack

ada: working on polyfill and tests, issues and PRs are welcome
... and we want to get people interested and get more feedback

bajones: if there's something in the repo, we won't berate you for putting up a PR

<joemedley> Where's the lightning talk sign-up?

bajones: we might say to hold off but we will never get upset if someone tries to contribute

avadacatavra: talking about integrating with other APIs
... made me think of permissions
... do we integrate with that specification?

bajones: that is a good open question
... NellWaliczek has done some exploration in the past
... to give us a feel what could be done in this space
... so maybe this should go in the core spec

<Zakim> kip, you wanted to say that WebXR could provide information about the user's environment (AR Centric) useful for occlusion and spatial positioning in a way that doesn't leak

kip: previously the thought was that audio didn't need deeper integration
... one reason might be that webxr could provide information to webaudio
... so you can do reverberation and occlusion without giving the author access to that private information

cwilso: there is some function that we could provide to give the headpose to webaudio directly
... we could hook it up in a different way

dino: you need to have permission because you will always have permission

<cwilso> scribenick: cwilso

<dino> (was going to say the same thing as cwilso)

rik: for the permissions API, we hooked up in the browser
... although that's divergent from current spec.

nell: please file an issue, since this would be nonconformant.

<cwilso> scribenick: cabanier

<bajones> https://docs.google.com/document/d/1RZTL69JsTxoJUyXNnu_2v0PPILqrDpYW3ZxDjMAqQ-M/edit#heading=h.qlpukl2oy1tq

Feature Policy Discussion

<Manishearth> https://github.com/immersive-web/webxr/issues?q=is%3Aopen+is%3Aissue+label%3A%22feature+policy%22

bajones: we have a feature that allows embedding in other pages, etc
... you can say features can't be used in iframes, popups, etc
... because we are a powerful API, we want to integrate with the feature policy
... up to this point, we stated that if the policy was blocked, the XR object would be gone from the navigator object
... there is precedent in other specs, but we got feedback that this wasn't a recommended pattern
... what was recommended was that we always expose it but, if it is not available, reject everything
... requestSession would always reject
... we want a pattern going forward so that feature policy is handled in a consistent way
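The two patterns under discussion can be sketched as follows. This is a mock, illustrative only; the entry-point names follow the WebXR draft, and the exact error type a real browser throws is an assumption.

```javascript
// Recommended pattern: navigator.xr stays exposed even when the "xr"
// feature policy is disallowed; the entry points just fail.
function makeXr(policyAllowed) {
  return {
    isSessionSupported(mode) {
      // With the policy off, support checks simply report false...
      return Promise.resolve(policyAllowed && mode === 'immersive-vr');
    },
    requestSession(mode) {
      // ...and session requests reject, rather than the xr object
      // disappearing from navigator entirely (the discouraged pattern).
      if (!policyAllowed) {
        return Promise.reject(new Error('SecurityError'));
      }
      return Promise.resolve({ mode });
    },
  };
}
```

The advantage of this shape is that feature detection ("does this browser know about WebXR?") stays separate from policy ("is WebXR allowed in this frame?").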

<ada> joemedley: is this a pressing question?

mounir: who implements this today?

bajones: google didn't do this. Did anyone do this?
... it isn't widely implemented

<Zakim> kip, you wanted to say that we didn't implement in Gecko and are hoping to see this proposal

kip: I'm in support of not hiding the xr object and failing the individual requests

<joemedley> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Feature-Policy#Browser_compatibility

alexturn: I don't know

dino: I don't know either

(wrt how XR is handled)

<joemedley> If you need help getting this table updated, see me on break.

bajones: should we do a straw poll?
... hopefully we will resolve this quite soon

<cwilso> scribenick:cwilso

rik: inline sessions?

<cabanier> scribenick: cabanier

<cwilso> bajones: intent is feature policy would block inline sessions as well

bajones: yes
... the polyfill can do a good job to fill in the gap

NellWaliczek: we have inline sessions with spatial tracking

bajones: it is possible to break out for sensor access for individual API

cwilso: I would suggest that people should comment on the pull request

avadacatavra: are we saying that the proposal will break out different sensors?

NellWaliczek: requestSession and sessionSupported
... are the entry points

ada: let's do a poll

<ada> sorry 'bout that

bajones: no, we'll wait for comments on the PR
... there's some more to go through.
... we do want the polyfill to step in and fill the gaps
... and the fallback is that you have no sensor data
... for mouse style interactions
... the polyfill will fill the gap between feature policy and what developers want to do
... when should new features policies be added?
... the ar module shouldn't require its own feature policy
... the data that it exposes to the page is not any different
... in general we want to keep things simple for developers
... we don't want the developers to have to turn on a bunch of policies

mounir: we're switching to xr for the policy, should we break it into VR or AR?

bajones: we are proposing that this is too granular
... it's not an invalid point
... there's a difference between requesting AR and turning on the camera
... we don't want people to turn it off out of fear of camera data

NellWaliczek: there were issues filed on this topic
... there is no distinction between VR and AR
... the way we treat AR is just a hint to say that the real world is visible
... the more effective approach is to do this in modules that require this

mounir: (??)

bajones: what would be the benefit for implementors to break this up?

<dom> [should this be "spatial-tracking" then rather than "xr"?]

mounir: we are making a bet that AR and VR have the same security
... but this part of the spec isn't done
... so maybe it's best not to make that assumption

NellWaliczek: that's interesting

<kip> Bug 1419190

<dino> scribenick: dino

avadacatavra: My understanding is that if XR is denied by feature policy, the polyfill will be the fallback that renders something. It seems like we'd want consistency with feature policy.
... how does this situation interact with user expectations?
... when the user says NO, it breaks their expectations if something appears on the page

bajones: In that last case, that would be akin to having something show up about notifications, the user rejecting it, and having a notice that notifications are on. is that right?

avadacatavra: similar to the question of conveying to the user that the browser is the one looking at the physical world, not the page. Just something to keep in mind.
... We need to think about user expectations.

bajones: overview of user consent in the API: it works similarly to the fullscreen API. You can check beforehand to see if fullscreen is available, then put a button on the page. No prompt.
... so in the XR case, it's not asking the user, it's asking the browser. The page would show the button if the browser said no.
... then, once the user activates XR (e.g. press the button), the browser will prompt the user by explaining what will happen.
... if the user says no, then it isn't an outright ban on the API, just on that activation
... this last bit is about consent, not permission. it can be transient.
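The flow bajones outlines can be sketched as below. DOM calls assume a browser context; the mock-friendly signature (passing `xr` and `button` in) is for illustration, not the spec's API.

```javascript
// Fullscreen-like consent flow: check support up front (no prompt),
// show a button, and only involve the user on activation.
async function setUpEnterXrButton(xr, button) {
  // 1. Ask the *browser* (not the user) whether XR is available.
  const supported = await xr.isSessionSupported('immersive-vr');
  button.hidden = !supported; // hide the button if the browser says no

  button.addEventListener('click', async () => {
    try {
      // 2. On user activation, the browser may prompt and explain
      //    what entering an immersive session will mean.
      const session = await xr.requestSession('immersive-vr');
      // ... start the render loop with `session` ...
    } catch (e) {
      // 3. A "no" here is transient consent denial for this activation,
      //    not an outright ban on the API.
    }
  });
}
```

Note that, per NellWaliczek's point, the page cannot distinguish "user denied" from "hardware not supported" through this path.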

NellWaliczek: the spec text is very intentional about defining requirements for consent. and when to strongly consider prompting the user.

<joemedley> If I remember correctly, there is a sample that demonstrates this.

NellWaliczek: from the end user perspective, there isn't going to be any difference. The request session call won't know between user denied, hardware not supported, browser not supported.

<ada> dino: I was going to pick up on something mounir said, regarding permissions for AR: how do we expect AR to work if we do not have permission to use the camera for the page?

<ada> dino: If you do AR you need to do scene detection, which is similar to camera access

<ada> NellWaliczek: that's not true, i'll send you a link

bajones: the UA is given a lot of latitude

<ada> dino: from Apple's perspective it is the same

<NellWaliczek> https://immersive-web.github.io/webxr-ar-module/#xr-compositor-behaviors

bajones: for page camera access, that's going to be discussed separately

<bajones> https://docs.google.com/document/d/1RZTL69JsTxoJUyXNnu_2v0PPILqrDpYW3ZxDjMAqQ-M/edit#

<Zakim> alexturn, you wanted to talk about the ar-module being additive to core WebXR - presumably any AR privacy concerns are also incremental

alexturn: the point was made about the AR module, and that we already have a split. Should we separate?
... there are core privacy things about XR, and then extra things for AR. I hope that it is all incremental (in that AR exposes things that need more impact).
... by being incremental, it would be easy to think of it as an addition to the core XR requirements

bajones: i think we should continue reading through the document

<Zakim> dom, you wanted to ask about spatial tracking vs xr

dom: The commonality of AR and VR is spatial tracking? (YES). So should this be in the feature policy?

bajones: good point. worth noting that you can't do much with XR if you don't have spatial tracking.

dom: right, so spatial tracking should be the feature.

bajones: we want it to be clear that the feature policy is blocking people from creating XR sessions. We wouldn't want a feature policy for spatial tracking that blocks XR.

NellWaliczek: I am slightly intrigued because it would be nice to pair the feature with the feature policy being requested.

bajones: I agree. if we had "spatial tracking" rather than "XR" it would be inappropriate for it to block XR altogether
... you could technically say yes to XR, but without spatial tracking it isn't very useful

NellWaliczek: (some discussion about local v local-floor)

<NellWaliczek> local is a requirement for all immersive sessions

bajones: more questions about exposing immersive AR that has no tracking but does have camera?

NellWaliczek: no, that's required as part of immersive

cabanier: I like the idea of a feature policy for "immersive AR" rather than just a blank "XR". And it would tie to the permissions API

NellWaliczek: I don't know if we're talking about a dedicated permission for going immersive

cabanier: this proposal has nothing about inline sessions

bajones: To move through the rest of this document... several people have asked for an AR feature policy. We don't think that it is AR that is the privacy sensitive part.
... there is a way to expose AR to the user, without the page getting the data.
... this is also a reason for why we want immersive AR to be full screen. It is a tracking data sensitive permission.
... camera data has different requirements. there is already a feature policy that covers those implications.
... we probably don't need to introduce a new one
... i.e. if you want to give camera access to the page then you don't need to ask separately for AR
... other examples - ambient light sensor. if that feature policy is turned off, then it shouldn't be exposed to the page
... we probably want to still expose it

<dom> [I wonder if this argues for considering gyroscope / magnetometer / accelerometer features as equivalent to spatial-tracking]

bajones: there is some data that can come from camera-like sensors, that isn't strictly the same as a camera (much less resolution).

dino: How does one provide an acceptable AR experience without camera access?

NellWaliczek: refer to the document above - it explains how to blend.

bajones: you're asking how to do render something if you don't know where the floor is?

dino: yes

bajones: they feel there is value in having AR content that doesn't have a reference coordinate system e.g. walking around a solar system

NellWaliczek: some of our customers have asked just to put something in AR.

<Zakim> NellWaliczek, you wanted to talk about other xr related implications

NellWaliczek: there are also some thoughts on exposing a reference floor space that would just be exposed in the reference coordinate system. The issue is that you still want some kind of user interaction to detect the floor.
... When we get into spatial tracking, a declarative API might not need the tracking.
... there is a lot of detail in the privacy explainer about extra concerns exposed in spatial tracking

Riju @intel: (ambient light sensor author) We are planning to ship the ambient light sensor sometime soon. There are some security concerns that have been addressed, e.g. lower frequency updates and larger steps in the ambient light results (e.g. 50 lux vs 1 lux)

bajones: refer to "Module Interaction with non-XR features" part of the document

<NellWaliczek> FYI, the current xr-related lighting explorations https://github.com/immersive-web/lighting-estimation

bajones: "with similar mitigations if necessary"
... it would be useful for us to piggy-back on the work of ambient lighting when exposing that

<Zakim> alexturn, you wanted to talk about even with a declarative scene graph, non-trivial apps likely still let the app scripts reason about user position relative to the UA-rendered

alexturn: re. Declarative Scene Graphs - i think there are some useful applications there, but you would quickly get to the point where the app/page does want to do something (e.g. highlight when the user is looking at something)

<ada> dino: Apple exposes basically that you drop in a USDZ file

bajones: we've discussed this before, and i agree. but i think there will be people who just want the simple thing

<ada> dino: it is enough and extremely useful


cabanier: Leap does something similar.

joemedley: wanted to make sure i understand the relationship between data and camera. If I have a rig, I might be getting information on surfaces in the room, but I don't have the actual camera pixels. Right?

bajones: you're talking about a few things here. There are different levels.

1. Headset on, drawing something on top of the camera feed (or a transparent headset). Page has no access to the pixels.

2. Above that, real world understanding - exposing data on the environment. We want to get there as soon as we can.

3. Above that, actual camera access - at this point the developer can read the pixels.

NellWaliczek: there is also some specifics about hit testing. not quite defined yet.

Riju: I have a lot of data that does computational analysis on camera data. We are also planning on exposing the magnetometer soon. What are your expectations here?

bajones: the original version of Google Cardboard had a metal ring to pull on in order to press a button. This was done via detecting magnetometer changes. The result was pretty terrible. So we now have a lever to press on the screen instead.

Riju: we didn't expose the raw uncalibrated magnetometer data

<Zakim> alexturn, you wanted to talk about extra API you need for magnetometer with XR

alexturn: Heading-aligned coordinate space - we quickly realised that it was confusing. Unclear if it was user orientation or application orientation. A compass heading only made sense once you were in a reference coordinate system.

NellWaliczek: that repository hasn't been touched for a while.

ada: We've gone over time - and missed the Trusted Immersive topic completely

bajones: this session was very helpful. thanks everyone.

----- break for lunch ----

<NellWaliczek> agenda says 1pm

<ada> on my way!!!!!!!!!!!!!!!!

<joemedley> join #tpac

<ada> https://github.com/immersive-web/administrivia/tree/master/TPAC-2019

<dom> ScribeNick: alexturn

Meeting with the TAG

bajones: We had discussion on the last call about supportsSession()
... Feedback was that, since we have a true/false result, we should just return that rather than requiring catch
... Strawpoll per-person results were 50/50, but per-org results showed Google as the only org against the switch
... So far, folks have generally been positive

<aboxhall> https://github.com/w3ctag/design-principles/issues/137

<dino> - can someone link to the TAG feedback that we are discussing?

<aboxhall> https://www.w3.org/2001/tag/doc/promises-guide/#rejections-should-be-exceptional

aboxhall: General guidance is that rejections should be only for exceptional cases

bajones: What about feature policy? If the feature is off, should it be an exception or return false?

NellWaliczek: Seems like a case for a "judgment call"

aboxhall: Doesn't seem exceptional to me, but maybe I don't understand.

NellWaliczek: Is there any precedent here?

bajones: We had discussed earlier today and believe that we shouldn't otherwise hide the feature

dino: Examples talk about user contact access and say you should reject if the feature policy is off

aboxhall: Request vs. Supported are different here

<cwilso> (this is all WRT the TAG's WebXR Device API review - https://github.com/w3ctag/design-reviews/issues/403)

aboxhall: Request doesn't work as expected, and so that's exceptional, whereas Supported is still returning true/false as expected

bajones: Unless the browser has melted, it seems we can still just return true/false

aboxhall: Yep - that's what's expected

joemedley: False can mean multiple things it seems - what does it mean that we can document?

bajones: False means that calling request will not give me what I want
... This is really for someone who wants to put a button on their page for XR support
... Not doing heavy validation - just can we support the feature

NellWaliczek: Support call also can't spin up any platform resources
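The two API shapes debated with the TAG can be contrasted as below. The function names and mode strings are illustrative stand-ins (the actual rename was still being decided at this point); `supportedModes` is a hypothetical parameter so the sketch is self-contained.

```javascript
// Old shape: resolves on support, rejects otherwise - so callers must
// use catch() for the ordinary "not supported" case.
function supportsSessionOld(supportedModes, mode) {
  return supportedModes.includes(mode)
    ? Promise.resolve()
    : Promise.reject(new Error('NotSupportedError'));
}

// New shape, per TAG guidance: rejection is reserved for exceptional
// cases, so lack of support is just a `false` result.
function isSessionSupportedNew(supportedModes, mode) {
  return Promise.resolve(supportedModes.includes(mode));
}
```

Renaming the function at the same time as changing its return shape is what lets existing origin-trial content fail loudly instead of silently misbehaving.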

joemedley: Point is not to express a point pro/con here - just need to know what false can mean here

mounir: This change is extremely incompatible - every single page has to make this call
... While we agree this should be a boolean, this would break existing pages

<joemedley> Correct me if I'm wrong. Don't we have a bit of a problem with devs just plowing ahead with WebXR?

<trevorfsmith> joemedley, the problem is less that they plow ahead and more the unwillingness to break their work during an origin trial.

dan: Doesn't this defeat the purpose of having an origin trial if we can't take feedback?

mounir: We have apps who are already deploying and are ready to go live that have followed along

dan: That is fair - at the same time, we see a lot of APIs that come through that already have origin trial content and then have issues integrating feedback

dom: In WebRTC, we ended up going with similar arguments to avoid change and now regret it
... Even if things seems trivial within one API, when devs use multiple web APIs, it quickly becomes difficult to keep straight which parts of the web work which way

bajones: One key note in the proposal is that we are proposing a rename of the function at the same time, to ensure that devs know to update - can also do slow deprecation of old API

mounir: Was surprised to see advancement from Tuesday call to Thursday vote

cwilso: There will be far more WebVR content that breaks than content using this intermediate version of WebXR - we shouldn't get too attached

NellWaliczek: For enum values, there are two cases where there are enum values we're extending in a module
... We are extending the session mode to add "immersive-ar" next to "immersive-vr"
... What does that mean to be able to extend an enum value in the AR module?
... We also have the Gamepads module, which extends the mapping enum in the Gamepads spec to also include "xr-standard"

<dom> Allow partial enums #184 in WebIDL

dom: We've had this problem of extensible enums in WebRTC as well
... We've handled it there so far by downgrading the enum to a DOMString where we put additional requirements in the spec
... WebIDL editors have not pushed back to say that extensible enums won't work at all
... We could get in touch with them here to see if this could happen given resources

NellWaliczek: Base enum would need to change to be a DOMString - would need to coordinate with Gamepad

bajones: For context, the only reason it became an enum is because this group asked for it to become an enum

NellWaliczek: This string is only an output enum and so validation won't actually occur
... May not be observable then if it's really a string
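NellWaliczek's observability point can be demonstrated directly: since the value only flows out to script, a page checks it the same way whether the IDL type is an enum or a DOMString constrained by spec prose. A trivial sketch, with hypothetical gamepad objects:

```javascript
// Reading the Gamepad `mapping` value from script looks identical
// whether it is an IDL enum or a DOMString - the enum is only enforced
// on values the browser produces, never on values the page supplies.
function isXrStandardGamepad(gamepad) {
  return gamepad.mapping === 'xr-standard';
}
```

This is why downgrading the base enum to a DOMString (the WebRTC approach dom mentions) is not observable from JS.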

mounir: Something Permission API did is to have a registry of the valid names
... Here because there's one value and we have Gamepad API spec editor here, can we just add "xr-standard" to it?
... We have dependency on Gamepad spec - if we have to add this to Gamepad spec, which is not CR, are we stuck?

dino: I strongly suggest we just fix WebIDL here to support partial enums
... Lots of benefit to implementations to have a proper enum

bajones: Part of this is that we don't control the thing we intend to extend

NellWaliczek: Not sure how long it takes to get change from WebIDL

dom: On dependency, there is a process question and a real-world question
... Normally, when you go to CR, you have to show your normative dependencies have similar status
... Can be OK if a narrow dependency has not reached that yet
... Gamepad is not a narrow dependency
... To go beyond CR, we need to find a way to put resources behind Gamepad

sangwhan: Is there a reason that Gamepad has not hit CR?

bajones: Basically just lack of motivation
... New editor now - not sure how engaged they are

cwilso: Planning to connect with Steve when he's here

alexturn: How hard is it to change WebIDL if a PR is made?

NellWaliczek: We have WebIDL folks we'll chat with this week - we'll know more then
... Last issue has to do with feature policy

bajones: Could make a decision either way and be comfortable - want to avoid a bad precedent for the rest of the web
... Have a core API for spatial tracking ability that depends on various system sensors
... Many times these are the same sensors that populate existing sensor APIs
... Other times, these are other sensors, e.g. the headset peripheral's own sensors
... The data we surface is manipulated in such a way as to be appropriate for a head pose, etc. to incorporate neck motion
... Fundamentally, still giving back something that could be reverse-engineered into what you get from other sensor APIs

NellWaliczek: On phones especially

bajones: When a Feature Policy has blocked the generic sensor API, should that also block our usage of those APIs?
... If so, should we block it universally, even for devices that would not expose devices that way?
... If not, does that cause bifurcation in which devices will support a given page?

NellWaliczek: We're concerned developers may not realize that disabling the other feature policy would impact this API, especially on some devices

bajones: We also don't want our API to turn into the data source for a polyfill that works around Feature Policy

dan: The thing that this brings to mind is what came up with WebRTC about private IP address exposure
... As many people found, this was being widely misused by ad tracking networks as fingerprinting data
... If you allow that kind of data to be exposed when the expectation is that it's not exposed, it'll be used by bad actors
... Will be on the front page of the New York Times
... Including ads that appear on the front page of the NYT

bajones: The only time sensor data would be exposed by our API is behind user consent of some sort
... That's a fairly fundamental difference
... Doesn't completely mitigate the bad actor situation

<Zakim> dom, you wanted to make distinction between allow/disallow in feature-policy

dom: Feature policy is about allowing features, not disallowing them
... And so if you enable "xr"/"spatial-tracking", it doesn't shock me that device sensors aren't allowed
... What about reverse question - if you allow "xr", should that implicitly allow the sensor data?
... Since it's a developer decision to allow "xr", it's not a shocking situation that you can get to sensors

NellWaliczek: For top-level doc, allow is on for self

dom: Can you disallow for self?

mounir: Yes, you can

sangwhan: How often do you actually know the provenance of the data coming in?

bajones: Right now in Chrome, we know very well where the data is coming from
... Over time, we'll be using intermediate APIs like OpenXR that cover a wide variety of backend choices

dan: What I said before wasn't intended to be an answer to the question - just some context why we're concerned

bajones: Don't expect you to have a concrete answer now - wanted to allow for Q&A

NellWaliczek: We can also send you links to the specific issue for you to look at

sangwhan: Assuming the same kind of constraints exist with permissions too

mounir: Wanted to ensure XR would not be used as polyfill for other APIs
... Could you infer the whole API from XR?

bajones: Yes - you could get something that looks/acts like DeviceOrientation from the frame loop for WebXR
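bajones' point can be made concrete: the per-frame pose orientation is enough to reconstruct DeviceOrientation-style angles. A sketch; the Euler convention chosen here is one common choice, not anything the spec mandates, and the quaternion field order (x, y, z, w) matches XRRigidTransform.orientation.

```javascript
// Convert an XR pose orientation quaternion into DeviceOrientation-like
// angles in degrees. Illustrates that XR data could polyfill that API.
function quaternionToEulerDegrees({ x, y, z, w }) {
  const rad = 180 / Math.PI;
  // Clamp guards against floating-point drift pushing asin out of range.
  const pitch = Math.asin(Math.max(-1, Math.min(1, 2 * (w * x - y * z)))) * rad;
  const yaw = Math.atan2(2 * (w * y + x * z), 1 - 2 * (x * x + y * y)) * rad;
  const roll = Math.atan2(2 * (w * z + x * y), 1 - 2 * (x * x + z * z)) * rad;
  return { alpha: yaw, beta: pitch, gamma: roll };
}
```

Running this on every XRViewerPose in the frame loop yields a head-orientation stream, which is exactly why the feature-policy interaction above matters.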

aboxhall: Just wanted to come back to the accessibility question
... Brandon didn't get a chance to address all explicitly
... Goal section only notes displaying imagery
... Can also have UI surfaces beyond visual or an assistive application that would drive different designs
... Other APIs could use WebXR to aid across other modalities

NellWaliczek: We do have time this week to chat with accessibility folks about WebXR

bajones: My understanding of the question: Are you able to use our API to drive the Audio API and Haptics API
... Sample lets you move speakers to position them around your head
... Could do same thing for haptics
... Really just exposing position/orientation in space and so we can feed other APIs with that

NellWaliczek: Gamepad API does keep set of XR and non-XR gamepads separate

aboxhall: A-Frame and three.js already integrate with WebXR to spatialize audio around the user
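The integration pattern those libraries use can be sketched as copying the viewer transform onto the Web Audio listener each frame. The `listener` argument is any object with AudioListener-style AudioParam fields (positionX, forwardX, ...); a plain mock works for illustration.

```javascript
// Feed an XR head pose into Web Audio by updating the listener's
// position and facing direction from the viewer transform.
function applyPoseToListener(listener, pose) {
  const { position, orientation } = pose.transform;
  listener.positionX.value = position.x;
  listener.positionY.value = position.y;
  listener.positionZ.value = position.z;
  // Rotate the default forward vector (0, 0, -1) by the orientation
  // quaternion: the result is minus the rotation matrix's third column.
  const { x, y, z, w } = orientation;
  listener.forwardX.value = -(2 * (x * z + w * y));
  listener.forwardY.value = -(2 * (y * z - w * x));
  listener.forwardZ.value = -(1 - 2 * (x * x + y * y));
}
```

Calling this once per XR frame, before the PannerNodes render, spatializes audio around the user without the page needing any extra sensor permissions.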

<kip> (Maybe could discuss asynchronously. Too much to discuss with TAG members)

aboxhall: Only other point from the review is magnification - for DOM content you get magnification for free
... Would be nice to see similar mechanism here

bajones: With the way the API works, the user agent provides a pose and the page is expected to render the scene from that pose without question
... The UA has a lot of leeway in terms of providing that data
... While I question how we'd spec it, there's ample opportunity for the UA to step in and do a magnification step around that pose

cwilso: Would that be a good question to cover with the accessibility group

dan: Accessibility issues shouldn't be relegated to some side document - should be covered in main document

aboxhall: Agreed - even if other document dives deep, main doc should point to it

cwilso: Schedule: CHANGE PLACES

<Manishearth> scribenick: Manishearth

Aligning across specs and standards [Nell]

NellWaliczek: brought this up in the TAG discussion a bit wrt enums
... cabanier, you have filed some issues on CSS related to XR that you might want to recap?
... there's the environmentblendmode one (may want to defer that until after the composition discussion)
... also 3D favicons and manifests?

cabanier: so the environmentblendmode in CSS is different from what we have
... it's more about how the browser blends with the environment


https://github.com/w3c/csswg-drafts/pull/2719

NellWaliczek: what's the subtractive mode?

kip: in cases like magic mirrors using LCD screens, the light from the backlight is *removed* by the lcd panel
... unlike an AR headset where the colors add up

NellWaliczek: difference between a see through headset?

alexturn: with the magic mirror LCD stuff the signal makes it darker

multiple: *general noises of understanding*

NellWaliczek: do we need to be adding the alpha-blend equivalent to the CSS spec?

cabanier: you can't see through your phone though

alexturn: what about a video passthrough phone?

brandon: we probably don't need to address subtractive in our spec, since it's unlikely someone will do AR on a magic mirror

cwilso: someone might build it though

<kip> https://www.globalsources.com/gsol/I/Transparent-display/p/sm/1150590843.htm

cabanier: if such a device is created we can edit the spec

NellWaliczek: yeah if nobody is using it i'm hesitant to add it

alexturn: we know people build mirrors like this but not ar devices

bajones: in asking about alpha-blend, if you had an AR headset (varjo), which has video passthrough, and you wanted to do a HL2/ML style browser, where there's a browser in space,
... would you not need this mode?

cabanier: not for the 2d browser mode
... there are some things about dark mode though

<cwilso> 1q?

klaus: for the proposed domoverlay i'd need transparent background support
... also video overlays and stuff uses transparent backgrounds implicitly

cabanier: i don't think that's specified?

bajones: to make a distinction, you're compositing transparent content on browser content

klaus: other way around, you have DOM content of the transparent thing on the video element, where the overlay has a root element that is transparent


bajones: so that does have alpha-blend behavior but it's being composited by the browser anyway on top of other browser content
... i guess you can have an alpha-blend XR session with DOM in it and it would inherit this mode

<Zakim> bajones, you wanted to discuss klaus

bajones: but it seems like the browser itself will never have alpha-blend behavior with the rest of the world

<Zakim> NellWaliczek, you wanted to ask the original css use case

NellWaliczek: what was the original use case for the new css @media attribute?

cabanier: if you have content on an ML device, and you make it super bright white with black letters it doesn't look nice

NellWaliczek: lets people modify css to be transparent-display friendly
... do we not believe there is a necessity for devs to modify for alpha-blend
... backing up, on a varjo, if you have a browser floating in the world
... would it be transparent and need this?

alexturn: nah
... we may be able to request that things have transparent backgrounds
... it would never automatically happen
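As a concrete illustration of the CSS feature being discussed, here is a minimal sketch of reacting to the proposed `environment-blending` media feature in script. The feature name and values (`opaque`, `additive`, `subtractive`) come from the CSS PR linked above; browser support is an assumption, and `matchMediaFn` is injected so the logic can run outside a browser.

```javascript
// Pick page colors based on the display's blending behavior.
// On additive displays black is transparent and bright white glares,
// so prefer light text on a dark background.
function pickTheme(matchMediaFn) {
  if (matchMediaFn('(environment-blending: additive)').matches) {
    return { background: 'black', text: 'white' };
  }
  return { background: 'white', text: 'black' };
}

// In a browser this would be called as: pickTheme(q => window.matchMedia(q))
```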

cwilso: timecheck

NellWaliczek: coming back up to the higher level
... we should probably discuss such topics in IWG _first_ since there's topic awareness

cabanier: css wants to be asked first too

NellWaliczek: this is more about sanity checks

cwilso: not really a question of venue, this is more about just making sure everyone is in the room

NellWaliczek: this is a bit fuzzy, idk what the solution is
... at the very least for the EBM we probably want non-normative spec text
... pointing around

cabanier: for EBM i think we didn't have it yet

mounir: what's the manifest thing?

NellWaliczek: a proposal to extend favicons to 3d favicons, and it became a discussion about how to fit this into the manifest
... discussed last TPAC
... similar to how you can have multiple size-based favicons
... one could have more like this for physical size or distance

mounir: use case?

NellWaliczek: couple. having a "favorite" in a 3d shell and stick it somewhere
... also the idea of having a 3d gallery of stuff

cabanier: helio does this with launchers that you can drag

alexturn: hololens can have gltf icons that launch things

mounir: what's the blocker here?

NellWaliczek: none, just a status update

<Zakim> ada, you wanted to ask regrarding can IWWG peeps engage with CSSWG

<NellWaliczek> https://images.app.goo.gl/j2kXABwRk2tM1Qb97

cwilso: for context we discussed this last TPAC
... and then rik got busy with manifest stuff

ada: one concern i had with raising stuff with other WGs
... how much privilege do we have to discuss CSS's issues in our calls

cwilso: it's a good idea to do that within this group to coordinate
... this is a good reason why to coordinate within this group

cabanier: also if folks want to help out, i'm more than happy to change stuff

cwilso: in order to contribute substantially you need to join the group

dom: you can always make comments, but for stronger participation you need your employer to let you in

NellWaliczek: wrapping up we should probably have non normative text clarifying stuff around this

cwilso: non-normative, so that not following it is not a conformance issue

klaus: like flavortext in MtG

Breaking Changes

NellWaliczek: this is things marked as potential breaking and we should discuss what we should do about them

<NellWaliczek> https://github.com/immersive-web/webxr/labels/potential%20breaking%20change

<ada> https://iw-find-by-label.glitch.me/?label=potential%20breaking%20change

https://github.com/issues?utf8=%E2%9C%93&q=repo%3Aimmersive-web%2Fwebxr+repo%3Aimmersive-web%2Fwebxr-ar-module+repo%3Aimmersive-web%2Fwebxr-input-profiles+repo%3Aimmersive-web%2Fwebxr-gamepads-module+is%3Aopen+is%3Aissue+label%3A%22potential+breaking+change%22

<NellWaliczek> https://github.com/immersive-web/webxr/issues/819

<dom> Revisit transient input sources & XRInputSource.targetRaySpace #819

https://github.com/immersive-web/webxr/issues/819

bajones: recapping: per Piotr's reading of the hit test explainer, it would not be possible to implement in chrome
... not close enough to know the issues, mounir?

mounir: so you need to subscribe to get input source
... but transient sources are created on the event
... and then you need to subscribe to that
... which means a delay of a frame

bajones: iirc the explainer had a thing that allows this to be synchronous/immediate

nell: the way we originally designed this was to deal with this. you would get a hit test source *paired with the transient event*
... and you can immediately use that

mounir: but this is all async

NellWaliczek: yes and when you fire the event you need the first hit test result

bajones: i recall bringing this up and the response from piotr was
... that he has examined this and he doesn't think it's implementable
... my plan was to get him and nell to talk, she can talk about her original intent
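For context, a minimal sketch of the transient-input hit test flow under discussion in issue #819: subscribe once keyed by input profile, then read results per frame. Method names (`requestHitTestSourceForTransientInput`, `getHitTestResultsForTransientInput`) follow the hit test explainer and may have changed since; `session` and `frame` are injected so the logic can run outside a browser.

```javascript
// Subscribe once, keyed by input profile (e.g. phone-screen taps).
function subscribeTransientHitTest(session) {
  return session.requestHitTestSourceForTransientInput({
    profile: 'generic-touchscreen',
  });
}

// Each frame, results are grouped per transient input source; return the
// pose of the first hit found, or null if nothing was hit this frame.
function firstHitPose(frame, source, refSpace) {
  for (const perInput of frame.getHitTestResultsForTransientInput(source)) {
    if (perInput.results.length > 0) {
      return perInput.results[0].getPose(refSpace);
    }
  }
  return null;
}
```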

NellWaliczek: sg, moving on
... now https://github.com/immersive-web/webxr/issues/823
... we already spent time on this, we just need to write up the discussion results
... now https://github.com/immersive-web/webxr/issues/824

bajones: iirc there was a concern brought up during the TAG discussion that we tabled for a later time. is that now?

cwilso: oh so some background
... when in a situation like the call or f2f, a straw poll is a way for people to get a temperature check
... it is non binding, and since not everyone is able to vote it's not very strong.
... it can be used to see if there's a strong lean in one direction
... chairs can also decide how to count the vote (one per individual or company, or give more weight to some people)
... because it's non binding
... otoh when we call for consensus it's companies that vote, and it's a binding question
... this should happen rarely since it's a lot more process

dom: note this is not the preferred way for reaching decisions, since it's more of a majority vote than consensus
... but w3c does allow groups to do this when there is no clear consesnsu

mounir: iirc you are required to have hit a deadlock, which is what surprised me

cwilso: that's a fair comment and is why i apologized for not being able to be there
... what i actually said was we should discuss this with the TAG first and then move on

klausw: straw polls also don't always convey intent, my position was "-1 but can be convinced easily", probably should have done -0.01, apologies if that contributed to the pile on

cwilso: note that people can always decide to formally object at the next stage of the process , but i hope we don't end up in such a position

ada: so to be clear the intent of the CfC was to let people to wait for tpac and vote after if they wanted, there was no rush. things done async by email usually last a week or two

mounir: something i see working elsewhere is to discuss why people disagree during the straw poll

cwilso: that's a fair point, i find they should be in the middle of a conversation

ada: fwiw both sides of the issue had had plenty of floor time on it already

Manishearth: i think for this straw poll "why do people disagree" had already been discussed

bajones: we perhaps should use the framing of "is anyone allergic to this" more often over immediately jumping to a straw poll

NellWaliczek: how should we proceed here?

cwilso: so i think the goal was to learn how the process works and how we should do it in the future

NellWaliczek: recap?

cwilso: we should be more careful around this, use bajones's suggestion of "does anyone strongly agree/disagree?"
... we should rarely use actual votes
... to get consensus
... (also when i say consensus i don't mean unanimous)

NellWaliczek: so to recap we had a straw poll, vote was split google/not, now we have a formal CfC, are we locked into that

cwilso: i don't think we are locked in, but if any of you feel locked in by this there is an escalation path
... but other than that we don't have to continue down that path
... personally given TAG feedback and that they have this in their design principles
... i think we should make this change, in as least damaging a way as possible, as polyfillable as possible, and take the least amount of compat change early on
... so to go by the unacceptable/acceptable model
... "Is it unacceptable to change the return value of supportsSession"
... perhaps with a name change

mounir: name change is important for us

LocMDao: i feel like it's the right thing to do in the big picture. curious about what will happen to current projects that are following webvr 1.1? how long would it be between that change and the released spec

NellWaliczek: clarification: you're talking about webvr 1.1?

bajones: hold on: you have content on top of webvr. not 100% on the latest origin trial

cwilso: (no current webvr origin trial)

"not 100% on the latest origin trial" was said by LocMDao

cwilso: your clock is ticking because we have very clearly stated webvr is going away.

mounir: if this is a chrome question it can be discussed later

NellWaliczek: feel like this should be moved elsewhere

LocMDao: to rephrase: what kind of timing are we looking at here?

NellWaliczek: two questions: when will this happen, and will you be able to detect it. former is an implementation question

(various): adding to unconference

lgombos: (missed the TAG meeting) is the TAG guidance clear here?

bajones: yes. very clear.
... in terms of talking about the speed, i have a draft PR for moving forward with this
... should i move it to non-draft

<NellWaliczek> https://github.com/immersive-web/webxr/pull/841

NellWaliczek: i think, yes

bajones: should i click the button

(multiple) : click. the. button.

bajones: cool. take a look at PR 841
... *boop*

ada: quick q: does it include the name change?

bajones: it includes *a* name change

NellWaliczek: wait is there tag guidance on names here?

dom: yes

(multiple): cackling
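The API change just discussed (PR 841) can be sketched as follows. This is a hedged illustration, not spec text: the rejecting `supportsSession()` promise is replaced by `isSessionSupported()`, which resolves to a boolean. `xr` is injected here for testability; in a browser it would be `navigator.xr`.

```javascript
// Feature-detecting shim that works against either shape of the API.
async function checkSupport(xr, mode) {
  if ('isSessionSupported' in xr) {
    // New shape: resolves true or false; doesn't reject for "unsupported".
    return xr.isSessionSupported(mode);
  }
  // Old shape: resolved when supported, rejected when not.
  return xr.supportsSession(mode).then(() => true, () => false);
}
```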

<avadacatavra> yay trusted ui!!!

<LocMDao> scribenick: Locmdao

Trusted Immersive UI

NellWaliczek: Recap for people who weren't here last time. Kick-started around discussion about required and optional features in session requests. At the time we discussed deferrable issues.
... #1: if a feature this session requires isn't available, don't do anything. If you need world-scale tracking and can't get it, don't bother.
... definition of optional feature: it would be nice if you had a fallback, ie room scale where you could fall back to a standstill experience. At least having the capability.
... #2 deferrable features: not having to request the features up front, but at the time needed.
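The required/optional feature split described above can be sketched like this. Feature names are illustrative, and `xr` is injected for testability (in a browser, `navigator.xr`).

```javascript
// requiredFeatures: the request fails outright if any is unavailable
// or consent is denied. optionalFeatures: the session starts anyway
// and the app provides a fallback.
async function startSession(xr) {
  return xr.requestSession('immersive-vr', {
    requiredFeatures: ['local-floor'],
    optionalFeatures: ['bounded-floor'],
  });
}
```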

Two pieces of feedback: Some user agents expressed that they would be unable to present consent UI while immersive.

The other was that there were a number of cases that needed to expose certain UI all up front.

Back to the question of presenting consent. The idea being that when in an immersive session, the page can render every pixel that is up on the display. That means it's possible to spoof any kind of permission request and lose trust with the user.

This led to topic of what is immersive UI. Different hardware agents have been doing different explorations. Couple of examples - a motion controller, the dedicated button or if you press something that isn't a real prompt it pops you out.

Another option is it's a totem or something not unique to you. Some way that the UA could identify itself for this concept of trusted UI. Discussed what it's like in the 2D world. Unclear where this sits today in 2D.

Lots here. Lets look at the whole topic before we dig in.

On the topic of trusted UI, it turns out at the time we hadn't gone through our entire security and privacy spec check.

The outcome was there was strong encouragement to do things a certain way. It seemed like we can approach this in a similar manner. Could we have a term where we could gate other functionality like the idea of deferrable requests. Does an app that doesn't have its own implementation of trusted immersive UI do something else?

The ability to query whether that feature was available or not, vs the user having non-consent, is the next set of topics. But having noodled on this a whole bunch... what is the extent of defining trusted UI, and do we want to plan to take dependencies within the spec on the presence of a trusted UI? For example we might put in the spec that you're strongly discouraged from allowing mid-session consent changes if you don't have consent from the user. Related topic: how does this intersect with non-XR permissions like mics, bluetooth, geolocation etc.

Does anyone think we need to be more prescriptive rather than putting general guidance about strongly encouraging user agents to provide something to not be spoofed. Or be more concrete.

avadacatavra: Nell and I have talked about this in the past, and talked to Manish, lots of people. Really important that we leave it up to UAs for this trusted UI. Some might choose to boot out of immersive mode, like consent prompts outside of immersive mode. Others might use a sigil outside of the trusted UI.
... Could we use hardware kind of like CTRL-ALT-DEL which would exit the immersive mode if the page was rendering.

NellWaliczek: We do have text that there must be a way to get out but not consent experience.

mounir: I was going to say something similar. UI can be left to the UA. We don't really know what we don't know. Putting binding text for this in the spec isn't needed.

NellWaliczek: User agent to decide if it's needed.

mounir: There are so many devices that behave differently. Cardboard vs Oculus etc, even if we believe in trusted UIs we wouldn't be able to apply to all

<Zakim> kip, you wanted to suggest that this may interact with XR link traversal, which may push towards more deferred permissions requiring UI and to say that we should be clear if the

NellWaliczek: So you're saying there should be non-normative text and any reference should be non-normative.

kip: One thing directly related. There may be some abstract requirements for the UI if you're instantiating from a user event, like turning off the cam. We may have to differentiate these events.
... if the user had already accepted to let their camera be used, they may change their minds later. When the user interacts themselves, showing the UX may not be something they want the content to know about.

NellWaliczek: We have several modes for this already.

kip: Should we include in the spec text that the act of bringing up the permission prompt should not be detectable before the blur event? ie if I have to hold down the button, the content may respond faster than the UI...

NellWaliczek: Ie in Hololens there's an issue that shows this.

Manishearth: If you allowed UAs to go by the trusted button route, we need to ensure there are trusted user buttons.

NellWaliczek: Current content about the menu button.

kip: should we have something in the spec saying it must not be detectable before the UI is visible
... there may be interactions with link traversal

NellWaliczek: Yeah if we do support trusted immersive UI and its something that is spoofable then it can be used to spoof/phish you going to your bank

bajones: Clarifying question for Diane. You said one potential path was to exit out of the immersive event. You're looking at something in cardboard takes you out. Was the expectation exiting and dive back or exit entirely? One is massively inconvenient. The other shuts down your session

<avadacatavra> NellWaliczek: bajones what nell is saying is accurate

<bajones> Thank you

NellWaliczek: My guess is it would blur your session and not end it. We could say, normatively, what happens if your trusted UI requires you to leave. If we described trusted UI and defined it non-normatively, guidance would be "normatively it goes to the invisible state instead of exiting the session altogether", but we have to think about the intersection.

<Zakim> engedy, you wanted to ask about persisting immersive mode "permissions" and better alignment between XR and regular permissions

[wireless mic testing]

engedy: I'm aware that I'm late on this. Two points. 1. What is the group thinking of persisting permissions in immersive mode? ie privacy

NellWaliczek: Regarding duration of consent. We can't define this normatively but we can have guidance which is captured in the spec. Someone please share text from spec.
... in privacy explainer.

<alexturn> https://immersive-web.github.io/webxr/#consent-duration

avadacatavra: Going back to normative/non-normative and what we say. Perfectly reasonable to say for the following actions, you can either use a trusted UI or you need to pause the session, exit out of immersive and return when all is handled.
... 2nd: the other properties, principles, characteristics (spoofability) you have to fulfill in order to say this is a trusted UI; otherwise you have to pause.
... the non-normative is how the UA arrives at the trusted part. Normatively when, why etc.

<Zakim> alexturn, you wanted to propose a normative/non-normative split around whether we can compatibly "lie" to naive pages

alexturn: Around what has to be normative, curious to see what other specs do. Where we want to be sure is around privacy, so that we can force blurring and compatibly "trick" the page into thinking this is something it understands, while protecting privacy. For some of the other things we discussed, we shouldn't restrict all UAs; the places where you need spec requirements are where content needs to be compatible across these various privacy approaches.

ada: question for engedy. Do you envision the persistence of permissions being different than on the flat web? ie on an http web site, if you get permission then it persists; vs WebXR? Is there a reason why it shouldn't be the same?

<Zakim> ada, you wanted to say regarding duration how does engedy envisage it differing from https persistence?

<Zakim> engedy, you wanted to follow up on persistence and on Permissions API alignment

engedy: We're also thinking of reducing permission persistence on the flat web. But what I'm really asking is: do we believe that we'd have one set of permission states for immersive and a different set for the flat web? I can't imagine a completely different set makes sense.

NellWaliczek: clarification - are you asking the XR specific or Bluetooth and USB?

engedy: I guess both. The question is what makes sense semantically to add to the list. Ie sensors make sense in both, but I may want them in XR but not flat.

NellWaliczek: There's permissions and then there's features, some of which require consent. We need to rationalize how we think about that. The way the required and optional feature list works is quite granular, because threats and mitigations have to be bucketed. The current set is reference-space based, like local floor, bounded floor, etc. But as we expand into more involved real world understanding, we expect this set to expand.
... one thing we talked about earlier in feature policy is how this is bound in XR. I could see us making a similar choice for permissions like using the same name.

bajones: When we're talking about the difference in permissions between XR and the flat web, the way we think about our features in immersive is explicitly scoped - it lives for the session. We did leave the language up to the User Agent. ie if you're jumping in and out, maybe you don't need to give permission to the camera every time, but you wouldn't be able to access the camera outside an active session.
... it would feel weird to give the flat web page mic permission but have it disappear in the immersive web. But I can understand that there are permissions that we don't want to be inherited back. It's likely already taken care of, but we have to evaluate if we've covered it all.

NellWaliczek: Happy to speak more later this week with engedy.

<NellWaliczek> https://github.com/immersive-web/webxr/labels/consent%20tracking

NellWaliczek: 5 issues on being able to know when consent has been granted. Walk through at a high level to see if we have consensus or opinions about what continues to be a concern.
... 1. Informing apps which optional features were granted. Once a session has been created, the only way to find out if a feature was enabled was to call the api and have it fail.
... 2. Allow features to be marked as deferrable. Need confirmation on question.
... 3. Should applications be able to determine that the user declined consent? Not clear on.

4. Querying user consent status. If websites wanted to warn their users hey when I make this call you're going to get this consent prompt.

5. Privacy: Interaction of existing permission-based features with WebXR feature consent. Continue conversation.

scribe: First, do folks feel the need for deferrable features still exists?

bajones: Yes. Please speak up if you have a use case.
... ie an AR scene where you want to capture a snapshot. We could add an API ability. But you could do the entire app without the camera until you want to save what's on screen, and you don't want to ask users when it loads and scare them off.

NellWaliczek: Sorry I didn't frame this correctly. From the conversation we believe there will always be a path to popping you out to blurred state or exit. Do we believe ...

<kip> (Nell clarified, and Null-ified my question...)

bajones: Previously we had assumed you had to declare which deferred features you didn't want up front. If we believe there are no browsers that will do that then it should be possible to not declare those up front and simply make the call to trigger that consent inline. I believe this is the case if the previous assertion is true.

<Zakim> klausw, you wanted to say some features may not be easily API-testable

NellWaliczek: We'll ask the whole room if someone has concerns.

klausw: For checking if an optional feature was granted or not: some features may not have a testable API, which might be chaotic. It would be cleaner to have one way. Open question. And a separate question: should you know why?

bajones: temperature of the room

<ada> Manishearth: it will run over

NellWaliczek: can you give us some background, bajones
... engedy, not bajones, about the decisions that went into the current design.

engedy: I would be interested in discussion future of permissions api and I'm here to collect requirements.

NellWaliczek: lets push the rest of the conversation to an unconference or breakout.
... Time to ask the "do you strongly object" question. Want a sanity check on what I've heard. Do we believe that we can safely say mid-session consent can always be trusted, ie via trusted UI, or by kicking you out of the session to "pause/blur"? An example is Cardboard, which makes you take your headset off.
... Do we feel like we have any User Agents that, given these options, do not feel comfortable with this?

mounir: Everything that is UI related should be recommendations.

NellWaliczek: context is deferrable session.
... see above
... is there ever going to be a browser that won't allow mid-session consent?

example above from bajones about camera

bajones: This would also come into play in something not exclusively XR, ie someone who triggers the permission prompt on the 2D web. Ties into the same methodology that you can get the user's consent. Question: does anyone in the room have an allergic reaction to mid-session consent?

NellWaliczek: or are you ok with your UA not being able to support this?

bajones: The answer informs can we design the spec to depend on this always being there or we put in a mechanism to allow developers to intelligently decide.

dom: If we were to disallow any permission grant while in session: are we talking about trusted security UI, or about expanding the set of XR features beyond the ones you requested initially to ones you need later? In particular, not asking for an XR feature up front is different than asking for any feature.

NellWaliczek: exactly we want consistency in at least outcome from a developer perspective.
... I keep saying mid-session because you could be kicked out of immersive and come back in a way that is non-spoofable.

dom: maybe the question is. If some UAs will provide it then we should provide it.

NellWaliczek: There are UAs who have expressed they're going to experiment with it.
... Others said they wanted to pre-declare - John Pallet had said. Is this still your team stance?

mounir: This sounds like a very complicated API for low return. I rather not have it.

cabanier: It would feel really weird in AR.
... so far we've assumed there's only one immersive session, but what if I click on camera access and then I open another browser and it starts another session.

NellWaliczek: It would kill the previous session.

cabanier: one tab could kill another?

bajones: current spec says a new session would be rejected.
... if the concern is bringing the dialogue window up in another room, then you should have options to bring it to the forefront. We're trying to be non-prescriptive. If you felt like another room, that's up to you.

cabanier: as a user I wouldn't trust this because its outside of the context of your user window.

bajones: we don't want to force your browser to do anything you don't have to. We want to give you the leeway to build something intuitive, usable and secure.

<dom> [do we need trusted UI for permissions? having a fake permission prompt isn't really a risk, is it?]

alexturn: Just to give some context around hololens devs and native apps. Think of the microphone. Apps could be about communication, like recording a video or making a call. If 30% of users do this, we don't want to require it up front, because it's a feature, not a requirement. In hololens not everyone needs this in our current app experience. So today, if you need the mic then you get the dialogue prompt at that point. We're going to have to solve this, especially for security. We want your browser to be able to get permission when needed.

<dom> [I guess you need to avoid clickjacking]

scribe: if some browsers want to allow mid-session consent but others don't, is that ok?

thanks dom

scribe: the other way this could have gone is we went around the room and all could have had trusted UI, but we clearly have UAs that won't, so that creates this gap, except for the deferrable design.

NellWaliczek: The question I'm trying to ask is is this a problem?

alexturn: the implication is the opposite of the pattern we want people to use on the web.

NellWaliczek: There's a difference in your bar for spoofability because you didn't make the conscious decision to install. But this may not meet the usability requirements.

alexturn: Its a hard problem. We have to solve it eventually for nav. We could all defer if we got to nav soon. If it will take a while, do we just block this now?

NellWaliczek: That speaks to the concern that if you click the VR button that you get 17 prompts then prompts for immersive.

bajones: should we make it unconference material?

<Zakim> kip, you wanted to Say that it could be a browser window on your desktop while using a tethered headset

<Zakim> alexturn, you wanted to talk about how we should either require the MUST around mid-session permissions or do the deferrable permissions up front - if UAs end up just

kip: Alex made some of my points. From Mozilla's side, we need to handle mid-session prompts anyway because users can interact on their desktop and their headset.
... we will have to notify users of multiple things that could interrupt their session. We have to solve this anyway so its only one more thing to add.

alexturn: Can we go out and noodle offline on what is the most simple deferrable.

mounir: Its not like people are scared because this is unknown. Could we just wait and see instead of creating an API?

ada: like waiting until we see how the first person does it, and doing what they do.

mounir: We know removing your headset and going to your desktop is the worst thing for user experience.

NellWaliczek: the trouble is the anti-pattern question is raised and if you don't have a solution you will have people ask for the kitchen sink up front.
... more discussion is needed.

alexturn: lets brainstorm one thing.

NellWaliczek: ok at unconference?

mounir: I wouldn't be surprised if websites ask you for multiple prompts.

NellWaliczek: Ok lets talk about this more.

ada: ok at unconference

<NellWaliczek> scribenick: nellwaliczek

<Manishearth> https://paper.dropbox.com/doc/Notes-on-alpha--Ak5yBgNICNQxfGVJ9drWSh9NAg-fgZLUW4tapXx3j7if9iUE

manishearth: The AR spec has the concept of an environment blend mode, and different hardware works differently. Pass-through means a video of the real world is composited with virtual content using source-over blending. Hololens is see-through (additive light), effectively compositing with lighter blending.
... the enum has opaque, alpha-blend, and additive.
... lots of talking about how additive works. Initial spec text said alpha values were ignored, but actually it's that additive hardware assumes premultiplied alpha. This may turn out to be true more broadly
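To ground the enum being discussed, a minimal sketch of how a renderer might branch on the AR module's environment blend mode values. The mapping of each value to background behavior follows the description above; the helper name is illustrative.

```javascript
// Decide whether the page must clear to an opaque background,
// based on session.environmentBlendMode.
function needsOpaqueBackground(blendMode) {
  switch (blendMode) {
    case 'opaque':      // VR: the page supplies every pixel
      return true;
    case 'additive':    // see-through optics: black renders as transparent
    case 'alpha-blend': // passthrough: alpha composites over camera video
      return false;
    default:
      throw new Error(`unknown blend mode: ${blendMode}`);
  }
}
```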

<Manishearth> https://github.com/immersive-web/webxr/issues/838

<Manishearth> https://github.com/immersive-web/webxr/issues/837

<Manishearth> https://github.com/immersive-web/webxr/pull/840

manishearth: First topic is the core spec's opinion on premultiplied alpha (see those links)
... When you create the XRWebGLLayer you pass in lots of stuff. Creating a context also has lots of options
... These two sets of options may conflict. So what do we do?
... 1. Continue with what we do, have the compositor assume that all incoming values are premultiplied alpha

<Manishearth> 1: either we claim that those parameters only apply to the default framebuffer used in the canvas, and WebXR’s buffer uses the parameters from XRWebGLLayerInit.

<Manishearth> 2. or we treat this as an error/warning case

3. Or add the premultiplied alpha option to the layer init
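Option 1 above can be sketched as a resolver: the context-creation compositing options apply only to the canvas's default framebuffer, and the XR compositor takes its options from `XRWebGLLayerInit`, always assuming premultiplied alpha. This is an illustrative function, not spec text; the names are hypothetical.

```javascript
// Resolve the effective compositing options for an XR layer under
// option 1: context attributes are ignored for the XR framebuffer.
function resolveXrLayerOptions(contextAttribs, layerInit) {
  return {
    // `contextAttribs.premultipliedAlpha` is deliberately ignored here;
    // the XR compositor assumes premultiplied alpha regardless.
    premultipliedAlpha: true,
    alpha: layerInit.alpha !== undefined ? layerInit.alpha : true,
  };
}
```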

alexturn: I hope we can agree on half of this. Is there some reason premultiplied alpha should be treated differently from depth? We ask this question up front for depth, etc. I don't see why we'd make that different

<bajones> +100 to Alex's comment

Manishearth: that is one possible outcome. We can say all these options are ignored, but perhaps there may be some that should be treated differently

bajones: I agree with alex.
... i don't think anyone actually wants to add a new property for ourselves. Got a question for Artem (yes, you)
... Does the oculus api have native support for non-premultiplied alpha

artem: No, but OpenXR does

bajones: at least in the short term, allowing for non-premultiplied alpha would incur additional overhead on almost every platform.

manishearth: this is true on openxr too. it just does it for you at the same cost

bajones: I think the only reason people would care about inline vs. immersive is if the blending behavior was different and then things looked different. I'm not sure if this actually matters though because of the canvas background
... because most inline sessions will clear to opaque black
... if it's shared, unlikely to have a transparent background

klaus: If they're different, wouldn't it be up to the UA's discretion whether to show a console warning?

bajones: manish suggested this in a non-normative manner, but I don't know how good an idea it is to set that precedent

klaus: should we be throwing exceptions


manishearth: *mutters no*

<Zakim> kip, you wanted to ask if there are any concerns with gamma correct blending?

bajones: i would not be in favor of that

kip: on the topic of normalizing alpha blending, to what degree should we require gamma correct alpha blending?

bajones: is this different from srgb correctness

kip: it's a bit different. if either side isn't linear then you need gamma correction

cabanier: isn't it up to the author?

kip: if it's not being composited with the UA's own UI. Otherwise we need to know the target colorspace
... it's the responsibility of the UA and the platform
... for reference, CSS got this wrong: gamma-incorrect blending happens

https://github.com/immersive-web/webxr/pull/840

anyone allergic

PUSH THE BUTTON

manishearth: ok, the second topic

<Manishearth> https://github.com/immersive-web/webxr-ar-module/issues/16

manishearth: additive light displays might choose to support immersive-vr. not required, but if they do... what blend-mode should they report


manishearth: spec says return opaque, but this isn't actually what's happening, though

<alexturn> phew

zoom zoom... time to read

alexturn: following the request to talk about concerns before solutions
... in point number 6, we say AR is world-composition, but then we get tangled up and conflate it with real-world understanding
... this actually is something that VR devices *could* support in the future
... let's keep the real-world understanding thing separate from the composition thing we're worried about people polluting the meaning of

cabanier: in my comments on that issue, I'm mainly concerned that there will be two ways to do the same thing
... alex, you said if the spec doesn't lag too far behind then you're not worried

Alexturn: we were worried that you'd be able to query blend mode before ar was possible. but now that's in the ar core. So now we're not worried about it because the blend mode will ship at the same time as ar
... let's say if we show a weird warning (aka this experience may not look right) then we'll encourage developers to do the right thing. today's devices can put you on the right path

cabanier: one of my other points... if you just implement the webxr spec for VR, does that mean your hardware wouldn't report a value? so it might be a forcing function

alexturn: this is a general problem with modules

bajones: isn't that the same as calling supportSession without checking it exists?
... the best practice is to always check for this. if you get an ar session then you can trust the blend mode is there, but on VR you should check if it exists because it's not part of the core spec
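
The best practice bajones describes can be sketched as a small helper. This is illustrative only (the helper name is invented, and the plain object stands in for an `XRSession`); it encodes the rule that a missing blend mode on a VR session implies opaque:

```javascript
// environmentBlendMode comes from the AR module, so a core-spec-only
// (VR) implementation may not expose it at all. Treat "absent" as opaque.
function effectiveBlendMode(session) {
  return session.environmentBlendMode ?? 'opaque';
}

// With a session from an AR-module-aware implementation:
const additive = effectiveBlendMode({ environmentBlendMode: 'additive' });
// With a core-only VR session that lacks the attribute:
const fallback = effectiveBlendMode({});
```

An AR session can trust the attribute is present; a VR session should check, since it's not part of the core spec.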

alexturn: maybe over time, a ua might decide to support the ar spec just to say the blend mode is opaque

<cwilso> zakin, close the queue

klaus: comment about apps lying about what you want... if you request ar the experience is TOTALLY different than requesting vr. one just goes into full screen type mode, the other tells you to put on a headset

manishearth: having no blend mode in VR implies opaque
... two options... always require immersive-vr to report opaque, or allow it to be accurate.

bajones: i don't think the enum should be forced to lie

allergy check: additive light displays either don't support immersive-vr, or must report additive blend mode

cabanier: i'm not a fan, but this isn't a formal objection

this topic was not resolved, but we're all too tired

<trevorfsmith> late reaction to yesterday's allergy check: the UA should always report the actual hw-provided blend mode and UAs that don't want to support immersive-vr on additive displays should reject that enum. (so, not allergic to Nell's check)

<trevorfsmith> Meeting: Immersive Web WG/CG

<trevorfsmith> Chair: Trevor, Ada, Chris

<trevorfsmith> Agenda: https://github.com/immersive-web/administrivia/tree/master/TPAC-2019

<cabanier> scribenick: cabanier

<alexturn> gauntlet+

environment blend-mode and alpha-blending

cabanier: I really don't like this but since it was in the spec before and @alexturn made some good points about launching at the same time, I withdraw my objection
... so it's ok if we can ship the spec as-is with env-blend mode for VR

CG overview: review of current incubations

trevorfsmith: hey everyone
... the idea is to do a run-through of the different repos and figure out the next steps
... just a really rapid-fire status
... the next step is to go over DOM overlay and layers
... let's start with lighting estimation
... 3 questions:
... - has there been implementation work?
... - what mystery remains?

kip: there has been no implementation work
... as far as work in the browser

trevorfsmith: has anyone done any work in this area?

(no)

scribe: - what mystery remains?

kip: we have acceptance of the format of the lighting estimation itself
... what will the API shape look like? That may match the hit test API
... we don't know yet how permissions are involved? Should it be under camera?
... we need to get feedback from browser vendors to determine if it is reasonable
... can they implement conversion functions or if it should be left up to javascript functions

trevorfsmith: so it sounds like we are waiting for more information

kip: yes

trevorfsmith: navigation. Is Diego on the call?
... or is anyone from mozilla willing to comment?
... real world geometry

klausw: I don't know the full status
... my understanding is that plane detection is under discussion
... what do other people think?
... there is ongoing work to do hit testing
... there is a question on transient input sources
... are anchors separate?

trevorfsmith: yes
... does anyone else have an implementation

<trevorfsmith> https://github.com/immersive-web/spatial-favicons/blob/master/design_docs/SpatialFaviconSizes.pdf

cabanier: there was a question on hit testing being only valid during raf

trevorfsmith: spatial favicon

Ravi: yes, we have a working implementation

trevorfsmith: so the sizes specification is still up in the air

Ravi: the last piece of the puzzle is to let the platform know about the quality
... and how you represent the quality
... one is to use the bounding box; the other is to use vertex count
... there is a PDF in the repo that talks about this

dino: isn't the big open question what format to use?

Ravi: there was consensus to use GLTF as the supported format
... because it's an open standard that's available for everyone to use

dino: so it's gltf with some restrictions

NellWaliczek: so it's basic GLTF without extension
... the core standard doesn't have shaders

dino: can it reference external files

NellWaliczek: gltf can have external files

dino: so the resources are bundled?

cabanier: no, we only do the single file. no external fetches

dino: I think the web manifest can easily be updated

NellWaliczek: I think we just didn't follow up
... ravi, do you have an action item?

Ravi: I have an issue open and if dino can comment I can take it over

sushrajaMSFT: the spec even allows external references in a glb file

dino: the w3c has an official liaison so you can use that channel if you want

<Ravi> Here is the issue to comment on : https://github.com/immersive-web/spatial-favicons/issues/5
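
Sketching sushrajaMSFT's point: a .glb container can still carry JSON that points at external URLs, so a "single file, no external fetches" rule has to be checked in the asset's JSON, not just assumed from the container. A rough illustration (the helper name is invented, and a real validator would cover more fields):

```javascript
// Collect buffer/image URIs in a parsed glTF JSON object that would
// require an external fetch. data: URIs are embedded, so they don't count.
function externalReferences(gltf) {
  return [...(gltf.buffers ?? []), ...(gltf.images ?? [])]
    .map(item => item.uri)
    .filter(uri => uri !== undefined && !uri.startsWith('data:'));
}

const refs = externalReferences({
  buffers: [{ uri: 'data:application/octet-stream;base64,AAAA' }], // embedded
  images: [{ uri: 'https://example.com/texture.png' }],            // external
});
// refs contains only the external image URL
```

A spatial favicon that follows the single-file constraint discussed above would make a check like this come back empty.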

trevorfsmith: the next one is anchors

NellWaliczek: there has been no progress

klausw: piotr has a working implementation

trevorfsmith: has anyone but google started an implementation?

NellWaliczek: I will take it up
... I will review the proposal to make sure that there is no disconnect wrt reference spaces

trevorfsmith: is piotr leading the feature

<trevorfsmith> ack

cwilso: maybe it's useful to get piotr's feedback

<sushrajaMSFT> https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#glb-stored-buffer - I read that as additional buffers can point to external urls

klausw: it landed very recently

cwilso: yes, let's ask him

trevorfsmith: hit testing

NellWaliczek: we should talk to piotr

joemedley: anchors will be behind a flag during chrome 79

klausw: it's not important because we use Chrome Canary. It's still an open question if it is on by default
... we're still waiting for AR
... I don't know what the detailed plan is

alexturn: what is the procedure for modules?

NellWaliczek: we'll have a process conversation in another call

klausw: once a feature ends up in canary, it will end up in an official release

cwilso: when we have intents to implement, it doesn't mean that it should ship
... it's just a prototype so it won't auto-promote

dino: do you have different flags for different modules?

klausw: as far as chrome goes, there's different flags for different features
... so there are separate flags
... so even though it's in a shipping browser, it's still considered an experiment

trevorfsmith: I will follow up
... computer vision and geo alignment
... Blair is leading those and he's not on the call
... he is going over all the possibilities
... like seeing if webrtc is possible
... maybe it won't be picked up until after we ship
... same with geo alignment with mozilla
... has anyone looked into that?
... ok, there's likely no implementation work happening
... so let's move to the next topic?

joemedley: I had a footnote on the flag discussion
... so you might find it on MDN presented as if it is part of the platform, but it's not really

cabanier: I wrote up a spec for world and hand meshing and would like to get some feedback

https://cabanier.github.io/real-world-geometry/webxrmeshing-1.html

Layers and DOM overlays - demos and discussion of path forward

cwilso: klausw can you give an overview
... if you can start with the implementation experience

klausw: I would like for people to see it
... so can we do it now?
... what I have is an experimental implementation for handheld AR
... and as a separate proof of concept in VR to see if the approach can work there
... the handheld one has an intent to implement out
... the immersive/daydream one is not planned to ship. It's a pure experiment
... (demo)
... it's a simple three.js
... it's a simple three.js demo
... you can see the dom content in the corner
... the main thing is that this was very easy to implement
... it works the same
... you get the exit AR button since a lot of developers asked for that
... accessibility works as well
... (demoes the screen reader)
... since they are dom elements all the a11y features continue to work
... I can also show the VR version
... there is a floating browser tab
... this is not ready for shipping
... the backdrop filter could work here
... it's a bonus that it feels natural
... this is reusing the full screen
... full screen elements stay visible
... if you do nothing the body element gets the full screen
... the code for this is in webview
... blink people are a bit worried on how things are composited
... the VR is just a quick hack
... it doesn't solve DOM input vs AR input
... do the events happen in parallel?
... it would be hard to figure out for the user agent? Would it be annoying?
... it's just an experiment
... for the handheld case it's very natural
... for VR/AR case it might not be optimal

NellWaliczek: last year we talked about this
... and I voiced concern about a dom overlay part
... since then, we had the modules split so I'm much more comfortable
... I think it's reasonable to have DOM overlay for now as we have a proper path to a proper layers model later
... at that point we can do DOM overlay using that new layers spec
... Sumerian customers are going to love this
... for both headset and handheld experiences
... so I have no objections
... we need to be careful about CORS content
... it shouldn't get input events
... it was more concerning for immersive headsets
... the other thing, I strongly support that we don't have an API that only works on handheld devices
... what if a headset wants to have a DOM overlay
... having experiments is a great way to explore ideas
... and having the modules is a great way to do this
... I'm very excited about it

klausw: I have a pr that explains more about how this works

NellWaliczek: I would like to see how other manufacturers deal with this

klausw: I want to reiterate that this isn't the only approach
... layers could have other restrictions

NellWaliczek: we don't have to answer that right now. We can deal with that later

cwilso: I would like other people to participate here

trevorfsmith: this looks great
... have you done experiments about doing intersection with the real world

klausw: currently for the handheld version you can assume that it's full screen
... should there be an API to inform where things are in the world
... the current API has a way to move the DOM layer in space

samira: this looks very exciting
... especially for subtitles

klausw: yes. Even the VR version works with a11y

<samira> subtitles for immersive 360 video :)

cwilso: we will be discussing this again for the 360 video case

plamb_m__: we are interested as well.
... we've done some experiments
... there are 2 things. For instance to do depth sorting
... since we want to do occlusion.
... how to handle transparency

<kip> (plamb covered it)

plamb_m__: there's a general concern for privacy

klausw: yes, that is the missing bit for the handheld side
... sorting, and ambiguity about what is seen

<cwilso> scribenick cwilso

<cwilso> rik: have you considered you could do fullscreen for AR case?

<cwilso> ...i.e. 360 video, or creating invisible DOM elements that are accessibility DOM elements that are headlocked

<cwilso> klausw: definitely would be able to work for that (a11y) as it would remove ambiguity

scribenick cabanier

artem: z-sorting could be solved through layers
... it think there's an intersection where we split the areas of influence
... we want to have quad layers
... which can be positioned in space
... and it would be great to have DOM in those
... we need to talk about the details, especially for the VR case

NellWaliczek: this is why we need to be careful about proposing things to other groups
... we need to be thoughtful on how experiments interact

bajones: artem you were saying that this is an area where layers could jump in
... as the layer system comes in, it might serve as a basis to polyfill
... and at that time we can deprecate the DOM overlay implementation
... you mentioned sorting of the layers
... are there implementations that do depth sorting of the layers

artem: only Z

bajones: the spec lets you not specify the depth buffer
... if it is there, an implementation could use that to avoid the depth disparity

<Zakim> alexturn, you wanted to talk about headset benefit of knowing layer is overlay specifically

alexturn: this is really great. A lot of customers are going to want this
... I like some of the aspects
... for instance how simple it is
... doing the DOM overlays first is very helpful
... and a lot of people might not want to go further than what this offers

dino: what is doing the rendering
... is there a pixel shader running on it?

klausw: this is a normal compositing layer
... it's written to shared memory and then it's composited with a pixel shader

NellWaliczek: this is all within the browser, not the web author?

klausw: yes. It's a fixed pixel shader with a fixed blend mode
... the dom content is using normal chrome compositing

dino: so it can never interact with the lighting of the room?

klausw: correct. I just started with something really simple
... if there are arbitrary layers, you might want to put effects on them

NellWaliczek: this is why we had layers, it's a concern
... the dom content is never surfaced back to the web author

dino: can the webgl occlude the dom overlay

klausw: that is correct
... the web content has access to the camera content or its own pixels

NellWaliczek: yes, it will be weird
... that it isn't occluded

<sushrajaMSFT> to ask how dom layer for AR is different than the dom element being placed with css on top of the canvas in an inline session

alexturn: oculus has the ability to make it translucent

artem: there is no API yet

klausw: if there is a depth buffer, there is a way to do it
... I don't think that there is a way to do a timing attack

<Zakim> kip, you wanted to ask if it would be possible for pages to request browser to choose placement when content doesn't care. (Eg, for wrist-worn 2d dom )

NellWaliczek: I wasn't suggesting that we should deprecate the DOM layer API. Just that with the modularisation, (???)

<NellWaliczek> we'd have the option to do so if the API shape of layers made that make sense

kip: would it be possible that pages can express
... is there a way to differentiate this overlay from DOM layers
... maybe the author doesn't control the placement. Instead the UA might do it by for instance putting it on your wrist

<kip> Dom layers would be composed as part of the XR experience explicitly

sushrajaMSFT: how is this different from putting DOM elements on top of a canvas element

klausw: for AR, it doesn't really work for an inline experience

<kip> Overlays would be controlled by the UA and the user for cases such as accessibility

klausw: so I don't think that is an alternative way

<cwilso> scribenick: cwilso

<cabanier> scribenick cabanier

<cabanier> cwilso: it sounds like everyone is excited about this

<cabanier> ... we like to have feedback from other vendors

<cabanier> ... so we can come to a generalized solution

on a break until 11:00.

<LocMDao> scribenick LocMDao

<LocMDao> [Gauntlet throwing scribe talk]

<NellWaliczek> https://github.com/immersive-web/webxr/labels/editor%20triage

<LocMDao> artem: Layers. Summer was slow. Working on the API was harder than implementing it. [LAUGHTER]

<LocMDao> ... tried to create a really short document several times not anywhere close [Nell is still laughing]

<LocMDao> ... question to NellWaliczek, klausw and Brandon. Main issue is the image source, which defines where you take rendering from. Beginning of proper multiview support. XR projection layer similar to openXR. It will take, instead of a regular framebuffer, a structured source like a texture or DOM source. These two first and then quad layers. Official goal at Oculus is to implement the layering system in the browser,

Walk through issues marked for short updates [Brandon, Nell, Manish]

<LocMDao> see NellWaliczek 's link above

<LocMDao> NellWaliczek: wanted to kickstart on First category. Second category we want to get sorted out sooner than later but we don't want to disrupt if someone is still working on this.

<LocMDao> ... easiest one #497 Which globals apply to security policy?

<LocMDao> ... bajones and I have failed to figure out what this issue is about.

<LocMDao> Manishearth: Can ask Alan to come on call. I don't understand this either.

<klausw> issue link: https://github.com/immersive-web/webxr/issues/497

<LocMDao> NellWaliczek: that would be great if you could reach out before we meet next. This issue has been open for over a year.

<LocMDao> bajones: will be available for next week's call.

<LocMDao> NellWaliczek: #363 Clarification of mouse behaviour. From before immersive. filed by Microsoft. Difference in behaviour of mouse and focus when you were plugged in... Is Microsoft proposing?

<LocMDao> alexturn: Want to look because there may be cases where still want this.

<LocMDao> NellWaliczek: Because this has been open a long time, we need to drive to resolution because it could be a non-minor change.

<LocMDao> ... #619 Add Linear/Angular Acceleration/Velocity for XRInputSource

<sushrajaMSFT> https://github.com/immersive-web/webxr/issues/619

<LocMDao> ... https://github.com/immersive-web/webxr/issues/619

<LocMDao> ... taking the pulse of the room


<LocMDao> ... open it up to the queue


<LocMDao> bajones: request from alexturn. openXR came to the conclusion that exposing acceleration was not necessary.

<LocMDao> alexturn: We went through a lot of designs. Acceleration was something people had lying around. Different conventions. Noisy and divergent. We barely got away with it before XR. So the group decided it was superior not to give out acceleration. How do we sprinkle in a notion of time-relative queries, so that if you want to say where will this be one frame from now (i.e. controller or head), we can give a high quality answer instead of giving out acceleration and having you do your own math.

<LocMDao> ... instead of the specific use of velocity. We have XR frame for render and select input events. There may be a gap here but there are other ways to do this like last input frame without opening up a can of worms.

<LocMDao> NellWaliczek: You spent time talking about design options. Would you be concerned if the first version of webXR didn't contain this?

<LocMDao> klausw: You were talking about acceleration but velocity is something we need.

<LocMDao> alexturn: I wouldn't consider it fatal if we didn't have velocity out of the gate.

<LocMDao> NellWaliczek: To repeat, acceleration shouldn't be exposed implicitly. Velocity exposed as its own number because of divergence.

<LocMDao> alexturn: for velocity its straightforward with openXR you can get it orientation and velocity.

<LocMDao> ... we already have that core notion. There are other things people are starting to do like predict a render frame ahead but no need to go there immediately.

<LocMDao> NellWaliczek: You would like to see velocity added but not acceleration and in the future time stamped pose query.

<LocMDao> alexturn: If when I'm requesting a pose I could ask for velocity.

<LocMDao> bajones: is there a reason openXR made it optional?

<LocMDao> alexturn: yes; in the future you might be asking for a lot of space relations, for things like anchors. When we get to real-world geometry we could have many more spaces, so we could find a way to make it more efficient or queryable.

<LocMDao> Manishearth: The design space is huge; for example, exposing velocity is tricky the way we do transforms. For acceleration we should figure out how to do future and past frames.

<Zakim> kip, you wanted to ask if "Time relative queries" would enable access to non-predicted poses

<LocMDao> ... if we just want straight velocity its ok but angular would be problematic.

<LocMDao> kip: General concern that we won't be providing the same compatibility as open platforms. openXR has its good reasons. Consider access to non-predicted poses, like an artist drawing with an iPad instead of a mouse: the line would be a frame behind. Could we solve this with time-relative queries?

<LocMDao> alexturn: We want to be super aggressive about making something feel responsive. WebXR only gives you access to selects; openXR lets you go back 100ms, and by then most platforms will have stabilized. Theoretically there's another option, like a stream of best-quality samples for drawing tools. Something we've thought about, but out of scope.

<LocMDao> bajones: want to point out that there's a relation here velocity/acceleration vs predictive poses. I do think they're distinct topics. We're not quite ready to address this complexity. ie we know discussions from openXR about how far back and forward to predict data, although its worthwhile, it doesn't fit under issues for short updates.

<alexturn> OpenXR velocity definitions (linear + angular): https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XrSpaceVelocity

<LocMDao> NellWaliczek: to extend on that, there's nothing here that prevents us from being additive later. What do we have to finish to be part of CR? That's the temperature I'm trying to gauge.

<LocMDao> kip: if this is more accessible ie time query could solve multiple problems perhaps we could defer velocity so we can solve this in the long term.

<LocMDao> klausw: quickly, about angular velocity: the issue is you throw something with a wrist flick and the controller might have no linear motion. If we punt on this now and people do this kind of hack, the hacks might stick around when people don't update their frameworks.

<LocMDao> ... with time-based sampling you still want to update your velocity. It's not just a prediction but getting the physics correct; the math isn't hard.
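
The "do your own math" the group refers to is, for linear velocity, a finite difference over successive poses. A minimal sketch (the helper name is invented; real controller data would be noisier, and as klausw notes, a wrist flick also needs angular velocity, which this does not cover):

```javascript
// Estimate linear velocity (m/s) from two position samples, e.g. taken
// from successive XRFrame poses. dt is derived from frame timestamps.
function linearVelocity(prevPos, prevTimeMs, pos, timeMs) {
  const dt = (timeMs - prevTimeMs) / 1000; // ms -> s
  return {
    x: (pos.x - prevPos.x) / dt,
    y: (pos.y - prevPos.y) / dt,
    z: (pos.z - prevPos.z) / dt,
  };
}

// A controller moving 2 cm along x over one ~60 Hz frame (16.7 ms):
const v = linearVelocity({ x: 0, y: 0, z: 0 }, 0, { x: 0.02, y: 0, z: 0 }, 16.7);
// v.x is roughly 1.2 m/s
```

A platform-provided velocity (as in OpenXR's XrSpaceVelocity) avoids both the noise and the divergent conventions of app-side differencing.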

<LocMDao> alexturn: it doesn't interact with origin offset. Request where the origin is and then request angular velocity.

<LocMDao> klausw: I'm not seeing this as a must have but leaves a gap where people may end up doing suboptimal workarounds.

<LocMDao> NellWaliczek: clarifying question about creating a space.

<LocMDao> alexturn: We still have offset space.

<LocMDao> NellWaliczek: not for input

<LocMDao> alexturn: openXR has offset space for spaces. People can say this is where my object is. It's a transform but it feeds into my angular velocity map.

<LocMDao> bajones: One thing I want to mention, speaking to what klausw is saying. As an API purist, I want to see the API ship; the best thing we can do is ship the API. It's not clear whether our users will demand this. I'm curious to see if we hear feedback from real-world developers; that would be instructive to this group.

<LocMDao> ... I expect we'll hear "I want to do this with my 360 video right now and I can't" rather than a baseball simulation. There's going to be some kind of implicit priority weighing to get this out. The question for me is who's asking for this right now and who will benefit. I question if it's worthwhile unless we can say it benefits the majority of users of the API.

<LocMDao> NellWaliczek: Quick Straw poll. Chairs agree. Two options:

<cabanier> +1

<LocMDao> ... punting to future

<klausw> +1

<bajones> +1

<LocMDao> +1

<Manishearth> +1

<trevorfsmith> What are the options?

<kip> +1

<LocMDao> +1 is for punting, -1 for rn

<plamb_mozilla> +1

<trevorfsmith> +1

<artem__> +1

<LocMDao> bajones: +1 for deferring to a later spec, -1 for something to deal with rn

<LocMDao> NellWaliczek: moving to a future milestone. Good job team!

<LocMDao> ... #316 and #779

<LocMDao> ... the heart of this problem is that entering an immersive session is asynchronous. You could hold on to the promise you created and call it a second time. The problem is when you start something that requires a user interaction, like media playback or unmuting audio: if it were just a matter of the first promise resolution you could do the thing you want to, but it's weird that the first resolution could be 5 mins from now.

<klausw> https://github.com/immersive-web/webxr/issues/316

<klausw> https://github.com/immersive-web/webxr/issues/779

<LocMDao> ... how do we make it easy for User Gesture protected APIs to be invoked at the beginning of the session like media playback.

<LocMDao> ... going fullscreen although NA requires a user gesture.

<LocMDao> ... klausw, you've mentioned in the past that chrome is thinking of a time-out after a while.

<LocMDao> klausw: the current behaviour, like clicking enter VR: the current implementation starts a 5sec timer attached to clicking the button, not connected to actually starting the VR

<LocMDao> ... that was nasty if you took longer than 5s

<LocMDao> ... I brought this up related to fullscreen, talking to people they say they're fine about not wanting any user activations for future uses because you ended up here. So it mainly leaves things like media playback.

<LocMDao> NellWaliczek: The spec says entering an immersive session, we could set the timer to the session resolving as opposed to a button click which could resolve the timing gap.

<LocMDao> klausw: I don't think everything is timing based. Something that checks the current activation status is messy.

<LocMDao> NellWaliczek: Anyone have thoughts please q

<LocMDao> cabanier: should we be the only solving or is this in the stack? Are there other APIs that suffer from the same thing.

<LocMDao> klausw: what's unique is immersive...

<bajones> Chrome User Activation "V2": https://www.chromestatus.com/feature/5722065667620864

<LocMDao> cabanier: but isn't this the same as camera access?

<LocMDao> NellWaliczek: Camera is a common path as opposed to other cases that require resolved promises. 360 video is a quintessential example. We expect the fair usage to be this. If we can't start the video stream consistently across UAs we have a problem.

<LocMDao> cabanier: Still not sure our group should solve this.

<LocMDao> NellWaliczek: Can we come up with other things that we might be able to confer with other groups.

<LocMDao> ... Is there precedence. Do we need to find a group to collaborate with?

<LocMDao> ... user gesture, web apps?

<LocMDao> bajones: Want to re-iterate what Nell said. Media playback is a primary use we have seen for webXR webVR. Generally agree with cabanier . I don't want this to be our problem.

<LocMDao> ... There is some novel interaction.

<LocMDao> ... see link to chrome user activation v2 above

<LocMDao> ... notable

<LocMDao> https://www.irccloud.com/pastebin/3lRN9I6Q/

<LocMDao> NellWaliczek: Last signal oct 2018. last comment march saying someone shipped in chrome.

<NellWaliczek> https://github.com/whatwg/html/issues/1903

<LocMDao> ... the ask I have is for Mozilla, Apple and anyone who has their own UA that might diverge to go back and see if you have a stance, because we have to do something here. See issue link above


<LocMDao> ... you probably have a direct line and we could pull them into our call to discuss.

<LocMDao> klausw: I hope we could do something simpler. Media playback is big, but the workaround is the button click; you only need user activation the first time they click play, so you don't need activation again.

<LocMDao> ... let me rephrase - if there's a workaround for media playback, are there other issues where there aren't workarounds? like fullscreen.

<LocMDao> ... don't need to solve if no other issues.

<LocMDao> bajones: curious about other browsers.

<LocMDao> dino: Not sure what you're asking. Webkit has a notion of user events. Some things are protected by the requirement. There's a way to ask the event whether it's trusted.

<LocMDao> ... we allow video playback if it doesn't have audio or the video element has the volume set to 0.

<LocMDao> ... if its not 0 and its not from a user gesture we stop playback.

<LocMDao> ... if you've done it in a user's gesture it set a token for the page and that covers audio too.

<LocMDao> ... other things we do - you might do something with a user gesture and requires a timeout later. We have algorithms to deal with this.

<LocMDao> NellWaliczek: You have answered the question that there's a workaround. Is it that you click play once, or interact with the page once?

<LocMDao> dino: scrolling is considered a user gesture so that counts.

<LocMDao> klausw: if its in an iframe you have to interact with that iframe.

<LocMDao> dino: not specific but what some browsers implement.

<LocMDao> ... if you touch the screen, scroll, or click the mouse and then setTimeout the play call with delay 0, when the play function is called that's also considered a user event. All browsers do this. Audio and fullscreen as well.

<LocMDao> NellWaliczek: sounds like there is a workaround from Chrome and Safari/webkit so we could punt.

<LocMDao> dino: my understanding is if you want to play media inside a vr or ar session you're considered to have done a user gesture

<LocMDao> bajones: workaround you have to user gesture to go into vr or ar. This would begin playback. A session is requested and once the promise is complete the media can be resumed without user interaction.

<LocMDao> dino: You could play any video you want without audio.

<LocMDao> NellWaliczek: cabanier, Klaus's issue is that the promise doesn't count as a user gesture.

<LocMDao> bajones: a select event from a controller is a user gesture.

<sushrajaMSFT> https://developers.google.com/web/updates/2017/09/autoplay-policy-changes "Autoplay with sound is allowed if: User has interacted with the domain (click, tap, etc.)."

<LocMDao> ... the worst case scenario: because we have select as a user gesture, if you had media you couldn't start any other way, you go into an immersive session and the user has to click to start the video.

<LocMDao> NellWaliczek: I hear we have to do a test as a sanity check to confirm our belief. klausw can you take point on this?

<LocMDao> bajones: I'll work with klausw and create a sample page to test

<sushrajaMSFT> https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide also seems to allow audio if "The user has interacted with the site (by clicking, tapping, pressing keys, etc.)"

<LocMDao> NellWaliczek: I have enough info to move forward once we do the test page.
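
[The activation behaviour dino describes can be approximated off-browser with a small model: a trusted gesture (click, tap, scroll, or an XR select) grants a token that later unmuted playback relies on. This is an illustrative sketch only; the class and method names are invented and do not correspond to any browser's internals.]

```javascript
// Hypothetical model of per-page user activation, as described above:
// a real gesture sets a token; unmuted playback is allowed only once
// the page holds that token, while muted playback is always allowed.
class ActivationState {
  constructor() {
    this.hasToken = false;
  }
  // Called when a trusted gesture is observed (click, tap, scroll,
  // or an XR "select" event from a controller).
  recordGesture() {
    this.hasToken = true;
  }
  // Whether a play() call with the given muted state may proceed.
  canAutoplay({ muted }) {
    return muted || this.hasToken;
  }
}

const page = new ActivationState();
page.canAutoplay({ muted: true });   // muted video: allowed
page.canAutoplay({ muted: false });  // unmuted, no gesture yet: blocked
page.recordGesture();                // e.g. the click entering the session
page.canAutoplay({ muted: false });  // now allowed
```

[The real test page bajones and klausw plan would instead request an immersive session from a click and call video.play() after the session promise resolves, to check whether that counts as activation.]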

<LocMDao> lunch!

<kip> scribenick: kip

Loc NFB:

Loc Dao's slides for NFB

scribe: If you have a headset and want to try you can do it from wifi. Will post url after
... Haven't talked about NFB before. 30 seconds info
... Funded by and for Canadian government. In house producers hire artists and designers to do work
... Public money, $500 mil per year for content
... (Describing NFB, notes available afterwards)

LocMDao, NFB: Want to upgrade experiences to WebXR

LocMDao: When we make something, need to consider how long it will work
... http://tpac.locdao.com/roxham
... Demoing WebVR/XR experiences...
... Past project, Circa 1948, is a candidate for archival using WebXR
... Circa 1948 was an iOS app and cave experience
... Project about places that don't exist in Vancouver any more since 1948
... Have you heard about LucidWeb?
... The CEO just asked to mention that they launched .. based on WebXR.. at the Venice film festival three weeks ago
... Projects using Aframe for the most part, just released now
... Links will be sent around to look at offline
... Want to see non-WebVR content that may be future WebXR
... Gymnasia, with Felix and Paul
... Experiment in stop-mo VR
... Animators who are Oscar Award-winning
... Camera based, but interactive as well
... Knows where you are looking and reacts to where you gaze
... Creepy
... [Showing scene]
... Biidaaban - Recent project
... Indigenous futurism. Look at Toronto in 150 years..

<scribe> ... Done by indigenous artist. About indigenous language, connected to land

LocMDao: Culture expressed very well in VR.
... Opened at Tribeca
... Toured in 60 communities as installation
... Need distribution mechanism
... fully immersive
... When we did the installation, we did it in the place it happened
... Weird quasi-AR moment
... From discussions, we worked with text and sound positionally placed in experience
... Have you seen "draw me close" at tribeca?
... [playing demo]
... Mix theater and ...
... Creepy when a person hugs you back...
... Will post more links in the chat

ada: We have input profiles, layout format discussion

Nell: Will start this off with a quick demo
... Never tried this before...
... what you are looking at is a test page that I created for the content In the WebXR input profiles repository
... This is something we can thank the technical artist for
... Fallback for different devices if 3d model is not available
... Put the WebXR logo on the handle
... There is a texture on the bottom
... This is the model for the trigger, grip, thumbstick fallback
... Fallbacks for all configurations in the registry
... Wrapping up those models this week. Rest up by end of week
... This one shows what is going on.
... Not final version
... Will explain what will change
... On the right, is a 3d model created. On left is dropdown to pick model
... Choose between right and left handed
... Data that was generated out of the registry file on left
... Represents each thing that is on the gamepad
... Drives each thing that is part
... Allows us to understand the structure of the motion controller
... Automate and genericise. Button axis changes
... This will apply for any motion controller. Press trigger and trigger animates
... Press grip button, can see that it depresses
... Not rendering engine specific. Agnostic code
... Does it by defining which node names in hierarchy to look at.
... Interpolates based on schema structure
... One thing that will change is the orientation that changes
... A little bit off right now
... Library caps you to being within a circle
... Everything works as expected
... It handles axis changes
... Handles pressed changes for thumbstick
... Specialist part.. Pleased with how it came out.. Node in hierarchy represents point on touch pad
... Attached a blue dot
... Blue dot shows where axis components are moving. Dot rendering engine specific
... Node attached to is not rendering engine specific
... WebXR input profiles repository has 4 packages inside it
... First most interested in talking about. High level of all 4
... First is registry package
... For now is a package in repository
... This json contains (with schema to back it).. A definition of how a user agent would represent this particular motion controller in the buttons and axis array.
... Gets conformance across multiple UA's
... The same thing holds for any piece of hardware, not just example
... Can see where it fires the select event
... Named components. Arbitrary strings
... But have a type (grip, trigger, thumbstick, etc)
... Mapping between gamepad object and components...
... buttons array says which component uses the value in the buttons index array
... which axis is this for (x, y...)
... Dove in deep.. coming up...
... Particular profile shows left-right-none.. but can separate left and right.. Left-right, left-none, as various options
... Key thing is that profile id's array falls back to this particular format
... This is current schema. Not intended to say must use this schema. Open to discussion
... captures information we are aware of
... Over next week or two.. Dig in to see if captures all information needed for conformance
... Determine if this is the right way to cover it
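
[To make the registry discussion concrete, here is an illustrative sketch of what a profile entry with an xr-standard gamepad mapping might look like. Field names only approximate the webxr-input-profiles registry schema and are not authoritative; consult the repo for the real schema.]

```javascript
// Illustrative registry profile (field names approximate the
// webxr-input-profiles registry schema; not authoritative).
const profile = {
  profileId: "generic-trigger-grip-touchpad-thumbstick",
  fallbackProfileIds: ["generic-trigger-grip", "generic-trigger"],
  layouts: {
    "left-right-none": {
      selectComponentId: "xr-standard-trigger",
      components: {
        "xr-standard-trigger":    { type: "trigger" },
        "xr-standard-squeeze":    { type: "squeeze" },
        "xr-standard-touchpad":   { type: "touchpad" },
        "xr-standard-thumbstick": { type: "thumbstick" }
      },
      gamepad: {
        mapping: "xr-standard",
        // buttons[i] names the component backed by gamepad.buttons[i];
        // null would mark an index no component uses.
        buttons: [
          "xr-standard-trigger",
          "xr-standard-squeeze",
          "xr-standard-touchpad",
          "xr-standard-thumbstick"
        ],
        // axes[i] says which component and which axis gamepad.axes[i] feeds.
        axes: [
          { componentId: "xr-standard-touchpad", axis: "x-axis" },
          { componentId: "xr-standard-touchpad", axis: "y-axis" },
          { componentId: "xr-standard-thumbstick", axis: "x-axis" },
          { componentId: "xr-standard-thumbstick", axis: "y-axis" }
        ]
      }
    }
  }
};

// Minimal consistency check of the kind the schema validation performs:
// every gamepad entry must refer to a declared component.
function validate(p) {
  const layout = p.layouts["left-right-none"];
  const known = new Set(Object.keys(layout.components));
  return layout.gamepad.buttons.every(b => b === null || known.has(b)) &&
         layout.gamepad.axes.every(a => known.has(a.componentId));
}
```
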
... Context:
... Talk about other packages in repo
... A bit more to go into..
... 2nd package: Assets package
... Combination of two things for each motion controller
... 1 another json file describing other things
... 2 mapping of that asset and this json file in the registry
... Showing example for generic-trigger-grip-touchpad-thumbstick
... Assets element broken out by none, left, right
... If only one for all of them, you can follow the pattern of left-right-none
... Can see we are just started. Only one populated
... Path is relative to root of where assets are stored
... Node name at the top of file is top off asset hierarchy.
... Gets nested further as you go. Expected child nodes
... another layout section maps similarly
... It has the same component object
... Property for each component listed in prior document
... Key thing.. Each package has a build step
... Takes the files in it and puts into package that is distributed and validates schema
... Takes the profile from the registry and merges it into the asset description
... If you want to use these assets, you don't need more files
... Ensures component name matches component in registry
... Where magic happens... Rendering engine agnostic, but still generic
... Digital responses
... For anything with a visual response defined..
... What element will be affected, what data source affected.. Eg, button value
... for case of touch pad. Touch pad X value modifies x axis
... States for which it applies
... Can be in neutral, touched, pressed state
... Driven by next package or by yourself with raw data
... Inside asset is node hierarchy with node names
... Under node names are three nodes
... First node is a min node
... Second is a max node
... Third is a value node
... For each component and value that drives that component.. Position is a transform that drives the position it should be
... Value is from 0 to 1, based on where transform will be.. Interpolate between two nodes
... For example... Trigger is driven by an entry in the buttons array
... Has a value attribute that ranges from 0 to 1
... Trigger node in asset has three nodes in side it
... Min has no transform
... Max has state for when fully depressed
... Value for where value is changed...
... Does slerp
... Between min node and max node depending on value of the button
... For each component there is a top level node
... Under component there are multiple nodes.. For top level (in glb/gltf) has 3 nodes at the top
... Each node has ability to be notified.. Each as 3 children.. Min value.. Max value... One that is actually changed using interpolation between min and max
... don't have visual geometry.. Only transforms
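
[The min/max/value node scheme Nell describes can be sketched as plain interpolation of transforms. The real library also slerps rotations, but positions are enough to show the idea; node shapes and names here are illustrative.]

```javascript
// Sketch of the min/max/value scheme: the value node's transform is
// interpolated between the min and max nodes' transforms by the
// component's 0..1 value (the real library also slerps rotations).
function lerp(a, b, t) {
  return a + (b - a) * t;
}

function updateValueNode(minNode, maxNode, valueNode, value) {
  valueNode.position = minNode.position.map(
    (p, i) => lerp(p, maxNode.position[i], value)
  );
}

// Example: a trigger whose fully pressed pose sits 1cm further back.
const min = { position: [0, 0, 0] };      // unpressed transform
const max = { position: [0, 0, -0.01] };  // fully pressed transform
const val = { position: [0, 0, 0] };      // the node that actually moves
updateValueNode(min, max, val, 0.5);      // half-pressed trigger
```
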


Ada: You set the value ...

Ada: You interpolate based on button value

Nell: Entirely rendering engine agnostic; can define every controller out there based on components
... When thumbstick is depressed, there is a min and max node to allow interpolation
... Based on the x axis value it is interpolated.
... Hierarchy bound.. Can move x axis and y axis separately
... Makes stick move around
... That's second package
... third package:

Ada: some questions

Nell: third package first
... JavaScript library that can parse json files and compute interpolation... Pops out interpolation for any rendering engine you want
... rendering engine walks the node hierarchy and updates within the rAF loop
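
[A hypothetical polling wrapper in the spirit of that third package (names invented, not the library's actual API): each frame the app re-reads the gamepad snapshot and caches fresh values per component for the renderer to consume.]

```javascript
// Hypothetical polling wrapper: each frame, re-read the Gamepad-style
// data and cache per-component values for the renderer to consume.
// Names are illustrative, not the webxr-input-profiles library's API.
class MotionController {
  constructor(components) {
    // components: { componentName -> { gamepadIndex } }
    this.components = components;
    this.values = {};
  }
  // Call once per requestAnimationFrame with the latest snapshot
  // (in a real session this comes from inputSource.gamepad).
  updateFromGamepad(gamepad) {
    for (const [name, c] of Object.entries(this.components)) {
      this.values[name] = gamepad.buttons[c.gamepadIndex].value;
    }
  }
}

const controller = new MotionController({
  "xr-standard-trigger": { gamepadIndex: 0 },
  "xr-standard-squeeze": { gamepadIndex: 1 }
});
controller.updateFromGamepad({ buttons: [{ value: 0.7 }, { value: 0 }] });
// controller.values["xr-standard-trigger"] now holds the trigger value
```
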

Ada:

Joe:

Ada: ... For the left right and none controllers.

<Zakim> ada, you wanted to ask is none a controller for either left or right?

Ada: Left, right, and none is for either hands, like daydream?

Nell: reported by the platform in XRInputSource
... Some controller have capacity to report none but may change themselves to return left or right
... May have ability to change
... We're talking about left, right and none here... If you get XRInputSource handedness of left, this is what you use... right... this is what you use... none... this is what you use

Brandon: ... Daydream controller.. Will never report none
... While physical controller itself, does not have handedness
... Platform has configuration to say if in right or left hand
... Oculus go has same thing
... Always get left or right
... Vive is not inherently handed
... Waits to see movement in relationship to each other...
... will dynamically change and get all three states during lifetime
... Related to what the platform is reporting, not physicality of controller

Ada: does the asset also change
... From vive controller asset to the generic asset which is not symmetrical

Nell: Thanks to Microsoft.. This is now the Microsoft controller. this is handed
... Can switch handedness... Generics have generic model for left, right, and none

Ada: ... vive goes from left to right.. Will it transition?

Nell: XRInputSource will detach and reappear

Brandon: simple thing is to declare same asset for left, right, and none
... Platform can say, is just the same one..

Ada: Thanks.. Next is Rick

Rick: ... Wonder why you interpolate from min and max... Is there a reason you didn't use GLTF animation

nell: Need to know what to drive
... Not an animation. Is a transform.. Do off of gamepad data source
... Is a finite value at each given point of time

Rick: An animation has a start and an end

Nell: No time here, no value here

Rick: Can jump to place in time based on time
... Could have a controller that is not just linearly interpolated. Can do something else

Nell: Possible, but haven't had a request

Rick: If UA wanted to, it could show a button moving with min and max

Brandon: Worth noting that animation works well for trigger but not a joystick.

Kip: q+

Rick: what happens if you load the asset

Alex: Children are only there when the value subnode

Ada: No way to accomplish what you are talking about..

Rick: Would use an animation slider to manipulate

Nell: File an issue in the repository

Brandon: GLTF has defined animations...
... Will need animation blending

Kip: q-

(Brandon covered animation blending I was going to bring up)

Nell: No blending necessary. Uses node hierarchy
... Is a package not a standard.
... Can rev the package at any time.. Can rev the library that goes with it
... Can later speak the animation version instead of min, max version
... Encourage you to file an issue. Even if not feeling strongly
... 23 issues in repo already. . Opportunity for improvement
... Everything described has nothing to do with standardization process
... First package does..
... As a group need to discuss
... Important to agree as a group on the schema
... Every user agent will put the trigger at index 1...
... Issues need to clarify... Anything past first entries is not consistent
... Getting consistencies across UA's on indexes is important
... Pulling up example of that... For Oculus controller....
... To be clear.. Hardware people own registry files
... Just seeded the repo with these things
... Here is left hand, with a button and b button and thumbrest
... Thumbrest is touchpad without axis
... where there would be touchpad, is listed as null
... Everyone needs to agree, "A button here" , " B button here".
... Means we can get consistency on select sources
... Was problem in WebVR timeline
... Intended to fix that
... Fallback for Oculus touch is trigger-grip-thumbstick.. No touchpad
... Quest has oculus touch as intermediate fallback
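
[The fallback walk can be sketched as: take the XRInputSource's profiles array in order and use the first id the assets package recognizes. Function names and the example profiles array are illustrative.]

```javascript
// Sketch of forward-compatible profile selection: walk the profiles
// array in order and return the first id we have assets for.
function selectProfile(profiles, knownProfiles) {
  for (const id of profiles) {
    if (knownProfiles.has(id)) return id;
  }
  return null; // nothing matched; caller shows a generic placeholder
}

// e.g. a hypothetical Quest profiles array with Oculus Touch as an
// intermediate fallback before the generic profile:
const questProfiles = [
  "oculus-quest",
  "oculus-touch",
  "generic-trigger-grip-thumbstick"
];
// An assets package that predates the Quest still finds "oculus-touch".
```
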


Ada: some questions

Ada: ... From Samsung
... Does library provide any kind of event hookup
... For an A button or a thumbstick or grip...

Nell: Issue to figure out what to do with events.
... Finds first one that it can identify, then pulls json file and says what asset url for left and right handed
... Does not actually load asset files
... Has function is object created. Object for each component
... Function to call in raf callback to ensure updated.
... Is a polling api
... Trying to work around the fact that the hardware specs don't have eventing
... Issue filed to find out what to do.
... Can fake it in the library
... First round did not take on
... No user-initiated events on it.. Not sure if there's value in duplicating
... Call to ACTION:
... As it currently sits.. All your hardware has a registry file seeded into the repository
... Already have feedback on Microsoft assets that needs to be resolved.
... The input profiles repo has issues that want to go over
... WMR motion controller has physical menu button
... Ambiguity if should be reserved for user agent or should be exposed to web page for use
... sort of thing that needs to be codified inside the registry so all user agents do the same thing
... Currently menu button in there.. doesn't need to stay in there.. Based on experience
... Other UA's may need to reserve.. so can get consistency
... Other needed consistency for profile strings
... Picked same id's used in WebVR timeframe
... Desire to change those.. Not opinionated..
... Discussion active to find right names
... Alex followed up
... Hardware owners' perspective on that needed

Ada: Picked names for the generic profiles
... Names that seemed like representative of what would be covered
... Eg, generic trigger.. generic trigger touchpad
... Id's defined in WebXR spec should be vendor-(whatever the vendor wants)
... open to changing "Generic".
... No attachment to name
... Walked through things in distribution.. Conclusion is that only one not necessary...
... Put in profile for it
... There are two other default profiles...
... To be nice to Google
... There are motion controllers that don't fit XR standard
... Part of Google hardware platform
... Profiles for those in there as well, but not XR standard mapping
... Do the same thing... Touch pad here... Button here... Pieces are consistent but not XR standard

<ada> ack next

ada: People in Queue

Manish: ... Mixed reality controller menu.. Oculus has that too
... Same icon
... REason discussion specific to WMR?

Nell: Issue was just raised on WMR
... Chrome assumed menu should not be there.. Had PR to remove it
... User agent wasn't reserving it.. Chrome wanted to reserve it
... Precisely why it belongs in the registry

Manish: In registry, Oculus menu is reserved for Oculus

Nell: Some user agents reserving and others don't is worst case
... Should not be exposed in one browser but not another
... Make sure it's consistent

Mounir: ... What mozilla is doing with profile names
... What is Mozilla's implementation

<ada> kip: we're just getting to that part of the implementation

<ada> ... we don't have our own hardware so we will do best for others who have their own hardware

<ada> ... we need consistency across hardware vendors between OpenXR and WebXR with what is reported from the apis

<ada> ... ensure hw vendors get what they want to report what they want to see

<ada> NellWaliczek: alex raised this in an issue

<ada> ... if there is an interest in doing maps into the WebXR registry, if this happens in this repo or somewhere else I Don't mind

<ada> alexturn: [missed]

Nell: ... The build step for the asset package takes the json file from the registry and merges to single json file
... Similar pattern could be applied to an OpenXR repository and merge it with the registry
... User agents could ingest it

Alex: Urgent now state is to ensure that Chrome about to ship...
... Would agree
... What is the current status of Chromium 79 in relation to this registry

Brandon: the generic ones in 78 and lower were sketched out in an explainer
... They never really properly made it to spec

Nell: References in non-normative text

Alex: There are discrepancies

Brandon: Chrome team says "we need something here"...
... Did a best effort
... going to diverge in several places likely

Alex: Can be fixed by updating registry or ...

Brandon: Best if we can come to a consistent view

Alex: Goal for end of day? At least name of profiles?

multiple: [discussing if should decide today]

Nell: This is why we want to talk about registry today

Artem: ... Who is responsible for selecting the proper profile from profile id

nell: The XRInputSource object has a profiles array object
... This array indicates (fallbackProfileIds) ... Two halves to the question...
... [missed]
... As new hardware comes out, may not have assets for new device
... What comes out, can fall back to earlier device (eg, oculus quest)
... Forward compatibility intention
... Package itself is designed so that if new hardware is available, you can add new assets if they conform to the build process of the repo.

<scribe> ... New package will get rev'ed

NellWaliczek: Hoping to coordinate
... Optimistic that AWS will host on a CDN for community
... Will update those as new hardware is merged into repo
... Anyone using CDN would get it
... Any branch/clone will need to update
... Your call as a developer
... No promised about CDN

ada: ... (From Samsung)
... Profile ID's remind of user agent strings
... Can you imagine a smaller hardware vendor that doesn't expect support for their hardware; the next item in the fallback is HTC Vive.
... Then generic
... The vendor prefix is important, as nobody else can create a profile with your vendor prefix
... eg, someone other than magic leap using "magicleap" prefix
... Documented reservations
... If not happy with yours, please let me know and we can change it
... If you don't like name of company
... We defined that strings must be lower case
... In this case "ada's special vendor" would make a PR to add to vendor prefix

Nell: Yes. Your fallback array could contain some other prefix, and would barf...

Brandon: Fallback system works well for this.
... going to get a number of small vendors
... You see a lot of hardware from Chinese vendors which western audience is not paying attention to
... Not gearing content for that
... Seems like a desirable pattern to say "going to register my hardware with my prefix."
... following up with "... but if you want to treat as HTC vive, go for it.. You'll get a right experience"..
... valid from their perspective... As good approximation of what their hardware is doing

Nell: Hardware vendors may be uncomfortable with that. May be like an impostor. Reducing brand identity

Brandon: All browsers pretend to be Mozilla

Nell: You will see the 3d model. Can have negative impact on someone's brand
... I would have a hard time imagining that people would be thrilled with someone pretending to be their hardware.

Brandon: Note that we can enforce that at the tooling level
... If someone ships like this with a built-in browser...

Nell: ... Can't stop from being non-conformant...
... Make sure hardware vendors can voice concerns

Brandon: Presence of generic profiles helps

Nell: This is all of the possible XR standard defaults

Ada: Fallbacks are really nice

Nell: Can't wait to show the rest

Manish: ... Also want to say that there will be controllers without generic fallback. Like knuckles

Nell: Those will have a fallback
... like vive
... Maybe a warning will go in
... I think there needs to be agreement from prefix owners if you fall back to them

Ada: ... The knuckles are by valve, but HTC is different

Brandon: Lineage between devices

Manish: Can get worse when hardware companies change ownership
... We should try to have generic profiles for as many things as possible.
... If someone adds device to the repo with a ... button, we should add a generic with a ... button

Brandon: Not to go so far...
... fallback would probably be appropriate to use "generic trigger grip touchpad thumbstick" if has those components
... Beyond the range with what generic has.. Could identify majority of standard elements, and should be good enough
... If software is configured so it doesn't recognize knuckles uniquely and fell back, it's not likely interested in the finger tracking capability. Not prepared for that

Nell: Put list of generics
... Top two are not xr standard mapping .. Daydream and generic button
... All of the possible variations of XR standard mappings make up the rest

<Zakim> alexturn, you wanted to talk about OpenXR mapping here

Ada:

Alexturn: ... Ada's special hardware that was mentioned... If desktop or all-in-one device happening to use OpenXR... Potentially OpenXR mapping registry is amended
... As well as registry
... Next time browser pulls mapping file, it is wired up
... Browser vendor didnt' have to do extra work

Nell: One more thing (TM)
... See-through hardware vendors will need a profile in the registry... Encourage validating some sort of profile file
... Have 3d asset in there
... Reason: Defines component interaction useful to someone.. Make sure there is a signal that we are happy with to indicate that you don't actually render that controller
... Update the library to accomplish that.. please chime in on issue
... This one is important...
... People are shipping. Let's not have a breaking change
... May be inevitable. Get closure ASAP
... Plan on talking about it in the next call

Alex: Thanks Nell

Brandon: We should bring up on next call, but not have everyone wait on opinions until the next call.
... If you have opinions, especially about generics or a piece of hardware whose representation you are not satisfied with, GO FILE ISSUES!

Nell: One last thought...

<NellWaliczek> https://github.com/immersive-web/webxr-input-profiles

Nell: Strongly inclined to remove the asset profiles for anything without a 3d model
... At time publishing library.. Will be in history if need to pull back up and modify
... Won't have any impact on your registry, but will have impact on if helper library can find your hardware or needs to fall back on generic.
... As of now only have assets from Oculus and Microsoft.. Need to be under MIT license. Until then can't put in library
... Awkwardness around copyright.
... Will follow up

Klausw: Will file an issue.. Trademarks have different issues than copyrights

<LocMDao> Links from my presentation:

<LocMDao> NFB

<LocMDao> https://www.a-way-to-go.com

<LocMDao> https://bear71vr.nfb.ca

<LocMDao> https://roxham.nfb.ca

<LocMDao> Https://dream.nfb.ca

<LocMDao> LucidWeb

<LocMDao> https://sens.arte.tv/webvr/

<LocMDao> https://experience.lucidweb.pro/video/-Lo-6jXVN0tDI7T8u-4F

<LocMDao> https://experience.lucidweb.pro/video/-Lo6wDgQdaP_hW_2Edc7

<LocMDao> https://experience.lucidweb.pro/video/-Lo7GCKy_BgTZmtD3BCh

<LocMDao> https://experience.lucidweb.pro/video/-Lo7SBaR04TGFzDs7V5i

<LocMDao> https://experience.lucidweb.pro/video/-Lo0mjhxHgz-2pkG4814

[Notes from Loc's earlier talk inbound....]

<LocMDao> My notes from my presentation (so kip didn't have to type it all)

<LocMDao> Why does the NFB do this?

<LocMDao> The NFB is a publicly funded non-profit audio-visual producer and distributor paid for by the Canadian public.

<LocMDao> Who do we exist for?

<LocMDao> We have in-house producers that work with artists from filmmakers to designers to developers.

<LocMDao> We spend $20 million on content and our other Canadian public funds spend more than $500 million on content.

<LocMDao> About $35 million of this spending is for interactive and immersive content.

<LocMDao> We take risks and experiment which benefits the private industry and developers.

<LocMDao> But we exist for the Canadian public.

<LocMDao> What did we do from 1939 - 2009?

<LocMDao> Invent POV Documentary

<LocMDao> Invent Auteur Animation

<LocMDao> IMAX

<LocMDao> SoftImage

<LocMDao> What do we do since 2009?

<LocMDao> Linear documentaries and animated films

<LocMDao> Interactive / non-linear storytelling

<LocMDao> Immersive Works mostly in documentary and animation

<LocMDao> Our Immersive Works are all experiments:

<LocMDao> We’ve produced 19 Immersive projects: 12 are Unity, 4 are WebVR, 3 are mobile AR.

<LocMDao> We have an AI project that will probably be experienced as an Immersive Experience.

<LocMDao> We mostly show them at Film Festivals. Our community over the last 10 years is all the big film festivals: Sundance, Tribeca, IDFA, Venice, Toronto and Vancouver.

<LocMDao> We’re part of the policy planning for publicly funded immersive in Canada and I’ve convinced them that WebXR is the best distribution for online, followed by a new network of immersive venues.

<LocMDao> Our 4 webVR projects are evolving - we know the risks of experimenting.

<LocMDao> We had to balance this to our reality of our audience.

<LocMDao> So all the projects have 2D flat web fallbacks even as the WebVR API implementations deprecate.

<LocMDao> We’re hoping to update these projects to WebXR and even consider making them VR and AR experiences that degrade to the flat web.

multiple: Discussing agenda

Nell: Plenary topics of interest in agenda for tomorrow: https://github.com/immersive-web/administrivia/tree/master/TPAC-2019

<avadacatavra> when are we doing trusted ui unconf?

Trusted immersive UI

<Manishearth> scribenick: Manishearth

avadacatavra: what we were trying to discuss yesterday was 2 fold
... first for midsession prompts is it acceptable to say either
... - we use a TIUI to prompt the user
... - or the UA needs to pause or suspend and prompt to reenter immersive
... secondly: do we need deferrable features
... personally it seems to me we have 3 options:
... - one: UI which doesn't want to deal with midsession prompts can just go along with things
... - two: use a TIUI to handle the prompt
... essentially the same as the either or, but we allow people to ignore it
... and if we do i think deferrable features are maybe not necessary
... thoughts?
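
[The options above amount to a decision a UA makes from its own capabilities. The flags below are purely hypothetical, invented to illustrate the choice space; they are not part of any spec.]

```javascript
// Hypothetical decision table for a midsession feature request,
// mirroring the three options above. The capability flags are
// invented for illustration and not spec'd anywhere.
function midSessionPromptStrategy(ua) {
  if (ua.hasTrustedImmersiveUI) return "prompt-in-headset"; // trusted UI
  if (ua.canSuspendSession) return "suspend-and-prompt";    // pause, prompt, re-enter
  return "ignore"; // option 3: silently ignore the request
}
```
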

<Zakim> kip, you wanted to say that one browser implementation may choose to either pause or show a non-interrupting prompt depending on the power of the particular hardware.

kip: for FxR i think we wouldn't need the deferrable permissions prompts so we would allow people to prompt after the session has begun
... but you may see a mix of non interrupting and interrupting prompts based on the level of capability

mounir: chrome is the same. we don't want deferrable features

klausw: the original idea was to allow a UI so that you can tell up front what the UI needs to do, or if there needs to be a choice at runtime
... currently don't see the need for deferrable
... but e.g. if we didn't have a VR/AR split then we might have for apps to figure out what the background is like
... but rn i don't really see where this would be useful

bajones: to back up klaus and mounir and kip: for one, we have specifically designed the API so that the split between backends is as unambiguous as possible
... would like to keep it
... so i shy away from the suggestions that ask us to merge immersive-*, for example
... in the past the reason we wanted to ask for features up front was to account for browsers saying they wouldn't support midsession consent
... but given that that doesn't seem to be the case, any concern is gone

<Zakim> alexturn, you wanted to talk about how deferrable features are ironically needed for the browsers that don't plan to implement trusted UI and talk about alternatives

alexturn: agree that the deferrable stuff would be complicated to intro now
... want to go through alternative
... the need for deferrable features so that browsers that *want* midsession can do so
... there's no way for a page to predict if it needs to ask early
... easy for browsers to say we don't care about this but if they don't implement it
... then you can't do this on others, everyone needs to

cabanier: for us at least i think it would be hard to exit midsession
... e.g. if there's another browser could it then exit?
... we can only have one backing surface. when you pause you can't take that backing surface away
... i think brandon said you can look at the isSessionActive stuff

bajones: i think i said you would only ever have a single active session in a browser, and the UA can handle this

cabanier: could be that the user walked into a different room and is unaware that there's a session open. but then the browser could prompt maybe

alexturn: is there a restriction i'm missing here?

bajones: for immersive sessions we have "exclusive access" to enforce this
... intention has always been that when you have an immersive session ongoing that is what you focus on
... environments where you place different apps around, there's still a mode where you go into "i'm taking over everything now"

alexturn: yeah but you can go in and out. should it shut down all sessions

bajones: aaah no, if your OS wants to run a secondary app that's fine
... doesn't need to end the session but it could
... hard to imagine a scenario where a user is juggling two scenarios
... ... browsers
... to complete mounir's thought, the purpose of deferred stuff is to allow other browsers to not support midsession
... we could even say that you are required to support defer
... big hammer, but we could
... even if we assume an environment where you a browser can't do it
... even in that world we could add an attribute on the XR object that lets you ask "can you do this" and it says no
... i guess that kinda amounts to the same thing, but i prefer to avoid offering this
... this = a path for declaring deferred features upfront if everyone else says they can
... the way everyone wants is to do them just in time

alexturn: sounds like yesterday google was not planning for trusted UI, have i misunderstood?

bajones: thought that what mounir was stating was
... we have a mechanism in chrome rn allowing for midsession interaction
... in the worst case, cardboard, we just instruct the user to take the phone out of the headset
... terrible, but ... cardboard

alexturn: so you're saying it's restricted for cardboard but nothing else?

bajones: no even in cardboard we can support this, just in a really really terrible way
... by getting them to take the phone out of the headset

alexturn: this seems to be a case of violent agreement, do we get people to write down their plans for midsession prompting?

bajones: mmmaaybe? would like to verify with mounir
... idk if klausw or joemedley have anything that may contradict

klausw: do we need to decide which option we choose if it's trusted UI or pause session?

alexturn: no, both are allowed. option 3 was silently ignore

Manishearth: wanted to say that cross-app situations where people want to run immersive sessions while others are blurred -- these are commonish and should be ok

bajones: yeah

alexturn: you may want to jump between immersive apps
... like, i want to go into immersive email, but then go do something else and come back

<Zakim> alexturn, you wanted to confirm Google's plan here - is Google planning to support trusted UI then? and to ask if two tabs are different than two apps

alexturn: right now experiences are usually snackable but switching between them may be useful
... but also this seems like a separate discussion?

bajones: yes

avadacatavra: sounds like people aren't ready to say that we don't need deferrable features

alexturn: i think we were but we need mounir to confirm?

bajones: i feel like he stated that deferrable features aren't necessary?

alexturn: the nuance is that there's a difference between "google doesn't need it for itself" and "google doesn't think it would be necessary at all"
... because if it is necessary for one then you need buy in from all

klausw: don't see a reason where we need to change this pattern and then just ignore the permissions prompts

alexturn: yeah if either of those work it feels like we have the confidence to skip this

avadacatavra: issue 740
... i'll make a note there and see if the various UAs want to add their consensus there?

alexturn: to clarify, we are talking about the ignore thing, yeah?

Manishearth: just wanted to check if we're ok with introducing this in the future with an attribute to check so browsers can just not support this until they need it

alexturn: yeah. but seems like we don't need it now

klausw: quick comment: given the current types of features, we'll be ok as discussed, but we may need to revisit for radical new permissions apis

<Zakim> plamb_mo_, you wanted to comment on link traversal

plamb_mo_: do we have anything to support link traversal?

bajones: no. not due to lack of enthusiasm


avadacatavra: do we want to add link traversal to our discussion for midsession consent?

bajones: we are not in a position of a group to talk about the mechanics of link traversal rn. aside from very high level theorizing

cabanier: at least for magic leap i don't think we're ready to do midsession prompts
... there may be other applications that try to directly render to the screen and they just fail
... because the browser is holding on to the layer, and it fails
... i need to talk to some people

alexturn: is there a reason that pausing to handle midsession prompts would not work

cabanier: it would, it would just pause other apps

alexturn: talking about promoting to immersive?

<avadacatavra> Zakim: please close the queue

cabanier: yeah then the layer is blocked and they get a cryptic error

<avadacatavra> Zakim: close the queue

cabanier: so the app is paused and never resumes to release it

avadacatavra: sounds good
... if we were to do consensus let's do it on #740
... then the issue to discuss what a trusted UI looks like is 718
... and the discussion about whether it is a conformance requirement is 719
... should we do consensus on them?

bajones: for 719, if phrased as "is trusted immersive UI a conformance requirement" the answer is an unambiguous no
... may involve jumping out and back in to the 2d browser
... if that's the case we can't make immersive trusted UI a requirement
... emphasis on immersive

alexturn: to rephrase, we can require a trusted prompt, we just can't require it to be immersive. may be abrupt
... maybe we need to broaden the defn

bajones: yeah and i think the catchword is "immersive" UI
... if we drop that then ... yeah

avadacatavra: fwiw there's discussion in 718 about a difference between trusted immersive UI and a trusted UI

alexturn: yeah

Manishearth: action items?

bajones: make every browser vendor state their UI plans on 740

<plamb_mozilla> https://github.com/immersive-web/webxr/issues/740

avadacatavra: i'll do the comment

alexturn: also we should not make it directly about deferred UI, but indirectly about trusted UI
... also to clarify blur is an old term, what manish was talking about was the /hidden/ state
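A minimal sketch of how a page might react to the visibility states alexturn mentions. The `visibilityState` attribute and its "visible" / "visible-blurred" / "hidden" values match the shipped WebXR `XRSession` API; the session and audio objects below are mocks so the logic runs outside a browser:

```javascript
// React to XRSession visibility changes: keep rendering while blurred
// (e.g. trusted UI shown over the session), mute audio when not fully
// visible. In a real page this would run on the session's
// "visibilitychange" event.

function onVisibilityChange(session, audio) {
  switch (session.visibilityState) {
    case 'visible':
      audio.muted = false;    // full interactivity restored
      break;
    case 'visible-blurred':   // content visible but input is withheld
      audio.muted = true;
      break;
    case 'hidden':            // session not presented at all
      audio.muted = true;
      break;
  }
}

// Mock session and audio state standing in for the real objects.
const session = { visibilityState: 'visible-blurred' };
const audio = { muted: false };
onVisibilityChange(session, audio);
console.log(audio.muted); // true while blurred
```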


<alexturn> https://github.com/immersive-web/webxr-ar-module/issues

<alexturn> https://github.com/immersive-web/webxr-ar-module/issues/28

<plamb_mozilla> scribenick: plamb_mozilla

alexturn: Discussion on session setup and joining immersive-vr and immersive-ar.

klausw: initial position was to join these two, but has changed

bajones: important to avoid exposing data through supportsSession that might be usable by fingerprinters.
... developers should be steered towards the most appropriate api


Manishearth: this breaks the assumption that a browser either supports immersive ar and blend mode, or neither
... Blair Macintyre has an opposing point that the page expresses its preferences and the UA chooses the mode.

<alexturn> https://github.com/immersive-web/webxr-ar-module/issues/28#issuecomment-531479365

Manishearth: what are the conditions for closing the issue?

bajones: consensus is required. give blair an opportunity to propose an alternative

alexturn: the requirement to ship the core ar spec means we should not overdesign now.

bajones: We are late in the process to make this kind of breaking change. Tangible benefits are small.

Manishearth: bajones please document the history in a comment on this issue.

klausw: the position taken in blair's comment also covers accessibility and this should be non-negotiable.

alexturn: not adopting this does not preclude full accessibility support

<klausw> context: blair's comment is https://github.com/immersive-web/webxr-ar-module/issues/27#issuecomment-531480564

alexturn: the path to address those concerns should be documented in issue 28.
... issue 27 is the more philosophical cousin to 28

Manishearth: this is an issue for core spec if we have declined to take up 28

<Manishearth> https://github.com/immersive-web/webxr-ar-module/issues/27#issuecomment-531480564

klausw: we can't force content developers to make everything accessible.

Manishearth: the outcome of this discussion is likely to be meta-text (rather than normative or non-normative text) to set out general principles.

klausw: Blair was mentioning that both best practices and libraries need to put work into this area.

alexturn: I read this as "we have these principles and sometimes these are resolved at different levels of the stack".
... fleshing out the ar pieces of the spec forces us to deal with these issues, so this issue belongs in this module

klausw: candidate for a question for the TAG is David's comment "How should it weigh a potentially poor user experience vs. blocking access to content"
... We can do this already with the API the way it is, e.g. the overlay demo I did works in both VR and AR with minor code changes.
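The "minor code changes" pattern klausw describes might be sketched as a preference-ordered session request. `isSessionSupported` and `requestSession` follow the shipped WebXR API shape; the `xr` object here is a mock standing in for `navigator.xr` on a VR-only device:

```javascript
// Request the first supported mode from a preference-ordered list,
// so the same page works on AR and VR hardware with one code path.

// Mock of navigator.xr for a device that only supports immersive-vr.
const xr = {
  async isSessionSupported(mode) { return mode === 'immersive-vr'; },
  async requestSession(mode) { return { mode }; },
};

async function requestPreferredSession(xr, modes) {
  for (const mode of modes) {
    if (await xr.isSessionSupported(mode)) {
      return xr.requestSession(mode);
    }
  }
  throw new Error('No supported immersive mode');
}

requestPreferredSession(xr, ['immersive-ar', 'immersive-vr'])
  .then((session) => console.log(session.mode)); // falls back to immersive-vr
```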

kip: To follow the spirit of Blair's comment on economic factors and cost of headsets. Already in Firefox Reality we have plans to experiment with allowing users to use e.g. a VR device to view AR content or vice-versa.
... What this means on the spec level is that implementers of pages have to expect that they'd be blocked in some circumstances. The user would have the option to allow or deny the display on non-ideal devices.

<Zakim> kip, you wanted to say that in the spirit of Blair's comment, we would experiment with AR-in-VR features for FxR on relatively cheap (eg, Oculus Go / Quest) hardware, but don't

bajones: It's a straw man to look at the VR/AR split as being representative of progressive enhancement
... We should provide resources for developers to build apps that support progressive enhancement, but shouldn't force them.
... We have a counter-example in the group already: where we wanted to support only the select event.
... this was seen to solve issues around accessibility etc but got a massive amount of user pushback
... Now we expose gamepad support and this means some devices won't support experiences that require gamepad.

alexturn: Addressing this is best done by providing tools and support to make it easy for developers to provide progressive levels of support.

Manishearth: I'm going to close #28, but #27 is nebulous

alexturn: We could keep it as it a place for discussion on this general principle

Manishearth: I feel like our process is ill-suited for philosophical questions like this but this question is important.

bajones: I don't like leaving open open-ended issues that just collect discussion but don't have a concrete proposal
... It's difficult to judge the success criteria for this issue
... We might need to call for more specific issues that address the area

<kip> Will copy to the issue as a comment

alexturn: splitting things or not doesn't prevent a UA supporting both

bajones: experiments like those being conducted by Firefox show that the UA can address this rather than requiring spec to address it

kip: yes

Manishearth: We should close this issue but reopen a separate issue for the experiment
... on https://github.com/immersive-web/webxr-ar-module/issues/29:
... we're already allowing people to distinguish between these two things. e.g. on Android
... the only time we need to distinguish between these two modes is where the device might want to give the user the choice.. the only place where that is the case now is on Android phones, and for these devices handheld is the preferred mode

klausw: I don't think we should complicate the overall modes based on hypotheticals ... it's OK for the device to report just-in-time that it can't fulfil a certain type of interaction that the content requires.

cabanier: I'm not sure we should close.

alexturn: at a minimum we need to let the page know what the presentation is at runtime, but I'm not sure we want to distinguish between possible available modes before the start of a session

Manishearth: It's going to be rare that a device supports both modes, and most experiences are designed for a single mode.

cabanier: BUT if the device doesn't support the app's desired mode, it's just going to fail and that's not good UX

alexturn: I could see headworn vs handheld being something that is given up front. Fingerprinting risks might be moot.
... https://github.com/immersive-web/webxr-ar-module/issues/9 is a critical issue that we want to continue to follow up online, but #29 is something that we can potentially close.
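One runtime signal that already exists for "letting the page know what the presentation is" is `XRSession.environmentBlendMode` from the AR module, whose values are "opaque", "additive" and "alpha-blend". A hedged sketch using a mock session:

```javascript
// Adapt rendering to how the device composites with the real world.
// On additive (see-through) or alpha-blend (passthrough) displays,
// clear to transparent so the real world shows; on opaque (VR)
// displays, clear to a solid backdrop instead.

function pickClearColor(session) {
  return session.environmentBlendMode === 'opaque'
    ? [0.1, 0.1, 0.1, 1.0]   // solid grey for fully opaque displays
    : [0.0, 0.0, 0.0, 0.0];  // transparent for additive/alpha-blend
}

// Mock session standing in for a real XRSession on see-through hardware.
const arSession = { environmentBlendMode: 'additive' };
console.log(pickClearColor(arSession)); // transparent clear color
```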


Manishearth: #9 needs people to chew on it first.

alexturn: Any other issues in the main list that need addressing urgently at TPAC?

Manishearth: One other I want to close (#21)

alexturn: OK answer is no

session closed

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/09/23 09:47:16 $
