W3C

- DRAFT -

Immersive Web WG/CG F2F in TPAC, day 1

25 Oct 2018

Agenda

Attendees

Present
Ada, Chris, Dom, DanDruta, JoeMedley, Ravikiran, CharlesLaPierre, cwilso, mmocny, NellWaliczek, mdadas, chrwilliams, ChrisLittle, clapierre, dkrowe, cabanier, newton, svillar, ddorwin, Laszlo_Gombos, jeff, Jungkee_Song, LocMDao, janine, Tony_Brainwaive, Sangchul_Ahn, BrandonJones
Regrets
Chair
Ada, Chris
Scribe
boaz, johnpallett, max, Chris Little

Contents


<dontcallmeDOM> ... when speaking, please use the mic to make sure remote participants can hear us

<ada> Hi Everyone!!

chris: please queue up on IRC - managing the queue helps everyone participate equally

... using q+ will be tracked by Zakim, one of our IRC bots

<Zakim> ada, you wanted to say hi!!

chris: I sent a note on how to use IRC this morning

<dom> HOWTO use IRC

chris: queuing is particularly important given remote participation

<ada> Do you think the Immersive Web is cool?

<ada> +1

<johnpallett> +1

<dom> +♄

<madlaina> +1

<NellWaliczek> +1

<cwilso> -1

<ChrisLittle> +1

<cwilso> +1

Chris: [Work mode slide]

... the CG has been running as a happy group for a long time

... in the context of the WG, we operate under stricter IPR rules

... ideally, we would get participants in this space to be in both groups

... we will need to track more closely which group each contribution is made in, given the groups' different IPR policies

... we think of ourselves as one big happy group, the "Immersive Web Group", managed in the single GitHub "immersive-web" organization

... we will see items moving from incubation in the CG to standardization in the WG

... some items may also move to other WGs or other places

... in the WG, we are bound by what our charter states

... the chairs will try to keep all that black magic away from the WG participants as much as possible

Ada: if you're part of a company that is not a W3C Member, get in touch to look into invited expert status

Chris: [Timeline slide]

... The WG has a finite timeline (unlike the CG, which exists indefinitely)

... our charter ends in March 2020 - we would expect to recharter by then

... but we're not going to wait until March 2020 to get a standard out

... my personal goal is to stabilize WebXR as early as possible

... I see that as a key component to adoption of the API both by browsers and app developers

Ada: As a developer advocate, the current situation we're in is not so great

... WebVR is not going to be the final spec

... but WebXR is not quite ready to play with yet

... When talking with developers, it's really important to be able to communicate clearly about what can be adopted

... having developers play with the spec is key in getting input and feedback

Chris: [Structure for the meeting slide]

<ada> ack

<Zakim> johnpallett, you wanted to ask if there are any guidelines for observers during these sessions.

<dom> Meeting agenda

JohnPallett: any guideline for observers?

Chris: anyone making a contribution (e.g. "make it this way") needs to be at least in the CG, if not the WG

... but otherwise, observers should feel free to participate and ask questions within these boundaries (and our code of conduct)

... [reviewing the agenda]

... we have a series of lightning talks identified on the agenda - please get in touch if yours isn't in the list

... we also have an unconference slot for smaller conversations

... we have 3 slots tomorrow as joint sessions with other WGs: audio WG (in the morning)

... in the afternoon, a discussion with WebPlatform WG on Gamepad API

... and a session with the Privacy IG

Ada: if you're lost with the WG at any point, all the minutes, agendas etc are kept in the administrivia repo

Chris: that's also where we track our work item issues to keep the groups running

... and where you can file topics for the unconference

Ada: tomorrow will be more future oriented, ie CG stuff

Chris: our charter is designed to adopt work items from the CG

... for things that change the scope of our work, we would need to recharter which comes with some administrative cost

<Zakim> NellWaliczek, you wanted to explain the CG/WG repo layout

Nell: if you're new to the immersive web group, or have not been active recently, you might have missed some of the recent evolution in our logistics

... [reviewing github repo on github.com/immersive-web]

... one of the most important repos is webxr, where we develop the WebXR API

... the other important repo is "proposals" where we are suggesting people bring their ideas for new features or new specs

... in the proposal repo, you can bring up ideas that are not fully baked yet, look for other interested participants

... once it's clear there is enough momentum and shared interest, this idea is then promoted to its own repo by our CG chair, Trevor

... you can find on our org repo some repos that have already graduated, e.g. anchors, hit testing

ChrisLittle: in the Spatial Data IG, we have formalized our incubation process into a funnel process

... is this formalized in this group as well?

Nell: there are two sides to this question

... one is about how do we decide something is ready

... and then, from the WG, how to decide whether to adopt a given item (not everything from incubation will land in the IW WG)

Chris: we have a process for this inspired by my work in the Web Incubator CG

... we are really tightly coordinated between CG/WG with a regular call with the chairs and editors

Ada: when speaking up, please state your name and company

<Zakim> JMedley, you wanted to comment on pinned repos.

JMedley: in addition to the spec, one of the pinned repos is webxr-reference where I've been documenting a reference for the WebXR spec

... it's early in the process to do that, but I want to stay as close as possible to the spec development

<Zakim> ada, you wanted to give a very quick overview of what WebXR is.

... the end goal is to bring this to MDN, so I'll make sure pull requests provided to that repo match the requirements for that later transition

<cwilso> Intro deck we've been presenting from: https://docs.google.com/presentation/d/1d-DHItFt_p6ur_g5ti3yGJpYRBw4fGuqU-CXi1s6GNQ/edit?usp=sharing [PDF]

Ada: I wanted to give a quick summary of what WebXR is

<JMedley> To clarify what I said, I want a smooth implementation path for early adopters.

... it's an evolution of WebVR, expanding to include elements from WebAR

... WebXR covers both AR and VR

What's new in the WebXR Device API

<dom> Slides: What's new in WebXR Device API [PDF]

Nell: We have a WG - yay!

<dom> [applause]

Nell: Brandon was supposed to report on the Chrome origin trial

... there was a lot of value in the WebVR API

... but we identified that it wasn't well suited to the new wave of hardware coming to the market

... not all of WebVR has found its way into WebXR yet

... this has made it challenging to migrate from WebVR to WebXR

... AR is very shiny, exciting

... WebXR is a foundation for AR, but it doesn't yet have all the building blocks required for real AR experiences

... plus, there is still confusion on how to use the API (unsurprisingly given the current state of things)

Chris: Origin trials are a process in Chrome where we can deploy features, but only to specific origins/web sites

... with a clear expectation that these features can and most likely will be changed / removed in the future

... Chrome has never shipped WebVR unflagged/un-origin-trialed

Nell: we're going to talk about what's new in WebXR since ~July

... [AR Readiness]

... a general theme of that is we're getting closer to being AR-ready

... XREnvironmentBlendMode reflects the different types of AR modes (see-through vs pass-through)

... e.g. see-through environments can have black shadows

... so we need to handle the fact that you'll want to draw your experience in different ways depending on the hardware

... XREnvironmentBlendMode is dealing with this
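
[Scribe note: for readers following along, a rough sketch of how a renderer might branch on the blend mode. The enum values ("opaque", "additive", "alpha-blend") and the environmentBlendMode attribute follow the direction of the proposal but are not final API.]

    // Sketch (not normative): clear differently depending on how the device
    // blends rendered pixels with the real world.
    function configureClear(gl, session) {
      switch (session.environmentBlendMode) {
        case 'opaque':      // VR headset: the app owns every pixel
          gl.clearColor(0.1, 0.1, 0.1, 1.0);
          break;
        case 'additive':    // see-through AR: black effectively renders as transparent
          gl.clearColor(0.0, 0.0, 0.0, 1.0);
          break;
        case 'alpha-blend': // pass-through AR: camera image is composited behind
          gl.clearColor(0.0, 0.0, 0.0, 0.0);
          break;
      }
      gl.clear(gl.COLOR_BUFFER_BIT);
    }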

... the other big item is a refactoring of our frame of reference system

... the goal is to look at the patterns of experiences we want to enable, to simplify support for the wide variety of devices we want to support

... (from 3DoF devices to full walkable 6DoF devices)

... so we looked at some of the common intents from developers (e.g. roomscale experience)

... there is ongoing work on enabling "AR mode" for sessions - it enables automatic environment composition

... we want to avoid having to manage camera feeds - esp since not all AR devices will be camera based

... There were a number of security issues we got feedback on - exposing camera feeds to XR developers creates risks of abuse

... we discussed this at our September F2F

... the conclusion was to base our approach on the "mode switch" that getting into an XR experience represents

... we will discuss this later

... [Other API improvements]

... We have removed the XRDevice - simplifying the API

... we have added XRRay used for input

... naming updates for clarity and alignment

... we tend to finalize decisions on types once the higher level decisions have been made

... [upcoming area of focus]

... We're working on making our work process more transparent - we will communicate on this shortly

... on the horizon: API ergonomics that need clean up (some of it will be discussed today)

... I mentioned the pain points of the WebVR migration: the lack of parity with WebVR input

... this will be discussed at our next F2F

... Anchors and Hit Testing, which are developed in separate repos, are progressing nicely - we will discuss their timeline to migrating to WebXR

<johnpallett> +1

... there have been discussions on enabling a safer mode for AR, with fewer scary permissions

<Ravi> ship it

... but the key thing is that WE NEED TO SHIP THIS!

... GO WEBXR

<Zakim> ada, you wanted to deal with administrivia

<johnpallett> +1 to shipping

<Zakim> JMedley, you wanted to explain what I just did with the microphone

Ada: a couple of administrivia issues

... we're going to break before stepping into the big next topic

... also, we will have our next F2F on January 29, 30 in San Jose, hosted by Samsung

... (please note it's not the same place as the WebVR workshop from 2016!)

Chris: [break until 10:30]

<ada> Getting started

Coordinate systems fallout topics

<boaz> cwilso, NellWaliczek and ada, what is a good time to discuss the testing topic?

<dom> ScribeNick: johnpallett

<boaz> scribe: boaz

<johnpallett> Ada: Additional trivia - cadence of meetings - WG meetings are biweekly on Tuesday; CG also biweekly; they alternate (so there's a meeting every week). For WG meetings Ada will send schedule ahead of time; if you want, add a pull request to github on the agenda, or just email Ada and it will be updated.

im so sorry, im supposed to be scribing somewhere else

<scribe> scribe: johnpallett

Issues filed related to PR 409 (coordinate system changes). There's a link in the agenda.

<NellWaliczek> https://github.com/immersive-web/administrivia/blob/master/TPAC-2018/cs-fallout-topics.md

<dom> Coordinate Systems Fallout Topics

<mmocny> PR 409: https://github.com/immersive-web/webxr/pull/409

Nell: In course of merging PR there were a set of issues that weren't blocking; those are what we'll review to see if we can get closure.

<NellWaliczek> https://github.com/immersive-web/webxr/issues/228

Nell: This is an old issue that was reactivated about how we want to represent stage bounds; e.g. was it a general polygon that was floor-aligned?
... developer feedback wanted to know whether there was a square inside the bounds, not just a polygon, i.e. an inscribed square axis-aligned to the origin?
... editors are considering allowing both polygon and square (right now platforms have to compute the rectangle themselves)

alexturn: Microsoft Mixed Reality was a platform that chose to make polygon available, but a lot of apps don't use it since it's not trivial to lay out content in an arbitrary polygon... even within a rectangle it's a stretch.
... this is a problem people will have to solve for AR as well with arbitrary geometry; middleware will likely appear over time, but for bounded VR developers the sweet spot has proven to be a rectangle.
... some platforms might just give out a rectangle as their polygon; there's a compat risk if they only give the polygon as a rectangle. If developers only test on platforms that give rectangles then they may not account for arbitrary polygons.

Nell: Bounds are important. Walking into walls is bad!
... a lot of systems have their own guardian/chaperone to keep people from walking into stuff; it's not necessarily something the web site should handle explicitly.

<Max> +1

<cwilso> +1

Nell: s/walking into walls/punching walls/

<alexturn> +1

<darktears> +1

Nell: (but both are bad)

<dantheman> +1

<ada> +1

<Zakim> edent, you wanted to ask about user research

nell: ... rectangle will be entirely enclosed within the polygon - so may be small if the polygon is non-convex
... this will be part of the spec.
... this is safe to add since we're not expecting the page to render the polygon on the page.
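
[Scribe note: an illustration of how an app might consume the bounds polygon today, e.g. to keep spawned content inside the play area. The shape of the bounds data (an array of points on the floor plane) is an assumption based on the explainer drafts.]

    // Sketch: standard ray-casting point-in-polygon test on the XZ (floor) plane.
    // `bounds` is assumed to be an array of {x, z} points; the exact API may differ.
    function isInsideBounds(x, z, bounds) {
      let inside = false;
      for (let i = 0, j = bounds.length - 1; i < bounds.length; j = i++) {
        const a = bounds[i], b = bounds[j];
        if ((a.z > z) !== (b.z > z) &&
            x < (b.x - a.x) * (z - a.z) / (b.z - a.z) + a.x) {
          inside = !inside;
        }
      }
      return inside;
    }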

<chrwilliams> +1

+1

nell: PR will post next week.

<Ravi> +1

<cwilso> 8 for, 0 against,

<dom> Requiring multiple frames of reference #417

<cwilso> 9 for, zero against.

<ada> It's for placing objects in reach rather than to get the developer to draw the bounding box/poly

nell: Want to encourage progressive enhancement. Developers should pick minimum frame of reference, i.e. stationary where possible, bounded if they want users to walk around a little bit, unbounded if they want people to be able to walk a lot.
... not all hardware supports all these frame of references and we want to encourage developers to have a fallback plan.
... want to allow developers to provide an array of options where they get the ones back that are supported
... not all experiences have a fallback path, though. We don't want end-users to enter VR, have their whole system spin a bunch of stuff up, then immediately be told "whoops, sorry, can't do that"
... so session creation includes frame of reference type that is a baseline type required for session creation, and session will be rejected if not possible.
... when originally spec'd this was a single frame of reference type, but this was before an array of frame of references (FOR) could be passed in.
... a question came up: should session creation allow multiple FORs to be blockers for session creation? But there were multiple questions. i.e. If you give bounded AND unbounded, are BOTH required or is ONE required (i.e. is it an AND or OR)?

<Zakim> dom, you wanted to compare with getUserMedia constraints

dom: this reminds me of GetUserMedia where you can request minimal or maximal resolution from camera
... has there been a comparison with that system?

nell: yup.
... independent of GetUserMedia the answer right now is to not try to replicate that pattern, but there's a separate workstream that brandon is driving on property requirements for a session that might be requested.
... the question right now is whether asking for a FOR type indicates you need all, or one of them

<Zakim> dantheman, you wanted to ask about bounded vs unbounded

dantheman: Some experiences don't work in both bounded and unbounded, I need to know which one I have

nell: 2 pieces of API here. One is an XR session function called "requestFrameOfReference" which takes an array of frameOfReference options. The order of the elements in that array indicates the preference of the developer for which session they'd prefer.
... this is covered in the explainer as a sample of that behavior (or behaviour).
... the question is whether, at creation time, they're hierarchical. At present they are, but long-term they may not be.

<cwilso> john: at present there's an implied hierarchy. If the order matters, does hierarchy matter?

<dom> requestFrameOfReference method

nell: requestFOR API is ordered. The required session one is not ordered (it's only one element); this is the one in consideration.

<dom> WebXR Device API - Spatial Tracking

nell: it sounds like it's an OR, i.e. succeed at creating the session if you can do 'one of these things'

<dom> https://github.com/immersive-web/webxr/blob/master/spatial-tracking-explainer.md#ensuring-hardware-compatibility

<dom> Ensuring hardware compatibility

nell: (see #ensuring-hardware-compatibility) the array for .requestSession as required types is likely to be an OR
... then the developer can pick which one they like when requesting a FOR
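
[Scribe note: a sketch of the pattern under discussion; the option and method names are illustrative of the direction described above, not final API.]

    // Session creation succeeds if ANY listed frame-of-reference type is
    // supported (the OR semantics discussed above)...
    const session = await navigator.xr.requestSession({
      immersive: true,
      requiredFrameOfReferenceTypes: ['bounded', 'stationary'] // hypothetical member name
    });

    // ...then the app asks for its preferred type first, falling back in order.
    const frameOfRef = await session.requestFrameOfReference(['bounded', 'stationary']);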

Max: What about creating a new session each time?

Nell: Strongly discourage that. Causes flickering and other bad things at the OS level.

Max: Is there no way to do an AND then, i.e. removing possibility of an app that wants to do two at once?

Nell: The idea here is to prevent useless sessions from being created

Alex: If you ask for a second FOR and it fails, then you at least still have the session you cared about

Max: So, try creating the FORs when you create the session

Alexis: Can query?

Nell: No, cannot query whether session is supported. Might be useful to give list of supported sessions as a return value

<Zakim> johnpallett, you wanted to bring up use case where developer may wish to switch between FORs in a session

Max: There is a way to support an app that requires two, it's just not convenient

John: A way to support both AND and OR? One of the use cases is bounded+unbounded

Nell: Could do, but trying to figure out whether AND behavior is really required... is this a common enough case to contort the API shape to support?
... A simpler API provides a better path for developers to be successful
... redirected walking is a core use case (make users walk in circles to support a large virtual space in a small physical one). This is a use case for progressively supporting unbounded.
... let's vote!
... OK! Moving right along, will post PR
... On to body-locked content (#416)

<dom> Body-locked content #416

Nell: body locking is where, if you put a virtual object in virtual space and then walk away, it follows your position but not your orientation
... so it preserves orientation vs. user but maintains a distance relative to the user.
... useful for something like a virtual desk or UX elements. It's not something that every experience builds.
... some developers like to create it, e.g. OS UI for a hololens or magic leap where there's a panel that pops up and then slowly adjusts its rotation to match your view direction
... there isn't a FOR for this today. You can sort of do it, but since today's concepts bridge tracking technologies, a stationary FOR on a Vive is different than on a GearVR, so if you want to create body-locked content there isn't a single FOR you can use.
... there are 3 options laid out in the issue, want feedback on whether we should include this content, and which approach feels intuitive if we do.
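
[Scribe note: for context, a sketch of the app/engine-level workaround mentioned in the discussion: content that follows the user's position each frame while keeping a world-fixed orientation. Purely illustrative; the pose/vector shapes are assumptions.]

    // Per-frame update for an app-level "body-locked" panel:
    // position follows the user, orientation stays world-fixed
    // (unlike head-locked UI, it does not rotate with the head).
    function updateBodyLockedPanel(panel, headPosition) {
      panel.position.x = headPosition.x + panel.offset.x;
      panel.position.y = headPosition.y + panel.offset.y;
      panel.position.z = headPosition.z + panel.offset.z;
      // panel.orientation is intentionally left untouched.
    }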

<Blair_Moz> ... johnpallett, it's not body locked, tho

Nell: (in response to question) face locking is less desirable for users

<Zakim> alexturn, you wanted to say something about WinMR experience with body-locking

alex: this serves two purposes. First is the UX that follows you that Nell mentioned. Second is tracking loss; if we lose tracking we'll switch to body locked FOR
... there are better approaches for tracking loss than this, so would prefer to not use body locking for that purpose
... developers tended to build their own body locking semantics at the engine layer; should only include in WebXR if we think it's something everyone should use

blair: This FOR is useful, it gets added to AR systems. Term 'body locked' isn't ideal since if you turn 90 degrees and walk, desk may be behind you in our implementation.
... I like this FOR but I don't have a good suggestion for an alternate name. Maybe 'body position'?

nell: we can discuss naming later

<Zakim> johnpallett, you wanted to ask whether there are actually any native requirements or if middleware can handle

john: can platforms deliver a consistent outcome here?
... and is there anything necessary for this to be native, or can middleware handle it?

alex: neck tracking might make this difficult if it's not in the platform, since position may be lost in stationary FOR but platform could handle neck movements

<Blair_Moz> ... the only other thing would be if you had an estimate of the floor in the platform

<Blair_Moz> ... so you could put the origin down on the floor

nell: in a FOR where position is locked the native platform could help with this, middleware layer couldn't extract this one

john: will platforms give a consistent experience across platforms?

<Blair_Moz> ... yes, to what alex is saying

alex: the FOR should be consistent, what the app actually does with it (i.e. moving the UX to be in front of the user) might vary

nell: adding new FORs is heavyweight, so the real question is whether the intended usage satisfies the need and whether that need exists

<cabanier> +1

nell: happy to let this stay in github so people can comment

alex: in practice, the effective behavior of this is the old neck-locked FOR; the reason we moved away from this is because some apps might want neck modeling but not 6DOF.
... challenges with compat again with that old neck-locked FOR, which led to the design that we have now.
... it was explicit that we moved away from a neck-modeling FOR, didn't want to lose parallax effects on sites that only test on 3DOF devices

nell: a big part of this is progressive enhancement, ideally you say your ideal FOR
... not hearing anyone saying 'please include this'
... blair, can you get the experience you're building if you can't distinguish between generic 6DOF contribution vs. neck contribution?

blair: don't have a strong sense right now for how much it would contribute at this time

nell: let's let the issue sit; this is an additive change anyway.

<alexismenard> ScribeNick : alexismenard

<johnpallett> nell/cwilso: Let's change the milestone on the issue but keep it open.

alexturn: in winmr in practice apps engines like unity will adjust the UI by themselves
... we just let this handled by the middleware layer

NellWaliczek: next topic diorama content

<dom> Diorama-style content #31

NellWaliczek: Magic Leap is interested in talking about this, and would welcome a partner to work on this with them.
... posted on the proposals repo so far
... diorama definition: take objects, put them in a shoebox, turn the box on its side, and then you look into the box and see content. In VR, you're viewing the web in a 2D window; you can imagine if you cut out a hole and put a shoebox in the middle, then if you move your head you could see the content of the shoebox
... different from magic window: with magic window I'm centered and I look at the content...
... in a diorama you're not going to put content behind the person, it's not going to be rendered....

Rik: at the start of the device you have a diorama to look at the content. Also in a game that we built...

rick: example: Wayfair, seeing the furniture. A scaled-down version of the content....

<Blair_Moz> no

<Artem_Bolgar> Sounds like a great thing to implement in Oculus Browser (which is in fact a window in VR)

NellWaliczek: the issue I filed in the proposal repo #31
... what are the technical constraints to support the concept. Getting first on the same page to understand the concept

<Blair_Moz> It feels orthogonal to WebXR; seems like you need head position relative to page and then you can just render in a WebGL canvas

NellWaliczek: the PR explained the difference between magic window and diorama
... on an XR headset you want the diorama to be rendered in stereo
... on a mobile phone it's not rendered in stereo
... typically the content rendered in the diorama is not life-size

<Blair_Moz> ah, yes, alexismenard, thanks, the stereo is a big difference compared to just doing WebGL

NellWaliczek: next issue is renderloop update refresh rate
... how different is the refresh rate from the cutout vs refresh rate of the headset
... when talking about diorama we're talking about them inline rather than in full xr
... the last part of the PR is that we have to manage the projection matrix and view matrix
... not necessarily aligned with the frame reference
... it's a squished version of the world
... we did address this desire to unify with rendering devices that have no pose data to report
... e.g. a desktop computer with an HMD connected, but you may want to support diorama inline...
... you will have to modify the projection matrix you get from the platform...
... clipping is also different from regular immersive xr
... last issue is around pose data
... rick is going to cover more about that
... there is a topic about that tomorrow with johnpallett
... don't want to spend a lot of time about that right now
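
[Scribe note: a sketch of the kind of matrix adjustment being described - the diorama scene is scaled down and translated into the "shoebox" before the platform's view matrix is applied. Assumes the gl-matrix library; the box origin and scale factor are illustrative values.]

    import { mat4 } from 'gl-matrix';

    // Squish the world into a miniature box rather than rendering life-size.
    const dioramaTransform = mat4.create();
    mat4.translate(dioramaTransform, dioramaTransform, boxOrigin);   // move scene into the box
    mat4.scale(dioramaTransform, dioramaTransform, [0.1, 0.1, 0.1]); // 10:1 miniature

    // Model-view for each object in the diorama:
    const modelView = mat4.create();
    mat4.multiply(modelView, dioramaTransform, objectModelMatrix); // scene space -> box space
    mat4.multiply(modelView, viewMatrix, modelView);               // box space -> view space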

<Zakim> alexturn, you wanted to talk about pose prediction and reprojection for dioramas to get best quality

NellWaliczek: we'd like to spend more time on that outside of TPAC as well

alexturn: if you want to have the highest quality diorama, you would need to predict head pose (just like regular VR).
... blair asked about achieving that with a canvas...

<Blair_Moz> I think it isn't an issue when you are rendering the web page in the view already

<Zakim> Max, you wanted to ask about content coming out of diorama and whether this is just app/middleware

Max: are we sure we don't want to consider content popping out of the diorama?

alexturn: the limitations if you don't have that: no reprojection, no stereo rendering. You want to apply the same tricks as in VR to get smooth positioning.

NellWaliczek: the refresh rate of the 2D content is also different from VR headset
... content popping out: user comfort needs to be taken into account. Tricky to achieve given it's 2D content. You could build this using a hybrid approach
... about who is allowed to render where
... please find NellWaliczek or magic leap folks on the breaks to discuss that (also github)

<ada> Issue URL: https://github.com/immersive-web/proposals/issues/31

NellWaliczek: takes us to bikeshedding...
... going to be involved conversation...

<dom> Handling tracking loss #243

NellWaliczek: tracking loss, issue posted quite a while ago...
... when we worked on #409: what happens when tracking is lost, and whether you're able to regain tracking or not
... the big question is how the underlying platform handles it
... would like to hear platform providers explain how their experiences behave in case of tracking loss (bounded vs unbounded)... 6DoF and 3DoF

example: some 6DoF experiences fall back to 3DoF when losing tracking

and say you take your HMD off, move around the house, put it back on again, and the room is unknown and then the space is unknown

scribe: so what do we have to do for developers to handle tracking loss/restore

<Ravi> q

<Zakim> alexturn, you wanted to talk about WinMR tracking loss/recovery

alexturn: how tracking loss/restore works in WinMR: if you lose tracking you get nothing. Currently if you want to recover from that you switch to body-locked. We don't recommend that. Options could be returning a null position or keeping the position
... Unity for winmr implemented the drag along themselves

NellWaliczek: underlying platform doesn't block rendering...

alexturn: no, consider 360 videos...
... if you have a bounded frame, when you recover tracking you get back to the same origin again. In the case of Anchors you get snapped back to the origin.
... avoid a jump back because of the last known position
... two behaviors: snap back (my origin is meaningful), or teleport

unbounded frame -> you're walking around, lose tracking, get it back; you're in the same fragment, with a strong affinity to the world, so you jump back to where you were

in the case where you are in a new, unknown room, WinMR leaves you at the same old coords...

<NellWaliczek> q

scribe: two ways: your position stays put, or the origin resets to 0
... if we want to handle that in WebXR we need to find a way to pass the offset

NellWaliczek: we did add a way to get the discontinuity when reset
... I will be looking for platform providers....
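
[Scribe note: a sketch of the app-side handling being discussed. The "reset" event carrying a discontinuity transform, and the getDevicePose method, follow the drafts of the time; exact names and shapes are assumptions.]

    // React to a tracking-recovery discontinuity rather than letting content jump.
    frameOfRef.addEventListener('reset', (event) => {
      // event.transform (assumed) describes the offset between old and new origins;
      // the app can absorb it to keep content visually stable.
      recenterContent(event.transform);
    });

    function onXRFrame(time, frame) {
      session.requestAnimationFrame(onXRFrame);
      const pose = frame.getDevicePose(frameOfRef);
      if (!pose) {
        return; // tracking lost: skip drawing world-locked content this frame
      }
      drawScene(pose);
    }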

ravi: MagicLeap, we don't have sophisticated restore mechanism

alexturn: I should probably write an explainer on what WinMR does
... right now, we saw over and over that developers were building on top of our platform, so if we tell you the size of the discontinuity you can absorb it in your app

NellWaliczek: to Magic Leap: whenever you don't have pose data, after a while, what happens when you recover?

Ravi: what happens when we lose tracking -> we switch to 3DoF, the UI will tell the user you lost tracking: please wait 5s. If we're able to restore, we'll restore.

NellWaliczek: couple of questions. Content is dimmed: is it still there, just dimmed?

rick: it's head locked
... the user will get informed.

NellWaliczek: the system takes over...
... app will still get the orientation matrix but no position matrix

position will stay the same as last known position...

<Artem_Bolgar> I don't know exactly how Oculus is handling tracking loss, but will provide this info later.

NellWaliczek: would love to get clarification on tracking loss for immersive content...

<cwilso> ACTION: Rik to detail what happens in tracking loss

<Zakim> Blair_Moz, you wanted to ask when Hololens or ML1 detect they are in a new space, I assume they notify not only that the tracking was lost and found, but also tell the apps they've

Blair_Moz: in addition to updated pose information, do they provide extra info about whether the user moved to a new space?

alexturn: in WinMR we don't say anything about the nature of the recovery, e.g. you came back to the same space -> pass the delta. Or: you're somewhere else and we have no idea where you are.
... it can be important for content provider, especially when anchoring

<Zakim> Max, you wanted to describe ARCore tracking behavior

Max: ARCore, as far as tracking behavior, takes an anchor-centered approach. No guarantees about where the origin is. They have a tracking state; if ARCore loses tracking, everything that was tracked will change state
... when tracking is back, all of the objects and anchors will resume tracking...
... that is, if you get tracking back in the same place
... in a new space, all old anchors will be paused... new anchors will be tracked to the new space...

NellWaliczek: do you have a signal to know when anchors tracking is paused?

Max: I don't have the answer...

dan: it's built on top of ARCore. There is no UI on top.

<Zakim> alexturn, you wanted to talk about goals for WebXR here, around ensuring apps can target all the options we discussed simply

<Artem_Bolgar> Oculus Rift will report new coordinates after it recovers from the tracking loss (due to external tracking). While it is in the tracking loss mode it will fallback to 3DOF mode. Oculus Quest could be different, though.

alexturn: want to talk about goals here. Enumerate what the platforms are doing natively and what apps are doing and what they want when tracking is lost.
... so based on the list of what apps are doing, how we can make it simple for them to achieve what they do
... will be a nice target to design the API

NellWaliczek: question to Artem_Bolgar around what Oculus is doing for tracking loss/recovery
... report the new coords to the rift. Is there a system level UI popping up?

Artem_Bolgar: good question....
... I don't remember, guess it's app level to decide it

<cwilso> ACTION: Artem_Bolgar to follow up on system level UI popping up for tracking loss

NellWaliczek: on the Go/GearVR, do they end up reporting tracking loss?

Artem_Bolgar: hard to lose what doesn't exist :)

NellWaliczek is smiling :)

Artem_Bolgar: Quest don't have that information yet

<dantheman> for ARCore this is how it works https://github.com/googlecreativelab/justaline-android/blob/master/app/src/main/java/com/arexperiments/justaline/DrawARActivity.java#L553

NellWaliczek: alexturn wants to discuss a topic. Suggests to do it offline...

ada: Lunch!!!!!!!!!!!!!!

<ada> ACTION: FOllow up with App developers to see how they handle tracking loss

<Blair_Moz> Thanks John

<Blair_Moz> I'll be back in ~15 ... family starting to stir

<cwilso> BiKeShEdDiNg

<ada> Hi 👋!

<ada> We're starting the lightning now :)

<ada> ☇

<cwilso> Ada: says how to scribe

<cwilso> ...then says some more stuff

<Max> scribe:max

<dom> scribenick: Max

cwilso: Lightning talks begin now. Starting with Qing An (Alibaba)

Lightning Talks

QingAn: Use-cases and requirements for WebAR
... later at 5 we can have a further discussion on this topic

<NellWaliczek> To answer the earlier question, yes..we'll move the bikeshedding topic to the unconference time

QingAn: WebAR use case - shopping
... You can see an online shopping experience based on WebAR
... use a mobile phone to scan your home environment, pick things you want to purchase like a chair or a bed, and the virtual furniture will be placed
... this motivates users and improves their shopping experience
... another experience is an online physical AR market
... In this market, if you put on AR glasses you get a mixture of virtual objects in the physical world, like signs in front of shops and advertisements. For example, you can actually interact with sports celebrities
... the next use-cases are games
... Coupon collection game - use a cell phone to scan the real scene and find virtual coupons hidden among virtual objects - similar to Pokemon Go
... Another is to show more information on a physical surface - like a poster with preconfigured spots that display more information
... Next, I propose API requirements that support these experiences
... 3d face detection, posture/body detection, surface detection, environment/light detection, object detection - can detect actual things in physical world like a tree.
... these would be very helpful to provide a universal API to web developers

<johnpallett> +q to ask which API requests that are in the AR use case presentation are already covered in the proposals repo and which are not

QingAn: the last one is the ability for developers to define their own custom AR algorithms - a standardized API that defines some high-level input in a specified data format so the web developer can extend the AR API easily
... e.g. the data format of video frames, sensor data, etc.; the output might return analysis of gesture, face, etc. to the web developer
... describes an example of what the standardized API would look like
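
[Scribe note: no concrete interface was shown in the minutes; the following is a purely hypothetical sketch of the shape being described (standardized raw inputs in, developer-defined analysis out). All names are invented for illustration.]

    // Hypothetical only - not a real or proposed API.
    // A standardized input format (camera frame + sensor data) that a
    // developer-supplied algorithm consumes, returning its own analysis.
    function runCustomARAlgorithm({ videoFrame, sensorData }) {
      const result = myDetector.process(videoFrame, sensorData); // developer-defined
      return { gestures: result.gestures, faces: result.faces };
    }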

<Zakim> johnpallett, you wanted to ask which API requests that are in the AR use case presentation are already covered in the proposals repo and which are not

<alexturn> +q to ask about the output of the AR F2F we had a while back and whether we should maintain a roadmap

johnpallett: I am curious which of these requests are already covered in the proposals repo

QingAn: some of this is covered, but some is not, such as surface detection, body detection, etc

nell: there is some discussion of small topics like surface detection as opposed to general object detection that have been part of the anchors repo but put aside for the moment
... environment understanding is a very broad topic. We have talked about them but no issues filed specifically like what a mesh or point-cloud API would look like
... as a community we know we need to get to it but nobody has put forward a proposal that bridges the different platforms
... as far as other environment understanding - one issue that has come up but doesn't have an issue is adding automatic occlusion. The flip-side of the mesh data question is what if we don't expose that - can we provide what the developer needs like they supply a depth buffer and the user-agent handles occlusion for them
... we are going to discuss multi-view rendering later and that may cover some of this
... there is a topic to discuss lighting that was filed to the proposals repo
... the existing APIs provide intensity but not color but other functionality is not covered

ada: I was at devices and sensors working group earlier this week to discuss ambient light sensing
... they already had a proposal for adding color data to ambient light but had no use-cases
... they were very happy we had a use-case for this. They aren't sure where the best place to handle this was - like a generic interface via window or something that you get with WebXR API
... they are happy with the idea that for the XR use-cases the ambient light information may come from ARCore or ARKit etc. rather than just using the light detection chip - so using algorithms on the camera data rather than raw sensors
... if you look at the ambient lighting issue in our issues repo it links to the devices and sensors corresponding issue

<dom> Lighting estimation for WebXR #27

<johnpallett> general list of open CG proposals are here: https://github.com/immersive-web/proposals/issues

nell: if you look at object detection, there are three buckets: known image detection (markers), 3D (known) object detection (a Coke ad that recognizes a Coke can), and identifying concepts like a tree - that semantic understanding stuff is not something anyone has expressed interest in discussing

QingAn: I want to make sure we can propose new things if they are important to us - we could further propose technical support - we have working code for this stuff
... specific ideas around posture and surface - we would like to contribute more API examples and ideas to GitHub

nell: that would be great!

<Zakim> alexturn, you wanted to ask about the output of the AR F2F we had a while back and whether we should maintain a roadmap

alex: at the AR F2F a while ago in Seattle, in our initial brainstorming before AR got merged into WebXR, some of these were mentioned

<johnpallett> a proposal relating to custom AR models is here: https://github.com/immersive-web/proposals/issues/4

alex: some of this content was just jostling around in our heads, maybe we can put up a roadmap. If we have a timeline of what we need - maybe hit-test and anchors are the initial concepts but we want to add more later down the road

nell: in this transition from community to working group, there isn't a lot of transparency of how we communicate what gets chosen to be driven forward as part of the spec and in what order
... we would like to be more transparent about that process and ordering

<cwilso> ACTION: chairs+editors to work on roadmap planning

nell: we would like a mechanism for making these decisions more community oriented

<johnpallett> -q

alex: that would be great - then we could have a discussion of where some of chin's ideas are in the roadmap

QingAn: that would help me

<Zakim> dom, you wanted to talk about w3c roadmap framework https://w3c.github.io/web-roadmaps/media/rendering.html

dom: I want to present our status on a list of technologies

<NellWaliczek> thanks!

cwilso: next lightning talk goes to Michal

<NellWaliczek> Thanks to new scribes for taking on the responsibility!!

<mmocny> Presentation: https://docs.google.com/presentation/d/1rzcpyFuffPefXPAO0_t89RyGu3yS35xDydIS1n8lt9Y/edit?usp=sharing [PDF]

michal: I am Michal Mocny - I work at Google. This presentation is more of an appeal to the community group than to the working group
... publishing structured data to the web
... the best analogy I can use is a quick history - once upon a time, it became possible on the web to build maps
... information about the world - high-quality structured information - was not available
... could scan a phonebook, make guesses, etc
... structured data like json, schemas, came along - this allowed a business to put up structured data so other sites could ingest that
... this is sample markup you can put up alongside your website - schema.org / geo-coordinate markup for describing a business
... with that you get much better search results (shows Google results)
... not just google - Bing, Yelp, thousands of sites and developers have used this information
... it's a scalable ecosystem
... one day AR becomes possible and people pick up these AR SDKs
... let's wonder world maps with this information
... Google Lens is doing this too
... showing you richer information about the world using the camera + the same rich data to visualize the world
... even richer experience like Google maps with AR walking navigation
... same rich snippet that gets rendered on the 2D web but in AR - the original owner of the data didn't intend or predict this, they just published it and it can be repurposed
... by publishing as data, the presentation can evolve
... what's new with AR?

<scribe> ... new classification and localization techniques

michal: existing: QR codes, barcodes. New: visual markers, OCR, 3D spaces, ML models

<scribe> ... new content too

michal: best practices are published but incomplete for AR
... there is already a spec for AR markup - ARML
... it is somewhat old and incompatible with our work
... their motivations were good but for a different time
... they would like to overlap more with this W3C work - there is a hunger for this
... maybe it's too early for AR publishing in general, this is a non-goal in our current charter
... we want to embrace the current level of the space - it's too early for lots of this, but not too early to get started
... you can read some of this if you want to be convinced
... at google, we are exploring this space and identify missing bits
... we want to publish guidelines for how immersive content can be embedded and shared
... specifically regarding Google Lens - we want to do more for immersive UX
... there is a strong desire for this to be web-friendly - to be hosted and published in an open way
... if they don't have one - they will have to come up with solutions on their own. If anyone is working in a similar space, please reach out and let's talk
... thank you very much!

<Zakim> cwilso, you wanted to talk about charter

cwilso: so this one in particular - I wanted to underscore it because we are at the intersection of the community and working groups; I wanted Michal to present this here because I know this is an interesting and related topic to spark where this could go.
... this may never go into the immersive web working group and may be something totally different

<ada> Sorry Blair one sec

guido: I want to mention - we see here an AR scene - JS is in charge of doing this scene - then we have some other things like 3D objects, maybe also some behavior implemented in JS again. This highlights an architecture issue I see with the current spec. We have a scene and objects that are not in the same domain. It is not clear that these are in different scenes, and we may get cross-origin issues
... so we need to be able to run JS related to these objects without the scene knowing the source of these objects and related javascript

blair: my comment regarding the previous Google thing is that while it is out of scope, this is exactly the kind of thing that motivated me to push on the geo-spatial alignment stuff

<dom> Work toward a feature in WebXR to geo-align coordinate systems

<Zakim> NellWaliczek, you wanted to ask blair about updates on that

blair: because we can't do geo-alignment and render geo-located data as content - it's a great use-case

nell: I was curious now that we have updates to the coordinate system stuff landed, are you going to do an update on that proposal?

blair: I still don't see in the session-creation stuff how we ask for capabilities
... it's a modification on a frame-of-reference, not a unique frame-of-reference
... I welcome input

nell: i think we can rediscuss this once brandon's current work is done

ada: Next, Artem has a demo of timewalk using oculus browser

artem: I can demo it later
... basically, the whole idea is to have multiple layers support at some point in WebXR
... currently, we only have one layer called "base layer" and it's pretty much a webGL surface where you render everything
... it's an ok approach but it's equally bad for all kinds of situations
... as you have noticed this is a mobile device - performance is not the best, not like PC
... in order to get good framerates, we reduce the resolution
... not the best image quality, of course. It's not a secret that current WebVR's most common use-case is 360 videos and photos
... the video/photo usually has pretty bad quality. Our idea is to have the ability to have multiple layers and have those layers "timewarped" or "reprojected"
... the idea behind it is that composition of those layers happens not in WebGL/the browser but in a low-level VR compositor
... it can sample the texture/video at full resolution with low latency and high quality

<alexturn> +q to ask if quad layers would get most of the mobile value initially or if there is also significant value in multiple WebGL layers

artem: pros: performance, including better battery life
... you specify texture and type (like cube map) and some parameters (top/bottom or left/right) and send it to VR compositor
... so you don't waste cycles of webGL or JS
... the compositor has the latest pose of your head, samples full resolution texture, doesn't burn battery
... another use-case for layers is something with a lot of text like a panel in front of you with some text. You cannot make that work well in current webXR as it will be very blurry - you cannot read it
... you need to increase the resolution of the eye buffers significantly and it kills the performance
... we call it "cylinder layers" because it's usually a bit curved
... that's an idea to get text working better
... another idea: HTML-to-texture is a no-go, but HTML-to-layer - such as a "cylinder layer" - is probably fine because you don't get access to the texture. It's kind of a browser inside a browser
... you just redirect the rendering to that layer and it should be ok from a security point of view
... my concern is that currently WebXR doesn't have any way to extend to multiple layer support
... I am not suggesting we add this whole idea to the spec - but it would be nice if, instead of just one pointer to a base layer, we start with support for an array of layers

<Zakim> alexturn, you wanted to ask if quad layers would get most of the mobile value initially or if there is also significant value in multiple WebGL layers

artem: that way we could add this stuff later
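
[Scribe note: a sketch of the extensibility point being requested - an ordered array of layers instead of the single baseLayer pointer. XRWebGLLayer exists in the drafts of the time; the "layers" array and "XRCylinderLayer" are hypothetical names used only to illustrate the idea.]

    // Today (roughly): a single WebGL layer owns every pixel.
    session.baseLayer = new XRWebGLLayer(session, gl);

    // The ask: leave room for an ordered array so other layer types
    // (e.g. a cylinder layer for crisp text) could be added later.
    session.layers = [
      new XRWebGLLayer(session, gl),                                  // hypothetical usage
      new XRCylinderLayer({ radius: 1.5, centralAngle: Math.PI / 3 }) // hypothetical type
    ];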

alexturner: I see the value - we have this type of thing too. I am curious - a lot of this stuff maps to quad layers. Do you see this all mapping to quad layers or do you think we need multiple GL layers being composited

artem: once you have multiple layers with quads, you may need to render something on top of that quad
... once you need that, you will need another GL layer on top of that quad layer
... first layer is base layer
... then quad layer
... then an object controller layer - so you need multiple GL layers so you can separate with other type of layers

alexturner: what about using depth testing in the compositor?

artem: there is no depth between layers

<Zakim> NellWaliczek, you wanted to mention the agenda topic tomorrow

artem: you will need to specify how you blend between layers

nell: we will discuss this tomorrow at 11am
... it started in the context of DOM overlays but evolved into this type of thing - feedback from the dev community, investigations regarding cross-origin security, etc
... we are learning about the problem space and have maybe another order of magnitude of detail about this to discuss
... it's not a concrete proposal - not even talking about whether it should be WebXR 1.0 or future work
... we just want to lay out the problem space and figure out how urgent this is
... for example, why can't devs use existing web technology to make 2D UIs in immersive?

<ada> Zakim: choose a victim

<jillian_munson> ScribeNick: jillian_munson

<Blair_Moz> cwilso/max: would love to use HTML5/CSS to make 2D UIs in immersive! In Argon, we let folks using WebGL and/or CSS3 content, and the majority put HTML5/CSS content out in the world

<ada> https://github.com/immersive-web/webxr/issues/412

<cwilso> we're walking through issues list: https://github.com/immersive-web/webxr/issues?q=is%3Aissue+is%3Aopen+label%3A%22FTF+discussion+requested%22

nell: For this issue, some background. In order to get a frame of reference, you need to obtain projection matrices
... several options proposed by Brandon. Option 1: establish a single active frame of reference at a time
... we got lots of feedback on various approaches. Need to decide which approach to take.

<alexturn> +q to talk about multiple layers

nell: could overlap with diorama problem space, can discuss in diorama issue on github
... not sure who tagged items for discussion. If you tagged item for discussion and you don't feel the discussion was adequately addressed please speak up.

<alexturn> -q

alexturn: Should questions be asked here or in the issue?

nell: To summarize, if we go the route of having a default frame of reference, how do we handle the multiple-library scenario, with multiple attempts to define a default primary frame?

alexturn: Tend to think of the idea of a default primary current frame, could move away once we start looking at multiple layers.

<ada> https://github.com/immersive-web/webxr/issues/411

nell: In context of issue #409 (frames of reference), issues around design guidance of using promises vs. callbacks
... lots of discussion around this issue. Two things happening: what's our philosophy around this, should it be filed as a separate issue
... Blair had concerns around navigation issues, put on hold while we answer more fundamental questions
... need to read this through more thoroughly, no real discussion requested on this aside from navigation issues

<Zakim> Blair_Moz, you wanted to talk about UA driven "enter immersive mode"

@@@ If anyone is thinking about making any design recommendations, read Brandon's proposal and understand timing issues

Blair: Follow up to what nell was saying, discussion she referred to - I was reviewing the frame of reference proposal and noticed that all the proposed events were defined as callbacks instead of promises
... If we ever want navigation to handle immersive mode, cannot happen in promises
... needs to be handled by events. When we look at things like showing 2D pages in 3D like hololens, FF reality, UI controls without having to rely on page management
... those things blend together. Should be done at session creation time.

<ada> https://github.com/immersive-web/webxr/issues/403

Gary: One specific callout from other APIs we've used with Promises: if you want a permission prompt, you give the UA flexibility to do that. Even if you don't implement it as promises, at least allow session creation and permissions to be obtained and passed via promises

nell: Next issue: 403
... would like to postpone to anchors conversation

<ada> (postponed until later)

<ada> https://github.com/immersive-web/webxr/issues/390

cwilso: Good springboard for tomorrow's conversation with web audio
... push discussion to tomorrow's web audio joint session

<ada> https://github.com/immersive-web/webxr/issues/388

nell: Issue 388
... lots of discussion, lots of scrolling, but no clear conclusion
... would like to move this to unconference this afternoon since this could be time consuming to resolve

<Zakim> Blair_Moz, you wanted to say I think our thinking moved along since then

nell: try to get some closure on this, force the conversation to happen if the relevant parties are interested and present

Blair: I think that once we switched, when Google switched from fullscreen vs inline on handheld
... some of the discussion became less relevant, might be closer to solving this than when this issue was originally proposed

nell: Blair, would you like to discuss during unconference?

cwilso: Single thread conversation during unconference, no plans to split up into groups as done previously

Blair: If those present from Google could skim the issue
... woof

<ddorwin> Does anyone know what are the open issues? Can we document those or even open separate issues for those and close this?

Blair: and review the conversation, get some eyes on it

David: What are the specific things we need to discuss? Drive this conversation to closure?
... many different topics in conversation thread

nell: Can someone volunteer to read through the conversation and identify other issues?
... not it

Max: volunteers

<Zakim> NellWaliczek, you wanted to suggest other folks can also tag issues to request updates

<cwilso> action Max to read through the conversation on #388 and identify other issues

nell: Wanted to mention, anyone is welcome to add the "FTF discussion requested" tag to issues you want to discuss today or tomorrow

<cwilso> ACTION: Max to read through the conversation on #388 and identify other issues

nell: please feel free to add
... I'd like to request chairs add permissions or tag issues participants want to discuss

cwilso: Issue 344

<cwilso> https://github.com/immersive-web/webxr/issues/344

nell: Background - if you have front facing and back facing camera available
... propose to move this to proposals

<johnpallett> +q to ask whether other repos beyond webXR with issues tagged for f2f are in scope for this sessionn

Blair: Will do, context similar to what happens with tracking loss when switching cameras.
... details depend on the platform, might be weird but can discuss in proposals

<Zakim> johnpallett, you wanted to ask whether other repos beyond webXR with issues tagged for f2f are in scope for this sessionn

cwilso: Asked if other repos than webxr open to discussion, answer is yes

johnpallett: privacy and security to be discussed as well

<cwilso> https://github.com/immersive-web/webxr/issues/310

cwilso: Issue 310, capturing what's on the display

<johnpallett> also proposal issues

nell: Context, capture what's happening in the real world as well as the webxr context
... what we're missing is a concrete proposal, need someone to take this issue and change how output context is made today or propose changes to it

<ada> ACTION: Figure out if any additional work needs to be done for #310

Blair: I'm leery of taking ownership of this, have not looked at output context
... original thought was akin to like taking a picture. Instead of sending this directly to a web app, send output to something like a camera roll.
... something to be handled by the platform?

nell: This is worth looking into, discussing how to do something like a snapshot
... will say, this is a candidate to move to the proposals repo for discussion

Blair: Agreed, may need to integrate with webrtc in the background and may get complicated.

nell: It is unclear how we would approach solving this particular problem, if it is not a concrete solution would like to move this to proposals

cwilso: pause reviewing open issues for now, moving to next topic anchors and hit testing

nell: Work done on investigating both anchors, hit testing

<cwilso> Anchors and hittesting: https://github.com/immersive-web/administrivia/blob/master/TPAC-2018/anchors-topics.md

nell: high level framing: hit testing repo, anchors repo. Lots of good work being done in those repos
... During F2F, more like today's discussion on tracking loss to use this time to discuss what everyone's understanding of what they think the problem is and what terms mean
... would like to hear from platform owners, developers what fundamental features are core functionality you need to implement an AR experience?
... platform owners: what does the word anchor mean to you? What's in the anchors repo today, there's a focused implementation being proposed and I want to make sure there's consensus on that definition

<alexturn> Previous inventory done on the "anchor" term across native platforms: https://github.com/immersive-web/anchors/issues/4

nell: update rates, some of the things that are covered in dynamic coordinate systems, frame persistence - secondary to establishing foundational understanding of what anchors are
... want to make sure we can design something that correlates to everyone's understanding

max: first, want to describe why this is so foundational to augmented reality
... you have physical real world, virtual world that virtual objects exist within
... physical world implicit, anchors are a way for the app to tell the tracking system that this is a place in which I want to maintain a connection between those two worlds
... I need the virtual content to stay consistent with the physical world. Think of anchors as a pin. Everything else can squish, but the anchor maintains the relationship to the physical world at that position
... On the discussion of tracking loss, we need to know whether we should even render virtual objects in case of tracking loss
... Example: a virtual chair placed in a conference room. If I can't see the chair, I no longer have 6DoF position data - should we even render?
... would like to identify common language with respect to how these concepts map to their own platforms, find core concepts that link all of our platforms together

nell: Elaborate on updates? Specifically with respect to ARCore, what is the update mechanism?
... not a common element

Max: ARKit, ARCore, some of these are event driven -- not completely relevant since the core thing we care about is anchors.
... If you were to imagine a virtual measuring stick: place an object at 0, step exactly 1 meter and place another object, and leave these objects un-anchored; the distance between the objects may change
... ARCore's understanding of the physical world has evolved, learned more features
... in the virtual world, the objects are still 1 meter apart. In the physical world, unless we anchor those objects, they will receive updates as the system learns more about the world
... eventually it stabilizes and converges towards a stable point, but the physical distance may adjust
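
[Illustrative sketch of the anchor-polling pattern Max describes; createAnchor-style anchors with an anchorSpace, getPose, trackedAnchors, and xrReferenceSpace are hypothetical placeholders in the spirit of the anchors repo discussion, not an agreed API.]

  // Re-read each anchor's pose every frame. The system may nudge these poses
  // as its understanding of the physical world improves.
  function onXRFrame(time, frame) {
    frame.session.requestAnimationFrame(onXRFrame);
    for (const { anchor, node } of trackedAnchors) {
      const pose = frame.getPose(anchor.anchorSpace, xrReferenceSpace); // hypothetical names
      if (pose) {
        node.matrix = pose.transform.matrix; // keep the virtual object pinned to the physical world
        node.visible = true;
      } else {
        node.visible = false;                // e.g. tracking loss: consider not rendering at all
      }
    }
    // ...render the scene...
  }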

nell: with frequency of updates, was wondering if frames were batched but doesn't sound like that's the case and everything gets calculated on every frame

Alex: One difference: in ARCore the changes are digested on a frame basis
... in Windows Mixed Reality there are many different cameras; as a developer you get ambient improvement over time, not necessarily tied to your render loop
... we didn't want developers to get used to the idea that the render frame is where the system improvement happens

Max: One question - is there a model in which we have a separate world update vs. render frame update?

Alex: If you did choose to poll 5s later, we don't take steps to prevent tearing. You can fetch data all at once
... it might be that it's good enough that we make data available to apps, and you just ask for data when you want it
... if your app pulls anchor position data every frame, that should be sufficient

nell: feedback from developers - do they use this discontinuity data or is polling on the render frame sufficient?

<garykac> scribeNick: garykac

<scribe> ScribeNick: garykac

NellWaliczek: Any other platform issues to cover?

<ada> g- cw

<Zakim> Blair_Moz, you wanted to talk about ARKit

Blair_Moz: ARKit is essentially the same as ARCore WRT anchors
... you get a callback when there are updates

alexturn: anchors are used to pass info that the system has detected about the scene

<Zakim> Blair_Moz, you wanted to talk about WORLD info, not work

Blair_Moz: hit testing is a large topic; ARCore returns a hit test as a coordinate relative to an anchor
... we may want to abstract that a bit.
... they have one anchor type where you set a pose

NellWaliczek: if you are a platform owner, raise your hand
... we're really happy you're here
... are you building AR content?
... what are you open to sharing?
... how are you hoping to use AR going forward?
... our thoughts: it is worthwhile to expose AR info even if it's simple
... what are folks trying to do?
... is that a valid premise?

<johnpallett> +q can also share information from partners from our outreach thus far

<johnpallett> +q to offer to share information from partners from our outreach thus far

ChrisLittle: We play with AR/VR
... over the years, we've been asked for weather feeds
... for games (and it didn't work)
... resolution of data is 1 pt per 10km, which doesn't work well
... trying to get global data source with good resolution
... other problem:
... because data covers a large area, the relative scale doesn't match what we normally have in VR worlds.
... another application is displaying 360 videos of the weather for training

johnpallett: did you want this in the scope of anchors only?

NellWaliczek: no
... generally, are anchors useful for you? Or on the flip side, if anchors are inadequate, what do you need?

<Zakim> johnpallett, you wanted to offer to share information from partners from our outreach thus far

NellWaliczek: what are the table stakes for what AR means for people

johnpallett: WRT shopping partners, placing virtual objects into real world
... the feedback is that anchors are nice to have, but not a must have.
... if user looks away, the customer doesn't need the object to be in the exact same place
... one common problem is WRT real world scale

<Zakim> dantheman, you wanted to share our the high level roadmap

johnpallett: of the 4 partners, 2 didn't look into it, and the other 2 didn't find that it mattered to users

Dan: (formerly at Google): shared anchors were useful for shared experiences (multiple users referencing the same object)

<johnpallett> to clarify: real-world scale is necessary. Illumination was not a blocker for the 2 partners who had looked at user impact.

<johnpallett> but real-world scale was generally considered a high priority for shopping use cases so that users can place objects into the real world and know they are measured appropriately.

Dan: anchors and hit testing would be nice

<johnpallett> +q to clarify one point

Dan: hit testing can work, but anchors works better for multiple users

max: anchors that are not cloud anchors are not as useful to you?

<NellWaliczek> ?

ravi: what happens if we set up an anchor and the anchor disappears while the environment is modified?
... e.g.: an anchor on a door and the door moves?

alex: anchors are only at 3d coords

<NellWaliczek> that's alex

alex: they don't move with objects.

:-(

<max> it's ok - we didn't announce ourselves well

<max> will do better in the future

scribe: might be hard for users to be able to find persisted anchors in that case
... (when the world had changed)

Max: ARCore doesn't handle changing worlds particularly well.

... if you put an anchor on a table and it moves, it probably won't work well.

NellWaliczek: I believe that ARKit has some anchors that are dynamic

alexturn: some anchors are harder than others
... if attached to a plane or a mesh, then it can possibly be handled, but a 3d point isn't necessarily associated with a particular object.

NellWaliczek: If the dev knows that the anchor is on an object, then this can be addressed, but it's hard to solve this problem.
... and we probably can't solve this for the web until there's a reasonable approach for this open-ended problem.

<Zakim> johnpallett, you wanted to clarify one point

alexturn: we need to think about whether objects are static or dynamic

johnpallett: real-world scale is possible even if there's a glitch during adjustment, and the anchors don't interfere with this.
... if the dev places an obj in the scene, and the world is adjusted for real-world scale, the obj may move a bit during the adjustment.

<Zakim> janine, you wanted to reiterate key user experiences we are working on at the NFB

janine: wanted to +1 previous comments
... we build VR for narrative storytelling
... most in educational space
... people working together in a shared space
... real world scale is important. So the students have to adjust the container so that it fits in the space.

NellWaliczek: if the anchors don't provide the ability to create a great shared experience, how big of a deal is that?

janine: a concern if it affected the shared learning, but we can check with the devs who might have more info.

NellWaliczek: that would be nice
... another example, public customer wanting to show air quality indicators in the environment
... change gears
... let's talk about hit testing
... start with a recap so we have a shared understanding of the problem.

max: basically, the main challenge is that hit testing needs to be async (for various impl details)
... this makes some things very challenging
... delay between request and getting the results
... this is glossed over in phone AR because it has a 2d coord, which is accurate enough
... but a 3d pointer has more problems since you can see the error more easily.
... need some mechanism to indicate that we want a hit test for a 3d ray relative to something
... alex had a proposal

<alexturn> Previous discussion: https://github.com/immersive-web/webxr/issues/384

max: hit test for a ray that is relative to a coord system that is moving.
... that way the calculation can account for the frame lagging.
... people have asked: is this really necessary? one frame of lag?
... the answer: yes!
... even a tiny angle of error can cause a huge error, even for objects 10 feet away

alex: currently have 2 types : static and dynamic
... but that can get confusing when you want to compare them
... instead of having separate concepts for this, if everything had a coord system (one of which might have a FrameOfRef), that would be convenient.
... e.g., with a hit test, be able to say what it is relative to? your head, or your controller, or whatever
... e.g., forever, always hit test relative to this object, regardless of where it moves.
... a permanent "subscription" for that object

NellWaliczek: clarifying question...

alex: allows the system to calc the best possible answer. It would allow the system to roll the calc forward or backward in time depending on needs.
... current approach prevents us from properly calc'ing things that are time-indexed.
... we have a system of --hacks-- workarounds, but we're building more of these, so it seems like we should address the core problem.
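
[Hedged sketch of the "hit test for a ray relative to a moving coordinate system, as a subscription" idea being discussed; requestHitTestSource, getHitTestResults, XRRay, controller, reticle, and xrReferenceSpace are placeholder names for illustration, not the current spec.]

  let hitTestSource = null;
  xrSession.requestHitTestSource({            // hypothetical: subscribe once
    space: controller.targetRaySpace,         // the ray is relative to this moving space
    offsetRay: new XRRay()                    // identity ray: straight out of the controller
  }).then((source) => { hitTestSource = source; });

  function onXRFrame(time, frame) {
    frame.session.requestAnimationFrame(onXRFrame);
    // Because the ray is defined relative to the controller's space, the system
    // can resolve the async hit test against the controller pose for the frame
    // in which the results are consumed, instead of a ray frozen one frame ago.
    if (hitTestSource) {
      const results = frame.getHitTestResults(hitTestSource);   // hypothetical name
      if (results.length > 0) {
        reticle.matrix = results[0].getPose(xrReferenceSpace).transform.matrix;
      }
    }
  }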

max: i love everything that alex just said
... Brandon had a concern: is there more than one XRFrame valid at the same time? If not, then it's harder to answer questions about historical state.
... is there a max amount of time that we can work backward.

<BrandonJones> +q

max: and what features do we lose by not having this ability?

NellWaliczek: we want to collect info so that we have confidence that we have the info necessary for our design.

<alexturn> +q BrandonJones

NellWaliczek: want to ensure that we cover the use cases.

<trevorfsmith> No objection. Full support.

NellWaliczek: wants to take the info and fold it into the device spec/explainer. Is that OK?

<alexturn> +1

<max> +1

<DanMoore> +1

<ada> +1

<trevorfsmith> +1

<ravi> +1

<BrandonJones> +1

<LocMDao> +1

<chrwilliams> +1

<arno> +1

<BrandonJones> Okay, nevermind

BrandonJones: <volume too high>

<BrandonJones> yes

<DanMoore> did you turn yours down?

<BrandonJones> My point was to clarify Max's lifetime point

<BrandonJones> In that the *hopeful* assumption is that an XRFrame is only valid for the callback that it was returned during.

<ada> Break šŸµ

<guido_grassel> Guido made a comment that in the use case where a hit test is made for the purpose of creating an anchor we do not have the issue with coordinate systems.

<BrandonJones> Because otherwise defining the scope of a "Frame" is a bit fuzzy

<BrandonJones> But that's just a spec technicality and Max's point still stands.

<guido_grassel> ... the system will update anchor's pose over time automatically.

<trevorfsmith> Did anyone else just lose the WebEx?

<ada> We're back!

<NellWaliczek> BrandonJones: yes, about the XRFrame, but with the addition of needing to address how input events are handled

<alexturn> johnpallett: Here is an article and video that elaborates on WinMR coordinate systems: https://docs.microsoft.com/en-us/windows/mixed-reality/coordinate-systems https://channel9.msdn.com/events/GDC/GDC-2017/GDC2017-008

<NellWaliczek> BrandonJones: we know... someone is coming to try to fix it :(

<dom> ScribeNick: alexturn

dan: Talking about WebXR Multiview to get consensus on how to move forward

MultiView Rendering

slides [PDF]

dan: renders same scene from multiple viewpoints to save CPU cycles
... more important as devices have more displays, e.g. VR headset with 4 canted displays

<ada> Issue: https://github.com/immersive-web/webxr/issues/317

dan: not using multiview is like cooking two meals by going to the grocery store, buying ingredients, and cooking a meal, and then repeating the whole process from scratch
... multiview is about doing things in one pass when possible
... can save 25-30% - low hanging fruit to conserve scarce resources
... WebGL extension for multiview, but it's not just a simple swap-in
... have to update your app shaders and your rendering loop
... have to specially bind your texture arrays with framebufferTextureMultiviewWEBGL
... inherently intrusive to your app code

<DanMoore> draft https://www.khronos.org/registry/webgl/extensions/WEBGL_multiview/
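
[Rough sketch of the setup dan describes, against the draft WEBGL_multiview extension linked above; the draft may still change, canvas/width/height are assumed to exist, and note this path has no default antialiasing, which comes up later.]

  const gl = canvas.getContext('webgl2');
  const ext = gl.getExtension('WEBGL_multiview');   // draft extension; may be absent
  const numViews = 2;

  // The render target is a texture array with one layer per view.
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, numViews);

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
  // A single draw call then writes into all layers at once, one per view.
  ext.framebufferTextureMultiviewWEBGL(
      gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, /*baseViewIndex*/ 0, numViews);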

dan: term comes up a lot about opaque frame buffer
... app doesn't know dimensions/format
... lets system do optimizations under the hood

NellWaliczek: One downside of WebVR is that the app picks the size of the buffer, which may not match
... causes extra copies when things don't match platform exactly
... WebXR lets platform manage buffer, to help avoid copies
... however, some platforms support array textures and some support double-wide

<BrandonJones> The opaque framebuffer is also necessary to allow multisampled rendering to a non-default backbuffer in WebGL 1.

NellWaliczek: don't want that to bleed through to the app, since an app might only test on one vs. the other
... dan is gonna cover some cases where we don't always want this to be opaque
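
[For context, a minimal sketch roughly following the opaque-framebuffer pattern in the current WebXR explainer; member names track the explainer at the time and may not match the final spec. xrSession, xrFrameOfRef, and drawScene are assumed to be set up elsewhere.]

  // The UA allocates the buffer; the app never learns its size, format, or
  // whether views are packed side-by-side or as texture array layers.
  xrSession.baseLayer = new XRWebGLLayer(xrSession, gl);

  function onDrawFrame(timestamp, xrFrame) {
    xrSession.requestAnimationFrame(onDrawFrame);
    const pose = xrFrame.getDevicePose(xrFrameOfRef);
    gl.bindFramebuffer(gl.FRAMEBUFFER, xrSession.baseLayer.framebuffer);
    for (const view of xrFrame.views) {
      const vp = xrSession.baseLayer.getViewport(view);
      gl.viewport(vp.x, vp.y, vp.width, vp.height);   // render each view into its slice
      drawScene(view, pose);
    }
  }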

dan: Q: Do we believe we need to support mixed multiview/non-multiview shaders?

<BrandonJones> Yes, IMO.

dan: spec says that if you're using multiview and a shader doesn't support multiview, you're out of luck

<Artem_Bolgar> The biggest issue with WEBGL_multiview extension for non-opaque fbo case - lack of antialiasing.

dan: that makes this all or nothing - hard if devs mix and match, web-style
... this may not be compatible with the opaque frame buffer pattern, though
... how large is the intersection of platforms that support WebGL 2.0 and WEBGL_multiview and implement an antialias extension?

<BrandonJones> Would also love to hear from content creators on this subject.

NellWaliczek: Want to ask: how would this impact adoption of multiview in your code if it really is all or nothing?

ada: Change from WebVR to WebXR is perfect opportunity to change how devs all think of render views
... if our samples depend on multiview by default, when folks copy/paste they'll get support out of the gate

<JMedley> I briefly felt better about my web dev skills.

ada: frameworks like A-Frame or ReactVR could abstract across this anyway

<Artem_Bolgar> Not all platforms will support multiview, so the experience should be able to work for both cases - multiview and non-multiview.

ada: simplifying this problem space for web developers

<Zakim> NellWaliczek, you wanted to talk about where the shaders come from

<johnpallett> Apologies if I missed this; can a link to Daniel's slides be put in the agenda or here?

<jillian_munson> https://docs.google.com/presentation/d/1OVWg6qb1B8b_sHnG4hDdveUrzXlwsAgsYpbqjoPYT_0/edit#slide=id.g465f769066_2_75

NellWaliczek: As ada said, a lot of the web is copy/paste - not just our code to copy, ShaderToy is another source, and shaders there may not use multiview

<BrandonJones> +q

NellWaliczek: if our render path is set up to require multiview, but then apps try to copy/paste, will they hit conflicts here?

dan: WebXR multiview shaders may also just assume 2 views, but what if you have 4?

<cabanier> q

dan: Are we creating a larger barrier if shaders actually have to be generic for any number of views?

myles_: In a world where shaders are mechanically translated to get multiview benefits, how much perf would we lose?
... is this even possible?

dan: Artem may be able to speak to that.

Artem_Bolgar: Nope.
... You need to choose the proper view/projection matrices, and it's impossible to detect if you need it or not.
... Also need to rewrite your render loop since today you only need to render once

myles: The web is not the only place where you need to do multiview things. Do native apps do this too?

Artem_Bolgar: In Unity/Unreal, the engine handles this for you under the hood.
... the engine knows where/how it renders
... if you use three.js, you don't need to rewrite shaders either, since three.js handles it

dan: Typical game engine will rewrite their shaders to handle this - have auto-rewrite systems
... WebXR may be different since you don't know precisely which underlying platform you're targeting

<fernandojsg_> I implemented multiview support on three.js using the initial proposal for webxr, and lately another version just for webgl_multiview, rewriting the shaders on the fly so it's transparent for the user; it will just detect the path needed

BrandonJones: It's not just shaders, have to also pass an array of view matrices
... your whole render loop is affected
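
[Sketch of the kind of shader and render-loop change under discussion, following the draft multiview GLSL; the uniform names and the two-view assumption are illustrative only.]

  const multiviewVertexShader = `#version 300 es
  #extension GL_OVR_multiview : require
  layout(num_views = 2) in;
  uniform mat4 u_viewProjection[2];   // one matrix per view, uploaded as an array
  in vec4 a_position;
  void main() {
    // gl_ViewID_OVR selects the view being rendered within the single draw call.
    gl_Position = u_viewProjection[gl_ViewID_OVR] * a_position;
  }`;
  // The app then issues one draw call per object instead of one per eye,
  // but every custom shader has to be written (or rewritten) in this form.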

dan: Lots of work on various fronts
... if WebXR tried to solve this automatically, could be even more intrusive

cabanier: Do you know the adoption rates on how many apps use multiview? Does every headset/engine support it?

Artem_Bolgar: Within Oculus, everyone supports this on mobile
... Everyone whose platform supports it gets it automatically

<Zakim> NellWaliczek, you wanted to talk about adoption

NellWaliczek: Part of the adoption question is for experience builders
... if they use custom shaders, they have to opt in
... in Unity/Unreal, that's a choice you make
... for Sumerian, same question if folks want to use custom shaders
... do we tell folks, "sorry your custom shader is no good"?
... with WebVR, this problem is slightly different, since the buffers are delivered in a slightly different manner
... we want to get the benefit of the new design while supporting multiview

Artem_Bolgar: WEBGL_multiview is still a draft extension in Khronos
... that's the biggest hurdle for adoption
... can't enable by default in browsers until fully released
... not even talking about anti-aliasing
... in draft state for almost a year

NellWaliczek: Part of why it's been draft for so long is that our own collaboration with those folks halted for various reasons
... we didn't want the shape to end up mismatching what WebXR needs

dan: Nobody is saying we don't have to support the mixed shader case
... seems like a pretty important case

cabanier: CPUs are much improved - do you know the impact on power usage?

Artem_Bolgar: GPU load is the same but CPU load is lower

<fernandojsg_> fwiw from our draft tests we got an improvement of around 40% on average

<ChrisLittle> 0

dan: Q: Do we feel that antialiasing for texture array targets is a blocker?
... history: antialiasing is not supported by default when rendering to texture arrays in WebGL 2
... people talk about antialiasing as being critical to VR on mobile

<Artem_Bolgar> +1

<ChrisLittle> 0

dan: Statement: We believe that antialiasing is essential.

<Rafael> +1

<johnpallett> +q to ask a question on anti-aliasing

<Zakim> johnpallett, you wanted to ask a question on anti-aliasing

johnpallett: Is the +1 here about texture array targets in particular?

dan: Yes, in VR, it's particularly essential

NellWaliczek: There is a mental leap here to move to texture arrays instead of opaque frame buffers
... You end up needing to index to each eye's texture
... the generic approach is to use texture arrays

<johnpallett> +q to ask follow-up question on anti-aliasing

Bryan: The majority of developers end up having antialiasing on, and for VR it's even more important

<BrandonJones> Observation: Antialiasing is not as critical for Phone AR. But Multiview isn't applicable there.

Bryan: this is a key use case that we need to continue to support

<Zakim> johnpallett, you wanted to ask follow-up question on anti-aliasing

johnpallett: We should make sure there's no cross-platform disincentive by enabling it here

<BrandonJones> zSpace is a weird in-between state, because it's not strapped to your face, but is multi view

NellWaliczek: We're just trying to figure out first if this is a fundamental problem
... do we generically need to enable texture array antialiasing, apart from any particular device?

BrandonJones: zSpace is the one case where you look at a screen at some distance and so maybe antialiasing is optional, but you'd still want multiview
... so some devices may still be able to avoid antialiasing

cabanier: Is this antialiasing within the texture you're drawing, or after resampling?

dan: This is while you draw the scene when there's a multisample resolve step

Artem_Bolgar: This is an implicit multisample resolve
... it happens under the hood

Bryan: You can do it explicitly or implicitly

Artem_Bolgar: Not sure if that's true for multiview extension
... no mechanism for explicit resolve

<ada> +1

<BrandonJones> +1

<Artem_Bolgar> +1

<DanMoore> +1

<cabanier> +1

<Rafael> +1

<fernandojsg_> +1

<chrwilliams> +1

<lgombos> +1

<Chris> +1

dan: Straw poll: +1 if you think antialiasing for multiview texture arrays is important

<Artem_Bolgar> +100500

<fernandojsg_> +111111

NellWaliczek: -1 if you want to be wrong

<ada> šŸ’Æ

dan: Q: Will a single viewport suffice?

<dom> -e^iĻ€

<NellWaliczek> https://github.com/immersive-web/webxr/tree/webgl2_layer

dan: The viewport is the part of the render target you render into

<BrandonJones> https://github.com/immersive-web/webxr/commit/31341c8ab9dfea9971d4bf7abc95d8553b0b7725

dan: The current proposal renders into the full resolution of the array texture
... Do we need to render into part of the texture?

NellWaliczek: As we get to 4-panel displays, the array textures need to be the same size, but what if the panels differ in size across views?

<BrandonJones> +q

Artem_Bolgar: For dynamic performance, you should be able to reference part of the buffer without having to reallocate buffers
... use case of dynamic resolution is still applicable

NellWaliczek: What if the inner and outer displays have different size viewports?
... should be sure we aren't cut off from supporting that

BrandonJones: Worried that if we allow something super-flexible we might prevent ourselves from working cleanly on top of mobile hardware
... When I've seen more exotic panels (hi-res/low-res/pimax), all panels are the same size or same resolution with different panel densities
... May not want to count on that strongly, but so far hardware is assuming your views are all the same resolution

NellWaliczek: How do we know if we are making the right assumption here?
... Comment is that any decision shouldn't add any more choices

<ada> ACTION: file issue to allow developers to explicitly state the number of viewports they support.

alexturn: StarVR has 4 viewports and has found that results for 2-viewport content are better if you get viewports "optimized for 2"
... May want more explicit app intent here
... Even apart from rendering - can affect correctness of things like frustum culling as well

NellWaliczek: May want to dig in here more...

<ada> +1,0

dan: Q: Should the WebXR level 1 spec enable multiview rendering?

NellWaliczek: This is a fundamental question that will decide how much effort we put towards solving this.

<BrandonJones> +q

NellWaliczek: Hoping for yes, but let's see...

Artem_Bolgar: Answer from my side is definitely yes
... this is optional by its nature, since some hardware doesn't support multiview

NellWaliczek: Optional in this case for the browser itself
... Is having multiview implemented necessary to say that your UA supports multiview

ddorwin: If we know it's a thing in the IDL, you have to fail if the hardware doesn't support it
... since we have to fail anyway for some hardware, can just fail gracefully if you don't support it

BrandonJones: Your questions are distinct
... Yes, I think we should have multiview in the UA baseline
... No, I don't think we should require people to implement it
... WebGL can support systems not implementing an extension
... If a system doesn't support WebGL2, obviously can't support a layer requiring WebGL 2
... Have that problem regardless
... Could use WebGL2 layer without multiview extension - not sure what benefits would be beyond code reuse
... Some people may be into that sort of thing

NellWaliczek: Example: users may render debug info into one eye
... Time for a vote
... Multiview rendering is optional: Resolved since browsers can always skip implementing an extension

dom: There is a question of whether the extension is supported, but it could be mandatory to support it in WebXR if the extension is supported, which is a different level of optionality

NellWaliczek: Is it considered conformant to not support WEBGL_multiview if the current WebGL supports it?

ddorwin: At least you can test for the WebGL extension - can't test if the WebXR implementation supported it, so we should just mandate it

<alexismenard> +1

<ada> +1,undefined

<Artem_Bolgar> +1,+1

<fernandojsg_> 1,1

<lgombos> +1, +1

<DanMoore> +1,+1

<Rafael> +1,+1

<BrandonJones> +1, 0

NellWaliczek: +1 if you think it's a requirement that we design for multiview as part of WebXR 1.0, +1 if also WebGL_multiview being implemented implies that WebXR implementation must support the extension

dan: Q: How do we create a pit of success?
... Can't write rendering code or rewrite shaders for app developer

<alexismenard> +1, 0

dan: Is there anything we can do here to make development less intrusive for content developers?

Artem_Bolgar: Convince mrdoob to implement this in three.js

ada: How many content developers write to WebGL directly vs. using engines?

<BrandonJones> I write my own WebGL! I'm definitely weird, though, so don't mind that

ada: What if we told you that you have to rewrite your shaders?

DanMoore: I write my own rendering logic and wouldn't turn down a 50% speedup

ada: Seems that people who write engines wouldn't turn down a huge perf increase

<Zakim> garykac, you wanted to ask about backface culling

garykac: Every time I hear people talk about multiple views, I think about backface culling
... Is that out of scope?
... Backface culling, not frustum culling

NellWaliczek: Do people do something different here that's multiview aware?

dan: Because this is part of rasterization pass, the right thing happens, even if views were looking at opposite sides of object
... frustum culling also important

NellWaliczek: Issue is how to easily get view frustum that includes all views

alexturn: If your views go more than 180 degrees, may not be normally formed combined frustum

<ada> ACTION: Double check with devices which have more than 2 displays.

NellWaliczek: Need to think through multi-display setups


NellWaliczek: and pull together pull request

<ada> ACTION: update the branch where this issue was raised

NellWaliczek: also need to talk to WebGL group about array textures being MSAA-friendly targets
... may wait for that to issue pull request

Artem_Bolgar: Didn't talk much about keeping opaque frame buffer
... If we do texture arrays, do we keep opaque frame buffer?

dan: We didn't explicitly say this is a WebGL 2.0 only feature
... Is WebGL 2.0 exclusivity too limiting?

NellWaliczek: General inclination is that it's possible for us to leave it and separately support array textures
... Do we need to address enabling non-homogeneous shaders? That requires a non-opaque frame buffer
... Concerned that the flowchart for choosing what to create is getting crazy
... Concerns given that most content is WebGL 1.0 content

Artem_Bolgar: Would vote for having opaque frame buffer support with WebGL 1.0, which also supports antialiasing
... opaque frame buffer doesn't even require multiview to be requested
... Are we getting rid of opaque frame buffers in combination with multiview

NellWaliczek: Hoping for yes, due to issues in mixing shaders

Artem_Bolgar: Already implemented this in Oculus browser for WebVR

NellWaliczek: Easier in WebVR given how buffers are handled

Artem_Bolgar: Using same approach for opaque frame buffers
... Need to know if you're using multiview or not and choose proper shader

NellWaliczek: Not sure if I'm concerned one way or another
... There are two sides to this question

cabanier: How does this all relate to WebGPU?

dan: Came up in the WebGPU session yesterday?

<BrandonJones> +q

dan: Somewhat irrelevant to WebGPU since D3D12/Vulkan/Metal-style rendering API makes this your own problem anyway

NellWaliczek: WebGPU support is out of scope for now, but it's moot anyway

BrandonJones: Those APIs are focused on reducing CPU overhead - multiview doesn't save GPU
... Using Metal/Vulkan saves you most of the CPU inherently

ada: Thanks for that long discussion! Let that run into the unconference section a bit

<ada> https://github.com/immersive-web/administrivia/issues/24

ada: Similar format to yesterday's plenary day
... everyone proposes topics to be discussed
... collectively, we put together a schedule
... then we gather in groups to discuss the issues
... proposing a two-track unconference
... one hour blocks
... if we have more than 4 topics could do 30 minutes

<ada> https://github.com/immersive-web/administrivia/issues/24

NellWaliczek: Link posted in IRC channel - people can post in the next 15 minutes during the break
... May just add topics from folks not here right now

<Blair_Moz> +q to ask are these related to "webxr device api initial spec" or more proposal/incubation stuff too?

Blair_Moz: Are we focused more on getting WebXR out the door, or broader incubation stuff?

ada: Let's not be prescriptive - it's whatever people want to discuss

dom: Going until 7?

ada: May end early if we have fewer topics

<ada> https://github.com/immersive-web/administrivia/issues/24

max: Do we still want to talk about #388, since the action was to figure it out offline with Blair?

ada: Fine to remove it

<ChrisLittle> ScribeNick: ChrisLittle

<scribe> Scribe: Chris Little

Unconference 1, Guido

guido_grassel: 3 use cases for Anchors
... 1. User A creates an anchor, wants to invite user B to see an anchor at the same place
... Other use case: put an object at a location, close the browser, reopen later, expect to see the objects still at the same place.
... One concern that we heard earlier is that systems keep improving the locations of items.
... The other tricky part is user A using one browser, user B using another browser. How is the info exchanged, perhaps across the cloud?


<Blair_Moz> +q

<Zakim> JMedley, you wanted to lifetime of shared anchors

<DanMoore> +q

<DanMoore> -q

JMedley: what would the lifetime of shared anchors be? A geocached game could run for many days, others could be really short

<DanMoore> +q to re: time limits

BrandonJones: contrast with ambient AR; envisage offloading anchors to prevent overload

NellWaliczek: Distinguish between persistent and shared anchors

<johnpallett> +q to ask questions about shared anchors and privacy


ada: follow-up assertion from Dan that 24 hours is reasonable for sharing

<Blair_Moz> while shared and persistent are separate concepts, if I'm going to persist and CHANGE devices, the issues are similar

Max: Hard technical challenges: an identification system across the devices, but the bigger one is the
... different devices: some use feature points, another uses computer vision, another uses GPS, etc
... It may not be possible for, say, an IR tracker to share with another device.
... This is an issue for the Engine Developers to mutually discuss

<BrandonJones> +q

alexturn: lots of this drills down to the underlying development kit. Does the user know this? Would the user know which cloud is appropriate?


<ada> +q BrandonJones

Blair_Moz: big problem is cross-platform compatibility, been thinking about it for a year. Are we willing to have stuff exposed that cannot be done globally?

I could imagine a museum mapping out their experience several times for the different H/W users?

It will get solved per platform anyway.


ada: definitely seen stuff shared across iOS and Android

<Zakim> johnpallett, you wanted to ask questions about shared anchors and privacy

DanMoore: sorry - I wobbled (chris)

johnpallett: are we talking about only geometry or also markers?
... these shared anchors could be things on the web, email, etc.
... I assume anchors could be geometry, markers, images, identifiers; this could breach privacy. Should anchors be hashed somehow?

max: do you mean a hashed point cloud

johnpallett: imagine there was a water bottle point cloud that allows a bottle to be identified. could definitely hash

max: even a micron difference in a point cloud would give a different hash.

<dom> [it feels like we are discussing the philosophical issues around identity of entities]

<DanMoore> -d

<DanMoore> -q

<johnpallett> what I said: privacy is important if you're sharing an anchor publicly and ideally you couldn't reverse-engineer the anchor to get something sensitive. I can imagine anchor types, e.g. a water bottle point cloud that boils down to a hash algorithm that can detect water bottles but can't be reverse engineered.

<johnpallett> then Alex said: if it's a water bottle you could send 1000000 objects at the anchor to figure out if it's a water bottle (assuming one was a water bottle)

mmocny: codecs are a good pattern, they allowed progress. Maybe there should be a similar pattern for anchors, rather than solving the complete problem up front.

<johnpallett> then I said: agreed, and the reverse-engineerability of an anchor would depend upon how many inputs would result in success; but an anchor for a very specific location would have only a few such inputs; a 'water bottle' might have many.

<Blair_Moz> I agree with previous commenter: I've implemented "setWorldMap" and "getWorldMap" in my version of WebXR Viewer, that just works with ARKit.

BrandonJones: firstly, anchors can be shared within individual platforms. Shared anchors are available from Google, but that depends on another company, so inappropriate for a web standard

<Blair_Moz> Brandon: if Google opened up the Cloud Anchor stuff, so we could not host data on a server, that would be great.

BrandonJones: propose that any solution should not use a third party.
... anchors could not be done with 'just a library' as not enough info is exposed
... one exception is twitter in a mixed reality mode with a virtual camera calibrated by user

<ada> +1 for p2p

BrandonJones: we need a data packet that travels between the two clients.

<Zakim> max, you wanted to mention a WASM solution

max: prototype exposed a QR code for the other phone to read. Not great results

<BrandonJones> Thanks for the real-world example, Max!

<Zakim> dom, you wanted to distinguish managing interop at the real-word level vs at the browser level vs at the server level

max: cloud anchors could be completely bypassed by a WASM-type solution

alexturn: Vuforia and HoloLens share an anchor to improve photos.

<DanMoore> +1

dom: sharing could be over several layers, platforms, etc., and is a very broad question. Needs to be subdivided into bite-sized pieces

<ada> +1

<jillian_munson> +1

<trevorfsmith> +1

<JMedley> +1

<alexismenard> +1

<alexturn> +q to +1

Call for time up. Brains too fried

applause all round

RSSAgent, please generate minutes

Summary of Action Items

[NEW] ACTION: Artem_Bolgar to follow up on system level UI popping up for tracking loss
[NEW] ACTION: chairs+editors to work on roadmap planning
[NEW] ACTION: Double check with devices which have more than 2 displays.
[NEW] ACTION: Figure out if any additional work needs to be done for #310
[NEW] ACTION: file issue to allow developers to explicitly state the number of viewports they support.
[NEW] ACTION: Follow up with app developers to see how they handle tracking loss
[NEW] ACTION: Max to read through the conversation on #388 and identify other issues
[NEW] ACTION: Rik to detail what happens in tracking loss
[NEW] ACTION: update the branch where this issue was raised
 

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2018/11/06 09:13:34 $