30 Jan 2019



dom, Atsushi_Shimono_(remote), Chris_Little_(remote), Hirokazu_Egashira_(remote), ChrisLittle, ada, bertf, jungkees, Tony_Brainwaive, trevorfsmith, NellWaliczek, alexis_menard, bajones, RafaelCintron, alexturn, dkrowe, Laszlo_Gombos
cwilso, ada, trevorfsmith
dom, josh_marinacci, Manishearth, alexturn


<dom> ScribeNick: dom

Spec status

Nell: Brandon and I often get asked what's stable in the spec
... we've heard the feedback there is confusion about the status

<trevorfsmith> We're getting started and are setting up audio for remote folks.

Nell: Ada and Chris will draft a summary of the meeting
... and a summary of the status of the spec
... but I'll try to give a quick overview
... for the most part, what's in the explainer document is fairly stable
... there are a few identified deltas between the spec and explainer, filed as issues
... there are still some major design decisions that need to be made
... one of them was controllers, which we discussed yesterday
... another one is hit testing and anchors - we discussed hit testing, more work needed on anchors
... both of these are additive, so should not create breaking changes
... a third item is camera access, including alignment of camera framerates
... we consider this as v1 since it's needed to polyfill further advancements of the spec
... also needed for screen based AR experiences
... Fourth topic: in-line AR and use of HTML/CSS for UI in world-space
... this will be discussed today, where we will summarize the existing status of our investigation and decide next steps
... These are the 4 major design issues Brandon and I want to work on, in addition to bringing the spec into shape
... everyone is more than welcome to propose pull requests to help with spec text, which Brandon and I would review
... We're also adopting more explicit processes around labels and milestones in github - we will send more info after the F2F

Josh: where do we define the scope of 1.0?

Nell: we don't have a written list of what's in or out, since some of it depends on issue resolution
... for topics that are not currently in scope (e.g. object recognition), I leave the floor to Trevor

Trevor: the process is to bring it up in the proposals repo in the Immersive Web CG
... once there is enough momentum on an idea, we create a new repo to build up on it
... later steps might include migrating it to a WG (Immersive Web or other)

Nell: Things can be incubated separately and then integrated in WebXR down the line - as we did for hit testing

Chris: we also have a "Future" milestone in WebXR, for things that are not critical for 1.0

Report from unconference: Declarative 3D language

Leonard: we have 10-15 people
... main question was how to get started on this
... I'll start a thread on public-immersive-web, create an issue on the proposals repo
... we'll also ping the declarative VR CG
... there is an important security aspect on which John will contribute
... Kip also shared his interest in helping on this

Tracking Loss

Handling tracking loss #243

<alexturn> https://github.com/immersive-web/webxr/issues/243#issuecomment-454282109

Alex: At our TPAC meeting, Max and I were talking through behavior during tracking loss
... one question was whether there was enough commonality across implementations for tracking loss behavior
... can we get uniform behavior across implementations when tracking is lost and recovered?

<ChrisLittle> * Current speaker's audio is good, but others often very faint and hard to hear

Alex: 3 main aspects for tracking loss (and 3 for recovery)
... some of our design constraints - we don't want to have too many modes
... In terms of tracking loss, three possible situations:
... * full tracking loss (null pose) - not great, except if there was no tracking prior to the loss
... it can also make sense for anchors
... * freeze last position, keep orientation tracking
... this is often what VR apps want, gives a good default experience in VR
... it can also easily be adapted to move to the two other options
... it also applies well for controllers
... * keep orientation tracking, null position
... similar, but less good for 6DOF headsets due to neck modeling
... Overall, 2nd option sounds like the best one
... The table below summarizes how to apply it to the various tracked entities
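The "freeze last position, keep orientation tracking" option Alex favors could be sketched roughly as below. The object shapes and function name are hypothetical, not the WebXR API; a real implementation would presumably surface this via an XRPose whose emulatedPosition flag is true.

```javascript
// Hypothetical sketch of tracking-loss option 2: freeze position,
// keep orientation tracking. Not the WebXR API.
function poseDuringTrackingLoss(lastKnownPosition, currentOrientation) {
  if (lastKnownPosition === null) {
    // Never had tracking: no position to freeze, so report no pose at all.
    return null;
  }
  return {
    position: lastKnownPosition,     // logically frozen at the last known value
    orientation: currentOrientation, // IMU orientation keeps updating
    emulatedPosition: true,          // tells the app the position is a guess
  };
}
```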

<Zakim> klausw, you wanted to say does "freeze position" still support updating neck model or IMU-estimated position?

Klaus: clarification re freezing the position - does that allow updating it based on neck modeling?

Alex: yes - it's logical freezing

Brandon: I haven't thought this through
... do these same options apply to phone based AR?
... is there a significant difference between these 2 models?

Alex: neck modeling would be replaced by inertial tracking presumably in that context

Nick: we've found that for mobile AR, when tracking is lost, you want to keep position frozen and keep orientation tracking
... you want to communicate that tracking has been lost, but also why it has been lost (e.g. phone moving too fast)
... which can be useful to give feedback to the user

Alex: this sounds aligned with that proposal
... the only time we don't return a pose is when there is no existing position to fallback to

<adrian> re: the snap problem, if we have that bit the engine can just choose not to render

<adrian> but it simplifies the overall usage if pose is always there

Nick: frameworks provide support for compass-aligned orientation; snapping a view is not a good experience, but neither is it good when up is not up - gravity is a key thing to align to
... in some cases, the best approach is not to start the experience until tracking has been acquired
... some of it can be left to the application to override

NellWaliczek: what kind of time delta are we thinking of? is this .5s or 5s?

Nick: the first .5s can be very jarring

Nell: I meant about the suggested delay to rendering

Klaus: it can easily take 5s

Alex: it sounds like the general approach works, with refinements on the never-tracked situation
... the basic rule is if cannot guess, do not pretend you can

Nell: if you are in tracking loss, in AR, you've lost real-world geometry - so can't do hit testing in that context

Alex: in terms of changes we need: change "head" to "viewer"
... refine the "not tracked" yet situations
... and take into account the impact on hit testing

Brandon: a possible addition: we should allow room for the UA to intervene (e.g. for security / privacy reasons)

Nell: maybe tie it to the blur event

Alex: I'll turn this into a pull request
... Moving on then on reviewing tracking recovery
... apps have different ways to react to tracking recovery
... here is what I've seen in use:
... * snap the camera back into position
... this is adapted for stationary or bounded VR experiences
... this creates a jump, but there isn't really a great alternative
... you can always smooth the transition

<ChrisLittle> bye supper calls

<Zakim> klausw, you wanted to say Misplaced origin can lead to user smashing a controller into a wall, so this seems required to support and not just "desired"

klaus: I think it's more than desired to keep the origin static
... you don't want the user to smash their controllers into the wall
... we have to support this

Nell: I realize that because of the way we've set up the reset event, if you have tracking loss and you recover, and the origin has changed, we can fire a reset event
... and we could provide the updated origin via a transform
... I think that addresses my earlier concern
... I'm a bit nervous about how gnarly this will be for developers, but I don't think there are really good alternatives

Alex: I think the key is to make reasonable defaults for "naive" app developers
... while keeping conformant behaviors across devices and implementations

<adrian> what was the wifi password for MV-Summits?

Alex: * the second behavior in tracking recovery is to keep camera and nearby objects at current world coordinates
... that's best for bounded VR scenarios with teleportation (to avoid disrupting the UX another time)
... but behavior 2 can be built on top of 1 as we will discuss
... this would also apply for an unbounded space
... * 3rd behavior (which we had discussed at TPAC): reset world origin to current camera position
... we could do this for unbounded, but it feels more confusing to me
... #2 feels better for unbounded

<Zakim> klausw, you wanted to say "viewer reset" is important for controllers recovering tracking, "pose reset"? and to say maybe keep up to app - API supplies a suggested originOffset

Klaus: we already have originOffset to express the relationship between virtual and real worlds
... my suggestion would be to stick with #1 and provide a suggested originOffset adjustment in the event
... and leave it to the app to decide to apply it or not

Alex: for stationary, I agree; a bit more of an open question on bounded; but for unbounded, there is no firm relationship between virtual and real worlds
... originOffset is not really very meaningful for unbounded
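Klaus's suggestion could look roughly like the following sketch. The event shape and helper are hypothetical (the real API would more likely route this through XRReferenceSpace.getOffsetReferenceSpace()), and rotation is omitted for brevity.

```javascript
// Hypothetical sketch: a reset event supplies a suggested origin
// adjustment, and the app decides whether to apply it.
function handleReset(originOffset, suggestedAdjustment, accept) {
  if (!accept) {
    // e.g. an unbounded-space app, where originOffset is not meaningful
    return originOffset;
  }
  // Compose translations only; a real adjustment would be a rigid transform.
  return {
    x: originOffset.x + suggestedAdjustment.x,
    y: originOffset.y + suggestedAdjustment.y,
    z: originOffset.z + suggestedAdjustment.z,
  };
}
```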

klaus: we also need to be clear about the sequence of events

Nell: when you write this up, pay attention to the viewerSpace

Alex: the viewerSpace feels like an anchor or controller

Nell: this matters eg. for a tag-along UI to alert the user that tracking loss had occurred
... 👏

Alex: the classification of stationary/bounded/unbounded has helped a lot to extract the relevant scenarios

<max> +1

<leonard> +1

<alexis_menard> +1

<Kip> +1

<RafaelCintron> +1

<trevorfsmith> +1

<jungkees> +1

<cabanier> +1


<adrian> +1

<NellWaliczek> +1

[unbounded support]

<klausw> +1

<josh_marinacci> +1

<bajones> +1

<artem> +1

<bajones> +A thrown ball

<NellWaliczek> https://github.com/immersive-web/webxr/issues/225

<scribe> scribenick: josh_marinacci

kip is talking about requestAnimationFrame and requestIdleCallback

kip: we haven't speciried what happens with raf and idle callback w/ webxr

intersection of rAF and rIC

kip: currently they effectively mute these things when presenting
... however we do need compositing to continue, so we need something. let's break it down.
... what's the happy day scenario. a mobile first or all in one device (Oculus Go?). raf should run at native framerate. ric we'll talk about later
... this became an issue because a laptop w/ tethered vr headset has multiple frame rates and multiple frame timings between devices

most underlying systems have a notion of breaking down a frame period into specific points in time and a notion of ownership: hard realtime tasks.

kip: a typical frame will have poses available, some rendering, frames to be presented, then additional time for idle. if these go outside of their allocated time that may consume resources during the hard realtime tasks, ex. compositing. could cause visual errors. blanking headsets. etc.
... additionally, unique issues related to temporal aliasing. ex: imagine a mythical quad layer.
... you've attached a video element w/ drm. you have 24fps film; through frame repetition it is upsampled to 60hz.
... the input frames have to be distributed to the output frames. counting 24 -> 60 should give you an acceptable 232323 pattern.

problems arise when you take the output of the browser's 2d compositor and resample to 90hz for the headset: temporal aliasing on top of temporal aliasing. the end result is 34353435. this makes visual jitter and jank.

in optimal case of going directly from 24 to 90hz you'd get 3444344434443444
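Kip's cadence figures can be reproduced with a small calculation. This is a sketch of the resampling arithmetic only, not anything in the spec; the function name is made up for illustration.

```javascript
// Illustrative math: how many display refreshes each source frame is
// shown for when a sourceFps stream is resampled to a displayHz output.
function repeatPattern(sourceFps, displayHz, frameCount) {
  const rate = displayHz / sourceFps;
  const pattern = [];
  for (let i = 0; i < frameCount; i++) {
    // Number of output refreshes that fall within this input frame's interval.
    pattern.push(Math.floor((i + 1) * rate) - Math.floor(i * rate));
  }
  return pattern;
}

// 24fps -> 60Hz alternates 2 and 3 (the "2323" cadence);
// 24fps -> 90Hz repeats 3,4,4,4 (the "3444" cadence).
```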

video is a major use case of vr so this is a big problem, but could also affect 2d canvas, css animation, and more.

<bajones_> +q

kip: wants to know from everyone here: given that there is no perfect solution, can we optimize for the primary experience? compositing of laptop w/ rift would be in sync at 90hz. up to the hardware and browser to work with this logical 90hz. this implies that the frame rate may change at runtime.

oculus async time warp drops to 45hz

the reason that someone cares about a 2d display is that this is often an asymmetrical experience. ex: twitch stream, driving instructor. experience of secondary views would be compromised.

kip: alternatives
... request idle callback. ric
... another problem with the current model: sub-frame timing. a normal non-xr / 2d use case: at the start of vblank raf is fired. content does everything it needs to do to render, then sometime later it is presented when the frames flip. vr breaks this. in the ideal situation you'd want the rendering portion to be as late as possible in the rendering cycle so the rendering and pose are as close together. in order for the start of frame time to be dynamically moved we need a closed loop system for the feedback to work.

we encourage people to use only raf for rendering and ric for physics and other non-rendering tasks. problem is ric isn't guaranteed to be called.

kip: proposal: additional event callback specifically for these non-view-dependent things. or: re-introduce the submit frame call but make it a promise instead of a blocking call
... caveat: for xrsession, suggest that we have raf still run at the native display rate but no longer have ric.


kip: there are lots of decisions to make here. straw poll time?

nell: not yet

<leonard> +q

bajones: thank you. this is a complex and subtle topic. so many thoughts. promises are not well suited for anything timing dependent. should be a callback or an event. other than that you are right.

kip: it could be an event, but we shouldn't call it request idle callback because it's not guaranteed to be called every frame

bajones: in Windows Mixed Reality there is a call that updates the pose within the frame (frame pose update?). if you frontload physics and stuff at the time you render you can get an updated estimated pose

nell: would this address the concern? we'd talked about this before.

kip: might help. the key is the platform needs to express to content the time periods in which things should occur.
... need indicators for different parts of the rendering process.

other systems suggest doing cpu intensive part after rendering. web workers would also be good for this stuff but synchronization is a problem

bajones: this is very platform dependent. android is extremely opinionated. windows as well but in the opposite direction. could be useful to have something like that. hasn't been a priority yet. hopefully this can be done additively.
... TAG felt an explicit frame call didn't feel very webby.

bajones: keep raf as it is today. introduce something similar to raf with stronger guarantees at some point in the future.

kip: that would probably solve these problems.

bajones: we could give some stronger guarantees. some interesting things we could do. a matter of prioritization.
... going back to the video frame chart. currently chrome and other browsers cap raf at 60hz for stupid compatibility reasons because of badly written content that assumes raf is always 60hz. there is almost certain to be compatibility issues if we try to align window raf w/ xr's raf

kip: based on that feedback if we can't change raf then the engineering work would preclude us from doing some of these things. last option: we don't change raf. still does 60hz. instead we handle things like video layer in bespoke ways.
... old dvd players would look at the current frame and add additional frame reordering because they know both frame rates. we could do something similar for video

bajones: if you can't view the page should we fire raf?

kip: if window raf behaves like xr raf then why have both? compatibility and webworkers. also sub-frame timing differences. that's why we have both.
... if you are on a dedicated device w/ 1 display do we consider the 2d elements and the DOM part of the same document as the xr content?

<leonard> -q

kip: does this notion change as you bring in xr elements.

nell: later today we will recap some stuff, but for security reasons if we go down the road of the dom-to-layer approach, odds are that it will end up being a separate document (like iframes). i appreciate raising these issues so we can factor them into design work. we shouldn't figure this all out yet. too early. dom layer is still iffy.
... 2: i'm apprehensive about locking the window raf to the headset raf because we'd find a pit of failure of content that doesn't optimize. option: subdivision of the rate that the headset is running at. if headset is 90 then raf is 45. sites are already able to handle that.

3: what would it mean to add a session idle callback or to divide sim and rendering phases? one of the things we don't have closure on (we should focus on this in the next few months) is performance feedback indicators. there is no indication of what the frame rate should be and whether the app is making framerate. maybe it could scale back to get a higher framerate. there is no signal to the application of what it can or should do.

nell: at MS we tried to address this right when Spectre happened, so that work was tabled for now.
... not certain if we are PR ready or straw poll. i think we need to split out the issues and solve them separately, then that will lead to the final solution.

<Bajones> +q

kip: does anyone else work on high-refresh-rate monitors?

bajones: my understanding is most browser vendors shrug. no idea what they should do. this kicked off when the ipad pro came out. most browser makers didn't know what to do. no consensus yet.
... the easy thing to do is to cap at 60 but sometimes slower.
... another point. there exists some hardware where you can dynamically choose 60hz vs 72hz specifically for matching w/ content. if we really care about this we could let apps dynamically select this.

kip: for gecko the framerate can change but is fixed during execution of the browser process. hard to change framerate while content is loaded.

Alexis: adaptive vsync tech will come to standalone headsets at some point
... think about a laptop where the battery is low and you want to change the rate.
... i think we are opening a can of worms here.

bajones: it is best practice to do animations in raf but not required. if you do drawing updates anywhere else it will still work. technically if we have window raf running at monitor hz and xr raf at the headset rate, your updates will still work. it's an open question if the browser will composite this in time. will introduce latency.

kip: canvas we could fix w/ app code, but for video we may need a bespoke solution.
... in order for gsync and freesync the 2d windows would also need submit frame. now the monitor pushes frames instead of pulling. similarly in the offscreen canvas there is already a submit call (i think). where do people see this all landing? i'm hoping to see an explicit submit for this new kind of hardware

ada: we are low on time. please be brief.

Rafael: in the case of main screen and hmd, why do we want to do more things on your computer that we are just going to throw away? this seems like it would introduce more randomness and problems. also: on the webgl and gpu working groups they are debating whether to add a present frame call to do before the end of raf to tell the system that you are done and help w/ scheduling.

rafael: yes, offscreen canvases have this too.

kip: so we have related issues. window raf: hesitation to run it faster. sounds like ric shouldn't happen when in immersive mode because lots could break. sounds like an early possibility to consider an explicit submit call, but this shouldn't be WebXR specific.

<Zakim> klausw, you wanted to say is there a suggested method to reliably run something in between XR rAFs?

klausw: currently you can only submit by exiting the rAF. you want a way to say please run this next. one approach: say I am done and please run this thing next reliably and I don't need poses. question: is it good enough to have this or do we need something more complicated?

kip: yes, we could leave it up to the content to remember what it needs.
... to be clear, our primary concern is to ensure the primary OS rendering has the info it needs to effectively run its closed-loop system to adjust the render. currently open loop. worst case.
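The "run non-pose work between rAFs" idea above can be sketched as draining deferred tasks only while frame budget remains. The budget model, clock parameter, and function name here are all assumptions for illustration, not a proposed API.

```javascript
// Hedged sketch: after rendering inside an XR rAF, run deferred
// non-pose work (physics steps, asset decode) only while there is
// time left before the frame deadline.
function drainDeferredTasks(tasks, now, frameDeadline) {
  const ran = [];
  while (tasks.length > 0 && now() < frameDeadline) {
    const task = tasks.shift();
    task();
    ran.push(task);
  }
  return ran; // whatever remains in `tasks` waits for the next frame
}
```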

m: in terms of passthrough devices: tested arcore on android, it synchronizes raf to the native frame rate of the camera. if you put virtual content in the world that animates faster than the real world it creates an uncanny valley effect. we need some solution to have the ar raf in passthrough devices connected to the camera framerate.

chris: break for lunch!

<dom> [break until 1:30pm PT]


<Manishearth> scribenick:Manishearth



NellWaliczek: so, quadlayer
... not just talking about quadlayer, talking about the notion of DOM and the intersection with webxr
... two categories here: the issue that was introduced back two F2Fs ago


NellWaliczek: the traditional ways people draw 2d UIs don't work if there's no way to ... draw them to the screen
... one of the things that we hear frequently -- the idea that you can create 2d uis in world space in html/css
... back at tpac we spent a bunch of time talking about the problem space, recapping:
... ... one of the big things that we were asking was: what does it mean to meet both of our requirements
... the first most obvious thing we'd need to support is interactive content
... e.g. having a button -- what that means in the context of XR with aiming, etc is somethinng we need to answer
... a related question: what do we do about x-origin content? there's a big security rabbithole here
... this is why we don't have APIs like dom-to-texture
... so aside from the issues wrt input there are security concerns
... about letting someone know about what's in the DOM
... interestingly this is solvable in the context of XR since the actual composition is done by the UA and not the user
... this is the concept of a quad layer, the ability to have a "quad" in space which you can back with ... something
... from the dom, which can be then rendered and composited by the UA. the webpage never sees this
... this also opens the door for refresh rate stuff we can play around with because they're independent from each other
... there are things we need to discuss what that structure looks like -- is it just a subelement of the dom?
... bit trickier with multidisplay situations too
... we were also playing with having a different document root
... instead it's something like a different DOM layer, or an iframe or something
... key thing: when you talk about hosting x-origin content in a quadlayer in XR, you get gnarly stuff about *input* in that origin
... to render things you need pose data which the parent document can control
... so for xorigin content by its very nature we can't let input events reach it, even when it's same origin with embedded xorigin info
... there's no secure way to do this without letting the parent document eavesdrop on what's going on here
... what the plumbing for input looks like in terms of UAs is something we need to look into
... that's roughly the high level
... oh, one thing: we also explored one big problem: how can we create a pit of success for people who create experiences thinking about handheld which we want to work on headworn

<Leonard> +q

NellWaliczek: if we put a quadlayer in xr that's a world space thing and that can work in handheld AR but it can break down too
... what would be an easy way to help folks who wish to optimize for handheld
... we need to take this first round of work and turn it into a more concrete proposal

ada: do you see people building AR immersive experiences entirely out of dom elements or would this always just be an addition to ar sessions

NellWaliczek: my hunch is that 3d objects will always be important
... (?) did some work in sumerian to allow people to kinda fake this but the performance wasn't great

Nick-8thWall: just wanted to bring up something for consideration: as we consider doing things like potentially trying to write a polyfill to allow people to do webxr
... how do we get the security model to work in this case, since the polyfill doesn't have this power

NellWaliczek: so this is something jill (?) has worked with, and it's a hard problem

bajones: I would probably expect that we need to start treating some parts of the spec as extensions as opposed to concrete parts of it
... we will have to have at some point a mechanism that lets you say "i want a layer of this type *if supported*"
... browsers may have different priorities here
... e.g. a layer for DRM video (also not polyfillable)
... going to be a point where the polyfill will admit it can only support the webxr core
... it's kinda sad we'll have to leave things behind, though

NellWaliczek: though we should keep thinking about polyfillability when we design things, e.g. we can implement these layers in a reduced form for polyfills

Leonard: is the quad layer rect or rhombus?

NellWaliczek: rect

Leonard: but it can be tilted?

NellWaliczek: yes, it's a square texture that lives in 3d space

Leonard: can you put an AR scene inside this?

NellWaliczek: yes, an inline scene

<Kip_> r+ Would we support stereo quad layers?

NellWaliczek: this will probably be terrible for perf

<Zakim> klausw, you wanted to say implementation of full input is hard (i.e. "file" input element); screen space vs world space

<adrian> re: the immersive-in-immersive, that's how I did the layers implementation

klausw: what about things like file dialogs, not all of this is possible to implement. perhaps a non-interactible DOM is easier to implement even without polyfillability in mind

<adrian> it could work well with quadlayers if it's iframe-based

NellWaliczek: potentially, we need to dig into this stuff a bunch
... such UIs are things you can trigger from pure JS too, though, so we need a solution here too

<Kip_> Also... Would the DPI of the DOM elements be scalable / selectable? Or locked skeuomorphically to desktop DPI.

NellWaliczek: we're so early in the process we don't know what escape valves we have yet
... i have a hard time picturing us landing on a quadlayer design that doesn't support arbitrary textures

<Zakim> josh_marinacci, you wanted to say i do this sort of quad stuff alot when creating visual tools that work inside VR

josh_marinacci: just wanted to chime in i want to be involved in this
... i have use cases for visual tools i'm building that let you create such things inside of media

NellWaliczek: i'd also like to get trevor's input on this stuff, especially on how to build a single mental model for this. it's a gnarly problem, won't work on this in a vacuum

<cwilso> manishearth: this won't support occlusion and things like this?

<klausw> handheld AR quad layer is both world space and screen space, the app has no control over world space position since it's being moved by the user. Similar to controller attached UI elements in a HMD?

<cwilso> Nell: no. But that is sad.

<cwilso> ...and it makes me sad.

<cwilso> Nell: there are complexities around layer painting order, etc., that don't exist in the underlying APIs.

<cwilso> ... if there's interest we could make that an unconference topic

bajones: to follow up: occlusion against the real world vs vr needs the whole api to be updated for it
... but for things like occlusion between layers when you're just doing painter's algorithm stuff
... in such a case you let the quad layer be the bottom layer and render the scene as usual
... and you instead have a dummy layer in your scene which stencils out an alpha layer
... this kinda sucks to do but is a reasonably common technique

NellWaliczek: the real reason for the introduction of quadlayers is

<Kip_> Kip: Requiring "painter's order" does not in particular preclude occlusion of the layers, as the layers can still be rendered from back to front into an accumulation buffer while depth testing (and not needing to depth write)

NellWaliczek: (explains predictive pose)

<Kip_> Kip: This is the same process needed while rendering alpha blended geometry into a scene with a depth buffer populated by opaque geometry and invisible occluding geometry.

NellWaliczek: there's a wide variety of techniques for projection where you offset your pose to correct it
... turns out that people's brains don't see the artefacts of this
... the reason that's relevant is that if you were to take text and slap it on one of those textures, boy does it get blurry
... one of the techniques for this lets the compositor slap the quad in the correct place
... there's a whole bunch of reasons why a quad layer is super useful in vr headsets
... unless the depth buffer is associated in a particular manner you will have artefacts

bajones: oculus has published a bunch of public-facing material on things like reprojection which is really good

NellWaliczek: i would love to see extensions where folks can start playing around with quad layers

<Kip_> When using a natively supported quad layer for rendering text, it is not included in the per-pixel reprojection. Instead, it is composited on top of the reprojected content with the final pose without any per-pixel warping.

NellWaliczek: need to distinguish between UA experiments vs extensions we have agreed on
... e.g. there are hittest experiments chrome did that are going to be supplanted by our work and people thought the experiments were stable

<cwilso> scribenick: alexturn

<cwilso> this discussion is about https://github.com/immersive-web/proposals/issues/43

Long lived XR Apps

Trevor: Been talking about WebXR Device API - now want to talk about a strawdog proposal for something different

<dom> Extending WebExtensions for XR #43

Trevor: First, talking about how WebExtensions work
... Most folks familiar with adblockers - long-lived JS/HTML that understand the browser's full context
... Can give permissions like rewrite ads/translate/etc.
... Functionality not associated with one web page
... Wrote a bookmark extension - here's the manifest to describe what permissions it has and the actions to take
... You just zip them up and add them to your browser
... Can also add startup scripts
... Permissions are fine-grained
... What does this look like in XR?
... Think about user agents like Alexa, etc.
... Not well-suited given our current WebXR Device API
... Think too of Google Lens for translation
... Could start/stop, but more of an ambient application
... What do we need to add to the WebExtensions API to allow for these scenarios?
... Currently a CG that has shipped - not a standard
... Security/safety aspects as well
... WebExtensions currently get lots of permissions - we need to be careful about security here
... Need a new kind of background script that can signal tracking loss, etc.
... Unlike WebXR Device API, we can't just give full control - UA has to orchestrate
... Background script could tell UA to render glTF and request changes
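For reference, a WebExtensions manifest of the kind Trevor described might look like the fragment below. This is a hypothetical example only: the extension name and files are invented, the permissions shown are existing 2D WebExtensions permissions, and no XR-specific permission exists today.

```json
{
  "manifest_version": 2,
  "name": "Hypothetical XR Annotator",
  "version": "0.1",
  "description": "Illustrative long-lived extension with a background script.",
  "permissions": ["storage", "activeTab"],
  "background": {
    "scripts": ["background.js"],
    "persistent": true
  }
}
```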

<Zakim> klausw, you wanted to say how much does this overlap with WebXR API? Address separately, or as part of declarative XR?

klausw: This seems scary - is this right approach?
... Maybe fits better with declarative XR approach with contributing geometry
... New set of APIs here

Trevor: That's right

klausw: This is a CG topic - not for WebXR Device API in WG

<bajones> +q

Trevor: Indeed, this is scary - not necessarily more scary than today's 2D WebExtensions

bajones: This is the purpose of WebExtensions - because you install them explicitly, we trust them more
... Scary not necessarily a reason to avoid WebExtensions here
... This can often feel like the "WebXR group"
... This really is the "Immersive Web WG and CG"
... Anything that advances the immersive web is fair game
... Is this something you anticipate surfacing through browsers as non-exclusive applications on HoloLens or Magic Leap?

Trevor: While these widgets could end up in browser home environments, these could also end up in the actual shell home environment of the device too
... Have ambiently running things, plus GIS system, plus agent, plus lens for translation
... Collaboration here may depend on which browsers/OSs want to work together here

DonB: To security concerns, John's doc is great and calls out many threat vectors
... This is new territory here
... Devices can sometimes diagnose maladies users may not even know they have
... We're in sensitive territory here
... Need to at least warn users

Trevor: I share these concerns as well
... Current installation process is clear about kinds of data you get
... Browsers that support WebExtensions let you review the extensions and manage them and revoke them
... Indeed, new type of data now

<brutzman> John Pallett's work on threat vectors and vulnerabilities: https://github.com/immersive-web/privacy-and-security/blob/master/EXPLAINER.md

Trevor: Google has proposed a change to the manifest format that changes the permissions model
... Just wanted to have a review here

Max: This is something I think of a lot
... When we've all got glasses on, we'll need to mix different apps in this way
... I wouldn't get too hung up on how this lights up
... Seems like a good way to play around
... One of only ways on web to do two things at once right now
... Fully endorse experimentation

Trevor: Same problems being addressed in native device platforms
... Multiple apps, etc.
... We should solve this in web ecosystem too

klausw: Need a model for multiple things happening at once, but permissions are a risk if devs get hacked
... Should experiment here

Trevor: Blair's experimented with compositing multiple apps too
... Need to securely route input in a way the user understands

Kip: Considered intercepting and modifying real world information?
... An extension could make XR sites more accessible - e.g. smoothing out hand motions for Parkinson's

Trevor: Have thought about this, but need to flesh this out in the doc
... Let's talk later

ada: Now time for lightning talks
... One from David A. Smith about Croquet V

<ChrisLittle> bye

David: One way to think of the web is as a VM
... We are a direct descendant of the Xerox PARC team
... 5th iteration of the system
... This shows why we need some of the things that Trevor was just talking about
... Always at the whim of the VM we're running on
... Would rather not go outside the web here
... First wave of decisions is good - want to make sure the next wave of decisions is good for us too
... You can see portals from different sites, but need them to be properly sandboxed
... Doug Engelbart's mother of all demos is now 50 years ago
... The important concept to him was about the computer as a communication tool to extend what you are
... We have the opportunity to fix the web to make it a truly collaborative platform
... This is a shared space - Bert's moving on his device and we can both interact together
... This is a huge Excel spreadsheet
... Shows what you can do in 3-space more so than on a monitor
... You may think this is a small transition, but you're wrong
... You're going to walk down the street and have many views as you go
... Took the object and brought it from one world to another
... Information flows seamlessly between systems
... Multi-user at the base
... Need to make links on real-world objects
... Need the sandbox, but we need a lot further than that
... May want to start from where we want to be and work backwards, rather than try to step forward from where we are
... Data from multiple sources - completely sharable

Kip: Any efforts you've had to decentralize as part of the effort?

David: Totally decentralized - links can go anywhere
... One central thing is a reflection server for timestamps
... Using CouchDB as a database
... Thin on requirements
... Running this on a simple HTTP server on the computer

<dom> Croquet V Open Source

Trevor: Can you speak to the security features missing now?

David: What Nell mentioned is right to the point
... We're a shared space - may want to control what the user can do
... Programmable from within the system
... Can't just be 2D and needs proper occlusion
... JS all the way down
... This is the power we need to provide to the end users - not just another web page
... The web has got the vast majority of the world's information

bajones: I am standing between you and the unconference (literally)
... Mirror views work differently on different platforms and some may not be able to deliver final composited output cheaply
... Who believes mirror views in the web window are important for version 1 of the web API?

<josh_marinacci> +q

Nell: How much of this is about capturing mixed reality video?

bajones: For Twitch streams, apps may just stream the mirror window itself
... So far, there are mechanisms apps can use

<alexis_menard> +q

josh: Is this an OS/UA feature instead?
... Feels like something that the browser would do rather than the app

bajones: In the case of lots of mobile systems, there are OS capabilities for mirroring/casting to other devices

Alexis: Very convenient that in dev tools you can see what's going on in device
... But also, useful to simulate device

bajones: Useful to keep doing that
... Systems have mirror windows and let you do that all the time

David: The main reason I see a need for that is not for me but for others to see what's going on
... Kind of a marketing problem
... Many ways to do it, but good to have a seamless model for that

cabanier: Not important for version 1, but good to do it at some point
... If you display the raw image on a monitor it will look really weird

<Zakim> klausw, you wanted to say inspector mirroring doesn't work for Android VR

cabanier: Perhaps extra eye to render to without distortion
... Don't need it now

klausw: Inspector works now but won't work for immersive-vr in future

<Zakim> Kip, you wanted to comment on asymmetrical experiences (be able to click on mirror window to select things from 2d display)

bajones: Could make this work later

Kip: App may want to react to clicks on mirror view
... For asymmetrical experiences, could restrict ability to do that

bajones: This is more contentious than I thought :)

<klausw> ^^^ 15:26: inspector content view will stop working for immersive-ar, already doesn't work for immersive-ar

bajones: Asymmetrical use cases are really interesting and don't want to lose it
... You can also just render a third view of scene manually for asymmetrical experience

DonB: Important that others can see what's hidden inside headset
... Need to keep thinking about this

<dom> Expose a hidden area mesh or a hidden area depth write function #463

Kip: Full details in the issue
... 3 options for WebIDL
... Many pixels in the render won't be visible in the headset
... Need to define where those areas are
... More important on mobile where bottleneck is fill rate and memory bus

<dom> [reviewing https://github.com/immersive-web/webxr/issues/463#issuecomment-458839560 ]

Kip: Propose adding a triangle strip to XRView for hidden/visible area
... Allow degenerate triangles to break the strip
... 18% perf improvement just with the draw call to draw the strip
... More advanced engines want to extrude and have narrow culling volume
... May want to help apps get extruded volume directly
... Can make this customizable too
... Went away from implementing clear function
... Engines can do smarter things with mesh itself
... Can make better decisions if you're inverting depth buffer in how you clear
... Clear function approach would require more spec detail around how to interact with other state
... Might be easier to implement mesh approach while keeping additional benefits
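Kip's point about degenerate triangles is that several disjoint regions (e.g. the hidden areas of both eyes) can be packed into one triangle strip and drawn with a single draw call. A minimal sketch of the packing trick; `joinStrips` is a hypothetical helper, not anything in the WebXR proposal's IDL:

```javascript
// Join two triangle strips into one by inserting degenerate
// (zero-area) triangles, which the GPU culls for free. Vertices
// are [x, y] pairs; joinStrips is an illustrative helper.
function joinStrips(a, b) {
  if (a.length === 0) return b.slice();
  if (b.length === 0) return a.slice();
  // Repeating the last vertex of `a` and the first vertex of `b`
  // produces only degenerate triangles across the seam, so both
  // regions render in one draw call.
  return [...a, a[a.length - 1], b[0], ...b];
}

const left  = [[0, 0], [0, 1], [1, 0], [1, 1]];   // 2 triangles
const right = [[2, 0], [2, 1], [3, 0], [3, 1]];   // 2 triangles
const strip = joinStrips(left, right);
// A strip of N vertices yields N - 2 triangles: here 10 vertices
// give 8 triangles, 4 real and 4 degenerate "bridge" triangles.
```

This is why exposing the mesh (rather than a clear function) keeps engines in control: they decide how to draw it, e.g. as a depth-only pre-pass.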

bajones: Does it change for things like IPD? Do we need an event?

Kip: For some headsets, it may be beneficial
... For combined culling frustum, would change with IPD

<artem> +q

Kip: Should probably just be fetched every frame

artem: How does it work with multiview?

Kip: By providing the mesh, it's up to the engine to clear - mesh is per-eye
... Engine can do this work per array element

ada: Unconference topics: Occlusion meshes, camera alignment, future of AR

cwilso: Show of hands

ada: And mirroring
... Occlusion: 5 votes
... Camera alignment: 3 votes
... Future of AR: 7-ish

everyone: (group votes to do something that no longer involves scribing)

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/01/30 23:43:01 $

Agenda: https://github.com/immersive-web/administrivia/blob/master/F2F-Jan-2019/schedule.md