W3C Immersive Web Community and Working Groups Face to Face 2nd day (Seattle)

06 Feb 2020



Manishearth, bajones, mounir, dino, jrossi, kip, avadacatavra, cwilso, LocMDao
Ada Rose Cannon, Chris Wilson, Manish Goregaokar, Trevor F. Smith
avadacatavra, ib, kip, trevorfsmith


WebXR Layers and Advanced Layer topics

rik: short overview of what webxr layers are. what's the problem? currently webxr renders to a single framebuffer
... causes performance problems, visible artifacts in text, high battery consumption
... also want to offer efficient 3d video and interactive html ui
... proposed solution is webxr layers built on top of openxr layers
... disadvantages: author no longer in full control of scene
... what are layers: swap-chain-backed buffers stacked on top of each other
... managed by a compositor not the browser
... types of layers: projection, quad, cylinder, equirect, cube map. layers can be mono or stereo
... cylinder layers are useful for 360 video
... gl layers: all layers in a page share a webgl context. write only. problem: no antialiasing
... framebuffer layers: main idea: overcome limitation of no antialiasing
... third type of layer is video layers
... with webxr you draw the entire scene. with layers, how does the author know where the layers are?
... something needs to be done with hit testing
... how can we do dom layers?
... similar system to dom overlays. how do we do hit testing. what are the privacy and security issues

klausw: (missed)

<trevorfsmith> klaus: We should distinguish between DOM layers and UA things like keyboards.

<kip> Klaus: need to determine what support is needed for DOM layers. Could be burden on UA implementers

RafaelCintron: are the privsec issues the same for dom layers as dom overlay?

klausw: i think it could apply equally. you restrict poses so the hosting page no longer gets poses or events. i think we could use the same logic. you need to think about things like opacity: the user might not be aware that they're interacting with a dom layer

RafaelCintron: what happens if you put a dom layer on the wall and it navigates to a page? does that mean i can't draw the room?

klausw: you just don't get controller poses for cross origin content

<kip> klausw: 3rd party content may require the UA to draw the controllers

klausw: it would be an option to block 3rd party content or block poses completely
... there could be styling issues. could be a bit jarring. but if there's 3rd party content for e.g. ads could be jarring anyways

ravi: interaction gets a little more difficult when parts are on different layers

rik: number one use case of dom layers is ui

<Zakim> kip, you wanted to ask how DPI is selected

kip: if we're going to show things rendered with dom elements in a layer, one thing the ua has to decide is what dpi things need to render at
... would it be a fixed value or autodetected

rik: i think autodetected based on what the browser is rendered at

ravi: some of this is being discussed on dom overlay

jrossi: leave it up to the ua. up to the ua to decide what's right for quality and perf. would you change dpi based on movement of panel

kip: should it be detectable by js

jrossi: instinct is no

<ravi> https://github.com/immersive-web/dom-overlays/issues/9
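A rough sketch of the kind of auto-detection heuristic discussed above: size the DOM layer's texture so one CSS pixel covers roughly one display pixel at the layer's current distance. The function name, parameters, and the angular-size formula are all illustrative assumptions, not anything a UA has specified.

```javascript
// Hypothetical heuristic for picking the pixel resolution of a DOM layer.
// Idea: render at enough pixels that one CSS pixel maps to roughly one
// display pixel at the layer's current distance from the viewer.
// All names and the formula itself are illustrative assumptions.
function pickDomLayerResolution(cssWidth, cssHeight, layerWidthMeters,
                                distanceMeters, displayPixelsPerRadian) {
  // Angular width of the layer as seen from the viewer, in radians.
  const angularWidth = 2 * Math.atan((layerWidthMeters / 2) / distanceMeters);
  // Display pixels available across that angle.
  const displayPixels = angularWidth * displayPixelsPerRadian;
  // Scale factor from CSS pixels to texture pixels, clamped to sane bounds.
  const scale = Math.min(4, Math.max(0.5, displayPixels / cssWidth));
  return {
    width: Math.round(cssWidth * scale),
    height: Math.round(cssHeight * scale),
  };
}

// Example: an 800x600 CSS panel, 1m wide, 2m away, on a display with
// ~1800 pixels per radian of field of view.
const res = pickDomLayerResolution(800, 600, 1, 2, 1800);
```

A UA could re-run a heuristic like this as the panel moves, which is why jrossi's question about changing dpi on movement matters; whether the result should then be observable from script is the open privacy question kip raised.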

jrossi: dom layer type is the hardest one of these and is the furthest off for us. particularly video is big bang for buck

ada: cylinder one is really good for text. how would you render text without dom layer

jrossi: texture. more to do with ability to recompose layer at time that it's displayed

cabanier: by the time the text texture gets to the compositor the pixels are distorted

bajones: subpixel rendering probably doesn't work in a vr headset

cabanier: we want to make sure that the design for layers now doesn't stop us from doing dom layers later

klausw: if you want to take this in stages having a display only dom layer is good. can still do most of what you want by having app do hit testing. could be a stepping stone if we aren't ready to commit to fully interactive dom layer

Manishearth: pathfinder is a fully interactive gpu based text renderer
... uses aa

jrossi: depends on the device. features on the quest will invalidate that

bajones: presentation -- my motivation: wider graphics api support
... wider array of inputs into webxr. webgl2 texture arrays, webgpu should make a huge difference. if we create the layer system in a robust way, we can easily take advantage of the layout benefits cabanier talked about as well. not diving into details of multilayer
... artem's layers core proposal: two basic parts. 1. layer types define how layers content is shown. 2. layer source types defines what is shown
... layer source aka swap chain aka image source
... openxr relies on the dev to set up the layer correctly, but on the web we'll benefit from trying to do the right thing for users, especially when it's clear what that is
... main idea: create layers to define presentation in the world, then create a layer source which automatically allocates the gpu resources needed to satisfy the layer's needs
... code sample: layers are fairly cheap to create. backing sources are more expensive.
... instead of a framebuffer source, we use a webgl2 texture layer source in this example. now we have to do a bit more management in the render loop
... this will enable multiview
... how would this work with a quad layer
... need to set up reference space, transform, width, height, which has nothing to do with the backing texture. then to populate it with data, you take a layer source (which should all be fairly interchangeable), and you do have to specify some additional details like width and height
... unlike the projection layer, these can't reasonably be assumed
... the difference with the quad layer is that you need to specify the dimensions/stereoness of the source
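A sketch of the two-step pattern described above: a cheap layer describing placement, then a more expensive source carrying the GPU-backed pixels. The class names and option dictionaries are loosely modeled on the in-flight proposal and are assumptions, not a shipped API; minimal stub classes are included so the shape is self-contained and runnable.

```javascript
// Stub stand-ins for the proposed layer types, so the usage below is
// self-contained. Real names/shapes are from the in-flight proposal and
// may differ; treat everything here as an assumption.
class XRQuadLayer {
  constructor({ space, transform, width, height }) {
    Object.assign(this, { space, transform, width, height });
    this.source = null;
  }
  // Step 2: attach a source; in the real proposal the source allocates
  // GPU resources sized to the layer's needs (here it is just recorded).
  attachSource(source) { this.source = source; return source; }
}

class XRWebGLTextureLayerSource {
  // Quad layers can't infer pixel dimensions or stereo-ness from the
  // layer geometry, so the author specifies them explicitly.
  constructor({ pixelWidth, pixelHeight, stereo = false }) {
    Object.assign(this, { pixelWidth, pixelHeight, stereo });
  }
}

// Step 1: a cheap layer describing placement in the world...
const quad = new XRQuadLayer({
  space: "local",            // stand-in for an XRSpace
  transform: "identity",     // stand-in for an XRRigidTransform
  width: 1.0, height: 0.75,  // meters
});
// Step 2: ...backed by a more expensive source with explicit dimensions.
quad.attachSource(new XRWebGLTextureLayerSource({
  pixelWidth: 1024, pixelHeight: 768, stereo: false,
}));
```

The split matches bajones's stated motivation: separate constructors let each type validate and document only the init values that make sense for it.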

Manishearth: is there a reason we need a transform not an offset space

bajones: you could use an offset space but i find it harder to reason about for position. could just be me. in either case this matches pretty directly with how openxr handles things

bialpio: is it a reference space or an xrreference space

bajones: i don't think there's any reason it has to be an xr ref space. for example it could be cool to do a wrist mounted ui
... the webidl pulls very directly from how openxr does things
... differentiate btw xrprojectionlayer and xrnonprojectionlayer. for example, it doesn't make sense to construct an xrvideolayersource with an xrprojectionlayer

RafaelCintron: someone proposed adding this sort of information like stereo to html video element so devs don't have to specify that themselves

<Zakim> kip, you wanted to ask about interaction with 360 video audio

bajones: the ability to manually set this is probably important because even if videos could be marked there's a lot of unmarked data out there

kip: has there been thought to what happens to audio of the video?

bajones: the openxr concepts of video are entirely visual

cabanier: what do we do with 360 audio and video. do we do ambisonic audio?

jrossi: i can see the author wanting to override this so i can see this being an additional option

Manishearth: can take video elements and feed them into the web audio api
... there's been discussions of adding xrspaces to web audio nodes

cabanier: one difference is that you individually create layers and then sources. i created it all at once because there's a 1-1 relationship between the layer and the source

bajones: that's a good point. there's nothing that says that the way i'm doing it is the right way. cabanier has an alternate proposal. wrt the reason i used this pattern is the ability to separate things in the constructors
... it makes the documentation more self explanatory and better error handling imo
... when you use this kind of layer, only these kinds of init values make sense, etc
... that was the primary reason i used this pattern. again not saying this is the right way. i just thought it was more ergonomic personally

<cabanier> markdown of alternate proposal: https://github.com/cabanier/xrlayers/blob/master/webxrlayers-1.md

DaveHill: is there a way to create a layer after creating a layer source?

bajones: the thing i didn't like about that pattern is it made the layer source more fluid in a way i didn't like. i wanted the allocation to be more tightly tied to the constructor.

cabanier: should it be in the charter? who wants to work with us on this

jrossi: we've had convos with folks who would love to have this. there's a path for dev engagement and content

mounir: how big is drm?

jrossi: we haven't seen a lot of drm 360 content

mounir: webcodecs expose the decoding on the web. we can now play a video entirely in javascript

<Zakim> cwilso, you wanted to ask the charter question

Manishearth: a key reason people want firefox reality is for watching videos

cwilso: it sounds like people want webxr layers in the charter. if you disagree let me know later

<ravi> @Manishearth: Magic Leap Hand joints info: https://developer.magicleap.com/learn/guides/lumin-sdk-handtracking

<Manishearth> ravi: https://github.com/immersive-web/webxr-hands-input/issues/1

Shipping Debriefs and Sharing Messaging on v1

<ib> scribenick: ib

manishearth: we have webxr implemented with AR .. we don't have any other modules, including gamepad

<Zakim> kip, you wanted to talk about mozilla

manishearth: we are starting to experiment with hit testing

kip: on firefox side we have working prototypes .. can't give specific timing right now but stay tuned

jrossi: shipped webxr in december and enabled hand tracking in experimental

<jrossi> jrossi: with gamepad module later in December

mounir: hit testing will be shipping in 81

RafaelCintron: have webxr turned on by default in windows mixed reality

<trevorfsmith> bajones and mounir: shipped WebXR in december, AR will be in Chrome 81

RafaelCintron: working on the hybrid multi adapter and hit testing

<trevorfsmith> ada: Maintain immersiveweb.dev, a place for devs to get status of the API and how to get started in frameworks.

ada: what kind of messaging do we want to share with developers to get them on the same page .. for example do we want to suggest they build single pages that support everything or do targeted ones

trevorfsmith: we have talked before about handling the complexity of varied types of input and how it gets way too complex .. we need a library that simplifies this, a higher level abstraction

dino: I'm not ignoring the lack of webkit and i'm working on it

klausw: on the input side suggests engaging with aframe community
... a generic library probably isn't feasible but aframe might provide a good middle ground
... we need to have both a low level way as well as some best practices for how to deal with the complexity
... at a minimum we should provide a library or some documentation instead

<klausw> the input library issue I mentioned: https://github.com/immersive-web/webxr-input-profiles/issues/160

dm: checking some webapi docs and we should probably coordinate between this and webxr
... which of the browser vendors are actively working on spreading the message of webxr?

cwilso: we are actively tackling the MDN docs for webxr ... if anyone wants to help out get in touch

manishearth: we have people working on docs and inconsistency ... can put you in touch with Eric if you are interested

kip: we have a few people working on this from our mixed reality content team and improving the developer experience, including demos and checkout "hello webxr"

jrossi: I'm a good POC from the oculus side, our general approach has been to not recreate community resources. We are also looking on the content side over the next year

<Zakim> cwilso, you wanted to ask if we want to make this a more explicit "project"?

cwilso: Is there a desire to setup a group/mailing list to coordinate these efforts?

jrossi,dm: .. agreed this sounds like a good idea

cwilso: anyone who is interested in participating reach out to the chairs

kirby: quick question for jrossi, is reactvr dead?

jrossi: ... we are revisiting this and trying to determine if this is the right thing, but we are working on a path for migration

mounir: ... Shipping of hit testing ... it means we are actually releasing it

avadacatavra: ... Does that mean that google is signing up to ship changes if there is a change?

mounir: ... yes

Manishearth: .. are you considering the transient input issue resolved

mounir: ... yes the main issue is the name change

bialpio: Transient input name change may not sufficiently advertise that it is for edge cases only


ada: .. I want to bring this up again
... navigation - essentially the ability to change the location of the page you are currently on, and when you go to a different page you are straight back into the interaction
... goal is to achieve a seamless immersive navigation experience

jrossi: in oculus browser we have a lot of content discovery services and we would like to be able to deep link into an immersive experience.

jrossi: for example you don't want to have to click in the desktop and then click again once in the immersive experience

ada: ... is that kind of like PWA on the oculus home screen?

jrossi: not exactly its more oculus curated content

ada: .. like a bookmark?

jrossi: not exactly, its more like a discovery portal that surfaces things you haven't seen before, what we want is to remove the friction to get into these experiences

trevorfsmith: a lot of this seems to revolve around how the user understands the transitions ... in the past we landed on there not being a way to make these transitions from a user communication standpoint

avadacatavra: ... it comes down to a trusted UI.
... the fallback is to go into 2d browser mode since that is a trusted UI
... we are working on prototyping some immersive trusted UI concepts, user testing, etc... maybe a few months
... this is a requirement for an immersive navigation experience

Nick-8thWall: ... people really care about this and we should solve this

DaveHill: ... is the idea to show key information like url so you can understand where you are?

avadacatavra: ... yes exactly, we need to be able to give non-spoofable information to the user

artem: ... we have an experimental implementation behind a flag ... you can play with it

bajones: ... this is a long standing request from people in the community
... developers external to this group have a concept from ready player one of what it means to open a link in VR ... and at least for the foreseeable future this isn't the case. We need to give developers a realistic idea of what this will look like

ada: ... would internal links (same domain) be easier? or would it be just as bad?

avadacatavra: yes and no, yes they are less of a risk but we need to figure out how to communicate to users when this is possible ... managing expectations

bajones: ... i don't think it fundamentally changes cross-domain versus same domain
... however there is an out by using a single page app that streams in new content
... it's a bit of a cheat and might be another area developers need to understand

ada: ... part of the reason i added this to the schedule is to gauge the feasibility of doing this in the next 2 years
... seems like this is doable, any opinions either way?

Nick-8thWall: ... i think it would be useful to break it out into same origin and cross origin. Same origin would unlock some value and might make progress

avadacatavra: ... we think this belongs in this charter

jrossi: ... this seems like it hinges on the trusted immersive UI ... so yes we would like to work on it

mounir: ... what do you mean by 'add it to the charter'

cwilso: ... for now it means it is explicitly NOT out of scope

mounir: ... do we have someone who is willing to open a spec?

avadacatavra: ... yes maybe not in the next few months but we would be interested

<trevorfsmith> Yes, it's lunch.


<trevorfsmith> scribenick: kip

Doing Work off the Main Thread

<cwilso> Surma's blog post: https://dassur.ma/things/omt-for-three-xr/

cwilso: Surma did an interesting exploration on moving work off of the main thread. Wrote blog post about it
... brandon said let's talk about it. Drop in and chat with us. So here we are
... Brandon, do you have additional context?

brandon: Really awesome post. When Surma reached out about how feasible it would be to move WebXR to a worker
... response was that it should work
... Took all events from window to off of window
... Never moved beyond that as we were all busy
... now things work, time to revisit

Ada: Present through webex if have anything to present...

Surma: I don't have any to present. Have blog post
... Not sure who has seen it
... Main thing have been working on is trying to promote
... UI libraries belong in main thread
... ThreeJS samples moved physics to worker
... worked out how to synchronize renderer and worker in lock step
... All possible now
... Some things should be easier
... You write a web app. As developer don't know what device will be
... Variety of devices out there
... First experiment showed that VR device has same problems but with more pressure
... Some 72Hz, some 90Hz, some higher
... Less and less possible to run on main thread
... Main thread saturated with rendering
... Essential to move to other thread to keep stable frame rate
... Experiment found that in the worker didn't know the frame rate.. Just chose 90Hz.. Which is not ideal
... Would love to see that can have access to RAF in worker for WebXR sessions to run at the same framerate
... Other aspect is that had to manually forward all the input events (such as controller events)
... Had to manually forward from main thread to worker. Could use help from standards group there
... Has been thought about at some point. Let's pick it up again
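The manual forwarding Surma describes usually means snapshotting an event into plain, structured-clone-friendly data before postMessage, since live DOM/XR objects can't be posted to a worker. A minimal sketch of that idea; which fields get copied is an illustrative assumption.

```javascript
// Live event objects (DOM events, XRInputSource, etc.) can't be sent
// through postMessage, so the main thread snapshots the fields the
// worker actually needs into a plain object. The field selection here
// is an illustrative assumption.
function snapshotSelectEvent(event) {
  return {
    type: event.type,                       // e.g. "select", "squeeze"
    handedness: event.inputSource.handedness,
    targetRayMode: event.inputSource.targetRayMode,
    buttons: event.inputSource.gamepad
      ? event.inputSource.gamepad.buttons.map(b => b.pressed)
      : [],
  };
}

// On the main thread this would be wired up roughly as:
//   session.addEventListener("select",
//     e => worker.postMessage(snapshotSelectEvent(e)));

// Example with a mocked-up event, since no XR session exists here:
const snap = snapshotSelectEvent({
  type: "select",
  inputSource: {
    handedness: "right",
    targetRayMode: "tracked-pointer",
    gamepad: { buttons: [{ pressed: true }, { pressed: false }] },
  },
});
```

Standardizing worker-exposed sessions would remove the need for this boilerplate, which is the help from the standards group Surma is asking for.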

cwilso: Anyone else have interest in driving down this path now
... To see what is necessary.
... In particular need to forward events
... Any other topics need to experiment with
... To make it more efficient?

ada: We might be the wrong audience to gauge enthusiasm for this
... Are people consuming the api or consuming frameworks built on the api
... If we have anyone here who has a lot of contact with developers and may want to say what performance is needed for main thread

Manish: Servo is very multithreaded. Not a specific main thread
... WebXR is done on a different thread. Script thread. Rendering elsewhere. works nicely
... Much harder for other browsers to move their architectures
... Good if we can support WebXR in workers

cwilso: Surma's exploration goes through interesting steps. get physics off UI thread
... heavy weight when running 1000s of objects
... We're the ones that will accidentally educate WebXR developers in what they do (keep things off the thread)

trevorfsmith: Putting on developer hat
... Complexity in moving off UI thread is that have to manage replication
... The trick is that the worker thread doing off-main-thread work needs to talk to the server and receive replication events for the changes it manages, then pop up to the UI thread for rendering
... Any effort to solve this for developers would need to take into account this complexity. More than what developers do on their own

<Zakim> cwilso, you wanted to talk about follow up on Ada's "consumer of api"

cwilso: Queued to follow up on Ada's consumers of the api
... Ties in with what you said. Surma is on the web dev team. Group tries to educate developers on how to use an api
... part of concern is that people will build stuff that works and think it's fine. But won't scale up to 1000s of balls
... Then blame api. dive into work on patterns. Encourage people to create threejs, WebXR apps that put physics on other thread. Important to do now rather than wait until its a problem

Nik: Sounds like a useful thing, what Ada said
... Echo sentiment. Developers follow by example. Come up with a pattern that should be followed.
... People will appreciate

ada: would like to say the same. Everything is new and changing rapidly. Good examples of multithreaded WebXR as the default way to do things. Would be a good way to show good practices

diego: Agree there
... ties well with communication before lunch
... Makes sense to expose this to developers

brandon: For one, would like to clarify Manish's
... JavaScript doesn't allow... there is some similarity in chrome. Stuff happens in other process
... Not where people are falling down.
... Want something to happen on a thread. Main thread also tends to be a place where people tend to do UI rendering, audio work
... Overhead of process is what we are talking about here
... agree it is a good time
... Set some standards
... depends on where we see bottlenecks
... "Lots of physics makes my app fall down."..
... suspect it's more than that and there is value in letting WebXR work in worker
... Historical note... Period during WebXR development. Where people wanted this API to be worker only
... API compatibility.
... Need to get immersive sessions from a user activation event
... Those just don't happen in workers
... See this api working at first glance by working on main thread but being transferrable to worker thread
... Everything should be able to happen there

ib: For physics implementation in Sumerian saw performance improvement. Happy to put you in touch with developers that work on this

Rafael: Few months ago there was a w3c games workshop at microsoft
... Run by Babylon.js people
... Use postMessage to send things over. Hard when using 1000s of particles.
... Hard to maintain sharedarraybuffer
... And preserve ergonomics
... No conclusions. Nobody from JS team there
... Maybe we can have some objects be transferable. Children transfer along
... One thread at a time can access
... something to think about
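The SharedArrayBuffer pattern Rafael mentions typically looks like the sketch below: a physics worker writes particle positions into shared memory while the render thread reads them without copies. This is a generic illustration of the technique, not anything Babylon.js-specific; the layout and the generation counter are assumptions.

```javascript
// Both threads view the same SharedArrayBuffer; the physics side writes
// x/y/z per particle, the render side reads without any postMessage copy.
// A generation counter (via Atomics) tells readers a new frame is ready.
const PARTICLES = 1000;
const FLOATS_PER_PARTICLE = 3; // x, y, z

const sab = new SharedArrayBuffer(
  4 + PARTICLES * FLOATS_PER_PARTICLE * Float32Array.BYTES_PER_ELEMENT
);
const generation = new Int32Array(sab, 0, 1);
const positions = new Float32Array(sab, 4);

// "Physics worker" side: step the simulation, then bump the counter.
function physicsStep(dt) {
  for (let i = 0; i < PARTICLES; i++) {
    positions[i * 3 + 1] -= 9.8 * dt; // naive gravity on y
  }
  Atomics.add(generation, 0, 1);
}

// "Render" side: check whether a fresh frame has landed.
function frameReady(lastSeen) {
  return Atomics.load(generation, 0) !== lastSeen;
}

physicsStep(1 / 90);
```

The ergonomics problem Rafael raises is visible even in this toy: nothing stops both sides from touching `positions` mid-step, which is why proposals like transferable object graphs with one-thread-at-a-time access keep coming up.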

Nik: Question.. Babylon example is specific to WebXR or general property of Canvas work?
... Other things specific to webxr to be concerned about, or solve at a different level of the stack.

Rafael: WebXR makes it worse.
... A problem on the web platform in general. Tough as JS engines like to keep everything together. Not have locks everywhere
... Deadlocks.. Memory corruption not acceptable on web platform like native

ada: Brandon, would making the XR session transferable work?

brandon: Would need to expose most objects to worker. Should be relatively doable
... Need to make session transferrable
... Make sure plumbing happens
... Otherwise sessions in chrome's implementation has been checked with offscreen canvas. Pretty agnostic
... Not sure if workers would make scheduling more complicated in that context
... In idl terms, everything to make worker exposed..
... Need to take a deeper level validation
... Need to see if concepts are compatible with workers
... Difference between in-practice

mounir: There is no guarantee that we do this unless we get push from customers

Ada: Best done as a module?

mounir: Perhaps new version of the spec can do that

manish: Concern about making it sendable across threads. Let it be created on a thread, then it stays there. When throwing it across, the entire RAF system may need to be moved across
... Lots of work to figuring that out
... We have a lot of problems in getting that to work on the web.
... fine to create on a separate thread. Not to create on one and move to another
... RAF loops may be a problem

cwilso: Nobody in queue
... interesting to explore but not something that we can hammer out in short time frame
... Any other conclusions we can draw?

brandon: Patterns are viable
... Want to encourage in examples and libraries
... It is a reasonable first step
... without necessity to move the entire working group

mounir: For more context. As implementer.. Is complex to move something to workers
... Easy to have an idea. Looks like media.. MSE in a worker: started 1.5 years ago, still working on this
... We had website coming with use case where struggling and made it happen
... Expect mozilla and oculus to also not have enough resources

Ada: To confirm. This is a blocker to using offscreen canvas?

Brandon: No.. Can use offscreen canvas today
... Can't necessarily use offscreen canvas in a worker...
... The way to get that info in main thread.. Two methods...
... Applicable way is to tear off bitmap from back buffer. Transfer to main thread by imagebitmap
... Present to canvas
... Possibility to take some method to consume image bitmap in webgl layer
... Would be really hard to synchronize
... Current model can't get pose data and send to worker within one JS callback
... Would always need to render one frame ahead of time. Can't predict poses
... To be feasible, need to add new structures for predictive frame poses.. Spiral out of control...
... Really need RAF loop to happen in thread where rendering is happening

cwilso: Anything further to add?

ada: To summarize...
... Not feasible right now. Worth talking about in next version of WebXR
... Please make record if customers find it needed
... Make it known .. Put on mailing list
... Encourage other people to start looking at it

cwilso: there is perhaps a call to explicitly ask those who are educating webxr to consider writing samples that pull work off the main thread
... For the effort we talked about earlier
... Thanks Surma!
... Suggest we pull some lightning talks. Some new ones added
... 5 lightning talks scheduled.. Ahead of schedule...

Lightning Talks: 8th Wall and WebXR

Ada: Let's do lightning talks

nik: [preparing for talk]
... Many know me from 8th Wall
... Coming to meetings for last year now
... At 8th wall we have latest products
... Want to talk through how 8th wall sees WebXR as a critical piece
... Recent product launch is a cloud editor and hosting solution
... Solving developers biggest needs
... Way we architected it works great with the WebXR API
... Published two examples on WebXR experiences on this platform
... Walk through thinking behind this. Where coming from and going
... Over last year exciting products and brands
... consistent speed bumps
... In order to shorten development cycle and increase launches
... Developed web based platform to make XR workflows easier to get content out there
... Gratifying to build a product that love to use
... Fun and exciting thing to use
... Places people are having trouble are...
... Immersive development requires connecting to a device. Local simulator not sufficient
... How connect to local server... HTTPS requests...
... Have to tie device to computer
... AR.. VR is new medium...
... Lots of little examples. Little examples don't translate to full web experience

... New and challenging medium .. Important to collaborate with people across teams and agencies.. Many eyes on problem
... Content we see are agencies on behalf of companies...
... Leads to communication problems.
... Assets in immersive are large, requiring custom renderers
... Challenges around asset distribution not found elsewhere in front end web development
... What it's not...
... Didn't build a game engine.. Aframe, threejs, Babylon are excellent for this
... cloud editor is not tied to a specific xr engine
... works with WebXR api's
... experiment with tensorflow Js
... Not built to by tied to one of them
... Not an application framework
... Can use frameworks or none at all
... Nothing like Unity with metadata management system or wiring together
... Not a "No code" solution..
... Not like a service instance where needs to spin up every time accessed
... Meant to be fast to learn and production ready
... Looks like...
... components of editor with full featured IDE with everything you would expect
... Fast build system
... Even small example projects are ready to turn into bigger projects
... Full scale web sites
... Type checking...
... social tag management...
... PWA support if want it
... Encourage rich, long lived experiences
... When start editing, have secure development sandbox over web with secure certs and https
... no network configuration...
... Scan QR code for fast access on device
... Streams back to browser. don't need to plug in
... Build with collaborative source code built in
... Web worker in browser runs full version control system for detecting when folks land changes.. need to sync. resolve conflicts... manage commit history all within browser
... good for collaboration
... Publishing side...
... Do what we could to mimic best industry practices
... Staging environment where can push changes before publicly accessible
... Atomic rollbacks
... global CDN for assets..
... Can load glTFs that get parsed
... Can point own domain, has dns routing for that
... Own testimonial...
... Heard about brandon's XR dinosaurs...
... Now have fast debug cycle and can publish instantly
... Served from CDN. Was able to do last night after dinner as fast exercise
... Experimented with Tensorflow Js.. Fast to get to example
... Not just my word for it...
... Free trial users...
... [Showing positive testimonial]
... VR specific experiences.. [showing demo]
... Fun to look at Brandon's dinosaur example
... Shows QR code to scan on phone
... Page that is rendered is specific to this browser
... Incognito window doesn't know what it is
... [Demonstrating editing code in web based IDE]
... Save... Build...
... Pushed commit to server
... Rebuilt project... Populated page...
... [Showing how to land changes and make public....]
... Now landing the changes that will update a whole bunch of stuff on the source code backend
... Commit history shows the commit
... Now can go to publish
... Have option to make any version in history public version
... Go to this link...
... [praying to demo-spirits]
... [dinosaur appears!]
... [rawr]
... Tool that really enjoy using
... Any questions?

trevor: This is interesting
... How are you positioning relative to glitch?

nik: Compared to glitch does a couple of things differently
... [Debugging minification....]
... Regarding glitch...
... Really easy to use for toy example use case
... Where you have code that is not run through a build system
... Also not served in a way that is extremely fast. Takes a while for glitches to load

trevorfsmith: For people who want fast CDN

nik: for commercial projects
... any other questions?
... Want to mention.. Super useful tool for everyone in this room. With rapid iteration helpful thing
... We as a company can help you and your companies use a tool like this for faster immersive web development

trevorfsmith: Is it self serve, contract based, pricing model?

nik: Pricing model is currently...
... Subscription to use it. Agency: $100 per month
... [missed]
... Unlimited development for non-commercial projects
... Self serve options for commercial projects
... Negotiated options...

ada: Nobody else on queue

Lightning Talks: WebXR input profiles

Ada: Input profiles is next

brandon: [prepares for presentation]
... Fast lightning talk
... Want to highlight work that mostly Nell did
... Unfortunately not here to present herself
... Highlight tool that recently released. Encourage contribution
... Shipped 1.0.0
... Published three packages
... Registry. Assets. Motion controller library
... Helps manage the rigging of those assets
... Viewer available online
... [Showing viewer]
... Can pick which profile that want to look at
... can choose left or right hand
... Can move sliders to articulate triggers...
... Can move thumbstick [Visual model reflects slider movement]
... If thumbstick is depressable.. Then that is accounted for as well
... works for all of the assets in here
... functions as both a preview and a way to debug assets that are being developed
... Can view in VR
... Ensure lighting is okay and so on
... 47 different meshes in library right now
... Across profiles
... Have usually a left and right
... Exception daydream and Samsung that don't have handedness
... Also have generic profiles contributed by Chris at Amazon
... Did a wonderful job making a variety of abstract designs that don't line up with real-world controllers but work as realistic XR controllers
... Left and right handed and non-handed versions of each of them
... all properly rigged. Some with WebXR logo on them
... Rest of assets came from variety of different places
... Some from Aframe...
... Used blender to rig up
... Ideal is that we can move forward with more popular controllers. Contribute own meshes as companies bring new devices to market
... Landed library integration
... Three.js and Babylon.js added support in 24 hours after publishing it
... If you go to three.js examples...
... shooter.. cube... will load in assets from libraries
... Not sure which Babylon examples.
... Starting to be picked up by random people on the web
... Heard intent from A-Frame to bring it in
... To show usable controllers
... And have heard some interest from some native clients
... They are looking at repository and saying "that looks really useful" want in on that
... while tools are aimed at javascript. Assets are just gltf 2.0
... Way that motion controller library interacts with them is not complex. Just lerping between nodes in mesh
... Feasible to use this in native applications as well. Even if not based on WebXR at all
... Still early.. Usage is picking up quickly
... [Hockey stick shaped graph!]
... Pretty much all library usage is Oculus Touch v2 device
... since chrome is not currently advertising correct profile for this device (showing as oculus touch).. Almost all of this is coming from Quest
... [Show pie chart]
... This page [showing recent stats] is available for everyone
... Usage from CDN
... Can see spikes
... Numbers low overall. But can see when an experience catches interest
... Generic trigger showing activity from oculus touch hands....
... People are playing around with new mode announced
... 400 per day
... Basing off of how often is accessed
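The lerping Brandon mentions (component values driving interpolation between nodes in the glTF mesh) can be sketched roughly as follows. This is a minimal illustration of the idea, not the motion-controller library's actual code; the node shapes and function names here are hypothetical:

```javascript
// A component value in [0, 1] (e.g. how far a trigger is pulled) linearly
// interpolates a mesh node between its "min" and "max" reference poses.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

function lerpNodePosition(minNode, maxNode, value) {
  // minNode/maxNode are plain {x, y, z} positions in this sketch;
  // a real engine would use its own vector/quaternion types.
  return {
    x: lerp(minNode.x, maxNode.x, value),
    y: lerp(minNode.y, maxNode.y, value),
    z: lerp(minNode.z, maxNode.z, value),
  };
}
```

Because the technique is this simple and the assets are plain glTF 2.0, it transfers to native engines as Brandon notes.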

room: Maybe developers just reloading

Brandon: This blip of windows MR controllers is probably just me
... Not perfect.. Numbers are low so abnormalities
... Fun to see gaining in usage
... Next steps are to get people to add new devices to the registry. Don't need a mesh to do this
... If you have a device, you want it to be consistently identified
... Go in here and add a PR to create registry entry
... Just requires identifying string that the device will use for ID
... Get some information about the layout. Eg, have buttons, joysticks...
... If a manufacturer, please consider adding a mesh
... If you have a standalone headset.. Would be amazing if can get mesh used in home screen for consistency
... Would be wonderful for users
... Please contribute even if not rigged static mesh. Somebody can add articulation after the fact to improve experience
... For WebXR implementations. Make sure when bringing devices online that you do best to match profiles that are registered
... There are mechanisms for reporting deprecated strings
... All is not lost if the wrong string was used
... best if make effort to consistently use names that go into registry
... Then everyone gets the same devices and assets
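A registry entry as Brandon describes it (an identifying string plus a rough layout of buttons and axes) might look something like the sketch below. This is illustrative only: the device name is hypothetical and the exact schema is defined in the webxr-input-profiles repository, not here:

```javascript
// Hypothetical registry entry: an identifying profileId string the device
// reports, plus a description of its layout (buttons, thumbsticks, etc.).
const profile = {
  profileId: "example-vendor-controller", // string the device uses for ID
  layouts: {
    left: {
      components: { "xr-standard-trigger": { type: "trigger" } },
    },
    right: {
      components: { "xr-standard-trigger": { type: "trigger" } },
    },
  },
};
```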

trevor: does this library have the concept of themed mesh sets?
... for example, if I make an experience where the environment is low poly...
... how hard would it be to swap out different asset sets

brandon: good question
... Need to think about alternate versions of the assets in general
... Try to create something that is appropriate for the device that they came from
... This one is super low poly, without even modelled buttons
... Reduced poly count on some
... Index controller for example is a very detailed model, but devices that can handle it are powerful
... Another iteration...
... Have an engine that can handle draco compressed files...
... Others may not be able to handle that
... Would be a great use for alternative meshes
... Just don't have it yet
... Would be useful
... Have in the past had HTC saying... OpenVR has a system where the user can have a local stylized mesh
... don't think we will get to that level
... Some experiences have translucent versions of the models.. where glowy around the edges but don't occlude
... Hands are another thing that we need to figure out
... Basic concept is "let's start showing something reasonable"
... Work from there
... If would like to see in action, have some demos on device

Lightning Talks: Occlusion

cwilso: Next talk?

ada: one more...

Diane: Can talk about security

Lightning Talks: security

Diane: Some interesting things
... Some interesting things with eye tracking
... IPD is important to make experience comfortable but is also a fingerprinting vector
... Turns out variation in IPD is enough to be combined with IP address
... Could easily differentiate between individual users
... With extreme granularity
... While we look at things like this. We need to remember that these measurements we are using are a...
... How to avoid fingerprinting...
... IPD can easily be extracted from an eye tracking device
... IPD is given to experience
... We suspect that gaze can also be used for unique user identification
... Backing up...
... Talk more about what eye tracking is
... Technique where device tracks eye movements
... Differentiate eye tracking from gaze tracking
... Eye tracking - What the eye is doing is important
... Gaze tracking - What the eye is looking at
... Pupil dilation is part of eye tracking
... Gaze tracking knows what you are looking at
... HCI Human Computer Interaction community
... Eye tracking has been found useful for a huge range of things that show how effective sites are at conveying information
... big problems...
... Eyes have subconscious movements...
... Gaze tracking can expose sensitive characteristics.. Such as sexual orientation
... Research also indicates that can diagnose disorders.. Autism...
... can use to access protected health info
... Advertiser can infer that you have anxiety
... People with anxiety are more susceptible to "running out now!" ads
... Someone without anxiety would be susceptible to other ads
... Depression...
... Labels can follow you.. No transparency... Can't remove labels...
... Can't call up advertisers...
... You could have been profiled without an actual diagnosis
... Ethical issues
... Educational settings and medical diagnosis of children
... Research indicates that we need to be cautious of eye tracking data
... Research shows we can use as a diagnostic tool
... do we want this ability on the web

Klaus: Follow up...
... Distinction between fingerprinting and profiling
... Example with IPD...
... If you have a slider, you can read out position with high accuracy
... If have 10 digits
... Can be consistent for a device
... Separate question...
... Identify user
... what can you infer from that? May be able to tell if a user has a height or age or gender
... Two related but separate threat vectors
... Have suggestions in spec such as rounding
... IPD doesn't need 10 digits
... reducing number of digits reduces fingerprinting
... But doesn't reduce profiling
... Important to be clear which aspect is mitigating
... Be open to possibility of gaze tracking having privacy concerns
... Open to idea "Do we want to expose this on the web at all"?
... Might not be a good idea. Just because we can
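The rounding mitigation Klaus describes (exposing IPD with fewer digits so it carries fewer identifying bits) can be sketched as below. The step size is an assumption for illustration; the spec only suggests rounding in general, and as Klaus notes this reduces fingerprinting but not profiling:

```javascript
// Quantize a measured IPD (in meters) to a coarse step before exposing it,
// so the exposed value identifies a bucket of users rather than one user.
// The 1 mm default step here is an assumed value, not from the spec.
function quantizeIPD(ipdMeters, stepMeters = 0.001) {
  return Math.round(ipdMeters / stepMeters) * stepMeters;
}
```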

brandon: Covered by Klaus

ada: Could the user agent do something with the data
... If using it for rendering certain bits of the shader

brandon: Stepping in...
... Tempting.. Some things we can use gaze information for that improves web experience without leaking
... Use eye tracking to automatically set IPD for example
... As Klaus mentioned need to quantize
... Even if you start to use for foveation
... Even if not explicitly exposed, there are weird timing attacks to infer where the user is looking
... If want to know if user is looking at ad. Can make every pixel in the ad its own triangle or cube.
... A lot longer to render frame when looking at ad.
... People try this. highly motivated
... Could still leak through some of that information
... Foveated rendering is an inevitability.
... But we need to be considerate about even the most non-obvious way to access the information

Diane: Ads are interested in engagement
... Will do anything to increase engagement
... In a privacy-preserving way, provide engagement data from gaze tracking but no other information
... Improves their metrics and privacy for the user
... Could be a win for both privacy and advertisers. Stuck with ad paradigm on the web
... One thing we can consider.. Is there a way we can make this a win-win possibility for both
... Abstracted engagement api for gaze should be looked at strongly
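An abstracted engagement API like the one Diane proposes might aggregate dwell time per target rather than exposing raw gaze samples. The sketch below is purely hypothetical (no such API exists); the point is that only the aggregate ever crosses the boundary:

```javascript
// Hypothetical aggregation: samples are already-smoothed dwell intervals,
// e.g. [{ targetId, durationMs }], with no raw eye-movement data attached.
// Only total dwell per target is reported to the page.
function dwellTimes(samples) {
  const totals = new Map();
  for (const s of samples) {
    totals.set(s.targetId, (totals.get(s.targetId) ?? 0) + s.durationMs);
  }
  return totals;
}
```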

Rafael: Couple of questions
... Can pose data be used for gaze tracking already?
... why is gaze tracking okay, but eye tracking not?
... Fidelity of information?

Diane: Head pose can be used as a proxy
... for gaze
... Eye tracking is gaze tracking and more
... Both are okay in abstracted forms on the web
... Not okay in their raw forms
... Difference between raw and abstracted?
... One is raw data... One is "you have looked at this for X number of milliseconds"
... Example... Raw gaze would be able to tell precise movements of eyes
... Abstract would say what the user dwelled on with their gaze without exposing subconscious movements
... Subconscious movements make it difficult to use the eye as a cursor
... Need to have some kind of smoothing to get rid of subconscious movements.
... can infer the user's intention
... Final thing...
... Usability is an important research focus in HCI community
... Heat map of a website.
... Notes how user interacts with features
... slow process requiring explicit consent to be hooked up to eye trackers
... In a headset this can be done without consent
... Site would be able to rapidly iterate new features, but it would be privacy invasive
... Revealing sensitive information discussed before
... Integration of gaze tracking is something we need to be mindful of
... Should think about doing it but need to be careful about how we do it
... Statistic to leave with
... Spending 20 minutes in a VR simulation leaves 2,000,000 recordings of body language
... Some intentional
... Some unintentional.
... Need to think of consequences
... Planning to have this at Kai ethics mixed reality workshop. will send mailing list with citations
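The smoothing step Diane describes (filtering out subconscious saccadic jitter so only the user's intent remains) could be as simple as an exponential moving average. A minimal sketch, with the alpha value an assumption:

```javascript
// Exponential moving average over 2D gaze samples: damps small subconscious
// (saccadic) movements so the smoothed path reflects where the user is
// deliberately dwelling. Alpha (0..1) trades responsiveness for smoothness.
function smoothGaze(points, alpha = 0.2) {
  let x = points[0].x;
  let y = points[0].y;
  return points.map((p) => {
    x = x + alpha * (p.x - x);
    y = y + alpha * (p.y - y);
    return { x, y };
  });
}
```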

<trevorfsmith> Snack time!


<trevorfsmith> scribenick: trevorfsmith

ada: The last thing is the tooling ecosystem presented by Brandon.

Tooling Ecosystem

bajones: This is more for reports of interesting things people are aware of from the ecosystem. I can start by saying that since we shipped WebXR in Chrome and Oculus browser, we've seen three.js add support and drop WebVR, their support continues to improve with input handling. Babylon.js has also added WebXR support. Those are the popular toolsets used by many devs. I also know that A-frame has integrated WebXR out of box.
... I've had other toolmakers reach out and I know that it's coming along on other fronts. Could others share in terms of what you've seen in terms of tooling or blog posts?

ada: The first set of tooling that comes to mind is the WebXR emulator extension for Chrome, which I'll add to immersiveweb.dev along with the input profiles.

<Zakim> kip, you wanted to give a shout out to https://blog.mozvr.com/webxr-emulator-extension/

<kip> Takahiro Aoyagi

kip: The WebXR emulator from Takahiro at Mozilla lets you simulate it using a nice GUI and seeing output that would be in a headset.

<kip> And Fernando Serrano

bajones: The follow-up question is: have you encountered needs for tooling that aren't being met? If we hear early from the community then we can improve the dev experience.

artem: I have in mind a graphics debugger and compiler, more related to WebGL. Currently there are no good tools for that. I tried several in Chrome but they don't seem to work with mobile devices (Quest) and without those it's hard to figure out why the render is black.

bajones: Like Pix?

artem: There is a perf profiler to see mid-frame flushes, etc. The other thing is to track WebGL state: meshes, draw calls, states. We have work on renderdoc for that, but I'm not sure whether it's easy to make usable for the general public because it requires a flag in the browser. So, there are some issues. In general, renderdoc supports native apps but not browsers.

bajones: There are extensions but they don't work on mobile. I'm not aware of anything that does work on mobile. Chrome's tools aren't perfect when it comes to debugging.

<ada> trevorfsmith: a lot of the tooling I hear devs need is the boilerplate for session management and moving from flat mode to portal mode or immersive mode seamlessly, including the surrounding DOM content. It's complex enough that one of the existing app frameworks will need modifying so that it is handled from the very beginning; especially when combined with a11y it's hard. I have tried to do something with input but it is hard.

<ada> ... also art workflows: even technical artists need a lot of hand-holding to understand the tradeoffs for devices; sample projects would be really helpful.

<Zakim> kip, you wanted to ask if there is interest in any asset baking, occlusion baking offline

<Zakim> ada, you wanted to mention vertex ao baking

kip: Contrasting with native apps, one half of any engine is the component that runs offline: lighting, occlusion, navmeshes, etc that are too expensive at runtime because it's too big. Is anyone working on tools for this, like are in native engines? Light-map baking?

ada: I did see people excited about baking ambient occlusion baking into verts as tacked onto libs. There was a three.js demo that did an indirect lighting pass to bake in once. It would be nice to drop this sort of baking into three.js, etc.

kip: People should consider whether data from tools could be usable as input to others.

bajones: When Hello WebXR came out, it gave a brief intro to which tools were used and how. I went into some of the downloaded resources to figure out how they were structured. I'd like to see more people share as they build out their non-trivial experiences. I have a half-written one for XR Dinosaurs. Hello WebXR has been well received and people are impressed; they're looking to replicate the quality and perf of that content.

kip: I believe an article was just posted about Hello WebXR.

ada: I think I retweeted it an hour or so ago.

<bajones> https://blog.mozvr.com/visualdev-hello-webxr/

ada: Is this a topic that would be good to add to the IW dev advocacy mailing list when that's online? We could get devs to divulge their secrets and tricks that make them money.

DaveHill: Maybe this is obvious, but it feels like we're at where we were with webdev in early 2000s. There aren't established tools and pipelines, everyone is cobbling together tools and it looks OK but doesn't scale up.
... I don't know what we could do. In Unreal and Unity they really started to fill that gap on the native side. Now you wouldn't build a native game engine unless you're just super passionate about it. Those tools are so good, now. I don't know if there's a way to look at how it progressed on the native side and build a similar roadmap for webdev.

<ada> trevorfsmith: I do a lot of talking with people choosing between Unreal, Unity, and web dev, and when doing the analysis the tooling kills it half the time; what you can do in Unity in a week often feels so much more than what you can do on the web in a week.

<ada> ... there are workflows and editors and libraries and frameworks; the web has good libraries and frameworks but is missing the tooling.

<ada> ... they are big efforts; it either needs to be a lot of groups pitching in (i.e. OSS) or a big single effort.

ada: I was wondering, is anyone in contact with Unity and Unreal about how they're handling going toward web targets? Unity has Project Tiny that makes sub-1MB loads for the web.

DaveHill: One of the things about where Unity and Unreal are aside from workflow they pull it all together into one cohesive package. In early 2000s we would pull in an audio lib, a physics engine, etc. Now native folks don't need to do it. I guess that's kind of the space that A-frame is filling.

yonet: When I start talking about WebXR most devs are totally lost. Cameras, lighting, they aren't familiar. Any Unity developer sits down and starts being productive. I think a-frame is helping because it looks like HTML.

<Zakim> kip, you wanted to shout out about Mozilla ECSY: https://github.com/MozillaReality/ecsy

kip: I'd like to shout out Mozilla ECSY, which has the goal of performance. I have Hubs and Hello WebXR telling me that they'd like a WebGL frame debugger and inspector. They need real-time light baking. Some tools for LOD asset management (load and generate). Also, a reiteration of the idea that we have the WebXR test API, but what would it look like as a WebExtension API where people could plug tools into the browser?
... Like Takahiro's extension.

Manishearth: It seems like this might not be something that would be easy to implement given the way they've done the test API. At the same time, I do like this idea.

kip: Is there interest in multiple browser vendors to share this?

Manishearth: We have a test API and this would involve shoring it up and exposing it to WebExtensions.

cwilso: Any other topics for this face to face?
... When do you think another f2f would be useful? TPAC is the last week in October in Vancouver, BC.
... Should we try to have a meeting between now and then? We don't have to plan them now, but we should plan with more notice than this time.

jrossi: It might be helpful to have a markdown file of topics that would work for in-person conversations.

cwilso: We're supposed to send six weeks notice for any f2f meeting so that others can attend. People need to get visas, tickets, etc.

avadacatavra: What's the half-way point between now and October?

cwilso: June.
... People tend to not like December. The middle of Summer is also hard.

mounir: How will we do the calls now that more work is happening in the WG?

<mounir> ack

cwilso: We've separated between CG and WG calls, and sometimes the WG takes over a CG time slot. What do people think we should do going forward?

Manishearth: When we do the agenda the CG and WG stuff is mixed so we discuss WG items in CG anyway.

ada: I generate the agendas using GitHub tags but they don't currently split by CG/WG but they could.

Manishearth: I'm suggesting that we might just have them in the same call.

ada: Part of the issue is that the people in the CG haven't agreed with the IP restrictions. And though we're close there's definitely people who aren't in the WG.

cwilso: Are there people on the calls who aren't in the WG?
... We could invite folks as invited experts.

Manishearth: We can go through the agenda and announce which (WG/CG) is happening each week.

ada: Unfortunately people tag agenda items the Monday before the Tuesday call slot.
... We could set a deadline.

cwilso: We should have a long chat about that.

[lots of conversation about whether to keep WG/CG calls separate]

ada: People should tag agenda items by EOD Thursday (London) and we'll determine which G is having a call.

And that's a wrap!

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2020/02/07 00:18:59 $
