W3C

Immersive-Web WG/CG TPAC 2020 Day 4

05 November 2020

Attendees

Present
ada, Brandon, cabanier, cwilso, klausw, Laszlo_Gombos, madlaina-kalunder, Manishearth, rik, yonet
Regrets
-
Chair
ada
Scribe
ada, bajones, Brandon Jones, cabanier, Chris Wilson, cwilso, madlaina-kalunder

Meeting minutes

Navigation (@toji, @manish, @cabanier) (35 Minutes)

bajones: I think Oculus is the only one who's implemented any form of navigation at this point (i.e., the ability to jump from page to page without leaving immersive context).
… the UA is navigating to a new URL, but you're still in an immersive context. A la a "portal" in my immersive web app that pops me into a new virtual world. "There are complications."

rik: Oculus Browser has implemented it, but for most origins it's behind a flag. For security reasons, this isn't something we can just turn on. I think there are a couple of origins we trust, e.g. for Facebook 3D photos.
… unless there has been movement [on the security investigation] we don't intend to turn this on (by default)

Brandon: can you give a description of how this is presented to the user?

Rik: for 3D photos, you get a loading spinner, then the new immersive page comes in.

brandon: the spinner is the UA itself?

rik: yes.
… the spinner actually comes from the OS, not even the UA.

brandon: what happens if you take a long time (30 sec, e.g.) to load?

rik: you'll get 30 seconds of spinner.

brandon: Does the new page know it's being navigated to?

Rik: yes - I think Diego proposed something that provides a handoff.

brandon: it does seem critical that we let users know 1) that they're about to navigate, and 2) where they're navigating to. These shouldn't be able to be reasonably spoofed or hidden.
… I'm not sure how to make that happen. Using the Oculus flow as an example, the spinner should show where you're going to (origin)
… it would be really nice if there were something like the "hover over a link, get the URL" in 2D navigation.

[I'd note that can be defeated]

brandon: but maybe the right place is onbeforenavigation
… but for some people, popping up a "you're about to navigate" would kill the experience.
… I feel like we need some guidance from browser security people.

rik: not sure it's possible, but could we pre-register origins or something like that?

brandon: in terms of manifest or the like?

rik: yes

brandon: that feels clunky in terms of user experience
… imagining a geocities-style user experience, where lots of subdomains...
… this seems almost untenable. It seems more practical to have a popup on page load, saying "this page may navigate you to XXX"
… or that experience, but one that hides after the first time.
… it feels like it's going to be clunky no matter what you do.
… I'm worried people will not get the UI they want here.
… I think Oculus' experience is only tenable because it's controlled on both ends.
… (also, that scenario is fairly lightweight in terms of resources)
… you could be staring at that spinner for a long time.
… navigating out to the page maybe isn't so bad...

cwilso: since we have been looking at the UI parts, maybe the best thing at this point is to hit up security experts and the TAG and get pointers/advice from them.

cwilso: some of the assumptions we're making may not be correct

rik: didn't John Pallett look into this?

brandon: I think so, but that's a while in the past, and the context may have changed.
… we should probably verify.

brandon: I just wanted to get current status and next steps. "Yes this is a good idea and let's continue" or "No for these reasons we're not pursuing". It keeps coming up but we don't seem to have concrete steps.

ada: I seem to recall Diane doing work around this. Should we ask?

manish: I can ask, though her being at Apple now might make that problematic. Her idea was something like "you have some UI navigation chrome that you can keep out of your way, and it expands when something's happening, like an address bar".

brandon: seems like you'd need to do some composition with the frame
… more like layers

manish: Diane will put it on archive today or tomorrow

all: thanks Diane!

ada: does anyone have bandwidth to have a go at implementing this?

brandon: would need to see, but probably don't have bandwidth

manish: would be best if a headset did this; handhelds already have chrome

ada: ok, we're over by 5 minutes, let's take a break.

<atsushi> (will drop here.)

<atsushi> (failed to push in notice)

Required dependencies, Example: Should Hit Test be made part of the AR module? (@toji, @klausw)

yonet: so our next subject is the required dependencies
… i have Brandon and Piotr to introduce

bajones: the topic at hand is that we have been developing some of these features as modules
… it seems to have been working very well for us, as we were able to develop the core api really well
… however upon looking back it would be worth taking some time to look at past developments of the feature itself
… should these features be separate features?
… should we lift these features to core?
… we on the chrome team see immersive-ar and hit test used together in immersive sessions in our samples
… correction: immersive-ar sessions... we almost never see immersive-ar used on its own
… the reason why they are separate is more a matter of timing to get immersive-ar features out
… now that both immersive-ar and the hit-test module are fairly well established and used
… we should look into combining these modules
… would anyone like to voice their support or opposition to the topic
… it could be a simpler message to developers
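[For reference, a minimal sketch of how the two opt-ins combine today under the published AR module and hit-test module; error handling omitted:]

```js
// Hit testing is requested as a feature descriptor alongside the
// 'immersive-ar' session mode; today the two are separate opt-ins.
async function startARWithHitTest() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    const results = frame.getHitTestResults(hitTestSource);
    // results[0], if present, is the closest real-world intersection this frame.
    session.requestAnimationFrame(onFrame);
  });
}
```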

<klausw> let someone else go ahead

manishearth: my initial reaction is that this does not sound like a good idea to me
… in particular we had an open question that we left to the UA: can you report that you support immersive-ar
… the environment blend mode only gets reported in immersive-ar
… I don't want to be in a situation where the headset does not support immersive-vr

klausw: the concern is not so much that we see immersive-ar sessions using hit testing all the time if the content is ar-centric
… your concern is that if headsets want experiences that are more immersive-vr but have transparent optics, hit test would not matter for them at all

manishearth: environment blend mode was not important so far in order to make the immersive-vr look nice
… but you want all the vr examples to run on a hololens for example

cwilso: the opaque environment blending must not be applied in immersive-ar
… for alpha blend environment blending this technique will not be applied for immersive-ar
… for others it should be applied regardless
… we can advertise that a device that uses ar-blending could still use the feature

manishearth: ... this will require any device to state that they support immersive-ar, and we would stop them from having the choice
… all UAs can already make this choice to use additive mode in VR
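[For reference: environmentBlendMode, defined in the AR module, is exposed on the session object, so content can branch on how the device composites with the real world rather than on which mode it requested; a minimal sketch:]

```js
// environmentBlendMode is exposed on XRSession (defined in the AR module),
// so content can key rendering decisions off it directly.
function usesOpaqueBackground(session) {
  switch (session.environmentBlendMode) {
    case 'opaque':      // fully occluding display (typical VR headset)
      return true;
    case 'additive':    // see-through optics; black pixels read as transparent
    case 'alpha-blend': // camera passthrough AR
      return false;
  }
}
```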

bialpio: we should be careful about saying what should be implemented by the UA
… all modules have a mechanism to say which features are supported
… you need to be able to recognize this as a mode, to surface environment blend mode on a session
… you need to be able to recognize a feature descriptor
… and try to leverage the specific feature
… it seems that most of our modules could be merged in a big spec
… it is my perspective to keep them separate
… to work on features separately
… and point developers to specific specs
… we probably should take that (developer communication) to account
… i would not be opposed to merging those two

klausw: to say I think a main issue is implementation burden. Hit test can have a minimal implementation, i.e. intersect with floor plane. On the other hand, DOM overlay requires a full browser implementation in immersive mode, that's likely in the person-years of effort range, so I'd be against making that a mandatory feature

brandon: i agree with that
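[For reference, each module stays an independent opt-in behind its own feature descriptor; a minimal sketch using the published 'hit-test' and 'dom-overlay' descriptors:]

```js
// A UA that implements hit test but not DOM overlay can reject or ignore the
// optional 'dom-overlay' descriptor without breaking the session request.
async function startARSession(overlayRoot) {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
    optionalFeatures: ['dom-overlay'],
    domOverlay: { root: overlayRoot },
  });
  if (session.domOverlayState) {
    console.log('DOM overlay granted, type:', session.domOverlayState.type);
  }
  return session;
}
```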


bialpio: if we keep them separated it will also be easier for the browsers to implement the features
… we could just dump a list on what gets implemented in each of those
… are we considering making any features mandatory in ar / vr?
… just because the feature is more natural in AR it does not mean it cannot be featured in VR
… the anchors will not change over time in VR, [whereas in AR] they will be dynamic. but there is nothing in the spec to rule out that something can be used in both
… we have a mechanism to say what is implemented

bajones: I want to clarify that if we merge the two modules together, I don't think that hit test should be mandatory
… I would not change the text at all
… besides integrating them into the same document
… I would continue to support that hit test is available
… it would imply to a lot of people that these features go hand in hand. I would be worried about that a bit
… that developers would take the presence of one to mean the feature is mandatory
… maybe merging these would add some confusion to this
… these features seem to be highly correlated, but there have also been enough arguments that this may not be the case
… in that case, while I personally feel that we should leave them separate now
… we could have more discussion: when in the group is it appropriate to merge things, or hold off until WebXR spec 2?
… maybe there is some benefit to agreeing that this is the procedure

manishearth: like you said, we don't need to imply the feature to merge the specs
… I am open to merging
… the ar module does not have more to be added
… everything is related and it would make sense to merge for maintenance
… the ar module is not very large and therefore it would make sense
… the biggest concern for me would be that ar requires hit test

bajones: my biggest concern would be developer-facing
… a little bit of reading could sort it out for them
… I am always hesitant about developers reading fine print
… the idea that you have a VR headset that implements this
… the impact would be that we will force the environment blend mode on you if you want hit test

bialpio: question: does anyone remember what was the original plan with modules?
… was there a plan to start merging into the main spec when they mature?

bajones: my intention was that we would deal with this later
… it was a natural thing to bundle a bunch of extensions and make them a part of the core
… we did not talk about specific plans
… we want to free up to be able to work independently without cross dependency
… which mostly worked fine, but there are some weird exceptions where this modular approach did not work so well
… there are some awkward dependencies
… I don't recall a specific plan on module merging
… so this is why I wanted to bring this up today
… do we wait for the big bundling day?
… this might happen regardless in the future
… i was not sure if intermediate merging is a good plan
… i can see it going either way
… we need to be careful about implications

manishearth: i prefer not merging things except core concepts
… i like how css does it, where they accurately split things
… this is far future stuff

bajones: we can look at established groups; they must have a good reason for why they split things

yonet: let's shoot for small messes

bajones: do we want to try and establish through a straw poll
… if people would like to merge or not?
… or if we should bother with this at the moment

ada: we will do a straw poll

bajones: +1 for being in favor of merging, -1 for keeping them separate

<ada> -1

<cabanier> -1

<lgombos> -1

<bajones> -1

-1

<cwilso> 0

<Manishearth> +0.5

<bialpio> +0.25 (merge common concepts like XRRay, XRWebGLBinding, wait with the rest until Big Bundling Day)

bajones: I feel like this is coming down to ... keep them separate ... pretty clearly
… we can revisit it again if we have compelling reasons
… we just keep them separate for now

WebXR WebGPU Binding (@toji)

yonet: we will get started with the webGPU topic

<bajones> https://docs.google.com/presentation/d/17_YzJAavUluGFNBOY8-N2itdaV02cCQhu7iBoOx00E4/edit?usp=sharing

bajones: don't stress about reading through all of the slides too much, I will highlight the important parts
… just a quick update on where we are with webgpu bindings
… and a few questions that come along
… the current state: the proposal has a repo
… it builds heavily off the layers module which is webgl centric

<yonet> https://github.com/immersive-web/WebXR-WebGPU-Binding

bajones: there are a couple of interesting differences
… it has no impact on ... and the dom layers
… webgpu will only use the new layers interface
… we would like to move towards a more layer-centered approach
… presenting the IDL Interface (-> slides)...
… what are the differences from the webGL layers
… one big thing is that format / usage must be specified
… in this case the webgpu spec will specify the bgra format
… which can be rendered on all platforms
… on all desktops and most mobile devices this is the most efficient format to render to compared to rgba
… this is why we will use bgra by default
… you could also specify a different format
… if you want a depth stencil buffer you can specify a format, but by default it will be null
… there is also the texture usage. if you want to render the texture on top you have to specify the usage
… we need to be able to allow developers to change that
… we determined that these are all important for webgpu
… this is why this makes those two apis align pretty nicely
… all projection layers will use texture arrays
… this does not actually allow for multiview rendering
… it's not clear how much benefit you would get from it
… side-by-side rendering is still allowed
… couple of areas for discussion:
… any questions on what we just talked about?
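[For reference, a hypothetical sketch of what creating a WebGPU projection layer could look like; the names XRGPUBinding, createProjectionLayer and getViewSubImage are assumptions drawn from the proposal repo and the existing WebGL layers module, not settled IDL:]

```js
// Hypothetical shape only; see the note above about assumed names.
async function setUpWebGPULayer(session, refSpace) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const binding = new XRGPUBinding(session, device);     // assumed constructor

  const layer = binding.createProjectionLayer({
    colorFormat: 'bgra8unorm',          // bgra default discussed in the slides
    depthStencilFormat: 'depth24plus',  // null by default per the discussion
    textureUsage: GPUTextureUsage.RENDER_ATTACHMENT,
  });
  session.updateRenderState({ layers: [layer] });

  session.requestAnimationFrame(function onFrame(time, frame) {
    for (const view of frame.getViewerPose(refSpace).views) {
      // Each view maps to a slice of a single texture array per the proposal.
      const subImage = binding.getViewSubImage(layer, view);
      // subImage.colorTexture would be a GPUTexture to attach to a render pass.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```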

cabanier: when you return the list of supported formats, it seems like the format is not enough but you also need the type?

bajones: webGPU follows those patterns a lot more, where formats are a well defined list
… strings that represent a very specific memory layout
… those implicitly refer to a type
… which is not the case in webGL, or something we would like to avoid and simplify

rafaelcintron: the equivalent feature of multiview was not removed in [vulkan]
… we won't be able to add this in the future

bajones: I would not want to prevent adding this in the future
… we will use texture arrays to accomplish this in the future
… my hope is that saying this always uses texture arrays, for simplicity, will be the right call
… i have a couple of questions
… these apply to both
… it may be useful to update the xr projection layer...
… to advertise what the texture width, height, layers are
… you could only get the dimensions once you are in the middle of a frame loop
… this can be problematic
… if you need a compatible depth buffer for your layer
… it may be useful to lift this up to this specific interface
… so you could lift this off the critical frame loop path
… any concerns from the group?
… there may be a scenario where we would want to allow this
… unless someone wants to advocate for this path, i feel like the current path is probably fine
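[For reference, the shape being floated would let content allocate a matching depth buffer outside the frame loop; the textureWidth / textureHeight / textureArrayLength attribute names are assumptions from this discussion, not agreed spec text:]

```js
// Sketch only: assumes XRProjectionLayer advertises its backing texture
// dimensions up front, so a matching depth texture can be made ahead of time.
function createMatchingDepthTexture(gpuDevice, projectionLayer) {
  return gpuDevice.createTexture({
    size: [projectionLayer.textureWidth,
           projectionLayer.textureHeight,
           projectionLayer.textureArrayLength],
    format: 'depth24plus',
    usage: GPUTextureUsage.RENDER_ATTACHMENT,
  });
}
```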

cabanier: this is a good thing to have... and update the layer spec
… there is a number of textures internally i assume?

bajones: if you have a texture array, you likely want to have a depth buffer of the same size
… we are not sure if we want to report it as one
… or as the number of textures

rafaelcintron: I don't feel too strongly about this
… unless we know it could become a problem for people not to allocate their depth buffer outside
… we should keep things flexible

cabanier: we do need extra layers
… you would be able to infer it in the frame loop

bajones: it would be slightly more awkward to do it in the frame loop
… especially on the webGL side
… this will make the math a bit harder and you have to keep track of more
… I would be concerned that if something changes with the stream, the developer must watch the sizes frame over frame
… i am not convinced that too many people will do that
… a better way would be to fire an event that a layer has changed
… rather to keep track of it each frame
… it might just be more consistent for everybody: once you specified the layer size, it will stay the same
… it would really just be the UAs choice (eg. for performance reasons)...
… we do still depend on setting the viewport that the api reports to you
… that can change over time
… we went through this discussion with Klaus that this should be application-driven for non-corrective reasons
… this is in the hands of the applications
… this is probably a better route

rafaelcintron: we should have spec text that says this is not really relevant to webGPU

bajones: There is an issue on the webgl layers, if we should keep this value around
… do we still need ignoreDepthValues?
… some applications still need this for compositing reasons
… but in some cases the developer may not be populating the depth buffer with values of the scene
… we wanted a way to signal to the system to turn off using those depth values
… this is how it is being used now
… this concept got carried over when we did the layers api
… if the developers don't want to specify it, they should just allocate it themselves
… they should populate it with valid values
… this is how we got rid of the boolean at creation time from the old method
… sometimes you would allocate a depth buffer, but the system would just not use it for compositing
… sometimes applications may want to know that
… i don't know how often this situation will come up
… or if it will burden developers
… we could just get rid of this value at all
… this feels like an easy thing that developers could overlook

rafaelcintron: is your position to kill it for webGL and webGPU?

bajones: we could make it report a reasonable value for both
… i would prefer that we strongly lean into the practice that this is what you should concretely do if you have good value or if you don't
… they should not have to switch on the fly
… we have the xr webgl layer which hands back frame buffers
… we have the layers module which has the webgl api
… what we are talking about is just the layers module variant
… the one that hands out frame buffers is in core
… which will continue to have this boolean in it
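[For reference, this is the core-spec boolean under discussion; the question above is whether the layers-module equivalent still needs it:]

```js
// Core WebXR: a page can tell the compositor not to use its depth buffer.
// Assumes an active XRSession and an XR-compatible WebGL context.
function createLayerWithoutDepthComposition(session, gl) {
  const glLayer = new XRWebGLLayer(session, gl, { ignoreDepthValues: true });
  session.updateRenderState({ baseLayer: glLayer });
  // The attribute reports whether depth values are actually being ignored,
  // whether because the page asked or because the UA never used them.
  return glLayer.ignoreDepthValues;
}
```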

rafaelcintron: given that i would agree with removing this
… give out textures and make your own depth buffers

cabanier: the other proposal is to move it to the xrGPU binding

bajones: i apologize that i did not see that proposal
… this has a different implication to me
… as long as we can get the verbiage right...
… hey you might want us to allocate the depth buffer, that would seem more reasonable
… unless anyone has further questions or comments... i have no more slides

cabanier: does the browser allow you to mix webgl and webgpu layers?

bajones: i have wondered about that. i would appreciate if anyone could speak up if it's not like this for them
… it seems like they always boil down to a common set of surfaces
… but they all get funneled down on a common set
… for compositing
… whether or not we use compositing, the UA will be able to get surfaces for that
… my intent is not to say that you could only do either. they should be intermixable
… there is not a lot of motivation for developers to do so
… I do think there is motivation to mix media, DOM and graphics
… we need to make sure that this works

rafaelcintron: i don't see any problem at least on windows to mix those
… they will all compose and work correctly on the same page

cabanier: do you initialize so you can do both?

rafaelcintron: today, webgpu runs on D3D12

bajones: if you have an openGL texture, you can get to the windows presentation layer
… we do this in chrome today
… it's all the same driver that handles this

XR Accessibility (@Manish, @Yonet)

yonet: manish went through requirements
… and we will get an update
… and we should see if anyone is interested on working

Manishearth: I don't have much new to report
… they haven't gotten back to us
… I know bajones is planning on writing a document

<bajones> Apologies, having audio issue and can't hear anyone. Will work it out ASAP

Manishearth: to share with TAG and this group
… everything boils down to the a11y feature of webgl which isn't great

yonet: you said it's not possible to scale reality. Why can't we do it in vr?

Manishearth: I recall saying that but now I don't remember
… normally scaling for a11y is making it bigger so it's readable
… right now we don't have a way to do this in vr
… but content can do this itself

klausw: if you're using phone AR, you can use the OS level functionality so the overlay is zoomed
… zooming doesn't make sense

yonet: for hololens we can scale the whole thing
… and it would be really nice to have that

bajones: I generally agree with Manishearth , zooming in vr will be very difficult
… how does HoloLens do this? Is content scaling up, or is there a system level scaler?
… it seems the content has to do this

yonet: in the experience, you would be able to scale it up to the room scale
… so you can interact with it differently

bajones: this sounds like tilt brush
… where you can scale with your hands to do some detailed thing
… it really helps people with the content in that environment
… but it's not a result of an OS a11y feature

yonet: this is one of the issues that are still open

<yonet> https://youtu.be/G5m7ukcGeQg

<Zakim> klausw, you wanted to ask if there's accessibility work at the OpenXR level?

klausw: is there anything happening at the OpenXR level?
… many of these thing make sense there as well
… does anyone know?

bajones: maybe Alex knows

yonet: do you know lachlan?

lachlan: there is no explicit support but it's highly dynamic so the runtime can do it itself

cabanier: DOM layers would give the session the same a11y features as a web page

Manishearth: what was the audio question

yonet: the open issue

bajones: it was about documenting better on how people can use audio

<Manishearth> https://github.com/immersive-web/webxr/issues/815

<yonet> https://github.com/immersive-web/webxr/issues/815#issuecomment-524492352

bajones: I believe that issue was brought up in the context to make audio a component of the spec
… but I forget why this is an outstanding issue
… we need to document it better in the spec because it is not normative text
… since dom overlay and layers were brought up: every time a system has a content-aware structure it has good implementations for accessibility
… this could carry over to media layer
… I'm not sure what opportunities are available there
… similarly we can look at frameworks like aframe
… maybe the work there has already been done
… we are just a consumer of raw pixels.
… Can we do an OS level zoom? That is likely not practical
… maybe there is an extension to change the floor level
… then you can make yourself float
… and that is something we can surface in the API
… is this a good idea to do through the UA?
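[For reference, content can already approximate a floor-level adjustment itself with an offset reference space; a minimal sketch using the core API:]

```js
// Offsetting the reference space effectively raises or lowers the viewer
// relative to the content, with no new API surface needed.
function raiseUserBy(referenceSpace, meters) {
  // Moving the space origin down by `meters` makes the viewer appear that
  // much higher relative to content placed in the returned space.
  return referenceSpace.getOffsetReferenceSpace(
      new XRRigidTransform({ x: 0, y: -meters, z: 0 }));
}
```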

cabanier: how does video have a11y?

bajones: it's just subtitles
… and then we can make it easy to make the titles right in front of you
… based on system preferences

klausw: there's more that could be done with audio
… you could show visual cues where the audio is coming from
… it would help for many things if they were accessible from the OpenXR level
… we should see what the platforms are doing
… for competitive games, you might not want this
… but those are likely rare use cases

<Zakim> ada, you wanted to ask about that doc

ada: I want to say that sites do not know that a user is using accessibility tools
… and this is very important
… that information is kept private
… wrt subtitles, someone was talking about what they would like to see for subtitles in XR
… they had a URL but I can no longer find it

yonet: if you block people that manage their height, that would be very unfortunate

Laford: the input system in openxr is done by the runtime
… it has mechanisms where users can reconfigure the binding
… the browser is a runtime consumer, this binding API could provide a level of accessibility
… wrt competitive games, is there a way to stop it?

klausw: some games look for tools installed
… I'm unsure if this is an issue since webxr is not used for competitive games

bajones: I agree that that won't happen
… for the most part, the web is not about getting the ultimate performance
… I don't want concerns for that type to stop our development
… right now, chrome and edge do work with OpenXR
… we don't expose a binding system
… we expose the left, right squeeze action etc
… it's worth noting that using steamvr you can rebind chrome
… so a pedal could become a squeeze
… it's unfortunate that we don't have contextual names
… I agree it's critical that if we find we need to turn off a11y features for the integrity of the content, it would be done as something at session creation time
… so we are not advertising arm lengthening
… because it would leak information about the person

<Zakim> madlaina-kalunder, you wanted to strongly agree that it should not be disclosed if someone is using accessibility features. in games, the developers will have to find out anyways if there are any cheats involved

madlaina-kalunder: I wanted to reinforce what bajones said
… it's important to know how it can influence the experience
… remapping is something that is commonly used
… is there a way to provide a semantic scene?

<Zakim> klausw, you wanted to reply to Q

klausw: it depends on the framework
… aframe would be able to do it
… I think there was a discussion on integrating that

<Zakim> ada, you wanted to mentio nthe xbox adaptive controller

klausw: but this would be for specific frameworks

ada: to go back, you can use the XBOX adaptive controller

bialpio: I want to make sure I understand the question
… are we trying to make sure that the APIs are attaching semantics
… to real world understanding
… maybe this is something that could be done
… if I understand correctly, the frameworks don't hand over this information
… so it could be a bit more challenging
… but it's not something we can force them to do

yonet: maybe hit testing could tell if the user hit something

cabanier: kip had a proposal to add semantics to the scene and make it available to screen readers

<yonet> https://w3c.github.io/apa/xaur/

ada: I would like to mention the XR a11y requirements documents
… it contains the feedback and thoughts from the groups
… these are things that we need to be aware of

yonet: we can invite experts to some of our sessions

WebXR Hand Input (@Manishearth, @fordacious, @thetuvix)

Manishearth: Hand input shipping in Oculus for a while
… API is more or less done
… Matter of pushing forward to a point where Oculus and Edge can ship without a flag
… Alex Turner wanted to talk about what it would take to get there
… Manish's main concern is stronger privacy/security analysis
… Microsoft may have been looking into it

laford: Put up for TAG review
… Spec functionally shouldn't change much


laford: don't know what privacy/security analysis would entail

manishearth: Current privacy/security docs don't contain much

<yonet> https://github.com/immersive-web/webxr-hand-input

manishearth: Explainer has a privacy section written by Diane
… Would be nice if a privacy expert could look through and identify fingerprinting concerns and mitigations
… Don't consider it a blocker, not much else to do.
… if MS/Oculus could work together on an unflag date

laford: MS is pretty happy if everyone else is.

cabanier: Oculus is about to start a privacy/security review
… Unsure if they need to get explicit permission for hand access
… Oculus devices implicitly have access to hands, but it's a new sensor for the web.

Manishearth: The spec does require a permission prompt in that it requires a feature descriptor
… up to UA how they handle feature descriptors
… Should strongly consider having a prompt
… Already have a prompt, should look at extending that.
… But the spec only strongly suggests it.
… Secondary view is an example of a feature that probably doesn't need a prompt.
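[For reference, a minimal sketch of the published hand input shape: hands are gated behind a feature descriptor, so the UA can attach whatever consent UI it chooses:]

```js
// 'hand-tracking' is an explicit opt-in; inputSource.hand is only populated
// when the feature was granted and a hand (not a controller) is tracked.
async function startWithHands() {
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['hand-tracking'],
  });
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(time, frame) {
    for (const inputSource of session.inputSources) {
      if (!inputSource.hand) continue;
      const tip = inputSource.hand.get('index-finger-tip');
      const pose = frame.getJointPose(tip, refSpace);
      // pose, when non-null, carries a transform and a joint radius.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```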

cabanier: Happy to work with Microsoft and share results of internal privacy review with the group

laford: We're at stage of about to implement on top of OpenXR in Chromium

manishearth: Good to hear! At this stage mostly blocked on Oculus/Microsoft figuring out timing
… No significant spec decisions, just polish

laford: Alex Turner is concerned about a couple of C-isms, such as how enums are used.

yonet: Do we need to add WPT tests for hand input?

Manishearth: Oh, yeah. We should!

<Manishearth> https://immersive-web.github.io/webxr-test-api/

Manishearth: (But not me, because I'm not tied to an implementation ATM)
… Still doesn't block unflagging API
… WebXR was out for a while without tests

cabanier: we don't intend to gate on WPT

AOB

ada: Anything else?

manishearth: Alex Turner requested this meeting. Hope that we covered it. Lachlan?

laford: Think Alex is in a meeting, checking.

manishearth: Fill poses may have been the only minor thing, mostly cosmetic

ada: Should we fill time by naming things?

bajones: *Stares in Software Engineer.*

cabanier: Working on generic hand model

laford: Is there a plan to expose hand meshes, as opposed to the current joint-based model?

manishearth: Didn't have time or motivation to look at meshes. Joints allow for gestures and approximate with a rigged hand
… if someone wants to do the work to write up a mesh API we can look at it.

laford: Yeah, meshes for gestures is not great.

manishearth: Already seeing an explosion of cool content, so likely not necessary to have API provided meshes

cabanier: I wrote a hand mesh API when working at magic leap.
… Runs at a lower framerate

manishearth: Thought about it a little. Making API that can return hundreds of points efficiently is tough.

laford: Could do it in concurrence with other types of meshes that you may want to track.
… but that's a whole different discussion. Makes sense not to include for now.

cabanier: Looked it up. There IS an oculus hand mesh API
… used in Oculus Home screen
… can be used in conjunction with joints, just renders nicer

ada: Did anyone see Babylon JS update today? Added hand tracking.

<yonet> https://twitter.com/babylonjs/status/1323308129631694848

yonet: Coming next week, can try now.

manishearth: Hololens has another API for room meshes. Can't recall if OpenXR API is the same.

laford: Doesn't exist yet. Windows specific bridge exists.

manishearth: Assume that OpenXR will eventually have a room mesh API. Curious if that will be the same as the hand mesh API.
… worth waiting for more progress in that direction

cabanier: Tried it at one point. Definitely very different APIs
… world mesh is lots slower and larger. Add/remove updates

manishearth: In that case is probably OK that they're different. May want to share types for triangles.
… Reluctant to start until I have more visibility of how world meshes work to find commonalities

laford: Industry differentiates the two and has separate APIs
… thought about the difference between objects
… collecting vertices/indices. Difference is in update time and how static/dynamic

… Makes sense to share primitives
… but not an expert in this area and not sure why they tend to be separate.
… Not worth assuming that they should be separate or together. Should investigate.
… Similarly, treatment of hand joints and human skeleton is another potential area of shared concepts

<cabanier> cabanier: https://github.com/immersive-web/real-world-geometry/blob/master/webxrmeshing-1.html

ada: Anything else on this topic?
… Nothing, so moving on to any remaining business.
… Tomorrow's call is at 8PM GMT

laford: One more topic. Overlapping-squeeze profile?

<Manishearth> https://github.com/immersive-web/webxr-input-profiles/pull/185

<Laford> https://github.com/immersive-web/webxr-input-profiles/pull/185

laford: not super familiar with history
… original problem with hands is that squeeze/select overlap
… spec implies they are distinct
… has a grasp action mapped to 4th button

manishearth: I like this.
… grasp is a way better name than overlapping-squeeze
… need a line in the spec about it
… need to work with framework people to make sure we handle it right. I'm hopeful.
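[Scribe sketch of the idea under discussion, assuming the PR's mapping; whether hands expose a gamepad with a fourth "grasp" button is not settled here:]

```js
// Hypothetical: if a distinct grasp action were surfaced as the fourth button
// on the hand input source's gamepad, content could poll it like this.
function isGrasping(inputSource) {
  const gamepad = inputSource.gamepad;
  return !!gamepad && gamepad.buttons.length > 3 && gamepad.buttons[3].pressed;
}
```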

ada: Think we can call it an evening/afternoon/morning. :)

Minutes manually created (not a transcript), formatted by scribe.perl version 124 (Wed Oct 28 18:08:33 2020 UTC).