Meeting minutes
archiving webvr rocks
ada: First issue is archiving webvr rocks
atsushi_: WebVR related repositories are still up but have not been updated for 6 or 7 years
… proposing archiving repos
ada: Archiving makes sense for webvrrocks. Archiving webvr makes me feel sad, but it's still available to read on GitHub
cabanier: It looks like we don't even own the domain anymore
ada: In that case, yes, archive webvrrocks AND webvr repo
ada: Are there any other repos we should archive?
atsushi_: I will review and send the list to the mailing list for consideration
Move to WG? immersive-web/WebXR-WebGPU-Binding#18
ada: WebXR WebGPU bindings have two implementations. It seems really stable, so I suggest we move it to the working group
Mike_Wyrzykowski: This has shipped in visionOS 26.2
bajones: We came up with a few minor tweaks, but could benefit from having more users trying it out before committing to spec
Mike_Wyrzykowski: We shipped it without flag
<alcooper> +1
<bajones> +1
<Mike_Wyrzykowski> +1
<cabanier> +1
<mkeblx> +2
ada: Let's do a straw poll on moving this to working group
<m-alkalbani> +1
cabanier: There are still outstanding questions about foveation
<ada> yeah like that, or email me or the chair's mailing list
Mike_Wyrzykowski: Foveation should be discussed since it's handled differently depending on platform
ada: Let's make an issue for that
ada: We will make a formal call for participation for objections/approvals
Add Scene Description API for automation and a11y immersive-web/webxr#1363
<ada> immersive-web/
ada: Moving on to scene description API, waiting for Baran to join the call
ada: I've been working on this for a while, primarily for accessibility tools to know what's on the screen and offer it up to screen readers.
… declarative list of what is on the screen against what is being rendered
… Also useful for gaze glow on visionOS.
… It's useful to know what you will interact with before you interact with it.
… I used three.js. Wouldn't need to render at the full eye buffer size, it can be much smaller
… Maybe half size.
… It could be in stereo or mono depending on os
… Here's how it works: you have a scene graph; you give the various objects a description that tells the system how to interact with them, and also assigns an id.
… That id should be a number. You render the corresponding color in linear space into the id buffer, a buffer where it looks like silhouettes of whatever the user can see.
… The system can then know what's on the screen accordingly
… There's a lot of ways this could be enhanced
… Will have performance overhead since it's an additional render
… Added interactivity is a bit of a carrot for devs hopefully
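A minimal sketch of the id-buffer encoding described above. The helper names are hypothetical; the minutes only say that each object gets a numeric id that is rendered as a flat linear-space color, so that each object shows up as a silhouette the system can identify.

```javascript
// Hypothetical encoding: pack a numeric object id into a linear-space RGB
// triple for writing into the id buffer.
function idToLinearColor(id) {
  return [
    ((id >> 16) & 0xff) / 255, // r
    ((id >> 8) & 0xff) / 255,  // g
    (id & 0xff) / 255,         // b
  ];
}

// Inverse: recover the object id from a pixel sampled out of the id buffer,
// e.g. under the user's gaze point.
function linearColorToId([r, g, b]) {
  return (Math.round(r * 255) << 16) | (Math.round(g * 255) << 8) | Math.round(b * 255);
}
```

An 8-bit-per-channel buffer like this gives about 16 million distinct ids, far more than a scene would ever need for selectable objects.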
<Zakim> atsushi, you wanted to discuss will we want to trigger review by horizontal, esp TAG? (or did we?)
<ada> sorry atsushi missed that for the last topic
cabanier: Your proposal is very similar to what we've previously discussed. In the past we discussed using this as fallback content element in markup
ada: How does this work if it's not just rectangles?
cabanier: It can be any shape actually
ada: Could be tied to area rules, since screen readers will create bounding boxes
cabanier: Color in linear space: we probably just want a 16bit texture
… You also do need stereo in order to draw the outline, since it will be a different position for each eye
bajones: Lower resolution is a double-edged sword. Higher resolution can all be done in one pass, but lower may require multiple render passes.
… If you're doing the render as part of color pass, in XR we want that multisampled but not a traditional multisample resolve.
… You would need to do a manual resolve (painful) or another pass altogether
<bajones> https://
cabanier: One of the problems with spacewarp will also appear here: how do you handle opacity?
ada: That would be up to you as the author.
cabanier: Author would need to be mindful of that then...
bajones: How many carrots can we provide to devs to say that this is for their benefit?
… Is there an automated event driven system to drive this?
… Could fire a select event on raycast, depending on aria roles
… You might only need to write out selectable objects here. If there is a big forest scene with a selectable animal, you may only need to specify the animal
… The webgpu sample linked is a good example of what we are talking about
ada: On the topic of webgpu, our accessibility folks were very excited about the possibility of extending this to webgl and webgpu
alcooper: I like the idea of sending clicks. If we want to dip toes into gaze, we might expose hover as well. I know Apple is opposed to this
… We might consider exploring this as well if we start going down this road
ada: We would likely pass the buffers back to OS.
… (given existing policies re: Safari exposure)
… I think it's worth continuing the discussion, but our stance should not preclude that
mkeblx: When you lose the hierarchy, in an example with two characters holding an amulet that is a child of the character, you can't highlight both objects in different ways at the same time
ada: You may need to remove the amulet from the hierarchy altogether
… focus events might work better for this over hover events
mkeblx: if you could have some structure to the colors, then you could impose some hierarchy that is inferred
ada: Worth talking about, but maybe not for mvp
mkeblx: I'll try to find some other examples that are less niche
Web3D (.x3d) Talk : Web3D and the Web of Worlds
[talk by Web3D - Nicholas Polys]
moving real world geometry to the WG
alcooper: maybe it's time to move this to the WG
… name doesn't match the feature, it's been behind a flag on chrome for ages
… planning to look into launching this sometime in the next year
… the question is: should we move real world geometry (plane-detection is the feature name) to the WG?
ada: can we rename the repo name without breaking anything?
atsushi: no issue on renaming
ada: will go ahead and rename it, let me know if anything breaks
alcooper: there is one file that should be moved (one that relates to meshing)
bajones: we can drop the file about meshing (and other mentioned by alcooper)
<cabanier> +1
ada: in room poll on moving real world geometry to WG
<alcooper> +1
<mkeblx> +1
alcooper: there are a couple of things flagged as unstable, worth chatting about these in future meetings
ada: can discuss those in WG meetings since they're not major changes
… will send cfp for this next week, love moving stuff to WG!
State of WebXR cross-vendor testing immersive-web/webxr-test-api#90
<ada> immersive-web/
alcooper: wanted to ask about testing and if there is any reporting on that information
… good to get representation from other vendors on there
ada: question to Mike_Wyrzykowski: do you use the tests internally?
Mike_Wyrzykowski: we do not
cabanier: at one point, we looked into wpt and they weren't working on android, has that changed?
<ada> here are the WebXR WPT tests: https://
alcooper: all testing is on blink; they run on desktop platforms internally, but fail sometimes due to JS strictness
… I thought they run on Android; we can definitely run them internally
… usually face issues running them on mac, due to strange issues like focus etc
… question to apple: the point of wpts is interoperability; it would be a good thing to have us run them so we can say this is a complete implementation. if we ever want to use features in the core spec, it would be worth having these tests run and passing
ada: on moving webxr to rec, intention is for webxr now to be a living standard - but agreed we'd like to have tests run without issues. can't answer now but will take a note and shake the apple tree
cabanier: looking through history of wpts, someone on the team said interaction with browser UI relies on a driver
alcooper: faking a click is only thing that goes through a driver, everything else goes through a fake backend that goes to the device
alcooper: [quick investigation of this]
ada: adding issue to agenda to make sure we keep following up on it
cabanier: valuable for more than webxr, for browsers in general
alcooper: there is testdriver.click (only works for top level frame), we do use that. only thing needed from test driver perspective is the click
… there is a thing called test driver that runs wpts all over, but for webxr specifically it's just a click
… test api is basically simulating a fake device
ada: queued this for next WG meeting to check up on it, to keep some heat on it
alcooper: did we want to add someone else as an editor in addition to Manish?
ada: agreed
ada: any interest in becoming editor, good chance for someone to get involved
m-alkalbani: interested, will need final confirmation
spatial figure
<ada> can the teleconf hear us?
<Raul> webspatial/
<Raul> ^Re Pico WebSpatial presentation
<Zakim> ada, you wanted to ask whether the developer specifies where windows appear
ada: when the developer opens a new window, do they choose where the window goes?
parth: for a new scene, there is no control over where that is placed
ada: for spatial pointer interaction, is there a way to obfuscate the user's position
… and gain information about height, etc
… which a normal browser doesn't have
ada: yes, but things are more restrictive on the web
bajones: based on what you showed, the rendering is done through the visionos toolkit
… you are not feeding pose data into the embedded web pages
parth: yes, we have a builder that decides the spatialization
… that places the div and marks it as translucent and then the os renders it
bajones: you mention the react integration
… does it require it?
… If I had a html page with some css, would that work
parth: right now, we only have a react SDK but the intent is for it to be agnostic
Joshinch: spatial transform sounds very similar to transform detached
… maybe we could try to broach it the same way
… we want to provide the depth of the element
… I wonder if we can somehow merge those
parth: yes, I think they are very similar
Joshinch: great if we can unify
bajones: I think one of the concerns is simply that yours is a deep integration with the OS
… and that is a big departure
… from how browsers do their rendering
… the spatial figure was mostly a way to avoid
… that. it wouldn't allow for this set of effects
… but in trying to do the easy thing first, spatial figure wouldn't require tearing up the browser
… I don't think anyone doesn't want this
ada: the syntax I want to propose is very similar
bajones: I wanted to tie up some threads here
… there are 3 different ways of getting to the same goal
… there was a lot of confusion about how spatial figure would fit in here
… what is the most practical thing on how they would function
… this is an amazing prototype
… but we'd all like to get to the same place
parth: right now you can't do these effects, we don't have a clear answer here but we would like to continue the discussion
ruoya: I also talked to ada and have questions about the spatial figure
… and it seems like it was more about the model tag, but maybe it can do this simple thing
ada: spatial figure is focused on a smaller use case for a smaller problem
… implementing detached is very hard for us, but now I hear that our proposal is hard for others
… it's trying to establish it to get it in the hands of users.
… I'd like to make sure that the easier parts could be implemented by others
Joshinch: if we all have agreement that this simple approach is the way to do spatial
… so why wouldn't we put this in the browser compositors
ada: for us, this is part of a larger unified spatial web proposal
… I'd like that the other concepts aren't prevented
… the other syntax I'd like to propose is only a tiny bit more
Joshinch: but can't we do that
… in the compositors
ada: I want to add that with extra syntax
… transform is cool but not enough to do the job by itself
… hopefully we can find a path that is good for everyone
cabanier: please write down the spatial figure proposal so we understand how it works
adekker: can you talk more about the material background?
… I know we discussed the microsoft proposal from Diego
parth: the question is if material background has more nuance
… we'd like to provide backgrounds to what users are seeing
… we may want to standardize on how they would look like
… thick, thin, ...
ruoya: what is the question about?
adekker: a lot of the materials are OS dependent for performance reasons
… how do you provide consistency? Or do you pick particular versions of transparency? Will there be a media query?
ruoya: this has been discussed in our team
… our aim is to support this on every platform
… we are thinking about this. Apple supports the most
… our approach is to support 3 while apple has 5
… if users use our SDK, they shouldn't have to worry
adekker: it would be nice to have a progressive fallback so things get better
… the microsoft explainer had a comma separated list from most to least preferred
ada: this would be handled by css using @supports
… or maybe you can do it by adding them all and then the supported one would be picked
… or maybe it's like how you do fonts
… css has many ways to handle the progressive enhancement
… although this isn't an endorsement, I like to think about CSS APIs
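The most-to-least-preferred fallback idea from the Microsoft explainer could be modeled roughly like this. The function and material names below are purely illustrative; no such API or material vocabulary has been standardized.

```javascript
// Illustrative only: pick the first material the platform supports from a
// most-to-least-preferred list, falling back to a plain background when
// nothing in the list is available.
function pickMaterial(preferred, supported) {
  for (const material of preferred) {
    if (supported.has(material)) return material;
  }
  return "opaque"; // hypothetical fallback value
}

// e.g. a platform that exposes 3 material thicknesses while another
// platform exposes 5:
const supported = new Set(["thick", "regular", "thin"]);
pickMaterial(["ultraThin", "thin", "regular"], supported); // picks "thin"
```

In CSS itself the same progressive enhancement could instead fall out of `@supports` or a comma-separated value list, as discussed above.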
ruoya: what would be the next step?
Joshinch: it's the same for us
ada: you can take it to the CSS working group
Joshinch: it seems that that is a bit early
ada: I worked out the spec so it's modular so people can break it out
Joshinch: I'd like to talk to our TL but don't want to wait 6 months
ada: I will talk about more stuff when I get approval to do so
… I'm happy to advise on stuff or explain more once I have permission
… I hope that sounds sensible
Joshinch: what is the scope you are talking about
ada: once we propose it, we can take chunks to the working group
Joshinch: maybe we can discuss the 3 different perspectives
ada: last time we tried to take it apart in chunks and we came up with spatial figure, which didn't go over super well
… I think it won't be hard to find agreement
… we need a separate context
WebSpatial
Slideset: https://
Feature Request: Cubemap texture support for native media layers. Multi-tile feature immersive-web/webxr#1419
<ada> immersive-web/
bajones: I don't know if I'm the best person for this, but I'm not sure anyone else is familiar. This appears to be a request from people who want to play a 360 video but want to use a cube map instead of equirect for high quality
… in this case, in the interest of bandwidth, they only want to stream in the sides of the cube that the user is looking at
… right now we do have cube layers. I think what's being asked for is media bindings that would take 6 different videos, place them in a cube around you, and only advance streaming of the ones you are looking at
cabanier: is that the ask?
bajones: hard to tell because he's talking about a couple of things, but I think so
bajones: this is interesting and I see where he's coming from... not sure if it's too niche, not sure it's one that we want to commit to a native implementation for
… you could get the same kind of effect by either having a cube map layer and putting videos into the tiles you are looking at and ignoring the sides that are behind you
… or, not sure how well it would work, but you could position different quad media layers all around you and then do what he was going to do and just advance the ones he's looking at
… would end up with seams most likely
bajones: no loss of quality, but more manual setup, which they want to avoid
bajones: if this setup was standard, then it would be worth considering more. but it's not, and there are a lot of weird layouts out there.. some are backed by meta and apple,
… apple's video format is non-standard; if they brought it to webxr... you would have to render it to a mesh in a projection layer... not the greatest, but it is what it is
… I don't know if there is anything I would recommend here
bajones: would love to touch on a different but related topic
cabanier: I think he already has all the features he needs
… if he wants to only update the textures he needs in the cube map... then surface-to-3D... for that element, just for the ones not in view
… doesn't have to have an array of textures...
… you put them in the regular cube map layer; it's not something that you need if you make the cube map layer. I don't think it's logic I would want to build in
bajones: what do you think of the idea of positioning 6 different quad elements
cabanier: doesn't help
cabanier: same thing
… that's what media layers are
… has everything he needs. A cube map already is an array of textures; not sure about video textures
bajones: inclined to agree. the stuff is in place already; requires work in the application... but that is appropriate
cabanier: agree
… if you pause and you look, it won't be there.. which isn't good
bajones: tangent - cubemaps... we have some contractors that are working on a layers implementation; excited to see it up and running.
… one of the things they ran into was cubemaps. I'd love to ask meta if they have had trouble with it
bajones: in our browser all the xr surfaces are passed over a process boundary; we have to share them using whatever the os primitives are... in android xr it ends up in a hardware buffer, which on almost no hardware, as far as I know, actually supports cube maps as a shareable surface
… so the developer that's working on this is finding ways around that, which involves more copies than we would like
… we could get around it by, instead of sharing the image across boundaries as a cube map... sharing it as a texture array... a 6-layer array.. functionally not different. it's the same in webgpu; GL just likes to use different things for the same thing
… wanted to ask meta, is it something you have a problem with?
bajones: is it something you would consider: adding a mode or option to the layers api where, if you are doing a cube map, you can actually request it as a texture array
cabanier: we don't run into that problem since we use a hardware buffer.
… didn't run into that. I know it works for us, but not everywhere..
… I don't mind adding it; kinda unfortunate to do it... would you want to trick it? because under the hood you could just spoof it as a cube map.
… hacky
bajones: have considered it. there's some hackiness that could go on... we have existing hackiness... for other things. generally don't want to do that though
… we have the texture type that is available on every other layer... in this case what we would be interested in is using the same texture type that is available on every other layer and making it available on the cube layer
cabanier: if people are already using cube map layers today it would be broken
bajones: don't want to change defaults
cabanier: ok
bajones: it would make the cube map layer look more like the rest of them
… if you ask for a texture array explicitly... you would get a 6-layer array
cabanier: a texture array would be more efficient
bajones: we would send it over the process boundary, and support cube map as a copy still
cabanier: ok sounds good
cabanier: if you use android hardware buffers?
bajones: on this hardware we share it directly; we wrap it as a hardware buffer and have people render into it directly
alcooper: we share the buffer the page draws in directly and we immediately read from it. we have copies now but we could theoretically tie it in
bajones: we do find that on windows sometimes the handles coming out of openxr... don't have the right flags set on them internally
… in that case we do a copy anyway because of the process boundary
… except in cases like this, where we found that we can't send it as a cubemap, so either hack a webgl api to fake cubemaps or just offer this
bajones: I will update the spec
cabanier: aligned
ada: cube maps in the layers, right?
ada: we don't have cube shaped media layers?
bajones: we do have cube shaped compositor layers
bajones: great for skyboxes, weird video layouts, etc
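The cube-map/texture-array equivalence discussed above relies on the fixed face order that GL and WebGPU both use for cube maps, which is what makes it possible to share a cube map across a process boundary as a plain 6-layer 2D array and re-view it as a cube on the other side. A small sketch of that mapping (the helper name is made up):

```javascript
// GL and WebGPU store cube maps as 6 layers in the same fixed face order:
// +X, -X, +Y, -Y, +Z, -Z.
const CUBE_FACE_LAYERS = {
  "+x": 0, "-x": 1,
  "+y": 2, "-y": 3,
  "+z": 4, "-z": 5,
};

// Hypothetical helper: which layer of a 6-layer 2D texture array holds a
// given cube face.
function cubeFaceToArrayLayer(face) {
  const layer = CUBE_FACE_LAYERS[face];
  if (layer === undefined) throw new Error(`unknown cube face: ${face}`);
  return layer;
}
```

In WebGPU the round trip is explicit: the same underlying texture can be viewed with `dimension: "2d-array"` for sharing and `dimension: "cube"` for sampling.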
Camera intrinsics immersive-web/administrivia#228
<ada> immersive-web/
cabanier: meta recently gave access to the front camera on quest, behind a permission prompt etc
… first thing they want to do is detect QR code, etc and draw a box on top, a classic use case
… the cameras are not the same position and orientation as displays, so don't have that data provided to devs
… devs need this data
… raw camera access api can give this data potentially
alcooper: raw camera access data currently doesn't provide a sufficient amount of info
cabanier: so we would need a way to provide this intrinsic camera data, position, fov, etc
… while not strictly only for webxr, it's likely only useful for or will be used in webxr
cabanier: it works surprisingly well (the general "get camera access and do CV on it" flow), so overall the feature is useful
ada: webxr should be secure more by default, and having a use case of getting camera just to put a cube on QR code can be handled in better ways
cabanier: so this may have to be done outside of this group anyway as not a webxr spec
ada: overall the CV approach has more specific APIs that could cover
cabanier: there are cross platform issues, so yes maybe exposing intrinsics (like direct QR code / marker handling etc)
alcooper: currently the API gets a camera from an XRView
… maybe could expose a XRView associated with a camera for this. maybe there are sync issues with this?
… did explore exposing extra info like camera geometry for the Depth api for webxr.
cabanier: returned a xrview and had a mapping for depth api. but the depth api is different, maps more directly with screen pixels
alcooper: also, is it time to re-visit the other APIs like marker tracking and see where they are at? if we can make progress on this that would work
cabanier: yes we could do on our platform (meta) QR code tracking, but does come with restrictions like 1fps tracking
mkeblx: I guess one meta comment: if one platform, e.g. Apple's, didn't want to support camera access, then we could return null.
ada: WebXR's required/optional/enabled features enable this
m-alkalbani: is the marker tracking implemented on meta?
cabanier: no not implemented yet
ada: different platforms are good at image tracking and bad as QR codes, and vice versa
bajones: answering does android/androidxr support qr code tracking?
… we may want to go down path of supporting the route of tracking markers (exact form TBD)
https://
pico: checking if they support QR code tracking
alcooper: there will always be developers who want more, and wider world expects or is pretty comfortable with having camera access available
… so it does make sense to make it better, but also work on some use cases like QR code tracking
bajones: it's an interesting environment where we can fix some cases like QR codes, but there's going to be other demands from developers
ada: but for privacy HMD is different than phone in terms of camera control
m-alkalbani: what is the state of marker tracking proposal? what's next
bajones: if going to do marker tracking, let's pick a common denominator that everyone can track
Dynamic Foveation immersive-web/webxr#1420
ada: happy that marker tracking discussed and we can follow up with getting back on the table
ada: Dynamic foveation allows for adjusting quality based on where gaze lands. Apple strongly believes pages shouldn't know where your gaze is (and we go to great efforts to make the browser not even know this).
… Is there a way to do this without leaking gaze to the site? Maybe some WebGPU-only thing or other alternative.
… I'm not familiar enough with the stack to have good suggestions, but hopefully someone else here is
Mike_Wyrzykowski: If we prevent readbacks/reading the texture in any form, this wouldn't leak gaze.
… WebGPU has memory-less textures called transient attachments. So you can write to this but never read it back
… But can't do multipass renderpasses without leaking the info.
… Even if we prevented pixel readback or copying, multipass would allow reading
Mike_Wyrzykowski: No reason we couldn't implement this on WebGL either; but maybe skip this part
bajones: Definitely still worthwhile to do WebGL, unquestionably where most content is produced since WebGPU isn't widely supported
… but in WebGL it can be done silently and in WebGPU its trickier
… hadn't considered transient attachments yet, though it's pretty new.
… Not sure I follow about not doing multipass
… Could envision getting a transient attachment for the foveation level of the pixels,
… cabanier has done samples outputting colors based on derivatives of values between textures
… Rendering based solely on that output can give the foveation values, which gives a rough map of gaze. Wouldn't be the best, but would be possible
… So are you saying only rendering to textures that also can't be readback either
Mike_Wyrzykowski: Yes
bajones: I don't think we have a mechanism for that
… Maybe require texture usages only contain render attachment?
… If you don't have any of the usage attachments no way to get data out of API
Mike_Wyrzykowski: Yeah, those restrictions would prevent leaking
bajones: Probably not terrible. Canvas textures only have render attachments anyways and most folks don't ask for more though they can, so it's probably reasonable
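The restriction being discussed can be made concrete: a WebGPU texture whose usage contains only RENDER_ATTACHMENT can be drawn into but never sampled, copied, or read back. The descriptor and check below are illustrative sketches, not a specced mechanism; the usage bit values are the ones defined by the WebGPU spec.

```javascript
// GPUTextureUsage bit values as defined in the WebGPU spec:
const COPY_SRC = 0x01, COPY_DST = 0x02, TEXTURE_BINDING = 0x04,
      STORAGE_BINDING = 0x08, RENDER_ATTACHMENT = 0x10;

// A hypothetical gaze-protected foveation target. With no COPY_SRC, no
// TEXTURE_BINDING, and no STORAGE_BINDING, there is no API path that lets
// the page read the contents back out.
const foveationTargetDescriptor = {
  size: [1024, 1024],
  format: "r8unorm",
  usage: RENDER_ATTACHMENT,
};

// The validation an implementation might enforce before tying a texture to
// gaze-driven data:
function isReadbackProof(usage) {
  return (usage & (COPY_SRC | TEXTURE_BINDING | STORAGE_BINDING)) === 0;
}
```

As bajones notes, canvas textures already default to RENDER_ATTACHMENT-only usage, so this restriction matches what most content asks for anyway.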
<ada> ack
<Zakim> ada, you wanted to ask about having it as a feature flag
ada: Is this the kind of thing that you can just turn on?
… e.g. we can do dynamic foveation and you only asked for default foveation, so do it anyway?
bajones: Not just something you can get for free.
… Need to set something to indicate where foveation takes affect
… probably looks like: request a "memoryless" texture, pass it into some variable rate shading, it does its thing, and then discard the texture
… there's a difference between doing this dynamically vs statically. Folks may only grab it once if it's static. So developers need to be told to grab it every frame (or at least when to update)
… developers will explicitly need to integrate it, and we can't just say "We foveated it"
… while you know which textures came from XR, this isn't the way the APIs work
cabanier: Would you be able to postprocess since that requires reading from the texture?
bajones: Can still attach and load but can't sample from the texture
… Can blend against that, but couldn't blur or sharpen
Ada: Would have to render out a separate buffer for the bits you need
cabanier: Can't copy bits out?
ada: copy to a different texture and then blit?
bajones: You've used up all of the fillrate and so you don't get anything
ada: So can't redo with lower resolution, e.g. just as depth buffer
bajones: Yeah, not really
… What's apple's opinion on quantization of these kinds of things?
ada: No Comment
Mike_Wyrzykowski: If developers don't invalidate any of the restrictions, then could just be a boolean flag, but may need to adapt application if they want to use this and postprocessing will be a challenge as well.
… maybe better once WebGPU spec evolves, but not as it stands today
ada: Not opposed to saying if you want postprocessing you only get static foveation
… you're already doing a difficult expensive thing so maybe don't care about max framerate
bajones: If it's static, no one cares if you can read the texture back
… probably a world in which we say, "technically we're eye tracking, but really we're just switching between like 4 textures"
… but there's a resolution at which that happens where it becomes uncomfortable
… but no one is probably prepared to answer where that line is
… but interesting if we can consider a mechanism for "Just give me a static foveation"
ada: I also don't know anyone doing post processing
bajones: If more people do WebXR on M5 chips they may have cycles to spare for it; it's generally just been too expensive for developers to do on devices
mkeblx: Maybe we don't need to think about it too much because you can just do this natively
… is there a reason we need to do something different?
… can you always do a timing attack though?
bajones: There is a side of that to this
… I'm not sure how much of a concern this actually is
… we found with WebGL, and likely WebGPU as well, we had a concept of secure textures you couldn't read back from, similar to what we described here
… but found a demo where someone showed that, even without ever writing out the RGB values sampled from a texture to a buffer, they were still able to get a reasonable approximation of the image by altering the amount of work the shader did based on the RGB or luminance values
… tiny square for red, giant square for blue, and just observe how long draw calls took.
… This took a very long time, but you could get a gray scale image of the "protected" value
… By the nature of this tech, you're going to spend less time on some pixels and more time on other pixels. So could scale and observe the workload based on this
… Could theoretically generate a rough approximation of what that shading map looks like
… it's slow, not practical and for VR certainly such a chunky experience that I don't think people would stay in the experience
… but is worth discussing as a thing that has technically been proven possible
mkeblx: You can even do a timing attack with the model tag
mkeblx: May take one minute to get 5 bits of data
cabanier: Would be good to see if Apple would be okay with 4x4 or 3x3 quantized options
Ada: Should we make a CG repo or do it under core spec? How do folks feel?
bajones: Probably needs two pieces. One is to have the WebXR side of things (e.g. how to produce the foveation and what are the limitations when it's tied to gaze)
… leaning very heavily on WebGPU since that's where developers need more action, but can maybe just do it as a boolean in WebGL
… for the WebGPU side, apart from the images/restrictions, need basic support from WebGPU API for variable rate shading, which would be taken up by that group
… poke at them (aka bajones and Mike_Wyrzykowski )
… The actual feature for foveated textures would be in the WebGPU spec since there's applications beyond just WebXR, games do this today
cabanier: Expression tracking may tie into this as well?
WebXR Expression Tracking immersive-web/administrivia#227
<ada> immersive-web/
alcooper: main reason to flag this is to feel the pulse of the room; proposal originally by Rik. There are already openxr extensions to do this.
ada: gut instinct - don't love this; you could get a persona, and do the same thing.
bajones: does apple expose facial tracking, in native apps?
cabanier: want to make sure it's privacy enabled; the fact that it is blend shapes helps with that.
bajones: when talking about privacy, it's about quantization of the data. Break eye poses down into discrete values, and the same might go for the rest of the facial features. Privacy in this case means maybe 16/32 discrete values that we will go between, so we don't get micro-expressions, but do get overall mouth movement, etc.
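The 16/32-discrete-value idea could look like this for a single blend-shape weight. Illustrative only; no such quantization scheme has been specced.

```javascript
// Quantize a blend-shape weight in [0, 1] down to one of `levels` discrete
// values: coarse enough to convey "mouth open" but not micro-expressions.
function quantizeBlendShape(weight, levels = 16) {
  const clamped = Math.min(1, Math.max(0, weight));
  return Math.round(clamped * (levels - 1)) / (levels - 1);
}
```

A UA could apply the same rounding to every exposed blend shape, with `levels` chosen per feature (eyes vs mouth) based on the privacy review.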
alcooper: Q to Ada: are you concerned with whole api? or just a part of it?
ada: delivering something without eyes (for example) seems incomplete
ada: Don't think webxr sites should be handling their own avatars, since the website then has a full view of what you're doing with a lot of your body. Would love to have a higher level avatar system, but it's monumental to build.
bajones: Meta has an avatar system that synthesizes a lot of information; would there be an appetite to expose it through an API (if we were to develop one)?
cabanier: Hard to say. Implementation is a small part of it. I don't see why we wouldn't expose it, if developers would want it.
cabanier: I would be surprised if the openxr api doesn't already do that.
bajones: cabanier Facetracking seems to exist already in meta docs.
ada: If you're eating (on AVP), it obscures the mouth area.
… Seems like general consensus that there is appetite for this. If bajones implements, cabanier will as well.
ada: Will make a repo for this