18:18:14 RRSAgent has joined #immersive-web
18:18:18 logging to https://www.w3.org/2025/11/21-immersive-web-irc
18:18:25 present+
18:18:27 Present+
18:18:28 present+
18:18:29 present+
18:18:30 present+
18:18:30 present+
18:18:31 present+
18:18:36 present+
18:18:36 Zakim has joined #immersive-web
18:18:46 scribenick: Raul
18:18:53 rrsagent, this meeting spans midnight
topic: archiving webvr rocks
18:19:20 ada: First issue is archiving webvr rocks
18:19:38 ruoya has joined #immersive-web
18:19:44 atsushi_: WebVR related repositories are still up but have not been updated for 6 or 7 years
18:20:08 q+
18:20:09 ... proposing archiving the repos
18:20:12 ack ada
18:20:27 q+
18:20:39 ada: Archiving makes sense for webvrrocks. Archiving webvr makes me feel sad, but it's still available to read on GitHub
18:20:45 ack cabanier
18:21:01 cabanier: It looks like we don't even own the domain anymore
18:21:30 ada: In that case, yes, archive webvrrocks AND the webvr repo
18:21:40 parth has joined #immersive-web
18:22:00 ada: Are there any other repos we should archive?
18:22:25 atsushi_: I will review and send the list to the mailing list for consideration
18:22:38 https://github.com/immersive-web/WebXR-WebGPU-Binding/issues/18
18:23:07 q+
topic: Move to WG? immersive-web/WebXR-WebGPU-Binding#18
18:23:32 ada: WebXR WebGPU bindings have two implementations. It seems really stable, so I suggest we move it to the working group
18:23:32 q+
18:23:54 ack Mike_Wyrzykowski
18:24:01 q-
18:24:34 ack alcooper
18:24:34 Mike_Wyrzykowski: This has shipped in visionOS 26.2
18:25:21 q+
18:25:47 bajones: We came up with a few minor tweaks, but it could benefit from having more users try it out before committing to a spec
18:25:53 ack Mike_Wyrzykowski
18:26:19 Mike_Wyrzykowski: We shipped it without a flag
18:26:30 +1
18:26:33 +1
18:26:38 +1
18:26:39 +1
18:26:41 q+
18:26:45 +2
18:26:48 ada: Let's do a straw poll on moving this to the working group
18:26:49 +1
18:26:50 ack cabanier
18:28:25 cabanier: There are still outstanding questions about foveation
18:28:27 yeah like that, or email me or the chairs' mailing list
18:28:40 atsushi has joined #immersive-web
18:28:53 q+
18:29:11 ack Mike_Wyrzykowski
18:29:18 q+ for will we want to trigger review by horizontal, esp TAG? (or did we?)
18:29:28 Mike_Wyrzykowski: Foveation should be discussed since it's handled differently depending on platform
18:29:42 ada: Let's make an issue for that
18:31:32 ada: We will make a formal call for participation for objections/approvals
topic: Add Scene Description API for automation and a11y immersive-web/webxr#1363
18:32:08 https://github.com/immersive-web/webxr/issues/1363
18:32:43 @ada Moving on to the scene description API, waiting for Baran to join the call
18:33:24 ada: I've been working on this for a while, primarily so accessibility tools can know what's on the screen and offer it up to screen readers.
18:34:04 ... A declarative list of what is on the screen, matched against what is being rendered
18:34:19 ... Also useful for gaze glow on visionOS.
18:34:47 ... It's useful to know what you will interact with before you interact with it.
18:34:49 q+
18:35:46 ... I used three.js. It wouldn't need to render at the full eye buffer size; it can be much smaller
18:35:57 ... Maybe half size.
18:36:21 ... It could be in stereo or mono depending on the OS
18:36:37 q+
18:37:11 ... Here's how it works: you have a scene graph with various attributes; it gives objects a description, tells you how to interact with them, and also assigns an id.
18:37:49 ... That id should be a number. You can render the relative color in linear space into the id buffer, which is a buffer that looks like silhouettes of whatever the user can see.
18:38:03 ... The system can then know what's on the screen accordingly
18:38:23 ... There's a lot of ways this could be enhanced
18:38:37 ... It will have performance overhead since it's an additional render
18:38:54 ... The added interactivity is a bit of a carrot for devs, hopefully
18:38:56 q+
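[Scribe note: a minimal sketch of the id-buffer idea ada describes above, assuming three.js; the id-to-color packing and names like renderIdBuffer/a11yId are illustrative only, not part of any proposal.]

    // Render each tagged object's numeric id as a flat color into a
    // half-resolution offscreen target, producing the "silhouette" id buffer.
    import * as THREE from 'three';

    function idToColor(id: number): THREE.Color {
      // Pack a 24-bit integer id into linear RGB (flat shading, no lighting).
      return new THREE.Color(
        ((id >> 16) & 0xff) / 255,
        ((id >> 8) & 0xff) / 255,
        (id & 0xff) / 255,
      );
    }

    function renderIdBuffer(
      renderer: THREE.WebGLRenderer,
      scene: THREE.Scene,
      camera: THREE.Camera,
      target: THREE.WebGLRenderTarget, // e.g. half the eye-buffer size
    ): void {
      const saved = new Map<THREE.Mesh, THREE.Material | THREE.Material[]>();
      scene.traverse((obj) => {
        if (obj instanceof THREE.Mesh && typeof obj.userData.a11yId === 'number') {
          saved.set(obj, obj.material);
          obj.material = new THREE.MeshBasicMaterial({ color: idToColor(obj.userData.a11yId) });
        }
        // Meshes without an id keep their normal material in this sketch; a
        // fuller version would hide them or render them black, since only
        // selectable objects need to appear in the id buffer.
      });
      renderer.setRenderTarget(target);
      renderer.render(scene, camera);
      renderer.setRenderTarget(null);
      // Restore the real materials for the normal color pass.
      saved.forEach((material, mesh) => { mesh.material = material; });
    }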
18:39:16 ack atsushi
18:39:16 atsushi, you wanted to discuss will we want to trigger review by horizontal, esp TAG? (or did we?)
18:39:41 sorry atsushi, missed that for the last topic
18:39:48 ack cabanier
18:40:30 atsushi_ has joined #immersive-web
18:40:32 cabanier: Your proposal is very similar to what we've previously discussed. In the past we discussed using this as a fallback content element in markup
18:40:56 ada: How does this work if it's not just rectangles?
18:41:02 q-
18:41:06 cabanier: It can be any shape actually
18:42:01 ada: It could be tied to aria roles, since screen readers will create bounding boxes
18:43:15 cabanier: Color in linear space: we probably just want a 16-bit texture
18:43:44 ... You do also need stereo in order to draw the outline, since it will be at a different position for each eye
18:44:41 bajones: Lower resolution is a double-edged sword. Higher resolution can all be done in one pass, but lower may require multiple render passes.
18:45:34 ... If you're doing the render as part of the color pass, in XR we want that multisampled, but not a traditional multisample resolve.
18:45:51 ... You would need to do a manual resolve (painful) or another pass altogether
18:46:02 q?
18:46:04 ack bajones
18:46:30 q+
18:46:30 https://webgpu.github.io/webgpu-samples/?sample=primitivePicking
18:46:34 ack cabanier
18:46:53 cabanier: One of the problems with spacewarp will also appear here: how do you handle opacity?
18:47:15 ada: That would be up to you as the author.
18:47:35 cabanier: The author would need to be mindful of that then...
18:47:51 bajones: How many carrots can we provide to devs to say that this is for their benefit?
18:48:37 ... Is there an automated event-driven system to drive this?
18:49:29 ... Could fire a select event on raycast, depending on aria roles
18:50:01 q+
18:50:09 ... You might only need to write out selectable objects here. If there is a big forest scene with a selectable animal, you may only need to specify the animal
18:50:14 ack mkeblx
18:50:20 q+ mkeblx
18:50:41 ... The webgpu sample linked is a good example of what we are talking about
18:52:07 q+
18:52:18 ack mkeblx
18:52:40 ada: On the topic of webgpu, our accessibility folks were very excited about the possibility of extending this to webgl and webgpu
18:55:16 q?
18:55:19 ack alcooper
18:55:53 alcooper: I like the idea of sending clicks. If we want to dip our toes into gaze, we might expose hover as well. I know Apple is opposed to this
18:56:14 ... We might consider exploring this as well if we start going down this road
18:56:41 ada: We would likely pass the buffers back to the OS.
18:57:08 ... (given existing policies re: Safari exposure)
18:57:38 q?
18:57:38 ... I think it's worth continuing the discussion, but our stance should not preclude that
18:57:46 atsushi has joined #immersive-web
18:58:28 q+
18:58:38 ack mkeblx
19:00:16 mkeblx: When you lose the hierarchy, in an example with two characters holding an amulet that is a child of one character, you can't highlight both objects in different ways at the same time
19:00:36 ada: You may need to remove the amulet from the hierarchy altogether
19:01:53 ... focus events might work better for this than hover events
19:02:53 mkeblx: if you could have some structure to the colors, then you could impose some hierarchy that is inferred
19:03:07 q?
19:03:11 ada: Worth talking about, but maybe not for the mvp
19:03:26 mkeblx: I'll try to find some other examples that are less niche
19:17:25 parth has joined #immersive-web
19:19:42 Scribe: m-alkalbani
topic: Web3D (.x3d) Talk : Web3D and the Web of Worlds
19:19:56 [talk by Web3D - Nicholas Polys]
19:38:34 q?
19:40:50 https://docs.google.com/presentation/d/1QS3Quir0FqP0uEHgZ7aBB-ouhgVxBdq6lHUjZjfA77g/edit?slide=id.g3a67a7e1a66_1_20#slide=id.g3a67a7e1a66_1_20
19:41:32 q+
19:41:42 ack alcooper
19:43:11 https://github.com/immersive-web/real-world-geometry/issues/45
19:43:19 topic: moving real world geometry to the WG
19:43:45 alcooper: maybe it's time to move this to the WG
19:44:14 ... the name doesn't match the feature; it's been behind a flag on chrome for ages
19:44:23 ... planning to look into launching this sometime in the next year
19:45:06 ... the question is: should we move real world geometry (plane-detection is the feature name) to the WG?
19:45:28 ada: can we rename the repo without breaking anything?
19:45:37 atsushi: no issue on renaming
19:45:54 ada: will go ahead and rename it, let me know if anything breaks
19:47:02 alcooper: there is one file that should be moved (one that relates to meshing)
19:47:44 bajones: we can drop the file about meshing (and others mentioned by alcooper)
19:48:30 +1
19:48:32 ada: in-room poll on moving real world geometry to the WG
19:48:33 +1
19:48:35 +1
19:48:35 q+
19:48:40 ack alcooper
19:48:56 alcooper: there are a couple of things flagged as unstable, worth chatting about these in future meetings
19:49:17 ada: can discuss those in WG meetings since they're not major changes
19:49:37 ... will send a cfp for this next week, love moving stuff to the WG!
topic: State of WebXR cross-vendor testing immersive-web/webxr-test-api#90
19:50:17 https://github.com/immersive-web/webxr-test-api/issues/90
19:50:37 alcooper: wanted to ask about testing and whether there is any reporting on that information
19:50:50 q+
19:50:56 ... good to get good representation from other vendors on there
19:51:07 ada: question to Mike_Wyrzykowski - whether they use the tests internally
19:51:12 Mike_Wyrzykowski: we do not
19:51:19 ack cabanier
19:51:39 cabanier: at one point, we looked into wpt and they weren't working on android, has that changed?
19:52:14 here are the WebXR WPT tests: https://github.com/web-platform-tests/wpt/tree/master/webxr
19:52:24 alcooper: all testing is on blink; they run on desktop platforms internally, but fail sometimes due to JS strictness
19:52:27 q+
19:52:48 ... I thought they run on android; we can definitely run them internally
19:53:09 ... usually face issues running them on mac, due to strange issues like focus etc
19:53:13 ack alcooper
19:54:16 ... question to apple: the point of wpts is interoperability; it would be a good thing to have us all run them so we can say this is a complete implementation. If we ever want to use features in the core spec, it would be worth running these tests and having them pass
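[Scribe note: for context, a rough sketch of how WPT exercises WebXR against a fake device backend. The API shape follows the webxr-test-api spec (navigator.xr.test); the view details are elided and the loose 'any' typing is just to keep the sketch self-contained.]

    // Rough sketch, assuming the webxr-test-api backend is available.
    async function fakeDeviceSmokeTest(): Promise<void> {
      const xr = (navigator as any).xr;

      // Connect a simulated XR device (a FakeXRDeviceInit dictionary).
      const device = await xr.test.simulateDeviceConnection({
        supportedModes: ['immersive-vr'],
        views: [], // left/right FakeXRViewInit entries would go here
      });

      // requestSession needs a user gesture; the test API can simulate one,
      // which is why only real clicks ever need the separate testdriver.
      let sessionPromise: Promise<any> | undefined;
      xr.test.simulateUserActivation(() => {
        sessionPromise = xr.requestSession('immersive-vr');
      });
      const session = await sessionPromise!;

      // ... assertions against the fake device's poses/views would go here ...
      await session.end();
      await device.disconnect();
    }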
19:54:23 q+
19:54:25 ack ada
19:54:59 q+
19:55:03 ada: on moving webxr to rec, the intention is for webxr now to be a living standard - but agreed, we'd like to have the tests run without issues. can't answer now but will take a note and shake the apple tree
19:55:23 ack cabanier
19:55:59 cabanier: looking through the history of wpts, someone on the team said interaction with browser UI relies on a driver
19:56:26 alcooper: faking a click is the only thing that goes through a driver, everything else goes through a fake backend that stands in for the device
19:56:40 alcooper: [quick investigation of this]
19:56:54 ada: adding an issue to the agenda to make sure we keep following up on it
19:57:06 cabanier: valuable for more than webxr, for browsers in general
19:58:04 alcooper: there is testdriver.click (only works for the top-level frame), we do use that. the only thing needed from the test driver perspective is the click
19:58:34 ... there is a thing called test driver that runs wpts all over, but for webxr specifically it's just a click
19:58:50 ... the test api is basically simulating a fake device
19:58:57 q?
19:59:20 ada: queued this for the next WG meeting to check up on it, to keep some heat on it
19:59:36 alcooper: did we want to add someone else as an editor in addition to Manish?
19:59:41 ada: agreed
20:00:29 ada: any interest in becoming editor? good chance for someone to get involved
20:00:44 m-alkalbani: interested, will need final confirmation
21:31:29 Raul has joined #immersive-web
21:36:21 parth has joined #immersive-web
21:38:30 Mike_Wyrzykowski has joined #immersive-web
21:39:30 q?
21:39:55 m-alkalbani has joined #immersive-web
21:42:08 present+
21:43:13 present+
21:44:44 bajones has joined #immersive-Web
21:45:14 Present+
21:45:51 atsushi has joined #immersive-web
topic: spatial figure
21:46:28 can the teleconn hear us?
21:46:58 Raul6 has joined #immersive-web
21:51:22 atsushi_ has joined #immersive-web
21:54:51 Raul has joined #immersive-web
21:54:54 https://github.com/webspatial/sample-techshop
21:55:21 ^Re Pico WebSpatial presentation
21:59:10 q+ to ask whether the developer specifies where windows appear
22:05:45 q
22:05:48 q+
22:07:20 scribenick: cabanier
22:07:24 ack ada
22:07:24 ada, you wanted to ask whether the developer specifies where windows appear
22:07:53 ada: when the developer opens a new window, do they choose where the window goes?
22:07:54 Q+
22:07:55 mkeblx has joined #immersive-web
22:08:23 parth: for a new scene, there is no control over where it is placed
22:09:22 ada: for spatial pointer interaction, is there a way to obfuscate the user's position
22:09:30 adekker has joined #immersive-web
22:09:32 ... and gain information about height, etc
22:09:45 ... which a normal browser doesn't have
22:10:24 parth: yes, but things are more restrictive on the web
22:10:46 bajones: based on what you showed, the rendering is done through the visionos toolkit
22:11:00 ... you are not feeding pose data into the embedded web pages
22:11:09 q-
22:11:16 parth: yes, we have a builder that decides the spatialization
22:11:29 ack bajones
22:11:33 ... that places the div and marks it as translucent, and then the os renders it
22:11:44 bajones: you mention the react integration
22:11:48 ... does it require it?
22:12:12 ... If I had an html page with some css, would that work?
22:12:33 parth: right now, we only have a react SDK but the intent is for it to be agnostic
22:12:34 ack Joshinch
22:12:52 Joshinch: spatial transform sounds very similar to transform detached
22:13:05 ... maybe we could try to broach it the same way
22:13:16 ... we want to provide the depth of the element
22:13:36 ... I wonder if we can somehow merge those
22:13:43 parth: yes, I think they are very similar
22:13:51 Joshinch: great if we can unify
22:14:15 bajones: I think one of the concerns is simply that you have a deep integration with the OS
22:14:25 yonet has joined #immersive-web
22:14:27 ... but that is a big departure
22:14:36 ... from how browsers do their rendering
22:14:48 ... the spatial figure was mostly a way to avoid
22:15:02 ... that. it wouldn't allow for this set of effects
22:15:30 ... but for trying to do the easy thing first, spatial figure wouldn't require tearing up the browser
22:15:46 ... I don't think anyone doesn't want this
22:16:22 ada: the syntax I want to propose is very similar
22:16:38 bajones: I wanted to tie up some threads here
22:16:58 ... there are 3 different ways of getting to the same goal
22:17:17 ... there was a lot of confusion about how spatial figure would fit in here
22:17:33 ... what is the most practical thing on how they would function
22:17:49 ... this is an amazing prototype
22:17:53 Q+
22:18:01 ... but we'd all like to get to the same place
22:18:04 q?
22:18:09 ruoya has joined #immersive-web
22:18:35 parth: right now you can't do these effects; we don't have a clear answer here but we would like to continue the discussion
22:18:51 ruoya: I also talked to ada and have questions about the spatial figure
22:19:07 ... and it seems like it was more about the model tag, but maybe it can do this simple thing
22:19:26 ada: spatial figure is focused on a smaller use case for a smaller problem
22:19:54 ... implementing detached is very hard for us, but now I hear that our proposal is hard for others
22:20:09 ack Joshinch
22:20:21 ... it's trying to establish it to get it into the hands of users.
22:20:54 ... I'd like to make sure that the easier parts could be implemented by others
22:21:22 Joshinch: if we all have agreement that this simple approach is the way to spatial
22:21:33 ... then why wouldn't we put this in the browser compositors
22:21:49 ada: for us, this is part of a larger unified spatial web proposal
22:22:05 ... I'd like that the other concepts aren't prevented
22:22:20 ... the other syntax I'd like to propose is only a tiny bit more
22:22:42 Joshinch: but can't we do that
22:22:49 ... in the compositors
22:23:06 ada: I want to add that with extra syntax
22:23:32 ... transform is cool but not enough to do the job by itself
22:23:47 ... hopefully we can find a path that is good for everyone
22:23:51 q+
22:23:55 ack cabanier
22:26:01 ack AndrewDecker
22:26:44 cabanier: please write down the spatial figure proposal so we understand how it works
22:27:04 adekker: can you talk more about the material background?
22:27:29 ... I know we discussed the microsoft proposal from Diego
22:28:11 parth: the question is whether material background has more nuance
22:28:26 ... we'd like to provide backgrounds to what users are seeing
22:28:40 ... we may want to standardize how they would look
22:28:48 ... thick, thin, ...
22:28:58 ruoya: what is the question about?
22:29:13 adekker: a lot of the materials are OS dependent for performance reasons
22:29:44 ... how do you provide consistency? Or do you pick particular versions of transparency? Will there be a media query?
22:29:53 ruoya: this has been discussed in our team
22:30:06 ... our aim is to support this on every platform
22:30:17 ... we are thinking about this. Apple supports the most
22:30:35 ... our approach is to support 3 while apple has 5
22:30:56 ... if users use our SDK, they shouldn't have to worry
22:31:31 adekker: it would be nice to have progressive enhancement so things get better
22:31:45 q?
22:31:50 ... the microsoft explainer had a comma-separated list from most to least preferred
22:32:03 ada: this would be handled by css using @supports
22:32:29 yonet8 has joined #immersive-web
22:32:55 ... or maybe you can do it by adding them all and then the supported one would be picked
22:33:05 ... or maybe it's like how you do fonts
22:33:18 ... css has many ways to handle the progressive enhancement
22:33:42 q?
22:33:44 ... although this isn't an endorsement, I like to think about CSS APIs
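[Scribe note: an illustrative sketch of the @supports-style feature test ada describes above, done from script with the real CSS.supports() API; the 'background-material' property name and its values are hypothetical placeholders, not a proposal.]

    // Probe a hypothetical 'background-material' property from most to least
    // preferred, mirroring the comma-separated fallback-list idea.
    const preferred = ['thick', 'thin', 'regular'];

    function pickMaterial(): string | undefined {
      // CSS.supports() is a real API; the property/value names are made up here.
      return preferred.find((value) => CSS.supports('background-material', value));
    }

    const material = pickMaterial();
    if (material !== undefined) {
      document.body.style.setProperty('background-material', material);
    }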
22:34:18 ruoya: what would be the next step?
22:34:30 Joshinch: it's the same for us
22:34:41 ada: you can take it to the CSS working group
22:34:50 Joshinch: it seems that that is a bit early
22:35:46 ada: I worked out the spec so it's modular, so people can break it out
22:36:06 Joshinch: I'd like to talk to our TL but don't want to wait 6 months
22:36:21 ada: I will talk about more stuff when I get approval to do so
22:36:35 ... I'm happy to advise on stuff or explain more once I have permission
22:36:45 ... I hope that sounds sensible
22:36:53 Joshinch: what is the scope you are talking about?
22:37:11 ada: once we propose it, we can take chunks to the working group
22:38:23 Joshinch: maybe we can discuss the 3 different perspectives
22:39:08 ada: last time we tried to take it apart in chunks we came up with spatial figure, which didn't go over super well
22:39:20 ... I think it won't be hard to find agreement
22:39:30 ... we need a separate context
topic: WebSpatial
22:47:26 Slides: https://docs.google.com/presentation/d/1iXJppD2rV6LQuAsRmlBO11d-7HC8ZOtE/edit?usp=sharing&ouid=106222469972426838941&rtpof=true&sd=true
22:57:48 parth has joined #immersive-web
22:58:30 ruoya has joined #immersive-web
23:00:33 atsushi has joined #immersive-web
23:04:39 joshi has joined #immersive-web
23:04:48 present+
topic: Feature Request: Cubemap texture support for native media layers. Multi-tile feature immersive-web/webxr#1419
23:05:00 https://github.com/immersive-web/webxr/issues/1419
scribe+ joshi
23:05:54 bajones: I don't know if I'm the best person for this, but I'm not sure anyone else is familiar. This appears to be a request from people who want to play a 360 video but want to use a cube map instead of equirect for higher quality
23:06:10 ... in this case, in the interest of bandwidth, they only want to stream in the sides of the cube that the user is looking at
23:06:48 ... right now we do have cube layers - I think what's being asked for is media bindings that would take 6 different videos, place them in a cube around you, and only advance streaming on the ones you are looking at
23:06:57 cabanier: is that the ask?
23:07:14 bajones: hard to tell because he's talking about a couple of things, but I think so
23:07:42 bajones: this is interesting and I see where he's coming from... not sure if it's too niche, not sure it's one that we want to commit to a native implementation for
23:08:07 ... you could get the same kind of effect by having a map layer and putting videos on the tiles you are looking at and ignoring the sides that are behind you
23:08:28 ... or, not sure how well it would work, but you could position different quad media layers all around you and then do what he was going to do and just advance the ones he's looking at
23:08:34 ... would end up with seams most likely
23:08:54 bajones: no loss of quality, but more manual setup, which they want to avoid
23:09:18 bajones: if this setup was standard, then it would be worth considering more. but it's not, and there are a lot of weird layouts out there.. some are backed by meta and apple
23:09:43 ... apple's video format is non-standard; if they brought it to webxr... you would have to render it to a mesh in a projection layer... not the greatest but it is what it is
23:09:49 ... I don't know if there is anything I would recommend here
23:10:02 bajones: would love to touch on a different related topic
23:10:12 cabanier: I think he already has all the features he needs
23:10:31 ... if he wants to only update the textures he needs in the cube map... then surface to 3d... for that element, just for the ones not in view
23:10:38 ... doesn't have to have an array of textures...
23:11:06 ... you put them in the regular cube map layer; it's not something that you need if you make the cubemap layer. I don't think it's logic I would want to build in
23:11:18 bajones: what do you think of the idea of positioning 6 different quad elements?
23:11:27 cabanier: doesn't help
23:11:36 cabanier: same thing
23:11:45 ... that's what media layers are
23:12:12 ... he has everything he needs. A cube map already is an array of textures; not sure about video textures
23:12:29 bajones: inclined to agree. the stuff is in place already, it requires work in the application... but that is appropriate
23:12:36 cabanier: agree
23:12:50 ... if you pause and you look, it won't be there.. which isn't good
23:13:12 bajones: tangent - cubemaps... we have some contractors that are working on a layers implementation, excited to see it up and running.
23:13:26 ... one of the things they ran into was cubemaps. would love to ask meta if they have had trouble with it
23:14:09 bajones: in our browser all the xr surfaces are passed over a process boundary; we have to share them using whatever the os primitives are... in android xr it ends up in a hardware buffer, which on almost no hardware, as far as I know, actually supports cube maps as a shareable surface
23:14:24 ... so the developer that's working on this is finding ways around that, which involves more copies than we would like
23:14:57 ... we could get around it by, instead of sharing the image across boundaries as a cube map... sharing it as a texture array... a 6-layer array.. functionally not different. it's the same as webgpu. GL just likes to use different things for the same thing
23:15:04 ... wanted to ask meta, is it something you have a problem with?
23:15:26 bajones: is it something you would consider: adding a mode or option to the layers api where, if you are doing a cube map, you actually request it as a texture array?
23:15:37 cabanier: we don't run into that problem since we use a hardware buffer.
23:15:57 ... didn't run into that. I know it works for us but not everywhere..
23:16:35 ... I don't mind adding it, kinda unfortunate to do it... would you want to trick it? because under the hood you could just spoof it as a cubemap.
23:16:41 ... hacky
23:17:03 bajones: have considered it. there's some hackiness that could go on... we have existing hackiness... for other things. generally don't want to do that though
23:17:49 ... we have the texture type that is available on every other layer... in this case what we would be interested in is using the same texture type that is available on every other layer and making it available on the cube layer
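[Scribe note: for context, a minimal WebGPU sketch of the point bajones makes above: a cube map and a 6-layer 2D texture array are the same underlying resource, differing only in the texture view. Illustrative only; assumes @webgpu/types and an already-acquired GPUDevice.]

    declare const device: GPUDevice; // acquired elsewhere via requestAdapter/requestDevice

    // In WebGPU a cube map is just a square 2D texture with 6 array layers,
    // so sharing it across a process boundary as a 6-layer array is
    // functionally equivalent to sharing it as a cube map.
    const texture = device.createTexture({
      size: { width: 1024, height: 1024, depthOrArrayLayers: 6 },
      format: 'rgba8unorm',
      usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
    });

    // Same memory, two interpretations:
    const asCube = texture.createView({ dimension: 'cube' });      // sampled as a cube
    const asArray = texture.createView({ dimension: '2d-array' }); // per-face access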
23:18:00 cabanier: if people are already using cube map layers today it would be broken
23:18:07 bajones: don't want to change defaults
23:18:09 cabanier: ok
23:18:22 bajones: it would make the cube map layer look more like the rest of them
23:18:34 ... if you ask for a texture array explicitly... you would get a 6-layer array
23:18:46 cabanier: a texture array would be more efficient
23:19:01 bajones: we would send it over the process boundary, and still support cube map as a copy
23:19:04 cabanier: ok sounds good
23:19:05 q?
23:19:25 cabanier: do you use android hardware buffers?
23:19:41 bajones: on this hardware we share it directly, wrap it as a hardware buffer and have people render into it directly
23:20:06 alcooper: we share the buffer directly, the page draws into it and we immediately read from it. we have copies now but we could theoretically tie it in
23:20:26 bajones: we do find that on windows sometimes the handles coming out of openxr... don't have the right flags set on them internally
23:20:36 ... in that case we do a copy anyway because of the process boundary
23:21:06 ... except in cases like this where we found that we can't send it as a cubemap, so either hack a webgl api to fake cubemaps or just offer this
23:21:12 bajones: I will update the spec
23:21:18 cabanier: aligned
23:21:24 ada: cube maps in the layers, right?
23:21:33 ada: we don't have cube-shaped media layers?
23:21:41 bajones: we do have cube-shaped compositor layers
23:21:57 bajones: great for skyboxes, weird video layouts, etc
23:22:11 q?
topic: Camera intrinsics immersive-web/administrivia#228
23:22:35 https://github.com/immersive-web/administrivia/issues/228
23:22:40 scribenick: mkeblx
23:23:11 cabanier: meta recently gave access to the front camera on quest, behind a permission prompt etc
23:23:37 ... the first thing developers want to do is detect a QR code etc and draw a box on top, a classic use case
23:24:05 ... the cameras are not at the same position and orientation as the displays, and that data isn't provided to devs
23:24:10 ... devs need this data
23:24:16 q+
23:24:33 ... the raw camera access api could potentially give this data
23:24:49 Raul has joined #immersive-web
23:25:02 alcooper: raw camera access data currently doesn't provide a sufficient amount of info
23:25:39 cabanier: so we would need a way to provide this intrinsic camera data: position, fov, etc
23:26:03 q+
23:26:21 ... while not strictly only for webxr, it's likely only useful, or will only be used, in webxr
23:26:42 q+
23:26:50 ack ada
23:27:20 cabanier: it works surprisingly well (the general "get camera access and do CV on it"), so overall the feature is useful
23:28:17 ada: webxr should be more secure by default, and the use case of getting the camera just to put a cube on a QR code can be handled in better ways
23:31:09 cabanier: so this may have to be done outside of this group anyway, as it's not a webxr spec
23:31:23 mkeblx has joined #immersive-web
23:31:47 q?
23:31:50 ack alcooper
23:31:51 ada: overall the CV approach has more specific APIs that could cover it
23:32:41 cabanier: there are cross-platform issues, so yes, maybe exposing intrinsics (like direct QR code / marker handling etc)
23:33:14 alcooper: currently the API gets a camera from an XRView
23:33:56 ... maybe we could expose an XRView associated with a camera for this. maybe there are sync issues with this?
23:34:33 ... did explore exposing extra info like camera geometry for the Depth api for webxr.
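[Scribe note: a rough sketch of what exposing "camera intrinsics" buys a developer: mapping a camera-space point onto the pixel it lands on in the camera image. Standard pinhole-camera math; the Intrinsics shape below is a hypothetical illustration, not a proposed WebXR interface.]

    interface Intrinsics {
      fx: number; fy: number; // focal lengths in pixels
      cx: number; cy: number; // principal point in pixels
    }

    // Pinhole projection: u = fx * (x / z) + cx, v = fy * (y / z) + cy.
    function projectToPixel(
      p: { x: number; y: number; z: number }, // point in the camera's frame, z > 0 forward
      k: Intrinsics,
    ): { u: number; v: number } {
      return {
        u: k.fx * (p.x / p.z) + k.cx,
        v: k.fy * (p.y / p.z) + k.cy,
      };
    }
    // With a per-camera pose plus intrinsics, a page can draw a box over a
    // detected QR code in the right place even though the camera and the
    // display don't coincide.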
23:35:34 cabanier: returned an xrview and had a mapping for the depth api. but the depth api is different, it maps more directly to screen pixels
23:36:01 q+
23:37:13 alcooper: also, is it time to re-visit the other APIs like marker tracking and see where they are at? if we can make progress on this, that would work
23:37:55 joshi4 has joined #immersive-web
23:37:56 cabanier: yes, we could do QR code tracking on our platform (meta), but it does come with restrictions like 1fps tracking
23:38:03 ack mkeblx
scribe+ ada
23:39:04 mkeblx: I guess one meta comment is that on a platform, e.g. Apple's, that didn't want to support camera access, we could just return null.
23:39:23 ada: WebXR's required/optional/enabled features enable this
23:40:33 ack ada
23:41:45 q+
23:41:49 ack m-alkalbani
23:42:08 m-alkalbani: is marker tracking implemented on meta?
23:42:17 cabanier: no, not implemented yet
23:43:50 ada: different platforms are good at image tracking and bad at QR codes, and vice versa
23:44:07 bajones: answering whether android/androidxr supports qr code tracking:
23:44:59 ... we may want to go down the path of supporting the route of tracking markers (exact form TBD)
23:45:03 https://developer.android.com/develop/xr/openxr/extensions/XR_ANDROID_trackables_qr_code
23:46:07 pico: checking if they support QR code tracking
23:46:33 q+
23:46:37 ack alcooper
23:47:55 alcooper: there will always be developers who want more, and the wider world expects, or is pretty comfortable with, having camera access available
23:48:34 q+
23:48:34 ... so it does make sense to make it better, but also to work on some use cases like QR code tracking
23:48:40 q+
23:50:08 Mike_Wyrzykowski has joined #immersive-web
23:50:09 bajones: it's an interesting environment where we can fix some cases like QR codes, but there are going to be other demands from developers
23:50:48 ack m-alkalbani
23:50:52 ada: but for privacy an HMD is different from a phone in terms of camera control
23:51:32 m-alkalbani: what is the state of the marker tracking proposal? what's next?
23:52:24 bajones: if we're going to do marker tracking, let's pick a common denominator that everyone can track
23:52:58 ack ada
topic: Dynamic Foveation immersive-web/webxr#1420
23:54:10 https://github.com/immersive-web/webxr/issues/1420
23:54:12 ada: happy that marker tracking was discussed and we can follow up on getting it back on the table
23:55:04 scribenick: alcooper
23:55:42 ada: Dynamic foveation allows for adjusting quality where gaze lands. Apple strongly believes pages shouldn't know where your gaze is (and we go to great efforts to make the browser not even know this).
23:56:01 q+
23:56:09 ... Is there a way to do this without leaking gaze to the site? Maybe some WebGPU-only thing or other alternative.
23:56:24 ... I'm not familiar enough with the stack to have good suggestions, but hopefully someone else here is
23:56:34 ack Mike_Wyrzykowski
23:56:49 Mike_Wyrzykowski: If we prevent readbacks/reading the texture in any form, this wouldn't leak gaze.
23:57:08 ... WebGPU has memory-less textures called transient attachments. So you can write to these but never read them back
23:57:17 ... But you can't do multipass renderpasses without leaking the info.
23:57:34 ... Even if we prevented pixel readback or copying, multipass would allow reading
23:57:55 Mike_Wyrzykowski: No reason we couldn't implement this on WebGL either; but maybe skip this part
23:57:58 q+
23:58:01 ack bajones
23:58:22 bajones: Definitely still worthwhile to do WebGL, unquestionably where most content is produced since WebGPU isn't widely supported
23:58:37 ... but in WebGL it can be done silently, and in WebGPU it's trickier
23:58:53 ... hadn't considered transient attachments yet, though it's pretty new.
23:58:58 ... Not sure I follow about not doing multipass
23:59:34 ... Could envision getting a transient attachment for the foveation level of the pixels
00:00:00 ... cabanier has done samples outputting colors based on derivatives of values between textures
00:00:31 ... Rendering based solely on that output can give the foveation values, which gives a rough map of gaze. Wouldn't be the best, but would be possible
00:00:42 ... So are you saying we only render to textures that also can't be read back?
00:00:45 Mike_Wyrzykowski: Yes
00:00:46 q+ to ask about having it as a feature flag
00:00:50 bajones: I don't think we have a mechanism for that
00:01:08 ... Maybe require texture usages to only contain render attachment?
00:01:26 ... If you don't have any of the other usages there's no way to get data out of the API
00:01:40 Mike_Wyrzykowski: Yeah, those restrictions would prevent leaking
00:02:01 bajones: Probably not terrible. Canvas textures only have render attachment anyway, and most folks don't ask for more though they can, so it's probably reasonable
00:02:04 ack
00:02:07 ack ada
00:02:07 ada, you wanted to ask about having it as a feature flag
00:02:21 ada: Is this the kind of thing that you can just turn on?
00:02:26 q+
00:02:44 ... e.g. we can do dynamic foveation and you only asked for default foveation, so do it anyway?
00:02:55 bajones: Not just something you can get for free.
00:03:07 ... Need to set something to indicate where foveation takes effect
00:03:11 q+
00:03:29 ... probably looks like: request a "memoryless" texture, pass it into some variable rate shading, it does its thing and then discard the texture
00:04:00 ... there's a difference between doing this dynamically vs statically. Folks may only grab it once if it's static. So developers need to be told to grab it every frame (or at least when to update)
00:04:15 ... developers will explicitly need to integrate it, and we can't just say "we foveated it"
00:04:30 ... while you know which textures came from XR, this isn't the way the APIs work
00:04:34 ack cabanier
00:04:52 cabanier: Would you be able to postprocess, since that requires reading from the texture?
00:05:03 bajones: Can still attach and load but can't sample from the texture
00:05:16 ... Can blend against that, but couldn't blur or sharpen
00:05:34 Ada: Would have to render out a separate buffer for the bits you need
00:05:48 cabanier: Can't copy bits out?
00:06:00 ada: copy to a different texture and then blit?
00:06:12 bajones: You've used up all of the fillrate and so you don't gain anything
00:06:39 ada: So you can't redo it at lower resolution, e.g. just a depth buffer
00:06:43 bajones: Yeah, not really
00:07:09 ... What's apple's opinion on quantization of these kinds of things?
00:07:19 ada: No Comment
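[Scribe note: a minimal WebGPU sketch of the restriction bajones and Mike_Wyrzykowski discuss above: a texture whose only usage is RENDER_ATTACHMENT can be drawn into and blended against, but never sampled, copied, or mapped, so a gaze-driven foveation map rendered into it can't flow back to script. Illustrative only; assumes @webgpu/types and an acquired GPUDevice, and the format choice is arbitrary.]

    declare const device: GPUDevice;

    // The only usage flag is RENDER_ATTACHMENT: no COPY_SRC, no
    // TEXTURE_BINDING, no STORAGE_BINDING. Validation then rejects
    // copyTextureToBuffer on a command encoder, sampling, and storage access.
    const foveatedColor = device.createTexture({
      size: { width: 2048, height: 2048 },
      format: 'rgba16float', // illustrative; any renderable format works
      usage: GPUTextureUsage.RENDER_ATTACHMENT,
    });

    // e.g. encoder.copyTextureToBuffer({ texture: foveatedColor }, ...) would
    // fail validation, which is the property that keeps the gaze data private.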
00:07:21 ack Mike_Wyrzykowski
00:07:52 q+
00:08:00 Mike_Wyrzykowski: If developers don't violate any of the restrictions, then it could just be a boolean flag, but they may need to adapt the application if they want to use this, and postprocessing will be a challenge as well.
00:08:09 ack ada
00:08:11 ... maybe better once the WebGPU spec evolves, but not as it stands today
00:08:20 q+
00:08:35 ada: Not opposed to saying if you want postprocessing you only get static foveation
00:08:48 ... you're already doing a difficult expensive thing, so maybe you don't care about max framerate
00:09:04 bajones: If it's static, no one cares if you can read the texture back
00:09:37 ... there's probably a world in which we say, "technically we're eye tracking, but really we're just switching between like 4 textures"
00:09:48 ... but there's a resolution at which that happens where it becomes uncomfortable
00:09:55 ... but no one is probably prepared to answer where that line is
00:10:16 ... but interesting if we can consider a mechanism for "just give me static foveation"
00:10:21 Raul has joined #immersive-web
00:10:31 ada: I also don't know anyone doing post processing
00:11:00 bajones: If more people do WebXR on M5 chips they may have cycles to spare for it; it's generally just been too expensive for developers to do on devices
00:11:23 ack mkeblx
00:11:44 mkeblx: Maybe we don't need to think about it too much, because you can just do this natively
00:11:45 Mike_Wyrzykowski has joined #immersive-web
00:11:51 ... is there a reason we need to do something different?
00:12:48 ... can you always do a timing attack though?
00:13:01 bajones: There is a side of that to this
00:13:16 ... I'm not sure how much of a concern this actually is
00:13:38 ... we found with WebGL, and likely WebGPU as well - we had a concept of secure textures you couldn't read back from, similar to what we described here
00:14:18 ... but someone found a demo showing that, even without ever writing out the RGB values sampled from a texture to a buffer, they were still able to get a reasonable approximation of the image by altering the amount of work the shader did based on the RGB or luminance values
00:14:56 ... a tiny square for red, a giant square for blue, and just observe how long the draw calls took.
00:15:14 ... This took a very long time, but you could get a grayscale image of the "protected" value
00:15:39 ... By the nature of this tech, you're going to spend less time on some pixels and more time on other pixels. So you could scale and observe the workload based on this
00:15:59 ... You could theoretically generate a rough approximation of what that shading map looks like
00:16:19 ... it's slow, not practical, and for VR certainly such a chunky experience that I don't think people would stay in the experience
00:16:32 ... but it is worth discussing as a thing that has technically been proven possible
00:16:41 mkeblx: You can even do a timing attack with the model tag
00:17:07 mkeblx: It may take one minute to get 5 bits of data
00:17:42 cabanier: Would be good to see if Apple would be okay with 4x4 or 3x3 quantized options
00:18:39 Ada: Should we make a CG repo or do it under the core spec? How do folks feel?
00:19:28 bajones: It probably needs two pieces. One is the WebXR side of things (e.g. how to produce the foveation and what the limitations are when it's tied to gaze)
00:19:46 ... leaning very heavily on WebGPU since that's where developers need more action, but we can maybe just do it as a boolean in WebGL
00:20:25 ... for the WebGPU side, apart from the images/restrictions, we need basic support from the WebGPU API for variable rate shading, which would be taken up by that group
00:20:33 ... poke at them (aka bajones and Mike_Wyrzykowski)
00:21:14 ... The actual feature for foveated textures would be in the WebGPU spec since there are applications beyond just WebXR; games do this today
00:22:16 cabanier: Expression tracking may tie into this as well?
00:33:18 m-alkalbani has joined #immersive-web
00:34:25 ruoya has joined #immersive-web
topic: WebXR Expression Tracking immersive-web/administrivia#227
00:36:01 https://github.com/immersive-web/administrivia/issues/227
scribe+ parth
00:36:58 alcooper: the main reason to flag this is to feel the pulse of the room; the proposal was originally by rick. there are already openxr extensions to do this.
00:37:12 q+
00:37:20 q+
00:38:06 ada: gut instinct - don't love this; you could get a persona and do the same thing.
00:38:36 ack ada
00:38:39 ack cabanier
00:38:44 bajones: does apple expose facial tracking in native apps?
00:39:30 cabanier: we want to make sure it's privacy enabled; the fact that it is blend shapes helps with that.
00:39:40 q+
00:41:44 bajones: when talking about privacy here, it's about quantization of data. Break down discrete poses of the eyes, and the same might go for the rest of the facial features. Privacy in this case means maybe 16/32 discrete values that we move between, so we don't get micro-expressions, but do get the overall mouth moving, etc.
00:42:25 q+
00:42:32 ack alcooper
00:43:01 alcooper: Q to Ada: are you concerned with the whole api? or just a part of it?
00:43:35 ack ada
00:43:37 q+
00:43:42 ada: delivering something without eyes (for example) seems incomplete
00:45:23 ada: Don't think webxr should be handling their own avatars, since the website has a full view of what you're doing with a lot of your body. Would love to have a higher-level avatar system, but that's monumental to build.
00:45:33 ack bajones
00:46:41 bajones: Meta has an avatar system that synthesizes a lot of information; would there be an appetite to expose it through an API (if we were to develop one)?
00:47:24 cabanier: Hard to say. Implementation is a small part of it. Don't see why we wouldn't expose it, if developers would want it.
00:48:12 cabanier: would be surprised if an openxr api doesn't already do that.
00:49:51 bajones: cabanier, face tracking seems to exist already in meta docs.
00:50:47 ada: If you're eating (AVP), it obscures the mouth area.
00:51:35 ... Seems like general consensus that there is appetite for this. If bajones implements, cabanier will as well.
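[Scribe note: a tiny sketch of the quantization idea bajones raises above: snapping blend-shape weights to a small number of discrete steps so macro-expressions survive but micro-expressions are rounded away before a page ever sees them. The step count and function name are illustrative only.]

    // Quantize each blend-shape weight (0..1) to one of `levels` discrete values.
    // With levels = 16, the mouth opening/closing still reads, but sub-step
    // micro-expressions do not.
    function quantizeWeights(weights: Float32Array, levels = 16): Float32Array {
      const out = new Float32Array(weights.length);
      const steps = levels - 1;
      for (let i = 0; i < weights.length; i++) {
        const clamped = Math.min(1, Math.max(0, weights[i]));
        out[i] = Math.round(clamped * steps) / steps;
      }
      return out;
    }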
00:52:04 ada: Will make a repo for this
00:53:29 RRSAgent, make minutes
00:53:32 RRSAgent, make minutes
00:53:34 I have made the request to generate https://www.w3.org/2025/11/21-immersive-web-minutes.html yonet
00:53:58 rrsagent, make logs public
00:54:59 meeting: Immersive Web Groups (WG/CG) 2025/Nov f2f Day 2
00:55:01 agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-November-2025/schedule.md
00:55:16 previous meeting: https://www.w3.org/2025/11/20-immersive-web-minutes.html
00:55:22 chair: ada
00:55:28 rrsagent, publish minutes
00:55:29 I have made the request to generate https://www.w3.org/2025/11/21-immersive-web-minutes.html atsushi
00:56:15 scribeOptions: -final -noEmbedDiagnostics -public
00:56:16 rrsagent, publish minutes
00:56:18 I have made the request to generate https://www.w3.org/2025/11/21-immersive-web-minutes.html atsushi
12:32:09 Zakim has left #immersive-web