16:38:17 RRSAgent has joined #immersive-web 16:38:21 logging to https://www.w3.org/2024/03/25-immersive-web-irc 16:38:22 marisha has joined #immersive-web 16:38:24 present+ 16:39:59 present+ Lazlo Gombas, Javier Fernandez, Alex Cooper, Atsushi Shimono, Brandon Jones, Piotr Bialecki, Rik Cabanier 16:42:00 mblix has joined #immersive-web 16:42:32 yonet has joined #immersive-web 16:42:46 present+ 16:42:52 present+ Laszlo_Gombos 16:50:35 present+ 16:58:09 Brandel has joined #immersive-web 17:00:57 bajones has joined #Immersive-Web 17:01:05 present+ 17:01:21 present+ 17:01:22 bialpio has joined #immersive-web 17:01:46 present+ 17:01:50 present+ 17:01:52 nick-niantic has joined #immersive-web 17:01:58 present+ 17:02:01 meeting: Immersivw-Web WG/CG f2f 2024/03 Day 1 17:02:11 IRC instruction manual : https://github.com/immersive-web/administrivia/blob/main/IRC.md 17:03:04 Guide on Scribing: https://w3c.github.io/scribe2/scribedoc.html 17:03:51 agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-March-2024/schedule.md 17:04:30 zakim, this is Immersive Web 17:04:30 got it, cwilso 17:04:34 rrsagent, this meeting spans midnight 17:04:34 zakim, start meeting 17:04:35 RRSAgent, make logs Public 17:04:36 please title this meeting ("meeting: ..."), cwilso 17:04:47 giorgio has joined #immersive-web 17:04:57 meeting: Immersive Web March 2024 FTF 17:05:01 zakim, start meeting 17:05:01 RRSAgent, make logs Public 17:05:02 please title this meeting ("meeting: ..."), cwilso 17:05:06 Brett_ has joined #immersive-web 17:05:34 mblix has joined #immersive-web 17:05:36 Scribe: marisha 17:05:42 item: https://github.com/immersive-web/webxr/issues/1365 17:06:07 topic: Give developers control over "overlay" browser (webxr#1365) 17:06:14 rrsagent, publish minutes 17:06:16 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 17:07:03 present+ 17:07:04 hello laszlo 17:08:04 Omegahed has joined #immersive-web 17:08:09 :wave: 17:08:12 no video from room also here 17:08:58 present+ 17:09:09 present+ 17:09:11 present+ 17:09:14 present+ 17:10:30 present+ 17:11:21 Nice to see you guys! ;-) 17:11:43 Mats_Lundgren has joined #immersive-web 17:15:11 Phu has joined #immersive-web 17:18:06 https://github.com/immersive-web/webxr/issues/1365 17:18:33 cabanier: A little while ago the Quest Browser released a feature where if you click the Quest button, instead of a plain overlay, we bring up the 2D browser 17:18:52 cabanier: We keep the visual blurred so the developers can change what's in the 2D experience 17:18:59 cabanier: Or could change the settings of your WebXR experience 17:19:08 cabanier: We got many requests to trigger and exit this manually 17:19:39 cabanier: Right now the only way is to hit the Quest button and exit by clicking Resume. Right now we are the only ones with this feature, but we think it's a good feature and could be done by other manufacturers 17:20:05 cabanier: There could be an API to call this at any time. 
The 'resume' API should probably be user-activated 17:20:09 q+ to ask about DOMOverlay 17:20:19 q+ 17:20:31 q+ 17:20:36 cabanier: We got lots of feedback that devs want to control this 17:20:39 ack ada 17:20:39 ada, you wanted to ask about DOMOverlay 17:20:42 q+ 17:20:42 q+ to ask about potential for pages to grief users 17:20:56 ada: It's a cool idea and it seems like this would be a good fit for DOM overlay 17:20:57 q+ to fully flesh out this idea for HTML support in WebXR 17:21:21 ada: Some of DOM overlay wouldn't make sense, like what 'fullscreen' means, but you're just showing the full browser window 17:21:45 ada: The bit that developers could trigger and bring up might be more interesting if it was with the DOM overlay API where they could more fully customize it 17:22:08 cabanier: Right now it's not just the page but the full Browser navigation, refresh button, address bar, etc 17:22:27 q+ 17:22:36 cabanier: DOM overlay takes an element, makes it fullscreen, not sure how this would work 17:22:54 ada: You could take the full window, wouldn't need the Browser chrome 17:22:58 q? 17:23:00 cabanier: I think that would break a lot of assumptions for pages 17:23:31 cabanier: Maybe something like a DOM overlay for when devs want to trigger it 17:23:45 q+ 17:23:54 q+ 17:23:58 ack Brandel 17:24:19 Brandel: Right now you're sending visible: blurred and only head placement is updated 17:24:28 Brandel: seems like it's straying from what visible:blurred was for 17:24:38 cabanier: No, we always had this overlay screen 17:25:06 Brandel: But that was meant to have WebXR in a frozen state. But now indirectly you have a way to react in the WebXR session via the 2D page 17:25:17 Brandel: So it's not at the same scope as visible: blurred 17:25:29 cabanier: visible:blurred just indicates you don't have controls any more, only head pose 17:25:57 Brandel: My question is more about whether that's all that we want to vend. What do we expect to come through vs not come through? 17:26:06 Brandel: visible:blurred was not going to update the world for any reason before 17:26:42 Brandel: My main question is whether this is adequately descriptive of what it is and what it's for 17:27:12 bajones: visible:blurred means that something else is taking input right now but page content is still visible and can still do head tracking. Tracking could be at a lower rate 17:27:35 bajones: The intent of that state is: you can still see the world, but none of the inputs 17:28:05 Brandel: For cases of audio, keyboard, and DOM events, you wouldn't lose anything by virtue of being in overlay 17:28:27 cabanier: You're right, in that mode keyboard events would start working again 17:29:27 Brandel: Which set of events is expected to go to WebXR vs the 2D page is the concern 17:29:30 ack nick-niantic 17:30:05 nick-niantic: How does this work today, and how could it work in ways that are more beneficial? When you go into this mode, where does the Browser show and where does the 3D content show? 17:30:34 cabanier: Browser is on top, right in front of the user 17:30:41 Brandel: I believe it's in whatever position it previously was 17:31:13 nick-niantic: So the Browser is not occluded by anything in the scene. If I had a scene with a car in it, with a car's color, I could bring up this window and click a button to immediately update the color of the car?
17:31:14 cabanier: yes 17:31:50 nick-niantic: This sounds like what we were asking for with DOM layers a while back except for a combination of being in a fixed position, or when the browser could always be open, but then you'd need to be able to get it out of your way 17:32:25 nick-niantic: But not doing depth-testing with the scene also is a little bit undesirable, if something was in front of that browser window you could move it out of the way 17:32:35 cabanier: I don't think there's anything stopping us from placing the browser where they want it 17:32:40 q+ 17:32:51 Brandel: The user can put the window wherever they want 17:32:59 cabanier: I think nick wants it programmatically 17:33:33 nick-niantic: Even programmatically is tricky, trying to find a placement for it. We also allow the user to move it around and it can follow your head around 17:33:52 nick-niantic: A system to manage that automatically would be okay, but having it be embedded in the scene and movable 17:34:28 nick-niantic: Being able to do things like press a button on your wrist to make it appear and disappear would be desirable 17:34:47 nick-niantic: We would want it to be interactable with the rest of the scene 17:35:12 ada: Something like that would work on Vision Pro quite well if you don't have hand tracking requested, because we have transient pointers 17:35:21 ada: If you were to target the DOM content, you could just not bubble it up to the WebXR content 17:35:34 Brandel: The only reason it would work with Vision Pro is because it wouldn't work with Meta 17:36:12 Brandel: When you have the ability to track user inputs to a webpage, it exposes serious security issues (like watching someone put input into a banking website) 17:37:16 nick-niantic: Today if I have a webpage and I have an iframe, that would be an external source 17:39:01 Brett_: You could just have the input disappear when user inputs over an iframe 17:39:07 bajones: Yes but that would be weird 17:39:26 ada: You could re-implement something like transient-pointer in order to make this work 17:39:53 ada: I think everyone wants a way to get DOM into the web. this seems like the closest approach so far. 17:40:19 cabanier: You also want occlusion and depth-sorting? That would be pretty difficult and expensive 17:40:48 cabanier: I think the OS supports it but you have to resolve depth for every frame 17:41:01 ack Brett_ 17:41:01 Brett_, you wanted to fully flesh out this idea for HTML support in WebXR 17:41:03 Brandel: Regarding trust and safety, we also don't want to the Browser chrome to be obscured 17:41:15 q- 17:41:26 Brett_: I want to be able to use HTML in VR and position it. The one thing holding it back right now is security. We can't have a cursor here because it could be an iframe 17:41:41 Brett_: We could have a security policy with shared array buffers to not include iframes from cross-origin domains 17:41:54 Brett_: It would allow us to start experimenting with DOM overlay in a safe environment 17:41:57 q? 17:42:14 cabanier: I think there is a proposal here. The HTML would have to come from its own document 17:42:37 bajones: In general the conclusion with this group is that when we get ot a point where we can start doing that, this is very likely going to be a restriction set in place 17:42:54 bajones: There are other technical issues around it. 
But in general I think we're in agreement that that is the safest way to start pushing into that space 17:43:13 cabanier: Every DOM layer would be its own document with the same origin 17:43:45 Brett_: Right now you can only have one WebXR session but you could maybe later have two WebXR sessions overlayed on each other 17:44:48 cabanier: In the overlay Browser you have to block all input 17:44:56 Brett_: But what about a scenario where you don't blur everything 17:45:12 bajones: Overlay browser is the topic of discussion currently 17:45:49 bajones: DOM layers - that is, DOM content in the world while you retain input in the XR session - I think are desirable and everyone recognizes how powerful that could be. 17:46:50 bajones: Interaction with the DOM layer gets tricky, if the developer has their own cursor pointer (like from a gun), then it diverges from the cursor pointer that interacts with the DOM layer 17:47:07 Brandel: WebXR need not be the only pathway to spatial content on the internet 17:47:28 Brett_: But the only way to use HTML content in VR... how to do that 17:47:28 ack bajones 17:47:28 bajones, you wanted to ask about potential for pages to grief users 17:47:38 bajones: There would be many ways 17:48:30 bajones: One of the things that came to mind about the gesture to pull up the Browser overlay: if I'm pulling up the Browser while in the session, it's probably because I want the user to do something within the session, rather than allowing them to run away 17:48:45 q+ browsering 2 buttons 17:49:09 bajones: This seems like an opportunity to grief the user if I don't let the user do the thing they want to do, if a gesture works 17:49:49 bajones: You could trick the user into never triggering the actual system gesture 17:50:09 bajones: You need a way to mitigate that, and it also implies the version of the Browser that I bring up programmatically should also have some sort of visible difference from the typical browser 17:50:26 bajones: Maybe it doesn't have a tab bar, an abbreviated chrome (it feels slightly less useful that way) 17:51:06 bajones: If you're going to allow positioning of the window programmatically - it feels like a bad idea, you don't want to allow them to hide it behind the user's head or run away from their hand 17:51:25 zakim, close the queue 17:51:25 ok, ada, the speaker queue is closed 17:51:27 bajones: But the ability to hint to the Browser that "this might be a good place for content" to spawn the Browser there optionally 17:51:59 bajones: I'm more concerned about the griefing scenarios where devs would deny users access to the real browser 17:52:31 bajones: This goes back to the API where there's a gesture to always reliably open the Browser 17:53:26 bajones: In order to fake the system gesture for this, it's a huge ordeal. With a programmatic gesture, you are invited to generate and dismiss this object 17:53:46 bajones: If you have a trigger but not a resume gesture, the concern is quite a bit less 17:54:18 cabanier: The proposal is to require user action to resume the WebXR session 17:54:50 bajones: Clicking the 'resume' button is not propagated to the page? 17:54:52 cabanier: It is not 17:55:34 Brandel: Are the select/click actions trusted in the XR environment? 17:55:42 cabanier: I'm not sure 17:56:00 ada: Wasn't the point of 'select' to have a trusted event in WebXR? 17:56:27 bajones: That was the intention for the event.
You also have to initialize the session in the context of a trusted event 17:56:35 q? 17:56:39 q- 17:57:30 bajones: If the user can trigger something and it not be closed arbitrarily, it seems like it should be the full browser 17:58:11 bajones: You said you don't need a user activation to call this API. Would it be sufficient to require that but initially open into this mode? 17:58:58 ack alcooper 17:59:33 alcooper: You mentioned a use case to direct devs to a separate site for payment - why do devs want a separate site rather than showing their own content - is directing to a separate 2D page also spoofable? 17:59:55 alcooper: If the developer can guess what your browser looks like and pretend to be it 18:00:01 q? 18:00:19 Brett_: What about one gesture to open the full browser and another button to open a partial web view (same dom origin) 18:00:34 alcooper: That sounds like what we were discussing as a separate thing and why you'd want to pull up the full browser 18:01:02 alcooper: Maybe for payments there would be a separate API for trusted scenarios 18:01:07 ack Phu 18:01:15 cabanier: I agree it's a concern, maybe multiple tabs should not be available.. 18:02:01 Phu: So bringing it up with the user action (not programmatic event) - what are scenarios where people will bring up the Browser without leaving the WebXR session? 18:02:11 cabanier: Right now they don't have a choice, the browser just comes up automatically 18:02:48 Phu: Bringing up the browser only partially solves the problem that DOM overlay would try to solve. Doing it programmatically sounds additionally useful 18:03:27 Phu: But there is the spoofable concern. Today a webpage can declare whether a webpage can appear in an iframe or not. Maybe we update the contract to allow communication for other modes (webxr) 18:03:44 cabanier: That sounds like a whole new paradigm for the browser 18:04:10 Phu: The website could declare their preferences or declare their trusted origins 18:04:45 alcooper: There might be permission policy stuff that might allow that today. But this is in reverse of that almost 18:05:11 Phu: Could at least enable it in developer mode or something so people could validate the scenario 18:05:30 Phu: Everyone who uses our SDK asks for a way to embed web content 18:06:19 zakim, open the queue 18:06:19 ok, ada, the speaker queue is open 18:06:29 Scribe: Brandel 18:06:29 https://github.com/immersive-web/webxr/issues/1364 18:08:07 ada: Now we have several stand-alone WebXR devices, the original metaphor for how WebXR was devised isn't how we're running them. We don't seem to have broad agreement about what is and is not running in this new model 18:08:34 q+ 18:08:35 ada: e.g. video, getUserMedia, various animation events etc, 18:09:18 idris has joined #immersive-web 18:09:28 q+ 18:09:31 i|https://github.com/immersive-web/webxr/issues/1364|topic: Backgrounded Tabs and WebXR (webxr#1364) 18:09:35 ada: My issue posits three buckets for definite yes, definite no and some "maybes" for things we could gain some benefit from but we should discuss them here! 18:09:42 ack alcooper 18:10:32 alcooper: we ran into this even on desktop VR. the 2D window 'lost focus', resulting in some input shenanigans. We need to find clarity on this. 18:10:56 alcooper: one question is power consumption - what are people permitted to shut down vs.
what is obligatory for the session 18:11:06 ack cabanier 18:11:13 q+ 18:11:39 cabanier: I would expect there to be a difference between minimizing a browser to tab-switching to entering an XR session 18:12:18 ada: I had asked for the page's rAF but grudgingly accept the importance of suspending it 18:13:06 cabanier: WebApps may provide an overall listing of which capabilities are available within a minimized or an inactive tab, but it may be down to the individual feature 18:14:24 Brandel: I'm aware that the initial treatment of audiocontext, we suspended the context on session start. 18:14:51 cabanier: We should defer to the logic of the 'hidden state' API for individual features rather than attempt to collect or mandate the status of features on our own 18:15:35 Phu: Working on those features one-by-one in/for webXR is messy, and should be left to the feature owners there 18:15:42 q+ to say that we do reference the page visibility state in the spece 18:16:12 ada: Should we seek these people out to clarify the behavior is and discuss the best resolution for these? 18:16:34 ack mblix 18:16:38 alcooper: I would need to go code spelunking in order to find the relevant owners and discuss the right action 18:17:32 mblix: We have been talking about this from the spec perspective - are there alternatives like performance, to assess the relative costs of enabling various features like window.rAF? 18:17:57 cabanier: we have looked at this in the past, and a lot of people run exorbitantly expensive ads that would negatively affect webXR performance etc. 18:18:18 mblix: could we pursue the optional enablement of various features on the basis of explicit page intent? 18:18:27 cabanier: that seems reasonable 18:19:09 cabanier: This would be difficult to polyfill across different devices and vendors, given the hard boundaries for capability. 18:19:23 ack bajones 18:19:23 bajones, you wanted to say that we do reference the page visibility state in the spece 18:19:26 https://immersive-web.github.io/webxr/#ref-for-visibilitystate-attribute 18:19:46 ada: It's *not* something we can likely polyfill, we would probably need to return the success or failure in the features request for the session. 18:20:34 bajones: We do mention the visibility state in an `inline` session description, It's likely we can do something more useful - even if it's non-normative language describing this. 18:21:16 bajones: I've been spelunking through similar APIs to look for precedent, and found FullScreen and Picture-in-picture. There is scant mention of anything useful for our purposes. 18:21:25 q+ In theory, full user control is ideal? 18:21:40 ack Brett_ 18:22:32 Brett_: Could we have a range of user-configurable capabilities to control this? Ideally both developer- and user-facing capabilities to configure? 18:22:57 bajones: I would recommend against granularity exposed to the user, given the level of legibility those options necessarily entail. I'm concerned people won't understand this. 18:23:43 bajones: What does this enable for the user beyond what reasonable defaults furnish for them? 18:23:56 q? 18:35:46 m-alkalbani has joined #immersive-web 18:41:05 etienne has joined #immersive-web 18:41:37 q? 18:46:13 yonet has joined #immersive-web 18:46:18 Topic: https://github.com/immersive-web/webxr/issues/1360 18:52:21 q? 
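[For reference on the Backgrounded Tabs and WebXR (webxr#1364) discussion above: a minimal sketch, assuming `session` is an active XRSession, of how a page can observe the visibility states referenced there.]

    session.addEventListener('visibilitychange', () => {
      switch (session.visibilityState) {
        case 'visible':          // full input and tracking available
          break;
        case 'visible-blurred':  // head pose only, possibly throttled; pause interaction
          break;
        case 'hidden':           // no frames are delivered; suspend non-essential work
          break;
      }
    });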
18:52:24 zakim, open the queue 18:52:24 ok, ada, the speaker queue is open 18:52:59 marisha has joined #immersive-web 18:53:05 s|https://github.com/immersive-web/webxr/issues/1360|Indicate "preferred" immersive mode (webxr#1360 -> https://github.com/immersive-web/webxr/issues/1360 ) 18:53:23 q+ 18:53:58 q+ 18:54:00 q+ 18:54:01 cabanier: I propose this because it is similar to an OpenXR API. If a user is already in pass-through, maybe we should allow users to remain in pass-through. Likewise, for devices that support AR and VR but _prefer_ VR, it might be useful to hint to the user that the device will do better with a given mode. 18:54:33 q+ 18:54:39 ack yonet 18:54:43 cabanier: I'm not sure of the scope of additional information disclosure, but it seems like it's only a little. The API would indicate what is preferred, but wouldn't enforce it. 18:54:43 q+ 18:55:36 yonet: Generally I understand developers to be explicitly developing _for_ AR or VR. How does this work for them? 18:56:04 cabanier: you will be able to return the mode that is preferred from the developer's side. 18:57:03 cabanier: a requested AR session would still be an AR session, likewise with VR - this is more about the capacity to provide continuity. Developers could still tailor for individual devices 18:57:37 cabanier: This is about the _device_ providing the indication of preferred mode, rather than the content or the user. 18:58:22 cabanier: The system could let the user decide, but it would be a system indication. A developer could still launch what's available 18:58:32 ack bajones 18:58:51 +q your plan with quest is to switch this flag when in pass-through and web in home? 18:59:13 bajones: There is a long list of people who care a lot about a very little amount of information, so any exposure is something we need to be wary of 19:00:31 bajones: especially in concert with other browser signals, this serves to flesh out even more contextual information about exactly what the user is experiencing. I'm not personally overly concerned myself, but other people are 19:01:35 bajones: I echo yonet's thoughts about nudging users and authors towards a specific (and potentially more disclosing) context for interaction, which is both unfortunate and _primarily_ a library problem 19:01:45 q+ 19:01:47 bajones: though realistically, people do stupid things. 19:02:17 bajones: User preferences could provide a way to override information like this 19:02:46 ack ada 19:02:51 bajones: particularly remembering preferences could make this less frustrating 19:03:08 q- 19:03:20 q+ 19:03:37 ada: if this could be requested along with a session rather than in `isSessionSupported`, it could be more frictionless - given that it's mainly about a preference but not a required capability.
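[For reference: no "preferred mode" hint exists today, so a page can only probe both modes and choose for itself. A minimal sketch, assuming it runs inside an async function with WebXR available.]

    const [arSupported, vrSupported] = await Promise.all([
      navigator.xr.isSessionSupported('immersive-ar'),
      navigator.xr.isSessionSupported('immersive-vr'),
    ]);
    // Without a device-preference signal the page has to pick on its own, e.g. favoring AR:
    const mode = arSupported ? 'immersive-ar' : vrSupported ? 'immersive-vr' : null;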
19:04:34 ada: gating the time at which the information is vended also reduces the scope of how it can be integrated into general information-gathering as well 19:05:22 q+ to say that I like Ada's proposal 19:05:36 ada: whether frameworks would care to honor this information depends on them - but the basic presence of "supports preferred" might be enough on the isSessionSupported 19:06:13 bajones: Most of the time, the main thing that distinguishes an AR session from a VR one is that the AR simply loads and displays _less_ 19:06:34 ack idris 19:06:37 q- 19:07:17 idris: this seems like a good proposal, but the preferences should lie primarily with the developer's awareness of the experience rather than the user's 19:07:48 q+ 19:08:18 ack nick-niantic 19:08:18 idris: the user should have ability to agree or disagree to a given set of arrangements, but it's on the developer to decide what to offer 19:08:50 q+ 19:08:58 ack alcooper 19:09:27 nick-niantic: the uncertainty of not knowing what's going to be entered might imperil the load-status of everything required for the session, but it's not a big deal 19:09:44 ack ada 19:10:40 q+ 19:11:20 ada: given that invoking a session requires a prompt anyway, `preferred-vr` and `preferred-ar` both going into the session would just defer the decision to another moment unless we specify a default 19:11:42 ack cabanier 19:11:44 q+ 19:12:15 ada: this would probably only cater to people who have the ability to construct the absolute best experience possible across all capabilities 19:12:28 q+ 19:12:59 cabanier: generally, AR sessions request many more features than VR. AR wants hit-testing, lighting estimation, world geometry - where VR often only needs local-floor etc. 19:13:34 cabanier: that kind of thing results in additional overhead, given the per-frame computations required for some of them. That becomes expensive if it's not essential and by design for the experience 19:13:52 ack bajones 19:14:30 bajones: we could say if you use 'sessionPreferred' you can only use optional features, but that doesn't feel great. 19:15:51 bajones: I am trying to figure out the circumstances under which you might decide between AR and VR based on the knowledge that one is _more_ preferred? Ultimately, I think an author is going to be highly opinionated 19:16:05 q+ to say what's your preferred? incorect your getting vr instead 19:16:21 q+ 19:16:33 bajones: it's hard to tell how an author would make different choices based on the results of knowing this setting 19:16:45 ack alcooper 19:16:53 bajones: The preloading feature makes knowing ahead of time more compelling 19:16:57 zakim, close the queue 19:16:57 ok, ada, the speaker queue is closed 19:17:20 alcooper: we have different features for VR vs. AR, so there's an immediate divergence based on that decision from the start 19:17:50 ack bialpio 19:18:01 alcooper: as an author, if I already know the category of device, it's likely I can make decisions about what to offer 19:18:33 q- 19:18:38 +q why not call it just info on the users current reality? AR or VR? 19:18:38 bialpio: given that there are a range of states for a given device, it's not likely that we can know what the preferred mode should be based on the hardware. 19:19:38 bialpio: the branching between committing to AR and VR become tangled, especially when some features have been enabled and others haven't. 19:20:17 q+ 19:20:29 ack ada 19:20:29 ada, you wanted to say what's your preferred? 
incorect your getting vr instead 19:20:37 bialpio: It's not clear to me how much this affects the complexity for pre-fetching, whether people would pull everything or need to make decisions at that moment when the information is known 19:21:31 Would it be nice for a normal webpage to know if its being opened in VR or AR? 19:21:33 ada: if we return this at the moment that the session is actually granted, while you don't get the preload ability, you still have the opportunity to re-negotiate the terms of the session before it has started in earnest 19:22:08 bialpio: Right now the only way to load either AR or VR is to have two buttons - one for each high-level kind 19:23:49 Brandel: we will always have a permission dialog here, so trying to smooth out the friction this kind of query requires isn't as important 19:23:59 Brett_: Does a page knows the context it's in? 19:26:33 topic: lunch break 19:26:36 rrsagent, publish minutes 19:26:38 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 19:44:28 present+ 20:32:34 q? 20:33:34 nick-niantic has joined #immersive-web 20:35:34 marisha has joined #immersive-web 20:35:34 zakim, choose a victim 20:35:34 Not knowing who is chairing or who scribed recently, I propose Cooper 20:35:37 Brandel has joined #immersive-web 20:35:40 presnt+ 20:35:45 present+ 20:35:47 bajones has joined #Immersive-Web 20:35:48 etienne has joined #immersive-web 20:35:49 scribenick:alcooper 20:36:02 present+ 20:36:19 Mats has joined #immersive-web 20:36:40 ada:Topic keeps being brought up around accessibility discussions and automation; but if nothing else this is a good foundation for an accessibility story 20:36:43 atsushi has joined #immersive-web 20:36:53 ... which is currently a bit lacking 20:37:35 ... two part API, first is a stencil buffer where they render the ID of each object as a color with no antialiasing 20:37:51 https://github.com/immersive-web/webxr/issues/1363 20:38:02 q+ 20:38:18 Phu_MS has joined #immersive-web 20:38:25 ... Scene Graph attached to a session somehow; where each object in the scene graph contains an ID that matches the color in the stencil and a few properties to make it useful 20:39:02 q+ 20:39:21 zakim, open the queue 20:39:21 ok, cwilso, the speaker queue is open 20:39:21 i/ada:Topic keeps being brought up around accessibility/topic: Add Scene Description API for automation and a11y (webxr#1363) 20:39:23 zakim, open the queue 20:39:23 ok, ada, the speaker queue is open 20:39:23 q+ 20:39:27 q+ 20:39:27 chair: Ada 20:39:31 rrsagent, publish minutes 20:39:33 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 20:39:39 yonet has joined #immersive-web 20:40:04 q+ Is what your describing like the right side of this image? Could DOM Overlay's be used like the left side of this image? 20:40:10 m-alkalbani has joined #immersive-web 20:40:10 ... Some proposals to make this useful: 4x4 transformation matrix (not sure if local or global), bounding box with identity applied (even if not currently visible), attachment to a specific XR space, name/description (description like an alt tag), role in the environment (aria role) 20:40:44 +q forgot to paste image https://manipulation.csail.mit.edu/data/coco_instance_segmentation.jpeg 20:40:46 ... this could also be used to tab through content and even fire select events without needing inputs 20:41:25 ...sound effects or whether things represent a person on a person 20:41:50 ... 
navigation to another page (aria role relationship), and/or any state that it has 20:42:08 ... Does this sound useful? 20:42:31 ... Probably wouldn't be too difficult to implement from a browser standpoint (hook a buffer up to screen reader functionalities), also useful for automation 20:42:52 ... Downside is that this wouldn't be automatic; developers would need to do additional work; but unblocks usage in areas where accessibility is required 20:43:21 ... What carrots should we provide to encourage developers to use it? 20:43:42 q? 20:43:51 ack bajones 20:44:12 bajones: This has been talked about a bit in Editor's meetings, a few highlights 20:44:35 ... really interesting idea, been talking about need for accessibility for a long time; not just WebXR but anyhting that creates a buffer of pixels 20:44:41 ... requires hacks to be accessible today 20:44:51 ... possibility of extending to WebGL/WebGPU, which may also be a bit of a curse 20:45:20 ... Talked about this a bit more abstractly, but hard to find a way to meet in the middle with accessibility folks and/or prototype 20:45:45 q+ 20:45:57 ... Really need people who can prototype this to figure out that we're moving in the right direction, and then need to provide carrots to developers 20:45:58 Omegahed has joined #immersive-web 20:46:02 s/carrots/encouragement 20:46:40 ... e.g. developers have to do application specific ray tracing, and we can use this to maybe provide hints or objects that were hit-test 20:46:53 ... Maybe something more primitive than color buffers of objects like just bounding boxes 20:47:08 .... If you have hte color buffer we can ID those a bit better 20:47:22 s/hte/the 20:47:55 bajones: Gives the opportunity for the browser to also highlight what you're looking at which the page may not be able to do, e.g. on the Apple Vision Pro 20:48:04 ... whatever we talk about here, we should disconnect it from a session 20:48:15 ... may want to build this outside of having a session 20:48:21 q+ 20:48:33 ... and then attach it to a space 20:48:48 ... Minor nitpick: Rigid Transforms, not 4x4 matricies 20:48:53 ack nick-niantic 20:48:56 nick-niantic: 20:49:00 q+ to suggest a descriptive torch 20:49:41 nick-niantic: A couple observations, a lot of 3D frameworks lean into these representations that don't give you tools to get a full description of the scene 20:50:01 ... a lot of ways to manipulate geometry of scene beyond initial scene description 20:50:19 ... therefore it's indeterminate and hard to do custom hit testing 20:50:38 ... In some ways having a full description of the scene is impossible 20:51:01 ... I'd try to think more concretely about what values people can get and write a spec to give them a way to get benefits out of those values 20:51:57 ... need to be clear what these scene-reading applications are and what value they provide 20:52:19 ... but also not looking for a solution that fits everything, since likely need to run code to do a hit test 20:52:42 ada: Goal wasn't to force a developer to fully enumerate all these properties; just that they get more benefit to providing more of them 20:52:50 ... if a developer doesn't do anyhting, things will still work as before 20:53:17 ... for automation kind of just want to use it with WebDriver to figure out where buttons are 20:53:51 ... a lot of sites rely on automation testing which is currently hard in WebXR since there's not the same feedback mechansims as HTML/CSS provide 20:54:19 ... 
accessibility is the most important story; automation is a side benefit, everything else is meant to be developer benefit/motivation to use this 20:54:43 q? 20:54:51 ack Brett_ 20:54:53 ... maybe search engines can use this to index part of the page 20:55:03 ack forgot 20:55:03 forgot, you wanted to paste image https://manipulation.csail.mit.edu/data/coco_instance_segmentation.jpeg 20:55:16 Brett_: Is it like this image https://manipulation.csail.mit.edu/data/coco_instance_segmentation.jpeg 20:55:23 ada: Yeah kind of 20:55:33 Brett_: So this is an accessibility frame buffer? 20:55:41 Ada: yes, with an object on the side 20:56:17 q+ 20:56:18 Brett_: Loading HTML into WebXR solves this for that WebXR (given the existing content), and same for model/image tags 20:56:32 ada: But that doesn't solve the WebGL problem 20:56:58 Brett_: But could you describe this all via HTML? 20:57:11 ada: We'd essentially have to rebuild HTML for the Web which is a 30+ year project 20:57:16 ack cabanier 20:57:27 https://www.w3.org/WAI/APA/wiki/WebXR_Standards_and_Accessibility_Architecture_Issues 20:57:40 q- 20:57:40 cabanier: We've discussed this a few times, including the link from the Accessibility group 20:57:51 ... Is this the same proposal? 20:57:54 ada: Very similar 20:57:54 q+ 20:58:35 ada: so full scene graph including things that aren't rendered and you'd also have the framebuffer 20:58:49 cabanier: In that case why do we need matrices/bounding boxes and why not just pixels? 20:59:14 ada: Thought it could be cheap and interesting, but if it's not useful shouldn't list it. Initial list is just initial impressions of what may be useful 20:59:32 ack bajones 20:59:49 bajones: From an accessibility standpoint how important is it to know that something off the screen exists 21:00:02 ... maybe if you have mobility issues you can be targeted towards it 21:00:09 ... (via tabbing) 21:00:20 ... This makes me say we probably want more than just a pixel buffer approach and they are complementary 21:00:28 cabanier: Might be very hard to calculate the bounding boxes 21:00:49 bajones: This gets into what Nick was talking about that things may become impractical 21:01:16 Brandel: One place definitely needed is to include/give information about things that aren't in view, and pixels won't be enough 21:01:33 cabanier: DOM would still give you this information 21:01:58 ... unsure if there is a need for the extended scene description if there's a pixel buffer and the fallback dom 21:02:11 ... fallback dom wouldn't have spaces/bounding boxes, is just standard html 21:03:04 ada: If you just had an object that was one set of objects/vertices with a smart shader you could still work out where things like heads/arms are to be rendered accurately in the scene graph 21:03:13 +q Model tag smart-shader + 3D CSS Dom Overlay Smart shader 21:03:22 ... scene graph doesn't have to be literally what's being rendered to the page, could fake it a bit/provide basic positions 21:03:41 ack ada 21:03:41 ada, you wanted to suggest a descriptive torch 21:03:56 ada: Another useful thing could be if the pixel buffer sometimes came from the eye or sometimes from the hand 21:04:13 q+ Brett to talk about Model tag smart-shader + 3D CSS Dom Overlay Smart shader 21:04:20 ... e.g. could scan with the hand like a torch (flashlight) vs. just what you're looking at 21:04:56 bajones: Definitely need secondary buffer to get info when pointing sideways vs.
what is being viewed; concerned about performance 21:05:06 ada: Resolution doesn't have to be same as pixel buffer 21:05:42 bajones: This does effectively mean doing a second render pass. 21:06:03 ... similar if this buffer isn't antialiased but color buffers are 21:06:07 ack Brandel 21:06:11 ... lots of pitfalls that can lead to bad performance 21:06:44 Brandel: Despite the fact that most engines are using direct scene graphs, that shouldn't necessarily be what's submitted for this since instancing, etc. can lead to inaccuracies 21:06:49 ... but this does mean that it can be simpler 21:07:03 ... Doesn't have to update as often 21:07:10 ... These aren't dealbreakers 21:07:28 ... may also need this for things like shadertoys that are just two triangles 21:07:40 ... WebGL doesn't have a scene graph and just has pixels as well 21:07:46 ... conceptually harder to construct this sometimes 21:08:10 ... shouldn't overindex on what some people have to do to create these views 21:08:35 ada: Could use this to update instances if browser keeps the two in sync 21:08:36 ack Brett_ 21:09:07 Brett_: If we do use model tags could automatically generate scene graph 21:09:24 ... on a smartphone today, can use accessibility features in 3D css/dom overlay 21:09:51 ack Brett 21:09:51 Brett, you wanted to talk about Model tag smart-shader + 3D CSS Dom Overlay Smart shader 21:10:57 ada: It's possible to do this with DOM overlay today, but falls apart via VR today 21:13:12 ada has changed the topic to: https://github.com/immersive-web/administrivia/blob/main/F2F-March-2024/schedule.md 21:20:18 yonet has joined #immersive-web 21:24:41 Topic: https://github.com/immersive-web/depth-sensing/issues/44 21:26:03 scribenick:bajones 21:26:41 Felix_Z has joined #immersive-web 21:26:50 s|https://github.com/immersive-web/depth-sensing/issues/44|Potentially incorrect wording in the specification (immersive-web/depth-sensing#44) 21:26:54 rrsagent, publish minutes 21:26:55 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 21:27:01 bialpio: When talking with Rik about the depth sensing API we realized that there may be a mistake in the spec. Want to see if there's any potential future changes of this sort. 21:27:33 ...: Feels like we will have to introduce a breaking change with what Chrome is already returning in order to be more concrete. 21:28:44 ... In ARCore we're not returning from the near plane, just returning as-is. Doesn't match Quest, need to offset to make them match. 21:29:23 ... We don't want to have to apply this offset to the depth buffer ourselves, which incurs a perf penalty, so need to return an offset for the user to use. 21:30:02 ... That's the breaking change. The data will be the same, but if users don't change how they interpret the data it won't work cross platform. 21:30:38 q? 21:30:43 q+ 21:30:44 ... Do we need to account for anything beyond the near plane? For example, how is the data interpreted between near/far planes? 21:30:50 ack cabanier 21:31:21 q+ 21:31:22 cabanier: How did we come to the conclusion that the values returned are between the near/far plane? 21:31:46 bialpio: We assume 0 means near plane + 0 mm? 21:31:54 cabanier: I don't know if that's true? 21:33:04 q+ 21:34:01 bialpio/cabanier: Discussion of how to use projection matrices to get values. Math math math. 21:35:00 bialpio: First thing is that we need to change the spec to not say values are from the near plane, because it turns out to not be true for any devices today.
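[For reference: a minimal sketch of the CPU path of the depth-sensing API as currently specified; the near-plane offset under discussion is a proposed change and is not reflected here. `frame` and `view` are assumed to come from the XRFrame callback.]

    const session = await navigator.xr.requestSession('immersive-ar', {
      requiredFeatures: ['depth-sensing'],
      depthSensing: {
        usagePreference: ['cpu-optimized'],
        dataFormatPreference: ['luminance-alpha'],
      },
    });
    // Later, per view inside the XRFrame callback:
    const depthInfo = frame.getDepthInformation(view);   // XRCPUDepthInformation or null
    if (depthInfo) {
      // Raw buffer values are scaled by depthInfo.rawValueToMeters;
      // getDepthInMeters() takes normalized view coordinates.
      const metersAtCenter = depthInfo.getDepthInMeters(0.5, 0.5);
    }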
21:35:57 ...: We also have to expose the near value that it's relative to, or there's no way it can work. Once you have a depth buffer you already have data that you have to fix up. 21:36:58 ...: Might have to expose depth far? Only if the depth buffer data is normalized. In order for the developer to know how to use it for anything other than occlusion they need to know these values. 21:37:16 ...: Occlusion might work if the values match the near/far planes. 21:37:47 q+ 21:37:52 ...: How can we future proof, since we're already making a breaking change. 21:38:42 cabanier: I think the depth should always be linear 21:39:10 Brandel: Logarithmic depth is beneficial in some cases, not necessarily in an AR context given the scales involved. 21:40:00 nick-niantic: ARCore returns 16 bit ints with distance in mm 21:40:08 q? 21:40:09 cabanier: Meta returns floating point meters 21:41:02 bialpio: The bigger the numbers, the more inaccurate the reporting is going to be. 21:41:36 bialpio: ARCore says beyond 8 meters precision falls off. 21:42:18 q+ 21:43:14 bajones: How does the different scale get resolved today? 21:43:40 bialpio: Already reports scaling factor and data format. 21:44:19 Ada: Could we provide a conversion function? 21:44:37 bialpio: That exists on the CPU, but how do we do it on GPU? 21:45:10 nick-niantic: Could you provide a matrix? 21:45:23 bialpio: For non-linear values? Probably not. 21:45:47 q? 21:46:01 cabanier: Could we tell devices that decide to go the non-linear route that they have to expose a v2 of the API? 21:46:11 bialpio: Yeah, that would work. 21:46:41 ... last question for Rik. Your depth far is at Infinity? Do we have to expose that? 21:47:27 ... We just expose absolute values. 21:47:41 bajones: Doesn't that imply that depth far is at infinity? 21:47:59 bialpio: Probably? That sounds right but not sure. 21:48:56 ack Brandel 21:48:56 ack Brandel 21:49:02 ... I think we can skip it. Should be able to absorb that math into the scaling factor. 21:50:01 Brandel: Did you say the values are perpendicular in ARCore? Because that would make the values non-linear. 21:51:17 21:51:34 ack nick-niantic 21:52:01 nick-niantic: Trying to understand depth near on Oculus. Why do we need to have it? Due to how the cameras compute it? 21:52:19 cabanier: Yes. 21:52:46 ack cabanier 21:53:16 ... Only reason we might need depth far is if implementations use something other than infinity for the far plane. 21:53:25 q+ 21:53:37 bialpio: Does OpenXR enforce units? 21:53:50 cabanier: depth 16, yes. 21:54:15 bialpio: I think the math will work out. 21:54:17 q- 21:54:28 ack Brett_ 21:55:22 Brett_: Is the near plane supposed to represent where the depth starts counting from, moving out from the headset? 21:55:31 bialpio: Yes. 21:55:45 Topic: https://github.com/immersive-web/webxr/issues/1366 21:55:46 ... Think I have what I need to make progress. 21:56:47 s|https://github.com/immersive-web/webxr/issues/1366|Feature for the UA to handle viewing the system inputs during a session (immersive-web/webxr#1366) 21:56:49 rrsagent, publish minutes 21:56:51 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 21:57:04 Scribe: Brett_ 21:57:23 How to scribe: https://w3c.github.io/scribe2/scribedoc.html#scribe 21:58:48 q+ 21:58:48 Brandel_ has joined #immersive-web 21:59:25 ack bajones 21:59:26 q+ 21:59:28 Ada: On visionOS, when you don't enable hand tracking, because the hands aren't visible we show the user's hands.
We are thinking, from using this, that it might be nice to have this for when you turn on hand tracking: they could opt in and use their real hands. In general, instead of using WebXR input profiles you use system rendering. You might be able to have the controller do stuff with depth buffers for interaction. Want to gauge interest and[CUT] 21:59:43 like when pages cut out information for security reasons like with dom layers 22:00:22 bajones - depth compositing isn't a factor. today my understanding is on vision pro you can request the hands without hand tracking and get the hands and then it stops if you ask for the hands 22:00:46 ada - we don't want to stop people doing existing hand rendering so they would opt into this 22:01:22 brandon - do you envision this as a real time toggleable thing you could turn 22:01:32 brandon - or flag at session time for all meat all the time 22:01:54 q+ 22:02:17 ada: describing transition between system hands 22:02:38 ack cabanier 22:02:49 idea: about putting on boxing gloves but being cute, switching from real hands to controller hands 22:03:22 q+ 22:03:38 ada: maybe it should be toggleable on the fly, like a method in the session; 22:03:52 brett: what about clipping your hands in and out with depth sensing 22:05:26 q- 22:05:43 brandon: by layering the hands video feed on purpose, you could composite it without depth 22:05:59 ada: this will be a feature flag and you turn it on and then there is an open PR that is about inputs and whether or not they should be rendered 22:06:13 ada: in this situation we would set that to true to show that the hands are already rendered somewhere else 22:06:18 q+ 22:06:43 rick: on the quest that flag is false always unless you add this feature 22:07:08 rick: could this work in AR. it would work for AR depth sorting hand visibility or depth sensing can do this too 22:08:03 ada: it could be viable for AR and I guess you would be doing whatever the system does, the system could do 22:08:18 rik: leave the hands up to the UA with interacting with controller? ada: yes 22:08:22 ack Felix_Z 22:08:31 this is about systems rendering inputs 22:09:07 Felix: Is this about hands only? Or what is currently supported by the system? What about tracked sources? 22:10:20 Ada: Tracked sources will be rendered by the system, you're in VR and the system is rendering controllers, they could appear on the table, you also have system rendered hands, currently the system does the hand rendering 22:10:44 Felix: What about an input rig? 22:10:52 I need to ask more about that 22:11:28 q+ 22:11:28 Felix: One more question, do you see value in just rendering the active input sources? 22:11:42 Ada: Transient pointers are an example of something that isn't even tracked like that 22:12:08 Ada: Per input? Maybe it's gonna be like... like when you do hit testing you give it a target ray mode track, it could be like that 22:12:32 Ada: Maybe it's only for tracked pointers like things that would get rendered, actually I don't think so, it would be for all tracked pointers 22:12:35 q- 22:13:02 Felix: Turn off rendering of active rendering sources as an option? 22:13:28 Ada: Quest perspective, put controllers down, have hands, pick them up, 22:13:35 Brandon: We talked about trackers representing the body before 22:13:46 1+ 22:13:47 Q+ 22:13:56 ack Brandel_ 22:14:00 Ada: That would be interesting, the user agent would render those?
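[For reference: a minimal sketch of how articulated hands are requested and detected today; the system-rendered-inputs flag being discussed is only a proposal and is not shown. Assumes an async context.]

    const session = await navigator.xr.requestSession('immersive-vr', {
      optionalFeatures: ['hand-tracking'],
    });
    session.addEventListener('inputsourceschange', (event) => {
      for (const source of event.added) {
        if (source.hand) {
          // Articulated hand input: today the page renders the hand itself,
          // typically by querying joint poses each frame via frame.getJointPose().
        }
      }
    });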
22:14:38 Felix: We have the tracked sources, that's the controllers, we are using hands right now, controllers not being blocked out when depth is wrong can be distracting 22:14:47 Ada: Would you want to be able to cut out the controller with depth buffer? 22:14:54 ack bajones 22:14:58 Ada: technically? 22:15:56 Ada: Session startup? 22:16:10 Rick: Would know if punch-through or not and can not render the controller. Your punch-through hand would remove the hand you're rendering. 22:16:31 Ada: User agent could request the feature, then you render, then tell them not to render these, others can or can't 22:17:16 Felix: I got an idea for the possibility to toggle this at runtime? Make this universal, one rendering of hand is one XR input source, a list of masks could be exposed and find the mapping of the rendering to the XR input source 22:17:27 Felix: Then system rendering can be turned off and people can toggle rendering per input source 22:17:52 Ada: We can't expose the mask raw data 22:18:31 ack bajones 22:18:32 Ada: Per input, seems harder to do it for every input 22:18:51 Rick: but one controller and one hand 22:19:01 Brandon: I don't know if this is rendering a mask per hand 22:19:05 ack Brett_ 22:19:49 Brett: Feet, hands, body, this can expand to the rest 22:20:09 q+ 22:20:28 Ada: compositing hands with virtual objects is an exciting possibility, like a lightsaber 22:20:36 brandon: I would like to know more about levels of control 22:20:49 q+ 22:21:04 brandon: simple start might be good 22:21:47 ack cabanier 22:21:50 other idea: making it good to start is good, this might be an easy thing to change at runtime, global render all controllers rendering sounds fine to me 22:21:57 q? 22:22:04 ack Brett_ 22:22:47 Brett: With masks we can put our feet through portals, we don't see the mask data but layering the masks is still amazing 22:24:21 Rik: If you're interacting with your room, you aren't interested in your hands, there is an option to filter hands out of the depth so this is important here too 22:24:46 Brett Fix ^ in the context of hands being filtered out of depth data as a current option today 22:42:52 bajones has joined #Immersive-Web 22:44:58 Brandel has joined #immersive-web 22:46:27 etienne has joined #immersive-web 22:48:30 yonet has joined #immersive-web 22:58:58 Zakim, choose a victim 22:58:58 Not knowing who is chairing or who scribed recently, I propose Rik 22:59:31 atsushi has joined #immersive-web 23:00:38 marisha has joined #immersive-web 23:00:43 present+ 23:01:01 scribenick cabanier 23:01:12 Omegahed has joined #immersive-web 23:01:17 bajones: we've brought this up at TPAC 23:01:27 ... we didn't decide on anything concrete there 23:01:45 ... are there any aspects that we want to reconsider or deprecate? 23:01:59 ... deprecation is an arduous process 23:02:11 ... but it's ok to look back on our bets and decide if they didn't pan out 23:02:21 ... maybe it's healthier for implementors and users 23:02:24 q+ 23:02:29 ... inline sessions are an example 23:02:41 Mats has joined #immersive-web 23:03:05 ... but stereo inline or magic window could come in the future but not necessarily using inline sessions 23:03:18 Mats_ has joined #immersive-web 23:03:19 ... very few people use them. afaik only the samples page does 23:03:40 ... maybe we can simplify things by dropping them 23:03:43 q+ model tags inline, is Inline Mode how Dom Overlays work today? 23:03:56 q+ 23:03:59 q+ 23:04:02 ack Brandel 23:04:17 Brandel: you mentioned that inline isn't used much.
Do you have analytics 23:04:24 i|bajones: we've brought this up at TPAC|topic: Evaluate if any aspects of the API should be deprecated. (immersive-web/webxr#1341) 23:04:27 rrsagent, publish minutes 23:04:28 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 23:04:43 bajones: we have some but they're not likely very good 23:05:00 ... it very much depends if they have a headset or not 23:05:18 ... I don't believe our numbers show that this is needed 23:05:57 ... I have not seen much use. Things are based on three or babylon or playcanvas and those don't use inline sessions 23:06:05 ... only the sample page 23:06:21 ack Brett_ 23:06:28 ... the only other use is pages based on the samples 23:06:38 s/scribenick cabanier/scribenick: cabanier 23:06:56 Brett_: inline mode won't get you anything XR related 23:07:14 bajones: you can get orientation data on android 23:07:43 ... you can use offset reference space and other things 23:08:14 ... but the only thing is that if you built your framework around the webxr frameloop, you can reuse it on the 2d page 23:08:34 ... but it's not a big deal to switch between raf and the xrsession's frameloop 23:08:38 q+ 23:09:22 ... maybe offset reference spaces? 23:09:29 q+ 23:09:30 ada: I use those 23:09:44 cabanier: I've seen them used as well 23:10:12 q+ 23:10:23 ack cabanier 23:10:46 cabanier: can we get rid of secondary views? 23:10:56 alcooper: I believe hololens has them 23:11:27 Brandel: NTT has a prototype that uses them 23:12:03 ada: for the a11y pixel fallback, would that be a secondary view? 23:12:15 bajones: that would be a different thig 23:12:34 ada: it's potentially used for a buffer that you can share with people 23:12:46 q+ 23:12:55 bajones: I believe that's the one of the use cases for secondary views 23:13:08 ... I believe hololens had an observer viewport 23:13:30 ... but I believe we discussed using it as an observer view 23:13:48 ... I don't know how well supported that use case is in the spec language 23:13:59 ada: I'm not super attached to it 23:14:12 ... as a web developer, I haven't seen much adoption 23:16:36 It looks like this fairly popular mathematics animation library might be using inline sessions: https://github.com/CindyJS/CindyJS 23:17:15 bialpio: a bunch of api use xrview and secondary views are allowed there 23:17:38 ... so you could add camera data to such a secondary view 23:17:56 q+ to talk about native support 23:17:59 ... for layers, we don't want to add more pressure on the system 23:18:17 ... we only use them because we already have a lot of data. 23:18:22 q? 23:18:28 ack bialpio 23:18:34 ack nick-niantic 23:18:34 ... for raw camera access, I was thinking of creating artificial views 23:18:35 q- 23:18:44 q+ to ask about what if a future device 23:18:53 nick-niantic: for reference space, can we get rid of them? 23:19:11 ... for example, you want an unbounded grounded reference space 23:19:32 ... you can ask for the ground on android and it doesn't really know 23:19:33 q+ 23:19:49 ... the avp lets you ask for an unbounded reference space and you can only walk a meter 23:20:23 ada: the vision pro case is a bug 23:22:56 idris: magic leap and hololens support unbounded reference spaces 23:23:17 cabanier: I believe Quest supports all reference spaces correctly but it doesn't support unbounded yet 23:23:45 nick-niantic: the intent of reference space is not super well implemented 23:24:11 ... let's not give the developer tools that are not consistent 23:24:31 ... 
we only want to start at the origin 23:24:53 alcooper: are you saying there should be more clarity around the reference spaces? 23:25:08 ... so not deprecate but make them better defined 23:25:19 nick-niantic: as a developer I want the ground to be at 0 23:25:21 q+ 23:25:29 ... I always want the origin facing forward 23:25:44 ... I want to be able to set my own content boundary 23:26:55 bajones: are you saying that as a developer you want things but maybe that is not what everyone wants 23:27:12 ... for unbounded space, I might want things pinned around the house 23:27:19 ... you want things to be preserved 23:27:33 ... webxr doesn't have facilities to preserve that 23:27:34 q- 23:29:35 ack Brandel 23:30:17 Brandel: in the context of inline sessions, it doesn't seem that anyone uses it so we can take it out 23:30:30 ... for reference spaces there is enough reason to keep them 23:30:51 ... are there things that we want people to do with them? 23:31:07 bajones: it's certainly possible to have clarifications 23:31:17 ... I wouldn't mind doing a review on these 23:31:34 ... and maybe we should normalize that and codify that in a spec 23:31:44 ... there are edge cases 23:32:08 ack Brett_ 23:32:09 Brandel: it's pretty clear that layers benefit from spaces 23:32:22 ... offset reference spaces 23:32:28 q+ 23:32:42 ... and maybe we should add that to the spec 23:32:55 bajones: maybe it makes sense to add a note 23:33:01 ... to the layers API 23:33:16 ada: you can just teleport around 23:33:43 Brett_: is the Steam Deck the perfect inline device? 23:33:56 bajones: any handheld device, yes 23:34:26 ... if all you're looking for is rendering something inline, yes. But you can just do it with the standard web APIs 23:34:36 Brett_: so you can write a polyfill 23:34:48 bajones: yes but likely I'd be the only user 23:35:06 ... if we deprecate it, give it a long lead time to keep supporting it. 23:35:23 ... give console warning, create a polyfill, etc 23:35:49 ... reach out to developers to tell them to stop 23:36:20 ... in the case of the sample, inline just goes away 23:36:52 q? 23:36:52 ack alcooper 23:36:53 alcooper, you wanted to talk about native support 23:37:22 alcooper: we should be careful removing features that have openxr counterparts 23:37:32 ack ada 23:37:32 ada, you wanted to ask about what if a future device 23:37:32 ack ada 23:37:35 ... if the thing I needed was secondary views 23:37:49 ... and webxr removed support 23:38:06 ada: what if we remove it and suddenly a popular device supports it 23:38:27 ... it = multiple views 23:38:32 ... will it just work? 23:38:45 bajones: there are devices like that like Varjo 23:39:14 ... but they render them to just 2 views 23:39:46 ... so part of the reason this was introduced 23:40:20 ... some devices have a very large view with 4 large displays but even those render to 2 and then break it up in 4 23:41:49 ... there is a lot of content that will just break if there are more than 2 views 23:42:12 +q 23:42:40 I'm muddy on how this relates to https://developer.mozilla.org/en-US/docs/Web/API/OVR_multiview2, but it seems related 23:43:00 ack cabanier 23:45:25 cabanier: (unbounded reference space issues) 23:45:44 bajones: I'd love to have the discussion to hash this out 23:46:10 ada: in the early days, we added it for hololens but it was never shipped 23:46:53 ... but now with ML and Meta implementing it, we need to go over the issues.
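[For reference: a minimal sketch, assuming an async context, of how a page asks for an unbounded reference space today and falls back when the UA or device does not grant it.]

    const session = await navigator.xr.requestSession('immersive-ar', {
      optionalFeatures: ['unbounded', 'local-floor'],
    });
    let refSpace;
    try {
      refSpace = await session.requestReferenceSpace('unbounded');
    } catch (e) {
      refSpace = await session.requestReferenceSpace('local-floor');   // fallback
    }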
23:47:15 idris: we got customer requests to implement 23:47:47 ack marisha 23:48:05 zakim, close the queue 23:48:05 ok, ada, the speaker queue is closed 23:48:26 marisha: an admin question, for inline session, I find a bunch of code on github that is using it 23:48:35 ... how do you make decision on what's being usedf 23:48:57 bajones: every time a browser is removing a feature, we look at the metrics 23:49:08 ... we should get a pretty clear idea 23:49:27 ... add deprecation warnings 23:50:15 ... and if we get feedback, we might keep it in 23:50:24 ack Brett_ 23:50:58 Brett_: if we remove secondary views, would caves be removed from the spec? 23:51:27 bajones: those devices might just budget and do it within their own implementation 23:51:36 ... and ship their own custom implementation 23:53:51 bbbbbbbbbbbbb3563 has joined #immersive-web 23:53:55 topic: unconference 23:54:29 subtopic: Splats are the new mesh 23:58:25 [Nick going through slide] 23:58:53 q+ 23:59:17 Zakim, open the queue 23:59:17 ok, Brandel, the speaker queue is open 00:01:03 rrsagent, publish minutes 00:01:05 I have made the request to generate https://www.w3.org/2024/03/25-immersive-web-minutes.html atsushi 00:08:52 q+ 00:14:32 bbbbbbbbbbbbb3563 has left #immersive-web 00:29:20 ack bajones 00:29:45 8w.8thwall.app/splat-cactus 00:30:18 8w.8thwall.app/splat-avp <-- Includes workaround for AVP gamma issue 00:39:48 rrsagent, bye 00:39:48 I see no action items