16:41:15 RRSAgent has joined #immersive-web
16:41:15 logging to https://www.w3.org/2022/04/21-immersive-web-irc
16:47:34 rrsagent, make log public
16:47:47 meeting: Immersive Web WG/CG vF2F 2022/04 Day1
16:49:03 dom has joined #immersive-web
17:06:24 bialpio has joined #immersive-web
17:07:18 Nick-8thWall-Niantic has joined #immersive-web
17:09:45 lgombos has joined #immersive-web
17:09:53 winstonc has joined #immersive-web
17:11:09 winstonc has joined #immersive-web
17:12:09 Can we add https://github.com/immersive-web/layers/pull/273 to the agenda for today or tomorrow. 15-20 min max
17:19:17 present+
17:19:33 Agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2022/schedule.md
17:19:50 RRSAgent, pointer?
17:19:50 See https://www.w3.org/2022/04/21-immersive-web-irc#T17-19-50
17:20:01 alcooper has joined #immersive-web
17:20:10 present+ Laszlo_Gombos
17:20:23 Present+ Dominique_Hazael-Massieux
17:20:28 present+
17:20:45 Chairs: Ada, Aysegul
17:20:53 Josh_Inch has joined #immersive-web
17:21:00 Meeting: Immersive Web CG/WG F2F - Day 1
17:21:08 https://docs.google.com/presentation/d/1iIWMt-jM1UToQ9Fo4KQz0g5JuqYbGdLvvENL4YBnCG0/edit?usp=sharing
17:21:22 present+ winston_chen
17:21:23 yonet has joined #Immersive-web
17:21:25 present+
17:21:35 Ashwin has joined #immersive-web
17:23:29 manishearth_ has joined #immersive-web
17:23:37 dylan-fox has joined #immersive-web
17:23:52 present+
17:25:35 tangobravo has joined #immersive-web
17:27:30 agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2022/schedule.md
17:29:54 klausw has joined #immersive-web
17:32:30 winstonc:
17:32:31 bajones has joined #Immersive-Web
17:36:22 Slideset: https://www.w3.org/2022/Talks/dhm-metaverse-workshop/
17:38:37 dom: there's been a lot of discussion about the metaverse
17:39:14 dom: folks have shown interest in bringing it forward. it's an online, immersive, interconnected space
17:39:43 dom: until a couple years ago you couldn't say the web was immersive, but that's changing now
17:40:25 dom: in terms of interconnection, navigation, social aspects, not as much there yet
17:40:38 dom: still work to be done
17:41:43 dom: been talking to folks in the industry to understand what's missing in this picture
17:42:20 dom: the notion that webxr is a very programmatic approach that gives you an entire environment creates challenges in the security model of the web
17:42:43 dom: this notion of providing a safe, immersive sandbox for the web has come up again and again
17:42:55 dom: we need to come up with good immersive navigation
17:43:16 dom: also interesting discussions around a11y
17:43:53 dom: if we really want the metaverse to be a credible alternative, all of this is important
17:43:58 dom: also we need strong interop
17:44:43 dom: web content itself also could be made more 3d without full immersiveness
17:44:58 dom: repurposing 2d content in a 3d world with existing css props has also emerged
17:45:30 dom: likewise for scene formats, a more declarative approach
17:46:00 dom: if we're talking about collaborating in 3d we need 3d capture for videos etc. we have webrtc but 3d would need to be made to work in it
17:46:47 dom: if you're moving from one space to another you'll get a strange transition, so there may be a need to harmonize ux patterns, locomotion patterns, etc
17:47:09 dom: questions around identity and avatar mgmt
17:47:13 q+ to ask about whether we need to change the charter
17:47:16 dom: and being able to transport assets
17:47:50 dom: here to suggest we run a w3c workshop to help bring many people to the table to share perspectives, priorities, directions
17:48:04 dom: at least get a shared understanding of the direction
17:48:33 dom: we've done this a bunch of times before (2016, 2017, 2019)
17:48:42 dom: have a draft CfP
17:49:43 dom: contact me if you want to help
17:49:48 dom: oct/nov 2022 probably?
17:49:59 I'm assuming there's overlap in members but we should definitely include the Open Metaverse Interoperability Group https://omigroup.org/
17:50:24 q?
17:50:36 ack ada
17:50:36 ada, you wanted to ask about whether we need to change the charter
17:50:52 ada: we're just about to renew our charter
17:51:09 ada: do you think there's stuff we should add to our charter?
17:51:28 dom: we've got an open scope charter so we can do stuff like this already
17:51:41 alexturn has joined #immersive-web
17:51:54 q?
17:51:56 dom: my expectation is that there's a high chance most of the stuff will not be for the wg, more for the community group etc
17:51:57 q?
17:52:11 scribe: Manishearth_
17:53:25 RRSAgent, please generate logs
17:53:25 I'm logging. I don't understand 'please generate logs', Manishearth_. Try /msg RRSAgent help
17:53:44 rrsagent, bookmark
17:53:44 See https://www.w3.org/2022/04/21-immersive-web-irc#T17-53-44
17:55:13 unknown: i know there's a bunch of ?? working on similar things, leveraging web ?? etc
17:55:23 ... i guess we should have some process to make sure everyone is aware
17:55:44 ... something to echo back is
17:55:55 ... i've heard of a lot of orgs around these metaverse concepts
17:56:02 ... very unclear what they're actually talking about
17:56:18 ... and when they say metaverse they actually mean nfts/etc, still not sure what we're talking about
17:56:37 er, wrong attr
17:57:04 toji: something to echo back is: i've heard of a lot of orgs around these metaverse concepts, very unclear what they're actually talking about, and when they say metaverse they actually mean nfts/etc; still not sure what we're talking about. we should at least be clear on context and make sure we're not retreading
17:57:20 dom: yeah, recently we had a proposal to join the metaverse ?? group
17:57:47 ... been a bit challenging, finding the right way to communicate things
17:58:22 toji: to be clear, not trying to point fingers, just that there's a lot of buzzwordy, landrushy interest; very easy to overlook prior art
17:58:40 rrsagent, publish minutes
17:58:40 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
17:58:47 unknown: very good point actually
17:58:55 ... part of that is not wanting to be slowed down
18:00:13 q?
18:02:48 present+
18:05:51 While we're waiting, just want to echo Brandon's point earlier about visibility & the difficulty of finding past W3C work. Been trying to harness the XR Semantic Web work for the ongoing Accessibility Object Model project and it's been very difficult. Feels like being an archaeologist somewhat
18:12:08 q+ for just for comment but AR and gamepads modules are pre-CR, waiting on WG tasks to be performed...
18:19:57 q?
18:20:32 Here are the slides: https://docs.google.com/presentation/d/1ewsefsmLFKIv0fRExCf1VzgvkepSJnrxn76_c8LmWRk/edit#slide=id.g13c95719e2_0_0
18:20:46 ack atsushi
18:20:46 atsushi, you wanted to discuss just for comment but AR and gamepads modules are pre-CR, waiting on WG tasks to be performed...
18:21:11 Just requested access to slides
18:23:34 q?
18:24:14 Phu has joined #immersive-web
18:24:32 There were 2 more issues that were marked as f2f that are not on the agenda: https://github.com/immersive-web/webxr/issues/1276 and
18:24:32 https://github.com/immersive-web/anchors/issues/71
18:39:09 bialpio has joined #immersive-web
18:40:16 lol, sorry for poking around the menu, @ada
18:40:38 @ada, can you send out the link to your slides today?
18:42:50 tangobravo has joined #immersive-web
18:52:57 present+
18:53:09 present+
18:54:19 present+
18:55:07 nick: 8th Wall trying to push capabilities forward. Lack of camera access has been problematic.
18:55:30 ... for ar headsets would like to experiment with different effects that may not be provided out of the box
18:55:38 q?
18:55:43 ... things we see as important but not currently provided:
18:56:33 ... custom visual effects. Lots of interest today. Example of using the camera feed as a kaleidoscope.
18:56:35 q+ to clarify the blockers
18:56:44 ... also high quality visual reflections
18:57:28 ... image targets: they offer things like curved image targets (for bottles, etc) that no vendor they know of is providing today.
18:57:57 ... responsive scale: allows you to place content instantly without having to scan the scene for planes, etc.
18:58:37 ... responsive scale covers 90% of use cases, absolute scale (where scale is always 1:1) covers the last 10%
18:58:56 q+
18:58:57 ... When code doesn't have access to camera pixels these are all more difficult.
18:59:05 ... face effects is another area of interest.
18:59:46 ... Niantic, who just acquired 8th Wall, has its own suite of capabilities. Things like understanding the difference between sky/ground/water
18:59:57 ... for Pokemon, naturally. :)
19:00:10 ... Also looking into hand/foot/body tracking
19:00:19 ... Not the only team feeling this need
19:00:42 alexturn has joined #immersive-web
19:01:16 ... Another company (missed the name) needed it for multiplayer syncing, asked Nick to advocate for them today.
19:01:57 ... Proposal from the Chrome team is a proof of concept. 8th Wall was able to use it successfully.
19:02:30 present+
19:02:38 ... Question today is, given the wide range of use cases and where we see future hardware going, what are the next steps for moving this forward.
19:02:57 ... Have it on the roadmap, but would like to make it more concrete.
19:03:30 (Audio issues)
19:04:07 Brandel_Zachernuk has joined #immersive-web
19:04:45 Simon Taylor: Creating multiple WebAR projects, would like to use WebXR/ARCore. Lack of camera access has prevented them from making the move.
19:05:27 q?
19:05:27 yonet has joined #immersive-web
19:05:50 ... Would like to have more control of presentation (ed: Not a session mode switch?)
19:06:03 ... Current API is really mobile focused.
19:06:23 q+
19:06:27 ... camera is aligned to the frame. On headset it needs to be predictive and the camera is not perfectly aligned.
19:07:12 Previous discussion about handheld camera API vs. HMD camera API: https://github.com/bialpio/webxr-raw-camera-access/issues/1#issuecomment-816395808
19:08:35 (bajones: Sorry, missed some of what was being said next.)
19:08:57 q?
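[For context, a minimal sketch of the raw camera access shape prototyped in Chrome at the time, roughly per the webxr-raw-camera-access explainer; the `camera-access` feature name and `getCameraImage` method were experimental and subject to change:]

```js
// Sketch only: experimental Chrome shape, not a shipped standard.
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl', { xrCompatible: true });

const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['camera-access'],
});
await session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });
const glBinding = new XRWebGLBinding(session, gl);
const refSpace = await session.requestReferenceSpace('local');

session.requestAnimationFrame(function onFrame(time, frame) {
  const pose = frame.getViewerPose(refSpace);
  for (const view of pose?.views ?? []) {
    if (view.camera) {
      // Opaque WebGL texture holding the current camera image, aligned with
      // this view's projection (the mobile-friendly model discussed above).
      const cameraTexture = glBinding.getCameraImage(view.camera);
      // ...run custom effects / CV passes on cameraTexture...
    }
  }
  frame.session.requestAnimationFrame(onFrame);
});
```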
19:09:02 ack ada
19:09:02 ada, you wanted to clarify the blockers
19:09:17 ... Wondering if not having a separate immersive mode would help Safari implement the API?
19:09:31 ada: Wants to give background on TAG feedback.
19:09:44 ... it's a "giving away the farm" type API.
19:10:27 ... Finding ways to inform users of the privacy concerns of camera access can be overwhelming.
19:10:35 q+
19:10:56 ... don't want every experience immediately jumping to camera access-based solutions.
19:11:08 ... Users don't read dialogs
19:11:42 ... Relates to Rik's suggestions for simplifying entry to XR sessions.
19:12:19 ... Removing some normalization of the WebXR permissions requirements lets UAs put more emphasis on the most "scary" scenarios.
19:12:39 q+
19:12:45 ... Suggestions of fuzzing camera access to make it less easy to abuse, but that probably also affects usefulness too much.
19:12:58 ack kl
19:13:42 q+
19:13:46 piotr: Talking about TAG feedback and how to approach it
19:14:02 ... important to involve the right people in the discussions
19:14:24 ... We can try to figure it out, but UX/privacy people aren't on those calls
19:14:40 ... Shouldn't add normative text around permissions flow for that reason
19:15:02 ... Need to make sure that it's something browsers can experiment with
19:15:21 ... Maybe one browser sets a de facto standard that the others adopt as well
19:15:57 ... Need to make sure the user is informed of camera access in the same way as getUserMedia. (Icon showing access, for example)
19:16:32 ... Wanted to ask Nick about what aspect of Chrome's implementation doesn't live up to an MVP state?
19:16:52 ... Good to hear that it works for partial use cases, want to know what the misses are.
19:17:24 q+ to talk about WebRTC
19:17:30 ... How do headset-based implementations factor in? API currently says it's mobile-targeted, but don't want to close the doors on headsets
19:17:35 q-
19:17:53 ack alexturn
19:18:20 Previous discussion about handheld camera API vs. HMD camera API: https://github.com/bialpio/webxr-raw-camera-access/issues/1#issuecomment-816395808
19:18:56 alex: From the pasted link, there was a comment about gaps from the current state to MVP
19:19:21 ... this is one of the places where the needs for mobile/headset are hard to normalize
19:19:40 ... headsets have different offsets, latency, exposure, etc.
19:20:12 ... At this "raw" layer of the API some of the backend details start to show, and maybe that's OK?
19:20:45 ... Comment has a proposal for one way to get this information into the Web API
19:21:17 q?
19:21:20 q+
19:21:27 ... Curious what other people think. Conclusion from last year was maybe two API shapes are needed?
19:22:15 nick: Responding to a couple of things. For the current API, sub-MVP claims were specifically aimed at headsets. Works well for mobile.
19:22:35 q+
19:22:46 ... think it would be a mistake to separate the API. Think there's a way to modify the flow fairly simply to work well for both environments.
19:23:09 ... Different clock that camera frames can be on. Texture currently associated with XRFrame.
19:23:37 ack Nick-8thWall-Niantic
19:23:42 ... if we could decouple that it would go a long way towards addressing the issue. ARCore can still get one callback per frame.
19:24:02 q+
19:24:17 ... Need a timestamp to build a history of extrinsics to project into the world
19:24:56 ... Also need the camera field of view matrix. But decoupling frame timing would be most important, metadata would get us the rest of the way.
19:25:39 ... Re the feedback of giving up the camera feed, it's a problematic point of view for 8th Wall.
19:26:01 ... If they had to wait for browser implementation for everything they wouldn't be able to innovate for their customers.
19:26:17 dino has joined #immersive-web
19:26:27 q?
19:26:36 ... getUserMedia provides a good existing proof where there's a lot of useful things happening with it and users are appropriately informed.
19:26:52 ... No issues with the current user flow for getUserMedia
19:27:19 ... lots of real end-user value here that's not met by one-off approaches aimed at specific hardware.
19:27:22 ack klausw
19:28:29 klaus: TAG/privacy reviewers are not convinced that users are making informed choices with existing APIs. Feel that getUserMedia is too powerful.
19:29:04 q+
19:29:18 (FYI: I'll need to switch over to a different room in ~1 minute as I think I'll lose this one)
19:29:43 ... For marker tracking, the concern is that exposing platform capabilities yields unpredictable results around what is tracked.
19:30:47 ... One issue with the current API shape is that extending it to meet other use cases is a "slippery slope".
19:31:09 ... lots of nice properties about how things work on mobile (implicit alignment, etc)
19:31:43 ... If the camera feed has a different crop from what's on screen that may have privacy concerns.
19:32:20 ... If you want a more generic API that's more powerful, it could slow down the delivery of any API at all due to privacy concerns.
19:32:23 q-
19:32:48 ... Maybe don't see it as two separate APIs but as the more tractable first step
19:33:14 ... Also, we used to have inline-ar, which was not a good API but allowed for not going fullscreen.
19:33:15 q+
19:33:38 zakim, close the queue
19:33:38 ok, ada, the speaker queue is closed
19:33:49 ... don't have background on why we removed inline-ar.
19:34:27 ada: TAG really does view WebRTC's current capabilities as overstepping.
19:34:43 ... They may be looking at those APIs again
19:35:17 ... Back when the API was developed it was just for video calls. The idea of AR on top of it wasn't considered.
19:35:41 ... I think raw camera access is essential. Extensible APIs are good!
19:35:55 ... Having a higher-level API does help our messaging, though.
19:36:37 ... sites should only need the "scary" API to do advanced things.
19:36:39 q?
19:36:46 ack ada
19:36:46 ada, you wanted to talk about WebRTC
19:36:50 ack ada
19:36:50 ack tangobravo
19:36:53 ack tangobravo
19:37:58 simon: The problem with the privacy thing is that not having it on top of WebXR will move people to other implementations like 8th Wall that have (hypothetically) larger potential privacy concerns
19:38:36 ... Alex's proposal for decoupling frames sounds good.
19:39:29 ack alexturn
19:39:30 ... As a company we're not interested in privacy invasive use cases, obviously.
19:40:27 alex: Talking about knowing what people want. Seems like we know what the industry wants. Question is are we asking them to wait for something in the platform that they're not going to be able to use.
19:41:01 ... Can't have it both ways. "Need to use this thing first, but then we won't have the funding for the next step."
19:41:36 ... Maybe we can limit things like camera crops on mobile so what you see is what you get.
19:42:06 ... Wondering if what makes sense is to agree on the general shape and then figure out how to enhance privacy rather than weaken the power of the API.
19:42:37 q?
19:42:41 ack Nick-8thWall-Niantic
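[For comparison, the getUserMedia flow referenced above, i.e. the existing camera permission model the raw camera access proposal is being measured against, is roughly:]

```js
// Classic WebRTC-era camera access: one prompt, then a raw stream.
const stream = await navigator.mediaDevices.getUserMedia({
  video: { facingMode: 'environment' },
});
const video = document.createElement('video');
video.srcObject = stream;
await video.play();

// Once granted, pages can sample frames freely, e.g. into a canvas for CV work.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
```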
19:42:42 ... Defend the power that we're giving by explaining that it's not for a one-off use. Not seeing the path for how to get TAG excited.
19:43:12 ack Nick-8thWall-Niantic
19:43:12 q?
19:43:24 nick: Echoing what Alex said. Heard a lot about "TAG has concerns." That's fine, it's part of their job, but our job is to push back on the push back.
19:43:56 ... their concerns are legitimate, we need to work through them to deliver something that improves on user consent AND meets developers' needs.
19:44:23 ... if some of that compromise is having both a low-level and high-level API then we could try that.
19:44:38 q+
19:44:48 ... wouldn't want to land in a situation where we make that compromise but then only get one half of it.
19:45:05 q?
19:45:07 ack bajones
19:45:36 [I think prototyping the higher-level API on the raw access one might be a good way to explore whether there is value in that mid-level idea]
19:45:49 bajones: I think it was klausw who brought up that we used to have inline-ar but didn't know why it was removed
19:46:07 i want to shed light on it
19:46:43 rrsagent, this meeting spans midnight
19:46:57 we backed away from it because the privacy groups we worked with were concerned that users would be concerned that there is a camera feed on the page which they didn't opt into
19:47:25 immersive-ar was designed so that it would make it clear that the camera was being used in a particular context, as well as the additional tracking data
19:48:59 q+
19:49:09 there is also the issue of how much money and effort we can put into implementing multiple things, since Google has scaled back their implementations and we may end up with only one or two implementations
19:50:13 bajones: I think it's useful to listen to the TAG but I am not sure it's fair that the WebRTC mistakes should reflect negatively on what we are trying to do. The onus is on us to show that we are not making the privacy situation worse but we are making it better
19:50:35 dom: Pushing back on the push back is perfectly OK.
19:51:11 ... Maybe a useful olive branch is to use raw camera access to demonstrate the desired use cases for the TAG.
19:51:59 ... If doing a high-level/low-level approach is not feasible, using the existing capabilities is good for demonstrating need.
19:52:12 ada: It's lunch time!
19:52:48 See you all tomorrow... more inline-ar chat at 10am :)
20:16:49 yonet has joined #immersive-web
20:17:40 Hi everyone, we are running 10 minutes late for Depth sensing. Sorry about the technical issues and losing time.
20:17:51 bialpio has joined #immersive-web
20:32:19 manishearth_ has joined #immersive-web
20:32:27 zakim, choose a victim
20:32:27 Not knowing who is chairing or who scribed recently, I propose winston_chen
20:32:30 zakim, choose a victim
20:32:30 Not knowing who is chairing or who scribed recently, I propose Dominique_Hazael-Massieux
20:32:36 chair: yonet
20:32:40 agendum: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2022/schedule.md
20:32:43 scribenick: dom
20:32:45 q?
20:32:52 Brandel_Zachernuk has joined #immersive-web
20:32:56 ada: wanted to discuss merging depth sensing & occlusion
20:33:05 ... I've only seen depth sensing used for occlusion
20:33:07 q+
20:33:14 ... any other usage anyone wants to report?
20:33:22 ... any concern / support with merging them?
20:33:31 ack Nick-8thWall-Niantic
20:33:31 ack nick
20:33:33 ack nick
20:34:02 nick: in terms of depth sensing, I've seen a lot of good applications in native implementations
20:34:02 ack Nick-8thWall-Niantic
20:34:06 ... which would be great to bring to the Web
20:34:26 ... #1, physics: if you know where the surfaces are and their direction, you can have things bouncing off them
20:34:41 ... constructing a mesh from the depth map allows for better interactivity
20:34:49 Josh_Inch_ has joined #immersive-web
20:34:53 ... another use case is for example scanning apps to make 3D models of your environment
20:35:01 ... they tend to require a combination of depth & image API
20:35:12 ... it provides a low cost way to generate 3D models
20:35:36 q+
20:35:51 Ashwin has joined #immersive-web
20:35:58 q+
20:36:02 ... so more than just occlusion
20:36:19 Ada: if real-world geometry were as complete as depth, would it be a better fit for these use cases?
20:36:29 nick: would depend on the shape of real-world geometry & the details
20:36:51 ... if it comes with a detailed enough mesh, it would probably be OK for interactions / physics
20:36:55 ... not for scanning
20:37:20 ada: 3 options: separate depth & occlusion; merging depth into occlusion; vice versa
20:37:37 nick: if you solve occlusion with depth, that would be sufficient
20:37:53 ada: it's a pain to use depth for occlusion
20:38:46 ack bialpio
20:38:55 Piotr: ARKit implements a mesh API powered by their depth API
20:39:13 ... ARCore has a special variant of hit testing powered by depth
20:39:34 ... re occlusion vs depth, there are privacy aspects to this
20:39:51 ... in chrome, the depth buffer has limited resolution
20:39:55 Zakim track queue
20:40:08 ... if we had an API for occlusion where the site cannot access the depth data, we can probably provide a higher resolution API
20:40:11 zakim, open the queue
20:40:11 ok, ada, the speaker queue is open
20:40:25 ... that may be an advantage of having both APIs
20:40:30 q?
20:40:32 RRSAgent, draft minutes
20:40:32 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html dom
20:40:35 RRSAgent, make log public
20:41:17 cabanier: the quest has very limited depth sensing primitives
20:41:33 ... the planes API could be used to sense the walls, ceilings, a desk or a couch
20:41:46 ... we could introduce it to e.g. help put content in a room
20:42:05 Nick: for the quest, to what extent is passthrough contemplated for WebXR?
20:42:29 ... you could imagine an experience where, as you walk through the room, it renders your couch in the virtual space
20:42:42 q?
20:42:56 q+
20:42:59 ada: the planes API is part of real-world geometry?
20:43:14 Rik: no, it's its own specification
20:43:31 Ada: do we need to add it to the charter?
20:43:41 rik: if it's not, it should be added
20:43:57 ack bialpio
20:44:14 piotr: it's available in Chromium behind a flag; but depth would give more detailed information than the Planes API
20:44:25 ... we leverage ARCore, which is limited to horizontal & vertical planes
20:45:46 Alex: we're interested in helping with occlusion
20:46:09 ... depth is challenging to implement on hololens
20:46:17 q+
20:46:34 ack cabanier
20:46:42 cabanier: do you think the mesh is high quality enough for occlusion?
20:46:50 alex: we use it in native apps
20:46:55 q+
20:47:22 ... it's more about expressing it as a depth frame vs a mesh
20:47:23 ack bialpio
20:47:44 piotr: that reinforces my sense of keeping occlusion separate
20:48:01 ... otherwise this would require double code paths for managing occlusion
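[For reference, the depth sensing module as implemented in Chromium at the time looks roughly like this (CPU path); option and method names per the WebXR Depth Sensing draft, a sketch rather than a guaranteed-stable API:]

```js
// Sketch, assuming the WebXR Depth Sensing module's CPU-optimized path.
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['depth-sensing'],
  depthSensing: {
    usagePreference: ['cpu-optimized'],
    dataFormatPreference: ['luminance-alpha'],
  },
});
const refSpace = await session.requestReferenceSpace('local');

function onFrame(time, frame) {
  const pose = frame.getViewerPose(refSpace);
  for (const view of pose?.views ?? []) {
    const depth = frame.getDepthInformation(view);
    if (depth) {
      // Depth at the center of the view, in meters: usable for occlusion,
      // simple physics, or seeding a reconstructed mesh as discussed above.
      const meters = depth.getDepthInMeters(0.5, 0.5);
    }
  }
  frame.session.requestAnimationFrame(onFrame);
}
session.requestAnimationFrame(onFrame);
```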
20:48:27 Topic: Lower Friction to Enter
20:49:31 Josh: we see lots of different entry points used by developers, e.g. "enter VR" buttons
20:49:54 ... we've been looking at ways of exposing this in the browser chrome to make it easier for users to identify & recognize
20:50:16 mkeblx has joined #immersive-web
20:50:18 ... once the user clicks it, they enter the WebXR screen
20:50:32 q+
20:51:22 q+
20:51:26 cabanier: a possible implementation would replace requestSession, possibly with an implicit associated consent
20:51:46 Q+
20:51:51 q+
20:51:55 alexturn has joined #immersive-web
20:51:59 ... this separation also helps with providing a consistent approach, one that doesn't depend e.g. on the viewport size
20:52:32 ack bajones
20:52:33 @@@: what signal is used to have the button in the chrome?
20:52:34 offerSession?
20:52:42 q+
20:52:52 cabanier: this would be via a new API that replaces requestSession
20:52:53 +1 offerSession
20:53:04 @@@: having it declared as early as possible would be useful
20:53:16 q+ to ask about if there is no browser chrome
20:53:30 cabanier: one of the challenges is to tie it with assets being loaded
20:53:40 ... this is not just a signal that the site is VR-compatible
20:53:41 q+
20:54:20 bajones: this has overlap with navigating into a Web page
20:54:23 laford has joined #immersive-web
20:55:21 ... ideally this would be the same mechanism
20:55:30 q+
20:55:40 ack bialpio
20:56:05 piotr: how do you handle rejecting the permission prompt, in case it can be rejected? will that be exposed to the web site?
20:56:17 offerSession returns a promise that completes or errors when the offer completes?
20:56:36 ... is a new API really needed? can we piggyback on requestSession being promise-based as the trigger to expose the button
20:56:53 q?
20:57:22 cabanier: there would be no way to reject the permission prompt if we count clicking the button as accepting permission - you could always navigate away
20:57:47 q+
20:58:08 ack manishearth_
20:58:32 manishearth: what will the situation be for pages with multiple potential VR sessions? I guess they would have to use requestSession
20:58:43 ... but this would create fragmented approaches to entering a session
20:58:57 ... that may be fine, but it's worth thinking about it
20:59:54 bajones: if people start relying on it as the primary way, this may break interop if a browser doesn't implement the chrome-based ux
21:00:02 manishearth: this could be polyfilled though
21:00:14 ack Nick-8thWall-Niantic
21:00:16 nick: people are confused by the different idioms of entering VR
21:00:43 ... user education is always a problem
21:01:15 q+ to mention that accepting the session could be "higher cost" than similar actions in the same space, like setting bookmarks.
21:01:20 ... having it in the chrome creates different design challenges where you sometimes have to point users to parts of the chrome UI, but that isn't stable over time
21:02:02 ... for multiple sessions, another way would be to use a VR navigation as a way to route the user to the different experiences
21:02:46 ack dylan-fox
21:03:02 dylan-fox: thinking about screen reader users - you accidentally click on a button that launches a VR thing that whisks you away from where you were
21:03:43 ... low vision people have complained about e.g. the pin used in Mozilla Hubs that they can't use
21:03:53 ... having it multimodal and undoable is important for accessibility
21:04:20 ack ada
21:04:20 ada, you wanted to ask about if there is no browser chrome
21:04:58 Ada: developers using supportSession for fingerprinting would be exposed, which may be interesting
21:05:23 ack dom
21:06:35 dom: I like the proposal, having a consistent way to enter XR would be useful. I think it would be good to have a declarative approach. I understand it's necessary to have the assets loaded, but the page could have a declarative approach that says "I support VR/XR/AR". It also opens up additional points regarding search and other metadata use cases. I also think it ties in nicely to Brandon's navigation
21:06:48 ack cabanier
21:07:10 cabanier: very few web sites have more than one VR experience - can only think of a demo site
21:07:44 ... in terms of accessibility, I think this would be an improvement - users wouldn't have to hunt for a button, the browser provides direct access to the VR experience
21:08:16 ... the declarative markup approach with a dimmed button until assets are loaded is worth investigating
21:08:26 ... and +1 on exploring the questions of overlap with navigation
21:08:35 q+
21:08:48 ... overall, hearing support for exploring this
21:09:53 ack alcooper
21:10:13 AlexC: interesting & potentially useful proposal; some concerns about removing a permission prompt - I think that should be left to the user agent
21:10:20 q+
21:10:24 ... we should also expect sessions can fail
21:11:24 ack bajones
21:11:24 bajones, you wanted to mention that accepting the session could be "higher cost" than similar actions in the same space, like setting bookmarks.
21:11:38 q-
21:11:46 bajones: looking at some of the icons that sit alongside this in the example demo
21:12:14 ... many of those are mostly low cost - e.g. creating a bookmark, or opening a menu
21:12:24 ... entering VR is more disruptive
21:12:46 ... if you do it accidentally
21:12:58 idris has joined #immersive-web
21:12:59 ... This may argue for leaving some friction here
21:13:42 ... re declarative approach - separating the question of being VR-capable and being VR-ready would also be useful in the context of navigation
21:14:14 ack cabanier
21:14:21 Re: Icon - Interesting to reference experiences like Sketchfab https://sketchfab.com/3d-models/tilly-0e54f44e56014e079572207a29788335
21:14:28 cabanier: +1 on this signal being useful for navigation
21:14:39 ... +1 on not being prescriptive on the permission prompt
21:15:00 q+ to ask about the loaded state
21:15:01 ... we can run user studies to find the right approach
21:15:11 aysegul: would be really useful to share the results of these studies
21:15:12 q?
21:15:19 ack ada
21:15:19 ada, you wanted to ask about the loaded state
21:16:16 ada: having a "ready to show content" signal would be useful for this context; that signal would also be useful to launch in a PWA context
21:33:12 dylan-fox has joined #immersive-web
21:35:03 unconference page: https://docs.google.com/presentation/d/1iIWMt-jM1UToQ9Fo4KQz0g5JuqYbGdLvvENL4YBnCG0/edit#slide=id.g1256acf68be_6_0
21:36:02 ^just requested edit access
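[A hypothetical sketch of the offerSession idea discussed above; the name and shape were only a proposal at this point, nothing here was specified:]

```js
// Hypothetical: the page offers a session up front; the browser surfaces its
// own "Enter VR" affordance in the chrome and resolves the promise when the
// user takes it. offerSession is a proposed name, not a shipped API.
async function init() {
  if (navigator.xr && 'offerSession' in navigator.xr) {
    const session = await navigator.xr.offerSession('immersive-vr');
    onSessionStarted(session);
  } else {
    // Fallback: today's idiom, an in-page button plus requestSession.
    enterVrButton.onclick = async () => {
      onSessionStarted(await navigator.xr.requestSession('immersive-vr'));
    };
  }
}
```

[The dimmed-until-loaded declarative variant floated above would presumably sit on top of the same promise, with the page signaling "VR-ready" once assets have loaded.]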
21:36:09 Topic: Does WebXR need a permission prompt?
21:36:31 scribe: Dylan Fox
21:36:34 scribenick: dylan-fox
21:36:51 bialpio_ has joined #immersive-web
21:37:21 Brandel_Zachernuk has joined #immersive-web
21:37:21 Ada: thinking about this as a way to help get raw camera access past the TAG
21:37:41 Put a lot of effort into making core parts of WebXR available by default - wary of using a big hammer for a small teacup
21:37:59 Save permission prompts for APIs more worth getting the user's explicit consent on
21:38:25 ...Let users distinguish between what's important and what's not so they can make more informed decisions
21:38:56 bialpio has joined #immersive-web
21:38:57 If we ask for permission for everything, they won't notice that others are more invasive
21:39:06 (btw let me know if there's syntax I should be using as scribe)
21:39:31 q+
21:39:35 q+
21:39:39 q+
21:39:40 q+
21:39:45 Nick: what's the prompt?
21:40:12 Ada: would suggest that permission requests for certain modules under the WebXR umbrella be non-notifying, non-mandatory
21:40:19 browsers could choose not to show permission prompts for certain things
21:40:42 Josh: like entering a VR space?
21:41:03 q+
21:41:08 ^can do
21:41:30 Ada: is making certain requests non-normative a good idea?
21:42:25 Manishearth: not against the spirit of the thing, but against the non-normative part; may make security experts mad at us
21:42:37 Ada: so "may" rather than forced?
21:43:17 Manishearth: Language of the spec doesn't talk about prompts exactly - prompts are just one way to get permission. Browser can also say "you already have this permission"
21:43:41 ...Can make that more explicit, let people that have opted in bypass it
21:44:20 ...State of the internet is already that it's opt-out
21:44:40 I think this is the section Manish is referring to: https://immersive-web.github.io/webxr/#user-consent
21:44:42 q?
21:44:49 ack bialpio
21:45:00 ...Need non-normative text saying which permissions are granted
21:45:56 Piotr: Consider that some of the things we want to ask for consent about won't be known in advance; be very careful with the idea of "implicit consent"
21:46:13 ack alexturn
21:46:27 ...API could be configured in advance in settings, and the browser could view that as consent
21:46:54 Alex: differs based on form factor; for a mobile browser it may be common to use AR features, whereas on headset it may be more rare/disruptive to jump into an experience
21:47:02 q-
21:47:07 ...in favor of giving ourselves more flexibility; fine with doing that in ways that are normative
21:47:09 ack bajones
21:47:51 Brandon: Agree that the text is already in a state that allows for this; just need different measures such as explicit vs implicit consent
21:48:38 ...Some features' use is well understood to be covered by implicit consent, e.g. the user clearly signals they want to enter an experience
21:48:40 q+
21:48:54 ...Text as written gives us leniency, esp when it comes to frequency of prompting
21:49:18 ...Value to the first few times a user goes into an immersive session - give them instructions on "here's what you're about to do, here's how to get out," other onboarding
21:49:32 ...Once that's well understood we don't need to announce every single time
21:50:12 ...direction should be normative text only when helpful; consider on a case by case basis
21:50:24 ...Going into XR on headset vs phone vs desktop is different
21:50:30 q?
21:50:30 q?
21:50:31 ...May not even realize that the headset on my shelf is lit up
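[For reference, the consent hooks being discussed hang off the feature lists passed to requestSession; per the spec's user-consent language linked above, whether a given feature produces an explicit prompt is left to the UA. A minimal sketch:]

```js
// The UA decides how consent is gathered per feature: an explicit prompt,
// or implicit consent inferred from the user's action of entering the session.
const session = await navigator.xr.requestSession('immersive-vr', {
  requiredFeatures: ['local-floor'],   // session creation fails if not grantable
  optionalFeatures: ['hand-tracking'], // granted silently, via prompt, or dropped
});
```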
21:50:35 ack cabanier
21:50:48 Rik: the way the spec is written gives a lot of flexibility
21:51:06 ...don't even need user actions, necessarily; could just go straight to VR
21:51:15 ...OK for the browser to decide
21:51:22 ack manishearth_
21:51:25 Ada: so you're saying we're already in the situation I'm asking for. cool
21:51:57 Manish: to add to what Brandon said, implicit and explicit consent... the spec never mandates one or the other
21:52:08 ...concepts were there as hooks so we could discuss
21:52:23 ...How do you do a permission or explicit consent in VR?
21:52:44 ...We could mandate explicit in some cases but I don't think there's any situation where we'd want to do so across the board
21:52:48 ...Many different levels of trust
21:53:03 q
21:53:44 dylan-fox: when it comes to explicit consent in VR, we probably need to think more about whether that's a pop-up, a specific gesture, ...
21:54:03 Ada: next topic is scent-based peripherals
21:54:05 Topic: Scent based peripherals
21:54:48 Alex: smell input or smell output?
21:54:50 Ada: smell input
21:54:57 Ada: smell output
21:54:59 q?
21:55:31 Ada: can't remember who put this on the docket but there are companies designing these
21:55:42 Brandon: don't want to be personally liable for the failure case
21:55:51 Nick: way to use the gamepad API to implement this?
21:55:52 -> https://github.com/immersive-web/proposals/issues/74 Smell-o-vision (e-nose) #74
21:56:40 Ada: can leave this to webUXD (sp?) or web bluetooth
21:56:53 MichaelB: difference between that and audio?
21:57:55 dylan-fox: an alternative to motion controls is useful in the context of accessibility
21:58:36 Dylan: is there merit to discussing switch control or e.g. the universal Xbox accessible controller?
21:58:41 q+
21:58:50 webUXD => webUSB, also webHID (https://developer.mozilla.org/en-US/docs/Web/API/WebHID_API)
21:59:20 Brandon: within the gamepad API there are recommended mappings; Chrome has simple "map this to A" type features, whereas Steam lets you assign anything you want
21:59:30 ...Not sure if you can reassign motion to button presses
21:59:41 ...At that point the browser treats the motion controller like any other
21:59:54 ...That is generally the right level for accessibility controllers to come into play
22:00:12 ...Don't want to broadly advertise that someone is using an accessibility-focused controller - don't want to expose people to fingerprinting
22:00:23 ...Avoid broadcasting "I'm a user with a disability"
22:00:44 ...Let the user be in control of information
22:00:58 ...Wish those capabilities were more widespread
22:01:12 Ada: is that the type of thing that would work well on the Quest? E.g. plugging in a Microsoft accessible controller
22:01:51 ...and using it as a VR controller
22:02:27 ...In terms of operating functionality to allow remapping, to let people with accessibility requirements use non-quest controllers, but the scene doesn't know it's a non-quest controller
22:02:56 Rik: could be hypothetical because we don't track that right now
22:03:14 Brandon: website says it's possible to hook up a non-quest controller but it might show up as a regular gamepad
22:03:35 ...showing up as generic input might be outside of capability
22:04:35 winston has joined #immersive-web
22:04:58 ack Nick-8thWall-Niantic
22:05:21 dylan: would be great to be able to support e.g. mouth controllers, or to have Steam VR Walkin Driver style functionality w/o notifying the system of disabled status
22:05:40 Nick: made a joystick input control that worked across desktop and mobile headsets; on desktop could use an Xbox controller
22:06:03 ...Found that on Quest and/or HoloLens, Xbox controllers worked well in the web browser until you entered VR, at which point they stopped working
22:06:18 ...counterintuitive that the existing gamepad API stops itself from working and letting you use these controllers in an immersive setting
22:06:23 q+
22:06:39 ...Really fun to take an Xbox controller and run a character around, using the joystick to drive virtual content
22:06:59 ...Why is the old-school gamepad disabled in WebXR? What would it take to get Xbox controllers as Xbox controllers in WebXR as gamepads usable for this use case?
22:07:06 q+
22:07:12 Brandon: not aware of anything blocking the API from working
22:07:51 ...A normal gamepad will not show up as one of the XR input sources because we want to differentiate b/t inputs that are specifically tracked
22:08:07 ...if it's dropping the gamepad when you go into a VR session on any device, that sounds like a bug, and it should be filed
22:08:18 Nick: can't remember if it was HoloLens or Quest or both
22:08:38 Brandon: HoloLens I could see doing a mode switch, because it goes out of its way to normalize input across different modes
22:08:45 ...but spec-wise, nothing should prevent that from happening
22:08:56 Nick: in that case, I'll sync with my team
22:09:16 ...gamepads are working well within XR, they're fun
22:09:36 Manish: have had the opposite discussion, of whether XR controllers should be exposed through navigator.getGamepads
22:09:52 ack cabanier
22:09:53 ...existing gamepads should just work; if they don't it's a bug in the implementation
22:09:56 ack manishearth_
22:10:14 Rik: didn't know Xbox controllers were supported, would like to investigate
22:10:25 ...for the VR API we hardcoded controllers/hands, but for OpenXR it should just work
22:10:31 q?
22:11:21 Ada: nice to have a conversation around compatibility and accessibility
22:11:25 ...now for a 20 minute break
22:16:23 sorry everyone it just got pointed out the unconf doc was view only
22:16:28 it's now editable!
22:16:33 massive oversight on my part
22:16:58 here is the link feel free to add things: https://docs.google.com/presentation/d/1iIWMt-jM1UToQ9Fo4KQz0g5JuqYbGdLvvENL4YBnCG0/edit?usp=sharing
22:32:26 Brandel_Zachernuk has joined #immersive-web
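[For reference, the two code paths being contrasted above: tracked XR controllers arrive via session.inputSources, while ordinary gamepads should keep flowing through navigator.getGamepads() even inside a session. A minimal sketch:]

```js
function onXRFrame(time, frame) {
  // Tracked XR controllers: exposed as input sources, each with an
  // 'xr-standard'-mapped Gamepad attached for buttons/axes.
  for (const source of frame.session.inputSources) {
    if (source.gamepad && source.gamepad.mapping === 'xr-standard') {
      const [, , thumbX, thumbY] = source.gamepad.axes; // thumbstick per xr-standard
    }
  }
  // Ordinary gamepads (e.g. an Xbox pad): per Brandon's point above, nothing
  // spec-wise removes them during a session, so if they vanish here it's an
  // implementation bug worth filing.
  for (const pad of navigator.getGamepads()) {
    if (pad && pad.mapping === 'standard') {
      const [leftX, leftY] = pad.axes; // drive a character, locomotion, etc.
    }
  }
  frame.session.requestAnimationFrame(onXRFrame);
}
```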
22:34:28 Ada: Marker tracking, yay!
22:34:47 ...Wanted to bring it up to get status, see if there are blockers
22:34:55 topic: Marker tracking
22:35:12 ...tracking is important b/c one feature people want is shared anchors, which is a pain to implement but very important
22:35:25 q?
22:35:32 ...marker tracking would give us a version of shared anchors w/ the requirement of having a physical object or at least something drawn on a physical object
22:35:35 q+
22:35:40 q+
22:36:04 Alex: Marker tracking in the sense of having some marker is key; we support QR code tracking on HoloLens using the head tracking cameras
22:36:16 ...Vuforia and others will use other cameras
22:36:34 ...Very interested in lighting this up so people can track against known QR codes
22:36:41 ...particular feature is QR code tracking
22:36:44 Brandel_Zachernuk_ has joined #immersive-web
22:37:01 ...trying to zero in on the right feature subset
22:37:02 q+
22:37:35 ack alexturn
22:38:06 Klaus: 2 things; one is that it may make sense from an API perspective to distinguish QR codes from other types
22:38:17 ...Avoid the surprise that you won't know if images end up being trackable
22:38:32 ...avoid the case of having a universe of mutually incompatible tracker images
22:39:10 ...may be difficult to add features like analytical surfaces unless the underlying platform supports it
22:39:27 ...concerned about launching without a clear view of how it's going to be used
22:40:05 Ada: already have implementations for camera access; you said marker tracking will take a while?
22:40:37 Klaus: no, saying there may be new requirements like tracking cylindrical surfaces or other things not offered by the underlying platform
22:40:39 q+
22:41:00 ...if another library exposes it, it may take a while to get that new type of marker standardized and available for wider use
22:41:17 ...Possible if there's raw camera access you can do marker tracking by running your own javascript code
22:41:46 ...Apart from privacy issues, could use e.g. the full field of view of the camera for tracking, incl parts that are not currently visible on screen
22:42:09 Ada: raw camera access a blocker?
22:42:16 ack dylan-fox
22:42:17 Klaus: not in a technical sense, just somewhat entangled
22:42:44 https://www.navilens.com/en/
22:43:17 q+
22:43:24 dylan-fox: one use case I wanted to bring to people's attention: when it comes to low vision navigation there is a group called NaviLens that uses RGB QR codes that are more easily visible and allow people with low vision to navigate a public space; it's used in Barcelona public transport
22:43:33 ack klausw
22:43:35 q+
22:43:38 ...there are lots of different types
22:43:43 ack alcooper
22:44:26 Alex Cooper: there is a disjoint between what platforms can support and what people want wrt QR code tracking
22:44:37 ...Don't know how well ARCore would do with e.g. curves
22:44:54 ...This is a case where raw camera access is a lot more powerful and guaranteed to span that whole set of things that people are looking for
22:45:29 ...Features are entangled from a roadmap perspective; raw camera access could be a way to fill in the gaps that even a full implementation of marker tracking may not be able to meet
22:45:57 Ada: so if chrome had good support for raw camera and it could also do QR codes well, you could use image tracking using raw camera access under the hood?
22:46:08 Alex: That's the thing we're measuring out now
22:46:17 ...Sounds like some of that might be available right now through raw camera access
22:46:33 ...It's less privacy preserving but I don't know if all the runtimes right now can support the breadth of markers we want to track
22:46:47 ...Don't know if there's a runtime that meets developer requirements with marker tracking
22:47:00 Ada: Is there any tool that supports QR codes?
22:47:20 Alex: I know ARCore does not support QR codes, not sure about HoloLens; seems like Microsoft preferred them
22:47:40 Alex Turner: from a platform perspective we have QR codes but not image tracking
22:47:50 q+
22:48:09 Ada: they keep telling us to do something alongside raw camera access, but if we tell them there is no overlap in the platforms for the different types of images we want to support
22:48:31 ack klausw
22:48:32 ...We can tell them that raw camera access would enable platforms to fill the gaps, enable more
22:48:39 ...Could give us leverage for getting raw camera access done
22:48:42 q+ to talk about the polyfill idea
22:48:58 Klaus: currently no overlap; one of us is looking at ARKit capabilities but haven't heard back from Apple
22:49:32 ...don't see a feasible path to get something into the marker tracking api because it's a somewhat niche marker type
22:50:04 ...would need browser side implementations that seem quite difficult, vs very doable through raw camera access
22:50:15 ...if you don't have raw camera access there's no way you can do anything at all
22:50:16 ack dom
22:50:35 Dom: if there's a way to expose tracking capabilities to developers...
22:50:45 q+
22:50:48 ...say you provide a way to run a shader on the raw camera stream
22:50:48 q+
22:51:04 ...without access to the raw camera stream, the developer could identify things based on the shader
22:51:27 ...use case is tracking, not other kinds of processing
22:51:41 ...perhaps we could give that type of optimized processing on the raw camera stream without hard coding specific things
22:51:45 q+
22:51:50 ...should be up to the developer to provide the right tool to do the processing
22:52:02 ...developer could offer code, then the client returns results
22:52:22 Ada: reminds me of AR.js, where you put your image into a special trainer, and it generates a file
22:52:28 q?
22:52:35 ack alexturn
22:52:35 alexturn, you wanted to talk about the polyfill idea
22:52:38 ...Take your image, put it into a tool someone develops, it returns a blob of code you run and it tells you things
22:52:56 Alex T: thinking of something along those lines; how do you restrict output? Would need to provide forcibly low bandwidth output
22:53:06 ...make sure that what you get out is not smuggled data exposed to the outside
22:53:14 ...need to figure out how to limit the expressiveness of code stuffed in the box
22:53:24 Dom: That's why I'm thinking of a shader
22:53:42 Alex T: I know some people used shaders to do e.g. timing attacks; would take a lot of security research
22:54:11 ...In the interim, I wonder if we could do the polyfill based on markers; e.g. a polyfill that uses the webcam on device could recognize multiple images
22:54:23 ...We write a proof of concept polyfill to show how to do it without one or the other
22:54:34 ...Warning you get is scary if you have to use the camera stuff, but it could be simpler if you don't
22:54:42 ...but who's going to write the two halves of that polyfill?
22:54:47 laford has joined #immersive-web
22:54:54 Ada: probably bundle both polyfills with the browser
22:55:14 q+ to ask if there'd be timing issues with tracking
22:55:22 Alex T: if we had someone to write it we could open source the code. But still looking around to see who has bandwidth to do it for free
22:55:36 Ada: that's a big blocker, I know many of the people here are not paid to work on OpenXR
22:55:52 q?
22:55:54 Alex T: could hook up the API once the heavy lifting has been done
22:55:56 ack bialpio
22:56:22 Piotr: 2 comments, the first one is related to how we can provide some kind of secure enclave for the CV algorithms to run in
22:56:38 ...Been trying to chat with people about the challenges there; seems like the main worry is side channels we can't fully patch up
22:57:01 ...If there's already something like a secure enclave or something that provides us that kind of capability on the web platform I'd be very happy to use it
22:57:11 ...but very concerned about how to devise this kind of mechanism; outside of my expertise
22:57:28 ...Might not be feasible at this point but it's a topic that recurs every time we chat about this API
22:57:46 ...Other comment is that it seems like we can try to leverage raw camera access as a way to prototype things on the web platform
22:58:06 ...Maybe then we can have ammo to justify moving some use cases into the web platform, like building into the browser as opposed to saying sites can do the same thing in Javascript
22:58:27 ...QR codes, tracked images, curved images being popular could be enough to justify putting the work into doing it in the browser
22:58:45 ...Let's see which ideas are popular and maybe for the ones that are super popular we can add them to the platform
22:59:15 Ada: Slight worry: many APIs implemented a low-level version, then the people with passion/energy/bandwidth moved to other projects
22:59:21 ...Might ship raw camera access then stop there
22:59:40 ...Would be nice if there were simple alternatives
22:59:49 q?
22:59:55 ack klausw
22:59:56 ...If it's not done in 5 years I'll keep bringing it up
23:00:20 Klaus: most of what I wanted to say was covered; about doing this as a polyfill or browser side, there are libraries such as ___ which claim to be able to do this
23:00:38 ...seen it working for some; an experiment along these lines could be helpful
23:00:57 ...If you have a software implementation that can handle one type well, that could be sufficient
23:01:06 ...One application wouldn't need to support two kinds of markers if it has one kind that just works
23:01:47 ...If there's lower latency or it uses less power or something, that would be a reason for people to move to using the high-level API over the raw one
23:01:58 ...doesn't mean we shouldn't be doing the low level API
23:02:11 q?
23:02:12 Ada: never suggested not doing the low-level API, just want to make sure we don't forget the high level one
23:02:14 ack Nick-8thWall-Niantic
23:02:43 q-
23:02:49 Nick: A couple of responses; first, talking about providing shader access to images
23:03:11 ...As a data point, when we're doing image tracking, the output of the shader we run is very different from the original image, but it is a 1024x1024 texture; not low bandwidth
23:03:19 ...A lot of information we use to do subsequent processing
23:03:29 ...The idea that you can do all your computation in a shader is not how WebGL works
23:03:52 ...Not really sure what the cool extra things you can do in other systems are, but WebGL is about taking graphics and turning them into other things
23:04:10 ...Another point was around implementations, around things like polyfills and reference implementations
23:04:28 ...Reference for a polyfill would be very complicated; may include trade secrets
23:04:36 q+
23:04:40 ...Open source versions are generally considered a lower quality bar than ARCore or other solutions
23:04:51 ...Not the kind of problem where all implementations are created equal
23:05:09 ...Even for QR code tracking, there are open source libraries that do very good tracking but have nontrivial implementations
23:05:21 ...Often take a high quality, complicated solution and use web assembly to make a version of it
23:05:25 q+
23:05:31 ...Your reference stack is a giant black box
23:06:01 ...On the topic of doing processing on the GPU, when you do QR code scanning the only thing we're doing in a shader is shrinking the image before passing it to the QR code detector
23:06:11 ...there are legitimate reasons to get full images out of a camera feed
23:06:12 ack ada
23:06:41 Ada: thinking about low quality, that's fine; there are lots of companies that make money by providing good SLAM built on top of camera access
23:07:16 ...Some companies may need something like 8th Wall to provide more stuff than what the higher level API would provide
23:07:24 q+
23:07:39 Nick: becomes more concerning when an "official browser level polyfill" sets expectations around how things are supposed to work
23:07:52 ...could be the thing that everyone uses, even though it may not work in the way you want
23:08:06 ...Different expectations and standards setting vs finding a library on the web and using it
23:08:06 ack alcooper
23:08:34 Alex Cooper: wondering if there is such a clear-cut line of what different runtimes can support or polyfill
23:08:52 ...almost like having 2 portions where image/marker tracking would make sense
23:09:13 ...may need to be concerned about the fingerprinting effect, but can get some hint of "you can do high level tracking of QR codes on this platform" if there is a dividing line
23:09:16 ack klausw
23:09:33 Klaus: the issue is that you won't be able to query if features are available until you actually start a session
23:09:54 ...you could potentially see if we have required features like marker and image tracking
23:10:06 ...if you put in both as optional features then you only know at runtime
23:10:25 ...Would be nice if people knew what's available but the way APIs are currently designed you won't know until you're already in the session
23:10:31 ...whether your image is trackable or not
23:10:52 ...Not meant as an "official" polyfill, agree with concerns that we don't want to platform something until it meets some quality and support bar
23:11:00 ...Moreso thinking of a proof of concept
23:11:04 q?
23:11:16 ...Should work decently but not necessarily like state of the art performance
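[For reference, Chrome's experimental image tracking shape at the time looked roughly like the sketch below; the `image-tracking` feature, `trackedImages` option, and `getImageTrackingResults` were behind a flag and not standardized:]

```js
// Sketch of the experimental WebXR image-tracking shape (Chrome, behind a flag).
const img = await createImageBitmap(document.getElementById('marker'));
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['image-tracking'],
  trackedImages: [{ image: img, widthInMeters: 0.2 }],
});
const refSpace = await session.requestReferenceSpace('local');

function onFrame(time, frame) {
  for (const result of frame.getImageTrackingResults()) {
    if (result.trackingState === 'tracked') {
      // Pose of the tracked image; result.index maps back to trackedImages.
      const pose = frame.getPose(result.imageSpace, refSpace);
    }
  }
  frame.session.requestAnimationFrame(onFrame);
}
session.requestAnimationFrame(onFrame);
```

[As Klaus notes above, whether a supplied image is actually trackable is only known once the session is running.]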
23:11:32 Ada: No last words? OK, I think we can wrap this up
23:11:42 ...Not the answer I was hoping for but it's good to know the state of stuff
23:11:57 ...Rest of the time was slated for Unconference topics
23:12:12 ...There's a slide deck with a few topics listed
23:12:13 https://docs.google.com/presentation/d/1iIWMt-jM1UToQ9Fo4KQz0g5JuqYbGdLvvENL4YBnCG0/edit#slide=id.g1256a5d4206_5_0
23:12:18 ...There are three topics
23:12:39 ...Let's do a 15 minute break, then 10 minutes per topic, then done
23:31:46 Topic: Unconference
23:32:01 cabanier: Today we have input-profiles
23:32:03 Subtopic: Controller Meshes
23:32:03 Brandel_Zachernuk has joined #immersive-web
23:32:07 scribenick: laford
23:32:23 ...works ok. App responsible for going to the repo and downloading the glTF
23:32:37 ...Some people copy them. Sometimes we want to update them.
23:32:54 ...Implies a rename
23:33:21 ...When you switch to OpenXR we received different joints, resulting in a broken hand mesh
23:33:38 ...OpenXR could solve this
23:33:40 q+
23:33:53 ...Is it a necessary burden on the browser?
23:34:16 q+
23:34:21 ...Also strange that you have to go to a site to download arbitrary controller mesh data
23:34:29 ...Some devs don't like it
23:34:46 ack alexturn
23:34:55 q+
23:35:18 alexturn: Currently discussing it in the working group
23:35:35 ...If the goal is removing the need for access to the repo, might need to do other things to finish the job
23:35:51 ...It also contains data to map buttons to physical controller features
23:36:18 ...People would still need to go to the CDN for that
23:36:25 ack ada
23:36:33 cabanier: the CDN won't go away, but have the option to use stuff on the device
23:37:11 cabanier: For WebXR, totally reasonable to return a glTF
23:37:28 alexturn: From OpenXR we're giving out the same binary data that is in the repo
23:37:59 ...part of the render model discussion is how much data to give out and what the capabilities of the model would be.
23:38:07 ack bajones
23:38:11 ...Is it "here's a model and here's how to articulate it", or something else
23:38:38 bajones: Could implement an effective polyfill
23:38:59 ...The input profile repo is never going to go away as that's the location where we register the profile info as well as the meshes
23:39:00 q+ to talk about the OpenXR mapping
23:39:36 ...It's an open question how much we need to solve these problems
23:39:53 ...You can presume the models might load faster locally
23:40:30 ack alexturn
23:40:30 alexturn, you wanted to talk about the OpenXR mapping
23:41:08 alexturn: Would love to be able to encode the mapping from WebXR input-profiles to OpenXR interaction-profiles
23:41:14 ...Would be fairly straightforward
23:41:25 ...Have it be data-driven
23:42:17 bajones: Another thing is to match whatever OpenXR is doing and flow it up through the layers
23:42:27 ...Would effectively consecrate glTF as a web standard
23:42:42 q?
23:42:54 Subtopic: 3D DOM elements
23:43:08 q+
23:44:00 alexturn: Difference between a model tag (inline 3D DOM) and scene describing tags
23:44:07 ...Maybe there is overlap and there is alignment
23:44:09 q+
23:44:22 ...A sphere is not as useful unless it's part of a suite of scene primitives
23:44:39 bialpio has joined #immersive-web
23:45:23 ada: Additional CSS where there is a keyword that applies transforms in 'real space'
23:45:25 q+
23:45:45 ... e.g. in real space this object is X far out and Y skewed
23:45:52 @media immersive { }
23:45:56 ...would prefer this over new scene description stuff
23:46:09 ...Instead we can leverage the existing CSS 3D transform stuff
23:46:22 ack alexturn
23:46:23 ack alexturn
23:46:25 ack bajones
23:46:49 bajones: Agree with Ada. If you were to do it you'd not pick up any CSS with 3D transforms. You'd break everything!
23:46:59 ...People do stupid things to do what they want
23:47:09 ...CSS already has lots of fundamental 3D capabilities
23:47:20 CSS Unflatten Module Level 1
23:47:43 ...Biggest challenge, bar rewriting the CSS implementation, is how to contain the scope
23:47:50 qq+ to recall we were suggesting to wait for tomorrow for this discussion
23:48:15 ...e.g. how do you keep the volume of your page to something reasonable?
23:48:43 qq+ to see if that magic command only works for dom
23:48:51 q-
23:48:54 q+
23:49:12 bajones: You'd have limits on how big volumes can be
23:50:29 present-
23:50:30 [this sounds a lot like what CSS has to deal with for the "print" media feature]
23:50:30 ...Magic Leap has a definition for this stuff that may be leveraged
23:50:55 [where you can declare the size of the page your styles are targeting, e.g. A4 vs US-letter]
23:51:20 ...Content beyond z bounds is clipped
23:51:22 ack dom
23:51:22 dom, you wanted to react to bajones to recall we were suggesting to wait for tomorrow for this discussion
23:51:53 dom: Reminder that we wanted to wait till tomorrow
23:51:59 ack cabanier
23:52:07 Ada: "it's cool beans"
23:52:19 cabanier: ML has 3D CSS, went through the CSS working group
23:52:30 ...Proposal, but backburnered
23:52:59 https://github.com/w3c/csswg-drafts/issues/4242
23:53:13 Ada: Can still work on it and push to the CSS WG when it makes sense
23:53:23 https://github.com/w3c/csswg-drafts/issues/2723
23:53:52 https://github.com/immersive-web/model-element
23:54:11 ack laford
23:54:41 -> https://www.w3.org/TR/mediaqueries-5/#environment-blending 5.5. Detecting the display technology: the environment-blending feature in CSS Media Queries 5
23:54:59 Here is the repo for 3D DOM stuff: https://github.com/immersive-web/detached-elements
23:55:10 q?
23:55:32 Subtopic: Accessibility & W3C Transparency
23:56:17 Immersive Captions CG Final Draft https://docs.google.com/document/d/1P-T5S9pDBbcAGrlJDvbzG0QBLTV1GfrtabfkmohZP6w/edit?usp=sharing
23:56:23 https://www.w3.org/community/immersive-captions/
23:56:28 W3C immersive captions community group has put out a final draft
23:56:46 ...More about the lived experience side vs the technical implementation side
23:56:53 q+ to talk about captions and video
23:57:12 ...Second link is a project to define the Accessibility Object Model
23:57:24 ...Intended to make immersive content accessible
23:57:32 ...e.g. alt-text for 3D objects
23:57:44 ...Huge topic that a lot of folks are talking about
23:57:53 A11yVR Meetup - Apr 12 2022 - Building a More Accessible Social Virtual Reality World https://www.youtube.com/watch?v=yF4I263OiMs&ab_channel=A11yVR-AccessibilityVirtualReality
23:58:53 XR Semantics Module https://www.w3.org/WAI/APA/wiki/XR-Semantics-Module
23:58:54 ...Ensuring that people can leverage what we are working on
23:59:34 ...How can we tie all these together?
23:59:40 ...Should not duplicate work
23:59:54 XR Access Symposium, June 9-10th http://xraccess.org/symposium/
23:59:58 [the XR Semantics Module is part of the Accessible Platform Architectures Working Group wiki FWIW]
00:00:41 Contact Dylan Fox, Coordination & Engagement Team Lead info@xraccess.org
00:00:42 ...Want to make sure WebXR has the best shot it can at supporting accessibility
00:00:59 q?
00:01:01 ack
00:01:03 ack ada
00:01:03 ada, you wanted to talk about captions and video
00:02:09 Ada: Captions on spherical WebXR layers suffered an oversight. They should have been part of the platform and not something users need to implement
00:02:28 ...Implementation is doable but you need to do all the user experience stuff yourself
00:04:15 ...If I'm wrong and you can just have a video element with subtitles, that should work in WebXR as you attach it to a layer, but it might not look correct
00:04:35 ...If it doesn't look correct, then that is an issue with video elements on WebXR layers
00:05:09 XR Accessibility project - open source resources: https://xra.org/GitHub
00:05:52 ?
00:05:55 q
00:07:06 Zero Zero, 826 Folsom St, San Francisco, CA 94107
00:09:28 rrsagent, publish minutes
00:09:28 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:11:26 i/nick: 8th Wall trying to push capabilities forward/scribe+ bajones/
00:11:28 rrsagent, publish minutes
00:11:28 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:13:13 i/dom: there's been a lot of discussion about the metaverse/topic: Metaverse Workshop - Dom/
00:16:08 i/Here are the slides: https/topic: Successes/
00:16:10 rrsagent, publish minutes
00:16:10 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:18:00 i/nick: 8th Wall trying to push capabilities forward./topic: Raw camera access API/
00:18:01 rrsagent, publish minutes
00:18:01 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:19:51 i/ada: wanted to discuss merging depth sensing/topic: Depth Sensing & Occlusion/
00:20:38 i/bajones: I think it's useful to listen to the TAG/scribe+ ada/
00:20:40 rrsagent, publish minutes
00:20:40 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:21:30 present+
00:21:31 rrsagent, publish minutes
00:21:31 I have made the request to generate https://www.w3.org/2022/04/21-immersive-web-minutes.html atsushi
00:21:35 rrsagent, bye
00:21:35 I see no action items