17:03:07 RRSAgent has joined #immersive-web 17:03:07 logging to https://www.w3.org/2022/04/22-immersive-web-irc 17:03:13 Zakim has joined #immersive-web 17:03:20 Agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2022/schedule.md 17:03:21 Present+ 17:03:26 Chairs: Ada, Aysegul 17:03:28 bialpio has joined #immersive-web 17:04:59 Present+ Lazlo, AlexC, KlausW, SimonT, PiotrB 17:05:08 ada has changed the topic to: Day 2 Zoom: https://mit.zoom.us/j/91083789145?pwd=NnpXQk5BMENPTFJ5eWVabnFCcXpGZz09 17:05:21 tangobravo has joined #immersive-web 17:05:42 present+ 17:06:03 present+ 17:06:27 Topic: Inline AR 17:07:02 Nick-8thWall-Niantic has joined #immersive-web 17:07:26 -> https://github.com/immersive-web/webxr-ar-module/issues/77 Handheld AR use cases need more than immersive-ar #77 17:07:27 Josh_Inch has joined #immersive-web 17:07:33 scribenick: alexturn 17:08:38 mkeblx has joined #immersive-web 17:08:59 tangobravo: This is the web build of our AR platform 17:09:05 tangobravo: We overlay menus on top 17:09:44 tangobravo: Using WebAssembly to detect various things 17:10:01 tangobravo: Can capture GIFs, do marker-based image tracking 17:10:17 Ashwin has joined #immersive-web 17:10:35 idris has joined #immersive-web 17:11:16 tangobravo: Also works with ARKit 17:11:18 q+ 17:11:24 laford has joined #immersive-web 17:11:32 tangobravo: Can pause camera tracking and unpause it 17:12:17 tangobravo: Valuable to support a mode of accessing WebXR tracking data without losing control of the DOM 17:12:24 q+ to ask about DOM overlays 17:12:28 q+ 17:13:18 tangobravo: With an inline AR option, experiences that don't need to do screen capture or their own tracking could do world tracking 17:14:17 bajones has joined #Immersive-Web 17:14:21 tangobravo: Should be some way to get a bit more data into the DOM 17:14:28 q+ 17:14:33 q+ 17:14:40 tangobravo: Tied for us with raw camera access, since having to start a full session is a blocker 
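For context on the proposal above: WebXR already defines a non-exclusive `inline` session mode (without AR), which is the mode tangobravo's inline-AR idea would extend with world tracking. A minimal sketch of requesting one today — the function name and the caller-supplied canvas are illustrative:

```javascript
// Minimal sketch of a standard WebXR "inline" session, the
// non-exclusive mode that an "inline AR" option would build on.
// The function name and the `canvas` argument are hypothetical.
async function startInlineSession(canvas) {
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  const session = await navigator.xr.requestSession('inline');
  await session.updateRenderState({
    baseLayer: new XRWebGLLayer(session, gl),
  });
  // 'viewer' is the only reference space guaranteed for inline sessions
  const refSpace = await session.requestReferenceSpace('viewer');
  session.requestAnimationFrame((time, frame) => {
    const pose = frame.getViewerPose(refSpace);
    // render into gl using pose.views here; page keeps control of the DOM
  });
  return session;
}
```

The point of the discussion is that this inline mode currently provides no AR tracking data; the camera feed and SLAM require an exclusive `immersive-ar` session.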
17:14:58 ack ada 17:15:23 ada: Could you use DOM overlay? 17:15:48 tangobravo: One of the challenges is to get the right aspect ratio. For example, to avoid the notch on the phone. 17:16:44 ada: Can you switch to front camera? 17:17:24 Nick-8thWall-Niantic: Front facing camera doesn't do SLAM - not an option 17:17:48 Nick-8thWall-Niantic: If you swap to the front-facing camera and had to leave the WebXR session, it would be jarring 17:18:23 Nick-8thWall-Niantic: Requiring exclusive session means experience is very different across ARCore and non-ARCore phones 17:18:29 q- 17:18:32 q? 17:18:42 ack alcooper 17:19:06 yonet_ has joined #immersive-web 17:20:05 ack Nick-8thWall-Niantic 17:20:18 q+ 17:20:40 alcooper: Shouldn't have to change the aspect ratio just because it's an inline session - we should be able to just keep it the same even if there is a mode switch 17:20:51 Nick-8thWall-Niantic: I strongly agree with Simon here 17:21:18 Nick-8thWall-Niantic: Here's a prototype of how a shopping experience can work within a web page 17:21:40 Nick-8thWall-Niantic: Not at all confusing to users 17:21:48 Nick-8thWall-Niantic: Can go full screen if desired, but it's optional 17:22:19 q 17:22:21 Nick-8thWall-Niantic: As I said before, it's a real challenge to build one experience across multiple backends 17:22:41 Nick-8thWall-Niantic: At the same time, this experience doesn't make sense on a HoloLens or a Meta Quest 17:22:57 Nick-8thWall-Niantic: What does it mean to have the view punched out of your browser - would just be a tiny view and not make sense 17:23:39 Nick-8thWall-Niantic: If I could wave a wand, getUserMedia would provide intrinsics and extrinsics 17:24:48 Nick-8thWall-Niantic: One argument before was that we are leaking data about the device like extrinsics. However, we already can build lookup tables and so there's not really extra data leaked there. 
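Nick's "wave a wand" direction can be sketched against today's API surface. Only the plain capture below is real `getUserMedia`; the commented-out constraint asking the UA to annotate frames with camera intrinsics/pose is hypothetical — no such constraint exists:

```javascript
// Sketch of the getUserMedia-style direction described above.
// The plain capture is standard API; the pose/intrinsics constraint
// in the comment is HYPOTHETICAL and does not exist today.
async function getWorldFacingCameraTrack() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
    // hypothetical extension, not a real constraint:
    // video: { facingMode: 'environment', cameraPoseAnnotations: true },
  });
  // the experience keeps full control of the DOM; tracking (today)
  // must be done in page script, e.g. via WebAssembly SLAM
  return stream.getVideoTracks()[0];
}
```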
17:24:55 dino has joined #immersive-web 17:25:24 ack bajones 17:25:26 Nick-8thWall-Niantic: Is that getUserMedia annotation perhaps the right path forward here? 17:26:27 bajones: I do also wonder whether these punchouts are what you'd want on headsets 17:26:45 bajones: I also am not sure about multiple AR sessions? 17:27:23 jrossi2 has joined #immersive-web 17:27:36 bajones: The fact that you are showing the camera feed on the page without a mode switch could freak people out 17:27:49 q+ to ask about mode switch but stay in inline, like camera permissions 17:28:04 q? 17:28:22 bajones: Does showing the camera by default make users believe that the app can capture the camera? 17:29:58 ack klausw 17:30:13 klausw: About using DOM overlay, the camera field of view would cover the whole phone screen 17:31:06 klausw: You could fix this with raw camera access, but then you are basically back to raw camera permissions 17:31:10 q+ 17:31:32 scribe+ 17:31:39 ack alexturn 17:31:39 alexturn, you wanted to ask about mode switch but stay in inline, like camera permissions 17:31:48 alexturn: are we conflating a few things? we're talking about camera permissions, mode switch 17:32:11 ... inline today would go through camera permissions 17:32:16 q 17:32:30 ... we don't need to remove permissions to enable inline from the start 17:32:36 q+ 17:33:02 ... 
upgrading the camera feed with AR tracking while keeping permissions would be good first step rather than stalling 17:33:06 scribe- 17:33:09 ack cabanier 17:33:22 cabanier: I am a bit worried that this is a mobile only feature 17:33:45 cabanier: If we want to do inline AR, need a path for headsets too 17:33:50 q+ 17:33:51 q+ to say pose-annotated camera feeds would also be useful for headsets 17:33:57 ack tangobravo 17:34:21 tangobravo: My view is that inline AR is specific to camera-based platforms like mobile 17:34:51 tangobravo: For me, layout is the big problem on mobile, let's keep the browser in charge of compositing and not ask for camera permissions 17:35:09 tangobravo: Because that is not device agnostic, perhaps that doesn't sit inside WebXR? 17:35:47 ack Nick-8thWall-Niantic 17:35:50 tangobravo: The camera approach ticks all the boxes, but perhaps we'd want a more privacy-sensitive approach 17:36:08 Nick-8thWall-Niantic: There are other interesting challenges that arise if you were to try to take a WebXR session without substantial changes and put it into a frame on a page 17:36:28 Nick-8thWall-Niantic: When you start an XR session today, it kills the page's animation frame loop and starts the XR loop 17:36:59 q+ 17:37:01 Nick-8thWall-Niantic: Designed as a full takeover 17:37:13 Nick-8thWall-Niantic: Things might expect both loops to run 17:37:48 klausw: Regarding mobile-only vs. headsets, yesterday we talked about an async camera feed - sounds like it could also solve this use case 17:38:09 klausw: This could lead to solutions that work on headsets too 17:38:55 q+ 17:39:08 klausw: Question is how we can do this to balance privacy and power 17:39:10 q+ 17:39:20 klausw: TAG may not be OK with even more power in getUserMedia 17:40:01 klausw: Would we want to do this basically as a wrapper around ARCore/ARKit APIs? Is that too hardware-specific? 
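The "async camera feed" klausw mentions lines up with the mediacapture-transform work (`MediaStreamTrackProcessor`), which is the kind of per-frame pipeline a getUserMedia-based tracker would sit on. A sketch, assuming a Chromium-class implementation of that spec (it was Chromium-only at the time of this meeting):

```javascript
// Sketch of per-frame camera processing via mediacapture-transform.
// Assumes MediaStreamTrackProcessor is available (Chromium at the time).
// `onFrame` is a caller-supplied callback, e.g. WASM SLAM or marker detection.
async function processCameraFrames(track, onFrame) {
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  for (;;) {
    const { done, value: frame } = await reader.read();
    if (done) break;
    onFrame(frame);  // run computer vision on the VideoFrame
    frame.close();   // VideoFrames hold GPU/driver resources; close promptly
  }
}
```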
17:40:13 q+ 17:40:19 ack klausw 17:40:19 klausw, you wanted to say pose-annotated camera feeds would also be useful for headsets 17:40:22 ack alcooper 17:40:33 zakim, close the queue 17:40:33 ok, ada, the speaker queue is closed 17:40:40 Closing the queue since this is getting long 17:40:50 alcooper: Regarding killing the page frame loop, nothing in the spec requires that 17:40:51 [I don't know if people here know about / have tried https://w3c.github.io/mediacapture-transform/ which provides a framework for real time video frame processing for mediastreams] 17:40:59 alcooper: May just be a bug 17:41:27 klausw: Pretty sure that window rAF keeps working 17:41:50 alcooper: Key privacy concern is how users know the camera isn't being captured 17:42:09 q? 17:42:10 alcooper: Still not seeing how this is a strict blocker 17:42:19 Nick-8thWall-Niantic: 17:42:20 window.rAF test: https://ardom1.glitch.me/ 17:42:26 q- 17:42:54 Nick-8thWall-Niantic: From a developer point of view, it's really clear that the best path forward is a getUserMedia-style approach 17:43:25 Nick-8thWall-Niantic: Putting aside potential TAG concerns - we decided TAG is there to provide feedback and our job is to provide feedback from users 17:44:07 Nick-8thWall-Niantic: Is there an appetite to expand the reach of ARCore substantially by exposing it through getUserMedia 17:44:26 ada: If getUserMedia is "best", should we just approach them? 17:44:39 Nick-8thWall-Niantic: I don't know the right people there 17:44:45 ada: Dom is a good way to find the right people 17:44:53 Nick-8thWall-Niantic: Us bringing it to them is not a great way to get buy-in 17:45:14 q? 17:45:19 ack Nick-8thWall-Niantic 17:45:25 scribe+ 17:45:28 ack Nick-8thWall-Niantic 17:45:40 alexturn: maybe there is something for us to write in a table with the various privacy trade-offs 17:45:52 ... to help build a shared understanding 17:46:11 ... 
for HoloLens, we have people using the native equivalent of getUserMedia to do this 17:46:44 q+ 17:46:50 ... we should look at what it would take to bring this to getUserMedia 17:47:28 Ada: TPAC might be a good time for this kind of coordination 17:47:54 https://github.com/immersive-web/webxr-ar-module/issues/78 - my thoughts on gUM vs a new session type. For me, a new session type allows leveraging all the WebXR stuff around spaces and poses more directly 17:47:56 dom: I'm also the staff contact for the WebRTC group where getUserMedia is discussed 17:48:24 dom: I doubt the WebRTC group would want to take up addition of AR things themselves, but they would care how we do it 17:48:46 dom: Getting agreement among ourselves on how we would do this is important - once we do that, let's just have a joint call, before TPAC 17:49:02 dom: I don't expect the WebRTC Working Group would want to own this, but they'd offer guidance 17:49:21 dom: There are more modern approaches like MediaCaptureTransform 17:49:52 When I looked into FaceDetection, my proposal (because it leveraged the front facing camera) was to leverage gUM, and I did talk with some people internally and was pointed to the InsertableStreamsAPI: https://github.com/w3c/webrtc-encoded-transform/blob/main/explainer.md 17:49:55 dom: Some TAG individuals expressed thoughts, but no formal pushback 17:50:47 alcooper, insertable streams has evolved into 2 paths: webrtc-encoded-transform probably not as relevant as https://w3c.github.io/mediacapture-transform/ 17:51:22 also, re face detection, there have been discussions on how to integrate camera-driven face detection, which has a lot of similarities with what we're talking about here (I think) 17:51:52 see https://github.com/riju/faceDetection/blob/main/explainer.md (although that particular proposal hasn't been adopted by the WebRTC WG) 17:54:53 Sorry, I just grabbed the first link from my proposal; I think I was proposing something slightly different; I'll note that my 
proposal was also ~a year and a half ago and we never pushed it further forward either: https://github.com/alcooper91/face-mesh 18:03:19 Brandel has joined #immersive-web 18:09:36 scribenick: Josh_Inch 18:09:53 Topic: Are 8-bit outputs sRGB encoded? 18:10:04 zakim, open the queue 18:10:04 ok, ada, the speaker queue is open 18:10:05 Rik: on the web, colors are sRGB encoded, but we write colors into an RGB buffer 18:10:13 -> https://github.com/immersive-web/webxr/issues/988 Are 8-bit outputs sRGB encoded? #988 18:10:43 Rik: you get a double conversion which makes everything too light. Used to be done with a hack 18:11:17 Rik: Not optimal, you have to do a lot of things that pretend to be RGB 18:11:49 Rik: Anybody from Microsoft or Google know about this? They seem to have the same issues 18:12:08 q+ 18:12:31 q- 18:12:34 ack alexturn 18:12:41 ack bajones 18:13:19 bajones: so I'm not an expert, but this is an area that is a constant issue for the web as a platform. Am curious: in OpenXR, is it required or just preferred to use sRGB? 18:13:26 Rik: it is the preference 18:13:33 q+ 18:13:56 bajones: might be an issue, but no question we should do something better. When we get to WebGPU we could do the right thing. I don't know what it is though 18:15:10 bajones: in this case it needs to be an explicit signal, something about how to create a layer. That way we are not making differences between platforms or surprising anyone. Imagine we could do the same thing for the WebXR layers API. It takes an explicit format. Rik: yes, it does 18:16:25 bajones: I think the big thing is that out of necessity we will have to convert, and use the pathway to convert the RGB texture. 
Maybe we can define it as part of the layers API and back-port it from that interface. Don't know what the right shaders are for that 18:16:53 q? 18:16:58 ack alexturn 18:16:58 bajones: No matter what we do we are going to have to deal with a lot of content out there that is not designed for this 18:17:40 q+ 18:17:49 alexturn: this sRGB stuff is one of those topics with so many moving parts, hard to keep top of mind. I would love to have this conversation with the folks who wrote that code. I know there are ways you can signal OpenXR. This is painful in the OpenXR layer. 18:18:23 alexturn: I remember you had discussions on how to make this work directly; people found a path that worked... then stopped. There is likely a better way to do it. We should find the people who wrote the code and discuss with them 18:19:00 ack alcooper 18:19:24 alcooper: it's almost certain that the actual implementation came from Rafael or Patrick at Microsoft 18:19:26 q? 18:19:38 q+ 18:19:53 https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/modules/canvas/htmlcanvas/canvas_context_creation_attributes_module.idl;l=41?q=colorSpace 18:19:56 rik: just wanted to raise it; when we do WebGPU we can keep it in mind and fix it 18:20:00 ack klausw 18:20:57 looks like there's a colorSpace attribute for 2D canvas context creation 18:20:59 q+ 18:21:17 see also https://chromestatus.com/feature/5807007661555712 18:21:23 klausw: looks like it's a colorSpace for 2D 18:21:45 ack babage 18:21:46 anyway, don't have much context on this (no pun intended), but looks like work is being done 18:22:17 ack cabanier 18:22:17 cabanier: while klaus is figuring that out - I'm not sure how widespread the implementation is; I think in Chrome it is respected. Know there is a lot of history. May not be reliable. 18:22:50 q? 18:22:51 rik: that property is actually the colorspace. When they say sRGB... they are talking about gamma corrected. 
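The "double conversion makes everything too light" effect Rik describes earlier in this topic falls straight out of the sRGB transfer function. A small self-contained illustration (standard IEC 61966-2-1 formulas, not code from any implementation under discussion):

```javascript
// sRGB transfer functions, illustrating the double-conversion problem:
// gamma-encoding an already-encoded value brightens it further.
function srgbEncode(linear) {
  return linear <= 0.0031308
    ? linear * 12.92
    : 1.055 * Math.pow(linear, 1 / 2.4) - 0.055;
}
function srgbDecode(encoded) {
  return encoded <= 0.04045
    ? encoded / 12.92
    : Math.pow((encoded + 0.055) / 1.055, 2.4);
}

const once = srgbEncode(0.5);   // ~0.735: correct encoding of mid-gray
const twice = srgbEncode(once); // ~0.873: double-converted, "too light"
```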
I am specifically talking about linear and gamma corrected 18:23:05 git blame looks like Patrick did the implementation 18:23:28 agendum: https://github.com/immersive-web/webxr/issues/1275 18:24:10 ada: I'm not sure if this is an issue for discussion 18:24:13 Topic: -> https://github.com/immersive-web/webxr/issues/1275 Prepare for implementation report 18:24:44 ada: correct me if I am wrong, but I think this is that we need to politely ask the working group to prepare an implementation report to move the spec further along. 18:24:47 q+ 18:25:06 ada: we can either do it now, or we can see if the independent implementation in Chrome is enough, or we can wait for Apple as well 18:25:30 ada: anyone have a strong opinion? Personally not pressed to do so 18:25:47 ack dom 18:26:00 q+ 18:26:57 dom: it's down to us to determine what's needed for independent implementation; in our case we are thinking of very different underlying platforms. I don't think we should shy away from taking that approach. I don't think I would make it a key requirement for us 18:27:57 dom: clearly there have been many independent implementations. There is a question about how we measure and document the implementations. 18:28:23 ack bajones 18:28:48 bajones: How soon do other specs move from CR to the next stage where they are in independent implementations? 18:29:19 bajones: just wondering what the traditional timeline is? 18:29:34 dom: 6 months 18:31:16 dom: the only question of timeline is how much of this group's time it is prudent to spend writing the test cases, running them, and reviewing the results 18:31:31 https://wpt.fyi/results/webxr?label=master&label=experimental&aligned 18:31:36 q? 18:31:41 q+ 18:32:06 ack jrossi2 18:32:11 jrossi: two questions on this 18:32:28 jrossi: is the expectation that we are comfortable with manual tests, or do we need automated? 18:32:37 q+ 18:32:43 ack jrossi 18:33:18 jrossi: I agree with the thesis in other specs about platform differences. 
One thing this group should think about intentionally is that, having made the journey to WebXR, we should stress test across mobile implementations and ensure it feels interoperable for other devs 18:33:30 ack dom 18:34:01 dom: if we are OK to run them manually, there is always value in having automated tests, if only for regression testing 18:34:39 dom: I agree with that assessment. Maybe not just test cases but also a small prototype 18:35:04 ada: automatically verifying that an immersive experience feels "good" is difficult 18:35:26 agendum: https://github.com/immersive-web/webxr/issues/1272 18:36:18 Topic: -> https://github.com/immersive-web/webxr/issues/1272 XRWebGLLayer: Opaque framebuffer + antialias + blitFramebuffer conflict #1272 18:37:54 bajones: this is just a request from a developer whose page does a lot of JavaScript renderings of old game models. He's good about digging into minutiae. One of the things that he called out is that because of the fuzzy language around antialiasing in the WebXR WebGL layer, it doesn't seem like devs can depend on a certain type of antialiasing or web technique that we use. Generally this would be considered invisible but there are som[CUT] 18:38:15 bajones: request to get some clarity around what kind of antialiasing we can get here 18:38:47 bajones: is anyone aware of an implementation that would cause problems here? Suggest that antialiasing should pass a framebuffer. 18:39:12 bajones: would be shocking if they requested antialias false and they got one anyway 18:39:29 q? 18:40:09 bajones: this issue is saying that we need to tighten up the language around antialiasing. Do you know if there is anything that would prevent Microsoft's mixed reality or HoloLens hardware from doing that 18:40:49 one thing with HoloLens and HoloLens 2 is that there is hardware reprojection involved. It does what it does. If someone was insisting antialias off, what are they looking to ensure by turning it off? 
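The developer-side pattern under discussion can be sketched with today's API. This is an illustrative sketch only (the helper name is made up; `session` and an XR-compatible `gl` context are assumed to exist); `XRWebGLLayer` and its read-only `antialias` attribute are real spec surface:

```javascript
// Sketch: explicitly requesting no antialiasing on the opaque
// framebuffer, then checking what the implementation actually honored.
// Assumes an active XRSession `session` and an xrCompatible WebGL `gl`.
function makeNonAntialiasedLayer(session, gl) {
  const layer = new XRWebGLLayer(session, gl, { antialias: false });
  // layer.antialias reports the UA's actual choice; the issue is that
  // the current spec language leaves this fuzzier than devs need when
  // they want predictable blitFramebuffer behavior.
  if (layer.antialias) {
    console.warn('antialias: false was not honored; blit behavior may differ');
  }
  return layer;
}
```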
18:41:27 bajones: In this case he wants it off so that framebuffer functionality is consistent and known. 18:41:44 ada: is it even possible to copy out of a WebXR layer framebuffer 18:41:56 bajones: it's not possible to copy out of it; you should be able to copy into it though 18:42:08 bajones: depends on whether the target is multisample or not 18:42:19 bajones: I don't see much of a downside 18:42:46 alexturn: if somebody wants to turn it off, we are in a position to do that 18:43:33 bajones: the dev has said: we have said that these buffers are opaque 18:44:00 q+ 18:44:04 bajones: all of this to say that we have created an unintentional blind spot. Following the suggestions from this dev would allow developers to consistently and confidently interact with buffers. I think we should put this in the spec 18:44:25 ack cabanier 18:45:07 cabanier: so in the case of multisampling, we use the multisampled-render-to-texture extension; special behavior: if you render to multisample you don't actually render to the buffer. If you try to copy out or in there might be issues, because I think one of the things was whether there are sample buffers 18:45:45 bajones: yeah, so my understanding of that situation is it's special in that sample buffers return 1 because you are not dealing with a multisample texture. 18:46:04 bajones: the output is multisample 18:46:58 bajones: the actual buffer is single sample; you could ask for antialias, turn it on, ask for sample buffers, get back 1 - means you could have a single sample buffer and it wouldn't act exactly like one. Sort of confusing, but that is the reason why we should expose it, so that devs can do that and know what behavior to expect 18:47:10 ada: thank you Brandon, is that enough? 
18:47:13 bajones: yes 18:48:36 Navigation slides: https://docs.google.com/presentation/d/1kjAsL9NebaroqQL7thRH6DjRzeIrrTkTBccynP94OKY/edit?usp=sharing&resourcekey=0-nUlPh2G4vHRRjlNcpgkGlw 18:52:05 https://docs.google.com/presentation/d/1ewsefsmLFKIv0fRExCf1VzgvkepSJnrxn76_c8LmWRk/edit?usp=sharing 18:57:49 agendum: https://github.com/immersive-web/navigation/issues/13 18:58:35 bzachernuk has joined #immersive-web 19:01:26 agendum: https://github.com/immersive-web/navigation/issues/13 19:03:04 Topic: -> https://github.com/immersive-web/navigation/issues/13 Navigation update 19:03:15 present? 19:03:15 present+ 19:03:18 present+ 19:03:19 present+ 19:03:29 Zakim, who's here? 19:03:29 Present: dom, Lazlo, AlexC, KlausW, SimonT, PiotrB, cabanier, yonet_, alexturn, ada 19:03:31 On IRC I see bzachernuk, jrossi2, dino, yonet_, bajones, laford, idris, Ashwin, mkeblx, Josh_Inch, Nick-8thWall-Niantic, tangobravo, bialpio, Zakim, RRSAgent, alexturn, klausw, 19:03:31 ... lgombos, alcooper, dom, atsushi, [old]freshgumbubbles, fernansd, Chrysippus, dietrich, SergeyRubanov, etropea73101, Manishearth, ada, `join_subline, cabanier, NellWaliczek, 19:03:31 ... 
sangwhan, cwilso, iank_, rzr, babage 19:03:33 present+ 19:03:39 zakim, choose a victim 19:03:39 Not knowing who is chairing or who scribed recently, I propose dom 19:03:50 zakim, choose a victim 19:03:50 Not knowing who is chairing or who scribed recently, I propose Josh_Inch 19:03:52 zakim, choose a victim 19:03:52 Not knowing who is chairing or who scribed recently, I propose Lazlo 19:03:54 zakim, choose a victim 19:03:54 Not knowing who is chairing or who scribed recently, I propose KlausW 19:03:57 bajones_ has joined #Immersive-web 19:03:57 zakim, choose a victim 19:03:58 Not knowing who is chairing or who scribed recently, I propose PiotrB 19:04:03 zakim, choose a victim 19:04:03 Not knowing who is chairing or who scribed recently, I propose ada 19:04:14 present+ 19:04:17 present+ 19:05:45 slideset: https://lists.w3.org/Archives/Public/www-archive/2022Apr/att-0003/XR_Navigation_thoughts.pdf 19:05:48 [slide 2] 19:05:55 [slide 3] 19:06:44 [slide 4] 19:06:51 [slide 5] 19:07:42 Brandon: There is a difference in the context for links in VR. Introduces navigation contexts 19:08:57 [slide 6] 19:09:00 bajones_: On a page, if you are hovering a linked element and the user clicks, they navigate. The same metaphor can apply to XR: in certain situations the context can imply that if you take a navigation action then you will navigate. Like standing in a doorway or holding an item 19:09:23 navigation destination needs to be shown to the user as some form of trusted UI 19:09:49 no navigation happens until the user takes a trusted action, i.e. a device button 19:10:15 [slide 7] 19:10:43 q+ 19:10:48 RRSAgent, draft minutes 19:10:48 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 19:10:55 bajones_: navigation can never be triggered by the page - i.e. location.href kicks you out of VR - and needs to happen with a trusted, non-spoofable gesture. 
19:11:06 RRSAgent, make log public 19:11:36 i/slideset:/scribenick: ada 19:11:42 RRSAgent, draft minutes 19:11:42 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 19:12:12 [slide 8] 19:12:28 bajones_: in addition, the page cannot observe that you are about to navigate, to stop it from swapping the navigation context 19:13:21 [slide 9] 19:14:21 [slide 10] 19:14:40 q+ how would people get out of fade to black if the link is not loading? 19:14:47 bajones_: the transition to the new page is fade to black -> interstitial environment -> new site 19:14:59 [slide 11] 19:15:41 [slide 12] 19:15:55 bajones_: this could be used to show navigation-related information during the interstitial, such as the favicon, an equirect map, or a simple model 19:16:22 Meeting: Immersive Web F2F - Day 2 19:17:39 bajones_: having a pose for letting people select a link is a vector for abuse, 19:17:51 bajones_: adding contextual information can be really helpful 19:18:26 ada: mitigations for rapid switching, to avoid spam when the developer is constantly changing things rapidly 19:18:38 either the position of the link 19:18:53 there probably needs to be some timeout 19:20:26 this could be used to trap users by making the whole page a navigation context, so that pressing the navigate button to leave the page instead takes you back to your current location 19:21:12 [slide 13] 19:21:14 back links are tricky, but i think the mechanisms for it are already present and could be a community-standard 19:22:11 [slide 14] 19:23:14 bajones_: accepting navigation requests. 
Sites which are able to be XR need to signal that; we talked a little about this yesterday during the easy-enter-XR discussion 19:23:33 since it needs to be dynamic, a declarative tag is probably not the appropriate solution 19:23:46 q+ to ask about xr pages using things like a model element 19:24:55 q+ 19:25:12 bajones: should be before the window's DOMContentLoaded event 19:25:53 [slide 16] 19:26:39 [slide 17] 19:26:46 there was some discussion about using isSessionSupported as a signal; it does have an unexpected side effect that it may train users to not trust the button if it's frequently used for fingerprinting 19:26:49 RRSAgent, draft minutes 19:26:49 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 19:27:09 should it be used to allow sites to swap mode for hardware and sites which support both 19:27:09 q+ 19:28:19 bajones_: should sites be able to signal how far along loading is 19:28:22 q? 19:29:30 cabanier: thanks for the slides, it's a big topic 19:29:31 ack ada 19:29:31 ada, you wanted to ask about xr pages using things like a model element 19:29:35 scribe+ 19:29:48 ack cabanier 19:30:07 ada: what about sites that would be using e.g. a model tag - in general, with XR content not using WebXR 19:30:14 ... would this be appropriate for this? 19:30:40 bajones: interop between declarative and imperative xr would be good 19:31:06 ... 
the model content may not necessarily be a good fit for navigation, but it could lead to a gallery of sorts 19:31:11 scribe- 19:31:32 ack jrossi 19:32:45 jrossi2: I think, needing a dynamic hook, it should probably be another specific event rather than DOMContentLoaded. I think we would still like a declarative signal for XR since it would really help the UA know that the target site intends to be XR 19:33:45 jrossi2: we also want to think about people travelling together; bookmarks could be used to expose a target destination for each person 19:34:33 jrossi2: also a declarative signal allows search engine indexing 19:35:35 bajones_: my initial thoughts were if some of it has to be script-driven it should all be, rather than a two-parter. But I can see where you are coming from. 19:36:30 jrossi2: regarding the two tiers, it doesn't need to be HTML, just any early reliable hint, such as headers 19:36:55 ada: sorry to jump in, but a tag can be a header or an <a> tag 19:37:01 ack laford 19:37:37 laford: i'm not entirely convinced it needs to be dynamic but that is a longer discussion 19:38:21 q+ 19:38:25 laford: having to redefine what a link is feels like a smell because it's so fundamental to web concepts 19:38:41 ack jrossi2 19:39:38 lgombos_ has joined #immersive-web 19:39:58 jrossi2: I agree that navigation without a button is a bad idea. As a hot take, I am wondering if this is a way we could get it script-initiated as long as there is a reserved interstitial the browser gives, such as a thing that pops up that requests the user pushes the button. 19:40:30 q+ 19:40:52 ack jrossi 19:41:38 bajones_: there may be a path using location.href, if we are in the position where we can trigger the reserved gesture, but it can cause links to become two clicks via the second activation. 19:42:42 ack tangobravo 19:43:08 tangobravo: in our product, when you want to open a tab we make it a two-click process 19:43:09 Whynotboth.gif? 
It might be valuable to have a way to catch "legacy navigation initiations" and have those use the two-click experience, but then provide the nav context for devs that want to lower the friction 19:43:10 q+ 19:43:14 ack ada 19:43:28 scribe+ 19:43:56 ada: if we start with a two-click approach, this doesn't rule out doing one click as an evolution 19:44:09 ... whereas it would be harder to walk back from one click 19:44:33 Another scenario where we might be able to do one click: same-origin nav 19:44:47 q+ 19:44:55 bajones_: one click would be updating window.location.href and then the user validates the navigation? 19:45:10 cabanier: do you think we could treat it like a permission 19:45:12 ack cabanier 19:45:41 where once we have gone to a site, we give it permission to go back 19:46:08 q+ 19:46:23 +1 not to have one-time approval 19:46:52 bajones_: ubersites make this awkward, because content aggregators and social media and blogging platforms would then be risky unless we were super granular 19:47:40 ada: going back to the previous page should probably be immediate, to let users jump out of a place they do not want to be 19:47:55 q+ 19:48:00 bajones_: as long as it is a reliable signal it seems like a good signal 19:48:21 ack tangobravo 19:48:46 tangobravo: on a point about the UX as you are travelling through the void, 19:48:55 ack alexturn 19:49:04 ... 
a second click doesn't seem like a terrible UX 19:50:26 alexturn: between seeing undesirable sites vs getting phished, the phishing risk is the bigger one because the current site can directly target you 19:50:33 scribe+ 19:51:15 ada: before long, we'll end up with XR social media with links, which may lead to the kind of undesirable link destination I was talking about 19:51:18 scribe- 19:52:20 bajones_: next steps are to gather feedback and start iterating on solutions in the navigation repo in the Immersive Web GitHub 19:52:50 bajones_: this will probably replace the proposal written by Diego from A-Frame 19:53:48 ... it probably won't be a total replacement; bits will be reused 19:54:05 q+ 19:54:59 yonet_: any information about when implementations might happen? 19:55:39 bajones_: Meta has been experimenting with it, but a userland library built on top of it could be a good way to start experimenting 19:56:06 cabanier: it's a good idea; of course they can't press the real system button, but we can see if it works 19:56:15 ack jrossi 19:57:11 jrossi2: a little bit disconnected from navigation, but if we could get the declarative solution sooner rather than later, it would enable sites to start implementing the landing features 19:57:27 q+ 19:57:41 bajones_: it seems like a reasonably separate piece 19:57:55 bajones_: that it can be implemented separately 19:58:03 ack dom 19:58:22 dom: I was going to suggest that it might be worth speccing it separately, in parallel 19:59:32 dom: there are a few things that seem to want to tie into it, so worth doing earlier 19:59:54 josh-inch has joined #immersive-web 20:00:26 yonet_: lunch 20:02:15 ... 
lunch for 1h, until 2pm (for anyone who missed it) 20:13:24 Leonard has joined #immersive-web 20:13:33 present+ 20:55:47 present+ 20:59:46 bzachernuk has joined #immersive-web 21:00:01 Ashwin has joined #immersive-web 21:04:13 yonet has joined #immersive-web 21:04:36 chair: yonet 21:04:50 zakim, choose a victim 21:04:50 Not knowing who is chairing or who scribed recently, I propose jrossi 21:05:55 scribenick: jrossi2 21:06:04 i'm talking 21:06:49 josh_inch has joined #immersive-web 21:08:03 dino: Explainer for the model tag is the only docs so far, but working on implementation and have some volunteers for speccing 21:08:11 Apple explainer forked to Immersive Web at https://github.com/immersive-web/model-element 21:08:15 topic: model tag 21:08:51 dino: not designed to replace WebXR for fully immersive content; it's for integrating some XR components into HTML 21:08:59 present+ 21:09:49 dino: why not model-viewer? There are situations where the page script cannot always do the rendering for security/UX reasons 21:09:56 tangobravo has joined #immersive-web 21:10:47 dino: e.g. all the information you have to share about head pose etc. that doesn't need to be shared 21:11:08 dino: another concern heard: can't align on an interoperable way to render 21:11:55 dino: last 2-3 yrs, lots of advancement on defining declaratively how rendering should work - won't be pixel-perfect interop, but close enough we can be happy 21:12:25 present+ 21:12:27 dino: why not discuss in WHATWG: immersive web are the experts. Should define as much as possible here before bringing it to WHATWG more baked 21:12:44 q+ 21:13:31 dino: always best to do this incrementally. 
proposing a staged approach: 1- an element that points to a model, an API to control camera and maybe play/pause animation built into the model 21:13:45 dino: 2- scripting contents of the geometry 21:14:17 dino: 3- joining the 3d space with the rest of the web page and give full access to the scene graph (but defining interop scene graph will be hard) 21:15:00 dino: ex: apple.com has watch configurator, we dont want to make a model for all permutations of the watch. just want to programmatically change the material 21:15:19 q+ 21:15:53 dino: [showing live demo] 21:16:05 q+ 21:16:28 q+ 21:17:10 dino: also want to discuss 'real css 3d transforms' and think these two concepts will fit in quite nicely together 21:17:31 ack tangobravo 21:18:05 tangobravo: if we had inline ar through a webxr session without exposing camera, would that replace all use cases of model element and just allow to be moved into modelviewer? 21:18:26 tangobravo: does it allow you to keep the 2d page going and pick models out or something? 21:19:13 dino: what were going to see in mr environments is some depth to the canvas but on a 2d background, so kind of 2.5D 21:19:54 dino: parts could be protruding out from the page. technically has to be a point where that object extends outside the window rect. very difficult to facilitate that 21:20:19 dino: and then ux like plucking that element out of the page into the 3d environment. rendering environment doesnt allow for that 21:20:22 emmett has joined #immersive-web 21:20:43 [using the model element to have the browser be responsible for "show me this model in my room" is an interesting idea] 21:21:10 tangobravo: makes sense. though imagine having protruding objects is hard to solve with scrolling etc 21:21:23 dino: its actually pretty cool and creates some need parallaxing effects 21:21:29 Can I raise my hand here? 21:21:32 s/need/neat 21:21:34 q+ emmett 21:21:43 ack ada 21:22:24 q+ 21:23:07 ada: want ability to control subparts of model.
the way ive been doing that in aframe with gltf, i give it a child element to control a particular part of the model and then the properties of that element control the part of the model with a transform 21:23:21 ack cabanier 21:23:52 cabanier: what happens if you apply CSS to it, like skew/float/etc. if its an iframe are there limitations? 21:25:07 dino: I think transformation is interesting. At the moment, weve made it such that 3d transforms propagate to the object (though havent thought of skew and wish we hadnt added it) 21:25:42 dino: iframes is a good example of why you want this as an element the page isnt controlling because you dont want the user to have to deal with permissions for the information needed for that thing 21:25:50 ack bajones_ 21:26:01 ack bajones_ 21:26:24 q+ 21:26:30 bajones_: are the ways to control the model exposed as direct attributes of the model element? 21:26:35 dino: currently yes 21:27:11 bajones: i think youll want both options. some way to set the transform of the object but also the camera. having an override would be nice 21:27:25 q+ 21:27:50 bajones_: know were not talking model formats just yet, but worried about extensibility into areas like this without being backed by a very consistent format 21:28:00 q+ to compare with SVG, mention usage for inline AR 21:28:04 bajones_: current proposal gives ua ability to support multiple formats 21:28:48 dino: agree.
ive done non-comprehensive research into formats, i think if were careful we can propose DOMs that are similar enough between formats at a high level 21:29:15 dino: we could translate the gltf scene graph into a JS API and thatd be a reasonable place to start 21:29:34 ack emmett 21:30:00 emmett: primary thing model is solving is dealing with camera permissions in AR 21:30:22 emmett: im surprised that the result of wanting to solve that is to create a dom node for all of 2d non immersive, non-ar sites 21:30:42 q+ 21:30:52 q+ 21:31:46 q- 21:32:12 emmett: the model element is to solve the ar case, but it also does everything required of 3d on the web independently of ar. that seems like a difficult approach. 21:32:35 emmett: biting off an enormous amount of work to standardize what leads to effectively a game engine API when you really just need to solve this ar case 21:39:17 dino: would love to show demos but didnt get permissions to share before this, hopefully over next weeks or months 21:39:58 emmett: in my experience, the ux of headset vs phone experience of this stuff has almost nothing in common 21:40:12 present+ 21:40:45 emmett: dont see content that makes sense for an ar headset and also a laptop or phone browser 21:41:39 dino: the examples are not adding 3d transforms to elements in an ar environment. theyre interesting/exciting demos, but not groundbreaking 3d design for web sites. its adding subtle depth to your existing page and it looks really cool in a headset 21:41:57 dino: think like parallax but _real_ parallax 21:43:53 emmett: vr experience of a panel browser is not great. still a floating window whose UX is fundamentally about text and hard to read. in ar, fov makes it hard to imagine existing content but instead floating snippets of geoloc specific content etc 21:44:32 emmett: havent seen a vision doc that really sells me on how this will play out, hard to build a good web API without that 21:44:38 q?
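For concreteness, the stage-1 proposal dino outlines above (an element that points to a model, with an API for camera control and built-in animation playback) might look roughly like the sketch below. Every element, attribute, and method name here is a hypothetical illustration based on the discussion and the immersive-web/model-element explainer direction, not a finished spec.

```html
<!-- Hypothetical sketch of a stage-1 <model> element; names are illustrative only. -->
<model id="watch" src="watch.usdz" alt="Watch configurator model">
  <!-- Fallback content for browsers without <model> support -->
  <img src="watch.png" alt="Watch">
</model>
<script>
  const model = document.getElementById('watch');
  // Stage 1: page script steers the camera and any animation baked into
  // the model file, while the browser owns the actual rendering.
  model.playAnimation?.(); // hypothetical animation-control method
  // Stage 2 and beyond would add scripted access to geometry/materials,
  // e.g. swapping the watch-band material instead of shipping a separate
  // model for every permutation, as in dino's apple.com example.
</script>
```

The point of the element-level API (rather than handing the page a camera feed or pose data) is that the browser can render the model, and later place it in the user's room, without the page ever seeing head pose or camera pixels.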
21:45:01 dino: also partially agree that what you want of an ar headset is these floating snippets of information 21:45:48 dino: also not aware of vision docs that describe what could be done. but thinking now about what things we have to start on to get to something later on 21:46:15 ack leonard 21:46:20 yonet: maybe we can schedule something when demos are avail 21:47:29 Leonard: how are you going to support all the various features of the file format in various browsers, as things are added etc 21:47:44 dino: we have this problem with images, eg new color spaces tagged differently 21:48:02 dino: often doesnt have great fallback, dont have great answers for 3d either 21:48:23 Leonard: not just animation, but different types of rendering like lighting 21:49:02 dino: already have this situation with webgl and gltf. if you didnt update frameworks, it didnt work. 21:50:11 dino: USD is a great example of what could happen. USD is very extensible in a manner that you can have the same file open in multiple DCCs and each one can add its own metadata to that. you could effectively have a USD viewer, could be the browser, that understands more about the USD than another one. agree huge problem. would be great if we could have a baseline set of features 21:50:30 dino: browsers have come to be good at moving together, sometimes naturally or coordinated 21:51:01 emmett: if you use threejs or modelviewer, you can choose when to update and have universal support across browsers. but then some browsers update slowly 21:51:27 dino: on the flip side of that, hw gets better and you might get an upgrade without the page changing 21:51:59 Leonard: another concern is lighting. very important to commercial retailers who care what their product looks like 21:52:39 Leonard: I didn't see controls for lighting in proposal 21:52:57 dino: great point and another reason why the browser rendering is a good thing, it may have great signals on this that the page does not 21:53:05 q?
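Leonard's lighting concern above could, in a declarative design, be addressed by letting the author supply an environment map for image-based lighting. One invented, purely illustrative shape (neither the element nor the attribute is in any spec):

```html
<!-- Hypothetical: page-supplied image-based lighting (IBL) for a model.
     The environmentmap attribute is invented for illustration only. -->
<model src="sneaker.usdz" environmentmap="studio-light.hdr"></model>
```

An author-supplied map would give retailers predictable product lighting, while a browser-chosen default could use signals the page lacks, such as the real environment in a headset.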
21:54:23 dino: one thing not in the explainer, the idea of adding your own idl with some way to specify what scene youre going to use as a lighting model so you can experience what it might look like in different scenes 21:54:35 ack Ashwin 21:54:49 yonet: please add questions to the doc (to be shared) 21:54:54 s/idl/IBL 21:55:05 Josh-Inch has joined #immersive-web 21:55:25 https://docs.google.com/document/d/1u_UwbTcK8wDVLIWeRWf0Yb_n-WABKXu3G1fLKzK9aKI/edit?usp=sharing 21:55:56 emmett: its great that you get free updates from a browser update, but also a pain because of unexpected issues from breaking changes 21:56:31 emmett: frameworks give power to control your own versioning system 21:57:04 laford has joined #immersive-web 21:57:11 [I think this points to the articulation between model formats and browser integration, which I think is closer to the way SVG is managed than PNG or JPEG] 21:58:44 q? 21:58:45 Ashwin: can approach problem as add web content to webxr content or add 3d content to web content 21:59:20 Ashwin: we did prismatic library, proprietary way to position 3d models inline in a web page and pop out divs from the page all without entering xr session 21:59:35 we==MagicLeap 21:59:49 Ashwin: had demos from real content like NY Times 22:00:02 Ashwin: emmett asked for demos, would encourage looking at prismatic 22:00:07 emmett: yes theyre cool 22:00:41 Ashwin: is there some kind of accounting for models that want to go out of bounds... massive model that extends crazy amounts in z axis 22:00:53 dino: something spec has to define 22:01:45 dino: example: a bow and arrow. arrow extremely long in one dim and narrow in the other. how do you best show that if it was going to go through your eye. need to specify constraints 22:02:39 dino: use cases where you want real world size. what if you drag an elephant into a small room.
maybe not something the spec should define but definitely a question of how 3d browsing experiences will work 22:02:41 q- 22:02:45 ack tangobravo 22:02:48 yonet: please post in model elements on github 22:03:05 ack tangobravo 22:04:43 tangobravo: i cant physically see how some of this works [scribe missed some context], if apple wants to innovate on how to display this. seems like some sort of slice of quick look that is easier to solve than the full 3d deal 22:05:20 ack alexturn 22:06:08 alexturn: similar to how this topic has gone before. people frustrated by gaps of mobile ar today, closing those is a priority. vendors with headsets end up seeing a different part of the problem space than others 22:06:44 alexturn: in reality, the 2d web isnt going anywhere. 3d will be layered in and need to figure out the transition 22:07:01 alexturn: [live demo] 22:07:50 alexturn: [dynamics 365 demo doing 2.5d, 2d panel with meaningful tie-ins to the 3d] 22:08:20 alexturn: today built by adding 2d into unity. but requests to start from todays 2d thing and layering in the 3d 22:09:00 We see more requests of people wanting to start from their 2D and layer in 3D than the reverse 22:09:14 alexturn: similar example with hololens remote assist app 22:09:53 alexturn: also dont want to take over full display 22:09:59 q? 22:10:27 emmett: i really like this. something really interesting about this is that the 3d model isnt laid out relative to the page but the window 22:10:59 emmett: would like to see something that allows this and not be attached to the 2d dom 22:11:31 alexturn: risk or opportunity depending on viewpoint on how this evolves 22:12:10 alex: remote assist version has examples of both, like a button extending out of page 22:12:18 s/alex/alexturn 22:12:53 alexturn: agree with dino that this should get staged out incrementally. should align on that 22:13:18 emmett: reminds me of alt proposal, existing standard 3d model schema 22:13:32 emmett: this to me looks more like that.
3d content on this page but its not part of the dom 22:13:36 -> https://schema.org/3DModel 3DModel - A Schema.org Type 22:13:47 emmett: can put information like where it should be placed 22:14:02 emmett: wonder if thats a better, simpler place to start than wedging 3d into 2d dom 22:14:47 q? 22:15:22 alexturn: today doing this in unity ends up reinventing 2d from scratch. so they have similar complexity that we face in enabling all the 3d in 2d scenarios. 22:16:03 yonet: can we do demos in future calls and invite emmett? 22:16:06 dino: yes 22:16:12 topic: DOM layers 22:17:03 -> https://github.com/immersive-web/layers WebXR Layers 22:17:25 zakim, choose a victim 22:17:25 Not knowing who is chairing or who scribed recently, I propose yonet_ 22:17:27 RRSAgent, draft minutes 22:17:27 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 22:17:35 zakim, choose a victim 22:17:35 Not knowing who is chairing or who scribed recently, I propose alexturn 22:17:50 scribenick: alexturn 22:18:12 cabanier: Not much has happened on DOM Layers recently 22:18:16 cabanier: Still some issues on how you do hit-testing 22:18:26 agendum: https://github.com/immersive-web/layers/issues/280 22:18:54 i/ i'm talking/scribenick: ada 22:18:58 RRSAgent, draft minutes 22:18:59 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 22:19:16 cabanier: Every layer would be same origin 22:19:32 cabanier: The way you would communicate with DOM layers would be like with popup windows 22:19:32 scribenick: alexturn 22:19:35 cabanier: So far, feels awkward 22:19:38 q+ 22:19:48 q+ 22:20:33 ack Nick-8thWall-Niantic 22:20:49 i/Topic: model tag/scribenick: ada 22:20:50 RRSAgent, draft minutes 22:20:50 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 22:21:10 Nick-8thWall-Niantic: On my screen is how we see this working 22:21:38 Nick-8thWall-Niantic: Having the DOM elements on 
the left just show up on the quad to the right 22:21:51 Nick-8thWall-Niantic: All this is doing is excluding the canvas, but it shows the rest of the page 22:22:02 i/topic: model tag/scribenick: ada 22:22:11 RRSAgent, draft minutes 22:22:11 I have made the request to generate https://www.w3.org/2022/04/22-immersive-web-minutes.html dom 22:22:15 Nick-8thWall-Niantic: I know you expressed some concerns about things like transparency - we could handle this being opaque 22:22:38 cabanier: Some of the limitations we talked about before have gone away - now you can mix quad layers with content 22:22:47 cabanier: That would let you do exactly what you see here 22:23:06 cabanier: Couldn't do super fancy effects, but opacity is OK 22:23:16 cabanier: That is something we could do with DOM Overlay 22:23:34 cabanier: Would need to be same origin 22:23:53 scribe+ 22:24:04 ack alexturn 22:24:09 q+ 22:24:28 alexturn: being able to show 2D slates in an immersive experience is important 22:24:46 ... in the context of Mesh / metaverse 22:25:06 ... there are tricks to position Web content - except when using in WebXR 22:25:47 ... there could be security restrictions that apply 22:25:59 ... we're very interested in seeing the DOM Layers happen 22:26:13 ... at least with CORS/same-origin 22:26:28 ... are there security concerns with that kind of restriction? 22:27:04 ... DOM Layers would provide more control than DOM Overlay 22:27:14 q+ 22:27:36 ... we've been using iframe - this seems to be a good model for many of the use cases we're seeing in MESH 22:27:45 ... this would be a good place to start experimenting 22:27:55 ack bajones_ 22:28:15 q+ 22:28:59 bajones_: my #1 concern: how do we handle clicking interaction on pages in a secure manner? 22:29:15 ... we absolutely cannot allow the JS to drive where the user is clicking on the page 22:29:33 q- 22:29:33 ... this could be used for e.g. false ad engagement, driving the user to click on links they don't intend 22:29:37 q+ 22:30:00 ...
you have to have some way where the actual primary interaction with the surface (if it's clickable) is driven by something that is more UA-centric 22:30:21 q+ 22:30:33 ... the only way to do this is to have some sort of JS-driven mode switch where you say: I'm interacting with the page now, to give the browser control of the controllers 22:30:59 ... while keeping e.g. a consistent rendering of the controllers 22:31:20 ... difficult to figure out how to deal with the hand-off 22:31:56 ... awkward but probably unavoidable for interactive Web content 22:31:58 ack cabanier 22:32:14 scribenick: alexturn 22:32:20 scribe- 22:32:41 cabanier: Yea, that was one of the concerns that caused us to focus initially on same-origin 22:33:07 cabanier: Could be awkward with things in front of the quad 22:33:44 cabanier: Is there any extra stuff that could happen here in terms of stealing info vs. what the page could already do? 22:33:55 bajones_: Want to talk to security folks here! 22:34:07 Nick-8thWall-Niantic: Some of this conversation is a little bit weird to me because I'm coming in with a different mental model 22:34:09 ack Nick-8thWall-Niantic 22:34:32 Nick-8thWall-Niantic: My expectation of DOM overlay is not that the DOM comes from another web page, but that it comes from the current page 22:34:53 Nick-8thWall-Niantic: Ideally, the DOM overlay would actually be not seeing controllers at all, but mouse pointers, clicks, etc. 22:35:06 Nick-8thWall-Niantic: I don't see why having an