17:02:00 RRSAgent has joined #immersive-web
17:02:00 logging to https://www.w3.org/2021/10/14-immersive-web-irc
17:02:03 Nick-8thWall has joined #immersive-web
17:02:07 rrsagent, make log public
17:02:24 meeting: Immersive-Web WG/CG (extended) group call
17:05:06 yonet has joined #immersive-web
17:08:22 agenda: https://github.com/immersive-web/administrivia/tree/main/TPAC-2021
17:08:30 Zakim has joined #immersive-web
17:08:48 agenda+ Solve a11y 4eva (Cover some new work and polish document)
17:08:55 agenda+ Charter progress update and Charter 3: The Chartering
17:08:58 present+
17:09:02 agenda+ Depth testing across layers
17:09:08 agenda+ Break
17:09:14 agenda+ Focus control for handheld AR
17:09:20 agenda+ Break
17:09:27 scribe?
17:09:33 agenda+ Discussion: Getting Hand Input to CR
17:09:41 agenda+ Communicate earlier that the UA doesn't need a depth texture
17:09:48 agenda+ Extending WebExtensions for XR
17:09:54 agenda+ Break
17:09:56 present+
17:10:03 agenda+ XRCapture Module
17:10:11 agenda+ Expose a way to query a session about the supported features - need to reconsider?
17:10:17 agenda+ Break
17:10:24 https://www.w3.org/2020/05/immersive-Web-wg-charter.html
17:10:25 agenda+ Projection matrices differ between WebGL and WebGPU
17:10:39 zakim, take up agendum 2
17:10:39 agendum 2 -- Charter progress update and Charter 3: The Chartering -- taken up [from atsushi]
17:10:43 zakim, list agenda
17:10:43 I see 14 items remaining on the agenda:
17:10:44 1. Solve a11y 4eva (Cover some new work and polish document) [from atsushi]
17:10:44 2. Charter progress update and Charter 3: The Chartering [from atsushi]
17:10:44 3. Depth testing across layers [from atsushi]
17:10:44 4. Break [from atsushi]
17:10:44 5. Focus control for handheld AR [from atsushi]
17:10:45 6. Break [from atsushi]
17:10:45 7. Discussion: Getting Hand Input to CR [from atsushi]
17:10:45 8. Communicate earlier that the UA doesn't need a depth texture [from atsushi]
17:10:47 9. Extending WebExtensions for XR [from atsushi]
17:10:47 10. Break [from atsushi]
17:10:47 11. XRCapture Module [from atsushi]
17:10:47 12. Expose a way to query a session about the supported features - need to reconsider? [from atsushi]
17:10:47 13. Break [from atsushi]
17:10:48 14. Projection matrices differ between WebGL and WebGPU [from atsushi]
17:10:50 RafaelCintron_ has joined #immersive-web
17:12:53 idris has joined #immersive-web
17:16:01 zakim, choose a victim
17:16:01 Not knowing who is chairing or who scribed recently, I propose yonet
17:16:09 zakim, choose a victim
17:16:09 Not knowing who is chairing or who scribed recently, I propose bajones
17:16:23 present+
17:16:28 present+
17:16:32 present+
17:16:34 present+
17:16:40 present+
17:16:51 present+
17:17:00 present+
17:17:23 present+
17:17:43 Michael_Hazani has joined #immersive-web
17:17:45 scribenick: bajones
17:18:57 We forgot to set up a scribe for a bit; we've been talking about re-chartering
17:19:24 rrsagent, publish minutes
17:19:24 I have made the request to generate https://www.w3.org/2021/10/14-immersive-web-minutes.html atsushi
17:19:28 ada: We want to carry forward all the previous specs to the new charter, right?
17:20:12 ada: Gaze tracking? Y/N?
17:21:17 klausw: There was quite a bit of concern about it, and there are higher-level alternatives such as focus events.
17:21:26 ada: Okay, let's remove it.
17:21:33 ... image tracking?
17:22:28 klausw: What's the scope? Raw camera access may be sufficient?
17:22:54 ada: If we drop it, we can't ship an image detection spec without rechartering.
17:23:00 ... face detection?
17:23:11 klausw: I don't know that anyone is working on that.
17:23:25 ada: I think we should remove it.
17:23:32 Q+
17:23:48 ... so in summary, drop gaze tracking and face detection.
17:25:10 3d favicons is a great example
17:25:13 ... What topics should we look at adding?
17:25:47 bajones: We should consider features that aren't explicitly hardware-related, like 3d favicons.
17:26:30 ada: Yes, we should look at volumetric CSS, 3d favicons, and the model tag.
17:27:14 yonet: Face detection has been used for a while for security, and gaze tracking as well. So there are security concerns.
17:27:37 klausw: Gaze could be provided as an XRInput device.
17:29:03 ada: Let's swing back around to this after we talk about new proposals/blue sky.
17:29:11 zakim, choose a victim
17:29:11 Not knowing who is chairing or who scribed recently, I propose ada
17:30:06 topic: Depth testing across layers
17:30:06 https://github.com/immersive-web/layers/issues/135
17:30:33 scribenick+ ada
17:30:33 Depth testing across layers
17:30:55 The problem with layers is that they don't intersect; they just sit on top of each other.
17:31:28 e.g. a projection layer on top of a cylinder layer needs to have a hole punched out
17:31:39 and if you put the cylinder layer in front you cannot see the controllers.
17:32:00 This is a dirty hack which breaks if you move fast.
17:32:23 We've been looking at having depth sorting between layers to do actual sorting.
17:33:00 This would be great for having multiple projection layers which can be combined.
17:33:50 The issue comes with opacity, i.e. a piece of glass in front of a cylinder layer.
17:34:14 q+ to shout yes
17:34:17 q+
17:34:18 Yes
17:34:25 Super interested in that to experiment.
17:34:44 ack yonet
17:34:49 ack ada
17:34:49 ada, you wanted to shout yes
17:35:04 ada: yes, I think it is fine if that is a known edge case
17:35:07 ack bajones
17:35:38 q+
17:35:52 bajones: there are two types of depth sorting. 1. the compositor tests against my depth buffer when rendering the layers
17:36:27 the second type is when we have multiple projection layers, which will need a per-pixel test to check which pixel is closer to the camera
17:36:36 q+
17:37:06 cabanier: we are thinking of doing the second case as it is what has been requested of us
17:37:14 dino has joined #immersive-web
17:39:11 q+
17:39:25 currently you couldn't have any intersecting layers in an X shape without the second style of compositing
17:39:42 currently they would obscure each other in a punch-through kind of way
17:39:52 bajones: how do you imagine this to be enabled?
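A minimal sketch of the per-layer opt-in shape the next line suggests. The `depthSorted` flag is hypothetical (it is not part of the WebXR Layers spec); the layer-creation calls otherwise follow the existing XRWebGLBinding API, and `session`, `refSpace`, and the WebGL2 context `gl` are assumed to exist.

```js
// Hypothetical: ask the compositor to depth-test these layers against
// each other. `depthSorted` is invented for illustration only.
const binding = new XRWebGLBinding(session, gl);

const projectionLayer = binding.createProjectionLayer({
  depthFormat: gl.DEPTH_COMPONENT24, // depth must be supplied for sorting to work
  depthSorted: true,                 // hypothetical per-layer toggle
});

const cylinderLayer = binding.createCylinderLayer({
  space: refSpace,
  viewPixelWidth: 1024,
  viewPixelHeight: 512,
  depthSorted: true,                 // hypothetical per-layer toggle
});

// Without depth sorting the compositor simply stacks these layers; with
// it, intersecting content (e.g. two layers in an X) could resolve per
// pixel instead of punching through.
session.updateRenderState({ layers: [projectionLayer, cylinderLayer] });
```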
17:40:30 bajones: it seems you want a per-layer toggle for this
17:40:56 cabanier: it seems for that situation you could work around it with WebGL
17:41:45 ack Michael_Hazani
17:42:23 Michael_Hazani: I just wanted to express our enthusiasm for this; one of our products really relies on it
17:42:44 one of the things we would like to do is multiple XR sessions on top of each other
17:43:07 q+
17:43:27 ack ada
17:43:45 ada: when Rik described the multi-session use case
17:44:15 q+ to say that depth testing across sessions may require a shared depth range
17:44:28 ada: it is very powerful, combining immersive AR and VR sessions, without the AR layer
17:44:55 ack Nick-8thWall
17:45:06 ada: like iframes in HTML but for WebXR
17:46:13 Nick-8thWall: we want to use DOMLayers to attach interfaces to users; having it semi-transparent would be really valuable
17:46:47 q+
17:48:52 q-
17:49:00 ack jared
17:49:11 ada: I think in that situation you could work around it by having the semi-transparent layers be the frontmost layers
17:49:45 Nick-8thWall: just wanted to ensure that the wrist-mounted or HUD use case was covered
17:49:54 ack bajones
17:49:54 bajones, you wanted to say that depth testing across sessions may require a shared depth range
17:50:11 Jared: even an implementation with the semi-transparent limitations will be useful for us
17:51:09 bajones: right now with WebXR you set a depth range for the whole session; that might not be the case if it gets to the point where we are extending it. I think there are more issues but this is just one which comes to mind. At least within the same XR session this should work itself out fairly naturally.
17:51:49 cabanier: @Nick-8thWall DOM Layers will always be opaque, at least how they are defined right now
17:52:03 but cylinder or quad layers can have transparency and should blend correctly
17:53:41 For the record: a possible issue with transparency on geometric layers is intersecting layers with transparency on both sides of the intersection. If two quad layers are arranged in an X and both have transparency, there's not a natural order to render them in.
18:01:45 scribenick: cabanier
18:02:00 q+
18:02:06 zakim, take up agendum 5
18:02:06 agendum 5 -- Focus control for handheld AR -- taken up [from atsushi]
18:02:11 klausw: for phone AR, autofocus is off
18:02:27 ... because it can cause issues
18:02:45 issue link -> https://github.com/immersive-web/webxr/issues/1210
18:02:45 ... you can choose fixed focus at a far distance or a close distance
18:03:00 q+ to ask whether it would make sense to tie to hit-test
18:03:06 ... if you want to do marker tracking, you can't detect things that are close
18:03:21 ack bialpio
18:03:22 ... should apps have a way to express this?
18:03:30 q+
18:03:39 bialpio: what do people think about this feature?
18:03:48 ... there hasn't been much activity on this
18:04:01 ... or are we ok with the way it is right now?
18:04:09 alexturn has joined #immersive-web
18:04:25 klausw: one problem is that marker tracking, which needs it, can't use it
18:04:43 bialpio: we might be more forward-thinking
18:05:02 ... the TAG wants to have focus control for raw camera access
18:05:14 ... so maybe we should think about the focus control now
18:05:14 q?
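A purely hypothetical sketch of the kind of per-session hint being debated here; `cameraFocusMode` and its values do not exist in WebXR, and (as the discussion below concludes) the group leaned toward leaving focus behavior to user-agent heuristics instead.

```js
// Hypothetical: let the page hint at camera focus behavior for
// handheld AR. Neither `cameraFocusMode` nor its values are real API.
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['hit-test'],
  cameraFocusMode: 'auto', // hypothetical: 'auto' | 'fixed-near' | 'fixed-far'
});
```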
18:05:17 ack ada
18:05:17 ada, you wanted to ask whether it would make sense to tie to hit-test
18:05:33 ada: I've definitely come across this
18:05:51 ... can the user agent take care of this?
18:06:06 ... I guess it can be hard for the UA to do so
18:06:08 q?
18:06:10 ack Nick-8thWall
18:06:30 Nick-8thWall: this is a major issue that we run into when scanning QR codes
18:06:50 ... I had to print QR codes out on large pieces of paper
18:06:59 ... we prefer that autofocus is always on
18:07:27 q+
18:07:46 ... on other platforms, we always set things to autofocus because it's clearly the best experience
18:08:09 LachlanFord has joined #immersive-web
18:08:09 q?
18:08:17 ack klausw
18:08:18 ... my preference would be to make autofocus always the default and maybe have an option to turn it off
18:08:30 ... autofocus is clearly the better solution
18:08:45 klausw: I'm unsure when autofocus would make things worse
18:09:06 ... people don't seem excited to have an API
18:09:19 q?
18:09:25 ... the app wouldn't know if a device wouldn't work well with autofocus
18:09:48 Nick-8thWall: on 8th Wall, it's very hard to determine what the best experience is on each device
18:10:06 ... on most devices autofocus is best but on some it doesn't work as well
18:10:31 q+
18:10:31 ... it's unworkable for us to have a per-device decision
18:10:37 ack klausw
18:10:55 klausw: so nobody is interested in making this an API?
18:11:12 ... so user agents are free to choose autofocus?
18:11:34 ... or maybe it can be triggered by other signals.
18:12:16 0
18:12:19 0 or +1
18:12:29 0
18:12:31 0
18:12:31 0 or +1
18:12:38 0.5
18:12:41 0.25
18:12:48 .5
18:12:51 i
18:12:54 0 or +1
18:13:05 rrsagent, publish minutes
18:13:05 I have made the request to generate https://www.w3.org/2021/10/14-immersive-web-minutes.html atsushi
18:13:14 ada: it seems people want the user agent to make a decision based on heuristics
18:13:51 klausw: ok, we'll make a decision. We might turn on autofocus but then turn it off for certain devices
18:15:15 rrsagent, publish minutes
18:15:15 I have made the request to generate https://www.w3.org/2021/10/14-immersive-web-minutes.html atsushi
18:35:00 present+
18:35:32 zakim, choose a victim
18:35:32 Not knowing who is chairing or who scribed recently, I propose Jared
18:35:53 scribenick+ Jared
18:35:53 topic: TPAC Discussion: Getting Hand Input to CR
18:36:25 https://github.com/immersive-web/webxr-hand-input/issues/107
18:36:34 q+
18:36:44 ack LachlanFord
18:37:17 q+
18:37:54 q?
18:38:35 LachlanFord: started working on web platform tests (WPT)
18:39:46 q+
18:40:13 q+ for just comment, please complete self review checklist before requesting HRs
18:40:29 ack cabanier
18:40:51 ada to moan about W3C sticklers regarding implementations
18:41:05 q+ ada to moan about W3C sticklers regarding implementations
18:41:11 Rik: Although both are Chromium, we do not share code.
18:41:13 q+
18:41:36 ack bialpio
18:41:39 Rik: WPT should run on Android; as soon as it's up we can write the tests
18:41:55 alcooper has joined #immersive-web
18:43:06 bialpio: Not sure what the requirements are; the launch process before launching a feature should work. It seems like it is up to us to advance. Not sure if WPTs are blocking, but they are good to have. Not sure how Oculus Browser is set up. We use a fake device implementation. We are mocking the device to only test Blink code and can chat about it later.
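As a reminder of the API surface the WPTs discussed above need to cover, a short sketch following the WebXR Hand Input Module Level 1 draft; `refSpace` and `drawJoint` are assumed helpers, not part of the API.

```js
// WebXR Hand Input Module Level 1: request the feature, then read
// per-joint poses each frame.
const session = await navigator.xr.requestSession('immersive-vr', {
  optionalFeatures: ['hand-tracking'],
});

function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);
  for (const inputSource of session.inputSources) {
    const hand = inputSource.hand; // XRHand, or null without hand tracking
    if (!hand) continue;
    const indexTip = hand.get('index-finger-tip'); // XRJointSpace
    const pose = frame.getJointPose(indexTip, refSpace);
    if (pose) {
      // pose.transform is the joint pose; pose.radius approximates the
      // distance from the joint center to the skin surface.
      drawJoint(pose.transform, pose.radius); // assumed app-defined helper
    }
  }
}
session.requestAnimationFrame(onXRFrame);
```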
18:43:12 ack atsushi
18:43:12 atsushi, you wanted to discuss just comment, please complete self review checklist before requesting HRs
18:44:03 atsushi: For horizontal review, please do the self-check list. I can do internationalization; there is a self-check list for all horizontal review areas. Please complete it first. I will post a link to the procedure.
18:44:15 q?
18:44:18 ack ada
18:44:18 ada, you wanted to moan about W3C sticklers regarding implementations
18:44:26 HR procedure -> https://www.w3.org/Guide/documentreview/
18:45:47 ada: We might in the long term get pushback from some folks in W3C. Two independent implementations may be met with skepticism. Is this something that Apple may be able to show?
18:46:10 dino: We don't have devices, but we have the bare bones implemented in WebKit.
18:46:31 ada: Is it working on Linux at Igalia?
18:47:18 dino: Igalia has it working. Mac had it working, on Valve headsets. Currently the code isn't open source, or in a product, so it wouldn't count. Anything we do for our implementation will work for Igalia as well. The question is if they support a device with native hand-tracking.
18:49:51 ada: Adding it to the horizontal review checklist. The TAG may have added things to the checklist. Once everyone is satisfied with this, ping Chris or Ada to get it done. It is on our homepage. There is an editor's draft. Did we do a CFP for that?
18:50:09 dino: Since it is shipping in two browsers, we should move it to a public working draft.
18:50:52 Ada: It says WebXR hand tracking already has a working draft. The next step is putting it forward to CR. We could wait to put it forward for CR, or do it now.
18:51:17 Dino: I don't think we can go to CR until we can even test that the two implementations are correct. I am happy to help as much as I can, like writing tests.
18:51:28 Ada: Lachlan, you're working on tests?
18:51:47 Lachlan: Yes, Dino and I are working on tests.
18:52:05 Once WPT is done ping immersive-web-chairs@w3.org and we will put it forward for a CFP for CR
18:52:23 q+
18:52:25 q-
18:52:30 ack LachlanFord
18:52:59 LachlanFord: I think I'm on the queue; it was about implementations. I think Mozilla is there.
18:53:16 Ada: So we do have two independent implementations?
18:53:30 LachlanFord: Yes, I believe so
18:53:31 topic: https://github.com/immersive-web/webxr/issues/1228
18:54:32 Rik: Mozilla had an implementation of the prior version of the API and it wasn't public.
18:55:04 topic: Communicate earlier that the UA doesn't need a depth texture
18:55:27 https://github.com/immersive-web/webxr/issues/1228
18:56:48 q+
18:56:57 Rik: This should be a short topic. After you create your WebGL projection layer, you are told you don't need a depth texture, even though you just made one. It would be nice to know before you create it. You create color and depth. If you knew ahead of time, you would not need to create the extra texture. At the moment, if you want to do multi-sampling you lose the memory for no reason. If there could be an attribute, the same attributes that sa[CUT]
18:57:04 q+ to ask about fingerprinting
18:57:28 ack bajones
18:57:32 Rik: yes, no, it could be avoided. Shouldn't be controversial. Any objections to propose if you need it or not?
18:58:38 present-
18:58:45 q+
18:58:49 ac kada
18:58:53 bajones: I don't find anything objectionable. There may be many cases where people ignore it. If you are going to put it somewhere, it feels natural to put it on the WebGL binding. The binding seems like the place where you differentiate, and you need to create the binding anyway. It seems like a great place to put it in that interface, and a great way to allow for better memory use.
18:58:53 ack ada
18:58:53 ada, you wanted to ask about fingerprinting
18:59:16 ada: Is this a property available before a session is granted? This is one more bit of fingerprinting. Could be worth considering.
19:00:02 q+
19:00:41 bajones: Assuming it is on the binding... It could be on the session itself, but I don't see a reason to have it there. There is always a possibility that you could have different pieces of hardware depending on your system. Depending on if you have multiple devices on a system.
19:00:43 Sure
19:00:56 q+
19:01:06 ack RafaelCintron_
19:01:14 q-
19:01:41 RafaelCintron: I don't object. This is helpful for reprojection. Knowing this allows them to know what they are opting into.
19:02:00 ack cabanier
19:02:14 bajones: good point, we should show that depth buffers are preferred. If this is there, we should have them consider that they should provide it.
19:03:00 Rik: In the case of the Quest this would be false. With the three.js attribute we will not populate the depth texture. This is why we proposed it; it would be nice if we don't request it in the first place.
19:10:40 topic: Extending WebExtensions for XR https://github.com/immersive-web/proposals/issues/43
19:12:01 Zakim/choose a victim
19:12:26 Zakim, choose a victim
19:12:26 Not knowing who is chairing or who scribed recently, I propose dino
19:12:46 Zakim, choose a victim
19:12:46 Not knowing who is chairing or who scribed recently, I propose bialpio
19:13:22 scribenick: bialpio
19:14:28 ada: an older issue; the idea has been brought up a couple of times. In general, it's the idea of combining 2 immersive sessions where neither session needs to be aware that it's embedded in another
19:14:49 chair: yonet
19:14:55 ada: the important idea is iframes, so you could drop one page inside of another - it'd be powerful to do this for WebXR
19:15:30 ada: you could have an avatar system running on a separate domain; you could feed it locations of people and it'd populate them w/ avatars
19:16:09 q+
19:16:12 ada: is this still something that people would like? if so, what is missing to enable it?
19:16:27 q+
19:16:29 LachlanFord: yes, people want to explore it
19:16:33 ack LachlanFord
19:16:37 ack Nick-8thWall
19:17:06 Nick-8thWall: random first impressions - it's important to have, for example, payment integrations
19:17:16 ... it'd be very useful
19:17:35 q+
19:17:37 Agreed! There is a big list of those types of apps in the Aardvark project
19:17:40 ack LachlanFord
19:17:41 ada: other use cases off the top of people's heads?
19:18:10 LachlanFord: you get more utility if you have something that's composable
19:18:49 ada: the earlier topic of combining layers touched upon this, as it'd be needed to solve this
19:18:55 q+
19:19:01 ack LachlanFord
19:19:07 ada: how about the computing power required? would this be a blocker?
19:19:27 LachlanFord: input routing and security are major concerns
19:19:58 ... composition of multiple full-screen images is expensive, e.g. on HoloLens
19:20:37 ... maybe pixels aren't the thing to work with
19:21:04 ada: what if we had small widgets that'd be visible and bounding-box them - would that help?
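For reference, the closest primitive the platform has today for the embedding being discussed: WebXR is gated by the `xr-spatial-tracking` permissions-policy feature, which an embedder must delegate before a cross-origin iframe can request its own session. Everything beyond that (composing the sessions, routing input, negotiating volumes) is what this proposal would have to add. The embed URL below is illustrative.

```js
// Today: an embedder can delegate WebXR capability to a third-party
// iframe, but each frame still gets its own session; there is no way
// to compose them into a single immersive scene.
const frame = document.createElement('iframe');
frame.src = 'https://avatars.example/embed'; // illustrative third-party experience
frame.allow = 'xr-spatial-tracking';         // policy-controlled feature gating WebXR
document.body.appendChild(frame);
```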
19:21:55 ada: if you wanted to compose multiple sessions - there may be issues around z-fighting (floor plane) and skyboxes
19:22:00 q+
19:22:04 ack cabanier
19:22:36 cabanier: is the proposal that when the main page enters an immersive session, all the other iframes would as well?
19:23:07 ada: unsure, maybe a second level of layers that pull content from a 3rd party
19:23:31 ... or maybe you listen in to some event that fires when the main page enters an immersive session
19:23:51 ... would like to brainstorm this
19:23:56 q+
19:24:03 q+
19:24:16 cabanier: may require a lot of changes, maybe not in the API but in the implementation in the UAs
19:24:25 ack Nick-8thWall
19:25:04 Nick-8thWall: similar to iframes, where you pick where they are on the page, it will be important to do something like that for where they are in the session
19:25:37 ... potentially taking 2 clip volumes and merging them together would not make sense - how you define a logical volume that an inner experience is allowed to fill is important to solve here
19:26:17 ada: good point, we can force the inner experiences to have different scales / do something with them
19:26:33 q?
19:26:36 ack Jared
19:26:40 ... taking a diorama of the inner experience can be handy
19:27:07 Jared: great to have discussions around it; it's very powerful to have multiple sessions running alongside each other
19:27:31 ... we're experimenting w/ OpenXR and making some progress
19:28:33 ... it's possible to experiment with Chromium and OpenXR
19:28:55 ... there are still some issues but it's still possible to get useful things out of it
19:29:30 yonet: link to a gif, something like changing the scale of the inner experience
19:29:46 ... https://github.com/Yonet/Yonet/blob/main/images/headTracking.gif
19:30:13 ada: probably not adding to the charter now, but maybe TPAC+1?
19:44:36 topic: XRCapture Module https://zspace.com/
19:44:47 https://github.com/immersive-web/proposals/issues/68
19:48:57 alcooper: asks to enable capture of AR / VR experiences
19:49:16 ... gap with native approaches (SceneViewer on Android; HoloLens also has a way)
19:49:29 q?
19:49:32 ... no good solution in WebXR so far
19:49:36 q+
19:49:46 ... an approach with secondary views exists, but it would miss out on DOM overlay
19:50:02 ... the proposal is a new API to start recording a session and dump it to disk
19:50:19 q?
19:50:24 ack cabanier
19:50:33 q+
19:50:34 ... privacy issues were mentioned; the mitigation is to dump it to disk instead of sharing the camera feed with the page (if it doesn't ask for raw camera access explicitly)
19:52:42 bajones: there are 2 different outputs of the API - one is that when you use the API to record, the capture will end up on the device's storage (it triggers native capture on the device), but then you get a share handle out of it, which means the file that was captured can be shared with the device
19:52:59 ... initially I thought that if a recording is not immediately shared it'll be lost, but that is not the case
19:53:27 cabanier: initial proposal to use views - misunderstood the proposal
19:53:41 ... in Oculus, this is possible via an OS menu
19:53:48 q+ to mention the web share api
19:53:49 ... can this be achieved similarly?
19:54:23 alcooper: the way the API is proposed is to have a function that takes an enum value (screenshot vs video vs 360 video vs ...)
19:54:31 q+
19:54:32 ack bajones
19:54:34 q+
19:55:26 bajones: I don't recall how it works on Oculus; I think it's appropriate to have a way for an API to display a system dialog
19:55:39 ... it's not appropriate to just do things w/o confirmation
19:56:02 ... it feels that the API should hint to the system dialog about what the page requested
19:56:33 alcooper: the only issue w/ hinting is that the page may not expect that it needs to stop recording
19:57:13 q?
19:57:15 q+
19:57:15 ack ada
19:57:16 ada, you wanted to mention the web share api
19:57:20 ... one of the items is that it's built into the native solutions, which don't show prompts, but this being the web we need a confirmation
19:57:34 q-
19:57:55 q+ about webshare
19:58:09 q?
19:58:12 ack Nick-8thWall
19:58:17 ada: similar to Web Share, where instead of giving a URL and text to share, it's a special thing that pops up a dialog and the promise gets resolved after the dialog is dismissed
19:58:54 oh you need the 'to' to add the note
19:58:58 qi about
19:59:02 q- about
19:59:06 a- webshare
19:59:09 Nick-8thWall: things that can be tricky: providing custom audio when capturing, for example
19:59:14 q+ alcooper to talk about webshare
19:59:21 q- webshare
19:59:26 vq?
19:59:31 q+
19:59:38 ... so getting some access to the media streams would be the preferred route
19:59:38 ack alcooper
19:59:38 alcooper, you wanted to talk about webshare
19:59:53 alcooper: why not getUserMedia()? - discussed in the explainer
19:59:58 q+
20:00:14 ... this piles on the privacy & security aspect
20:00:28 ... that's why the API is proposed as a new one
20:00:43 q+
20:00:48 ... on Android there's no implementation for some of the things
20:00:50 q+
20:01:17 ... discussed with Web Share folks; the .share() function is a shim to the Web Share API that adds the captured file to their list
20:01:23 q?
20:01:26 ack yonet
20:01:40 yonet: wanted to also talk about privacy
20:01:54 ... useful to just capture the 3d content w/o the background
20:02:07 q?
20:02:10 ack bialpio
20:03:07 q?
20:03:10 ack bajones
20:03:31 bialpio: getUserMedia() drawbacks - the immersive sessions may not show up in the pickers etc.
20:04:14 bajones: we maybe could funnel things through getUserMedia() but it does seem like relying on an implementation detail
20:04:52 ... re: "useful to capture content w/o 3d background" - is this an argument for or against?
20:05:15 yonet: it's an argument for, because it gives us more control over what's captured
20:05:16 q+ to ask about returning an opaque buffer
20:05:20 ack Nick-8thWall
20:05:59 Nick-8thWall: to clarify - I did not recommend getUserMedia() as a mechanism; I wanted to get a handle to a media stream / track
20:06:35 ... you don't decide on the device, but you get media tracks out of a session
20:07:05 ... permission to record a session can be used as a permission to share things with the site
20:07:17 q?
20:07:19 q+ to talk about tracks
20:07:28 ack alcooper
20:07:28 alcooper, you wanted to talk about tracks
20:08:11 alcooper: understood about tracks; there is still a level of distinction between transiently getting access to the camera feed
20:08:31 ... vs the API to trigger the system-level functions
20:08:48 ... triggering the recording does not mean the page gets the file
20:08:50 q?
20:08:57 ack ada
20:08:57 ada, you wanted to ask about returning an opaque buffer
20:09:26 ada: not a fan of having another API + a little bit
20:09:27 q+
20:09:47 ... we already have a way of handing out opaque buffers (images w/o CORS)
20:10:37 We are over 10 minutes for this topic
20:10:39 ... it'd be nice to have something like this, where you ask for a recording and get a URL (UUID) pointing to the resource that can be used with the Share API
20:11:04 ... it separates creation of the file from exposing access to it
20:11:20 ack Nick-8thWall
20:12:11 Nick-8thWall: e.g. when we record 8thWall sessions, we will transcode them to mp4 on the device, so it's important to be able to access the bytes of the media stream
20:12:41 ... on headsets with recording there is no expectation that there is a camera feed at all, so no privacy impact
20:13:06 ... so we should not hamstring the API with a concern that is not always applicable
20:13:31 alcooper: encode the recording as something that would show up as a video file in the user's media library (so .mp4)
20:13:51 Nick-8thWall: other use cases like branding / watermarks
20:14:19 ... so there are cases where access to the recording is needed
20:14:32 q+
20:15:03 q-
20:15:08 +1
20:15:15 +1
20:15:26 +1
20:15:26 -1
20:15:27 +1, preference for media track implementation
20:15:27 ada: should we work on this? +1 or -1
20:16:08 side note: can watermarking / etc. happen post-recording? i.e. when the file is shared with the site
20:16:32 topic: Expose a way to query a session about the supported features - need to reconsider? https://github.com/immersive-web/webxr/issues/1205
20:17:12 scribenick: cabanier
20:17:32 https://docs.google.com/presentation/d/1tMTwkza_WDu5DNknrjwshECQ_OfzNToAXVpt7WXXi-0/edit?usp=sharing
20:18:45 bajones: we need to know what optional features were granted
20:19:08 ... the session might discard optional ones and you have no standard way of figuring it out
20:19:40 ... one of the scenarios had that the anchors API couldn't convey that there are no tracked anchors
20:19:52 ... this problem has come up often
20:20:20 ... the proposed solution is an array of granted features
20:20:33 ... (talks about the slide)
20:21:07 ... the only hitch is that the spec specifies them as "any", but they're always DOM strings
20:21:15 q+
20:22:11 ... for instance, DOM overlay has an elegant way to still pass a string by adding an optional dictionary in sessionInit
20:22:20 ack bialpio
20:22:24 ... so features should be strings going forward
20:22:45 bialpio: we already say that the features should have a toString method
20:22:54 ... we might as well make them into strings
20:23:03 ... so we don't lose anything
20:23:22 ... for anchors, depth sensing might be transitory
20:23:35 ... because it might come and go
20:23:52 q+
20:24:06 ack alcooper
20:24:17 alcooper: bajones and I talked about this for the XRCapture module
20:24:27 ... you don't want to block a session on it
20:24:58 ... with XRCapture it would be impossible to know if the feature was granted
20:25:49 Slides for the next section: https://docs.google.com/presentation/d/1wXzcwB-q9y4T5VL9sKRhAWz_2nvwAWdFtTvVgk43rlk/edit?usp=sharing
20:26:15 topic: Projection matrices differ between WebGL and WebGPU https://github.com/immersive-web/webxr/issues/894
20:33:19 Zakim, choose a victim
20:33:19 Not knowing who is chairing or who scribed recently, I propose cabanier
20:33:29 Zakim, choose a victim
20:33:29 Not knowing who is chairing or who scribed recently, I propose atsushi
20:33:37 Zakim, choose a victim
20:33:37 Not knowing who is chairing or who scribed recently, I propose idris
20:33:46 Zakim, choose a victim
20:33:46 Not knowing who is chairing or who scribed recently, I propose atsushi
20:33:49 Zakim, choose a victim
20:33:49 Not knowing who is chairing or who scribed recently, I propose bialpio
20:34:04 scribenick: bialpio
20:34:27 bajones: the projection matrices are supposed to map to normalized device coordinates
20:34:37 ... WebGL and WebGPU have different conventions
20:34:56 ... so x,y are in the -1,1 range, but the depth range is 0,1 on WebGPU
20:35:32 ... so if you take WebGL proj matrices and feed them to WebGPU, you'll get kind-of-OK results, but they won't be correct
20:36:10 ... for viewports, the problem is that WebGL has its origin in the lower-left corner, +y going up
20:36:32 ... WebGPU is like other APIs, w/ origin in the top-left corner, +y going down
20:37:22 ... it's a problem that you'd like to address before the content is being built
20:37:51 ... the idea is that things that need to change between the APIs will all go on an XRView
20:38:39 ... have XRFrame.getViewerPose() accept a parameter that'll accept the API name (as an enum)
20:38:46 ... to specify which convention to use
20:38:49 q+
20:39:11 ... the other approach is to leave it to devs to do the math
20:39:53 ... for proj matrices, there is a matrix to multiply by; similarly for viewports (a not-too-hard adjustment is needed)
20:40:08 ... but I'd prefer to hand out the data the right way
20:40:23 ack Nick-8thWall
20:41:01 Nick-8thWall: not too familiar w/ WebGPU & how it binds to a WebXR session - right now you set up a WebGL layer and that assumes WebGL
20:41:20 ... can't we just get the data in the correct context?
20:41:49 bajones: the viewport comes from a specific layer, so it knows if it's backed by WebGL vs WebGPU and no extra flag is needed
20:42:05 ... but that's not the case for the projection matrix; it comes from XRView
20:42:19 q+
20:42:26 Nick-8thWall: can you have GL & GPU layers running side by side?
20:42:40 bajones: yes, theoretically
20:43:03 ack RafaelCintron_
20:43:05 ... there should be no blockers to mixing
20:43:08 q+
20:43:15 ... we just need to make sure we give the right data
20:44:18 RafaelCintron_: WebGPU is only accessible via the new layers spec, so can we put the matrices on the binding (same for WebGL)? This way apps could mix & match and would have access to the right data
20:44:45 ... it'd mean we're deprecating the existing fields
20:45:00 bajones: correct about deprecation
20:45:13 ack Nick-8thWall
20:45:37 Nick-8thWall: whatever the base layer is is the default on the views, and we can have per-layer projection matrices?
20:46:29 bajones: having the projection matrix associated with a specific layer doesn't sound bad
20:46:36 +1 to what RafaelCintron_ said
20:47:06 cabanier: agreed w/ Rafael to put this info on the binding
20:47:27 bajones: do we want to port this to WebGL as well or leave it as is?
20:47:46 cabanier: yes, it should be added there as well
20:48:22 bajones: more WebGPU & WebXR conversations are coming tomorrow; we can come back to it then
20:49:14 RRSAgent, make minutes
20:49:14 I have made the request to generate https://www.w3.org/2021/10/14-immersive-web-minutes.html yonet
20:51:20 RRSagent, make log public
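A closing note on the last topic: a sketch of the "leave it to devs to do the math" alternative bajones describes, assuming column-major Float32Array matrices as WebXR hands out. Remapping WebGL's [-1, 1] clip-space depth to WebGPU's [0, 1] is a fixed premultiplication (z' = 0.5 * z + 0.5 * w); viewports additionally flip y between the two origins.

```js
// Convert a WebGL-convention projection matrix (clip z in [-1, 1]) to
// WebGPU's convention (clip z in [0, 1]). Matrices are column-major,
// so element (row r, col c) lives at index c * 4 + r.
function glProjectionToWebGPU(m) {
  const out = Float32Array.from(m);
  for (let col = 0; col < 4; col++) {
    const z = m[col * 4 + 2]; // row 2: clip z
    const w = m[col * 4 + 3]; // row 3: clip w
    out[col * 4 + 2] = 0.5 * z + 0.5 * w;
  }
  return out;
}

// WebGL viewports have a lower-left origin, WebGPU a top-left origin,
// so y flips against the framebuffer height.
function glViewportToWebGPU(vp, framebufferHeight) {
  return {
    x: vp.x,
    y: framebufferHeight - vp.y - vp.height,
    width: vp.width,
    height: vp.height,
  };
}
```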