meeting: Immersive-web WG/CG TPAC 2020
agenda: https://github.com/immersive-web/administrivia/tree/main/TPAC-2020
previous meeting: https://www.w3.org/2020/11/05-immersive-web-minutes.html
log: https://www.w3.org/2020/11/06-immersive-web-irc
present: Manishearth, cwilso, ada, bajones, cabanier, yonet, mounir, alexturn, klausw, adarose, Laszlo_Gombos, rik, Brandon, RafaelCintron
18:14:34 scribe: Alex Turner
18:14:40 scribenick: alexturn

topic: Performance Improvements (@toji, @manish, @cabanier)

18:15:05 bajones: We talked earlier this week about what's left to be done in the performance improvements repo
18:15:19 bajones: We could call it done entirely or assert that we've addressed it
18:15:23 https://github.com/immersive-web/performance-improvements/issues
18:15:51 bajones: I scrubbed through this to write down where we are on individual things
18:16:07 bajones: First is hidden area mesh
18:16:31 bajones: This masks off a certain area of your output surface that won't be displayed, so you can safely not render those pixels
18:17:39 bajones: Not sure if Oculus does this; Valve claimed a 20% saving, but maybe that's lower with reprojection
18:18:08 bajones: OpenXR has this - not sure how widely it's supported
18:18:18 bajones: Something we could still expose
18:18:25 clarification: when using reprojection, having a larger rendered area is potentially useful since content might become visible
18:18:58 cabanier: Kip says fixed foveated rendering could get the same benefit
18:19:32 bajones: Yeah - and fixed foveated rendering is just a bit to flip, vs. having to do a prepass
18:21:36 alexturn: Gives a benefit if your app is fill-bound
18:21:54 alexturn: Valve and MS at least had it in their native APIs, even with reprojection
18:22:13 alexturn: Oculus, Valve and MS are supporting the extension in OpenXR
18:22:49 bajones: Good to keep this in our pocket - nobody is asking for it yet
18:23:23 bajones: Next: enabling devs to select between multiple view configurations
18:24:01 bajones: For systems like Pimax, it would be ideal to render all 4 views
18:24:09 bajones: But if an app isn't ready for that, it can render in a compat mode
18:24:34 bajones: We do have "secondary views" in the spec
18:24:49 bajones: If you leave it off, you get two views
18:25:01 bajones: If you turn it on, you can get extra views
18:25:39 bajones: Not just for multiple views per eye - can be for a first-person observer too
18:26:12 bajones: Less flexible than the OpenXR approach, but it's also less fingerprintable
18:26:32 bajones: From my perspective, this is actually solved - waiting for further feedback
18:27:01 klausw: Correction: the Pimax has two screens - its compat mode is about parallel projection
18:28:02 klausw: We don't have a way to give apps that parallel assumption in WebXR - perhaps that's OK?
18:28:11 klausw: Could also want to let apps avoid double rendering if the secondary view overlaps the primary view
18:30:19 alexturn: Secondary views seem good for HoloLens needs
18:30:52 alexturn: Not sure how this works for Varjo though, where the two inner views change if you opt into 4 views
18:31:32 bajones: The spec says primary views are required to get an experience and secondary views aren't necessarily needed
18:33:13 bajones: Technically meets the spec text, since you'd still get something in the headset if you ignore secondary views even after enabling it
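For reference, a minimal sketch of the "secondary views" opt-in bajones describes, using the feature name from the WebXR spec; refSpace, glLayer, gl, and drawScene are assumed from a typical render loop:

```js
// Sketch: opt into secondary views, then render every view the system
// reports each frame (two primary views, plus extras only if granted).
const session = await navigator.xr.requestSession('immersive-vr', {
  optionalFeatures: ['secondary-views'],
});

function onFrame(time, frame) {
  const pose = frame.getViewerPose(refSpace);   // refSpace assumed
  if (pose) {
    for (const view of pose.views) {
      const vp = glLayer.getViewport(view);     // glLayer assumed
      gl.viewport(vp.x, vp.y, vp.width, vp.height);
      drawScene(view);                          // app-specific helper
    }
  }
  session.requestAnimationFrame(onFrame);
}
session.requestAnimationFrame(onFrame);
```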
18:33:29 bajones: Multiple viewports per eye, to support lens-matched shading, etc.
18:34:57 bajones: I believe the layers module already covers what we intend here
18:35:09 bajones: Maybe foveated is good enough for now
18:35:35 cabanier: The layers module does support foveated rendering with just a bit you flip on
18:35:41 bajones: Is it a bool or a float?
18:35:58 cabanier: It's a float, to control the amount of foveation used by the compositor
18:36:31 bajones: Being able to say "ahh, I just want a bit of foveation here" is probably the right thing for the web
18:36:47 cabanier: Definitely seems easier
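The float cabanier mentions is the fixedFoveation attribute on projection layers in the layers module draft; a minimal sketch, assuming an existing session and WebGL context:

```js
// Sketch: request some compositor foveation via the layers module draft.
// fixedFoveation is a float: 0.0 = none, 1.0 = maximum foveation.
const binding = new XRWebGLBinding(session, gl);
const layer = binding.createProjectionLayer({ textureType: 'texture' });
layer.fixedFoveation = 0.5;   // "just a bit of foveation"
session.updateRenderState({ layers: [layer] });
```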
18:37:06 bajones: Some of the lower-level techniques get pretty arcane
18:37:39 bajones: Some techniques have fallen out of favor - the new technique lets you render another texture to say what shading detail to do in each block
18:40:43 bajones: For supporting headsets beyond 180 degrees, libraries could get confused and frustum cull wrong
18:47:38 klausw: Can we force some of the non-weird configurations to be more of a weird config, to get people to think consistently now?
18:49:16 cabanier: The way three.js is doing this today is wrong for Magic Leap, because they assume it just points forward
18:52:42 alexturn: Windows Mixed Reality has a "jiggle mode" feature which can randomize the rotation and FOV
18:53:04 alexturn: Could be used through a UA to test WebXR engines for correctness here
18:54:51 https://github.com/immersive-web/administrivia/issues/142
18:55:15 ada: We'll go through the remaining perf topics on Tuesday
19:04:40 scribenick: cabanier

topic: Anchors (@bialpio, @fordacious)

19:05:34 bialpio: do we want to discuss cloud anchors?
19:05:49 ... the biggest issues are already solved with regular anchors
19:05:52 bajones: yes
19:06:19 bialpio: right now there is a way to create a persistent anchor to persist across sessions
19:06:37 ... is this something we want to do? because the charter says that we shouldn't focus on this
19:07:13 ... will an anchor from your phone work on a HoloLens?
19:07:41 ... I don't think we can solve this, because it implies some sort of format that can be serialized across SDKs, devices, etc.
19:07:59 ... I don't see how this group could solve this without support from another platform
19:08:07 ... do other people have thoughts?
19:08:42 alexturn: this is a tough one
19:09:01 ... in OpenXR we have the same issue in discussion
19:09:06 ... even if we agree on the format
19:09:22 ... it is not easy. some vendors have a cloud
19:09:36 ... if you're on an ARCore device, everything comes from that
19:09:56 ... with anchors, there are Google cloud anchors; Azure Spatial Anchors work on ARCore
19:10:18 ... it starts to feel less like a platform thing that you want to do with a browser
19:10:58 bajones: has the topic of cloud anchors been brought up in OpenXR?
19:11:11 alexturn: yes, this topic was brought up there
19:11:43 ... those concerns stalled a solution
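For context on the "regular anchors" bialpio contrasts with cloud anchors at the top of this topic, a sketch using the anchors module draft together with hit testing; hitTestResult, refSpace, and the render loop are assumed:

```js
// Sketch: a session-local anchor per the anchors module draft. It stays
// pinned to a real-world spot, but nothing here persists across sessions
// or devices - that gap is what the cloud-anchor discussion is about.
const anchor = await hitTestResult.createAnchor();  // from a hit test
// Then, each frame:
const pose = frame.getPose(anchor.anchorSpace, refSpace);
if (pose) {
  drawAnchoredContent(pose.transform);  // app-specific helper
}
```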
19:12:06 bialpio: maybe we can build on the image tracking API
19:12:16 ... maybe we can push it into the frameworks
19:12:32 ... can we describe a format that is understood by all the frameworks?
19:12:58 ... and emulate the cloud-iness. Is this something we could do?
19:13:17 ... or do we think that is something we can't? Is it feasible?
19:13:40 alexturn: what pieces need to be in place for Azure anchors?
19:13:50 ... on some platforms it might be ok
19:14:08 ... but I don't know how you would do it
19:14:27 ... we would need some type of web API, like "get anchor blob"
19:14:43 ... and it would be the Azure spatial blob
19:14:54 ... but how would that work on an ARCore device?
19:15:22 ... the cloud SDK that the developer takes in could use images
19:15:45 ... the SDK could make the right decisions, but it's not obvious how it could be done in a common way
19:16:00 bajones: I can't figure that out in a reasonable way either
19:16:26 ... with Android, there's an ARCore SDK that works on iOS
19:16:46 ... so there's nothing magical there
19:17:05 ... but even that is not quite good enough, since Safari will be using ARKit
19:17:42 ... the only way this works is if there is a common serialization format and we can piggyback on that
19:18:29 ... or we do a ton of work; most systems throw their information in the cloud
19:18:39 ... which is not going to work for us
19:19:10 ... we need to come up with our own backend and then somehow push that to the system
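A purely hypothetical sketch of the "get anchor blob" shape alexturn floats above - none of these names exist in any spec; they only illustrate the serialization problem under discussion:

```js
// Hypothetical only: invented method names, no such API exists.
const blob = await anchor.requestPersistentBlob();        // hypothetical
await fetch('/my-anchor-store', { method: 'POST', body: blob });

// Later, possibly on a different device - this is the unsolved part:
// the blob would have to be intelligible to that device's underlying
// SDK (ARCore, ARKit, Azure Spatial Anchors, ...).
const restored = await session.restoreAnchorFromBlob(blob);  // hypothetical
```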
19:19:40 bialpio: going back, we don't need to run our own 8th Wall algorithms
19:19:51 ... maybe we can do a point cloud API
19:20:05 ... a serialized way to describe the anchor
19:20:11 ... there is a delivery method
19:20:21 ... which should be independent of the cloud
19:20:45 ... so we're not reliant on the cloud
19:21:01 ... the challenge is whether it's possible for us to get this serialization format
19:21:12 ... how much help will we get?
19:21:21 ... and maybe we need to be smart
19:21:43 ... to get a distilled version of the anchor
19:21:55 alexturn: maybe this will happen when the industry is ready to converge
19:22:19 ... are there trade secrets?
19:22:35 ... even if we're exposing it, are there vendor-specific extensions?
19:22:53 ... the blob itself should not be vendor-specific
19:23:06 ... I'm unsure how I would have the conversation
19:23:57 ... what should be in the blob?
19:24:24 ... google can provide extra data to their anchors, but not so on iOS
19:24:59 ... whatever signals are available (???), and then if each platform has special sauce to make them even better, I'm unsure how we'd extract that
19:25:15 bialpio: I agree. It might be too early to talk about this
19:25:30 ... but this is a common request
19:25:47 ... I think a part of the use case could be solved with image tracking
19:26:04 ... it still might be good enough
19:26:28 ... we should have a partial response for a more persistent way to get it across sessions
19:26:48 alexturn: for vendor stuff, it feels like an OK place
19:27:02 ... in practice, developers grab a specific SDK
19:27:11 ... handling that at the library layer
19:27:52 ... I wouldn't be upset if we land there
19:28:07 ada: it would be great if we could have a talk
19:28:18 ... among vendors
19:28:25 ... this group could be the venue
19:28:41 ... or if we need to change the charter
19:29:15 ... then maybe they can find a common ground
19:29:24 ... it feels similar to WebAssembly
19:29:43 ... so it would be really great, and as a chair I would like to help
19:31:20 cabanier: ...
19:31:42 bialpio: if we're living in a world where there are cloud anchors
19:31:49 ... we can push it down the stack
19:32:00 ... is this something we would like to do?
19:32:38 alexturn: the question is by what mechanism - how would Azure anchors end up on Android?
19:32:54 ... or would this be served by the web developer?
19:33:42 ... if the privacy impact is impacted (???)
19:34:01 ... how do you decide who's allowed to be in?
19:34:14 ... local anchor persistence
19:34:35 ... HoloLens has several versions of anchors
19:34:46 ... I'm not sure if everyone agrees
19:34:59 ... but maybe the web could abstract
19:35:18 ... so we don't have to give the blob to the developer
19:35:55 ... the underlying thing that we're abstracting is specified, as in WebAssembly
19:36:06 ... but here the blobs are opaque and vendor-specific
19:36:22 ... and maybe that is where the stepping stones are available
19:36:31 ... eventually people might converge
19:36:44 ... but that conversation needs to happen first
19:36:53 ... I wonder if people will just wait and see
19:37:03 ... I'd love to see cloud anchors happen
19:37:22 ... but we need to have blob format agreement
19:37:34 ada: it would be great to have an intermediate step
19:37:45 ... it would be good to have persistent anchors first
19:38:00 ... and it could be something we could do before we do cloud anchors
19:38:34 ... it would be great if we could have buy-in from vendors
19:38:58 ... I'm wary about vendor-specific solutions
19:39:33 bajones: persistent anchors sound a lot like shader caching
19:40:05 ... but on all the browsers, if you pass the same shader string, you can pull the prebuilt shader out of the cache
19:40:24 ... but it only works if you're on the same OS, driver, etc.
19:40:46 ... it's circumstantial, but we can experiment with it first
19:41:05 ... the web is full of people that can figure out how to do something useful
19:41:27 ... you can't look at cloud anchor blobs
19:41:46 ... because the platform controls the storage of the anchor on the device and in the cloud
19:42:10 ... and I'm not convinced that the native formats will stay stable
19:42:50 ... it would be an interesting exercise to get the vendors together to freeze their formats
19:43:10 ... persistent anchors is a more realistic goal
19:43:47 alexturn: bajones covered what I was going to say
19:51:30 My favorite YouTube channel for this stuff: https://www.youtube.com/c/RetroGameMechanicsExplained
19:54:00 scribenick: klausw

topic: RWG + Depth Sensing, Module Breakdown (@toji)

19:54:44 alexturn: looked into lighting up Babylon Native on HoloLens, porting to Babylon.js
19:54:56 alexturn: look into what future modules for RWU could look like
19:55:08 ... how to split up modules that deal w/ the generic topic of RWU
19:55:24 ... lowest common denominator features - in WebXR, hit test
19:55:50 ... following on, when doing the hit, the hit is on a plane from the RWG API; enumerate RWG objects
19:56:08 ... two other topics came up since then:
19:56:24 ... assuming there's hit test, and a RWG module for planes / meshes
19:56:37 ... the depth API came up, camera-aligned or correlatable with the color image
19:56:49 ... could we fudge something similar enough for HoloLens or Magic Leap
19:57:01 ... where the camera isn't primarily used for rendering, but could get depth from mesh
19:57:08 ... would this be used for occlusion?
19:57:25 ... demos: duck behind chair or table, creature can pop out behind table
19:57:35 ... is depth API or RWG better for this?
19:57:54 ... optimal API for mesh for analysis purposes is perhaps different from mesh for occlusion
19:58:07 ... if occlusion doesn't quite fit into RWG, may be too slow
19:58:22 ... maybe another bucket that does more for you - an "enable occlusion" API
19:58:38 ... similar to hit test: tell me where the hit is, doesn't care about the underlying API
19:58:58 ... for occlusion, could be based on depth or mesh. Do it silently in the background and it just happens?
19:59:12 ... does it fill in the depth buffer? (privacy risks)
19:59:26 ... could benefit from being tech-neutral and more abstract
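For reference, the depth API alexturn refers to was being incubated in Chrome at roughly this shape (details were still in flux at this point; view and refSpace come from a normal render loop):

```js
// Sketch of the depth-sensing proposal as incubated around this time.
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['depth-sensing'],
  depthSensing: {
    usagePreference: ['cpu-optimized'],
    dataFormatPreference: ['luminance-alpha'],
  },
});
// Each frame, per view:
const depthInfo = frame.getDepthInformation(view);
if (depthInfo) {
  // Normalized view coordinates in, meters out.
  const d = depthInfo.getDepthInMeters(0.5, 0.5);  // depth at view center
}
```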
19:59:49 bialpio: the breakdown of modules was mostly done for editorial reasons
20:00:02 ... not really trying to merge different aspects of the same concept
20:00:13 ... all are different aspects of RWG or detecting the user's environment
20:00:37 ... just made more sense to create a new module w/ a new feature to reason about the user's env
20:00:54 ... could decide to have them in the same spec or keep them separate, I don't have a strong opinion here
20:01:15 ... a bit tricky from my perspective to decide if this is the best approach; don't see it mattering that much
20:01:23 alexturn: let me change the question
20:01:39 ... not so much about module breakdown, more about API
20:01:54 ... one path for depth sensing, one path for a boolean occlusion=on/off
20:02:10 ... is image tracking / marker tracking part of the RWG/RWU API?
20:02:30 bialpio: currently, we've been looking at use cases and finding APIs to fit them
20:02:44 ... not so much looking at cross dependencies between modules. Could do more to connect them.
20:03:02 ... i.e., once we have plane detection and hit test, could have hit test refer to the plane
20:03:15 ... what if a plane is also part of a mesh?
20:03:31 ... haven't been looking so much at modules as a part of RWG, more use case focused.
20:03:40 ... can see how to make progress here
20:03:51 bajones: to answer ada's question (could we have a write-only depth buffer?):
20:04:00 ... short answer, yes
20:04:17 ... a webgl framebuffer has no mechanism to read back from the framebuffer
20:04:23 ... only if it's based on a texture
20:04:42 ... this is different in the layers module, where depth is a separate texture that can be sampled
20:04:51 ... in terms of boolean occlusion on/off:
20:05:11 ... obstacle: with depth data coming out of ARCore currently (which may not be representative),
20:05:25 ... it may be too noisy to use for occlusion. It doesn't get written into the depth buffer.
20:05:43 ... It's put into a texture, doing a feathered fade depending on how close it is to the object
20:05:51 ... search for "tiger" in Google Search
20:05:58 ... uses a native app to display it
20:06:16 ... not a crisp line when hidden behind a table. a bit noisy, falloff.
20:06:27 ... the tiger gets more transparent when getting closer to the table.
20:06:46 ... there's documentation for this on the native side; it's a fairly complex change to rendering
20:07:03 ... gaussian-blurred depth output, not nearly as simple as a depth buffer, which would look really bad
20:07:19 ... have doubts that planes would give a good occlusion result, they tend to overextend
20:07:31 ... like the idea in theory, but unclear how it would work in practice
20:07:42 ... until machine learning leads to better output
20:08:43 klausw: the ARCore limitation is for depth from the RGB camera. It's better for phones that have a depth camera, such as time-of-flight
20:09:02 alexturn: this variety is what makes me think it should be scenario-focused
20:09:23 ... apps shouldn't need to care about details, should just get suddenly better when adding hw
20:09:40 ... is the feathering for performance? could this be done via postprocessing?
20:09:49 ... occlusion done separately
20:10:04 bajones: don't know if it's been explored
20:10:21 bialpio: the main reason why the current shape of the depth API doesn't treat occlusion as the main use case:
20:10:38 ... the current tech on ARCore isn't good enough for occlusion, from my perspective
20:10:51 ... if we want to be scenario-focused, this isn't the best API to get occlusion
20:11:15 ... focused on getting information about the environment, i.e. for physical effects such as bouncing off furniture
20:11:26 ... may need a different API for occlusion
20:11:50 ada: if we were to do the depth write case, it would work quite well with a combination of both
20:12:03 ... if feathering is outside of occluded content, by using depth twice
20:12:16 ... once to block for performance reasons, once for feathering - could work
20:12:37 ... if you think it's a thing that some platforms may not be able to do well, request it as an optional feature
20:12:47 ... platforms can opt in if they think the underlying HW is good enough
20:13:01 alexturn: this was a good discussion, seeing how to think through it
20:13:16 ... could introduce another mode for RWG: a less watertight but faster quick mesh
20:13:31 ... the page could say: if I'm on a device with quick mesh, use that, otherwise use the depth sensing API
20:13:42 ... offer multiple methods, engines can pick
20:13:56 ... not every device can do good occlusion; return a quality setting so the app can decide
20:14:16 ... is it possible to find a good approach, i.e. postprocessing or prefill, to inherit the best behavior for any given device?
20:14:32 ... what would give the best results on something like ARCore?
20:14:41 ... discourage use on older phones
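A hypothetical sketch of the negotiation alexturn describes - 'quick-mesh' is an invented feature name for illustration; only 'depth-sensing' corresponds to a real proposal:

```js
// Hypothetical: try a fast UA-provided mesh first, fall back to the
// depth-sensing proposal. 'quick-mesh' is an invented name.
let session;
try {
  session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['quick-mesh'],        // hypothetical feature
  });
} catch (e) {
  session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['depth-sensing'],     // real incubation, see above
    depthSensing: {
      usagePreference: ['gpu-optimized'],
      dataFormatPreference: ['luminance-alpha'],
    },
  });
}
```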
20:15:28 klausw: depth sensors are fairly rare in Android phones at this point
20:15:52 bajones: use feathering on phones that don't have a depth sensor
20:15:59 ... there aren't enough phones that do have one
20:16:18 alexturn: you don't see many apps using feathering?
20:16:43 bajones: I've seen depth data used more for rough collision physics, where a 5cm offset wouldn't matter much
20:17:19 klausw: could feathering work as an XR compositing postprocess?
20:17:36 bajones: might work, but would require reading back the buffer, which causes a pipeline bubble
20:18:06 ... what you'd be doing is taking the rendered scene, forcing it to resolve, using the depth buffer, and resampling into another output surface
20:18:31 ... think it's doable, but it would be awkward, with weird interactions with antialiasing
20:18:41 ... would be pretty expensive
20:19:14 ada: briefly looking back at the core question: how do we think about real world sensing and modules?
20:19:31 ... may want to break these things out into features vs. trying to deliver a module to see the world described
20:19:44 ... i.e. here's a feature for occlusion, vs. here's a depth buffer or mesh
20:20:12 alexturn: not sure if we have a chosen path
20:20:29 ... one path would be proposing an occlusion API
20:20:41 ... the other option would be more data-driven and more specific for headsets
20:21:02 ... some do it with depth sensing, others with quick mesh; this would happen in an occlusion repo to see if it should be a module
20:21:17 ada: is this something people would want to do soonish?
20:21:20 ... create a repo?
20:21:33 alexturn: it's important for us. would like to champion / push it forward
20:21:43 ada: anyone want to join alexturn?
20:22:05 bialpio: interested in the outcome, but unsure if occlusion is the goal due to ARCore constraints
20:22:23 ada: will create this with alexturn and bialpio as leads
20:22:34 ... occlusion repo
20:30:04 Occlusion repo: https://github.com/immersive-web/occlusion
20:35:52 https://xkcd.com/221/
20:37:00 scribenick: yonet

topic: Marker/Image Tracking (@klausw)

20:37:26 Marker tracking slides: https://docs.google.com/presentation/d/1_ivZwzNLDn54Q-6wUK3fKGAZgTQ5GJtOClgbVGrh-qw/view
20:39:10 ada: when you say moving images, do you mean video?
20:39:38 ... or an image that is handheld or attached to a moving object, as opposed to attached to a static wall?
20:40:36 klausw: the current prototype works as you give a list of images...
20:41:20 ... design constraints from the privacy point of view: in order to avoid user surprise, we don't want to track literature, the environment
20:41:45 ... potentially, if you are using barcodes, we don't want all of the barcodes to be scanned
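The "give a list of images" prototype klausw describes was incubating in Chrome at roughly this shape - hedged, since the API was explicitly still a proposal; the marker <img> element, refSpace, and the render loop are assumed:

```js
// Sketch of the image/marker tracking proposal as prototyped in Chrome.
const bitmap = await createImageBitmap(document.querySelector('#marker'));
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['image-tracking'],
  trackedImages: [{ image: bitmap, widthInMeters: 0.2 }],
});
// Each frame:
for (const result of frame.getImageTrackingResults()) {
  if (result.trackingState === 'tracked') {
    const pose = frame.getPose(result.imageSpace, refSpace);
    // place content on the tracked image using pose.transform
  }
}
```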
20:44:23 ada: what's the difference between a natural image and a synthetic image?
20:44:53 klausw: QR codes, as on the slides
20:46:15 alexturn_: just around the privacy stuff:
20:46:39 ... on HoloLens we required the web camera permission
20:46:58 ... you can figure out where people are if there is a specific QR code
20:47:24 ... we might want to have specific requests for QR codes
20:48:39 bajones: with ARCore and ARKit it takes time to process these images, around 30 milliseconds per image
20:49:16 ... I am wondering if there is a reasonable approach where we can say: we have a cache of these images
20:49:32 ... you can definitely take image bitmaps and say we have seen them
20:50:08 ... maybe we can make that a little smoother on the additional runs
20:50:19 klausw: it depends on the implementation too
20:50:45 ... if you want to process ten thousand images, this won't be a good API
20:51:27 klausw: it would be nice if we could have more privacy
20:52:33 ... I think some features, like camera view - if it is facing the camera, it is possible to pick up things from the user's environment
20:53:50 alexturn_: would the idea be that we can have some of these classes that we can use? We can get pretty universal coverage, like QR codes, that you would need to feature-detect as the developer?
20:54:45 klausw: if we think it is an important feature, maybe we can use computer vision
20:54:56 ... it would be doable but costly
20:56:47 alexturn_: you could put the image in the QR code, and it is small
20:57:25 klausw: it is possible to make QR codes tracked by adding an image to them, with image tracking
20:57:50 ada: do we need different platforms to tell us: I can do markers or images?
20:58:25 klausw: we would benefit if we have a common API for both
20:58:56 ada: would it be the same API surface but different features?
20:59:06 klausw: more like features
20:59:48 ada: in the bit where you are passing in the image, you would say: find me the QR codes
20:59:54 klausw: yes
21:00:25 ... one thing I want to mention is the image tracking or marker tracking use cases around shared anchors
21:01:03 ... users share an image and share the same experience because they have common entry points. I wonder if someone explored the use cases?
21:01:40 bajones: how persistent are the anchors... if I want to do an experience where I start everybody from the same source, a marker, and
21:01:51 ... move everybody in the same direction
21:02:18 klausw: we have the tracking status emulated, with the assumption that it is stationary
21:02:40 ... basically establish a tracking system with a stable anchor point
21:03:03 bajones: now I detected a marker and I will drop an anchor
21:03:12 klausw: it is something we should look into
21:03:39 bialpio: if you are assuming that your image is stationary, you can create an anchor
21:04:01 ... we don't offer a helper right now, but it is doable with separate APIs
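A sketch of the "separate APIs" combination bialpio describes - pinning an anchor where a tracked image was seen, assuming the image is stationary; this pairs the image tracking proposal above with the anchors module, inside an async frame handler with refSpace assumed:

```js
// Sketch: drop an anchor at a tracked marker's pose, then rely on the
// anchor even if the image later leaves view. Combines two proposals.
for (const result of frame.getImageTrackingResults()) {
  if (result.trackingState !== 'tracked') continue;
  const pose = frame.getPose(result.imageSpace, refSpace);
  if (pose) {
    const anchor = await frame.createAnchor(pose.transform, refSpace);
    // anchor.anchorSpace now marks the shared entry point
    break;
  }
}
```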
21:04:48 bajones: if you are only using the image to get everyone at the same point, vs. if you are tracking the movement...
21:05:09 ada: if the image tracking is expensive, would it be possible to turn it off?
21:05:30 klausw: it would be useful to pause and continue
21:05:53 ... if the application can give feedback
21:06:26 ada: if this is the end of the issue we could move on to the next topic
21:06:46 ... 5 minute break
21:09:39 ada: I filed https://github.com/immersive-web/marker-tracking/issues/1 for your pause/resume suggestion

topic: AR Use Cases (50 minutes)

21:17:03 ada: the next subject is AR use cases; we can talk about the accessibility use cases too
21:17:35 scribenick: ada
21:19:31 klausw: I think the topic is the difference between headset and handset behaviours
21:19:53 klausw: the question has two parts: is the API good enough to give applications the info they need?
21:20:35 ... and are we doing enough to create a pit of success, so that a phone app works on a headset and vice-versa?
21:21:12 ... have the people at Oculus had experience with running handheld experiences on their headsets?
21:21:51 alexturn: we've seen a good number of experiences work. it's definitely possible to paint yourself into a corner; we've been working with engines to make sure they work. some things which affect that are using features we do not support, like DOM overlay
21:22:15 ... I don't have the top set of blockers for, for example, someone building for phone and getting stuck
21:23:14 klausw: for model-viewer it wasn't an engine issue; it's that the developer hadn't tested on a headset and had a simple bug
21:23:49 yonet: people put the information at 0,0,0 and put the camera offset, which works on mobile but breaks in the headset
21:24:25 alexturn: does ARCore offset the origin to put it in front of the phone?
21:24:53 alexturn: we had defined local space 0,0,0 to be at the device origin on the floor, so if they are placing it in front it could create this kind of incompatibility
21:25:25 ... this shouldn't break things, but it seems to be a policy decision which is causing issues
21:25:53 klausw: iirc it could be a near plane issue, with the application placing stuff within the near plane by placing it at the origin
21:26:12 alexturn: I forget what we're doing for the near plane for WebXR
21:27:12 ... @yonet if you could create a github issue in webxr for these issues with the URLs, it could be helpful
21:28:23 alexturn: in OpenXR the decision you make first is to pick the device form factor,
21:28:48 ... which is a decision which makes more sense for native; for WebXR making that decision is harder
21:28:58 ... WebXR was designed to be a pit of success
21:30:49 ada: could we create a polyfill to fake immersive-ar in immersive-vr, to make headset AR testing easier?
21:31:10 yonet: could this be a chrome debugger feature?
21:32:56 bajones: this could maybe be done in the WebXR emulator
21:33:20 ... it could also be a fairly easy thing to do in an environment like THREE.js, where you plug in an AR emulation plugin
21:33:31 ... I don't think the chrome debugger would be useful for this
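As an aside, a page can already feature-detect and fall back on its own - a minimal sketch of the AR-or-VR choice being discussed here (not the polyfill itself):

```js
// Sketch: prefer immersive-ar, fall back to immersive-vr when AR isn't
// supported (e.g. testing handheld-style content on a VR headset).
const arSupported = await navigator.xr.isSessionSupported('immersive-ar');
const mode = arSupported ? 'immersive-ar' : 'immersive-vr';
const session = await navigator.xr.requestSession(mode);
```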
21:34:06 alexturn: Babylon is doing something similar, letting you drag the camera around, so this could be done at the engine layer
21:35:25 alexturn: you could download the HoloLens emulator for Windows, which recently gained support for using it through a VR headset
21:36:03 alexturn: specifically requires a Windows VR headset
21:36:27 https://github.com/MozillaReality/WebXR-emulator-extension
21:36:28 ada: if that supported more headsets I would love to write about it
21:36:57 yonet: do we know who is maintaining the WebXR Emulator Extension?
21:37:10 everyone: silence
21:41:35 aysegul: Rik, is there an Oculus emulator?
21:41:46 cabanier: yes, but it doesn't work in WebXR
21:41:52 yonet: so how do you debug?
21:42:04 cabanier: you can use adb
21:42:28 ada: you can set it up to work wirelessly
21:42:46 bajones: you need to install the driver, then go to about:inspect and set up port forwarding
21:44:07 ... this lets you inspect the page and use a local server
21:44:42 cabanier: you can add the IP address of your computer as a secure host within the browser
21:45:00 bajones: sometimes you have to unplug and replug a few times
21:45:22 cabanier: that can happen if you have an adb server running
21:45:54 bajones: the Oculus Developer Hub is really useful - for example, it lets you capture screenshots from your desktop
21:46:51 cabanier: it also lets you turn off leaving immersive mode when you take off the headset
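For reference, the debugging flow described above maps to roughly these commands; the port and IP address are illustrative, and adb reverse is a command-line alternative to the about:inspect port-forwarding UI:

```sh
# Sketch of the Quest/adb debugging flow discussed above.
adb devices                     # headset should be listed once the driver works
adb tcpip 5555                  # optional: switch adb to wireless...
adb connect 192.168.1.42:5555   # ...then connect over Wi-Fi (IP illustrative)
adb reverse tcp:8080 tcp:8080   # let the headset reach a local dev server
# Then open about:inspect in desktop Chrome to inspect pages on the headset.
```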