16:44:59 RRSAgent has joined #immersive-web
16:44:59 logging to https://www.w3.org/2021/10/15-immersive-web-irc
16:45:05 Zakim has joined #immersive-web
16:45:11 zakim, list agenda
16:45:11 I see nothing on the agenda
16:45:18 rrsagent, make log public
16:45:33 agenda: https://github.com/immersive-web/administrivia/tree/main/TPAC-2021
16:45:48 meeting: Immersive-Web WG/CG (extended) group call - Oct/2021, Day 2
16:46:32 agenda+ New proposal ideas?
16:46:39 agenda+ Potential AR feature - point clouds
16:46:47 agenda+ Allow dynamic frame timing
16:46:55 agenda+ Feasibility and desirability of persisting anchors
16:47:05 agenda+ Discussion: HTMLModelElement
16:47:14 agenda+ Provide statistics to help guide performance
16:47:24 agenda+ Review WebGPU API design
16:47:32 agenda+ A feature that toggle stereo on runtime.
16:47:40 agenda+ Discussion with OGC Geo Pose Standard Working Group
16:47:45 zakim, list agenda
16:47:45 I see 9 items remaining on the agenda:
16:47:46 1. New proposal ideas? [from atsushi]
16:47:46 2. Potential AR feature - point clouds [from atsushi]
16:47:46 3. Allow dynamic frame timing [from atsushi]
16:47:46 4. Feasibility and desirability of persisting anchors [from atsushi]
16:47:46 5. Discussion: HTMLModelElement [from atsushi]
16:47:47 6. Provide statistics to help guide performance [from atsushi]
16:47:47 7. Review WebGPU API design [from atsushi]
16:47:47 8. A feature that toggle stereo on runtime. [from atsushi]
16:47:47 9. Discussion with OGC Geo Pose Standard Working Group [from atsushi]
16:58:29 bialpio has joined #immersive-web
16:58:49 bajones has joined #Immersive-Web
16:59:03 present+
16:59:23 rrsagent, publish minutes
16:59:23 I have made the request to generate https://www.w3.org/2021/10/15-immersive-web-minutes.html atsushi
17:02:22 lgombos has joined #immersive-web
17:02:42 present+
17:02:52 (Come at me zakim!)
17:02:52 did I call into the right meeting?
17:03:07 Nick-8thWall has joined #immersive-web
17:03:19 Rik, it should be: https://mit.zoom.us/j/98308045638?pwd=cHdaZzJlc3k1YlFpeGpJNjdhb2cxZz09
17:05:38 thanks. I think I dialed yesterday's first
17:05:40 present+
17:05:48 present+
17:07:28 present+
17:07:33 present+
17:08:23 heads-up, looks like the calendar at w3 already lists our today's mtg as "past event" so if you don't see it, go to "View past events"
17:09:04 alexturn has joined #immersive-web
17:09:46 ah, that's an existing issue, that we cannot view an event which is running. not in upcoming nor in past...
17:09:53 (has it been fixed??)
17:09:58 scribenick: cwilso
17:10:17 zakim, list agenda
17:10:17 I see 9 items remaining on the agenda:
17:10:18 1. New proposal ideas? [from atsushi]
17:10:18 2. Potential AR feature - point clouds [from atsushi]
17:10:18 3. Allow dynamic frame timing [from atsushi]
17:10:18 4. Feasibility and desirability of persisting anchors [from atsushi]
17:10:18 5. Discussion: HTMLModelElement [from atsushi]
17:10:19 6. Provide statistics to help guide performance [from atsushi]
17:10:19 7. Review WebGPU API design [from atsushi]
17:10:19 8. A feature that toggle stereo on runtime. [from atsushi]
17:10:19 9. Discussion with OGC Geo Pose Standard Working Group [from atsushi]
17:10:21 RafaelCintron has joined #immersive-web
17:10:22 https://github.com/immersive-web/proposals/issues/71
17:10:50 https://github.com/immersive-web/proposals/issues/70
17:11:24 ada: this is the new proposals idea issue - anything we might want to add to the new charter, or keep in the background.
17:12:01 ... something I already added is supporting 3D screen-based hardware; do we want to add that to our charter?
17:12:24 ... does anyone want to suggest new things?
17:13:11 q+
17:13:17 ack bialpio
17:14:19 piotr: should we think about working on point clouds? native APIs have some capabilities here; it seems like this falls under RWG already, so we probably don't have to add it to the charter explicitly
17:14:27 i|https://github.com/immersive-web/proposals/issues/70|topic: New proposal ideas?
17:14:32 rrsagent, publish minutes
17:14:32 I have made the request to generate https://www.w3.org/2021/10/15-immersive-web-minutes.html atsushi
17:14:49 nick: I would be generally supportive of point clouds; we do have a whole session to discuss later
17:15:02 s|Rik, it should be: https://mit.zoom.us/j/98308045638?pwd=cHdaZzJlc3k1YlFpeGpJNjdhb2cxZz09||
17:15:03 rrsagent, publish minutes
17:15:03 I have made the request to generate https://www.w3.org/2021/10/15-immersive-web-minutes.html atsushi
17:15:33 ada: how do people feel about adding 3D screen support, like zSpace etc
17:15:35 present+
17:15:42 q+
17:15:50 q+
17:15:52 Leonard has joined #Immersive-web
17:16:04 bajones: if something isn't in the charter, can we talk about it?
17:16:04 present+
17:16:29 ada: not off the table; ...
17:16:36 q+
17:16:51 nick: for 3D displays, seems like the views API would support this?
17:17:01 ack cwilso
17:17:04 q+
17:18:13 ack ack alexturn
17:18:16 ack alexturn
17:18:28 q+
17:18:55 chris: if something isn't in the charter - not mentioned in the scope - it's probably not covered by the patent policy. If you want the group to own it, you should probably make sure it's covered in the charter.
17:18:57 ack Leonard
17:19:21 alexturn: mentioning the zSpace thing - I would be supportive of adding that.
17:19:52 leonard: can the charter be less specific?
17:21:33 ack bajones
17:21:56 cwilso: the charter can be vague - you're essentially licking the cookies, whether you are eating them or not (by putting them in deliverables)
17:22:13 ... Members may object to the charter because of that expanded scope, though.
17:22:50 q+ for (just comment...)
we list some potential specs in the current charter as under incubation in the IWCG, and I feel OK if we could include potential ones like them (at least) with one line of target area(s)...
17:23:04 brandon: it feels awkward to say we're going to put in the charter an entire new category of devices in order to potentially add one new flag.
17:23:15 ack ada
17:23:29 ... it's probably rolled into 3D CSS or the like.
17:23:37 q+
17:24:11 ada: so maybe we should not have a specific deliverable, but say we're going to support immersive devices like immersive display technologies.
17:24:12 +1
17:24:16 ack atsushi
17:24:16 atsushi, you wanted to discuss (just comment...) we list some potential specs in the current charter as under incubation in the IWCG, and I feel OK if we could include potential ones like
17:24:19 ... them (at least) with one line of target area(s)...
17:24:57 atsushi: in the current charter we have several lines of incubation, so I feel it's better to include emerging areas through incubation.
17:25:19 ada: that's a good idea.
17:25:31 ... we do have a lot more flexibility through the community group.
17:25:40 ... the CG isn't covered by our charter.
17:25:44 q+
17:25:54 ack Nick-8thWall
17:26:54 nick: mostly +1 to brandon, such a small patch on the existing spec (maybe not even a patch, just in implementation). Understand the process in general, but maybe let others know we have this covered. It's basically mostly done - you might initially implement it as a desktop session.
17:26:56 ack cwilso
17:27:24 q+
17:27:25 q+
17:27:29 q+
17:28:42 ack ada
17:29:09 cwilso: we should put it in the charter; it's not a big deal, but we want to ensure it's in scope for patent policy reasons.
17:29:35 ada: this hardware didn't exist last charter, so we should like this cookie.
17:29:42 s/like/lick
17:30:11 ack alcooper
17:31:02 alexcooper: expanding scope is good; I think we shouldn't say we don't need to update the spec at all, devs probably need to have more information. Non-normative notes, at least.
17:31:17 ... rendering to these different devices might be a little bit different
17:31:22 q?
17:31:24 ack Leonard
17:32:11 s/alexcooper/alcooper
17:32:14 leonard: two items: 1, audio - input from users' environment (note privacy concerns) and audio output (may be covered?); and 2, haptics.
17:32:23 laford has joined #immersive-web
17:32:24 +q
17:32:36 ack alcooper
17:32:40 present+
17:33:03 laford: on haptics, OpenXR does have support for haptics, but it's not that exciting - buzz-at-freq-for-time
17:33:11 ada: do we want to add haptics?
17:33:44 cwilso: isn't this covered by gamepad?
17:34:02 alcooper: yes, seems sufficient for our scenarios
17:34:09 ada: what audio would we need?
17:34:15 leonard: don't know
17:34:25 q+ for (just comment, no voice needed) non-normative notes are free from IP and we usually state just as "Other non-normative documents may be created such as:" in the charter
17:34:33 ack laford
17:34:50 laford: haptics is a volatile rabbithole.
17:34:52 ack atsushi
17:34:52 atsushi, you wanted to discuss (just comment, no voice needed) non-normative notes are free from IP and we usually state just as "Other non-normative documents may be created such
17:34:53 q+
17:34:55 ... as:" in the charter
17:35:04 q+ to mention haptic wear hardware
17:35:27 ack cabanier
17:35:29 atsushi: non-normative notes are free from IP commitment, so other documents could be created (that aren't Recommendations, and don't imply Patent Policy)
17:35:42 ack ada
17:35:42 ada, you wanted to mention haptic wear hardware
17:36:04 cabanier: we did request enhancing haptics on gamepad, but didn't get much traction
17:36:25 dino has joined #immersive-web
17:36:56 ada: one thing I've heard g4merZ talk about is haptic devices - like haptic suits or gloves.
17:37:05 q+
17:37:10 ack bajones
17:37:57 brandon: I don't have visibility into every market, but I don't see too many efforts in haptic suits or gloves that are credible yet.
17:38:31 ... what is relevant is more full-body tracking. You can kind of stitch that together today with the Vive tracking pucks.
17:38:48 ... I don't know what we would need to represent that
17:39:19 ... also facial recognition; lots of research in pulling skeletons out of this data...
17:39:25 +1
17:39:29 ... in a totally non-creepy and non-threatening way...
17:39:46 ... seems more realistic than haptics
17:40:03 ... might want to add full-body tracking to the charter
17:40:09 q+
17:40:21 ack laford
17:40:22 ada: I note we have loads more discussion before we send this charter in for approval.
17:40:50 laford: full-body tracking sounds like a subset of human understanding, which seems good to add.
17:41:04 OK, and on to issue 71
17:41:08 zakim, take up agendum 2
17:41:08 agendum 2 -- Potential AR feature - point clouds -- taken up [from atsushi]
17:41:12 scribenick: laford
17:41:34 bialpio: one of the last gaps we have between web and native APIs
17:42:03 bialpio: Is there much interest? Or is it too low level? No concrete proposals atm, but we can work towards exposing info about the point clouds that underlying XR systems use to derive higher-level features
17:42:09 q+
17:42:11 bialpio: What does the group think?
17:42:13 ack Leonard
17:42:25 Leonard: point clouds are potentially GBs in size
17:42:40 Leonard: how do you deal with that much data in memory or on the network?
17:42:52 Leonard: Is the tech fully ready yet?
17:43:02 q+
17:43:25 Leonard: Geospatial people are working on static point clouds. Apple consumer devices have lidar point clouds
17:43:38 Leonard: consumer ones may be on the order of hundreds of MB
17:43:48 Leonard: ISO is working on point cloud streaming
17:43:56 yonet has joined #immersive-web
17:44:05 ack Nick-8thWall
17:44:07 Leonard: the idea is good; it's a dangerous area to unwittingly wander into
17:44:25 yonet +
17:44:27 Nick-8thWall: For us, having a basic data structure would be useful
17:45:03 Nick-8thWall: our curved image target tracking within an immersive session requires extracting feature points, so having access to a precomputed set of feature points directly would make our implementation much easier / cheaper
17:45:12 present+
17:45:19 q+
17:45:22 Nick-8thWall: we could build curved tracking on a point cloud and image texture API much more cheaply than on raw data
17:45:24 ack Leonard
17:45:41 Leonard: Is this point cloud data lidar or something separate?
17:46:06 Nick-8thWall: The point cloud collected by the device for tracking, whether from lidar or image feature detection etc...
17:46:20 q+
17:46:27 Leonard: Would be useful to structure it in such a manner that we're not dealing with big data sets but with device-collected data
17:46:42 ack bialpio
17:47:16 bialpio: we should look at it through the context of an XR session. It would be a matter of exposing the data that XR systems are currently tracking to synthesize the higher-level features
17:47:24 bialpio: would preclude working on big datasets
17:47:35 +1 for device tracking at the moment
17:47:37 bialpio: could be a point of simplification for the API
17:47:45 bialpio: only looked at it through the lens of AR
17:47:51 +q
17:47:54 q+
17:48:03 bialpio: is there an OpenXR equivalent?
17:48:21 bialpio: how would this be implemented on as many devices as possible?
17:48:23 ack laford
17:49:18 ack alexturn
17:49:28 laford: no mechanism in OpenXR; for WMR devices it may be part of the secret sauce
17:49:41 Jared_ has joined #immersive-web
17:49:48 alexturn: Not necessarily the secret sauce, but it may just have been something we did not want to give out
17:50:19 alexturn: don't want devs taking too hard a dependency on the specifics of the point cloud
17:50:29 alexturn: e.g. training an ML model on hardware A and it doesn't work on hardware B
17:51:07 alexturn: Perhaps for ARCore/ARKit it has stabilized enough to not worry about this?
17:51:47 alexturn: hit-test apps that would have been agnostic may lock themselves in by choosing point clouds
17:52:05 alexturn: do user agents have to pretend they have the wrong type of data for compat?
17:52:33 q?
17:52:48 alexturn: may be approaching the point of being technology-specific, but could accelerate algorithm development, e.g. Nick's algo
18:02:15 yonet has joined #immersive-web
18:04:15 yonet_ has joined #immersive-web
18:04:15 https://github.com/immersive-web/webxr/issues/1233
18:04:31 cabanier: Want to talk about optimizing how we deal with frame rendering
18:05:04 cabanier: in Oculus we get all the poses and ask for a new texture. If one isn't free, we have to wait.
18:05:23 cabanier: OpenXR has a way to solve this
18:05:36 cabanier: with WebXR layers you could kind of mimic it, but it's not explicit
18:06:30 cabanier: Oculus also has "phase sync", which adjusts the display time based on your rendering overhead to minimize latency
18:06:45 i|https://github.com/immersive-web/webxr/issues/1233|topic: Allow dynamic frame timing|
18:06:46 cabanier: need to know the rendering stages though
18:06:49 rrsagent, publish minutes
18:06:49 I have made the request to generate https://www.w3.org/2021/10/15-immersive-web-minutes.html atsushi
18:07:01 alexturn_ has joined #immersive-web
18:07:07 cabanier: could make it explicit for layers
18:07:35 cabanier: Would like to experiment with it
18:07:49 cabanier: is it already working on android?
18:07:59 q+
18:08:00 q?
18:08:03 ack laford
18:08:40 q+
18:08:42 laford: Cass Everitt is a big driver of this on the OpenXR side
18:08:53 q+
18:08:54 ack RafaelCintron
18:09:24 RafaelCintron: No objections in principle. Will need to restructure things to get pose data separate from rendering data
18:09:29 q+
18:09:36 RafaelCintron: In a browser architecture, how much difference will it make?
18:09:38 ack bajones
18:09:42 q-
18:09:46 RafaelCintron: If it is effective, seems like a win
18:09:48 q+
18:10:04 bajones: Right now there are separate mechanisms in the layers API for headpose and texture. Would that be sufficient?
18:10:53 q+
18:11:24 bajones: Feels like something that ideally should be best practice
18:11:43 cabanier: Could be a normative note?
18:12:30 bajones: having a note "if you spread out these calls, backends may be able to optimize rendering" seems like the basic step we should do here
18:13:33 Nick-8thWall: One thing to be aware of: we depend on the order of threejs operations. If these things get changed in a material way, we'd want to make sure our experiences aren't broken
18:13:55 cabanier: Should be optional so as not to break existing experiences
18:14:30 Nick-8thWall: The threejs implementation is a little too turn-key, with no ability to manually drive it. We have to hook into threejs systems that are restrictive
18:14:46 ack Nick-8thWall
18:14:57 Nick-8thWall: If we are rethinking threejs / WebXR APIs, maybe we should give more control to external apps and leave rendering to threejs
18:15:50 Nick-8thWall: example: "responsive scale", where experiences are laid out in front of you where the dev intends them to be. We rely on threejs to update its own structures based on WebXR poses, and we have to MITM them before threejs uses them
18:16:20 Nick-8thWall: Were they separate and control fell to WebXR, this would be more convenient
18:16:38 q+ to speak to three.js integration a bit
18:16:40 ack alexturn
18:17:42 alexturn: Worth pushing to see how far you can get. OpenXR is general, based on how much runtimes want to optimize. Tricky to update a pose halfway through the frame, causing a synchronous round trip all the way up and down the stack
18:18:07 alexturn: potentially an API for apps to ask for updated poses. Things may be different in WebXR due to asynchronicity
18:18:08 q+
18:18:09 q?
18:18:13 ack bajones
18:18:13 bajones, you wanted to speak to three.js integration a bit
18:18:57 bajones: Responding to Nick: we do not directly control the direction of threejs. I agree that threejs is a little too turn-key.
18:19:47 bajones: the threejs devs like this and like the API to be opinionated. Coordination on a large-scale restructuring may be tricky, but they could be persuaded
18:20:04 q?
18:20:04 Ada, have you heard of Babylon.JS?
18:20:18 RafaelCintron: of course :)
18:20:25 I also like PlayCanvas
18:20:36 There is also wonderland
18:20:42 ack cabanier
18:21:16 cabanier: Replying to Alex: it used to offer getting updated poses during the lifetime of a frame. Didn't that go away in OpenXR?
18:21:45 alexturn: When you call xrLocateSpace you get the latest answer based on the time, so for the same time you may get a more updated pose later
18:21:58 alexturn: can leverage this for rendering by calling it at a later time
18:22:24 alexturn: we rely on xrLocateSpace being instant, whereas in WebXR there is a whole process model that changes the dynamic
18:23:19 alexturn: XRFrame is more of the explicit model we used to have. Could see an XRFrame API for "getUpdatedPositions" which gives you a new XRFrame
18:23:39 q?
18:24:39 topic: Feasibility and desirability of persisting anchors
18:24:44 https://github.com/immersive-web/anchors/issues/69
18:25:04 scribenick: cabanier
18:25:25 ada: anchors are amazing and I want to share them
18:25:34 ... or hold onto them and share them later
18:25:46 ... some platforms have shareable anchors
18:26:02 ... they could be stored in the browser and used later
18:26:02 1+
18:26:07 q+
18:26:08 q+
18:26:08 q+
18:26:08 q+
18:26:13 ack alexturn
18:26:40 alexturn: there are transient anchors for making things more stable
18:26:59 ... there are persistent anchors for use across sessions
18:27:18 ... and there are shared anchors, which have a cloud backend and are hard to standardize
18:27:38 ... unless UAs support all backends, it would be hard to standardize
18:28:11 ... magic leap lets you find the nearest anchor and you find the nearest post to that anchor
18:28:19 q?
18:28:22 ack Nic
18:28:22 ... MS lets you create an anchor and use it
18:28:40 q+ to ask about just leaving the anchor list populated
18:28:42 Nick-8thWall: I was going to say the same thing that alexturn said
18:29:16 ... the cloud-based spatial location is more application level and not so much within the browser's scope
18:29:37 ... there are use cases for multi-session, but it seems less of a use case
18:29:51 q?
18:29:54 ... I don't think it's the highest priority
18:29:54 ack bajones
18:30:24 bajones: I was going to ask alexturn and Nick-8thWall what OpenXR is doing
18:30:38 ... we want to get them on the same page
18:30:55 ... fundamentally it will be hard to standardize at our level
18:31:08 q?
18:31:11 ack Leonard
18:31:15 ... because it is very platform dependent
18:31:36 q?
18:31:38 ack ada
18:31:38 ada, you wanted to ask about just leaving the anchor list populated
18:31:46 Leonard: could the developer turn off anchors?
18:32:04 q+
18:32:09 ada: are persistent anchors a stepping stone to shareable ones?
18:32:49 ack alexturn
18:32:50 ... if you have anchors enabled and you go to the same domain, you can find them again
18:33:23 q+
18:33:26 alexturn: we could find something that could be persistent across sessions
18:33:54 ... I don't know if it's trickier for phone-based AR
18:33:55 q+
18:34:11 ... sharing is much more difficult
18:34:57 ... for instance we have different APIs for persistent anchors and cloud anchors
18:35:08 q+ to suggest that maybe this is not the path
18:35:10 ... persistent anchors are a native API
18:35:20 ack Nick-8thWall
18:35:43 Nick-8thWall: I could walk through the computer vision
18:36:01 ... if there's a key attached to an opaque blob
18:36:12 ... what does the blob contain?
18:36:27 ... there's a bunch of tech that goes into that re-identification
18:36:42 ... there are image-level hashes to re-identify scenes
18:36:52 ... a lot of these technologies have magic
18:37:14 ... for instance google and MS use very similar technology
18:37:28 ... so their tokens might not be understood by each other
18:37:41 ... (point clouds)
18:37:53 ... even being future-compatible with yourself is hard
18:38:11 ... so sharing across devices and vendors is hard
18:38:28 q?
18:38:29 ... even across sessions there are hard problems to be solved
18:38:35 ack bialpio
18:38:50 bialpio: talking about the 3 tiers
18:39:15 q+
18:39:18 ... on ARCore you always need to go to the cloud to persist something
18:39:33 ... it seems that there is something that's browser level
18:39:49 ack alexturn
18:39:53 ... but the big unknown is to find a way that works everywhere
18:40:10 alexturn: local persistence skips the vendor problem
18:40:16 q+
18:40:19 ... and the problem that over time things change
18:40:36 ... we don't hand out the blob and just store things internally
18:40:58 ... in that world, there's more of a chance that we have a common API
18:41:08 ack ada
18:41:08 ada, you wanted to suggest that maybe this is not the path
18:41:11 q-
18:41:13 ... right now we don't have a great way to share across
18:41:29 ada: it seems that persistence isn't the route towards sharing
18:41:38 ... and not that useful of a feature at all
18:42:06 ... last time we talked about this, we said we didn't want to share
18:42:11 ... and this hasn't changed
18:42:39 q+
18:42:40 ... maybe for the next version of anchors we could define a meta format
18:42:54 ... is this something that we want to do?
18:43:01 ack alexturn
18:43:03 ... or should we wait
18:43:13 alexturn: at the moment all the formats are opaque
18:43:27 ... my suspicion is that this is churning
18:43:51 ... we can't promise compat
18:43:51 q+
18:44:03 ... until things stabilize
18:44:18 ... there was a body that was trying to align this across vendors
18:44:22 ack bialpio
18:44:27 ... but I don't think we're at that point yet
18:44:40 bialpio: we might want to think about the use cases
18:44:52 ... I'm pretty excited about marker tracking
18:45:09 ... so you can localize multiple people at the same time
18:45:21 ... it can certainly help with that
18:45:29 q?
18:46:00 https://www.openarcloud.org/
18:46:03 alexturn: openarcloud.com was the group, but it doesn't seem to have updated since 2019
18:46:19 s/openarcloud.com/openarcloud.org
18:47:52 RRSAgent, make minutes
18:47:52 I have made the request to generate https://www.w3.org/2021/10/15-immersive-web-minutes.html yonet_
18:50:54 chair: yonet_
18:51:25 topic: HTMLModelElement https://github.com/immersive-web/proposals/issues/69
18:53:03 idris_ has joined #immersive-web
19:01:50 i will be there in 5 minutes
19:02:37 sorry that i'm late
19:06:04 Zakim, pick a victim
19:06:04 Not knowing who is chairing or who scribed recently, I propose alexturn
19:06:11 present+
19:06:55 alexturn_ has joined #immersive-web
19:07:11 scribenick: alexturn_
19:07:26 alexturn__ has joined #immersive-web
19:07:39 scribenick: alexturn__
19:08:02 dino: Recently, we made an explainer proposal for a <model> tag
19:08:21 dino: The idea is to promote 3D models to be at the level of image and video on the web
19:08:30 dino: Obvious issues are which file format to support and how to get consistent rendering
19:08:39 dino: We still think this is a really good idea
19:08:39 q+
19:09:04 ack Leonard
19:09:06 q+ to ask about the charter
19:09:11 dino: One thing we think this is useful for is as a WebXR layer - might be useful as an easy way to put a 3D model into WebXR as a layer
19:09:15 q+
19:09:27 Leonard: In general, in concept this is a good idea
19:09:30 lgombos_ has joined #immersive-web
19:09:35 Leonard: Helps folks incorporate 3D into regular web pages
19:09:46 Leonard: Still a tremendous number of problems trying to display a simple model
19:10:05 Leonard: Differences in look across browsers could be an issue
19:10:17 Leonard: Wish to see a WebComponent similar to the model-viewer component Google has
19:10:19 q+
19:10:31 Leonard: I think this is good to implement as a WebComponent but not as a web element
19:10:31 q+
19:10:32 ack ada
19:10:33 ada, you wanted to ask about the charter
19:11:18 ada: To Leonard's point, I actually think the lack of consistency across browsers is a feature, not a bug
19:11:29 ada: Browsers can adapt the content to the limitations of the system
19:11:47 ada: A developer in WebGL may not pay attention to the specs on a low-power device and could miss devices that can otherwise handle that content
19:12:16 ada: Dean, do you think this would be a good thing to go into the charter?
19:12:24 dino: I would love to see this go into the charter.
19:12:32 ada: Cool!
19:12:39 ack brandon
19:12:47 ack bajones
19:12:52 bajones: Agree that this belongs in the charter.
19:13:08 I'll wait until everyone speaks and then try to answer :)
19:13:11 bajones: I've been somewhat vocal in my disagreement with this proposal online - I just want to be clear about the goals we're achieving here.
19:13:11 q+
19:13:25 bajones: Not sure if a full tag is what we should do first here.
19:13:50 bajones: Agreed that WebGL is hard! I know it well and also find it difficult building compelling content.
19:14:10 bajones: I do wonder whether the folks who would use this tag are comparing it to the model-viewer component
19:14:31 bajones: The difference should just be one include
19:14:45 bajones: Whereas model-viewer has a lot of extra capabilities vs. what this proposal gets you
19:15:07 bajones: If we move forward here, let's focus on the delta in what you can get compared to model-viewer today
19:15:14 bajones: Ease of use is not likely one of those
19:15:38 bajones: Better performance, faster parsing, better battery life could be key advantages here
19:15:50 bajones: Not sure if we have these numbers yet.
19:16:19 ack cabanier
19:16:21 bajones: Knowing the key advantages is important here
19:16:23 q+
19:16:32 cabanier: One key advantage is that you could get the models to actually show up in 3D
19:16:53 https://modelviewer.dev/
19:17:00 cabanier: WebXR needs permission prompts to get head poses - this would solve those issues.
19:17:09 cabanier: Cross-origin stuff could be solved here
19:17:14 q+ ... lighting, etc.
19:17:22 cabanier: Scrolling can be difficult to handle smoothly - native handling could solve that
19:17:40 cabanier: How big can it be? Is it constrained to a little area? The OS could manage constraints there
19:17:53 cabanier: This would start simple but could solve issues over time
19:18:01 ack https://modelviewer.dev/
19:18:13 alexturn__: I agree with Brandon
19:18:17 alexturn__: to the points which cabanier was making, one key difference is whether the user is viewing it on a headset or a flat display
19:18:18 ack alexturn__
19:19:09 ... what benefit are we getting from the component? for flat displays the benefits are UX like battery and performance. But for immersive hardware it's a huge UX benefit because it can actually have depth
19:19:32 q+
19:19:45 ... what if people want to drag things out, like you can with images in web browsers today? You could then do that with the model without the page needing to understand it
19:20:02 can drag it to other applications or to your desktop
19:20:16 s/can/... can
19:20:23 ack dino
19:20:35 q+
19:20:37 ... there are many benefits we can add in the flow of the page for a 3D model
19:20:59 ack ..., lighting, etc.
19:21:24 ack lighting
19:21:33 ack ...
19:21:39 ack etc.
19:21:46 dino: Alex and Rik said many of the things I was thinking here.
19:23:07 dino: Definitely useful for headsets - even for phones, folks have asked us for 3D models within 2D WebViews within AR scenes.
19:23:24 q+
19:23:34 q+
19:23:38 ack Nick-8thWall
19:23:49 q+
19:23:57 dino: Consistent rendering is a key thing to solve - MaterialX could help here as it tries to tackle that
19:24:18 dino: Battery life and dragging out of the page are interesting too
19:24:52 Nick-8thWall: The scenario of embedding 3D content within a floating 2D WebView is indeed interesting
19:25:14 Nick-8thWall: Coming at this from a different perspective on the developer side
19:25:34 Nick-8thWall: I generally want to give power to developers here - if the developer can do it or the browser can do it, I want to give the power to developers
19:25:49 Nick-8thWall: Developers rarely want to stop at simple 3D content - they want interaction as well
19:26:09 Nick-8thWall: Customization is important too, even for a simple viewer
19:26:19 q+ to address the two elephants in the room.
19:26:31 Nick-8thWall: If we prematurely put things behind an interface, we may cut off experimentation in that regard
19:27:00 Nick-8thWall: Is there another session type here, like a looking-glass immersive session?
19:27:17 Nick-8thWall: Could that be done programmatically?
19:27:40 Nick-8thWall: In terms of efficiency, I think there's a tendency to lean on these concerns too much
19:27:51 Nick-8thWall: We've seen great strides in SIMD in WebAssembly over time
19:28:10 Nick-8thWall: The web tends to get more efficient over time, so the perf concerns at the beginning of a standards process fade over time
19:28:31 ack RafaelCintron
19:28:36 Nick-8thWall: I would always advocate for tech that lets better products get built rather than a cookie-cutter mold
19:28:52 q+
19:28:55 RafaelCintron: Dean shared a lot of my points
19:29:05 q- later
19:29:24 q+
19:29:49 RafaelCintron: Permissions are simpler here - all you have to do is let things pop out, rather than allowing scene understanding, etc.
19:30:09 RafaelCintron: Accessibility can be better too - can have special modes that browsers support such as wireframe mode that don't require pages to participate 19:30:19 RafaelCintron: None of this should replace anything that exists 19:30:22 ack Leonard 19:30:38 Leonard: Several people have mentioned lighting - we've looked at this extensively in Khronos 19:30:58 Leonard: In Commerce, people want the product to look exactly right - not just lighting but tone-mapping after rendering 19:31:17 Leonard: Without a reference here, I'd be very concerned 19:31:33 Leonard: Folks at Google have talked about getting the tone mapping exactly right with model-viewer 19:31:51 Leonard: People have talked about VR vs. AR - AR lighting is trickier 19:32:10 Leonard: Point lighting vs. other lighting can be an issue with PBR 19:32:33 Leonard: Format evolution can be another concern if some browsers don't support newer model formats 19:32:48 ack bajones 19:32:52 Leonard: If we can't have a path forward for model formats, feels way premature 19:33:02 bajones: Glad Leonard brought up the lighting work in Khronos 19:33:26 bajones: A lot of work to take PBR from glTF and add extensions for lighting that retailers find important and ensure it appears consistent in multiple renderers 19:33:44 bajones: We have tests to ensure that three.js and Google's Filament and Babylon are all within tolerance 19:34:01 bajones: This is important work that is already being done and it'd be a mistake to not lean on that 19:34:40 bajones: The model format that this thing hinges off of is going to be enormously important 19:34:50 bajones: If we end up in a situation where browsers can pick and choose which formats to choose and not have at least one shared format, that will be strictly worse than where we are today 19:35:10 bajones: It would be a very sad state of affairs if one browser supports one format and one supports another and not have one unified format 19:35:19 bajones: We don't want to 
repeat the mistakes of video tags past 19:35:46 bajones: Even if we have the tag, we should make sure we can still experiment in that same space with lower-level primitives 19:35:47 q+ 19:36:00 dino has joined #immersive-web 19:36:02 bajones: We did talk about having inline sessions with some level of head-tracking 19:36:21 bajones: Has some issues and limitations around permissions 19:36:39 bajones: Also would create problems rendering outside the bounds of the page 19:36:59 bajones: Would be good to make sure we can still experiment outside the bounds of the tag 19:37:16 laford: Just wanted to voice support for the concept in general - really like the idea 19:37:21 ack laford 19:37:43 laford: Rather than having WebGL separate from the DOM - having 3D models be part of the 2D DOM 19:38:01 laford: Even just having the ability to say this is inline 3D content and seeing what browsers do with that 19:38:14 laford: Phones will treat it differently than desktop and differently than headsets 19:38:30 laford: Moving content around can be enabled if everyone knows what is going on with the tag 19:38:32 ack dean 19:38:37 ack dino 19:38:48 dino: Just to be clear, this is definitely not trying to replace model-viewer or WebGL or anything like that 19:39:07 q+ 19:39:08 dino: I did reference model-viewer's excellent research into lighting in the explainer 19:39:12 yonet_+ 19:39:17 q+ 19:39:19 q+ 19:39:34 dino: The lighting problems mentioned apply to WebGL viewers as well - we need to explore this no matter what 19:39:35 q- 19:39:52 dino: Understand the need for a single format - video has one today even if not in the spec 19:40:11 dino: Not mentioning glTF was just an oversight 19:40:42 dino: Even if you look at these different scene formats, they are fairly similar and I could imagine an API level here 19:40:57 ack bialpio 19:41:13 dino: The explainer proposal would be just a start 19:41:58 bialpio: The permissions benefit only works if we don't let the app read back things
that could be dangerous 19:42:08 good points bialpio! Internally we joke about this as "VRML5" 19:42:11 bialpio: Might start simple but could grow into something very complicated to support 19:42:12 ack cwilso 19:42:12 cwilso, you wanted to address the two elephants in the room. 19:42:31 cwilso: Happy to see that one of the elephants in the room was brought up around formats 19:42:45 cwilso: Building standards is for improving interoperability - need a baseline of support 19:43:11 cwilso: If we come up with a tag that can't be used consistently everywhere, we haven't made things more consistent 19:43:27 cwilso: Also worth noting that if we propose a new tag, this will ultimately just be a proposal to the WHATWG HTML WG 19:43:34 ack alcooper 19:43:38 alcooper: Kind of wanted to go back a little bit to WASM and SIMD 19:43:47 +1 to what cwilso said. Can incubate here, but would have to go to WHATWG eventually. 19:44:02 alcooper: Looked into face mesh support in the WebXR APIs and a lot of developers told me the perf gains weren't worth the tradeoff 19:44:04 q+ 19:44:35 alcooper: A lot of people brought up other things we just can't do that might be better tradeoffs 19:44:52 ack cabanier 19:45:34 cabanier: Just want to note around performance that we are still adding things to the spec to fix performance - that will still be the primary gotcha for years to come 19:46:10 cabanier: For inline models, each session would need its own full-screen buffer - may be a non-starter 19:46:10 q- 19:46:34 cabanier: The video tag has had format gotchas, but they weren't fatal - things went on even with less common formats also supported 19:46:48 cabanier: Just to have 3D content pop out, I don't think we need permission prompts there 19:46:50 ack Nick-8thWall 19:46:57 Nick-8thWall: Just want to respond to a few things 19:47:03 re: Rik's comment about performance and needing fullscreen contexts - that would only be the case if you were rendering above the page.
You could still do a lot of interesting work rendering behind it through a cutout window that's bounded to the canvas. 19:47:29 re Rik: correct, my point was that you need to stay declarative if you want to skip permission prompts 19:47:29 Nick-8thWall: To the point about performance not being there, just want to thank you for doing the work to improve perf for everyone - it does pay off for the community! 19:47:43 Nick-8thWall: Heard some conversations about declarative vs. imperative 19:47:54 Also, certain limitations of WebGL won't carry over to WebGPU where, for example, you'll be able to have a single device driving multiple canvases. 19:48:14 Nick-8thWall: Heard the permissions issues around imperative APIs 19:48:25 Nick-8thWall: Ease of use is not really an issue since you can wrap declarative APIs around imperative APIs 19:48:30 q+ 19:48:39 Nick-8thWall: Just want to make sure we're honest there about the benefits 19:49:01 Nick-8thWall: Nobody believes we're here to remove WebGL/WebXR 19:49:22 Nick-8thWall: If there are things the browser can do in a privileged way, that can hold back innovation and land us on suboptimal solutions 19:49:40 Nick-8thWall: In the same way the web evolved from to , that should be the way we look here 19:50:09 Nick-8thWall: If we have some way to make things interactive, we can allow more innovation 19:50:19 Nick-8thWall: Need to be careful not to privilege innovation in the browser over innovation by developers 19:50:21 ack ada 19:50:45 ack alexturn__ 19:50:46 alexturn__: Great discussion. I think we'd benefit from a clustering of topics 19:51:05 alexturn__: There is a spectrum of how convincing they are. I agree ease of use is the least convincing. 19:51:50 alexturn__: And yes, perf might be transient. The most important ones to me are about user experience. With bucketing of topics we might be able to easily identify these things. The explainer should do this. 19:52:33 alexturn__: e.g.
img v canvas - even though canvas is a superset, it hasn't replaced img. 19:52:43 alexturn__: I suspect there is room for both approaches 19:53:34 ada: One big advantage we get from this is better accessibility. A glTF that has been marked up in a way that can be exposed to a screen reader would be a significant benefit. 19:54:11 ada: Also, the ability to add annotations and click events into a (like image maps) makes a lot of sense 19:54:24 q+ to mention that model-viewer does have accessibility support and hotspots 19:54:54 laford (via chat): image -> canvas === model -> webgpu / webxr 19:54:54 Image: User interpretation of content, with all the browser bits and pieces (events, accessibility etc...) 19:54:54 Canvas: Developer wants this to behave and look a specific way so sacrifices the benefits of the tag 19:54:54 Analogously a basic model tag is the image and WebGPU / WebXR is the Canvas 19:55:33 ada: e.g. click on an entity that has some identification, that can be exposed to the accessibility system 19:55:39 (from meet chat):