16:02:24 RRSAgent has joined #immersive-web
16:02:24 logging to https://www.w3.org/2022/09/15-immersive-web-irc
16:04:14 klausw has joined #immersive-web
16:04:52 cabanier has joined #immersive-web
16:05:56 bkardell_ has joined #immersive-web
16:15:07 Holli has joined #immersive-web
16:18:39 lgombos has joined #immersive-web
16:18:48 Present+ Laszlo_Gombos
16:19:36 present+
16:19:40 yonet has joined #immersive-web
16:20:04 bialpio has joined #immersive-web
16:20:06 Brandel has joined #immersive-web
16:20:08 bajones has joined #Immersive-Web
16:20:51 scribenick: bajones
16:21:04 https://www.w3.org/groups/wg/immersive-web/participants
16:21:29 agenda: https://github.com/immersive-web/administrivia/tree/main/TPAC-2022#day-1--september-15th-2022
16:21:31 We're all getting a bit of a late start this morning :)
16:22:00 Introductions
16:25:19 https://github.com/immersive-web/layers/issues/265
16:25:44 scribe:bajones
16:25:58 cabanier: This seems to have been scheduled earlier but couldn't find discussion or resolution
16:26:26 ... If you are playing a stereo video and then show the UI, you would like to make the video mono
16:26:33 ... Currently no way to do that.
16:27:27 q?
16:27:28 ... Would mostly be an attribute on cylinder and quad layers, could also apply to GL layers
16:27:43 scribenick ada:
16:27:55 bajones: I remember we talked about this before but it's good to talk about again
16:28:09 chair: yonet
16:28:10 ... in layers, stereo or not is set at layer creation time
16:28:22 ... is it possible to create and swap in a non-stereo video
16:29:00 cabanier: the issue is that the video is top-bottom so you would see that
16:29:14 bajones: you could then choose to render just half the content
16:29:32 bajones: if you are going to do it for media layers you might as well do it for the others too
16:30:21 bajones: as a follow-up, because Meta has been the only one who has implemented layers, what would be the implementation cost of this?
16:30:42 cabanier: it's simple to implement
16:30:51 ... should we come up with a name for the property?
16:31:08 ... 'force mono'?
16:31:23 bajones: would it be possible to make the attribute mutable?
16:31:30 cabanier: not really
16:32:20 bajones: this would probably only be used in transient situations, the app would still continue as normal but the compositor would only do half the work
16:32:34 cabanier: we would duplicate the left eye view to the right eye
16:33:22 cabanier: feels like it should just be a boolean
16:34:04 bajones: I wonder if it's worth allowing the developer to specify how it is shown so the developer can pre-emptively optimise by not rendering a particular eye
16:35:12 ada: an enum would give us more freedom down the line
16:35:19 cabanier: but they are really annoying to spec
16:36:02 bajones: I don't really care too much between bool and enum, but enum could be useful
16:36:45 bajones: if it's in the spec we should definitely define which eye is preferred
16:36:58 ada: is there anywhere else in the spec where one eye is favoured?
16:37:02 cabanier: no
16:37:25 q+
16:37:55 ack bialpio
16:37:55 bialpio: We could do an enum disabled/enabled, then later add force-left/force-right
16:39:25 bajones: (to cabanier) I am worried that doing this for projection layers wouldn't work for unusual displays
16:40:10 cabanier: all but projection layers
16:40:53 ... cylinder, quad, cube, equirect
16:41:20 cabanier: name?
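Whatever the attribute ends up being called, the shape being discussed in issue 265 is roughly the following. This is a sketch only: `forceMono` is a placeholder for the not-yet-named option, while `XRMediaBinding`, `createQuadLayer`, and `layout` are the existing WebXR Layers API; `xrSession` and `xrReferenceSpace` are assumed to already exist.

```js
// Sketch of the option discussed in issue 265. `forceMono` is a placeholder
// name (no such attribute exists yet); the rest is the shipped Layers API.
const video = document.createElement('video');
video.src = 'stereo-top-bottom-video.mp4';

const mediaBinding = new XRMediaBinding(xrSession);
const layer = mediaBinding.createQuadLayer(video, {
  space: xrReferenceSpace,
  layout: 'stereo-top-bottom',
  // Hypothetical: present one eye's half of the texture to both eyes while
  // 2D UI is shown, instead of re-creating the layer as mono.
  forceMono: true,
});
xrSession.updateRenderState({ layers: [layer] });
```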
16:41:42 bajones: I don't love forceMono
16:42:43 bajones: although it does seem fitting
16:43:37 bajones: maybe forceMonoView or forceMonoPresentation to inform that it's not the shape changing
16:44:01 https://docs.google.com/presentation/d/1typ1VnQ9uzjKK0S_-lM430i6w99e1DfQCD-NPOY5OLg/edit?usp=sharing
16:44:51 ashwin has joined #immersive-web
16:45:26 https://github.com/immersive-web/layers/issues/287
16:49:48 scribenick: yonet
16:50:02 What is the disadvantage of creating multiple layers?
16:50:19 It is expensive to create
16:51:25 bajones has joined #Immersive-Web
16:51:57 bajones: why not destroy the low res layer and create a high res one?
16:52:10 rik: Customer with a video playback library is streaming in multiple video resolutions
16:52:16 ... Wants to select based on which comes in first
16:52:22 ... Would like to swap video source mid-stream
16:52:35 ... Can't be an attribute on the layer
16:52:37 bajones: Why not?
16:52:41 ada: you establish the layer from the video element; they are not changing the URL of the video, right?
16:52:42 rik: Because layers are currently agnostic to source, want to preserve
16:52:53 rik: changing the URL, yes
16:53:42 if you change the source of the HTML video element, you are starting a new stream
16:54:02 the purpose is to download the higher resolution of the same video.
16:54:11 q+ would not the network cache help here
16:55:08 piotr: is streaming bandwidth the issue?
16:56:09 bajones: starting the low res stream and waiting for the high res to switch. I am reluctant to create two steps where it could be done in one.
16:57:13 q?
16:57:19 ... I would say, if there is a concrete reason why this supports the user to do something we can do more efficiently, then we should do it
16:57:25 q+
16:57:34 ack lgombos
16:58:54 lgombos: if you change the source, a new download starts; does it really start a new download, do you know?
16:59:37 rik: I don't know but they are getting a black frame
16:59:44 q+
17:01:16 bajones: maybe they are doing something with start and destroy. if the issue is the opaque layer and that's the reason they are getting black, that might be a useful thing to communicate.
17:01:32 ... maybe it is a signal that we don't need, but I'm not sure
17:02:23 rik: the feedback was
17:03:28 bajones: I would like to hear more technical details. It might be something they can change in userspace
17:03:52 ack Manishearth
17:04:00 ... maybe the source swapping is the efficient way or we might need to give a signal
17:04:26 manish: HTML video elements already support multiple sources.
17:04:40 Will has joined #immersive-web
17:04:42 q+
17:04:47 ... it can also be used for resolution. I don't know how smooth that is
17:05:10 ... to me this seems like a problem that needs to be fixed by video.
17:05:25 ... it is a problem with video in general
17:05:28 scheib has joined #immersive-web
17:05:30 q+
17:05:46 q+
17:05:48 ... we should look into what we have right now
17:07:06 q?
17:07:14 ack cabanier
17:07:17 ack Will
17:07:46 marcos: it is subtly different with video conditions, but it will switch
17:07:47 present+
17:08:13 will: if there are multiple sources, video will use the first one.
17:08:47 ... working on HLS that will be available by Q4; HLS.js is available today
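Since the direction here is to let the video pipeline handle resolution switching rather than the layer API, a minimal sketch of that pattern follows. It assumes an existing `xrSession` and `xrReferenceSpace`; `Hls`, `loadSource`, and `attachMedia` are HLS.js's documented API, and the manifest URL is a placeholder.

```js
import Hls from 'hls.js';

const video = document.createElement('video');
const manifest = 'https://example.com/stream.m3u8'; // placeholder URL

// Let the streaming player switch renditions on the same element...
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(manifest);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = manifest; // browsers with native HLS support
}

// ...while the media layer keeps referencing that one video element, so no
// layer re-creation is needed when the resolution changes.
const layer = new XRMediaBinding(xrSession).createQuadLayer(video, {
  space: xrReferenceSpace,
  layout: 'mono',
});
xrSession.updateRenderState({ layers: [layer] });
```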
17:09:00 scribe+
17:09:19 will: up to the player to decide how this will be implemented
17:09:42 ack bajones
17:09:47 rik: Shaka Player works with media layers
17:09:50 q-
17:10:13 ada: I think this resolves this issue
17:10:26 agendum: https://github.com/immersive-web/layers/issues/288
17:10:33 ... let's move on to the next topic - Break!
17:10:41 present+
17:10:48 present-
17:10:59 present+
17:11:25 rrsagent, make log public
17:11:28 rrsagent, publish minutes
17:11:28 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
17:30:33 bialpio has joined #immersive-web
17:30:42 present+
17:31:04 marcosc has joined #immersive-web
17:31:08 Present+
17:31:09 present+
17:31:34 present+
17:31:40 JohnRiv has joined #immersive-web
17:34:33 present+
17:34:36 present+
17:34:38 zakim, who is here?
17:34:38 Present: Leonard, JoeLamyman, manishearth_, Laszlo_Gombos, cabanier, bajones, cwilso, bialpio, marcosc, ada, JohnRiv, yonet
17:34:40 On IRC I see JohnRiv, marcosc, bialpio, scheib, yonet, bkardell_, cabanier, klausw, RRSAgent, atsushi, kzms2, garykac, hyojin, sharonmary6, rzr, OverTime, NicolaiIvanov,
17:34:40 ... helixhexagon, fernansd, [old]freshgumbubbles, etropea73101, dietrich, Chrysippus, SergeyRubanov, bemfmmhhj, babage, NellWaliczek, \join_subline, Zakim, sangwhan, cwilso, iank_,
17:34:40 ... ada, Manishearth
17:34:42 present+
17:35:12 zakim, choose a victim
17:35:12 Not knowing who is chairing or who scribed recently, I propose cwilso
17:35:26 scribe+
17:35:41 https://github.com/immersive-web/layers/issues/288
17:35:48 lgombos has joined #immersive-web
17:35:57 rik: this should be short: https://github.com/immersive-web/layers/issues/288
17:36:14 ashwin_ has joined #immersive-web
17:36:16 q+
17:36:18 ... no frame available is black; can we make it transparent?
17:36:24 bajones: purple!
17:36:38 ... is there a default on GL layers?
17:36:45 rik: I think it's transparent
17:37:02 bajones: surprising that media layers don't do that?
17:37:22 rik: originally we didn't have opacity, so maybe that's why? but that's gone away
17:38:19 ada: so a transparent PNG on a media layer would be black?
17:38:24 rik: no, transparent
17:38:33 ... opacity is a multiplier
17:39:22 bajones: behaviorally seems fine. are there scenarios where we can collapse transparency?
17:39:54 ... can videos tell you if they're transparent?
17:40:00 rik: I don't think Chrome supports this
17:40:15 bajones: that's an optimization anyway. I think this is fine.
17:40:17 q?
17:40:22 ack ada
17:40:54 ada: it makes sense to me to start off as transparent.
17:41:05 igarashi has joined #immersive-web
17:41:08 q?
17:41:12 present+
17:41:20 ada: resolved
17:41:36 ... next item on the agenda is about
17:42:30 https://github.com/immersive-web/model-element/issues/55
17:42:42 ... but let's wait on that until the scehduled time
17:42:51 s/scehduled time
17:42:57 s/scehduled time/
17:43:04 s/scehduled time/scheduled time/
17:43:16 bajones: let's talk about immersive audio
17:43:26 mfoltzgoogle has joined #immersive-web
17:43:27 ... Web Audio is like WebXR, an imperative API
17:43:31 Present+ Mark_Foltz
17:43:52 ... WA can spatialize audio through HRTFs (head-related transfer functions)
17:44:12 ... WA is looking at pulling in file formats with spatialized audio
17:44:32 ... also 3DOF "hearables" - audio AR
17:44:41 Leonard has joined #immersive-web
17:44:48 present+
17:44:54 ... how do they get the data streams out of those types of devices
17:45:39 ... kinda like Cardboard, with no visual component
17:45:55 ... all our APIs have a video component, which has a privacy aspect to it
17:46:23 ... it would be interesting for this case to have something like a type of session with just the tracking aspect, and no visual aspect to it
17:46:36 q+ to ask about declarative
17:46:51 jeff has joined #immersive-web
17:46:59 ... having that kind of session would be beneficial for not only this, but other scenarios - I can't remember where atm but I've heard this request before
17:47:00 q+
17:47:02 ack ada
17:47:02 ada, you wanted to ask about declarative
17:47:37 ada: it would be really interesting if audio was more declarative, if you could place this in the format rather than using imperative APIs.
17:47:40 q+
17:48:00 present+
17:48:38 ... Omnitone apparently gets this wrong now
17:48:54 ... that would work well for adding audio to immersive AR scenes.
17:48:56 ack cabanier
17:49:25 rik: after meeting with audio folks, I looked into this and what people use - Howler.js seems to be pretty well supported.
17:50:07 q?
17:50:11 ack cwilso
17:50:14 ... maybe the device orientation API would be easier? it also pops a permission prompt, might be easier.
17:51:08 cwilso: I just wanted to comment on a couple of things: device orientation would be good enough for 3DOF; audio in some areas is temporally very strict and in others is very forgiving
17:51:10 bajones has joined #Immersive-Web
17:51:14 Brandel has joined #immersive-web
17:51:20 q+
17:51:27 cwilso: three.js has built-in panner node support
17:52:03 cwilso: the only thing that has to happen behind the scenes is the panner node input
17:52:03 https://github.com/immersive-web/webxr/issues/1248
17:52:13 bajones: did a little digging, found an existing issue
17:52:16 ashwin has joined #immersive-web
17:52:29 Holli has joined #immersive-web
17:52:49 ... on a 6DOF audio-only session. I thought this was from a specific company, but not sure.
17:53:46 ... with what Rik was saying about device orientation, the only problem is that it doesn't represent multiple devices (e.g. it doesn't track orientation of "device on my head", but just "the device")
17:54:05 ... with devices like AirPods, it might only be a 3DOF pose, but might be 6DOF in the future
17:54:22 ... we talked about those kinds of devices in the past
17:54:49 ... I think there's enough of an overlap, and it maps into the idea of our API
17:54:56 q?
17:55:01 ack bajones
17:55:02 ack baj
17:56:01 ada: who's interested in the intersection of audio and WebXR, e.g. this audio-only session?
17:56:15 klausw has joined #immersive-web
17:56:30 bajones: seems like two questions: 1) integration of Web Audio and XR, e.g. wiring up a panner node
17:56:51 ... 2) do we want session types for no-visual-component sessions
17:56:57 q+
17:57:01 Brandel_ has joined #immersive-web
17:57:11 ... there may be paths forward for both of these
17:57:17 q+
17:57:47 q+
17:58:07 ack Leonard
17:58:13 ... "do we want to do things to advance these"
17:58:24 +1 on making audio integration better
17:58:29 q+
17:58:30 leonard: do we need audio-only sessions for accessibility reasons?
17:58:36 ack baj
17:59:30 bajones: I don't think it's necessary; but we might want to encourage people to have better audio cues in their experiences.
17:59:38 q+ to note https://github.com/immersive-web/webxr/issues/390
18:00:35 ... a special mode is probably better for when you're dealing with hardware limitations (e.g. a pair of glasses that only has audio)
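The "panner node input" cwilso mentions above is essentially the viewer pose: Web Audio already spatializes through `PannerNode` and `AudioListener`, so the missing glue is copying the head pose into the listener every frame. A minimal sketch using only shipped APIs, assuming `audioCtx`, `xrSession`, and `xrRefSpace` already exist.

```js
// Route each sound through an HRTF panner as usual...
const panner = new PannerNode(audioCtx, { panningModel: 'HRTF' });
// someSourceNode.connect(panner).connect(audioCtx.destination);

// ...and feed the WebXR viewer pose into the Web Audio listener each frame.
function onXRFrame(time, frame) {
  xrSession.requestAnimationFrame(onXRFrame);
  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;

  const m = pose.transform.matrix; // column-major 4x4
  const listener = audioCtx.listener;
  const t = audioCtx.currentTime;

  listener.positionX.setValueAtTime(m[12], t); // translation column
  listener.positionY.setValueAtTime(m[13], t);
  listener.positionZ.setValueAtTime(m[14], t);
  listener.forwardX.setValueAtTime(-m[8], t);  // forward = -Z basis column
  listener.forwardY.setValueAtTime(-m[9], t);
  listener.forwardZ.setValueAtTime(-m[10], t);
  listener.upX.setValueAtTime(m[4], t);        // up = +Y basis column
  listener.upY.setValueAtTime(m[5], t);
}
xrSession.requestAnimationFrame(onXRFrame);
```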
18:00:47 q+ to note https://github.com/immersive-web/webxr/issues/815
18:01:05 ada: would that hardware-limited scenario break things today?
18:01:38 alexturn has joined #immersive-web
18:01:43 brandon: yes, probably
18:02:00 ... due to exposing zero views
18:02:04 vq?
18:02:40 q?
18:03:24 ... it might work, but it would be fragile in how experiences are authored. For this modality of apps, you really want it to be more like an inline session that is declared as audio-only
18:03:30 ack Brandel_
18:04:40 brandel: new iOS has better HRTF details (e.g. shape of your ear). Curious to know if PannerNode supports this.
18:05:09 ... StereoPannerNode just does stereo panning
18:05:11 scribe+
18:05:53 cwilso: Web Audio just has a generic "use an HRTF" to do this; there is a default HRTF that the user agent could replace with a better one, to give a better experience.
18:06:03 ... no one has thought about doing that yet
18:06:08 ack cwilso
18:06:08 cwilso, you wanted to note https://github.com/immersive-web/webxr/issues/390 and to note https://github.com/immersive-web/webxr/issues/815
18:07:05 ... on the other two issues, I would like to call out these issues about non-visual sessions - one about allowing non-visual sessions and an issue for hooking up panner nodes
18:07:10 zakim, close the queue
18:07:10 ok, ada, the speaker queue is closed
18:07:12 q?
18:07:18 ack ada
18:07:21 ack ada
18:07:27 present+
18:07:55 ada: just wanted to mention that 8th Wall wanted tracking-only sessions to have their own visual implementation
18:08:11 alcooper has joined #immersive-web
18:08:15 bajones: https://github.com/immersive-web/webxr/issues/1248
18:08:20 ada: let
18:08:33 ada: let's move on to the <model> element, with a new scribe
18:08:40 agendum: https://github.com/immersive-web/model-element/issues/55
18:08:46 zakim, choose a victim
18:08:46 Not knowing who is chairing or who scribed recently, I propose yonet
18:08:53 zakim, choose a victim
18:08:53 Not knowing who is chairing or who scribed recently, I propose bialpio
18:08:54 present+
18:09:00 Will has joined #immersive-web
18:09:05 scribenick: bialpio
18:10:02 introductions
18:10:28 bialpio_ has joined #immersive-web
18:10:40 scribenick: bialpio
18:10:45 q+
18:10:51 scribenick: bialpio_
18:10:53 zakim, open the queue
18:10:53 ok, ada, the speaker queue is open
18:10:53 zakim, open the queue
18:10:55 ok, cwilso, the speaker queue is open
18:11:02 q+
18:11:02 q+ alexturn
18:11:39 ack alcooper
18:11:55 ack alexturn
18:12:25 @Ada: It's your new job
18:13:20 Emmett: already a fair bit of discussion - interesting idea, but...
18:13:42 ... I've been arguing that we're not yet ready to standardize, arguments already in the issue
18:14:06 ... I think we should be talking about what problems we're trying to solve, as standardizing may be A solution, but not THE solution
18:14:12 q+
18:14:32 ... main point is that right now looks a lot like
18:14:56 ... so far the main advantage is that we can skip a permission prompt that WebXR would show
18:15:07 q+
18:15:10 ... but it may not be reason enough to go with standardizing
18:15:21 ... especially since the API shape can get massive
18:15:47 ... so the main question is what the goals are
18:15:48 ack marcosc
18:16:08 marcosc: apologies if the intentions were not clear enough, let's rehash...
18:16:22 ... goal is to have a simple way to include 3D models in web pages
18:16:28 ... the commerce case is really important
18:17:16 ... AR case - it'd be cool if we didn't have to download such components twice
18:17:24 ... the accessibility story is more compelling
18:17:31 q+
18:17:33 ... API surface is going to be a challenge
18:17:37 q+
18:18:11 Emmett: what is the delta? what does a standardized element give you that you won't get from existing options? what do you gain?
18:18:28 marcosc: the browser renders it for you, so you don't need to download any JS
18:18:37 ... no dependency on any JS library
18:18:42 q+
18:18:45 ... you can get new format support for free
18:19:01 1+
18:19:03 q+
18:19:13 Emmett: how do you get consistent format support across different browsers, and why is having a standardized element better?
18:19:32 ... we right now have consistent rendering across browsers and we can rapidly iterate on the solution
18:19:54 ... I don't understand how we're going to achieve that when we have different browsers w/ different schedules
18:20:07 marcosc: the tag does not preclude solutions in JS
18:20:11 q+ to discuss baseline
18:20:31 ... browsers may be behind but over time they stabilize and catch up
18:20:31 q+ to discuss object
18:20:37 q+
18:20:49 q+
18:20:57 ack ada
18:20:58 ... the advantage is that it's built into the browser, we have a baseline
18:21:00 q+
18:21:39 ada: re feature gap of model vs model-viewer - that's not a big disadvantage, we can keep adding things to the browser impl
18:22:05 ... if at the start model doesn't work for people, they can still rely on model-viewer
18:22:08 ack alexturn
18:22:38 alexturn: it may come down to philosophy of what the web should do
18:23:13 ... my brain goes to: what can't you get with the current solution
18:23:57 ... similarly w/ VR and AR browsers
18:24:26 ... there are things you can do but you are limited to the plane of the browser
18:24:48 Josh Carpenter demo: https://joshcarpenter.ca/composable-3d-web/
18:25:10 ... when you have the model tag, we can now do things in headsets
18:26:38 ... we may reach a point where we use models for UI elements and requesting WebXR for all little things would be overkill
18:27:06 Emmett: I don't see how the dots connect between Josh's slides and the browser yet
18:27:27 ... I'd get more interested in it if I saw how those 2 connect
18:27:44 ... when I look at Josh's slides, I don't see a browser, it's more like a maps experience
18:27:46 ack Brandel
18:28:18 Brandel: I've been playing w/ Apple's technology preview demos with icons
18:28:40 ... I'd echo alexturn - it's an opportunity for the browser to do things w/ information that is privileged
18:28:55 ... there are things that aren't safe to expose to the site
18:29:15 ... so what is it that people want to achieve?
18:29:26 qq+ cgw for a brief chairing reminder
18:30:01 ... we wouldn't consider using WebXR e.g. on apple.com; the permission prompt is the main reason
18:31:07 ... we can also have dedicated hardware and native libraries that'd be more efficient to use than JS
18:31:22 ... it's valuable to have browser-level support
18:32:23 ... you can do lighting estimation in immersive WebXR, but it'd be nice to do something similar mediated by the browser, without exposing the camera to the site
18:32:40 vq?
18:32:48 ... with that you can see reflections on the object
18:33:32 ack cgw
18:33:32 cgw, you wanted to react to Brandel to discuss a brief chairing reminder
18:33:58 ... is a good example of the use cases, but it won't be able to do the same thing as the browser
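Because model-viewer keeps coming up as the fallback while <model> incubates, here is a progressive-enhancement sketch of ada's point above. The <model> tag and its src attribute reflect the incubation and may change; the script path for the model-viewer library is a placeholder.

```js
// Prefer a native <model> where the incubation is implemented, otherwise fall
// back to the model-viewer custom element. The <model> tag/attributes are
// incubation-stage and may change; the library path is a placeholder.
const host = document.querySelector('#product-viewer');
const nativeModel = document.createElement('model');

if (!(nativeModel instanceof HTMLUnknownElement)) {
  nativeModel.setAttribute('src', 'chair.usdz'); // format support is the open question
  host.append(nativeModel);
} else {
  import('/vendor/model-viewer.min.js'); // placeholder path to the JS library
  const viewer = document.createElement('model-viewer');
  viewer.setAttribute('src', 'chair.glb');
  host.append(viewer);
}
```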
18:33:59 ack baj
18:34:36 bajones: everybody's talking about Josh's slides - the 2nd half goes into how this could look in a browser
18:35:01 ... we talked through those concepts w/ Josh
18:35:18 zakim, close the queue
18:35:18 ok, ada, the speaker queue is closed
18:35:35 ... everything you see here is far-looking, and we approached it through "what could we do through WebXR"
18:35:51 ... so I don't think it requires the browser to be managing this
18:36:11 ... but how can we do this without the prompt
18:36:29 ... seems like we'd like to be able to hand off rendering to the OS components if they exist
18:36:41 ... but the concern here is consistency
18:37:21 ... having sat through glTF meetings, what comes back is that we need things to render consistently everywhere in a manner that is close to real life
18:37:51 ... what we don't want is having the model be rendered completely differently across browsers
18:38:12 ... there will always be differences in capabilities so we may need to be able to opt in to different capabilities
18:38:24 ... but consistency is difficult if the rendering mechanism is OS-level
18:38:42 Josh Carpenter's slides: https://joshcarpenter.ca/composable-3d-web/
18:38:42 ... it stops being a problem if you rely on a JS library
18:39:06 ... so commerce can fall back to JS simply because those use cases could then rely on rendering consistency
18:39:28 q?
18:39:32 ack yonet
18:39:54 yonet: when we previously met, there were a lot of questions and Dean promised demos
18:39:59 https://joshcarpenter.ca/video/c3d/model-remix-everything.webm
18:40:04 ... so we could see what is an MVP
18:40:19 ... as it may affect discussions
18:40:46 my headphones just died so I am trying to recalibrate
18:40:49 marcosc: the demo is what we released behind a flag
18:41:19 Brandel: we have demos fit for public consumption I think?
18:41:22 q?
18:41:31 Not everyone has Safari. Can we see something (screen share)?
18:41:32 ... straightforward demonstrations of what we think should be possible
18:41:47 ack cwilso
18:41:47 cwilso, you wanted to discuss baseline and to discuss object
18:41:49 ... demos tomorrow
18:42:08 cwilso: taking off my chair hat
18:42:11 plh has joined #immersive-web
18:42:15 present+
18:42:32 ... my concern is the claim that the baseline is built into the browser - that may not be true
18:42:52 ... as we cannot guarantee it will happen everywhere in a consistent manner
18:43:07 ... has a baseline that all browsers implement
18:43:13 ... and there are extensions
18:43:30 ... I'm worried that if we don't have an interoperable baseline, we will fail
18:43:43 ... the point of standards is to be interoperable
18:44:05 ... so we should not call it a web standard
18:44:21 marcosc: this is an incubation
18:44:42 cwilso: so we need to be careful how we communicate
18:44:56 marcosc: agree, that's why this is an incubation, that's why we're reaching out now
18:45:06 q?
18:45:09 ... we need to prove that we can render consistently
18:45:33 ack cabanier
18:45:37 Emmett: we've gone through this as well in model-viewer since for AR on iOS we have to use QuickLook
18:46:09 cabanier: there are examples in Josh's slides that were explicitly a browser
18:46:35 ... in the Quest browser the power is that it can be rendered in 3D
18:47:05 ... we could do reflections and we cannot do those w/o permission prompts today
18:47:37 ... as for consistency, we may not even be here right now as different browsers can render things inconsistently even now
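For context on the reflections point raised above (lighting mediated by the browser, without exposing the camera to the site): this is roughly what a page can do today in immersive AR via the WebXR lighting-estimation module, where it is implemented - the page receives a coarse estimate rather than camera frames. `useLighting` is a stand-in for whatever the renderer does with the data, and session render-state setup is omitted.

```js
// Rough shape of today's WebXR lighting-estimation path (where implemented):
// the page gets a coarse estimate, not camera pixels.
// (Base layer / render state setup omitted for brevity.)
const session = await navigator.xr.requestSession('immersive-ar', {
  optionalFeatures: ['lighting-estimation'],
});
const lightProbe = await session.requestLightProbe();

function onFrame(time, frame) {
  session.requestAnimationFrame(onFrame);
  const estimate = frame.getLightEstimate(lightProbe);
  if (estimate) {
    // Low-order spherical harmonics plus a primary directional light.
    useLighting(estimate.sphericalHarmonicsCoefficients,
                estimate.primaryLightDirection,
                estimate.primaryLightIntensity);
  }
}
session.requestAnimationFrame(onFrame);
```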
18:48:06 bajones: interesting where the line in the sand is
18:48:29 ... but the problem is that if one browser renders glass correctly and the other renders it as a gray blob
18:48:42 ... similarly, if one browser comes up w/ a hair model and the other does not
18:49:06 ... so there are distinctions between incorrectly representing colors and inconsistent rendering of models
18:49:29 Emmett: one case in point is now with how roughness gets displayed
18:49:38 ... it's less about the colors
18:49:54 ... glTF is what aims to solve this
18:50:09 q?
18:50:17 ... path tracers are the baselines and rasterizers should aim to be close to those
18:51:01 bajones: lighting being used as an input for rendering is a nice idea, but all the current devices that I've used use a low-res approximation of environment lighting
18:51:23 ... so for shiny models you may run into inconsistencies as well
18:51:35 ack Leonard
18:51:37 q?
18:51:39 ... that indicates that we cannot hand off things to the renderer and be done
18:52:21 Leonard: the way this is presented is as a new tag, but we still need to figure out a lot of stuff
18:52:30 q+
18:52:39 ... the most important part in all of this is correctly rendering the 3D model, including animations
18:52:59 ... commerce retailers are interested in non-static things being shown
18:53:26 ... it's concerning to me that it's not addressing questions around rendering, camera, lighting...
18:53:35 q?
18:53:38 ack klausw
18:54:01 klausw: one thing that is confusing is what do we want to include initially
18:54:07 ... what do we add things later
18:54:18 s/what/how
18:54:31 ... how will the site author know what is available
18:54:45 ... so if animation gets added later, how do we surface it to the site authors
18:55:05 ... so it'd be good to have a process for adding features
18:55:11 q?
18:55:14 ... since there may be a long tail of capabilities
18:55:34 ... that aren't implemented across the board
18:57:03 yonet: lgombos and marcosc are points of contact for the repo
18:57:52 marcosc: please file issues in the repo in case we didn't cover something
18:58:35 scribenick: cabanier
18:58:50 https://github.com/immersive-web/model-element/issues/18
18:59:05 bajones: this ties into the earlier discussion about consistency
18:59:25 ... which format the model tag chooses to support
18:59:34 ... so we should discuss this at length
18:59:41 q+
18:59:57 ... earlier we talked about matching the video element by having multiple src tags
19:00:02 zakim, open the queue
19:00:02 ok, cwilso, the speaker queue is open
19:00:11 ... I think that model was widely seen as a mistake
19:00:12 q+
19:00:39 ... browsers like Firefox ended up broken because they didn't decode all formats
19:01:05 q+ to read back Domenic's comment
19:01:10 ... I'm worried that there's a fair number of people that will choose their platform's format of choice and leave out the ones of other browsers
19:01:22 ... I think this is the most important choice
19:01:33 ... I know that Apple prefers USDZ
19:01:41 ... Google prefers glTF
19:01:51 ... we like the fact that it is well standardized
19:02:03 ... it's proven to be easy to render in JavaScript and native
19:02:19 ... and there's concern that USDZ isn't standardized at the same level
19:02:37 ... the standard is USDZ = USD in a zip
19:02:53 q?
19:02:56 ... USD is a black box so I don't think this is an appropriate format
19:02:57 ack marcosc
19:03:08 marcosc: thanks Brandon
19:03:30 ... from the WebKit/Apple side, we'd like the other vendors to have strong opinions
19:03:30 q+
19:03:47 ... so if you're another vendor, please voice your preference
19:03:59 ... as for video, we support various formats
19:04:12 ... but if we can agree on a format, that is great
19:04:25 ... but it shouldn't preclude different experimentation
19:04:36 ... maybe there's a future format which is fantastic
19:04:55 ... the advantage of the src option is to allow media queries
19:05:09 ... it's well suited for various environments
19:05:26 ... the picture and video elements are used in the same way
19:05:41 ... this is why we went with that model despite its pains
19:06:04 ... Apple thinks USDZ is a good format but if everyone disagrees, we might need to revisit
19:06:09 q?
19:06:12 ack cwilso
19:06:12 cwilso, you wanted to read back Domenic's comment
19:06:19 cwilso: I have 2 things
19:06:39 ... Domenic mentions the video element and requiring royalty-free formats
19:06:57 ... he suggests that there's a minimum bar for the format that is picked
19:07:05 present-
19:07:13 present+
19:07:25 ... I don't know if we can even do that. Having an open specification is of the utmost importance
19:07:26 q+
19:07:34 marcosc: I agree
19:08:05 q+
19:08:21 q?
19:08:25 ack lgombos
19:08:37 lgombos: marcos asked for feedback, Samsung prefers glTF
19:08:50 ... for interop, we already discussed it quite a bit
19:09:12 ... most of it is in the content itself which is done in another group
19:09:40 q?
19:09:42 ack bajones
19:09:45 ... so if we decide what the baseline and format is, compatibility and standardization is most important
19:09:55 ack bajones
19:09:59 bajones: Marcos brought up media queries
19:10:17 ... it is not that multiple sources isn't the way to go
19:10:58 ... that use case of media queries should be supported
19:11:08 ... but that shouldn't extend to different formats
19:11:11 q/
19:11:12 q?
19:12:15 Emmett: (???) you might have the data at Apple
19:12:27 ... but we have a converter from glTF to USDZ
19:12:34 ... it's very difficult
19:12:49 ... not that many people create USDA files
19:13:06 ack Leonard
19:13:17 ... so if you have a metric of how many people use that format, you will know how many people use model-viewer
19:13:41 Leonard: glTF supports many things
19:13:48 ... lately gtx was added
19:13:50 Q+
19:13:54 ack bajones
19:14:04 bajones: this is more about consistency
19:14:22 ... USDZ and glTF both have extension methods
19:14:42 ... and not all extensions need to be supported by a renderer
19:14:50 ... and might not even make sense
19:15:18 ... we need to make a consideration so users can know what features are supported by the browsers
19:15:24 ... we need to offer user control
19:15:50 q?
19:15:56 q+
19:16:04 ... maybe you have a model that has all the latest features, but maybe one UA doesn't support it in which case the author should be able to disable it
19:16:21 Emmett: I'm unsure if anyone has talked to NVIDIA
19:16:35 ... they seem very interested in the web and 3D formats
19:17:05 ... they are using USD as the scene formatting stuff and glTF for the format (?)
19:17:07 ack marcosc
19:17:13 marcosc: I did read that as well
19:17:31 https://developer.nvidia.com/blog/universal-scene-description-as-the-language-of-the-metaverse/
19:17:47 ... it's a bit buzz-worthy but I agree that it's pretty cool what they are doing
19:18:17 ... to Leonard's point about rendering consistency, we've done a good job and it is getting better
19:18:26 ... we will figure this out as we go along
19:18:42 ... there are better use cases, and the format provides rendering hints.
19:18:48 q?
19:19:19 https://www.khronos.org/assets/uploads/developers/presentations/Metaverse_and_the_Future_of_glTF_-_All_Slides.pdf
19:19:31 bajones: this is the NVIDIA push
19:19:48 ... worth mentioning that Khronos is doing a similar effort
19:20:03 ... it's a collection of scenes and just as buzz-worthy
19:21:53 igarashi_ has joined #immersive-web
19:22:07 present-
19:45:07 jacobcrypusa has joined #immersive-web
20:11:08 yonet has joined #immersive-web
20:15:42 lgombos has joined #immersive-web
20:21:45 scribe Leonard
20:21:56 q?
20:22:00 bajones has joined #Immersive-Web
20:22:19 agendum: https://github.com/immersive-web/model-element/issues/56
20:22:35 bialpio has joined #immersive-web
20:22:41 present+
20:23:11 Ada: Thinks CORS should be required.
20:23:25 zakim, open the queue
20:23:26 ok, ada, the speaker queue is open
20:23:34 Rik?: should just be like
20:23:37 q+
20:24:04 q+
20:24:15 present+
20:24:16 ack ada
20:24:18 Ada: If we limit polyfills to only JS, then it imposes a circular limit
20:24:37 ack bialpio
20:24:52 marcosc has joined #immersive-web
20:24:52 present+
20:24:53 present+
20:24:58 present+
20:25:02 q+
20:25:13 Piotr: Easier to relax a requirement than to add it later. Propose to initially polyfill with the requirement, then reduce it later
20:25:28 q+
20:25:39 Ada: Could this be a non-normative requirement?
20:25:48 Note: It === CORS
20:26:03 ???: Hard to do non-normative security requirements
20:26:06 q-
20:26:08 ack marcosc
20:26:54 q+
20:26:54 Marcos: Agrees with Piotr
20:27:01 Ada: Does it work on video?
20:27:14 q+
20:27:27 ack cabanier
20:27:30 Marcos: No. Video is a single source. Models are not
20:27:47 Rik: Models are not self-contained?
20:28:15 Brandon: USDZ is similar to GLB; you can pack everything into a single file, but it's not required
20:28:35 q+
20:29:08 q+
20:31:01 Marcos: Trying to reduce the attack surface by requiring confirmation that accessing a separate server is OK
20:31:40 ack klausw
20:31:52 qq+ klausw
20:32:16 ack klausw
20:32:16 klausw, you wanted to react to klausw
20:32:24 Marcos: Originating content establishes relationships with other servers
20:32:31 q-
20:33:08 q+
20:33:09 Klaus: Control access to resources to save costs, etc.
20:33:43 q+ to talk about the patchwork nature of the web
20:33:45 q+
20:33:48 ack Leonard
20:33:49 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:34:21 s/scribe Leonard/scribe: Leonard/
20:34:32 s/agendum: http/topic: http/
20:34:35 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:34:58 q?
20:35:02 ack cabanier
20:35:17 s/scribenick ada:/scribenick: ada/
20:35:32 s|https://github.com/immersive-web/layers/issues/265|topic: https://github.com/immersive-web/layers/issues/265|
20:35:53 s|https://github.com/immersive-web/layers/issues/287|topic: https://github.com/immersive-web/layers/issues/287|
20:35:55 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:36:12 s/agendum: http/topic: http/
20:36:14 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:36:24 Rik: Doesn't like the idea of preventing access
20:36:41 ... [really more than that, but it is kind-of subtle]
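To make the requirement under discussion concrete: "require CORS" would make cross-origin model fetches behave like other CORS-gated subresources. A sketch of the authoring-side shape, where a crossorigin attribute on <model> is an assumption (nothing is specified yet) and the hostnames are placeholders.

```js
// Hypothetical authoring-side view of "require CORS" for <model>: a
// cross-origin asset needs an opt-in on both ends. The crossorigin attribute
// on <model> is an assumption; hostnames are placeholders.
const el = document.createElement('model');
el.setAttribute('src', 'https://cdn.example.com/assets/chair.glb');
el.setAttribute('crossorigin', 'anonymous'); // hypothetical for <model>
document.body.append(el);
// ...and the asset host would have to respond with, e.g.:
//   Access-Control-Allow-Origin: https://shop.example
```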
20:37:10 s/agendum: http/topic: http/
20:37:11 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:37:15 present+
20:37:22 Marcos: Provides explanation of what happens.
20:38:03 q?
20:38:18 ack ada
20:38:18 ada, you wanted to talk about the patchwork nature of the web
20:38:43 Some discussion of limiting glTF to not allow secondary connections. [that would break glTF -- LD]
20:39:28 Ada: Wants feedback from the Architecture group before reaching a decision.
20:39:37 Marcos: Agrees
20:40:00 q+
20:40:01 q?
20:40:06 q-
20:40:08 ack marcosc
20:40:22 Rik: Wants to make sure it is done for good reasons.
20:40:39 Marcos: Already gave an example
20:41:30 q+
20:42:56 q?
20:43:00 ack cabanier
20:43:14 meeting: Immersive Web WG / TPAC 2022 Day1
20:43:38 agenda: https://github.com/immersive-web/administrivia/blob/main/TPAC-2022/readme.md
20:44:03 Leonard: Rik mentioned disallowing subrequests
20:44:09 s/agendaL http/agenda: http/
20:44:19 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:44:19 ... that would give you geometry and nothing else
20:44:33 ... and this would prohibit certain domains
20:44:36 q+
20:44:37 Leonard: It's possible to set the Zoom view to 'speaker' rather than 'gallery', and then pin the 'Granville' participant to get the folks in the room fullscreen
20:45:01 ada: are you saying things can be pulled from anywhere?
20:45:01 s|https://github.com/immersive-web/model-element/issues/55|topic: https://github.com/immersive-web/model-element/issues/55|
20:45:04 ack Leonard
20:45:07 ack cabanier
20:45:14 s|https://github.com/immersive-web/webxr/issues/1248|topic: https://github.com/immersive-web/webxr/issues/1248|
20:45:27 q+
20:45:44 ack bialpio
20:45:49 s|https://github.com/immersive-web/model-element/issues/18|topic: https://github.com/immersive-web/model-element/issues/18|
20:45:53 Rik: Still likes a single-file complete model
20:45:58 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:46:48 q+
20:46:51 Piotr: Concerned about excessive bandwidth usage
20:47:13 ack klausw
20:47:22 Ada: That issue has been around since the beginning of the web
20:47:40 Klaus: The HTTP Referer header already can do that
20:47:52 s|https://github.com/immersive-web/model-element/issues/55|topic: https://github.com/immersive-web/model-element/issues/55|
20:48:03 s|https://github.com/immersive-web/webxr/issues/1248|topic: https://github.com/immersive-web/webxr/issues/1248|
20:48:04 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:48:31 q?
20:49:10 s|topic: topic: https://github.com/immersive-web/webxr/issues/1248|https://github.com/immersive-web/webxr/issues/1248
20:49:13 Conclusion: Ada will take the issue to the TAG. Expects the response to be "Use CORS"
20:50:02 s|topic: topic: https://github.com/immersive-web/model-element/issues/55|https://github.com/immersive-web/model-element/issues/55
20:50:04 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
20:50:41 topic: https://github.com/immersive-web/model-element/issues/13
20:51:49 Unknown speaker: looks a lot like a media element.
20:52:08 Leonard: this is marcosc
20:52:11 ... It's just (a lot) more than 1-dimension (e.g., audio)
20:52:20 q+
20:52:54 ... Do all media elements need "controls"?
20:53:03 ... This is from Marcos
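For the "looks a lot like a media element" point scribed above, this is what a media-element-shaped surface on <model> could look like. Every property below is hypothetical - none of this is specified - and how multiple glTF animations would map onto one timeline is exactly the open question in the issue.

```js
// Purely illustrative: a media-element-style playback surface on <model>.
// Every name below is hypothetical; nothing here is specified today.
const model = document.querySelector('model');
model.loop = true;
model.currentTime = 0;   // seek, as on <video>/<audio>
await model.play();      // play the default animation track (which one?)
model.pause();
```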
20:53:14 q+
20:53:22 ack bajones
20:53:43 Will has joined #immersive-web
20:53:47 q+
20:54:38 Brandon: Noted that media elements have many controls, spec language, and related APIs in common.
20:54:58 q+
20:54:59 ... glTF can have multiple animations. How would that work?
20:55:05 Marcos: Doesn't know
20:55:23 q_
20:55:27 q?
20:56:04 ack Leonard
20:56:32 q-
20:56:36 q+
20:58:08 [Note] Marcos needs to leave the WG. Discussion might be
20:58:09 ack Will
20:59:30 ????: Media elements support multiple tracks, but not necessarily all playable at the same time
20:59:55 Ada: Points out that the text track (caption) can play with audio & video
21:00:35 q?
21:00:43 ack Brandel
21:01:26 Brandel: Looking at must-haves and not-haves.
21:02:20 ... Single animation track seems to be important
21:02:20 q?
21:04:33 https://docs.google.com/presentation/d/1typ1VnQ9uzjKK0S_-lM430i6w99e1DfQCD-NPOY5OLg/edit?usp=sharing
21:05:26 zakim, choose a victim
21:05:26 Not knowing who is chairing or who scribed recently, I propose Leonard
21:05:29 zakim, choose a victim
21:05:29 Not knowing who is chairing or who scribed recently, I propose plh
21:05:42 scribenick: cabanier
21:05:51 ada: image tracking unconference
21:05:58 ... I'm a fan of image tracking
21:06:19 ... the last time we talked about it, the consensus was that it's interesting
21:06:33 ... but different hardware platforms have different solutions
21:06:37 ... and they don't overlap
21:06:53 ... ARCore does images well but not QR codes
21:07:14 ... likewise HoloLens is good at tracking QR codes but can't track plain images
21:07:45 ... so the consensus was, if we can't ship an API across devices, should we do it at all?
21:08:09 ... the more I was thinking about it: in the case of the HW, the use cases are different
21:08:26 ... the HoloLens is tailored towards industry so QR codes make sense
21:08:35 ... while ARCore is more consumer-focused
21:08:54 ... I think they tend to support different audiences
21:09:06 q+
21:09:11 ... so it's probably not a big deal that they're different
21:09:47 ... as a developer advocate, DOM content and image tracking were the most important
21:10:08 ... one of the things that's hard to do is shared anchors
21:10:33 ... and the industry doesn't have a shared API
21:11:11 ... but with QR code and image tracking, 2 users could localize to the same space
21:11:18 q?
21:11:23 ack bajones
21:11:55 bajones: one of the things that makes this difficult is that ARKit requires an image processing step upfront
21:12:02 ... I can't find any runtime API
21:12:30 ... ARCore (??? something less complicated)
21:12:55 q+
21:13:12 ... I am concerned that image tracking requires an offline process
21:13:45 ... if we want to have image tracking, we might have to use our own algorithm
21:14:12 ... it's a concern that we can deliver images that can be consumed
21:14:36 ... ARKit wants non-repeating, nicely defined images
21:14:52 ada: do we want a pile of floats shared across the platform?
21:15:02 dom has joined #immersive-web
21:15:02 bajones: it would be a path
21:15:03 q?
21:15:07 ack klausw
21:15:08 klausw:
21:15:25 klausw: so yes, ARCore lets you upload images at runtime
21:15:35 ... it doesn't work with animations
21:15:58 ... there's a subset of images that could work
21:16:11 ... I wasn't aware of the details of ARKit
21:16:19 to clarify, you couldn't animate every Magic: The Gathering card, you are limited to 5ish images
21:16:31 ... but ada made a good point that the use cases don't overlap
21:16:44 q+
21:16:48 ... another thing that came up is that we're providing raw camera access
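The raw camera access avenue klausw mentions already has an incubation in Chromium: roughly, the camera image is exposed as a GL texture per view, behind an explicit 'camera-access' feature and permission. Sketch only - `gl` and `xrRefSpace` are assumed to exist, render-state setup is omitted, and the incubation's details may change.

```js
// Rough shape of the raw camera access incubation: per-view camera images as
// GL textures, behind the 'camera-access' feature/permission.
// (Base layer / render state setup omitted for brevity.)
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['camera-access'],
});
const glBinding = new XRWebGLBinding(session, gl);

function onFrame(time, frame) {
  session.requestAnimationFrame(onFrame);
  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;
  for (const view of pose.views) {
    if (view.camera) {
      const cameraTexture = glBinding.getCameraImage(view.camera);
      // Run custom image/marker detection against this texture: the
      // "do it in userspace" path discussed above.
    }
  }
}
session.requestAnimationFrame(onFrame);
```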
21:16:51 q+
21:16:53 ack q+
21:16:57 ack ada
21:17:01 ... so that could be an avenue
21:17:21 ... it's a weird API if it has unpredictable results
21:17:40 ada: raw camera access might give us a solution here
21:17:52 ... for instance three.js might just build it in
21:18:06 q+
21:18:13 ... users shouldn't have to give up the farm for a basic feature
21:18:16 ack bialpio
21:18:37 bialpio: the common use case from Nick was to detect images on curved surfaces
21:18:47 ... this is not something we want to standardize
21:18:55 ... so raw camera access might be needed
21:19:17 ... the point is that it would be awesome to have image tracking across platforms
21:19:31 ... even with that being available, that might not be enough
21:19:44 ... should we extend the API to account for these use cases
21:20:04 ... so the simple API does something basic but more advanced cases use raw camera access
21:20:08 q?
21:20:10 ack klausw
21:20:41 klausw: if someone goes far enough to set up a physical object
21:20:51 q+
21:21:06 ... raw camera access isn't a big barrier
21:21:08 ack ada
21:21:23 ada: I understand where Nick's example comes from
21:21:49 ... but we don't want WebXR to always ask for camera access, so that people just always give it out
21:22:16 ... it's good that people stay cautious
21:22:20 q?
21:22:24 q+
21:22:29 ack klausw
21:22:43 klausw: we do have an implementation in Chrome of the draft spec
21:22:53 ... and it's ready to go if this is what people want
21:23:20 ... are people OK with making this a standard?
21:23:28 ... or should it be completely different
21:23:46 ada: I'd love to go forward with it
21:24:13 ... as bajones said, people may encounter problems based on the limitations of ARKit
21:24:16 q?
21:24:40 yonet: do we need another contact for marker tracking?
21:24:59 ada: does anyone else want to be a contact?
21:25:08 (Rik Cabanier) volunteers
21:29:56 WebRTC meeting Zoom information is here: https://www.w3.org/events/meetings/bc28a876-4512-488e-95ba-99c91d8c4d49
21:54:30 I have made the request to generate https://www.w3.org/2022/09/15-immersive-web-minutes.html atsushi
22:19:33 dom has joined #immersive-web
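For reference, the image-tracking draft klausw mentions (implemented behind a flag in Chromium) has roughly the following shape; names and details may change if the group adopts it. `xrRefSpace` and the marker <img> element are assumed to exist, and render-state setup is omitted.

```js
// Rough shape of the image-tracking draft (Chromium, behind a flag).
// (Base layer / render state setup omitted for brevity.)
const markerBitmap = await createImageBitmap(document.querySelector('#marker'));
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['image-tracking'],
  trackedImages: [{ image: markerBitmap, widthInMeters: 0.2 }],
});

function onFrame(time, frame) {
  session.requestAnimationFrame(onFrame);
  for (const result of frame.getImageTrackingResults()) {
    if (result.trackingState === 'tracked') {
      const pose = frame.getPose(result.imageSpace, xrRefSpace);
      // result.index maps back into the trackedImages array above.
    }
  }
}
session.requestAnimationFrame(onFrame);
```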