Meeting minutes
<yonet> Agenda: meetings/2024/2024-01-23-Immersive_Web_Working_Group_Teleconference-agenda.md
<bkardell_> uhm
<bkardell_> my calendar somehow had this at 3ET, but the agenda says 2... so... I missed it, I guess?
yonet: welcome to 2024! this is the last 6 months of our current charter, so we'll have to recharter soon
<bkardell_> me present+
yonet: (notes we're on Zoom now, will revise mailing list to not point to webex)
Body tracking API (proposals #87)
<yonet> github issue
Rik: we've often heard developer requests for body tracking. We don't have dedicated cameras to track the body, so it's mostly educated guesses, but you can emulate it.
… so I exposed the existing OpenXR extension in WebXR. I'd like to see what people think; I'd like to ship experimentally to let developers try it out. Would like to know if other headsets/hardware have their use cases filled by this.
… what does everyone think? Are there concerns?
… we use universal emulation (e.g. generic hands rather than the user's real hands) to avoid giving out PII
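A hypothetical sketch of what consuming such an API could look like, loosely modeled on the shipped WebXR Hand Input API; the 'body-tracking' feature string, frame.body, and the joint iteration are illustrative assumptions, not the actual shape of proposal #87:

```js
// Hypothetical sketch only: 'body-tracking', frame.body, and the joint map
// are assumptions modeled on WebXR Hand Input, not the actual proposal.
const session = await navigator.xr.requestSession('immersive-vr', {
  optionalFeatures: ['body-tracking'],
});
const refSpace = await session.requestReferenceSpace('local');

session.requestAnimationFrame(function onFrame(time, frame) {
  const body = frame.body; // hypothetical: a map of joint name -> XRSpace
  if (body) {
    for (const [jointName, jointSpace] of body) {
      const pose = frame.getPose(jointSpace, refSpace);
      // Per the discussion, poses may be emulated/rounded generic data
      // rather than raw sensor data, to avoid exposing PII.
      if (pose) console.log(jointName, pose.transform.position);
    }
  }
  frame.session.requestAnimationFrame(onFrame);
});
```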
Blair: any thought to supporting devices that return less info? Are you expected to return all info?
Rik: everyone has their own extensions, e.g. for hands; in our case, they track the upper body
alcooper: I'm a little concerned that your device might be the only one we know of that implements it; the published OpenXR extension doesn't have some of these bones (like leg and foot bones)
Rik: yes; I asked internally and they were supposed to be released, but perhaps we were lagging behind in Khronos
alcooper: the other concern is a privacy one.
Rik: yes. If there were an implementation that returned real data, we would have to use rounding to make sure we're not giving out uniquely identifying user data
… and there will be a permission prompt
yonet: that's a separate permission prompt?
Rik: it's already in place. All our documentation puts hand and body together, so the permission prompt already covers hand and body as one target. You'll just get one prompt
<leonard> 1+
yonet: you might not want to give body permission but give hand permission - e.g. a user in a wheelchair
Rik: it's not very precise today (it already renders the body as if you're standing even when you're seated)
bkardell: that feels like making a decision based on the limits of today's hardware. Taking the long view of the feature, it seems like it should be separate
Rik: hand tracking is more sensitive?
bkardell: no, if you have the ability to do perfect body tracking, you might want to enable hand input but not give permission to track the body
+1
Rik: it's still there in the specification. You still have to ask for hand permission to get the hand data
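That split maps onto WebXR's per-feature consent model; a minimal sketch, where 'hand-tracking' is the shipped feature descriptor and 'body-tracking' is a hypothetical stand-in for this proposal:

```js
// 'hand-tracking' is a real WebXR feature descriptor; 'body-tracking' is a
// hypothetical one for this proposal. Listing both as optional lets the UA
// grant hands without body (e.g. the wheelchair scenario above) and the
// session still starts with whatever was granted.
const session = await navigator.xr.requestSession('immersive-vr', {
  optionalFeatures: ['hand-tracking', 'body-tracking'],
});

// XRSession.enabledFeatures reports what was actually granted.
if (session.enabledFeatures?.includes('hand-tracking')) {
  // hand joints are available via inputSource.hand (WebXR Hand Input)
}
```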
yonet: any other concerns?
Rik: can we move this into the CG?
yonet: any objection?
yonet: okay, I'll move it in.
yonet: moving on: immersive-web/
ready and complete events/Promises for responding to source file availability etc (model-element #75)
brandel: I'm already building models procedurally, but MVP v1 will need to involve downloading a byte stream. Will need to know when the bytes are ready, when the parsing is done.
… promises and listeners, etc.
… just wanted to put this forward and see what others thought
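A sketch of the kind of surface under discussion; the model element is still a proposal, and the 'ready' promise and event names here are illustrative assumptions, mirroring how img/video elements signal their lifecycle:

```js
// Illustrative only: <model>, its 'ready' promise, and these event names
// are assumptions for discussion, not a settled API.
const model = document.querySelector('model');

// Listener style, as with <img>/<video> today:
model.addEventListener('load', () => {
  // bytes fetched and the asset parsed; safe to start interacting
});
model.addEventListener('error', () => {
  // network failure or unparseable asset
});

// Promise style, mirroring img.decode():
try {
  await model.ready; // hypothetical promise resolving once renderable
} catch (err) {
  // rejected if the fetch or the parse failed
}
```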
alcooper: this wasn't on the agenda sent in email
yonet: late addition by ada
brandel: wanted to raise it, but we can delay discussion if you'd prefer
alcooper: feedback to chairs: please do lock the agenda when sent out in email
leonard: what happens if the source is streaming, or the model is very large and can be [partially?] displayed before completely downloaded?
brandel: conceptually, we've only been looking at relatively small, self-contained static assets.
… this is intended for MVP
brandon: I do have general concerns about making sure things stay open to the possibility of streaming. But to Brandel's question "are promises the right mechanism for this", pretty definitively yes. But if you have a streaming system, you may need multiple progress updates.
brandel: I don't want to exclude streaming, but do want to get MVP rolling
brandon: I do agree that the proposal here doesn't prevent expansion to streaming
leonard: if the asset includes inherently large assets - video, audio - may put more urgency on solving streaming
brandel: one of the benefits of GLB (glTF) and USDZ is that they can be monolithic. We don't want to prevent more sophisticated formats/scenarios in the future, but we want to get an MVP going.
Laszlo: you named two events. I understand the "ready" one. What do you think is the use case for observing the networking?
brandel: I haven't checked whether progress events are already shipping for this, but I was thinking about it the same way as image loading today.
… it means you know there is nothing network-related that could cause the thing to fail.
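For reference, the image-loading lifecycle brandel is analogizing to does ship today:

```js
// Today's <img> lifecycle: once 'load' fires, nothing network-related
// can cause the asset to fail anymore.
const img = new Image();
img.addEventListener('load', () => console.log('fetched successfully'));
img.addEventListener('error', () => console.log('fetch or decode failed'));
img.src = 'asset.png';

// img.decode() resolves once the image is decoded and ready to paint,
// not merely downloaded.
img.decode().then(() => document.body.append(img));
```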
Rik: in the past we thought perhaps we don't need a model element and could just use image/video. Could the model element derive from image or video?
<lgombos> immersive-web/
brandel: conceptually, yes. I'm not sure whether that would be literal or just conceptual.
Rik: don't you already get these events then?
brandel: not sure
<Zakim> bajones, you wanted to ask about difference between first renderable and fully loaded (ex: loaded with/without animations)
brandon: any thought given to first-renderable vs complete?
… you have "completed downloading the byte stream" and "ready", but what does first-renderable entail?
… does this include textures, animations?
brandel: in the instance we're conceptually targeting, it's monolithic. Everything has been downloaded. Rasters used for texture maps, etc.
… I don't know if we have a view of the lifecycle of a model that isn't monolithic yet
brandon: to clarify, two different processes here: downloading/fetching, and decoding. Decoding might take a significant amount of time. This is just complete downloading?
… should there be a distinction to those stages?
… I think you're considering today that is monolithic?
brandel: yes, right now I think that's correct
brandon: my only immediate feedback is that the end state should probably be named something that clearly communicates "I have nothing more to do"; "ready" can mean any one of the intermediate states.
brandel: that's fair.
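brandon's fetch-vs-decode split can be made concrete with plain platform calls; a minimal sketch, where parseModel() is a hypothetical stand-in for whatever the engine's decoder is:

```js
// Stage 1: downloading. The byte stream completes here; after this point
// nothing network-related can fail.
const response = await fetch('scene.glb');
const bytes = await response.arrayBuffer();

// Stage 2: decoding. For large assets this can take significant time,
// which is why the final state deserves an unambiguous "nothing more to
// do" name rather than "ready".
const scene = await parseModel(bytes); // hypothetical decoder stand-in
```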
joe_pea: is there a plan to make the model element more declarative?
… for example, manipulating the camera
brandel: off the top of my head: I understand the motivation, but will need to talk to folks about the benefits.
joe_pea: happy to provide feedback
leonard: is the intent of the model element to download everything and then process it, or to do progressive downloads, e.g. the way a PDF works?
brandel: the asset types we've been looking at have been monolithic. We might seek alternatives in the future
… the intent is not to be exclusive
AOB
yonet: thanks all!
… we'll get the agendas locked 24 hours ahead in the emailed agenda
<yonet> https://