W3C

– DRAFT –
Immersive Web WG/CG call

31 October 2023

Attendees

Present
alcooper, bajones, bialpio, cabanier, Laszlo_Gombos, Leonard, yonet
Regrets
-
Chair
yonet
Scribe
lgombos

Meeting minutes

immersive-web/depth-sensing#37

cabanier: depth sensing, working on exposing the depth sensor.. wondering if this attribute is still needed

bialpio: in our implementation ARCore does not care about device orientation, so a mapping is needed between the coordinate systems; this is the source of this attribute

cabanier: with a landscape texture, if you turn the phone, how would that look?

bialpio: need to take a look at the spec or the code.. Chromium's getDepth method has some description

bialpio: XRView coordinates.. multiply with a matrix, and get the depth buffer (in meters)

cabanier: in our case the depth API returns fov and views.. we try to build it on top of what the OpenXR API returns

bialpio: when you have a depth texture, we do not specify the units
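
For context, a minimal sketch (TypeScript-ish, assuming a "cpu-optimized" depth session with "luminance-alpha" uint16 data) of what the attribute under discussion does. getDepthInMeters(x, y) performs this internally; the manual path shows where normDepthBufferFromNormView and rawValueToMeters come in:

  function depthAtNormViewCoords(frame: XRFrame, view: XRView,
                                 xNorm: number, yNorm: number): number | null {
    const info = frame.getDepthInformation(view); // XRCPUDepthInformation
    if (!info) return null;
    // Map normalized view coordinates to normalized depth-buffer
    // coordinates; this transform is the attribute discussed above.
    const m = info.normDepthBufferFromNormView.matrix; // column-major 4x4
    const u = m[0] * xNorm + m[4] * yNorm + m[12];
    const v = m[1] * xNorm + m[5] * yNorm + m[13];
    const col = Math.min(info.width - 1, Math.floor(u * info.width));
    const row = Math.min(info.height - 1, Math.floor(v * info.height));
    const raw = new Uint16Array(info.data)[row * info.width + col];
    return raw * info.rawValueToMeters; // depth in meters
  }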

immersive-web/depth-sensing#43

cabanier: when you get the information from the system, you get a swapchain and depth near and far values

cabanier: .. depending on how you do it, the pixels might not match

bialpio: even fov is different

bialpio: we have rawValueToMeters..

cabanier: for us it is always 1

cabanier: when you create projection matrices, you use the fov and depthNear and depthFar.. there are default values that the experience can replace

bialpio: z values are key..

cabanier: the experience should use the same z values as the system

cabanier: I think there are two approaches: one, we return it as a new attribute, or two, we completely ignore the session values

bialpio: let's continue the discussion on the bug

bialpio: I thought the resolution of z values could be encoded into rawValueToMeters
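
For context, a minimal sketch of keeping the page's z range in line with the system, per the discussion above. depthNear/depthFar default to 0.1 and 1000.0; the systemNear/systemFar values here are placeholders for whatever the platform's depth swapchain actually uses (how the page would learn those values is exactly the open question in this issue):

  // Placeholder values; issue #43 is about how the page would obtain these.
  const systemNear = 0.1;
  const systemFar = 1000.0;
  session.updateRenderState({ depthNear: systemNear, depthFar: systemFar });
  // On subsequent frames, view.projectionMatrix is built from the view's
  // fov together with these depthNear/depthFar values.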

immersive-web/real-world-geometry#40

cabanier: several web developers have run into this.. there is a method to recapture the planes

cabanier: it takes a few seconds to warm up.. how long is long enough? there should be a signal for "for sure no meshes"

cabanier: what can we do here? maybe a new event?

bajones: feels like an event.. currently you expect to poll?

cabanier: yes
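
For context, a minimal sketch of the polling pattern being described (assuming "plane-detection" was requested at session creation); the page checks every animation frame and has no way to tell "not yet" from "never":

  let sawPlanes = false;
  function onXRFrame(time: number, frame: XRFrame) {
    if (frame.detectedPlanes && frame.detectedPlanes.size > 0) {
      sawPlanes = true; // planes showed up after some warm-up period
    }
    // ...render, then keep polling...
    frame.session.requestAnimationFrame(onXRFrame);
  }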

bajones: I wonder what the event should be for? maybe it should be some kind of "update finished" event

bajones: in some systems these surfaces are precomputed at the time the WebXR session is created

bajones: in other systems, surfaces are continuously discovered.. need to consider different platforms

cabanier: I also want to make sure we support several systems, including Android

cabanier: if there is an event, then we do not need to look at every frame

bialpio: are we changing the API to be event-based (this was considered in the past)?

bialpio: at least for this particular problem, maybe there should be an opt-in to subscribe to this event

bajones: on Android scanning never really completes

bajones: concerned about the portability of WebXR content if we introduce this optional event-based API

cabanier: if it is an event, then content would wait for it to complete

<dylan> Have to drop here, but FYI XR Access is doing a community discussion on Captioning on Nov 15th. Should have some interesting problem solving around 3D positional metadata for 360 degree captions. https://xraccess.org/community-discussion-captions/

<dylan> Thanks all, feel free to reach out anytime on accessibility-related issues - dylan@xraccess.org

bajones: is it reasonable if, e.g. on the Meta platform, the content automatically opts in to this event?

cabanier: not really

cabanier: proposal: "needs room setup" event

cabanier: not sure how multi-room would work in WebXR

bajones: if it is really just "needs room setup" then we could consider an attribute instead of an event

bajones: event is still also fine

bialpio: bikeshed: "initial room capture" or "room capture needed"

bialpio: is the intent to fire only when OpenXR has finished scanning..

cabanier: no..

bialpio: can you arbitrarily start a new capture at any time?

cabanier: yes

bialpio: in the spec "initiate room capture" can only be run once per WebXR session

bajones: maybe the event is stated as "room capture is newly available"..

cabanier: if we want to add multi-room support, we will need to update the current spec anyway

bajones: if the method is "initialize room capture".. maybe the event is "room capture uninitialized"

cabanier: we should try to avoid a website continuously calling this..
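
For context, a hypothetical sketch of the opt-in event shape discussed above; the event name "roomcaptureneeded" is not specified anywhere, while initiateRoomCapture() is in the current plane-detection proposal and may be called at most once per session:

  session.addEventListener('roomcaptureneeded', async () => {
    // Hypothetical event: the UA signals that no room capture data exists.
    await session.initiateRoomCapture(); // once per session, per the spec
  });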

immersive-web/layers#304

cabanier: when you ask for multiple textures we only support left and right.. so it only works on devices with 2 screens

bajones: the algorithm itself is just for 2 views

bajones: we want to expand the spec for multiple views per eye (not just 1)

cabanier: the layers spec does not mention left/right eye for secondary views

cabanier: I do not think anybody implemented secondary views

bajones: we put it in the spec as we knew hardware was coming

cabanier: for sure we need to improve the spec and need a proposal to review

cabanier: the Pico headset implemented WebXR layers.. Meta is not the only implementation
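
For context, a minimal sketch of the two-view case the layers spec currently covers: a projection layer backed by a texture array with one slice per view. More than one view per eye (secondary views) is exactly what the spec does not yet describe, per the discussion above:

  const binding = new XRWebGLBinding(session, gl);
  const layer = binding.createProjectionLayer({ textureType: 'texture-array' });
  session.updateRenderState({ layers: [layer] });
  // Per frame, for each view in pose.views (currently left and right):
  //   const sub = binding.getViewSubImage(layer, view);
  //   gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
  //                              sub.colorTexture, 0, sub.imageIndex);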

<yonet> https://xraccess.org/community-discussion-captions/

yonet: if there are WG issues, we will schedule a meeting in 2 weeks..

<yonet> Thank you lgombos

<yonet> Thank you Atsushi

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

Diagnostics

All speakers: bajones, bialpio, cabanier, yonet

Active on IRC: alcooper, atsushi, bajones, bialpio, cabanier, dylan, Leonard, lgombos, yonet