W3C

- DRAFT -

Immersive Web Working Group Teleconference

28 Apr 2020

Agenda

https://github.com/immersive-web/administrivia/blob/master/meetings/wg/2020-04-28-Immersive_Web_Working_Group_Teleconference-agenda.md

Attendees

Present
trevorfsmith, Kip, Leonard, cwilso, ada, dino, alexturn, bajones, cabanier, manish, Manishearth, Yonet, rad, radians, mounir, cwervo
Regrets
Chair
Chris Wilson
Scribe
cabanier

Contents


<atsushi> Meeting: Immersive Web WG

1. administrivia#91 Move hit-testing and dom-overlay to WG

<cwilso> https://github.com/immersive-web/administrivia/issues/91

<scribe> scribenick: cabanier

cwilso: does anyone have an objection to pulling those into the WG?
... ok. I hear no objections. I will send out an official email to the list

hit-test#86 Hit test results are fundamentally rays, not poses; figure out the best path forward here

<cwilso> https://github.com/immersive-web/hit-test/issues/86

cwilso: manish do you want to cover?

Manishearth: the way it works now, you bounce a ray off the plane and the result is a pose whose y-axis points along the normal
... and this is ambiguous, because what we want is a ray
... not a transformation from one vector to another
... the most pressing thing: exposing the hit test result as a transform is not the best solution
... a position and orientation is better
... however this API has shipped
... one option is to do nothing
... another one is to expose both
... so we would also give an XRHitTestPose
... I think Blair and Peter have some thoughts

alexturn: it's interesting because we had similar issues in the openxr group
... we had different behavior wrt pointing
... but there's alignment with other questions
... what does up mean? maybe there's a gravity aligned up
... could be tricky to write down

piotr: the hit test is supposed to be used for object placement
... so the current api is well suited for this
... right now you get the matrix which you need for webgl
... we give you the same data but just in matrix form
... we specify that the y-axis is the plane normal
... so exposing both seems like a helper function because you get both anyway
... model-viewer already adopted the hit test api and we didn't get any feedback about problems with this api
... so even if we expose position and normal, it might not get used
... it would be great to have feedback from people consuming this API
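Bialpio's point is that a ray is already derivable from the shipped transform: the y-axis basis vector is specified to be the plane normal, and the translation is the hit point. A minimal sketch of that derivation, assuming the column-major 4x4 layout that XRRigidTransform.matrix uses (the helper name is ours, not part of any spec):

```typescript
// Derive the hit point and surface normal from a hit-test pose matrix.
// `m` is a 16-element column-major 4x4 matrix (as XRRigidTransform.matrix
// exposes): column 1 (elements 4..6) is the y-axis, which the hit-test
// spec defines as the plane normal; column 3 (elements 12..14) is the
// translation, i.e. the hit point.
type Vec3 = [number, number, number];

function hitRayFromPose(m: Float32Array | number[]): { origin: Vec3; normal: Vec3 } {
  return {
    origin: [m[12], m[13], m[14]], // translation column
    normal: [m[4], m[5], m[6]],    // y-axis column
  };
}

// Example: identity rotation, hit point at (1, 2, 3) on a floor plane.
const pose = [
  1, 0, 0, 0, // x-axis
  0, 1, 0, 0, // y-axis (normal)
  0, 0, 1, 0, // z-axis
  1, 2, 3, 1, // translation
];
const ray = hitRayFromPose(pose);
// ray.origin is [1, 2, 3], ray.normal is [0, 1, 0]
```

This is the sense in which a separate position-plus-normal accessor would be "a helper function": the information is already in the matrix.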

<bajones> Come back to me

Manishearth: I understand what you're saying

<ravi> if anything, here is another data point from Magicleap. Our hittest apis do return just position and normal vector at the intersecting plane

Manishearth: the problem is that there is more information with the transform
... if we expose vectors differently, different UAs might show things differently
... because we expose more information than we need, we will have differences
... so it's better to expose the fundamental
... which is more determined

<bajones> I should be back now?

bialpio: y-axis is supposed to be plane normal
... x is to the right
... so yes, it's overspecified with this approach

Manishearth: this affects apps because you picked something so other people will have things rotated
... so unless we pick something that makes sense, ...
... it feels better that we expose it as a ray
... because there won't be any difference across browsers

bialpio: so this will have different results across implementations?
... I want applications to use it as is, if they don't care
... they can adjust the coordinate system any way they want
... I did recognize it as an issue in the spec

bajones: it sounds like people came around
... the fact that it's a transform, the z-axis points back to the viewer
... which is really nice default behavior (ie the xr dinosaurs app)
... during the placement phase things would face you
... all the hit testing stuff in native does the same
... maybe we should specify if z always points towards you
... there's an issue that x and z coincide (ie facing a wall)
... because we already are shipping with a transform, maybe we just need to specify how they work

alexturn: I'm torn. we haven't shipped the hit test API so I don't have a lot of experience to draw from
... it feels useful that things face the user
... so there should be a way to get a pose
... things might work better if it's on the floor than on the wall

<bajones> Maybe the wall placement should always align to gravity? Sounds kinda ugly to special case, but... :P

alexturn: if it's on the wall, there's a singularity issue. Also what if I'm above or below, it could flip
... I don't know how to fix that
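The singularity alexturn describes can be made concrete: if an implementation builds the tangent basis by crossing world-up with the normal, the construction degenerates exactly when the normal is parallel to up (floors and ceilings), where any tangent direction is equally valid and can flip as the viewer moves. A hedged sketch of that degenerate case (function names are ours, purely illustrative):

```typescript
type V3 = [number, number, number];

const cross = (a: V3, b: V3): V3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const len = (a: V3): number => Math.hypot(a[0], a[1], a[2]);

// Attempt to build a gravity-aligned tangent direction around a surface
// normal by crossing world-up with the normal. Returns null when the
// construction is singular (normal parallel to up), i.e. exactly the
// floor/ceiling case from the discussion; on a wall it is well defined.
function gravityAlignedX(normal: V3, up: V3 = [0, 1, 0]): V3 | null {
  const x = cross(up, normal);
  const l = len(x);
  if (l < 1e-6) return null; // any tangent direction is equally "valid"
  return [x[0] / l, x[1] / l, x[2] / l];
}
```

On a wall (normal [0, 0, 1]) this yields a stable x-axis; on the floor (normal [0, 1, 0]) it returns null, which is the ambiguity a spec convention would have to resolve.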

mounir: given that we have the matrix which has shipped
... we need to clarify it and maybe we should add the normal on top of that
... we shouldn't ship things based on unspecified behavior
... we did go to the model viewer folks and they wouldn't use these new calls

Manishearth: the reason I'm wary of specifying the matrix
... what reference space do you want to be relative to?
... in general it depends how you're looking, that determines how you place it

<Zakim> Kip, you wanted to ask if the orientation along the Y axis exposes meaningful information that couldn't be calculated with a helper function

Manishearth: it's hard to capture implementation intent

Kip: we should look at patterns on the web
... generally we don't have helper functions
... in addition to the ray, we have the orientation around the vector
... is there any information from the platform itself from the implementation?
... or should this come from the framework like three.js?

bajones: what we really need is the 3D coordinate and the surface normal
... if we're going to keep it, the matrix would be a convenience
... the primary bit of concern for me is backwards compatibility

alexturn: I still don't know what down is when you're on a wall
... in my mind, if I'm placing it on the floor, it can face any way
... but if it is on the wall, it should face away from it
... maybe that's why platforms let you specify it

Manishearth: this reinforces that you might need a reference space to determine what is flat
... wrt doing both, we do the same with ???
... we can overspecify so people that want it can have access to it
... creating paths of discovery is good

cwervo: we consume the API
... I agree with brandon that we can map away the problem
... but I think Manish is right: exposing this extra information on top of what already exists gives the most flexibility

mounir: it sounds to me ???

Manishearth: this would not help existing content but it would let new content discover it
... existing content would continue as is even if it's broken

mounir: we should specify it so it's no longer broken

Manishearth: it's hard to do this with the current specification
... finding a convention is tricky, maybe impossible

mounir: to me a convention, (???)
... we have to be practical. I don't know what we do with a vertical wall
... I don't understand the argument that it's never spec compliant

Manishearth: I think we can
... not get a good convention
... for instance people might not test on walls or ceilings and get an unexpected result

cwilso: I don't think we reached consensus on this

<cwilso> https://github.com/immersive-web/webxr/issues/943

Manishearth: let's summarize in the github issue

webxr#943 Which timestamp should be used for frame callbacks? We should pick something here

bajones: I did a bit of research
... we have a timestamp that gets passed into the raf loop
... which is a pattern across the platform
... it's underspecified in the spec but we covered it in issue 347
... and put some solid text in the explainer
... (reads from the explainer)
... the timestamp should be the same as the window raf
... it's not detailed enough for xr purposes
... so we said that there should be an additional XRTiming
... like when the frame started, dropped off, etc
... but we didn't know which ones we could expose
... and since then there hasn't been any movement
... so we should look into more detailed timing and then supplement it

alexturn: this relates to discussion with anchor
... and dynamic time indexing of spaces
... phones and tablets are capturing at a certain time
... for headsets you do predictive timing
... usually apps don't have to care
... unless you try to correlate with the real world
... to the extent that it matters, it would be input correlation
... I would go with the most useful timestamp
... which would be capture time for phone and predicted time for headsets

cwilso: we had similar issue with web audio
... one of the challenges is that we have to convert between the two
... audio time is not observable
... web midi has the same thing
... timestamps tends to be high res
... I like the idea that when you expose timings that you name them appropriately

bajones: I don't disagree with alexturn
... we need to find the timestamps that are meaningful
... so you can synchronize things
... my concern is that the timestamps will be different in different sessions
... inline = window raf, vr = predicted, phone = capture
... this is why we should specify that people get what they expect

<Zakim> Kip, you wanted to suggest that we choose one kind of timestamp that all implementations can provide (behaving the same as window.raf), then have optional attributes for more

Kip: bajones mentioned that there can be differences on different platforms
... maybe we can choose the lowest common denominator
... so default would be like a window raf
... and then there would be optional accessors with explicit naming
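One way to read the exchange above: the "natural" timestamp differs per session flavor, and Kip's proposal is a common default (window-raf semantics) plus explicitly named optional accessors. A sketch of that mapping (the type and function names are ours, purely illustrative, not a proposed API):

```typescript
type SessionFlavor = "inline" | "headset-vr" | "handheld-ar";

// Hypothetical mapping of session flavor to the clock its raf timestamp
// would naturally reflect, per the discussion: inline matches
// window.requestAnimationFrame, headsets use predicted display time,
// handheld AR uses the camera frame's capture time.
function naturalTimestampKind(flavor: SessionFlavor): string {
  switch (flavor) {
    case "inline":
      return "window-raf";
    case "headset-vr":
      return "predicted-display";
    case "handheld-ar":
      return "camera-capture";
  }
}
```

Under Kip's proposal the default timestamp would behave like "window-raf" everywhere, and the other two would surface only through explicitly named optional attributes.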

cwilso: maybe the issue can have a bit more details and we need to update the spec

bajones: yes. The best path forward is to restate what was said
... unless there's a very strong argument, I will put together a PR
... and then open a new issue that we can redirect into
... or reuse the older one
... I would love to have better timing as long as we can guarantee that it's useful

cwilso: are there any other issues?

Manishearth: I managed to get hand tracking to work experimentally

<bajones> :thumbsupemojiformanish:

Manishearth: so we're making progress there

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2020/04/28 20:00:26 $
