W3C

Device and Sensors Working Group Teleconference

19 Sep 2016

See also: IRC log

Attendees

Present
(Gloria), Anssi_Kostiainen, Dominique_Hazael-Massieux, Dong, Ji, Kenneth_Christiansen, Koichi_Takagi, Mikhail_Pozdnyakov, Tobie_Langel, Tomoyuki_Shimizu, Zoltan_Kis
Regrets
Chair
Anssi
Scribe
zkis

Contents


Introductions

<anssik> Dom: staff contact for DASWG, also for the WebRTC WG

<kenneth_> Works at Intel, worked on web platform since around 2006. Growing interest around IoT and sensors on mobile devices makes one standardized API more important, which is why I am here

<anssik> Anssi Kostiainen from Intel, spec editor (sensors recently) and acting chair for this meeting, other interests e.g. Second Screen, WebVR

<Maryammjd> present+ Maryam_Mehrnezhad

<shalamov> Alexander Shalamov from Intel, implementing sensors based on Generic Sensor API specification for chromium

<zkis> Zoltan Kis @ Intel, I design/implement node-style IoT Web APIs for constrained runtimes. Interested in API consolidation with browser APIs. Also, editor for Web NFC, member of Web Bluetooth and Web of Things.

<anssik> Ningxin: introduction, working for Intel, from Shanghai, working on Crosswalk Project, Depth Camera, interested in SIMD.js

<scribe> ScribeNick: dom

Agenda review

Anssi: Today, generic sensors
... tomorrow, looking at concrete sensors
... with demos of implementations
... Concrete sensors come with specific needs on privacy & security for instance
... tomorrow afternoon we will look at the other deliverables of the group, e.g. the Vibration API
... with possible new needs for vibration around VR
... Then battery status, currently in CR (back from PR due to privacy concerns)
... finally wake lock — we can look at the TAG feedback
... and we'll finish with joint meetings with WoT IG, possibly Geo
... the Geo WG is not meeting this week; they have been working on the Geolocation API and the Device Orientation API
... the latter is a high level sensor fusing together gyroscope, accelerometer, and magnetometer
... We took the approach of decomposing this into lower-level APIs, which is required for advanced use cases in VR or AR

Kenneth: or even regular games
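
[Illustrative sketch, not part of the discussion: contrasting the legacy high-level deviceorientation event with the decomposed low-level sensor classes. The Gyroscope/Accelerometer constructors, the 'reading' event, and the x/y/z attributes follow later Generic Sensor drafts and are assumptions here, not API decided at this meeting.]

  // High-level, fused: one event delivers orientation angles.
  window.addEventListener('deviceorientation', (e) => {
    console.log(e.alpha, e.beta, e.gamma);
  });

  // Low-level, decomposed (names assumed per later Generic Sensor drafts).
  const gyro = new Gyroscope({ frequency: 60 });      // angular velocity
  const accel = new Accelerometer({ frequency: 60 }); // acceleration
  gyro.addEventListener('reading', () => console.log(gyro.x, gyro.y, gyro.z));
  accel.addEventListener('reading', () => console.log(accel.x, accel.y, accel.z));
  gyro.start();
  accel.start();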

Anssi: the Web Platform has historically exposed high level APIs as it is often easier from a privacy & security perspective
... but it has also limitations
... there is a whole spectrum of options among the two extremes

Kenneth: also, high level sensors are harder to get right from an interop perspective, as shown by device orientation

Anssi: I think it's a matter of finding the common ground
... if you go too low, the API ergonomics can be difficult
... From KDDI, what's your perspective on the level of abstraction on the GPIO API?

Zoltan: from a constrained environment perspective, I have had to adapt the API a bit

<inserted> Koichi_Takagi

<kotakagi> Koichi Takagi@KDDI/CHIRIMEN Open Hardware Community, investigating low level APIs such as WebGPIO, WebI2C and so on. The software (and hardware) was released as open source last April on https://chirimen.org/ . Please review the API on https://github.com/browserobo/

Riju: Android will preview dynamic sensors with a connected / disconnected state

Maryam: what do we consider as a sensor and what we don't?
... e.g. camera/mic aren't considered sensors in this group

Anssi: very good question; there is one definition in the generic sensor document

<anssik> definition of a sensor: https://w3c.github.io/sensors/#concepts

Maryam: currently, sensors like accelerometer are categorized very differently

Anssi: the camera API is clearly different, partly for historical reasons since it predates the sensor framework

Dom: the camera API has shown needs for higher level API, both for ergonomics & security policy

Maryam: still, it's confusing that there is no clear hierarchy or common model for sensors, especially as platforms nowadays integrate more and more sensors
... e.g. the Android sensor hub associates camera with other sensors

Kenneth: right now, our approach has been to advance pragmatically with what can be easily exposed and what can be achieved more quickly

<Maryammjd> https://arxiv.org/pdf/1605.05549v1.pdf

Riju: the boundary for the generic sensor is microelectromechanical (MEMS) sensors per the spec

<anssik> https://en.wikipedia.org/wiki/Microelectromechanical_systems

Maryam: the Generic Sensor API covers a lot of the movement sensors
... see last page of https://arxiv.org/pdf/1605.05549v1.pdf showing the type of sensors available in mobile devices today

Kenneth: a lot of these sensors would fit there (temperature, barometer)
... others like bluetooth or fingerprint seems very different

<riju> have a look at this intro to Generic Sensors: https://01.org/chromium/blogs/riju/2016/generic-sensor-api-javascript-powered-platforms

-> https://www.w3.org/2009/dap/ Device & Sensors WG home page

Kenneth: there is something weird about the naming of specs: in some cases we name the thing being measured, in others the thing doing the measuring

Anssi: [introducing Tobie]

Generic Sensor

Tobie: some of the long term issues brought up by the TAG (e.g. exposing sensors in workers) will likely need time
... also permission management
... I created 3 buckets for the issues so that we can ship a first version of the API at some point
... not sure at this point if we want a distinct level 1 / 2 or merge them
... especially in light of TAG feedback
... combined with implementors feedback

Dom: the guiding principle is that we need to find a consistent set of features that will ship in 2 or more browsers

Anssi: let's look at a reasonable l1 + l2 scope

Tobie: the TAG feedback is that having this on the main thread will be a massive blocker from a performance perspective

-> https://github.com/w3ctag/spec-reviews/issues/110 TAG feedback on Generic Sensor API

<anssik> https://github.com/w3ctag/spec-reviews/issues/115

-> https://github.com/w3ctag/spec-reviews/issues/115 Feedback on Ambient Light Sensor API

-> https://github.com/w3ctag/spec-reviews/issues/115#issuecomment-236365671 Alex' feedback on jank & sensor sampling

Tobie: we wouldn't want this potential performance issue to block shipping

Anssi: so exposing this to Worker seems more urgent based on that feedback

Tobie: but workers have a more complex permissioning problem

Riju: from a performance perspective, sensor reading is already done in another thread

Kenneth: the problem is not for implementations but for developers using lots of sensors; the risk is making the Web app slow

Riju: the problem is combining workers and permissions

Tobie: push uses permissions in service workers
... I think we need to look back at this and see how hard the problem space is

Riju: note that sensors only react when the page is visible
... not sure how that work with workers

Dom: it seems to me that sensors provide lots of data, with possibly lots of computation needed on it
... so at least in the mid-term, having that done in workers seems like the right thing
... but yeah, the permission story might be difficult
... also, we need to distinguish a normal worker vs a service worker, which enables background processing

Anssi: we have issues #106/73 for background service worker, and issue #12 for worker/shared worker
... it seems like providing sensors in a worker is an easier problem than exposing it in service worker

<anssik> https://github.com/w3c/sensors/issues/12
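
[Illustrative sketch, not part of the discussion: what exposing a sensor in a dedicated worker (issue #12) might look like. Worker exposure was an open question at this meeting; the Accelerometer names below are assumed per later drafts.]

  // main.js — offload sensor processing to a dedicated worker (hypothetical).
  const worker = new Worker('sensor-worker.js');
  worker.onmessage = (e) => console.log('motion magnitude:', e.data);

  // sensor-worker.js — assumes the Sensor interfaces are exposed in workers,
  // which is exactly the open question in w3c/sensors#12.
  const accel = new Accelerometer({ frequency: 30 });
  accel.addEventListener('reading', () => {
    // Do the heavy computation off the main thread, post only the result.
    postMessage(Math.hypot(accel.x, accel.y, accel.z));
  });
  accel.start();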

Zoltan: the spec says it's agnostic about fusion vs low-level sensors
... do you distinguish between remote or bluetooth-connected device vs wired devices?

Tobie: we've punted on this; we only consider "on-device" sensors
... but that will likely need to be widened
... this also depends on where Web Bluetooth goes
... or automotive or WoT IG
... hard distinction to make between a sensor vs a server

Zoltan: the transport is not necessarily as important as the privacy/security difference among sensors

Tobie: there is a big difference depending on whether this is handled by the OS vs the browser

Zoltan: but I think that aspect should be mentioned in the spec

Tobie: I think I have something there

Zoltan: it says "remote sensors are out of scope"

Tobie: some things make sense for a local sensor (e.g. sampling frequency), less so for a mile-away remote sensor

Kenneth: it's great to have it in workers, but it should not be workers-only
... e.g. in cases where the UI is mostly based on sensor input
... like a compass or a game
... Maybe we need a SensorWorklet

Anssi: so the main take away from the TAG review is worker & permission

Tobie: on frequency, you should cap, not fail

Kenneth: OK, so there is a bug in the chromium impl which currently fails without giving you info

Tobie: Android & iOS don't provide direct info on available frequencies; we could ask the UA to do it, but that seems costly for something that developers can do on their own with timestamps
... The main reason to go higher than the display rate is to lower the latency for display
... e.g. for VR

Kenneth: what happens if the execution time of processing an event is greater than the frequency of sensor reading?

Tobie: can't remember if we have something for this, but we might

<anssik> https://w3c.github.io/sensors/#sensor-reading-timestamp
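
[Illustrative sketch, not part of the discussion: estimating the delivered sampling rate from consecutive reading timestamps, as Tobie suggests developers can do themselves. The AmbientLightSensor name and the timestamp attribute are assumptions based on the draft linked above and later revisions.]

  // Estimate the effective sampling frequency from reading timestamps.
  const sensor = new AmbientLightSensor({ frequency: 60 }); // 60 Hz is only a hint; the UA may cap it
  let last = null;
  sensor.addEventListener('reading', () => {
    if (last !== null) {
      const deltaMs = sensor.timestamp - last; // timestamps in milliseconds
      console.log('effective rate ~', (1000 / deltaMs).toFixed(1), 'Hz');
    }
    last = sensor.timestamp;
  });
  sensor.start();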

Dom: do we have a clear mapping between the TAG feedback and our spec issue list?

Tobie: not yet, it's on my near-future TODO list

Dom: it sounds like we might need to enable different back-pressure mechanisms

<Hiroki> present+ Hiroki_Endo

<scribe> Scribe: zkis

Generic Sensor API issues

Anssi: looking at open issues; we want to merge Level 1 and Level 2
... feasibility is important
... let us cherry-pick key issues for group discussion

<anssik> https://github.com/w3c/sensors/issues/125

Anssi: level 1 things: #125

Tobie: #125 is basically resolved;

Riju: it is related to #126

Tobie: what "resolved" means: we know what to do about the issue, it will be fixed soon

<anssik> https://github.com/w3c/sensors/issues/103

Tobie: where the data should be: on sensor object or through the event
... should the event tell you that something changed on the object, or hold the new data?
... both have different issues and benefits
... so far we have both on the object and on the event

Zoltan: what about different data representations

Tobie: we have different objects for different data representations

Dom: why did we need it on the event at all

Anssi: let's take the ambient light sensor for example
... it is stored on the object

Tobie: we'll handle representations via the constructor's init dictionary
... where you show the payload depends on how it works with cached data vs one-shot events
... on the object you have a timestamp
... question: how do we cache a reading
... this needs more thinking

Anssi: can we layer this on top of the existing API

Tobie: not sure at this point; if we add it somewhere we can't remove it later;
... simple events make it easier to spec
... we also want to make sure this is the right thing
... avoid developers making wrong assumptions

Kenneth: it's nice to be able to ask for latest data
... so expose it on the other object

Tobie: privacy concerns with that?

Dom: why this would create different privacy?

Tobie, Maryam: cross-context issues

Tobie draws: the problem with the one-shot method was that we had it on the constructor, not the instance. With the start() method this is no longer a problem.
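
[Illustrative sketch, not part of the discussion: the explicit start()/stop() lifecycle makes a one-shot read trivial. The AmbientLightSensor name and illuminance attribute are assumptions based on later drafts.]

  // One-shot reading via the explicit lifecycle (sketch).
  const als = new AmbientLightSensor();
  als.addEventListener('reading', () => {
    console.log('illuminance:', als.illuminance);
    als.stop(); // stop after the first reading: a "one-shot" use of the API
  }, { once: true });
  als.start();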

Riju: I like the event model more
... for current sensors we don't have the use case of reading once

Tobie: sending events is much cheaper when we use simple events
... there are race conditions when we expose data through events

Dom: use the simpler version and see if it works

Kenneth: this is also more consistent across multiple sensors

Riju: suppose we have 2 JS objects with 2 different frequencies; if we use readings in the sensors, then we use the higher frequency; if we do events, we can expose the reading at lower AND higher frequency

Kenneth: 2 different callbacks with different frequencies...

Tobie: that is a different question
... events will come at the highest frequency
... setting a frequency is a hint for a minimum and it is not guaranteed

Zoltan: can you set a maximum frequency?

Tobie: no, what is the use case

Dom: battery

Tobie: both are listening, so not an issue
... same model is used by Android and iOS
... we could imagine that each object gets at their own frequency, but that creates weird things when frequencies don't match
... this is a simpler model and works better in most cases
... for high performance scenarios this is not going to be used anyway, they will use direct readings
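
[Illustrative sketch, not part of the discussion: the model Tobie describes, in which the requested frequency is a hint for a minimum and two handles onto the same underlying sensor both observe readings at the rate the platform actually samples at. Constructor and event names are assumed per later drafts.]

  const slow = new Accelerometer({ frequency: 5 });  // hint: at least 5 Hz
  const fast = new Accelerometer({ frequency: 60 }); // hint: at least 60 Hz
  slow.addEventListener('reading', () => console.log('slow handler at', slow.timestamp));
  fast.addEventListener('reading', () => console.log('fast handler at', fast.timestamp));
  slow.start();
  fast.start();
  // In the model discussed here, the platform samples once (at the higher rate)
  // and both objects see readings at that rate; frequencies are not guarantees.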

Anssi: so on #103 we should remove it now, before it becomes hard because implementations are using it

Tobie: sensor.reading is valuable for many use cases and has benefits of not having to fire events
... the right thing is to remove the payload in the event
... and add back if really needed in the future

Anssi: Rick's feedback: devs want to do stuff with the event, to encapsulate everything to be done
... this comes from Node context
... so we don't want to shut these use cases out
... but also think about garbage collection

Tobie: what is the memory footprint penalty of attaching objects to events?

Ningxin: the GC references build-up is a problem
... depending on implementation and JS engine, there could be problems with buffering events etc

Tobie: we should remove now and think later
... before devs use it in applications

<anssik> PROPOSED RESOLUTION: remove event payload from SensorReadingEvent

<anssik> no concerns?

RESOLUTION: remove event payload from SensorReadingEvent
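
[Illustrative sketch, not part of the discussion: the pattern this resolution implies, where the event carries no payload and the handler reads the latest values from the sensor object. At the time the draft exposed them via a reading attribute; the flattened illuminance/timestamp attributes below follow later drafts and are assumptions here.]

  const als = new AmbientLightSensor({ frequency: 10 });
  als.addEventListener('reading', () => {
    // No data on the event after this resolution; read from the sensor object.
    console.log('lux:', als.illuminance, 'at', als.timestamp);
  });
  als.start();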

<anssik> https://github.com/w3c/sensors/issues/101

Tobie: defer this until solved in the DOM

<anssik> https://github.com/w3c/sensors/issues/42

Tobie: tending to say no to high level fusion, see issue discussion
... not sure how we want to do this
... issue discussion to be continued

<anssik> https://github.com/w3c/sensors/issues/22

Tobie: WebIDL should fix this
... keeping all permissions in one spec is idealistic

Zoltan: opinions in the issue are that it's still the least complex, and PRs are cheap

Riju: permissions are mentioned in each concrete sensor's spec
... but with fusion, is it the sum of the parts?

Zoltan: you'll likely need a new permission for the fusion sensor (i.e. not sum of parts)

Tobie: let's rather have "give me permission to all the things I want"
... combine permissions easily by the User Agent
... popups are a bad idea

Anssi: present "undo" options
... so asking forgiveness, not permission
... one size does not fit all

Tobie: how to spec this; the permission spec asks for a name; let's switch that to a namespace rather than a name
... if the end user is the same, does it make sense to ask separate permissions for everything separately?
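
[Illustrative sketch, not part of the discussion: the naming question expressed with the Permissions API's query(). The per-sensor names below ('accelerometer', 'gyroscope', 'magnetometer') later shipped in some browsers; the grouped 'motion' name is purely hypothetical.]

  async function checkMotionPermissions() {
    // Option 1: one permission name per concrete sensor.
    const names = ['accelerometer', 'gyroscope', 'magnetometer'];
    const results = await Promise.all(
      names.map((name) => navigator.permissions.query({ name }))
    );
    console.log(results.map((r) => r.state)); // e.g. ['granted', 'prompt', 'prompt']

    // Option 2 (hypothetical): a single grouped name covering the fused sensors.
    // const motion = await navigator.permissions.query({ name: 'motion' });
  }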

Maryam: research shows users grant access more readily for some sensors than for others

Tobie: the question is how to group permissions together and call them one thing (implementors can already do that); as a UA, binding permissions together is possible
... should we call all separate permissions under a name like "motion"
... or should they be separated

Kenneth: they should be separated

<maryammjd> *People understand some sensors better than others (based on the names, e.g. orientation and motion vs accelerometer and gyroscope)

Anssi: the platform will do the mapping
... the platform may do some translation when showing user prompts, but apps should ask permissions separately

Tobie: if you split things it defeats the idea of high level sensor

<riju> all concrete sensors have a permission name as of now, do we need a "blanket" permission depending on the fusion ?

Tobie: because the purpose of a high level sensor is an abstraction that doesn't burden the user with the details; they won't be able to connect the partial permissions with the final permission
... 2 ways to solve it: 1) each sensor has a unique name, and the UA can then do the mapping itself
... 2) permission groups in the permission spec itself
... it needs to be discussed in the Permission spec

Kenneth: it would be nice if permission could be given to an object; I want to use this thing

Maryam: 2 different groups of sensors: motion sensors and ambient sensors
... no more categories at the moment

Kenneth: that is why I suggest having the permission on a given sensor object
... easier for developers, as they don't need to know permission names

Maryam: easier for users to decide when they have fewer permissions to care about

Tobie: the question is about exposing permissions to developers, not users

<riju> https://github.com/w3c/sensors/issues/132

Tobie: the main question is still: if from a privacy perspective there is no reason to separate permissions for lower level categories, then let us not do it

Kenneth: giving permissions to Sensor objects is also future proof
... platform can map it to user permission prompts if needed

Anssi: partial enums might not be a solution after all

Anssi updates the issue #22 comments

Tobie: high level sensors have fewer privacy issues than low level ones
... if D is made of A, B, and C in an unknown way, the UA can control D in any way it wants, perhaps exposing less information than A, B, and C separately
... this gives developers an incentive to create these new contracts
... to pick a less invasive version of the same data source

Anssi: this is capturing the discussion, add more comments to #22 if you want
... this concludes Level 1 key issues
... move Level2 and future work for tomorrow

<anssik> https://www.w3.org/2009/dap/wiki/LisbonF2F2016#Agenda

Discussing the agenda for the next day.

Summary of Action Items

Summary of Resolutions

  1. remove event payload from SensorReadingEvent
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/09/30 13:30:19 $