See also: IRC log
Kerry: co-chair of geospatial, which is in charge of the Sensor ontology maintenance and standardization
Dom: Staff contact for Device Sensors API
Francois: W3C Staff, involved in Web of Things IG where sensors and actuators are being discussed; seems like worthwhile alignment in this space
Linda: I participate in the Geospatial WG; I work for Geoknow, a Dutch org
... we have work around sensors
... I'm also active in the OGC, which has a standard sensor API as well; curious about the relation with this
Youenn: I'm interested in
understanding better the API, both for directly connected and
other sensors
... also interested, how we deal with sensors for which having
a dedicated sensor api standard doesn't make sense
tobie: more and more sensors are
exposed directly on devices
... IoT, bluetooth sensors
... it would be nice to have a generic way in which these
sensors are exposed
... it's one of the motivations for this work
... there is also a number of use cases where you need to have
access to lower level of sensor data than is currently possible
in the Web platform, esp. with high performance
requirements
... e.g. for virtual reality where you need direct output of
the gyroscope rather than the typical sensor fusion data
... The generic sensor API is a small building block,
essentially an extension of an EventTarget
... with the goal of making it usable in a wide range of
scenarios
... right now we're focused on exposing sensor data
... in a later phase, we'll look at discovering sensors
youenn: you're targeting both fusion sensors and low-level sensors?
tobie: yes; the idea is that
specs would determine which type of data they would want to
expose (they may expose both)
... the sensor API gives some guidance on when to expose one,
the other or both
... there are a number of privacy concerns that are the same
for all sensors, which this spec tries to define for
reference
... that way, a geolocation API based on this would only have to deal with geolocation-specific issues from a security/privacy perspective, not the generic aspects of sensors
... the goal is to make it easy to write new sensor APIs
... let's walk through some examples
... [example 3 in spec]
-> https://w3c.github.io/sensors/#model Generic Sensor API, example 3
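The shape being discussed (an EventTarget extension exposing sensor data) can be sketched as follows. This is a minimal stand-in, not the spec text: the `AmbientLightSensor` name, the `reading` event, and the `_newReading` hook are illustrative assumptions.

```javascript
// Minimal stand-in for the spec's Sensor base class (an EventTarget
// extension); names and behavior here are illustrative, not normative.
class Sensor extends EventTarget {
  constructor(options = {}) {
    super();
    this.frequency = options.frequency; // requested polling frequency (Hz)
    this.reading = null;                // latest SensorReading-like object
  }
  // A real implementation would receive data from the platform; this
  // hypothetical hook simulates a new reading arriving.
  _newReading(values) {
    this.reading = { timestamp: Date.now(), ...values };
    this.dispatchEvent(new Event('reading'));
  }
}

// A hypothetical concrete sensor, as a spec built on top might define it.
class AmbientLightSensor extends Sensor {}

// Usage: construct, listen, read the latest value off the sensor object.
const sensor = new AmbientLightSensor({ frequency: 60 });
sensor.addEventListener('reading', () => {
  console.log('illuminance:', sensor.reading.illuminance);
});
sensor._newReading({ illuminance: 250 }); // simulate a reading arriving
```

The point of the pattern is that specs building on this base class only define the reading's shape (here, `illuminance`), not the event plumbing.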
Tobie: the spec also has an
extensibility section that gives e.g. naming advice, whether to
expose high or low level sensors, how to deal with multiple
sensors
... currently the spec imagines that you can have one or
multiple sensors of the same type on a given device or in a
given environment
... it leaves the specifics of targeting one specific sensor to
specs that build on top of it
Anssi: in the current way we've
been dealing with sensors, there is no way to interact with
more than one sensor
... e.g. for the proximity sensor
... another issue was accessing data of a sensor before any
data changes (which a pure event approach imposed)
... that's how this work is helping
Riju: this was one of the reasons
why we had to reimplement the proximity event
... another one was to be able to specify the frequency of data
reading — the generic sensor api lets you set it in the
constructor
Anssi: this gives control back to
the developer
... BTW, how do you deal with requests for frequencies that are out of bounds?
Riju: this is implementation specific
Tobie: there is nothing specified
on this
... the plan is to expose this on the sensor object you get
back
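One way to "expose this on the sensor object" could be to clamp the requested frequency to the hardware's supported range and surface the value actually used. This is only a sketch of that idea: the supported range, the `requestedFrequency`/`frequency` split, and the `GyroscopeSensor` name are all assumptions, since the spec left this implementation-specific at the time.

```javascript
// Hypothetical handling of an out-of-bounds requested frequency: clamp to
// the device's supported range (0.5-120 Hz here, an arbitrary example)
// and report the actual frequency on the sensor object.
const MIN_HZ = 0.5;
const MAX_HZ = 120;

class GyroscopeSensor {
  constructor({ frequency = 60 } = {}) {
    this.requestedFrequency = frequency;
    // The sensor object reports the frequency actually in use.
    this.frequency = Math.min(MAX_HZ, Math.max(MIN_HZ, frequency));
  }
}

const s = new GyroscopeSensor({ frequency: 1000 }); // out of bounds
// s.requestedFrequency === 1000, s.frequency === 120
```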
mark: can someone describe the sensor reading algorithm?
tobie: it's not described yet —
note that existing specs don't say anything on this at the
moment
... one of my requirements is to make it usable with requestAnimationFrame — I'm not sure how to spec that yet
... (if it's even possible)
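One way requestAnimationFrame usage could work is for the page to poll the latest cached reading inside the frame callback, instead of handling one event per sample. A minimal sketch of that idea, with the rAF loop left as a comment so the snippet is self-contained; the sensor shape and names are assumptions.

```javascript
// Hypothetical latest-reading cache polled once per animation frame:
// rendering always uses the freshest sample, with no per-sample events.
function makeFramePoller(sensor, render) {
  return function frame() {
    if (sensor.reading) render(sensor.reading); // freshest sample wins
    // In a browser, the loop would continue: requestAnimationFrame(frame);
  };
}

// Simulated gyroscope whose latest reading is cached on the object.
const gyro = { reading: { timestamp: 0, x: 0.1, y: 0.0, z: 0.02 } };

let last = null;
const frame = makeFramePoller(gyro, r => { last = r; }); // e.g. update a VR camera
frame(); // one simulated animation frame; `last` now holds the cached reading
```

This is the access pattern the VR use case above wants: raw gyroscope output sampled at render time, rather than fused data delivered through an event queue.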
Mark: there is the Blink scheduling team that controls the animation frame loop and other timing loops in Chrome
Tobie: that's the more challenging part of the spec; I haven't looked into it yet
Anssi: Mark, any pointers or contacts you can share?
Mark: I can take an action to probe them on this; not sure how to help on the spec-ing of it
Tobie: one of my concerns about describing the read-steps is that this makes it specific to local sensors
Dom: this could be an "optional" algorithm based on an internal flag (local vs remote sensor)
Mark: or the remote aspects could
be dealt with at discovery time
... starting with local sensors sounds reasonable
Tobie: I'll need to work with Anne on how to spec this
Mark: another question I had: is this continuous reading or deltas?
Tobie: another great
question
... something that I also need to address
... for delta readings, you need to be able to define a
threshold
... it seems in most cases, sensor types tie you to a specific mode (at least, that's what Android seems to be doing)
... but I'm not sure yet whether that's always true
... if it is, this would be a different subclass; otherwise, a
parameter in the constructor
Riju: from an implementation
perspective, most sensors are dumb, and the content reader has
to take care of building a cache and determining whether changes pass a given threshold
... that's how ambient light and device orientation are done in Chromium
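The reader-side cache Riju describes can be sketched as follows: cache the last delivered value and only emit when the change exceeds a threshold. The class name, the 50-lux threshold, and the callback shape are illustrative, not Chromium's actual code.

```javascript
// Reader-side delta filtering for a "dumb" sensor: the content reader
// caches the last delivered value and only emits a new reading when the
// change passes a threshold. Names and values are illustrative.
class ThresholdedReader {
  constructor(threshold, emit) {
    this.threshold = threshold;
    this.emit = emit;  // callback invoked only for significant changes
    this.last = null;  // cached last-delivered value
  }
  push(value) {
    if (this.last === null || Math.abs(value - this.last) >= this.threshold) {
      this.last = value;
      this.emit(value);
    }
  }
}

// Usage: raw light-sensor samples, delivered only on a >= 50 lux change.
const delivered = [];
const reader = new ThresholdedReader(50, v => delivered.push(v));
[100, 120, 160, 165, 300].forEach(v => reader.push(v));
// delivered is [100, 160, 300]
```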
Mark: my other question is flow
control
... if a page subscribes to events at a rate faster than it can process them, you end up queuing events, in which case you'll want a buffer, and maybe to drop some of them
Riju: from an implementation perspective, right now, we tie event emission to page visibility
Mark: but that's only one case of
flow control
... network APIs e.g. have to manage a buffer size
... this could apply here as well
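Mark's comparison to networking could translate into something like a bounded reading buffer with a drop policy. A sketch under stated assumptions: the capacity, the drop-oldest policy, and the `drain` method are all invented for illustration, not anything specified.

```javascript
// One possible flow-control policy, borrowed from network APIs: a bounded
// buffer that drops the oldest readings when the page cannot keep up.
// Capacity and drop-oldest policy are assumptions for illustration.
class ReadingBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.buffer = [];
    this.dropped = 0;      // count of readings discarded under pressure
  }
  push(reading) {
    if (this.buffer.length >= this.capacity) {
      this.buffer.shift(); // drop the oldest reading
      this.dropped++;
    }
    this.buffer.push(reading);
  }
  drain() {                // the page processes whatever survived
    const out = this.buffer;
    this.buffer = [];
    return out;
  }
}

const buf = new ReadingBuffer(3);
for (let i = 1; i <= 5; i++) buf.push({ seq: i });
// buf.dropped === 2; drain() returns readings 3, 4, 5
```

Exposing a `dropped` counter is one way a developer could observe back-pressure, which is the "how developers can deal with this" concern Mark raises below.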
Riju: I can investigate how this is dealt with at the moment
Dom: what happens if you do prime number calculations in an orientation event handler? :)
Tobie: yeah, I need to look
further into this
... obviously that problem exists only for some sensors (e.g.
device orientation), not others (e.g. ambient light)
Mark: my concern is how developers can deal with this
Anssi: maybe something can be learned from the audio folks; they too have to deal with latency for fast-paced data
Tobie: good to confirm that the questions I have are the right ones :)
Anssi: does this match with existing issues?
Tobie: yes
Anssi: when Riju starts the implementation work, this will also highlight new problems
[discussion on how to deal with threshold callbacks]
Anssi: do platform APIs provide an optimized way to deal with threshold?
Riju: not that I know; this is something we deal with the bridge to Blink
Tobie: while the phone sensors are dumb, some of the phones come with smart hubs
Dom: maybe we could have a
predefined list of threshold functions that would work with
these smart sensors / smart hubs
... e.g. a geo sensor api would have a georadiusthreshold by
default, and a way to define a custom one (with impact on perf
/ battery)
tobie: there are cases where you want batched readings in a single event
francois: I think that was raised in the WoT IG as well
Dom: that's another case where a
comparison with network APIs might be useful
... I know this is something we've been looking at in
WebRTC
... might be worth discussing with networking folks
Anssi: is that something that Streams help solve?
Tobie: not really
Riju: what's the use case for batching if you have frequency control?
Tobie: e.g. to smooth the data without having performance cost
Francois: another case is for remote sensors, e.g. with low energy
Anssi: or e.g. radio communication from the sensors
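Tobie's smoothing use case for batched readings could look like this: the implementation accumulates raw samples and emits one event carrying the whole batch, which the page can average without paying per-sample event cost. The batch size, class, and callback names are assumptions.

```javascript
// Sketch of batched delivery: raw samples are accumulated and delivered
// as one event per batch; the page can then smooth the batch (a simple
// average here) without per-sample event overhead. Names are illustrative.
class BatchingSensor {
  constructor(batchSize, onbatch) {
    this.batchSize = batchSize;
    this.onbatch = onbatch; // called once per full batch of readings
    this.pending = [];
  }
  sample(value) {
    this.pending.push(value);
    if (this.pending.length === this.batchSize) {
      this.onbatch(this.pending); // one event for the whole batch
      this.pending = [];
    }
  }
}

// Usage: smooth 4 illuminance samples into one value per event.
let smoothed = null;
const light = new BatchingSensor(4, batch => {
  smoothed = batch.reduce((a, b) => a + b, 0) / batch.length;
});
[200, 220, 180, 200].forEach(v => light.sample(v));
// smoothed === 200
```

The same delivery shape would suit Francois's remote low-energy case: a sensor that wakes its radio once per batch rather than once per sample.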
Kerry: a related question: the spec seems to assume that a given sensor only deals with one type of data
Kerry: Remote sensors may expose more than one sensor per physical device
Tobie: For sensors that have, say, temperature and location, you would have two sensor objects that would be independent.
Kerry: But they would not be independent, typically because you would want to batch the transmission.
Tobie: One sensor object is tied to one physical property. If you have a sensor that combines many of those, then what you really want, and this is out of scope of this specification, is an object that contains a number of these sensors.
... That's what the Web of Things IG is considering.
Francois: Such sensors would generate only one "change" event with one timestamp and a bunch of properties, right?
Tobie: Right, so you could use SensorReading to define that bunch of properties.
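A multi-property reading along the lines Francois and Tobie describe, one "change" event with one timestamp and a bunch of properties, might look like this. The property names (`temperature`, `location`) and the factory function are hypothetical.

```javascript
// Hypothetical SensorReading-like object for a physical device reporting
// temperature and location together: one shared timestamp, several
// measured properties, delivered in a single change event.
function makeReading(values) {
  return Object.freeze({ timestamp: Date.now(), ...values });
}

const reading = makeReading({
  temperature: 21.5,                   // degrees C
  location: { lat: 52.37, lon: 4.9 },  // illustrative coordinates
});
// reading.timestamp is shared by both measured properties
```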
Dom: That ties in with Youenn's initial point at the beginning of this session related to custom proprietary properties.
Tobie: Can you generate an object with ad-hoc property values? I don't know how to do that in WebIDL.
Dom: If there were a magic function to address custom sensors which you want to read, you would discover the sensor, which would give you the type of data. When you instantiate the sensor, you get an object that implements that data model.
Francois: OK, that matches discussions in the Web of Things IG, I think.
Tobie: Could this be mapped to postMessage/onmessage?
Dom: That's a good question
Tobie: That would give you a way to listen to readings that are essentially a JSON message.
Dom: Pushing the comparison with networking, is there any reason why we don't adopt the same pattern?
Mark: Networking is a pre-agreed
protocol between two sides. How does the Web developer find out
what the protocol is with the devices?
... I don't think we should tie it with the data you get
back.
Dom: You could do both. A concrete API, or a "generic" API that would allow getting custom properties.
Riju: What I understand from this discussion is something like instantiating the Sensor object
Mark: There's a constructor in the WebIDL. Why is it here?
Tobie: I'm thinking about it. A
few specific use cases that may need this.
... I still need a place that explains what should happen in
the constructor, in the Extensibility section.
Dom: High-level question is: does
it need to be addressed right away? Then there is the question
of possibly converging with postMessage/onmessage
pattern.
... One of the things you can do with WebSockets and WebRTC
data channels is sending the events through the network.
... You get these events through a postMessage/onmessage
event.
... There may be value in reusing that pattern.
Youenn: So you would have a Thing API where you would have "onmessage" to receive readings and "postMessage" to send updates to the thing.
Dom: Right, I'm just thinking out
loud there, so that may not be a good idea.
... It may be a tiny detail.
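The pattern Youenn outlines, a "thing" with `onmessage` for readings and `postMessage` for updates, can be sketched like this. Everything here is hypothetical: the class name, the message shapes, and the `_deliver` hook standing in for the user agent's delivery path.

```javascript
// Sketch of the postMessage/onmessage pattern for a sensor type the
// browser does not know: readings arrive as plain JSON-like messages,
// so no per-sensor data model is needed. All names are hypothetical.
class RemoteThing {
  constructor() {
    this.onmessage = null; // page-assigned handler for incoming readings
    this._outbox = [];     // updates sent toward the thing
  }
  postMessage(update) {    // send a command/update to the thing
    this._outbox.push(update);
  }
  _deliver(reading) {      // stand-in for the UA delivering data
    if (this.onmessage) this.onmessage({ data: reading });
  }
}

// Usage: an altimeter the browser has no dedicated API for.
const altimeter = new RemoteThing();
let lastReading = null;
altimeter.onmessage = e => { lastReading = e.data; };
altimeter._deliver({ altitude: 132.5 });   // reading as an ad-hoc object
altimeter.postMessage({ interval: 1000 }); // ad-hoc update to the device
```

As Mark notes below, the open question with this shape is what spec-level concepts like frequency or batching would mean on top of what is essentially a raw message channel.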
Anssi: For random sensors that are unspecified, could you elaborate on that?
Dom: Imagine you get some kind of
objects that correspond to some sensor you have discovered. The
browser does not know about the sensor but the application
does.
... Perhaps because of discovery query options.
... We will likely have a Servometer API but we're unlikely to
have an Altimeter API for instance, and yet that could be
useful to access such sensors.
... What the thing is would be part of the Discovery
mechanism.
Mark: A couple of thoughts. If it's really a protocol between the browser and the sensor, then what is the value of the Generic Sensor API there instead of navigator.connect?
Tobie: This is the reason I have
the Constructor in the spec. To have such conversations.
... The added value is for companies that create new sensor
objects to make them available to JS libraries that know what
to do with them.
Mark: I don't understand how you
map this sensor interface with the JSON messaging that you
would have.
... What would frequency mean? What would batch mean?
... If it does not mean anything, you just map the API on top
of the raw socket
Dom: That's something to look into. I don't think we can have a clear decision in this room right now.
Tobie: OK, I have to think about this.
Mark: Another topic: It seems
these sensor objects are expensive. Do users control the
lifecycle of these objects or should the API expose some
disposal mechanism?
... With event handlers, you have to be careful so that they
can be garbage collected.
Dom: What is expensive is the callbacks.
Tobie: If you don't have handlers, you can stop listening to data.
Riju: In IoTivity, we do discovery where the sensors have specific IDs, and you choose which one to use.
<riju> https://www.iotivity.org/
<riju> https://github.com/otcshare/iotivity-node
Tobie: I did not add a stop message, but if you have 20 gyroscopes in your phone and are only interested in one, you do not want to start listening to all 20 gyroscopes.
Francois: Talking about this, the ability to list devices discovered is problematic from a privacy perspective.
Tobie: Right, that's why I will
remove matchAll for now.
... But the start/stop question is still important.
... It would allow instantiating sensors that I'm not going to use right away. No need to have the sensor info for everything.
... It solves a bunch of problems.
... Let's break. Afterwards, I'd like to list APIs that could use the Generic Sensor API, with the automotive folks in the room for instance.
Mark: We ran through the same kind of discussions for the Presentation API. We ended up with the PresentationAvailability object. The implementation can suspend the availability monitoring at any time.
... That's one way to control how often these kinds of expensive operations run in the background.
<anssik> https://w3c.github.io/presentation-api/#interface-presentationavailability
[break]
Tobie: I would be interested to
look into existing APIs and see how they fit in the spec
... Other interests?
Dom: Before that, some rough idea in terms of roadmap?
Tobie: A bunch of things still need to be added to the spec and/or clarified.
Dom: The plan is to draft a concrete API and see what emerges from there?
Anssi: Yes. For the Ambient Light
API.
... No need to polish the implementation. Riju can do it in a
private branch.
... We will push it when we consider it is final.
Tobie: Then I think we need a spec such as the Device Orientation spec to ensure that you cover all the needs.
Dom: Do we have anyone lined up for the Device Orientation spec?
Anssi: We may be able to do something there. Tim is in the loop.
Dom: OK, so a potential editor for that spec.
Riju: Proximity, Ambient Light, I can take care. Battery Status.
Dom: any other big one?
Tobie: No, the Device Orientation is the big one.
Dom: I wonder how many we need to validate the approach.
Tobie: I don't think it's about how many, but rather how many different types you have.
... You want sensors with multiple values.
... I'd be happy to have Ambient Light and Device Orientation
as outcomes.
Tobie: Then there is also the question of supporting remote sensors
Dom: We could have a v1 for local sensors-only, and v2 that supports remote sensors as well.
Anssi: It's important to set expectations.
Tobie: The last spec I could be interested to have is Geolocation, because it has specific requirements around cached data.
Dom: That particular implementation of the Geolocation API could be done in the Geolocation WG, which would be a good thing to further validate the approach.
Tobie: I have a very light polyfill for a few specs, including Geolocation, although based on a previous version of the spec.
Tobie: Any API you would like to test as a Sensor API?
Adam: Yes, from an automotive perspective, that would be interesting.
<AdamC_> http://rawgit.com/w3c/automotive/master/vehicle_data/data_spec.html
Adam: All of this stuff will be coming from a network. Could that be defined as a sensor source?
Dom: Do you want some way to say that you want this data only x times per second?
<AdamC_> http://rawgit.com/w3c/automotive/master/vehicle_data/vehicle_spec.html
Adam: Yes. The subscription mechanism lets you specify that.
[Tobie projecting a text editor to work on one of the interfaces]
[Discussing adapting the VehicleInterface and VehicleSignalInterface]
Adam: concept is to define a door interface from which you can subscribe to the front left door and so on.
Dom: This would rather be an array of doors that you listen to in the sensor API approach
Tobie: Essentially, that would reverse the way you approached it. var sensor = new DoorSensor({ zone: "front left" }), and sensor.onchange = ...
Adam: I think it is similar to how the implementations are setup.
Tobie: This would reuse the event system of the platform, instead of creating a new pub/sub model.
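Tobie's sketch above can be written out as follows. `DoorSensor`, the `zone` option, and the reading shape are names from this whiteboard discussion, not a published interface; the `_update` hook is an invented stand-in for the platform delivering data.

```javascript
// Hypothetical DoorSensor following the generic pattern discussed: the
// zone filter moves into the constructor, and updates arrive through the
// platform's event model instead of a custom pub/sub subscription.
class DoorSensor extends EventTarget {
  constructor({ zone } = {}) {
    super();
    this.zone = zone;    // e.g. "front left", selecting one door
    this.reading = null; // latest reading for that door
  }
  _update(values) {      // stand-in for the platform delivering data
    this.reading = { timestamp: Date.now(), ...values };
    this.dispatchEvent(new Event('change'));
  }
}

// Usage: the reversed approach Tobie describes.
const doorSensor = new DoorSensor({ zone: 'front left' });
let openState = null;
doorSensor.addEventListener('change', () => {
  openState = doorSensor.reading.open;
});
doorSensor._update({ open: true }); // simulate the front-left door opening
```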
Dom: One thing that is useful is
to define one type of event handler that will be used to
process many different types of events. It only works provided
the event model is the same for everything.
... For an app willing to interact with automotive APIs only,
it's ok, but otherwise it makes things more painful to
develop.
[More work on the blackboard on interface definitions]
Dom: Are there cases where you need actuators that would not be sensors?
Adam: No, I don't think so.
... Fundamentally, nothing is really going to change in the
Generic Sensor API?
Tobie: It depends on your timeline.
Adam: Q1 2016 for Last Call equivalent.
Tobie: We should have some good definitive answers by January.
Dom: That might be too late for
the group to re-write the spec.
... Do you think it is likely that you could converge to that
approach?
Adam: I like the idea of converging to a more Web-like approach, but I need to pass it through the group.
Francois: Migration to DOM events does not necessarily mean depending on the Generic Sensor API. That's something that would provide the first bit of alignment and that the Automotive Working Group could discuss and enact on their own.
Adam: Right.