W3C

- DRAFT -

WoT-IG Open Day/Plugfest

11 Apr 2016

See also: IRC log

Attendees

Present
Joerg_Heuer(Siemens), Dave_Raggett(W3C), Johannes_Hund(Siemens), Joao_DE_Sousa(Bosch), Victor_Charpenay(Siemens), Valentin_Heun(MIT_MediaLab), Daniel_Peintner(Siemens), Taki_Kamiya(Fujitsu), Matthias_Kovatsch(Siemens), Soumya_Kanti_Datta(Eurecom), Kazuaki_Nimura(Fujitsu), Ryuichi_Matsukura(Fujitsu), Kaz_Ashimura(W3C), Toru_Kawaguchi(Panasonic), Katsuyoshi_Naka(Panasonic), Kazuo_Kajimoto(Panasonic), Yoshiaki_Ohsumi(Panasonic), Yingying_Chen(W3C), Sebastian_Kaebisch(Siemens), Louay_Bassbouss(Fraunhofer_FOKUS), Claes_Nilsson(Sony), Ian_Skerrett(Eclipse_Foundation)
Regrets
Chair
Joerg_Heuer
Scribe
Yingying_Chen, yingying

Contents


Welcome to the WoT IG Open Day

-> https://www.w3.org/WoT/IG/wiki/images/4/4c/Open-day-intro.pdf

Joerg: any questions?
... next we have a number of contributed talks.

1. Soumya Kanti Datta (EURECOM): IoT Application Development using Semantic Web Technologies

2. Valentin Markus Josef Heun of MIT: Reality Editor project

3. Joao Sousa (Bosch): BEZIRK Platform

4. Dave Raggett: implementation work on web of things servers and gateways

5. Takuki Kamiya & Daniel Peintner: EXI for WoT

6. Michael Koster: T.B.D.

7. Matthias, Kajimoto-san, Sebastian, Johannes: Getting Started with a WoT Project

8. Ian Skerrett: "Eclipse IoT: Open Source Building Blocks for IoT Developers"

scribe: after that would be the plugfest.
... now I would ask Soumya to give the presentation on IoT Application Development using Semantic Web Technologies

Soumya presents IoT Application Development using Semantic Web Technologies

<dsr> scribenick: dsr

Soumya’s slides: https://www.w3.org/WoT/IG/wiki/File:Horizontal_IoT_application_development.pdf

Soumya describes an approach in which sensors are discovered and provisioned, and provide their data in SenML, which is then translated into RDF

This supports applications through semantic web technologies such as SPARQL.
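[For illustration, a minimal Node.js sketch of the SenML-to-RDF idea described above; the vocabulary URIs, field names and the senmlToTurtle helper are assumptions for this sketch, not Soumya's M3/Jena implementation.]

  // Turn a SenML record into RDF triples (Turtle) that could later be
  // loaded into a triple store and queried with SPARQL.
  const senml = { bn: 'urn:dev:sensor-A', e: [ { n: 'temperature', v: 22.5, u: 'Cel' } ] };

  function senmlToTurtle(record) {
    return record.e.flatMap(entry => [
      `<${record.bn}> <http://example.org/iot#${entry.n}> "${entry.v}" .`,
      `<${record.bn}> <http://example.org/iot#unit> "${entry.u}" .`
    ]).join('\n');
  }

  console.log(senmlToTurtle(senml));
  // <urn:dev:sensor-A> <http://example.org/iot#temperature> "22.5" .
  // <urn:dev:sensor-A> <http://example.org/iot#unit> "Cel" .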

He describes a process whereby during provisioning, users select semantic templates for their devices.

The RDF is handled in the cloud using the M3 Framework which itself is implemented using the Jena Framework.

An Android device is used as a home gateway that bridges the cloud to the IoT devices

The gateway is based upon the Android SDK and AndroJena

Questions?

Kaz: Referring to slide 12: this shows two sensors, A and B. Those sensors A and B know their own capability, i.e., A is a thermometer for the human body and B is a thermometer for room temperature. So they have some memory and remember their capability, right?

Soumya: Right.

Kaz: 2nd question on slide 13. What about coordination and synchronization across a group of devices? Sometimes we want to synchronize multiple devices depending on the application.

Soumya: yes, that is a challenge. here you can have any number of devices.

Johannes: you do reasoning in the cloud, and push actuation to the edge. Have you considered how you would handle actuation at the edge?

Soumya: actually the cloud is only used to hold the data and templates; the rest is handled at the edge

He points to the paper referenced at the bottom of slide 11. This explains everything.

Soumya: the only thing that goes to the cloud is sensor type and domain

We could do the processing in the cloud, but Android devices like tablets are powerful enough for the kinds of tasks we’re looking at.

The cloud would be good for analysing very large amounts of data

Question about vehicles

The data is key to enabling new services

Valentin Markus Josef Heun of MIT: Reality Editor project

Joerg introduces Valentin

Valentin: I am a PhD student at the MIT Media Lab.

I did a degree in design and am now working on computer science and bridging the two areas

The IoT area is particularly interesting in respect to the human machine interface.

My thanks to Johannes for inviting me to talk today.

I will talk about an augmented reality interface to the real world.

He shows a smart phone for controlling a smart home.

Will the IoT machines just chat together, or do we want to stay in control?

Right now there are a huge number of apps for controlling IoT services, it is completely overwhelming.

How can we recover the relative simplicity of the real world?

We need a way that ordinary people can cope with. Terminals and lots of arcane commands won’t work. Graphical user interfaces were an improvement.

We need to associate the icons in apps with the real world devices, e.g. a door handle, light switch or thermostat

He shows a video of a card with a sensor. The augmented reality display adds a computer generated display that appears to be attached directly to the card

He shows several other examples where human interaction with physical objects drives the human machine interface shown in the augmented reality display.

He asks how can we balance virtual/physical object interaction?

When you buy a product (e.g. a toaster) you first learn how it is operated. You can then set it up to meet your needs

Much less interaction is needed once the set up phase is done

e.g. a car radio where you preset the buttons to your favourite stations

Spatial interaction with physical objects exploits your spatial memory

End-user programming is way too complicated. This is why we need machines to talk to each other and learn from us

Question: you’re switching from operation to programming, …

Answer: let me explain as I continue

For the automotive industry, it takes 5 years to get to market, so they need to predict where to aim for in 5 years time. This is hard.

We need an abstraction layer to relate devices and enable them to talk to each other.

Each physical device has a number of components that you can interact with.

These can be mapped into numeric parameters

He talks about one device without a timer being connected to another device to use that device’s timer.

Many IoT devices commit you to handing control to a cloud based service.

It would be nice to allow users to retain control and change the service as desired.

In a video of demos, each device has a generalisation of a 2D bar code that the smart phone can use to identify how to talk to it. This makes it easy to create new augmented reality user interfaces for these devices.

The project is implemented entirely on open web standards, e.g. HTTP, WebSockets, JSON, HTML5, JavaScript

He cites the recent news stories of NEST ending support for earlier devices turning them into useless objects.

We need to give users power to control their things

He notes that the devices use multicast broadcasting over UDP.

Each device holds its own description
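[For illustration, a minimal Node.js sketch of the UDP multicast announcement pattern described above; the multicast address, port and message format are assumptions, not the Reality Editor's actual protocol.]

  const dgram = require('dgram');

  const MULTICAST_ADDR = '224.0.0.251';  // illustrative multicast group
  const PORT = 52316;                    // illustrative port
  const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

  socket.bind(PORT, () => {
    socket.addMembership(MULTICAST_ADDR);
    // Announce where this device's self-hosted description can be fetched.
    const announcement = JSON.stringify({
      id: 'lamp-01',
      descriptionUrl: 'http://192.168.1.23:8080/object.json'
    });
    socket.send(announcement, PORT, MULTICAST_ADDR);
  });

  socket.on('message', (msg, rinfo) => {
    // Editors and other devices learn about the device and fetch its description.
    console.log('announcement from ' + rinfo.address + ': ' + msg.toString());
  });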

The reality editor uses IFRAMEs to project web pages onto the physical scene

More details on OpenHybrid.org

Questions:

Sebastian: where are the semantics? That wasn’t clear to me

Valentin: the work comes from the user perspective; semantics are more important in dynamically changing environments

Joerg: you showed how to define behaviours for combinations of physical objects. This will need semantic info to scale up

Valentin: I am exploring this in my thesis work

Johannes: your work relates to things we’ve been thinking of at Siemens, e.g. how to decompose things, and how to model them

For example, contrasts between properties and actions, and processes that last for some extended time period

Joao Sousa: how do you cope with different people being confused by how things have been set up, or when you come back to something after a period in which you’ve forgotten what you did earlier?

Valentin: the behaviour is not stored in your smart phone

The domain affects what kinds of interactions make sense, i.e. are they safe or would they cause a risk of harm

Joao: what’s the implementation architecture?

Valentin: the bar code for each device identifies a URI on a web server for apps to talk to

Valentin’s contact info: heun@media.mit.edu

We thank Valentin and start a coffee break of 30 mins

Joao Sousa (Bosch): BEZIRK Platform

Joao introduces himself.

I’ve been with Bosch since 2012. 16 years ago I was working on connected things at Carnegie Mellon, …

Zirk comes from the German word for circle

you can also use the sense of machines that have gone berserk! :-)

He mentions a story about a garage door service where customers were shocked to be told not to use their doors for the 4 hours during which the software upgrade was being applied!

Many current devices just talk to their vendor’s own service, i.e. each vendor operates in its own silo

Little incentive for sharing data beyond the silo

Some data is meant to be shared; other data is private and not for sharing

There are concerns about leakage as different info sources are combined to build strongly identifying models about you.

Amazon uses cloud based speech processing, so now Amazon can listen to what’s happening in your home!

Other examples include Jabra (pulse rate), SleepIQ (snoring) and TOTO toilets that perform real-time analysis of bodily wastes - that info would be valuable to insurance companies. This is an unprecedented level of risk to privacy

We’re interested in allowing users to reclaim boundaries of privacy

We do this with what we call “spheres”

Communication between spheres is via “pipes” - these are requested by services and authorised by users.

Spheres of trust are boundaries of confidentiality

It is perhaps unreasonable for a sensor/appliance to be burdened with being a peer on the Internet.

We need good APIs to enable lots of local processing.

IP addresses, geolocation and semantic topic labels are ways of identifying who can receive messages.

He notes that Bosch have cameras that have embedded face recognition ...

This can be used to verify that someone is present

A smart phone can verify its owner is present and operating that device

Bezirk supports directed messages that are sent to a named end-point, and pub-sub models where senders direct messages to parties interested in a given semantic address

The latter can include the geolocation, perhaps expressed semantically rather than as lat/long

Confidentiality spheres of trust are useful socially for privacy and technically for scope and scale.

Joao notes that model interchange protocols are key

He describes context sensitive preferences, e.g. where one person values a warm room, or how strong someone likes their coffee, which might depend on the time of day etc.

IoT services should be able to learn and apply people’s preferences and to balance different people’s preferences where there is a potential conflict

As people interact with devices, the devices advertise the interactions with their users. A personal device like your smart phone can then discover other nearby devices and program them accordingly

Joao cites stepping into a rental car or into a hotel room as examples

Bosch have worked on one protocol for observations of user behaviour and another for requesting user profiles with information restricted to designated topics.

They’ve used a subset of the W3C semantic sensor network ontology

They want to publish the SDK for this work

and open source the implementation

Questions:

Sebastian: how do you describe actuators?

Joao: the observation protocol is about learning behaviour

Devices may combine sensing and actuation roles

Actuation exploits the personal profiles protocol (named penguin)

Sensors just broadcast observations using the protocol named dragonfly

A user agent combines knowledge inference, knowledge modelling and knowledge filtering.

We need a policy for resolving conflicts between rules

If everyone’s phone is their user agent, then these agents need to collaborate to resolve conflicts due to different rules for different people.

The slide also allows for resolution in the device being controlled, or perhaps its agent.

The approach allows users to ask why some preference wasn’t applied

Joerg: there could be thousands of people operating devices, so scaling becomes a challenge

It is important to find simple abstractions if we are to succeed

<kaz> scribenick: kaz

Kaz: in that case, maybe we could have multiple levels of manager of interactions, e.g., home level, city level and country level

Joao: possibly. let's talk about that offline.

Kaz: yes.

<kaz> scribenick: dsr

Louay: clients broadcast messages; how do you deal with the management of the spheres of trust?

slides to be made public

Implementation work on web of things servers and gateways (Dave Raggett)

<kaz> scribenick: kaz

-> https://www.w3.org/WoT/IG/wiki/images/2/25/Arduino-server-dsr-2016-04-11.pdf Dave's slides

soumya: lot of engineering challenges
... p24, semantic constraints
... what kind of constraints?

dsr: software and power
... maybe energy is a good metric

soumya: you can run it on a smartphone?

dsr: very powerful environment
... what if there is energy constraint

claes: very first slide
... p2
... what is the gateway?

dsr: some device more powerful than an Arduino
... and that can talk multiple protocols
... the addressing issue concerns the lower-layer stack

sebastian: T2T interaction
... need to think about that

dsr: lot of opportunity for different topologies

claes: direct p2p connection?

dsr: depends on the hardware capability

joerg: constraints with device capability
... controller for communication or just sensing?
... scale from small to large
... we should follow the idea

dsr: there is a rank
... trying to understand what the architectural requirements are

valentin: lot of discussions by Intel
... esp. regarding Arduino

dsr: insecure devices and secure devices

valentin: complicated to consider device-to-device infrastructure

joao: distinguish Things?

dsr: the question is how to expose the application
... had lots of discussion during previous meetings
... actions may pass a value when they're invoked

joerg: we're quite behind, so I would propose we have lunch now and after that let's do the EXI session
... we'll have the demo session in another room
... so we'll give a brief introduction for that later

[ lunch ]

<yingying> scribe: yingying

Takuki Kamiya & Daniel Peintner: EXI for WoT

Takuki shows the slides. He explained the background of XML.

Takuki: there are use cases that XML cannot deal with. We need to handle these use cases.
... we saw 2 factors in the use cases. One is the use of schema. If there is a schema, it can be totally or partially defined. Sometimes you cannot find a schema.
... another factor is the use of compression.
... the EXI design is schema-informed. Schema is optional. Compression is optional.
... this diagram shows the baseline grammar is used when no schema is provided.
... these are some encoding results.
... another aspect to consider is processing efficiency. EXI is good.
... about channels: markup goes into the structure channel and values go into value channels. This way EXI does better than xml+gzip.
... about implementations, the top three are open source you can use. The bottom 2 are commercial tools.
... that is my part of the presentation.

Daniel: I will try to take over.
... what would people get from EXI rather than JSON?
... efficiency, more compactness, and reuse for both XML and JSON.
... this is a simple example of transforming JSON to EXI for JSON.
... these are measurements we did today. EXI4JSON is much better than JSON.
... this is just for JSON using a very generic schema.
... extended strings: EXI strings, shared strings, split strings.
... this work has just gone into a public WG. Here are the links for your convenience.
... comments or questions?
... I put the demo link here.

@pointer

Daniel: left side is JSON, right side is the binary.
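[Editorial sketch of the JSON-to-EXI mapping being demonstrated, as recalled from the EXI for JSON drafts; the element names, the key attribute and the namespace URI should be checked against the published spec.]

  JSON document:
    { "room": "kitchen", "temperature": 22.5 }

  Intermediate XML form, which is then EXI-encoded against the built-in EXI-for-JSON grammar:
    <j:map xmlns:j="http://www.w3.org/2015/EXI/json">
      <j:string key="room">kitchen</j:string>
      <j:number key="temperature">22.5</j:number>
    </j:map>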

Dave: what is the timeline of the WG?

Takuki: Now working on JSON/XML for WoT. CSS compaction just started.

Joerg: Thank you.
... next topic is a WoT project from Matthias.

Getting Started with a WoT Project

Matthias: there are some of links for documents: architecture and current practice.
... wiki for preparation of next F2F plugfest.
... last ones are plugfest projects available.
... first, Kajimoto-san will introduce the architecture, I will talk about WoT interface, Sebastian will introduce the TD and Johannes will introduce the Scripting API.

Kajimoto: this is an overview of the architecture for WoT servient.
... App script provides access to and control of the internal data of devices.
... TD is very important to discover the thing.
... simple case: the browser knows the semantics of the WoT device from the TD so that the browser app calls the client API through the WoT API.
... the servient will call the corresponding API according to what is requested.
... some powerful devices can host the servient themselves.
... WoT Servient can also be on the smartphone.
... this is a WoT servient on smart home hub.
... this is an example of WoT servient on cloud.

Matthias: let's look at the WoT API.
... above the WoT interface, we have the protocol binding. we can have multiple protocol bindings. Above the protocol bindings is the resource model.
... the servient can have the client role or server role or both.
... if you want to participate in the plugfest, you need to figure out what the role of your device is.
... then select the platform that is suitable.
... client role: Angular.js and web browser.
... server role: sensor/actuator: Arduino, ESP8266, mbed; device simulators: Node.js.
... for proxy, not yet. Welcome to contribute.
... then pick the protocol you want to support.
... HTTP, CoAP; for MQTT, Eclipse has the Mosquitto project (see the sketch below).
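[For example, a minimal Node.js sketch of talking MQTT to a local Mosquitto broker using the open-source 'mqtt' npm package; the broker URL and topic names are illustrative.]

  const mqtt = require('mqtt');                          // npm install mqtt
  const client = mqtt.connect('mqtt://localhost:1883');  // e.g. a local Mosquitto broker

  client.on('connect', () => {
    client.subscribe('wot/plugfest/temperature');        // listen for thing updates
    client.publish('wot/plugfest/temperature', '22.5');  // expose a property value
  });

  client.on('message', (topic, payload) => {
    console.log(topic + ': ' + payload.toString());
  });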

Sebastian: Thing Description.
... I want to use a WoT servient, and I have several questions. The answers are in the TD.
... based on the information in the TD, the T2T communication can be set up.
... describe your thing based on JSON-LD.
... what does the TD look like? Here is an example (see also the sketch after the link below).
... We need the TD context, which has to be standardized by the W3C WoT WG.
... There might be external contexts defined by other organizations to enrich the definitions with additional semantics. This is not in W3C scope.
... How to create a TD?

-> http://w3c.github.io/wot/current-practices/wot-practices.html#sec-td
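[A rough sketch of what such a TD might look like; the field names follow one reading of the current-practices document linked above and should be checked there, and the values are made up.]

  {
    "@context": ["http://w3c.github.io/wot/w3c-wot-td-context.jsonld"],
    "@type": "Thing",
    "name": "MyTemperatureSensor",
    "uris": ["coap://192.168.1.42:5683/temp"],
    "encodings": ["JSON"],
    "properties": [
      {
        "name": "temperature",
        "valueType": "xsd:float",
        "writable": false,
        "hrefs": ["value"]
      }
    ]
  }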

Kaz: hierarchy of the TD?

Michael: would discuss that point

Victor: control other things?

Michael: maybe just a collection of things.

Victor: there is convention in JSON-LD.

Michael: That is just what I'm looking for. What should we do?

Dave: do we have defaults?

Michael: there could be things in one level or multiple levels.
... recursively or whatever.

<kaz> scribenick: kaz

Sebastian: would talk about the details during the breakouts

Kajimoto: agree

Michael: about the media types, do we have a better way to organize them instead of just the real order?

Matthias: the question could be a topic for breakout session.

<kaz> scribenick: Kajimoto_Kazuo

Joao: Consider the use case "Tell me the temperature very near me": the current Thing Description is a static description, not a dynamic semantic description, so the current TD cannot represent such semantics.

<kaz> scribenick: kaz

Kaz: basically Joao suggests we should think about concrete use case for each TD sample, and I'd agree
... we should generate concrete use case description for each TD sample

<kaz> scribenick: Kajimoto_Kazuo

Kajimoto: The current TD is a basic-layer description; to represent dynamic semantics, it would be appropriate to introduce an upper-layer TD scheme.

<kaz> scribenick: yingying

Johannes: Scripting API.
... how do we standardize the scripting API?
... let's look at the thing in terms of the 3 kinds of APIs: Client API, Server API and Physical API.
... why? to avoid fragmentation,...
... from the app development viewpoint and the thing vendor viewpoint
... I'll show examples of the scripting APIs.

Johannes showed the examples for client API, server API and Physical access.

Johannes showed an example of things discovery.
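[As an illustration of the Client API flavour being shown, a hypothetical snippet; function names such as WoT.discover, getProperty and invokeAction are made up for this sketch and are not the proposed standard API.]

  // Hypothetical client script: discover a thing via its TD, read a property, invoke an action.
  WoT.discover({ type: 'temperatureSensor' })
    .then(things => {
      const sensor = things[0];
      return sensor.getProperty('temperature')
        .then(value => console.log('temperature:', value))
        .then(() => sensor.invokeAction('calibrate'));
    })
    .catch(err => console.error('discovery failed:', err));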

Johannes: questions?

Claes: for this plugfest, how are security considerations for accessing the API supported?

Johannes: this is an item we identified from last plugfest. Currently we don't have protocol specification on it. However, it's in our roadmap.
... we should come up with examples so that we can see what we could implement in the scripting API or Runtime.

<kaz> scribenick: kaz

Joao: security should be handled not by applications but at the middleware level

<kaz> scribenick: yingying

Joao: Security is somewhat like what is done in a Java container.

[some discussion about security model]

<kaz> scribenick: kaz

Kaz: Automotive group has been discussing security use cases and clarified the need for several levels of security, i.e., OS level, middleware level, application level and data level
... also the security TF of this WoT IG has been discussing security needs as well; you're welcome to join the discussion on the mailing list
... and again I completely agree with you we should clarify security use cases as well

<kaz> scribenick: yingying

[tea break]

Ian Skerrett: "Eclipse IoT: Open Source Building Blocks for IoT Developers"

Ian: a technology has to be open if it is to be widely adopted.
... there are a lot of cases, and the web is the best one.
... still a lot of silos in IoT, which is frustrating.
... look at MQTT, by IBM and others in 1999. Around 2011-2012 it was announced as open source.
... MQTT is a success story in IoT.
... open HW is a key enabler for IoT.
... 5-7 years ago, no open HW was available. You needed to buy the board.
... also, open source is another key enabler for IoT.
... it allows you to download things and just try them.
... this is a typical IoT Architecture.
... MQTT, CoAP, and LWM2M are the open standards we implemented.
... MQTT adoptions: Amazon, MSFT Azure, IBM Bluemix. Last week Arduino announced it had adopted this model.
... bar chart about the protocols in use in IoT.
... we also have IoT GW services. IBM has four implementations of MQTT.
... This is the framework Kura provides.
... We have the building blocks and then we have the IoT Solution Framework.
... in the home automation industry, it's a mess. Hundreds of standards.
... Eclipse SmartHome is adopted by many companies.
... We are moving on to the server side.
... hawkBit and Hono are what we have as the IoT server platform. Red Hat and Bosch have adopted them.
... Vorto's goal: code generators for different platforms.
... information about our community: 21 projects, 150+ developers.
... we have a portal and sandbox servers.
... we are looking for people to get involved.
... that's it. any questions?

Kaz: is it OK to publish the slides to w3c wiki page?

Ian: yes.

<kaz> scribenick: kaz

Kaz: we might want to add the information in these slides (=Eclipse IoT and Vorto) to our Technology Landscape document at: https://w3c.github.io/wot/landscape.html

<kaz> scribenick: yingying

Matthias: we could add a link of implementation to the protocol section of our landscape.

Dave: test suite?

Ian: MQTT has an almost mature test suite.

Johannes: license, patent?

Plugfest and Demos

Joerg: balance between demo and plugfest. This time we don't add new features.

Joerg: I would ask Johannes to give you a framework.

Johannes: we use this table to collect people's contribution.

<kaz> Plugfest participation table

Johannes: what are the Thingweb projects?
... we are just providing some tools for people to join the WoT development.
... these are the findings from this F2F.
... comments or questions?

Joao: snake?

Joerg asked people to introduce what they implemented.

Dave: mDNS gateway

Matthias: CoAP binding implemented.

Louay: SSDP support for thing exposure and discovery.
... for the client, I use the tool Node-RED.

<kaz> Discovery Flows

Sebastian: T2T interaction based on common plugfest vocabulary.

<kaz> scribenick: kaz

(discussions on the TD processing flow)

(note that /onOff and /toggle here are device dependent commands translated from the TD for air conditioner and electric stove)

Michael: where does the TD come from?

Valentin: what about a generic description for air conditioners, etc.?

[ Open Day adjourned ]

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.128 (CVS log)
$Date: 2016/04/12 10:59:07 $