
Web of Things Applications and Use Cases

Facilitator: Michael McCool

Update on Web of Things (WoT) progress in extending web standards to IoT, and a showcase for recent applications and use case scenarios from our online plugfest.

Slides

Minutes (including discussions that were not audio-recorded)


Transcript

So the slides do include video.

I'm gonna attempt to play the video via Zoom.

That may or may not work so well, but you can always download the slides later and watch the videos on your own time.

Okay, let me get started.

So let me organize things a bit more here.

Okay.

Sorry, I'm just moving things around to get them out of the way.

Alright (clears throat) So welcome to the Web of Things breakout session.

The purpose of the Web of Things is to apply web technology to use cases for the IoT, the Internet of Things.

So basically enabling devices to interact on the internet, both as web services and with web services.

And so our major goal is to achieve interoperability.

IoT has a serious problem in that there's lots of narrow verticals, and it's very hard to combine things together to create new applications because everyone has their own standards.

There's just too many ways of doing things.

And so we wanted to figure out how to bridge these silos to enable mashups.

And in particular, we're also working on things like a scripting API to allow you to glue things together.

And I'll show several examples of that.

We also wanted to simplify ingestion of data.

So if you get data from an IoT device, we'll make it easy to import that information into a database or into some analysis service.

And generally, we want to make it easier to use IoT devices.

Now we're in our second charter.

In our first charter, we published two documents, an architecture document and a Thing Description.

And also some informative documents for security and for scripting.

We are updating those documents, and we also have several new documents.

We are working on some use cases.

We are working on new normative documents for discovery and for profiles.

Now, I'll briefly describe our main deliverable so far, which is the Thing Description.

So a Thing Description is a JSON-LD document, which formally describes a device, including metadata about the device, like a description and an ID number and so forth.

And also what its network interface looks like.

What kind of security information is needed to access it, what the interactions are, what protocols are used for those interactions, what the data retrieved from the device looks like, what the content encoding is, and so forth.

Because this is JSON-LD, JSON for Linked Data, it supports the semantic web and it supports semantic annotation.

So we've also been coordinating with other activities like One Data Model to allow the use of semantic annotations for IoT and other things in Thing Descriptions.

I should also say the data schemas that are embedded inside Thing Descriptions to describe the data returned by IoT devices are based on JSON Schema.

Although you can also have other kinds of structured content, like CBOR or XML, described by the same schemas.
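
To make this concrete, here's a minimal sketch of a Thing Description (the identifier, URLs, and affordance names are made up for illustration; a real TD would carry more metadata):

    {
      "@context": "https://www.w3.org/2019/wot/td/v1",
      "id": "urn:dev:ops:sprinkler-0",
      "title": "sprinkler0",
      "securityDefinitions": { "basic_sc": { "scheme": "basic", "in": "header" } },
      "security": "basic_sc",
      "properties": {
        "status": {
          "type": "boolean",
          "forms": [{ "href": "https://farm.example/sprinkler0/status" }]
        }
      },
      "actions": {
        "startSprinkling": {
          "forms": [{ "href": "https://farm.example/sprinkler0/start" }]
        }
      }
    }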

Now, our new work items include use cases, and we actually have a large catalog of use cases.

And we've been doing a lot of work to identify requirements and gaps and overlaps with other groups.

For example, geospatial information. In our architecture document, we're also going through and adding information about life cycles, plus requirements analysis based on the use cases.

Our two major new normative documents are discovery and profiles.

So, TDs describe metadata about devices.

They don't say how you get the TD for a device.

So discovery is about how those TDs are distributed both locally and globally.

So it's not just local network discovery.

It actually includes things like search on the internet, including geospatial search.

We have a strong emphasis on privacy protection.

So in particular, we're using a two-phase approach.

The first phase is open and uses a variety of existing discovery protocols as a first contact mechanism, but does not distribute metadata.

Then, after authorization, you get an exploration phase where you actually get access to Thing Descriptions.

So the idea here is we have a very minimal cross-section as a first-contact protocol, and then you authenticate, and then you get access to metadata.

We also protect queries, so the first-contact query itself is very simple.

And detailed queries happen only after authentication.
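
As a rough sketch of what that two-phase flow looks like from a client's side (the directory URL and endpoint path are hypothetical, and how the token is obtained is elided):

    // Phase 1 (introduction): mDNS, DNS-SD, etc. reveal only an address,
    // say "https://directory.example", with no device metadata attached.
    // Phase 2 (exploration): after authorization, query for Thing Descriptions.
    async function explore(directoryUrl, token) {
      const res = await fetch(directoryUrl + "/things", {
        headers: { Authorization: "Bearer " + token }, // granted after authorization
      });
      return res.json(); // Thing Descriptions, visible only post-authentication
    }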

Profiles are about limiting the choices available in the Thing Description.

So right now, Thing Descriptions are very open-ended.

You can have arbitrary content types and arbitrary protocols.

The trouble is, if you wanna implement a device and know in advance that it will work with other devices, you have to have a finite set of supported protocols, so that you have a finite code base for the device.

And so this requires, you know, a set of constraints defined on Thing Descriptions for what protocols are supported.

So profiles are about defining these constrained subsets of Thing Descriptions.

Now, we've been doing a lot of work on implementations and on tools.

So in particular, we have a scripting API, which simplifies the use of Thing Descriptions from Node.js, and an implementation of that scripting API.

And this is an open source project called node-wot under the Eclipse Foundation.
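
Here's a sketch of what a consumer script looks like with node-wot (the exact API differs between node-wot versions, the TD URL and action name are hypothetical, and a global fetch is assumed, as in Node 18+):

    const { Servient } = require("@node-wot/core");
    const { HttpClientFactory } = require("@node-wot/binding-http");

    const servient = new Servient();
    servient.addClientFactory(new HttpClientFactory());

    servient.start().then(async (WoT) => {
      // Fetch a Thing Description and consume it (URL is hypothetical).
      const td = await (await fetch("https://farm.example/sprinkler0/td")).json();
      const thing = await WoT.consume(td);
      await thing.invokeAction("startSprinkling"); // glue logic goes here
    });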

We also have a lot of work done, especially by Hitachi, on the integration of Thing Descriptions with Node-RED, which is the visual language you can see over here on the top right, basically a boxes-and-arrows approach to programming IoT orchestration.

And I'll again, give some demonstrations of that.

And we're integrating both of these with discovery mechanisms.

And finally, we have some more work, another open source project, on a validation tool that allows you to take a Thing Description and check whether it's correct.

And that validation tool can be run either as a web service or simply as a Node.js script.
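
The core idea of that validation, sketched here with a generic JSON Schema validator rather than the actual tool's API (the file paths are hypothetical):

    const fs = require("fs");
    const Ajv = require("ajv");
    // The official TD JSON Schema is published alongside the TD specification;
    // the local file names here are placeholders.
    const tdSchema = JSON.parse(fs.readFileSync("td-json-schema-validation.json"));
    const td = JSON.parse(fs.readFileSync("my-thing.td.json")); // the TD to check
    const validate = new Ajv().compile(tdSchema);
    console.log(validate(td) ? "TD is valid" : validate.errors);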

Now, in the past we have run many in-person Plugfests, where we all brought our devices and tried to get them to all work together.

And at the last TPAC, you know, we brought many, many devices and had them interoperate.

And we also had a workshop in Munich last year.

However, given COVID, we of course haven't been able to meet in person.

So we have shifted the last two times to online Plugfests.

However, we have still accomplished quite a lot using remote access to devices.

So, I just wanna summarize.

On this slide, you actually see several things.

So up in the top left are two images from TPAC last year.

We're looking at some home automation applications.

On the bottom left, you see an example of Hybridcast from NHK, which I will talk about in a minute.

In the upper right, you will see an application from Oracle using digital twins, with remote access to a pump simulation.

And in the middle, you see a factory simulation that was put together by Siemens.

And there are many more things that have been done at these Plugfests.

Then I'm gonna talk about a few things we did in our very last Plugfest which was an online Plugfest held this October.

So just before TPAC.

So what we did here is: the University of Bologna and the Technical University of Munich have both been using WoT to teach IoT to students.

And they put together some environments for teaching.

So Bologna had a simulated farm that was used to simulate IoT devices and allow people to write scripts to control them.

And TUM had physical devices set up in their lab, and they made them available to the students for doing projects.

And during the online Plugfest, we connected these over the internet and built some mashups combining both of them.

And so here's a little video that kind of walks through both of these.

It walks through the farm simulation and shows a mashup with things from TUM.

So the farm simulation is quite interesting 'cause it combined several aspects that are relevant to many use cases.

It includes actuators, output devices.

It includes sensors, input devices.

It includes actions, it includes event notifications, and it includes geospatial information.

So all these things come together (coughs) in agriculture.

These same things of course also show up in other applications, such as smart city, smart home, vehicles and logistics, transportation, and so forth.

So this is of course not limited to agriculture, but this just happens to be a useful use case to explore and to teach people IoT.

Let me go ahead and show, play this video.

And this video is available online as well, there's a link.

Hello, welcome to this video.

I'm Cristiano and I will show you one of the (indistinct) of this October 2020 online Plugfest.

So in this Plugfest, we designed different tests and scenarios.

One of those was to synchronize (indistinct) an agriculture application built with (indistinct) in mind.

You can find more in the description below.

On the other hand, we have a Sense HAT device in the remote IoT lab of the TUM institute.

The Sense HAT is an add-on board for the Raspberry Pi, which features a set of sensors and an eight-by-eight multicolored LED matrix.

Today, we'll control this matrix for our application.

If you want to know more, use the link section as well.

Now, those devices were not meant to work together.

On the contrary, they were designed for different applications with different scopes.

Our purpose today is to create a mashup application that turns on a section of the Sense HAT display for each field sprinkler.

Well, let me explain.

We have four different sprinklers inside our farm.

Each can be active or not.

So we divide the display into four sections and color them with four different colors.

Then secondly, if a sprinkler turns on, we also light up the corresponding LED matrix section.

So, this is the application.

Let's have a look at the code.

So here we are.

We have a short script with just 30 lines of code.

Plus this little LedMatrix class, which is a utility class used to track the state of the LED matrix itself.

In fact, we have two methods over here.

One for setting the (indistinct) on and off, basically using the colors defined over here, and a utility toArray method, which basically turns the matrix into an array, because if you have a look at the Sense HAT's Thing Description, you can find that the pixel property is actually an array of exactly 64 elements.

And each element is an array of exactly three values that go from zero to 255, which is basically the range.

The RGB range.
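
Here's a sketch of the utility class being described, reconstructed from the narration (the class and method names are guesses, not the actual Plugfest code):

    // Tracks which of the four display sections are lit, and flattens that state
    // into the 64-element pixel array that the Sense HAT's TD expects,
    // where each pixel is [r, g, b] with values from 0 to 255.
    class LedMatrix {
      constructor() {
        this.sections = [false, false, false, false]; // one flag per sprinkler
        this.colors = [[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0]];
      }
      setSection(index, on) {
        this.sections[index] = on;
      }
      toArray() {
        const pixels = [];
        for (let row = 0; row < 8; row++) {
          for (let col = 0; col < 8; col++) {
            const s = (row < 4 ? 0 : 2) + (col < 4 ? 0 : 1); // quadrant index
            pixels.push(this.sections[s] ? this.colors[s] : [0, 0, 0]);
          }
        }
        return pixels; // exactly 64 entries of [r, g, b]
      }
    }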

Okay.

And now let's see the output.

Let's see if it's working basically.

So let's minimize this, and open the WoT farm (typing sounds) at the provided link.

While it's loading, I'll also open the live stream from the TUM lab.

Yes, there we are.

We are going to use this Raspberry Pi and its Sense HAT display.

The second one.

I hope you can see it.

And here, we have our four sprinklers.

One here, one here and one here.

Yeah, this is sprinkler zero, for example.

So, let's start.

I use this REST API tester to start the action, start (indistinct) sprinkler.

But you can use whatever client you want.

Even cURL is okay.

So, let's start sprinkler zero.

Okay, it started, and also the corresponding section of the LED matrix display is on.

Now let's also do sprinkler two.

Okay, it's on.

And also the LED Matrix is on.

Now one, (clears throat) and three.

So you can see the whole LED matrix is on, and every single sprinkler is on.

Okay.

And now let's stop each one of them.

Stop (indistinct) here. Stop, for example, sprinkler one, here.

It's down. Alright, sprinkler three.

It's also down. Then sprinkler zero, which is the upper right.

Yes.

And finally, which one was this?

Let's see, sprinkler two.

Sprinkler two.

Done.

So, that's all. I hope you enjoyed it.

See you in the next video.

Thanks.

Yeah, and just to point out some things here.

So students might of course do more sophisticated things.

They might create a web interface with a dashboard showing the locations of the sprinklers.

They might connect to web services.

There are sensors in the field that give moisture and temperature readings, and they might build applications that optimize the watering of plants.

They might connect to a web service giving them the weather, you know, and then plan out a watering schedule that minimizes the use of external water and doesn't over-water the plants, and so forth.

So this is actually a very interesting environment because it gives them all kinds of opportunities to, to experiment with applications using IoT, combined with web UIs and web services.

Now moving along, (clears throat) (indistinct) to go to the next slide.

Right.

So let me talk about the Node-RED integration we worked on.

So, not everyone wants to write scripts.

There are lots of applications where the users do not know programming.

And so in particular, in agriculture, maybe it's a farmer who wants to set up their farm, and maybe they're not a programmer, which is quite likely.

So in that case, maybe they wanna use something else, like a visual language, and Node-RED is just such an application.

However, you know, we don't even want people to have to deal with Thing Descriptions or JSON, or any of that stuff.

And so, how can people build applications at a very high level without having to get down into the weeds of looking at JSON or writing scripts?

So Hitachi worked on an auto-population system.

And the basic idea here is that we can use discovery to find devices.

We can automatically retrieve their Thing Descriptions.

We can then populate a catalog and allow Node-RED, through its graphical user interface, to install nodes representing those devices in a Node-RED graph.

So we've got a video for that as well.

Now I'm going to play this video.

And you'll note that it goes very quickly: we can build an application without ever looking at a Thing Description or writing any code.

Is it playing?

So what they're doing is clicking the refresh button.

That triggers discovery.

So now they've found a bunch of devices.

They're gonna select a device, which is a Hybridcast emulator actually, and they're going to install it.

That is going to pull down the TD and generate a Node-RED node and add it to the palette.

So when they close this, it will be added to the palette, as you can see here.

And now there's a new block over there that they can drag and drop into a graph.

They can wire it up with existing devices or other devices.

So lots of different devices could exist here, as well as things like functions and other kinds of nodes you could add.

And there might also be descriptive information from the Thing Description showing up here, explaining how to use it, and boom!

you're done.

So without writing any code, right, just dragging and dropping and clicking things in the GUI.

And you can set up an orchestration for IoT devices.

And in fact you can connect any device described by a Thing Description.

Okay?

So I think this is, you know, extremely valuable, and we have shown this to people in retail as well, and they are very excited about the possibility of this being used by, for example, a store manager in a store.

Okay.

So that's Node-RED, and to be clear, you know, this also actually has accessibility applications because you could, for example, control your television channel using some IoT device to select the channel.

And by wiring (indistinct) like this, you can connect an arbitrary input device to the TV set to control it.

Which brings us to this application.

So NHK did some work (clears throat) where they are broadcasting data along with their HDTV programs.

And actually in that picture there, you can see an example of some data coming through from a program.

And this data can be used in various ways.

You might, for example, broadcast the desired temperature or color of the room or the loudness of the speakers.

And, and then you could use that to connect to other IoT devices.

You could also use Hybridcast information to select channels and also to launch applications which do various things.

And so during the Plugfest, beyond the DNS-based discovery we've already seen, we wanted to show a couple more scenarios using Hybridcast and television control, media control.

So scenario one is simply controlling the channel on the television set and looking through the stations of the TV.

This is really simple, but it gets very interesting when you consider that you may have a disabled person needing to have an alternative input device to control the television set.

And so this makes it very easy to build arbitrary devices to enhance accessibility.

Scenario two is that events broadcast by the television can be used to trigger events in IoT devices.

Now, the scenario given is very simple, but you can imagine this being used for things like typhoon alerts or earthquake alerts.

You could broadcast information about that and you could trigger an IoT device to indicate to somebody that there's an incoming earthquake or whatever.

So that kind of thing can also be supported.

So here's the first scenario just controlling a channel.

So first of all, you can get the channels that are available from the receiver.

You can get the current status of the TV, you know: is it on?

Is it using Hybridcast?

And then you can send a command to the television to select a channel.

You could also select instead a Hybridcast application.

So for example, you might launch a weather app or a news app.

These controls may also be driven by input and output devices.

So maybe you launch the weather app when someone enters the room, right?

With an infrared sensor, say. Or maybe if there's new news, you could blink an LED with another output device.
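
In Scripting API terms, scenario one boils down to something like this (a sketch; "tv" stands for a consumed Thing for the receiver, the property and action names are illustrative, and newer node-wot versions wrap read values in an InteractionOutput):

    // Read the available channels and the receiver status, then switch channel.
    const channels = await tv.readProperty("channelList");
    const status = await tv.readProperty("status"); // on? running Hybridcast?
    await tv.invokeAction("selectChannel", { id: channels[0].id });
    // Or launch a Hybridcast application instead:
    await tv.invokeAction("launchApplication", { name: "weather" });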

Okay.

Scenario two is, we're gonna control some other devices.

So here, you know, we're listening for events, and this would listen for a particular event.

In this case, we're watching, you know, a cooking show or, you know, maybe it's Coffee with Susan.

And so, when we tune into the show, it's gonna trigger our coffee machine to make a latte.

So we can all have coffee together.

And so this is broadcasting, you know, "make some coffee", and maybe even broadcasting information about a particular recipe for the coffee, and then triggering a coffee machine to brew some coffee.
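
As a sketch of that event-driven side (again with hypothetical affordance names; "tv" and "coffeeMachine" are consumed Things as above):

    // Listen for broadcast events from the Hybridcast receiver and
    // trigger the coffee machine when the right one arrives.
    tv.subscribeEvent("broadcastEvent", async (data) => {
      const event = await data.value(); // payload described by the TD's data schema
      if (event.type === "makeCoffee") {
        await coffeeMachine.invokeAction("brew", { recipe: event.recipe });
      }
    });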

And in the actual mashup, since this was an online Plugfest, it was actually a Hybridcast receiver in Japan triggering the coffee being made in Germany.

But of course, in other cases it could be in the same country.

There's also a Hitachi LED which is used to indicate that, you know, there's an activity going on.

So of course, in this situation we just plugged together what we had.

But you can imagine other use cases where you plugged together various kinds of devices to do various things triggered by events in broadcast media.

Okay.

And this is just a little video showing that in action; this is actually showing you the channel-select and app-select application.

There's no audio, so I'll just speak over it.

But basically you see the television program down here and you see this stuff.

Now, they're just clicking on these buttons to trigger the various actions, but in practice you would probably have other IoT devices providing an input control.

Maybe for a disabled person, you might have a head switch and a selector that trigger these various events.

And again, the advantage of using this for accessibility is that you can customize it to the person's needs, very easily by dragging and dropping.

So a consultant could, within a few minutes, configure an input device for that particular person.

Okay.

So the final thing I wanna talk about is geolocation.

Now, you know, we really need to work hard to coordinate our activities around geolocation with other groups doing geolocation.

And that includes groups like the Open Geospatial Consortium and the Spatial Data on the Web group.

And also we have a geolocation API in the browser that we'd like to make consistent with what we do in IoT for geolocation.

So we are planning, probably mid-November, to meet up again with the Spatial Data on the Web working group and sort out what we have to do here.

However, during the Plugfest, we did prototype various approaches.

And as you can see here, we built a mashup that integrated geolocation information with mapping applications.

And as you can imagine, for example in agriculture, I might want to be able to map the locations of my sprinklers and have a map dashboard for control.

In logistics, I might have other geolocation data that's actually changing over time.

As, for example, trucks make deliveries and so forth.

And so in that case, I actually need dynamic geolocation data.

In fact, one of our challenges is that geolocation data could either be metadata in a TD, or it could be data delivered by a device dynamically.

And so we have to figure out how to combine those two approaches and make them both work in a consistent way.
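
For illustration, here's a sketch of both forms in a single TD, using hypothetical vocabulary (settling the actual terms is precisely the coordination work just mentioned): static geolocation as top-level metadata, and dynamic geolocation as a readable property.

    {
      "@context": ["https://www.w3.org/2019/wot/td/v1",
                   { "geo": "http://www.w3.org/2003/01/geo/wgs84_pos#" }],
      "title": "exampleThing",
      "geo:lat": 44.494,
      "geo:long": 11.342,
      "properties": {
        "location": {
          "type": "object",
          "properties": {
            "latitude": { "type": "number" },
            "longitude": { "type": "number" }
          },
          "forms": [{ "href": "https://fleet.example/truck1/location" }]
        }
      }
    }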

So we're very happy to collaborate on this, and we wanna find something that doesn't reinvent the wheel, but builds on existing standards in a good way.

Okay, so just to wrap up and summarize: during our Plugfest, we worked on NHK Hybridcast integration, including a mashup with devices (indistinct) from NHK, from Germany in fact. (clears throat) Hitachi did some great work on Node-RED integration with our discovery work.

And they also did that with NHK.

They did an NHK mashup too.

We also did mashups between the University of Bologna and TUM on the smart farm, which is also gonna be a basis or a foundation for lots of great tutorial and experimental work, for teaching and training in IoT.

We also did some work on geolocation, and how to embed geolocation information into a TD and visualize it on a map.

So, now I wanna say as well: besides the Plugfest, we also recently prepared the files for the first public working drafts of our new documents on profiles and discovery, and for updated documents for the TD and architecture.

Those are currently pending and should come out next week.

We also updated the note for our scripting API, an informative document.

In TD 1.1, there are some significant updates.

Now, our plan is a backward-compatible update for the TD; the current spec is 1.0, and then we're gonna release 1.1.

The TD 1.1 includes some enhancements to security.

In particular, we added two OAuth flows, client and device, both of which are very useful for IoT, because the client flow does not require a human to participate, which is very common in fully automated situations.

And the device flow supports things like device pairing, for when an IoT device has no display and no user agent.

So both of those are very common IoT scenarios.

We also added a combo security scheme to handle more complicated security configurations.

So in particular, the combo scheme allows AND and OR combinations of other security schemes.

So for example, suppose you have a proxy and the proxy can use either basic or digest authentication.

And then the proxy goes to an endpoint and the endpoint uses OAuth and allows either client or device authentication.

Okay?

So you actually need to express (digest or basic) and (client or device), right?

With parentheses inserted as appropriate.

So the combo scheme allows you to express that.
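
Sketched in TD 1.1 terms, that proxy-plus-endpoint example could look something like this (the scheme names and endpoints are illustrative, and some required OAuth fields are omitted for brevity):

    "securityDefinitions": {
      "basic_sc": { "scheme": "basic" },
      "digest_sc": { "scheme": "digest" },
      "oauth_client": { "scheme": "oauth2", "flow": "client",
                        "token": "https://auth.example/token" },
      "oauth_device": { "scheme": "oauth2", "flow": "device",
                        "token": "https://auth.example/token" },
      "proxy_sc": { "scheme": "combo", "oneOf": ["basic_sc", "digest_sc"] },
      "endpoint_sc": { "scheme": "combo", "oneOf": ["oauth_client", "oauth_device"] },
      "combined_sc": { "scheme": "combo", "allOf": ["proxy_sc", "endpoint_sc"] }
    },
    "security": "combined_sc"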

The other thing we did is Thing Models.

So right now Thing Descriptions describe a particular device.

A particular instance of a device.

Very often, devices are part of a class of devices, all, say, manufactured as the same model.

And so a manufacturer might want to define a Thing Model, which describes the entire class.

And a Thing Model is basically a Thing Description, but omitting certain information, like the identifier of the particular device or the base URL, which are only established when the device is manufactured or installed.
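
Here's a minimal sketch of a Thing Model (the affordance names are illustrative): note there is no "id", no "base", and no forms with concrete URLs, since those are filled in per instance.

    {
      "@context": ["https://www.w3.org/2019/wot/td/v1"],
      "@type": "tm:ThingModel",
      "title": "Smart Sprinkler Model",
      "properties": {
        "status": { "type": "boolean" }
      },
      "actions": {
        "startSprinkling": {}
      }
    }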

So Thing Models have two big use cases.

One is as a developer, I need to know how to write, say an orchestration script for a class of devices.

And then when that script is installed, then it discovers things that belong to that class and then can connect to them.

So I need the information in the model to know how to write my script: what interactions are available in that class of devices.

The other application is I have a digital twin.

I want to read information from a certain class of devices.

So data is coming in that I'm going to ingest into a database.

So I might've set up my database in a certain way.

I need to know how the data will come in and what it will look like.

And so again, the Thing Model provides data schemas that can tell us what that data will look like.

So this is closely related to work on digital twins.

And Thing Models are intended to support that.

Now our next steps, and this is leading into discussion.

We have a lot of work going on right now on use case collection.

So the wot-usecases repo has a ton of documents there where we've been collecting use cases.

I should also say one of our major use cases is the Smart City.

And there's a breakout on Thursday to discuss spinning up a possible new IG just for smart cities.

Which will include, but is not limited to, IoT impacts in the smart city.

We also, as I mentioned, need to coordinate with Spatial Data on the Web around geolocation standards.

Scripting is basically for doing orchestration.

Then the question arises, where does the script run?

Running in the browser really isn't appropriate because when you close the browser, then your orchestration would stop.

And what if it's running your security system for your house, right?

You can't have it stopped when your browser closes.

So what you actually wanna do is install the script onto an edge computer that is always on.

And so we're looking at ways to maybe enable that in collaboration with the Web and Networks group.

So really, one of the places where we might wanna use a WoT Scripting API is in an edge computer that is enabling some kind of ambient activity and is providing a new IoT service that is kind of always on.

For discovery, we've been spending a lot of time thinking about how to do this while maintaining privacy, but we need to do a lot of work to get feedback and input on that, to make sure that we're doing the right thing.

So we certainly encourage people to take a look at that and give us feedback on whether we're aimed in the right direction with regards to privacy.

And finally, profiles: you know, we're building constrained TDs. Are we adding the right constraints? Are we keeping TDs under profiles broad enough to still handle all the important use cases?

So again, we need feedback on, on those profiles to make sure that they handle everything they need to handle while still being constrained enough to be implemented.

Okay, so with that, I'm going to stop.

And I think we probably wanna pause the recording and then start answering questions and having a discussion.

