W3C Workshop on Web and Machine Learning

Extending W3C ML Work to Embedded Systems - by Peter Hoddie (Moddable Tech)





Hello, my name is Peter Hoddie.

I'm not here to talk about Machine Learning.

I don't know much about it.

I'm certainly not an expert.

I am here to talk about JavaScript, specifically JavaScript beyond the web platform.

I have some experience there, having delivered my first embedded consumer product powered by JavaScript over a dozen years ago.

I am co-founder of Moddable Tech, creators of XS, the only modern JavaScript engine for resource constrained devices.

I am also chair of Ecma TC53, the ECMAScript Modules for Embedded Systems standards committee.

TC53 is defining standard JavaScript APIs for low level device operations -- digital, serial, network sockets.

From there, we are building up to sensors, displays, and more.

Our work allows devices to boot to JavaScript, putting scripts in complete control of the device.

This workshop is about Machine Learning APIs in JavaScript in the browser.

I'm here to explore how those APIs might be extended to low cost, resource constrained embedded devices.

That would be a big win because the web is just part of the world of digital devices.

If developers could share their Machine Learning knowledge, experience, and even code across more devices, that would only accelerate the availability of products that benefit users.

Before going further I should explain what I mean by a low cost, resource constrained embedded device.

You might have in mind a Raspberry Pi, an inexpensive Linux computer used in some embedded systems.

I have something much more constrained in mind, something that can't run Linux or Node.

My favorite example is the ESP8266 from Espressif, a $1 module that includes a CPU, some RAM, Wi-Fi, and flash storage.

Moddable runs modern JavaScript -- the ECMAScript 2020 standard -- on these.

For about another dollar, you can get the ESP32 with four times more memory, two CPU cores that run three times as fast, and Bluetooth LE.

While these devices may not be capable of much Machine Learning, take a look at their big brother.

The i.MX 8M Plus from NXP has a hardware Neural Processing Unit (NPU) that runs up to 2.25 TOPS.

The goal is to move more Machine Learning processing to the edge.

While the i.MX 8M Plus is a relatively high-end embedded processor, NXP has stated their intention to bring similar capabilities to lower cost components.

Other silicon manufacturers are adding hardware acceleration for Machine Learning to their product lines.

We can safely assume there will be Machine Learning hardware accelerators in the embedded silicon that powers IoT products.

But, what APIs will developers use to access it?

If history is any guide, each silicon manufacturer will have their own API.

Determined developers will use these vendor-specific native APIs directly, sacrificing portability.

Some de-facto APIs may emerge, perhaps commercially or through open source, that some silicon manufacturers will grudgingly support.

What about JavaScript?

While these devices can run JavaScript, they often cannot run the JavaScript APIs used on the web because these devices typically have around 1% of the memory and CPU power of a web host.

Web APIs are designed to be powerful, complete, and convenient.

That's great for computers and mobile devices, but doesn't migrate to embedded.

Two patterns and one anti-pattern have emerged from my work with JavaScript on embedded systems that may be relevant for bridging Machine Learning JavaScript APIs between the web and embedded systems.

In some cases, it is practical to use the same JavaScript API on both web and embedded devices.

The W3C Sensor APIs are a good example.

They give access to sensors in a phone such as the accelerometer and light meter.

The APIs are very simple, making it practical to implement them on resource constrained embedded systems.

In fact, it turns out the W3C Sensor APIs are too simple for embedded uses.

They do not provide the ability to configure the sensors, manage energy use, or access manufacturer specific capabilities that many embedded products require.

So, TC53 designed a lower level sensor driver that provides all the needed features.

We intentionally designed the TC53 sensor driver so it would be straightforward to implement the W3C Sensor API using TC53 sensor drivers.

For example, no mapping of sensor data values is required -- the TC53 sensor drivers normatively adopt the W3C Sensor values.
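The layering described above can be sketched in a few lines. This is not the actual ECMA-419 driver API -- the `FakeDriver` class is a simplified stand-in for a TC53-style sensor driver, reduced to a `sample()` method returning `{x, y, z}` in the same m/s² units the W3C specification uses, which is what makes the W3C-style wrapper so thin.

```javascript
// Sketch: a W3C-style Accelerometer built on a TC53-style sensor driver.
// FakeDriver is a hypothetical stand-in for an ECMA-419 driver; because
// it reports values in the W3C units, no mapping is needed in the wrapper.

class FakeDriver {
  sample() { return { x: 0, y: 0, z: 9.81 }; }   // m/s², as in the W3C spec
}

class Accelerometer extends EventTarget {
  #driver; #timer; x; y; z;
  constructor({ frequency = 10, driver = new FakeDriver() } = {}) {
    super();
    this.#driver = driver;
    this.frequency = frequency;
  }
  start() {
    // Poll the driver and surface each sample as a W3C-style "reading" event
    this.#timer = setInterval(() => {
      const { x, y, z } = this.#driver.sample();
      Object.assign(this, { x, y, z });
      this.dispatchEvent(new Event("reading"));
    }, 1000 / this.frequency);
  }
  stop() { clearInterval(this.#timer); }
}

// Usage mirrors the W3C Generic Sensor API shape:
const sensor = new Accelerometer({ frequency: 50 });
sensor.addEventListener("reading", () => {
  sensor.stop();   // one sample is enough for this demo
});
sensor.start();
```

A real implementation would also handle the driver's configuration and energy-management options, which is exactly the capability the TC53 layer adds beneath the W3C surface.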

In other cases, the JavaScript API from the web is impractical on an embedded device.

The Serial API in Chrome is a good example.

Nearly every embedded device has a serial connection, and there's nothing fundamentally different about a serial connection between a computer and an embedded device.

Unfortunately, because Chrome's Serial API is designed to be convenient to use on the web platform, it makes extensive use of asynchronous promises and powerful (but heavy) streams.

TC53's Serial API provides similar capabilities through a much smaller and lighter API.

To avoid unnecessary differences, TC53 adopted the naming conventions of Chrome where practical.

What's particularly interesting is that we found it was possible to implement the TC53 Serial API using the Chrome Serial API -- effectively emulating the embedded JavaScript API in the browser.

This might feel a bit upside-down, but it provides a single Serial API for both the web and embedded devices, allowing increased code sharing.
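The emulation pattern can be sketched as a thin wrapper that presents a TC53-style, callback-and-buffer surface on top of a stream-based port. The `Serial` class here is a simplified approximation of the ECMA-419 shape, not the standardized API; the loopback "port" built from a `TransformStream` stands in for the `readable`/`writable` pair that `navigator.serial.requestPort()` would provide in Chrome.

```javascript
// Sketch: emulating a TC53-style Serial API on a Web-Serial-shaped port.
// The embedded side expects an onReadable callback plus read()/write();
// the web side provides WHATWG streams. The wrapper pumps the stream and
// buffers chunks, surfacing them through the embedded-style callback.

class Serial {
  #writer; #pending = [];
  constructor({ port, onReadable }) {
    this.#writer = port.writable.getWriter();
    const reader = port.readable.getReader();
    (async () => {                       // pump the stream in the background
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        this.#pending.push(value);
        onReadable?.call(this, value.byteLength);
      }
    })();
  }
  read()      { return this.#pending.shift(); }   // next buffered chunk
  write(data) { return this.#writer.write(data); }
}

// Demo: a loopback port built from an identity TransformStream stands in
// for what navigator.serial.requestPort() would return in the browser.
const loop = new TransformStream();
const port = { readable: loop.readable, writable: loop.writable };

const chunks = [];
const serial = new Serial({
  port,
  onReadable() { chunks.push(serial.read()); }
});
await serial.write(Uint8Array.of(72, 105));      // send "Hi"
await new Promise(resolve => setTimeout(resolve, 20));
```

Because the wrapper is small, the same application code can run unchanged against a native TC53 Serial implementation on the device and against this shim in the browser.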

One word of warning.

I would caution against the anti-pattern of creating a “light” version of any web API to use on embedded.

“Light” versions are subsets of a full API.

They are almost always painful to use for a couple of reasons.

First, developers expect the full API, and are annoyed as they discover the differences.

Second, because the “Light” API looks about the same, developers try to use it to perform the same operations as the full API, but on less powerful devices.

The results are always disappointing.

It is better to have a dedicated embedded API and a dedicated web API that share concepts and operations to the extent practical.

That gives authors of libraries and frameworks a foundation to build APIs that support both for specific domains, or to emulate the embedded API on the web, as we've done with Serial.

To close, I hope this W3C initiative will consider including low-cost, resource constrained devices in its scope of work in some way.

My impression is that it may be feasible.

If achieved, I have no doubt that it would be valuable to the overall ecosystem by expanding the availability of well designed APIs for working with Machine Learning computing resources.

Thank you to the W3C for this opportunity to share my experience and perspective.



Thanks to Futurice for sponsoring the workshop!


Video hosted by WebCastor on their StreamFizz platform.