Hello, my name is Peter Hoddie.
I'm not here to talk about Machine Learning.
I don't know much about it.
I'm certainly not an expert.
I am also chair of Ecma TC53, the ECMAScript Modules for Embedded Systems standards committee.
TC53 defines standard JavaScript APIs for embedded hardware, beginning with low-level I/O; from there, we are building up to sensors, displays, and more.
I'm here to explore how the web's Machine Learning APIs might be extended to low-cost, resource-constrained embedded devices.
That would be a big win because the web is just part of the world of digital devices.
If developers could share their Machine Learning knowledge, experience, and even code across more devices, that could only accelerate the availability of products that benefit users.
Before going further, I should explain what I mean by a low-cost, resource-constrained embedded device.
You might have in mind a Raspberry Pi, an inexpensive Linux computer used in some embedded systems.
I have something much more constrained in mind, something that can't run Linux or Node.
My favorite example is the ESP8266 from Espressif, a $1 module that includes a CPU, some RAM, Wi-Fi, and flash storage.
For about another dollar, you can get the ESP32, with four times the memory, two CPU cores that run three times as fast, and Bluetooth LE.
While these devices may not be capable of much Machine Learning, take a look at their big brother.
The i.MX 8M Plus from NXP has a hardware Neural Processing Unit (NPU) that runs up to 2.25 TOPS.
The goal is to move more Machine Learning processing to the edge.
While the i.MX 8M Plus is a relatively high-end embedded processor, NXP has stated their intention to bring similar capabilities to lower cost components.
Other silicon manufacturers are adding hardware acceleration for Machine Learning to their product lines.
We can safely assume there will be Machine Learning hardware accelerators in the embedded silicon that powers IoT products.
But, what APIs will developers use to access it?
If history is any guide, each silicon manufacturer will have their own API.
Determined developers will use these vendor-specific native APIs directly, sacrificing portability.
Some de facto APIs may emerge, perhaps commercially or through open source, that some silicon manufacturers will grudgingly support.
Web APIs are designed to be powerful, complete, and convenient.
That's great for computers and mobile devices, but it doesn't translate well to embedded systems.
The W3C Sensor APIs are a good example.
They give access to sensors in a phone such as the accelerometer and light meter.
The APIs are very simple, making it practical to implement them on resource constrained embedded systems.
In fact, for embedded uses it turns out the W3C Sensor APIs are too simple.
They do not provide the ability to configure the sensors, manage energy use, or access manufacturer specific capabilities that many embedded products require.
So, TC53 designed a lower level sensor driver that provides all the needed features.
We intentionally designed the TC53 sensor driver so it would be straightforward to implement the W3C Sensor API using TC53 sensor drivers.
For example, no mapping of sensor data values is required -- the TC53 sensor drivers normatively adopt the W3C Sensor values.
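To make the layering concrete, here is a hedged sketch of the idea: a driver following the TC53 sensor class pattern (constructor, configure, sample, close), with a minimal W3C Generic Sensor-style wrapper built on top of it. The class names, the `rate` option, and the simulated readings are illustrative assumptions, not the normative TC53 or W3C definitions; a real driver would read from hardware over a bus such as I²C.

```javascript
// Hedged sketch of a TC53-style sensor driver. The readings are
// simulated so the example runs anywhere; a real driver would talk
// to an accelerometer part over I²C or SPI.
class Accelerometer {
  #rate;
  constructor(options = {}) {
    this.#rate = options.rate ?? 100;  // sample rate in Hz (hypothetical option)
  }
  configure(options = {}) {
    if (options.rate !== undefined)
      this.#rate = options.rate;       // reconfigure without reopening the driver
  }
  sample() {
    // TC53 drivers adopt W3C Sensor values directly:
    // acceleration in metres per second squared, no mapping needed.
    return { x: 0, y: 0, z: 9.8 };
  }
  close() {}
}

// A minimal W3C Generic Sensor-style wrapper over the driver above.
class W3CAccelerometer {
  #driver;
  onreading = null;
  constructor(options = {}) {
    this.#driver = new Accelerometer({ rate: options.frequency ?? 60 });
  }
  start() {
    const reading = this.#driver.sample();  // values pass through unchanged
    this.x = reading.x;
    this.y = reading.y;
    this.z = reading.z;
    this.onreading?.();
  }
  stop() {
    this.#driver.close();
  }
}

const sensor = new W3CAccelerometer({ frequency: 10 });
sensor.onreading = () => console.log(sensor.x, sensor.y, sensor.z);
sensor.start();
```

Because the driver already uses W3C units, the wrapper is little more than plumbing -- which is the point of aligning the two layers.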
The Serial API in Chrome is another good example.
Nearly every embedded device has a serial connection, and there's nothing fundamentally different about a serial connection between a computer and an embedded device.
Unfortunately, because Chrome's Serial API is designed to be convenient to use on the web platform, it makes extensive use of promises and powerful (but heavy) streams.
TC53's Serial API provides similar capabilities through a much smaller and lighter API.
To avoid unnecessary differences, TC53 adopted the naming conventions of Chrome where practical.
This might feel a bit upside-down, but it provides a single Serial API for both the web and embedded devices, allowing increased code sharing.
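The contrast can be sketched as follows. The Chrome-style usage shown in the comment is the familiar stream-and-promise shape; the class below is a hedged, illustrative stand-in for a TC53-style serial driver using plain callbacks and synchronous read/write. The option names and the loopback behavior are assumptions for this sketch, not the normative API; a real driver would move bytes through a UART.

```javascript
// The Chrome Web Serial API is stream- and promise-based:
//
//   const port = await navigator.serial.requestPort();
//   await port.open({ baudRate: 115200 });
//   const writer = port.writable.getWriter();
//   await writer.write(new Uint8Array([1, 2, 3]));
//
// A TC53-style driver offers similar capability with callbacks and
// synchronous read/write, avoiding the weight of streams. This mock
// loops written bytes back so the example runs anywhere.
class Serial {
  #buffer = [];
  #onReadable;
  constructor(options = {}) {
    this.baud = options.baud ?? 115200;     // same naming idea as Chrome's baudRate
    this.#onReadable = options.onReadable;  // callback instead of a ReadableStream
  }
  write(bytes) {
    this.#buffer.push(...bytes);            // loopback: echo written bytes back
    this.#onReadable?.(this.#buffer.length);
  }
  read() {
    const bytes = Uint8Array.from(this.#buffer);
    this.#buffer.length = 0;                // drain the receive buffer
    return bytes;
  }
  close() {}
}

const port = new Serial({
  baud: 115200,
  onReadable: (count) => console.log(`${count} bytes available`),
});
port.write([0x68, 0x69]);
console.log(port.read());  // the two bytes written, echoed by the loopback
```

Keeping the names aligned where practical is what lets one Serial API concept serve both environments.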
One word of warning.
I would caution against the anti-pattern of creating a “light” version of any web API to use on embedded.
“Light” versions are subsets of a full API.
They are tempting because they promise familiarity at a lower cost.
They are almost always painful to use, for a couple of reasons.
First, developers expect the full API, and are annoyed as they discover the differences.
Second, because the “light” API looks about the same, developers try to use it to perform the same operations as the full API, but on less powerful devices.
The results are always disappointing.
It is better to have a dedicated embedded API and a dedicated web API that share concepts and operations to the extent practical.
That gives authors of libraries and frameworks a foundation to build APIs that support both environments for specific domains, or to emulate the embedded API on the web, as we've done with Serial.
To close, I hope this W3C initiative will consider including low-cost, resource-constrained devices in its scope of work in some way.
My impression is that it may be feasible.
If achieved, I have no doubt that it would be valuable to the overall ecosystem by expanding the availability of well-designed APIs for working with Machine Learning computing resources.
Thank you to the W3C for this opportunity to share my experience and perspective.