TinyML: The challenges and opportunities of low-power ML applications

In this edition of the Radar column, we look at what’s possible when ML apps can work with minimal or inconsistent power supplies.

By Mike Loukides
October 1, 2019

Pete Warden has an ambitious goal: he wants to build machine learning (ML) applications that can run on a microcontroller for a year using only a hearing aid battery for power. This goal means that the system’s power consumption has to be under a milliwatt, ideally a few tens of microwatts. This power level places severe constraints on processor complexity, speed, and memory size. Equally important, it places constraints on communication, which typically requires more power than computation, even with technologies like Bluetooth Low Energy (BLE).
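
To see why, a back-of-the-envelope calculation helps (assuming a typical size-312 zinc-air hearing aid cell, roughly 180 mAh at 1.4 V):

$$I_{\text{avg}} \approx \frac{180\ \text{mAh}}{365 \times 24\ \text{h}} \approx 20\ \mu\text{A}, \qquad P_{\text{avg}} \approx 1.4\ \text{V} \times 20\ \mu\text{A} \approx 29\ \mu\text{W}$$

Everything (processor, sensors, and especially the radio) has to fit inside that average budget of a few tens of microwatts.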

If a low-power device needs to communicate with the outside world, it can’t be “chatty”; it needs to turn off the radio, turning it on only to transmit short bursts of data. Turning off the radio inverts our models for machine learning on small devices. On a phone, we typically gather data (say, an audio stream) and send it to a server for processing. That’s just too power-hungry for TinyML; we can’t do machine learning by sending data to big iron in the cloud. Any significant processing (for example, listening for a “wake word” like “OK Google”) needs to be done locally, on a small, slow processor with limited memory. The model that detects the phrase can be trained elsewhere, but the model itself has to run on the phone.
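
In firmware, that duty cycle is a simple loop. Here is a minimal sketch; every helper below (read_audio_window, detect_wake_word, radio_send, deep_sleep_ms) is a hypothetical stand-in, stubbed out so the sketch compiles, rather than a real driver or model:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical hooks, stubbed out here. Real firmware would replace
// them with microphone, model, radio, and power-management drivers.
static void read_audio_window(int16_t* samples, int n) {
  for (int i = 0; i < n; ++i) samples[i] = 0;  // stub: silence
}
static bool detect_wake_word(const int16_t*, int) {
  return false;  // stub: a real on-device model would score the audio
}
static void radio_send(const uint8_t* payload, int len) {
  (void)payload;
  std::printf("sent %d byte(s)\n", len);  // stub: pretend to transmit
}
static void deep_sleep_ms(int) {}  // stub: real code would halt the MCU

int main() {
  static int16_t samples[16000];  // one second of 16 kHz audio
  for (;;) {
    // The radio stays off; only the microphone and the local model run.
    read_audio_window(samples, 16000);
    if (detect_wake_word(samples, 16000)) {
      // Turn the radio on just long enough to send a short burst.
      const uint8_t event[] = {0x01};
      radio_send(event, sizeof(event));
    }
    deep_sleep_ms(100);  // spend most of each cycle asleep
  }
}
```

The point is the shape of the loop: sample, infer locally, and only power the radio when there’s something worth saying.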

We’ve made a lot of progress at running ML models on small processors. TensorFlow has run on the Raspberry Pi for some time, though the Raspberry Pi isn’t really small; it’s easy to use a recent Pi as a personal computer (and it can be used for training). TensorFlow Lite can be used to run models on Android and iOS phones, though, like the Raspberry Pi, it’s hard to consider a modern smartphone a “small” processor.

TensorFlow Lite for Microcontrollers (a port of TensorFlow Lite) takes “small” a big step further. This version can run on a Cortex M3 processor, occupying only 16KB of RAM for the core (yes, that’s K, not M), and a total of 22KB for a system capable of detecting keywords in speech. If you want to play with this system, the Sparkfun Edge Development Board is available for $15. This board is fairly typical for a microcontroller; if anything, with 384KB of RAM and 1MB of Flash memory, it’s generous. And it draws 1.6 mA, allowing it to run for about 10 days on a common coin battery. That’s not yet where we want to be, but it’s getting close.
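
For a feel of the programming model, here’s a condensed sketch of an inference pass with TensorFlow Lite for Microcontrollers, loosely patterned on its keyword-spotting (“micro_speech”) example. Header paths and class names have shifted between releases, and g_model here is a placeholder for a model compiled in as a byte array:

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// The trained keyword-spotting model, compiled in as a byte array.
extern const unsigned char g_model[];

// All of the interpreter's working memory comes from this fixed arena,
// in the spirit of the ~22KB footprint described above.
constexpr int kTensorArenaSize = 22 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

int main() {
  static tflite::MicroErrorReporter error_reporter;
  const tflite::Model* model = tflite::GetModel(g_model);

  static tflite::AllOpsResolver resolver;
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  interpreter.AllocateTensors();

  // Fill the input tensor with audio features, run the model, and read
  // per-keyword scores from the output tensor.
  TfLiteTensor* input = interpreter.input(0);
  // ... copy spectrogram features into input->data ...
  interpreter.Invoke();
  TfLiteTensor* output = interpreter.output(0);
  // ... compare output->data scores against a detection threshold ...
  return 0;
}
```

Notice that there is no dynamic allocation: all tensor memory lives in a statically sized arena declared up front, which is what makes footprints like 22KB predictable.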

That we can make machine learning work on a controller this small is surprising. But what applications can we build, aside from listening for a wake word on a phone? And why do we want to build them?

TinyML can be used anywhere it’s difficult to supply power. “Difficult” doesn’t just mean that power is unavailable; it might mean that supplying power is inconvenient. Think about a factory floor, with hundreds of machines. And think about using thousands of sensors to monitor those machines for problems (vibration, temperature, etc.) and order maintenance before a machine breaks down. You don’t want to string the factory with wires and cables to supply power to all the monitors; that would be a hazard all its own. Ideally, you would like intelligent sensors that can send wireless notifications only when needed; they might be powered by a battery, or even by generating electricity from vibration. The smart sensor might be as simple as a sticker with an embedded processor and a tiny battery. We’re at the point where we can start building that.

Think about medical equipment. Several years ago, a friend of mine built custom equipment for medical research labs. Many of his devices couldn’t tolerate the noise created by a traditional power supply and had to be battery powered. I’m sure the successors to those devices are now incorporating machine learning, and I’m sure that massive battery packs are no more welcome now than they were then.

I have talked to researchers who are building devices to count bird populations on uninhabited islands. (Recognizing birds by bird call is an extremely difficult problem.) I’ve seen displays in city halls that show every unoccupied parking space. There are startups working on garbage collection: rather than send a truck around once a week, they collect your garbage only when the can is full. Farmers use sensors to report soil moisture. All of these applications require machine learning and sensors in places where it’s inconvenient or impossible to supply commercial power. Solar power is available, but solar is intermittent, and places you on a limited power budget.

What could we do if all the devices in our houses were voice controlled? It’s easy to think smart multicolored light bulbs controlled by an app on your phone are silly (I do), but we’ve all needed to turn on a light when our hands were full. We’ve all needed to turn off the stove when our hands were covered in goo. TinyML, with language models capable of recognizing more than a single word, would be ideal for these applications.

Think of all the processors embedded in any car: processors for controlling the brakes, managing the engine, detecting tire pressure, running diagnostics, and many more. These processors have to operate across a very wide range of temperatures, tolerate vibration, and, in some cases, tolerate moisture. A high-end Intel or AMD processor, like you’d find in a laptop, can’t work under those conditions. The constraints differ from TinyML’s, but they point to the same solution: slow, reliable processors with limited memory and as small a footprint as possible. As these processors incorporate machine learning, the answer will be (and already is) TinyML.

TinyML requires us to address problems that haven’t been solved at any scale. Designers of small, embedded devices have to be aware of the “creep factor”: they have to make it clear what data is being used, how it’s used, and how it’s communicated. One solution might be not to send data at all: to build devices that are never networked, with software that is pre-installed and never updated. A device that’s not on a network can’t become a victim of a hostile attack, nor can it violate the user’s privacy. No device can be more private than one that is never online, though such devices can never be updated with improved, more accurate models. Another challenge will be creating mechanisms for manufacturing and recycling these devices; it’s too easy to imagine millions of tiny devices with batteries leaking mercury or lithium into the environment.

There are many, many opportunities for smart sensors with embedded machine learning. Almost all of these opportunities are in places where supplying power is inconvenient at best. Even when power is available, other constraints often require slow, small, low-power processors. Not every device is a laptop, and we shouldn’t expect it to be; paradoxically, our fascination with bigger, faster processors with gigabytes and terabytes of RAM may be causing us to miss opportunities right in front of us.

What can we do with clouds of small, slow, low-power, memory-limited microcontrollers? What are the opportunities for TinyML? As broad as our imagination. — Mike Loukides


Data points: Recent research and analysis

At Radar, our insights come from many sources: our own reading of the industry tea leaves, our many contacts in the industry, our analysis of usage on the O’Reilly online learning platform, and data we assemble on technology trends.

Every month we share notable, useful, or just plain weird results we find in the data. Below you’ll find nuggets from our recent research and analysis.

Our examination of Strata Data Conference speaker proposals (“Topics to watch at the Strata Data Conference in New York 2019”) surfaced several notable findings:

  • Machine learning (ML) and artificial intelligence (AI) terms predominate. The term “ML” is No. 2 in frequency in proposal topics; a related term, “models,” is No. 1. The term “AI,” meanwhile, is No. 3.
  • Terms that relate to data engineering, data management, and data analytics dominate the top tiers of proposal topics. But although the terms and many of the practices sound familiar, the tools, use cases, and even some of the techniques have changed.
  • Data engineering is an intense focus of interest and innovation, with data-in-motion—e.g., stream, time-series—starting to displace the batch-centric, data-at-rest paradigm.
  • Spark has emerged as the general-purpose data processing engine of choice; interest in Hadoop is waning, although reports of its death are greatly exaggerated.

In “AI adoption is being fueled by an improved tool ecosystem,” Ben Lorica explored results from a World Intellectual Property Organization (WIPO) study that examined worldwide patent filings in areas pertaining to AI and ML.

One of WIPO’s key findings is that the number of patent filings is growing fast: in fact, the ratio of patent filings to scientific publications indicates that patent filings are growing at a faster rate than publications.

The WIPO study also found that “computer vision” is mentioned in 49% of all AI-related patents (167,000+). In addition, the number of computer vision patent filings is growing annually by an average of 24%, with more than 21,000 patent applications filed in 2016 alone.
