IoT product development home runs
Running the technological bases to home plate.
It’s amazing how quickly we can get an IoT gizmo up and working on the Internet that’s in the ballpark of what we want to build — so many cool and powerful tools! But going from ‘in the ballpark’ to ‘what we actually want to do’… that part’s way harder than usual.
These were, in essence, the words of my friend Chuck, a very experienced developer of embedded software, during a recent meal together. Over the summer, he’d spent some time working with his son building connected devices, and found it both amazing and frustrating — amazing to start, frustrating to finish.
As experienced product developers, neither Chuck nor I are surprised there’s a ton of effort needed to go from “in the ballpark” to “what we actually want to do.” But when it comes to the Internet of Things, it’s both particularly easy to get something magical working a little bit, and particularly difficult to get the thing completed. This giant gap can lead to unrealistic expectations about project scope in the early phases of IoT development.
The 80/20 rule for time investment: Think 95/5 for IoT
I suspect that most readers are familiar with the 80/20 rule of product development (and many other endeavors): the first 80% of the project takes 20% of the time, and vice-versa. With the IoT, it’s more like 95/5.
Why such a big gap between, “Hey! That’s pretty cool!” and, “Finally! All set!”? There are some specific reasons, including:
- The general effort to make cool technology easier for experimenters to use, associated with the Maker Movement, can fool us into thinking that technologies are easy to use for products.
- The IoT’s tendency to incorporate sophisticated technologies that can be devilishly tricky to make reliable and to do just what we want in real-world use. These technologies include wireless, sensors, battery power and charging, low-power operation, and cloud-based services.
Let’s take a brief look at some of the challenges of the core IoT technologies mentioned above, then we’ll consider how combining them only increases our challenges.
Wireless

Over the past few years, Bluetooth and WiFi development kits have become cheap and easy to get started with. For under $10, you can purchase a module based on the ESP8266 chip, which contains WiFi and a decently powerful microcontroller. After a few minutes of fiddling around with it on a breadboard with a battery and some jumpers, the module’s speaking Arduino or Lua through a USB/serial converter. An hour or two later, it’s talking on the Internet via a local WiFi access point, and you’re reading Web pages and sending text messages.
Now suppose that you’re using this module (or just the chip itself) in a typical IoT product, rather than as something to tinker around with; things get complex, and fast. Here are just some of the issues you’ll need to address (or face unhappy users):
- Wireless association with the access point of your choosing, which may require a passcode. Without a keyboard and screen, how do you do it? Morse code? Magic spells? Of course it can be done, but it usually involves writing significant software that runs on a computer or smartphone (which does have a keyboard and screen), reaches out to the device, and programs its parameters, a process that’s challenging to make intuitive and robust.
- All kinds of errors can crop up: Access points go away or passkeys are changed, microwave ovens (which operate at the same frequency band as WiFi and Bluetooth) can temporarily kill radio frequency (RF) transmissions, and so forth. You should handle these issues gracefully, hopefully in a way that lets the user know what’s gone wrong. And since RF errors are pretty common, they should never brick your device — i.e., leave it in a permanently inoperable state. That may sound obvious, but it happens if you’re not thoughtful. The classic bricking scenario is losing your link while in the middle of updating system firmware over the Internet, a scenario that’s easily mitigated by good product design.
- RF means certification: In the U.S., the Federal Communications Commission (FCC) requires it, and similar bodies do elsewhere, particularly if you put your IoT gadget into production. Your product will need to be certified by a third party as being within legal limits for RF radiation and filed with the FCC, an effort that can range from a lot of work and tens of thousands of dollars to very little work and no extra dollar cost, depending on your needs and how clever you are during development.
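The provisioning headache in the first bullet above is usually solved by having the device fall back to broadcasting its own temporary access point, so a companion app on a phone can hand it credentials. Here’s a minimal sketch of that flow, with hypothetical names and all of the actual radio work elided:

```python
# Sketch of a headless Wi-Fi provisioning flow (hypothetical, simplified).
# On a real ESP8266-class device these states run on the module itself,
# with AP_MODE serving a small configuration endpoint to a phone app.

class Provisioner:
    """Device that falls back to access-point (AP) mode when it has no
    stored credentials, accepts SSID/passcode from a companion app,
    then switches into station mode."""

    def __init__(self, stored_credentials=None):
        self.credentials = stored_credentials
        self.state = "BOOT"

    def step(self, app_input=None):
        if self.state == "BOOT":
            # No saved credentials: advertise our own AP for the phone app.
            self.state = "STATION" if self.credentials else "AP_MODE"
        elif self.state == "AP_MODE" and app_input:
            # The companion app (which does have a keyboard and screen)
            # sends the network settings over to us.
            self.credentials = app_input
            self.state = "STATION"
        return self.state

device = Provisioner()
device.step()                                    # no credentials: "AP_MODE"
device.step({"ssid": "HomeNet", "psk": "secret"})  # now "STATION"
```

The hard part in practice isn’t this state machine; it’s making the handoff between phone and device feel intuitive and recovering gracefully when it fails partway through.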
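As for the bricking scenario, the usual mitigation is a dual-slot (“A/B”) update scheme: the new firmware image lands in the inactive slot, and the bootloader switches over only after the download verifies, so a dropped link mid-update leaves the old image running. This toy model (the names are illustrative, not any vendor’s API) captures the idea:

```python
# Toy model of a dual-slot ("A/B") firmware update scheme, the usual
# defense against bricking during over-the-air updates.

class DualSlotDevice:
    def __init__(self):
        self.slots = {"A": "v1.0", "B": None}
        self.active = "A"

    def update(self, image, link_ok=True):
        spare = "B" if self.active == "A" else "A"
        if not link_ok:
            # Download died mid-transfer (e.g., a Wi-Fi dropout): the spare
            # slot holds garbage, but the active slot is untouched.
            self.slots[spare] = None
            return self.slots[self.active]
        self.slots[spare] = image   # write completed and verified
        self.active = spare         # switch only once a good image lands
        return self.slots[self.active]

device = DualSlotDevice()
device.update("v1.1", link_ok=False)  # failed update: still runs v1.0
device.update("v1.1")                 # retry succeeds: now runs v1.1
```

The cost is flash space for two images; the benefit is that no single lost link can leave the device permanently inoperable.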
Sensors

I recently hired a good-sized engineering firm to do some work on the product I’m currently helping to develop. They specialize in the engineering of devices for “big physics” — particle accelerators, radio telescopes, that sort of thing. They do it all, hardware and software, with some of the brightest engineers in the world. But there’s one thing they won’t touch because it’s too scary and specialized: analog electronics.
Sensors are (typically) analog electronic components, and that’s what makes sensors tough. In the digital world, a 1 is a 1 and a 0 is a 0; if the voltages representing these bits are off by a few percent, it’s still pretty apparent they’re a 1 or a 0 — our information is functionally unaffected. By contrast, nothing is ever exact in the analog world, and if a signal is off by a few percent, then our information is off by a few percent. Voltages and currents can easily be off by a few percent for all sorts of reasons, and our circuitry needs to handle this gracefully. Very tricky stuff.
The good news is that a growing number of sensors and associated analog circuitry are built into chips with digital I/O that makes them much better behaved. But most still have analog sensors at their hearts, which means we have to be very thoughtful about things like tolerance, drift, and temperature coefficients, issues that usually don’t matter in purely digital systems.
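To make the “off by a few percent” problem concrete, here’s a back-of-envelope sketch (all numbers made up) of a sensor front end with gain and offset error, corrected by a standard two-point calibration:

```python
# Illustration of analog error handling: a sensor front end with a few
# percent of gain error plus a fixed offset, corrected by a two-point
# calibration. Numbers are illustrative, not from any real part.

def raw_reading(true_value, gain=1.03, offset=0.8):
    # Imperfect analog front end: 3% gain error plus a fixed offset.
    return true_value * gain + offset

def calibrate(lo_true, lo_raw, hi_true, hi_raw):
    # Two known reference points let us solve for the gain and offset,
    # then invert them to recover the true value from a raw reading.
    gain = (hi_raw - lo_raw) / (hi_true - lo_true)
    offset = lo_raw - gain * lo_true
    return lambda raw: (raw - offset) / gain

# Calibrate against two known references (0 and 100 units)...
correct = calibrate(0.0, raw_reading(0.0), 100.0, raw_reading(100.0))

# ...and readings in between come back on target despite the errors.
print(correct(raw_reading(50.0)))  # ≈ 50.0
```

Real calibration also has to contend with drift over temperature and time, which is exactly why analog work rewards (and punishes) thoroughness.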
Batteries and charging
We tend to think of batteries as power supplies that maintain a constant voltage for some period of time, then stop working, but that isn’t the whole story. The voltage of a new (or freshly charged) battery is fairly predictable when powering a circuit that draws relatively low current, but in small IoT gizmos things get tricky, especially as batteries discharge.
Battery voltage can vary over time, depending on charge remaining, internal resistance, load, temperature, age, and other factors. IoT devices can be particularly challenging because they tend to use small batteries whose characteristics change markedly under varying conditions. A specific IoT challenge is RF transmission and reception, each of which can draw a good slug of energy. (Did you know that active RF receivers can draw as much energy as transmitters, or more?) Putting all this together, it’s easy for battery life to be much lower than expected. Even when a small battery is theoretically only half-discharged, the sudden power draw from turning on the RF circuitry, even for milliseconds, can pull battery voltage lower than the circuit needs, causing a system shutdown or other weirdness.
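A rough model shows why. The loaded terminal voltage is approximately V_load = V_oc - I * R_internal, and a small cell’s internal resistance climbs steeply as it discharges. With illustrative numbers (not from any particular datasheet):

```python
# Rough model of coin-cell voltage sag under a radio burst.
# All values below are illustrative assumptions, not measured data.

def loaded_voltage(v_open_circuit, internal_resistance_ohms, current_amps):
    # Terminal voltage drops by I*R across the cell's internal resistance.
    return v_open_circuit - current_amps * internal_resistance_ohms

BROWNOUT = 2.0     # volts; assumed minimum the circuit can tolerate
RF_BURST = 0.030   # a 30 mA transmit burst

# Fresh cell: low internal resistance, the burst is harmless.
fresh = loaded_voltage(3.0, 15, RF_BURST)   # ≈ 2.55 V, above brownout

# Part-used cell: open-circuit voltage barely changed, but internal
# resistance has climbed, and the same burst sags below brownout.
half = loaded_voltage(2.9, 60, RF_BURST)    # ≈ 1.1 V, system resets
```

This is why a device can report a healthy battery at idle and still crash the instant its radio keys up — the battery looks fine until the moment it’s asked to deliver.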
Low power operation
If you want long battery life and small batteries, ultra-low-power circuits are a must. Designing circuits to conserve every last bit of power is a non-trivial task that requires being clever about both circuits and software — it’s much more than picking “low power” parts. For example, the Arduino Pro Mini 328 3.3V, a postage-stamp-sized variant of the Arduino Uno, can run directly from a CR2032 coin cell, but unless you’re clever, it will only run for a few hours before the battery is depleted. Depending on your application, you may be able to enter sleep modes via software to conserve power, possibly extending that to a few weeks. If you’re willing to roll up your sleeves and use the same microcontroller as the Arduino (the ATmega328) with your own custom circuitry rather than the Arduino board’s, you might be able to extend that to years, again depending on the application, but getting there requires methodical and exacting work. Very challenging stuff.
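A quick duty-cycle calculation shows why sleep modes matter so much: average current is the time-weighted mix of active and sleep current, and battery life is capacity divided by that average. The currents below are ballpark assumptions for an ATmega328-class design, not measured values — real numbers come from datasheets and the bench:

```python
# Back-of-envelope battery-life estimate for a duty-cycled sensor node
# running from a CR2032 (~225 mAh typical). All currents are assumptions.

def battery_life_hours(capacity_mah, active_ma, active_s, sleep_ma, period_s):
    sleep_s = period_s - active_s
    avg_ma = (active_ma * active_s + sleep_ma * sleep_s) / period_s
    return capacity_mah / avg_ma

CAPACITY = 225.0  # mAh

# Always on, never sleeping: assume the board draws ~10 mA continuously.
always_on = battery_life_hours(CAPACITY, 10.0, 1.0, 10.0, 1.0)

# Wake for 10 ms every 10 s, ~1 µA deep sleep (custom circuit: no
# regulator or power LED burning current while asleep).
duty_cycled = battery_life_hours(CAPACITY, 10.0, 0.01, 0.001, 10.0)

print(always_on / 24, duty_cycled / (24 * 365))  # roughly a day vs. years
```

The striking part is that nothing about the active electronics changed — only the fraction of time they’re allowed to be awake, which is why low-power design is as much a software problem as a hardware one.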
Cloud-based services

A few years back, I helped develop a medical device that transmitted EKG and other medical data via TCP/IP to back-end servers that analyzed and archived the information. Back then, it cost us many months and millions of dollars to set up geographically redundant servers with backup generators running a fancy DBMS that could support multi-master replication, etc.
Now, you can do that all in a few hours for very little money using cloud services like Amazon AWS and Microsoft Azure. More specialized cloud services also exist for more specialized applications, such as Google Health, Apple Health/Healthkit, IFTTT, Zapier, data.sparkfun.com, and many, many more. By using these services and chaining them together, you can get a very, very impressive proof-of-principle demonstration put together in a few days.
The challenge with these services is that you cede a measure of control in exchange for ease of use. The control limitations break down into a few sub-issues:
- Can you create the user experience that you want?
- Can/will the service scale if your product is a big hit and you sell millions?
- What surprises await?
Creating a great user experience for leading-edge adopters of technologies is not so difficult — such users will put up with messing around with all of the settings in services like IFTTT (in fact, they often consider the futzing to be a plus), and they’ll live with some rough edges. But most users put a premium on an experience that just works — think Apple products — and a chain of DIY cloud services is unlikely to make the grade. Great user experiences are darned hard to create, and they require a ton of tuning and polish; it’s unlikely that you’ll achieve that level of refinement using other people’s services, although basic services like AWS and Azure are pretty good in this regard.
Hitting a Web service with a few requests an hour is one thing, but what happens if Home Depot picks up our IoT product and the cool third-party cloud service we’ve been using suddenly has to handle millions of requests per hour? Can they handle that volume of requests? If they can, will they choose to? Or will we be left hanging?
Huge surprises can stem from providers changing what they provide. In the case of large established offerings like AWS and Azure, let’s call them fundamental cloud services, changes will be gradual, (usually) reasonable, and announced well in advance so we can adapt. But for small or unprofitable entities, changes can be dramatic and quick — even instant. Investors can pull the plug, and the service is suddenly gone, or the service suddenly changes its offering or licensing in the hope that they can generate more revenue. Suddenly, our IoT product is a “T” without an “Io.” Oops.
My general thinking here is that fundamental cloud services are fine to depend on, although it’s good to do some thinking early on about how costs will scale as you sell a lot of product. Smaller services are a fantastic way to get started and to do some testing because they are cheap and easy to tweak. But once you have enough experience to know what you want, it’s usually best to move away from these and toward building your own specialized services to maximize user experience and minimize bad surprises.
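One practical hedge against provider churn is to keep all provider calls behind a thin interface of your own, so moving from a third-party service to your own servers touches a single adapter rather than the whole codebase. A hypothetical sketch (the interface and backend names are mine, not any vendor’s):

```python
# Hypothetical adapter pattern for cloud telemetry: application code
# depends only on our own interface, never on a vendor SDK directly.

class TelemetryBackend:
    """The one interface the rest of the product is allowed to see."""
    def publish(self, device_id, reading):
        raise NotImplementedError

class InMemoryBackend(TelemetryBackend):
    """Stand-in backend for tests and early prototyping. A production
    adapter (AWS, Azure, or your own servers) would implement the same
    publish() signature."""
    def __init__(self):
        self.events = []
    def publish(self, device_id, reading):
        self.events.append((device_id, reading))
        return True

class Product:
    def __init__(self, backend: TelemetryBackend):
        self.backend = backend
    def report(self, reading):
        return self.backend.publish("sensor-001", reading)

# Swapping providers later means changing only this one line.
product = Product(InMemoryBackend())
product.report(21.5)
```

It’s a few extra lines up front, but it keeps a disappearing cloud service from turning your IoT product into a “T” without an “Io.”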
From parts to a system
We’ve seen how many of the core IoT technologies are individually challenging, but things get even more interesting when we put them all together. Systems are, well, systems, not simply collections of parts. Turning a collection of parts into an experience that users perceive as a single, seamless system is never easy, and it’s all the more difficult for IoT products because of the sheer number of separate entities that a user must interact with and depend upon.
Even a relatively simple device can become a fairly large system when we take into account all of the different parts of the ecosystem on which it depends. Turning this all into something that feels smooth from end to end, during setup and usage, and when things go wrong, is a significant challenge.
Changing the world is hard work
We’ve been dwelling on the big challenges IoT products face going from “in the ballpark” to “does what we actually want,” but there is an upside: Internet connectivity enables significant change in the way our world works. It’s clear that the IoT will drive changes in the world of things every bit as transformational as those wrought by email and the Web — it’s a matter of building truly useful IoT products, and of consumers being ready to accept the changes they bring.
Hopefully this post has helped to give a better idea of the reality we face in creating those truly useful IoT systems — magical products aren’t easy to make!
More detail on developing successful IoT (and other intelligent) products can be found in my new book, Prototype to Product:
- It begins by laying out the fundamental principle of product development and the 11 deadly sins most often responsible for harming development efforts.
- The second section describes the phases of efficient product development by following the development of an actual IoT device: the MicroPed pedometer; I describe the myriad efforts needed to efficiently turn great ideas into great products and how those efforts tie together.
- The book’s final section features deeper dives into technologies and issues that are critical to development, including processor and operating system selection, power, battery and charging issues, low-power design, regulations, requirements, and project planning and management.
Good luck with your own efforts, and feel free to contact me at email@example.com with any questions or comments.