Chapter 4. Overcoming Design Patterns for Insecurity

If we build the IoT like we did the IoC, we’re going to have security trouble.

“Security,” as I like to define it, is the extent to which a system remains in a correct state, despite the efforts of a malicious adversary, perhaps in conspiracy with an uncooperative universe. Typically, the root cause for a successful attack lies in a design or implementation mistake in the system: some error or feature that enables the adversary to come in. These tend to fall into general patterns—and since IT systems have existed for a while now, it’s often hard to come up with completely new ways to do things wrong.

When feeling a bit arrogant (or frustrated), security specialists (such as me) are tempted to label these mistakes as “blunders.” These are not subtle and complex failure cases, but grievous oversights that are obvious—or at least they appear obvious to security specialists looking at them in hindsight. The problem, of course, is that these design decisions need to be made beforehand, and by people who are wrestling with many priorities (and do not realize that my priority is the most important).

This chapter surveys and explains some of the principal categories of IoC blunders (“anti-patterns”) that have affected (or are likely to affect) the IoT:

  • Doing too much

  • Coding blunders

  • Authentication and authorization blunders

  • Cryptography blunders

However, the point here is not that hope is lost, but rather that sensibly moving forward will require an awareness of these patterns and some creative thinking to address them.

Anti-Pattern: Doing Too Much

Helpful employees do more than what’s expected of them; helpful people do their best to understand what was meant by faulty and flawed communication. However, a standard source of security trouble is when system interfaces provide more functionality than they are supposed to, perhaps by accepting and acting on incorrectly formatted input. Such extra functionality can allow an adversary literally to take control of a system by whispering the right bizarre magic words; even helpfully “correcting” incorrect input can lead to trouble when two systems do it differently.

Instance: Failure of Input Validation

In theory, an interface expects inputs following some specific formats; for example, a name of 20 characters here, a number in this range there. In practice, interfaces often accept much more but still implicitly assume the inputs are formatted correctly. As a consequence, in the IoC, perhaps the most common way to attack a system is to feed it deviously crafted input: input that falls outside the rules of what the programmer intended as valid, but that the system accepts nonetheless, and that tricks the system into carrying out incorrect behavior.
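To make the missing discipline concrete, here is a minimal sketch in Python (my own illustration; the field names, formats, and limits are invented for the sketch) of what validation looks like when it is present: the interface states its rules for valid input explicitly and rejects everything else, rather than implicitly assuming conformance.

    import re

    # Illustrative format rules: the programmer's *intended* valid input,
    # made explicit and enforced rather than implicitly assumed.
    NAME_RE = re.compile(r"[A-Za-z][A-Za-z '\-]{0,19}")  # at most 20 characters

    def parse_request(name: str, count: str):
        """Accept input only if it matches the declared format; reject the rest."""
        if not NAME_RE.fullmatch(name):
            raise ValueError("name does not match the declared format")
        if not count.isdigit():
            raise ValueError("count must be a decimal integer")
        n = int(count)
        if not 1 <= n <= 100:  # the declared numeric range
            raise ValueError("count out of range")
        return name, n

    print(parse_request("Alice", "42"))   # accepted
    # parse_request("A" * 5000, "42")     # rejected, instead of being acted upon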

The classic instantiation of this pattern is the buffer overflow attack (e.g., [1], or Section 6.1 in [49]). Here, the victim system expects a character string from the user and copies it into a fixed-length buffer in the stack frame—but never checks that the string actually fits within the buffer. Consequently, an adversary can provide a very long string crafted to include some executable code and to overlay the “return address” field on the stack frame with the address of this code. When the system’s current function returns, it then begins executing code the adversary injected.

This family of attacks has a long and storied history of variations, countermeasures, countermeasures to the countermeasures, and so on. Unfortunately, but perhaps not unexpectedly, this pattern is already manifesting itself in the IoT. Crafted input attacks have emerged targeting Android [5] devices. IoT thermostats have allowed unprivileged remote adversaries to inject code via network packets that caused a buffer overflow [12]. Home cable modems have allowed unprivileged remote adversaries to inject code via device web requests [16]. In spring 2016, a researcher discovered an input validation failure in firmware used in many dozens of models of CCTV equipment that enabled an adversary to run privileged code by sending a deviously constructed URL to the device’s web server interface [13].

In recent years, researchers have even begun to address the mathematical foundations of this type of attack as a problem recognizing the “formal language” of valid inputs [46]. My own lab has produced tools that have found holes in power grid control systems via fuzzing (that is, automatically modifying otherwise correct input) [47]. Both of these fronts may be helpful in mitigating the pattern in the IoT.
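Fuzzing itself is conceptually simple. The following is a minimal mutation fuzzer sketched in Python (an illustration of the general idea, not our lab’s tool): it takes a known-good input, randomizes a few bytes, and reports inputs that make the target misbehave. Industrial-strength fuzzers add coverage feedback and crash triage, but the core loop looks like this.

    import random

    def mutate(seed: bytes, nflips: int = 4) -> bytes:
        """Return a copy of a known-good input with a few bytes randomized."""
        buf = bytearray(seed)
        for _ in range(nflips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz(target, seed: bytes, trials: int = 10_000):
        """Feed mutated inputs to the target; yield any input that crashes it."""
        for _ in range(trials):
            case = mutate(seed)
            try:
                target(case)              # the parser under test
            except Exception as err:      # a crash here marks a validation hole
                yield case, err

    # Usage: for bad_input, err in fuzz(my_parser, good_packet): ...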

Instance: Excess Power

Besides accepting too much, another pattern we see emerging in the IoT is doing too much. Systems may have more power than they need; and as the principle of least privilege (taught in security textbooks) warns, having excess power can lead to excess damage should the system be compromised.

One example here that my lab found (discussed further in “Anti-Pattern: Authentication Blunders”) involved a commercial set-top box with a full Linux kernel inside. The box offered some small obstacles to penetration and, once penetrated, provided obstacles to prevent us from adding malicious software to the box itself. However, the box inexplicably had support for the NFS remote filesystem, so we could install our malware simply by having the box mount a remote disk. There was no sensible reason why the box needed NFS support; we hypothesize that it was there simply because it was part of the Linux bundle. Using established components also opens a system up to established attacks—as a colleague’s hospital learned when its IT went down for a week due to a virus infecting the Windows 95 installation hiding inside a radiology machine.

Using a tried-and-true, off-the-shelf component rather than building a new one is good engineering practice, so it often makes sense to choose some standard bundle. However, in cases like these, it opens the door to too much. (An IoT security colleague laments that every IoT device will eventually run full Linux “because it can.”) Moving forward, we may need to rethink the use of tried-and-true IoT components.

Instance: Differential Parsing

The system (attempting to be helpful) can, in fact, do too much when it receives malformed input. For example, multiple different systems, each allegedly speaking the same input language, may take different actions when presented with deviously crafted input. This behavior enables differential parsing: crafting input that only selected listeners will hear, because the others will discard it as malformed (e.g., [34]).
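The disagreement is easy to demonstrate in miniature. In this toy Python sketch (my own construction, not the cited attacks), two implementations of the same length-prefixed message format differ only in how they treat an inconsistent length field, which is exactly the gap an adversary needs.

    def parse_strict(pkt: bytes):
        """Parser A: rejects a packet whose declared length disagrees with reality."""
        length, body = pkt[0], pkt[1:]
        if length != len(body):
            raise ValueError("malformed: length mismatch")
        return body

    def parse_lenient(pkt: bytes):
        """Parser B: 'helpfully' truncates the body to the declared length."""
        length, body = pkt[0], pkt[1:]
        return body[:length]

    pkt = bytes([4]) + b"ATTACK"   # declares 4 bytes, carries 6
    # Parser A discards this packet; parser B acts on b"ATTA".
    # A message crafted this way is "heard" by B but invisible to A.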

Such attacks have already been demonstrated for wireless stacks used in IoT devices [28, 33].

Key to this research result were tools the researchers built to probe stack interfaces with crafted input—something otherwise difficult in vertically integrated IoT systems. In the IoT, it will be important to think not just about parsing flaws, but also about ensuring that interfaces can be tested before it’s too late.

Anti-Pattern: Coding Blunders

As any programming student knows, it’s easy to make mistakes. The software industry is plagued with errors and bugs, and (in the case of larger, recognizable computers) often depends on a process of “penetrate and patch,” as mentioned earlier, to keep things going. But this penetrate-and-patch approach poses problems for the IoT (especially if the “things” no longer look like computers needing updates), leading to the risk of the forever-days discussed back in Chapter 1.

The IoT has already suffered from its share of coding blunders seen in the IoC. Machines can get infected; body cameras (for the “Internet of First Responder Things”) have shipped with malware installed (presumably unintentionally), ready to infect the systems to which they are connected [10]. Local device action can have global network impact: in one reported case, a smart lightbulb burned out and sent out so many announcements about this fact that it overwhelmed and shut down the smart home’s wireless network [32].

Debug code has been inadvertently left in shipped products, too. A SANS researcher discovered his Netatmo Weather Station still contained debug features that would transmit memory contents—including his home WPA password—back to home base, unencrypted [56].

In the IoC, blunders (as well as more subtle flaws) can be fixed via patching. In the IoT, however, we are already seeing zero-days becoming forever-days. In one example, researchers analyzing a code injection flaw in Android discovered that 60 percent of a sample of a half million phones were vulnerable—and “27 percent of those devices were found to be ‘permanently vulnerable’ in that they are too old to receive monthly updates” [4]. And as we saw earlier, in 2015 Trend Micro reported that over six million consumer IoT devices were still vulnerable to a code injection flaw that was patched in 2012 [59]; the patches are just not getting there.

As mentioned in many places in this book, “penetrate and patch” may work for the IoC, but it’s not likely to work in the IoT unless we rethink patching—or perhaps rethink our coding and testing practices to reduce the need for patching.

Anti-Pattern: Authentication Blunders

It’s important that a system verify who is claiming to send it an update, or change its configuration, or act as its absolute master. The IoT has some aspects that make this particularly critical. The intimate connection of IoT systems to the real world can make the consequences of malfeasance more severe than in the IoC. The distribution of an IoT system throughout the real world can greatly increase the attack surface—places the adversary can touch. Finally, effective authentication can be a question of management; the larger and more disorganized a set of entities is, the harder it can be to manage, and it is hard to think of something larger and more disorganized than the aggregation of banal real-world things.

The IoC manifests many common “design patterns for insecurity” with respect to authentication:

  • A service may require no authentication whatsoever.

  • A service or set of services may use default authentication credentials, easily discoverable.

  • A service may have a permanent credential, never changeable.

  • A service may fail to allow for revocation of a credential or privilege.

  • A service may fail to allow for delegation of a privilege to another legitimate party (thus engendering workarounds).

These patterns are already emerging in the IoT. (“Anti-Pattern: Cryptography Blunders” will consider more specific cryptographic flaws.)

Instance: No Authentication

One of the most extreme ways a system can fail at authentication is to overlook it completely and provide a given service to anyone who requests it. Such a design can arise when the designers either did not think about security at all, or had a security model that theoretically disallowed the adversary from even reaching this service.

A timely IoT example of this is the controller area network (CAN) bus central to the IT system in a modern car. A component on this bus listens to see if it is being spoken to—but it has no way of verifying who is doing the speaking; rather, it implicitly assumes that anyone speaking to it has the right to do so. For example, it’s the engine control unit (ECU) that is supposed to tell the brakes to engage, but anything on the CAN bus can actually do this.
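The absence of authentication is visible right at the programming interface. Here is a minimal sketch, assuming a Linux host with a virtual CAN interface (vcan0) and the python-can package; the arbitration ID and payload are invented, not a real vehicle’s. A CAN frame carries no sender field at all, let alone a signature: receivers simply trust that whoever sent a given ID was entitled to.

    import can

    # Assumes a Linux virtual CAN interface set up beforehand, e.g.:
    #   ip link add dev vcan0 type vcan && ip link set up vcan0
    # (Newer python-can versions prefer interface= over bustype=.)
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

    # A CAN frame is just an arbitration ID plus up to 8 data bytes.
    # Nothing identifies the sender, so any node can claim to be the ECU.
    # (ID and payload here are made up for illustration.)
    spoofed = can.Message(arbitration_id=0x0C0, data=[0x01, 0xFF],
                          is_extended_id=False)
    bus.send(spoofed)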

This flaw lies at the heart of the attacks Stefan Savage’s group published in academic venues in 2010 and 2011—including injecting malware into the car’s CD player, which then spoofed commands on the CAN bus [9]. It was also the weakness exploited in the highly publicized shutting down of a Wired reporter’s car (via the cellphone connection) while it was being driven on the highway in 2015 [29]. Researchers subsequently demonstrated attacking the CAN bus via digital radio [57]. There are even firsthand reports of a commercial automobile with a CAN bus connection in the taillight—so even if the car is locked, an adversary can jack in, unlock it, start it, and have all sorts of fun.

Chapter 1 lamented the difficulty of updating software on smart devices. Unfortunately, the opposite can also be a problem: some devices permit updating with no authentication whatsoever—if someone claims to have an update, the device happily reprograms itself. In the attack on the Ukrainian power grid in late 2015 [19], this flaw was used to turn many of the field devices into inert bricks that were beyond field repair, since the updated “software” no longer provided an update feature. This flaw was also discovered in Progressive Insurance’s Snapshot devices, which consumers plug into the diagnostic port in their cars to enable Progressive to measure their driving behavior and charge them lower premiums if the measured behavior implies lower risk [23]. An adversary who gets into this device can then get into the car’s internal network and launch an attack as just described. Direct physical access to the Snapshot would permit the adversary to inject a malicious upgrade. Compromise of the modem connecting the Snapshot to Progressive’s back-end would also enable the adversary to do this—and (as [23] notes) similar devices have been compromised. In 2016, J. C. Norte, CTO of eyeOS, wrote of discovering a different way to remotely reach into the open CAN network: via internet-connected Telematics Gateway Units (used at automotive repair shops), which themselves offered services worldwide with no authentication [44].

For a higher-cost example of consequences, in 2016 the Japanese space agency lost a $286 million satellite, apparently due to a bad (although, in this case, not malicious) software update [41].

A variation on the “no authentication” pattern is “insufficient authentication.” An interesting example of this in the IoC occurred several years ago [50] when an online third party handling business school applications permitted applicants to learn admissions decisions early by logging in as themselves but then editing the URL to request services for which they were not yet authorized. This pattern has also surfaced in the IoT. For example, CoreLabs discovered that a projector commonly used in smart classrooms requires authentication to go from the index.html page to the interior main.html page, but skips authentication if the user opts to go directly to main.html [7].

In a summer course I taught on risks of the IoT, students used the Shodan search engine [39] to scan for IoT-connected devices on the internet at our own university, and found printers, smart classroom devices, and videoconference equipment requiring no passwords. In the ethical hacker space (e.g., see the work of Dan Tentler or Paul McMillan), researchers have discovered tens of thousands of control interfaces for IoT-style cyber-physical systems—with much scarier misuse consequences—on the open internet, with no authentication required, via the Virtual Network Computing (VNC) protocol.

In another variation, Rapid7 discovered a family of internet-connected industrial control systems that required authentication credentials over an SSH connection, but would accept any credentials [55]. Similarly, the “Hello Barbie” internet-connected doll would try to authenticate its network via SSID name, but would accept anything with the name “Barbie” [43].

Instance: Default Credentials

Designers who do in fact include authentication in their systems are then faced with a challenge: what should the system do when it first comes out of the box? A common approach is to ship systems preinstalled with a default password. Unfortunately, these default passwords are usually well known (students find them easily with web searches); and given the natural human inclination to choose the path of least resistance, default passwords often are never changed.

This problem is already surfacing in the IoT. Trend Micro reported finding a backdoor account with a hardcoded, common password in over two million routers [35]. Researchers at EURECOM reported finding “hard-coded web-login admin credentials” in over 100,000 internet-facing IoT devices [18]. My own lab was involved in discovering a commercial set-top box with a default password for root [2]. My students also reported finding a programmable logic controller—the same kind of beast exploited in the Stuxnet attack—used in industrial control, on the internet with a default password. Default SSH backdoors have even been discovered in security devices from Cisco and Fortinet [20, 17].

For perhaps a more tangibly creepy manifestation of the consequences of default credentials in real life, it can be enlightening to consider network-connected “security” cameras intended to let homeowners and such monitor their households and families from over the network—but which, due to the use of default passwords, let anyone do that. A “Ms. Smith” in NetworkWorld (no relation, as far as I know) writes of a collection of over 73,000 such devices that she discovered were peering into backyards and living rooms and children’s bedrooms. Many similar images can be found with some poking around [48].

For a scarier application domain, ZDNet reports [58]:

An application suite designed to help clinical teams manage patients ahead of surgical operations includes a hidden username and password, which…could allow an attacker to “backdoor” the app to read or change sensitive information on patients, who are about to or have just recently been in surgery.

Ars Technica earlier warned of a similar permanent, built-in account in RuggedCom’s Rugged Operating System, ironically intended for high-assurance devices in industrial control systems [24].

And as I was writing these words, a news report came in from Canada [22]:

Two 14-year-old high school students managed to hack into a Bank of Montreal ATM at a super market during their lunch break using an operator’s manual they found online.

When they brought up the administrator mode screen it asked for a password, to which the teens used the factory default password. To their surprise it worked.

Instance: Permanent Credentials

An authentication credential is an electronic embodiment of the right of some entity to invoke some service. This expression involves three relations:

  • A binding between entity and right

  • A binding between entity and credential

  • A binding between credential and right

Trouble can emerge when these bindings cannot be changed.
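To make the three bindings concrete, here is a toy Python model (purely illustrative, my own construction): each binding is explicit state, so each can in principle be created, changed, or revoked. The anti-patterns below are what results when one of these bindings is frozen at manufacture.

    # Toy model of the three bindings; illustrative only.
    entity_rights = {"alice": {"reboot"}}          # entity <-> right
    entity_creds  = {"alice": "s3cret-token"}      # entity <-> credential
    cred_rights   = {"s3cret-token": {"reboot"}}   # credential <-> right

    def authorize(entity: str, cred: str, right: str) -> bool:
        """All three bindings must hold for the request to be authorized."""
        return (cred == entity_creds.get(entity)
                and right in entity_rights.get(entity, set())
                and right in cred_rights.get(cred, set()))

    def revoke(cred: str):
        """Revocation = deleting a binding. A hardcoded credential makes
        this operation impossible, which is exactly the trouble pattern."""
        cred_rights.pop(cred, None)

    assert authorize("alice", "s3cret-token", "reboot")
    revoke("s3cret-token")
    assert not authorize("alice", "s3cret-token", "reboot")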

One pattern is when a right is permanently bound to a credential—it is embedded in an IoT device itself and can never be changed. For example, in the set-top box mentioned previously, one could change the root password, but the password would then return to the default after rebooting.

A default password gives the entire world the right to become root. Hardcoding the password makes it impossible to deny that right to adversaries—which also applies in the Trend Micro and EURECOM results mentioned in the previous section. ComfortLink thermostats had hardcoded credentials preinstalled [12]. Penetration specialist Billy Rios found the same thing in an engagement at the Mayo Clinic [45].

Another pattern is when an entity may indeed possess the right to use a service at some time T0, but then loses that right at some later time T1 > T0. If this entity still knows the required credential, then one needs to be able to revoke the binding of the credential to the service—otherwise, one permits unauthorized access. However, in the rush to make things work now, it can be easy to overlook revocation. For an amusing example, InfoWorld reported on a web-controllable thermostat that received this favorable review on Amazon [42]:

Little did I know that my ex had found someone that had a bit more money than I did and decided to make other travel plans. Those plans included her no longer being my wife and finding a new travel partner (Carl, a banker). She took the house, the dog and a good chunk of my 401k, but didn’t mess with the wireless access point or the Wi-Fi enabled Honeywell thermostat.

Since this past Ohio winter has been so cold I’ve been messing with the temp while the new love birds are sleeping. Doesn’t everyone want to wake up at 7 AM to a 40 degree house? When they are away on their weekend getaways, I crank the heat up to 80 degrees and back down to 40 before they arrive home. I can only imagine what their electricity bills might be. It makes me smile. I know this won’t last forever, but I can’t help but smile every time I log in and see that it still works. I also can’t wait for warmer weather when I can crank the heat up to 80 degrees while the love birds are sleeping. After all, who doesn’t want to wake up to an 80 degree home in the middle of June?

The article observed that “more than 8,200 of the 8,490 Amazon users who have read the review deemed it ‘useful.’”

Instance: No Delegation

Another challenging aspect of authentication is determining who should be granted access in the first place. (Chapter 6 will look at a similar problem: that of effective creation of privacy policy.) One trouble pattern here arises from keeping this policy creation too far out of the hands of the end users themselves. When end users cannot easily configure a system to permit services they believe should be permitted (or when such configuration is not even possible), then users will often circumvent protections, breaking the security system [53].

For one IoT example of this inflexibility, consider the power grid. One vendor of equipment in this space had a marketing slide showing the default user ID and password on its equipment, and the default user IDs and passwords on equipment from its competitors. The point of this slide was to stress the security of this vendor, since its default password was so much harder to guess than the defaults for the competitors. Security specialists react to this slide with laughter—how can a system with a default and published password be secure? Power specialists react differently; this application domain requires that, in emergencies, repair crews (perhaps borrowed from another region) need to be able to quickly access a device, so a system that does not allow easy and dynamic granting of access does not work. Default passwords provide this feature, which they regard as critical.

Smart medicine deployments reveal similar problems with policy inflexibility [52]. One smart medication administration system ensured that a nurse could only give a patient exactly the medicine and dosage prescribed—a policy that did not account for the reality that there might not be any 20 mg tablets left, but that two 10 mg tablets provide a 20 mg dose. Another enforced that a 10-hour medication regimen must begin exactly when the issuing doctor said, even if the patient did not arrive from dialysis until an hour later.

Instance: Easy Exposure

Another pattern for insecurity is when the infrastructure supporting authentication itself subverts it. One IoC example is all the websites that require a user ID and password (good), but collect them over a plain-text channel (bad—anyone can listen in!) or through a pop-up basic authentication window (also bad—the user cannot easily tell who offered this window). We see similar things happening in the IoT: the set-top box mentioned earlier only provided unencrypted telnet access for root, not ssh (the literature notes a few thousand other similar discoveries); wind turbines have plain-text passwords embedded inside them [21]. The EURECOM researchers extracted “several dozens of hard-coded password hashes” and over 35,000 RSA private keys from the devices they surveyed [18]. (For more on hashing, see “Cryptographic Hashing”.) Other examples abound:

  • It is rumored that one enabling aspect of the Ukrainian power attack was the grid’s internal use of easily harvestable credentials.

  • In 2014, researchers discovered an IoT alarm system that used weak authentication mechanisms—and, furthermore, transmitted them unencrypted [11].

  • In 2016, researchers from the University of Michigan published a way [27] to obtain a user’s credentials for Samsung’s SmartThings smart home devices: tricking the user into entering them into a site controlled by the adversary—the IoC web-spoofing pattern taking root in the IoT.

Moving Forward

All these authentication issues are already problems in the IoC; solving them requires thinking carefully about the players and systems and deployment patterns involved, and the authentication and authorization technology to support this. In the IoT, the number of moving parts scales way up; Chapter 5 will consider some resulting challenges.

Anti-Pattern: Cryptography Blunders

Cryptography, the mathematical and computational machinery for transforming data into formats that make it harder for inappropriate parties to extract useful information, is central to implementing many authentication and related security techniques. (“The Standard Cryptographic Toolkit” will give more details on the basics of cryptography for the IoT.)

Cryptography is used in the IoT for the same reasons it is used in the IoC: for parties to protect information (confidentiality and integrity) while also working with it in various ways. An IoT device communicating to a backend server over an open channel would use cryptography to protect the data in transit. IoT systems (including backend servers) storing sensitive data would use cryptography to protect it at rest. An IoT device trying to authenticate the provenance of some data (e.g., did this firmware upgrade really come from the right party?) would use cryptographic signatures or message authentication codes (MACs). IoT entities trying to authenticate to one another without actually giving their authentication secrets away would also use cryptography.
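For instance, a device checking the provenance of a firmware image with a MAC might look roughly like the following Python sketch (a sketch under the assumption of a symmetric key shared with the vendor; real deployments more often use public key signatures, so that the device need hold no signing secret).

    import hmac
    import hashlib

    def firmware_is_genuine(image: bytes, tag: bytes, key: bytes) -> bool:
        """Recompute the MAC over the image and compare in constant time."""
        expected = hmac.new(key, image, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    # The vendor ships (image, tag); the device accepts the update only if
    # the tag verifies under the shared key. With a signature scheme, the
    # device would instead hold only the vendor's public key.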

Instance: Bad Randomness

As Chapter 5 will discuss, the security of cryptography typically requires that a key is known only by the parties who are supposed to have the privilege that the key embodies. If the keys get revealed, the rest of the system breaks (and indeed, a colleague who worked on nation-state cryptanalysis would quip that it was always easier to steal the keys than to break the mathematics).

In using a public key algorithm such as RSA, the first step is for a key-owning entity to generate its keypair. This process requires unpredictable randomness—for RSA, the entity uses this randomness to generate a pair of prime numbers an adversary would not be able to predict. However, in an internet-scale study of internet-facing machines with keypairs for the TLS and SSH protocols, researchers from UC San Diego and the University of Michigan [30] found something disturbing. A large number of machines either shared the same keypairs or shared one of these prime factors (and since efficient algorithms are known for calculating the GCD, the greatest common divisor, of two large integers, a shared factor suffices to break both keypairs).
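The arithmetic behind that break fits in a few lines. This Python sketch uses toy primes of my own choosing (real RSA primes run to hundreds of digits): if two moduli share a prime factor, a single GCD computation factors both.

    from math import gcd

    p, q1, q2 = 1_000_003, 1_000_033, 1_000_037   # toy primes; real ones are ~1024 bits
    n1, n2 = p * q1, p * q2                       # two "keypairs" sharing the prime p

    shared = gcd(n1, n2)                          # fast even for 2048-bit moduli
    assert shared == p
    # Knowing p, each modulus factors immediately, and the private keys follow.
    assert n1 // shared == q1 and n2 // shared == q2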

The researchers concluded that an underlying problem here might be simple embedded IoT devices that used standard key generation software on standard operating systems such as Linux. For random seeds, these standard tools use /dev/urandom, which gets its entropy from things such as keystroke timing, disk movement, and previous memory contents: possibly reasonable on a standard computer, but (as the researchers verified experimentally) highly predictable when these tools are moved to headless IoT systems. The researchers lamented this “boot-time entropy hole.” Moving a standard, trusted component from an IoC device to an IoT device turned out to invalidate implicit assumptions.

This academic paper predicted repeated keys in IoT systems. Interestingly, this prediction appears to have come to pass. For example, in 2015, Network World reported on Shodan finding populations of more than 100,000 devices sharing common keypairs [37]. A month later, ITworld reported on researchers finding 28,000 devices using the same keypair [36]. What’s next?

This blunder also connects to the problem mentioned earlier: conventional wisdom says it’s better to use a tried-and-true component (e.g., a Linux installation with standard key generation code) than to build something new from scratch; but once more, moving a standard IoC solution to an IoT device created new problems. Researchers have already presented some interesting (and scary) work on how reuse of key-generation algorithms may lead to more subtle weaknesses [54].

Instance: Common Keys

Not just RSA but also symmetric cryptography can have problems when keys are repeated. In 2014, security researchers found an example of this in LIFX internet-connected lightbulbs [25, 8]. The implementation encrypted transmissions using the NIST-standard AES algorithm, but used the same common key in all LIFX bulbs—enabling easy snooping by any adversary. (Admirably, after the researchers informed LIFX, the flaw was fixed.)
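The failure mode is structural: bake one key into every unit, and each unit (and anyone who extracts the key from any single unit) can read every other unit’s traffic. Here is a minimal sketch, assuming the pyca/cryptography package and using AES-GCM for concreteness; it illustrates the shared-key problem, not LIFX’s exact cipher mode, and the key and messages are invented.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    SHARED_KEY = bytes(16)   # imagine this baked into every unit's firmware

    def device_send(plaintext: bytes) -> bytes:
        """Encrypt a message under the fleet-wide key."""
        nonce = os.urandom(12)
        return nonce + AESGCM(SHARED_KEY).encrypt(nonce, plaintext, None)

    def anyone_with_one_unit_reads(wire: bytes) -> bytes:
        """Extracting the key from any single unit decrypts the whole fleet."""
        return AESGCM(SHARED_KEY).decrypt(wire[:12], wire[12:], None)

    msg = device_send(b"join network: ssid=home, psk=...")
    assert anyone_with_one_unit_reads(msg) == b"join network: ssid=home, psk=..."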

Instance: Bad PKI

Working with public key cryptography, a relying party might be able to conclude that some remote entity actually knows the private key matching some public key. However, to draw a meaningful conclusion from this information, the relying party usually also has to know something else about that public key, for example, that knowledge of the matching private key means the entity has some particular identity.

Public key infrastructure (PKI) is the term used for all the glue and machinery necessary to establish this additional information. Typically, PKI rests on certificates: statements asserting that the owner of some public key has some given properties. These statements are themselves digitally signed by a certification authority (CA) who presumably is in a position to know. To draw a conclusion about a keyholder, a relying party needs to obtain a set of certificates chaining from a root (whom the party trusts) which satisfies some logical calculus; part of this satisfaction includes verifying that a given certificate has not been revoked.

When it comes to PKI, the IoC itself shows many standard trouble patterns:

  • The relying party fails to check whether a certificate has been revoked. One reason this happens is the performance overhead of doing this checking.

  • The keyholder uses a certificate issued by a bogus CA, such as itself (a self-signed certificate—often the default consequence of standard tool installations).

  • More subtly, the keyholder information established by the standard PKI may not match what the relying party wants to know. For example, Chris Masone’s Ph.D. work [40] explored how standard S/MIME email PKI would not suffice to reproduce how power grid operators authenticated one another over the telephone in the 2003 blackout recovery, because S/MIME did not express what users needed to know. Reality has a more complicated and nuanced ontology than straightforward identity PKI allows for.

We expect revocation and ontology issues to surface in the IoT (see Chapter 5) but have no war stories to share yet—except to observe that Android does not support certificate revocation, and Symantec laments its lack in other IoT systems [3]. We have already seen the bogus CA pattern emerge. As the researchers from UC San Diego and the University of Michigan [30] noted:

At least…85,046 TLS hosts (0.66%) served default Apache certificates (sometimes referred to as snake-oil certificates, because they often include the CN www.snakeoil.com).

(When my students went hunting, they found many invisible computers, such as printers, offering self-signed certificates.)

However, the IoT has already demonstrated a new trouble pattern here: failing to check certificates at all. A high-profile example of this pattern emerged in 2015, when researchers discovered that Samsung’s smart fridge would open an SSL-protected channel to Google—but without actually checking whether the certificate from the alleged Google server was valid [38]. This flaw permits a “man in the middle” to pretend to be Google and collect the consumer’s Gmail credentials and other personal information. Similarly, the Certifi-gate family of Android bugs featured many cases where code was authenticated via a signature and certificate, but the adversary could subvert the means by which a certificate was accepted as the right one [6]. Similar reports exist for other IoT products (e.g., [15]).
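In code, this blunder is often a one-line configuration choice. The Python sketch below contrasts proper verification with the anti-pattern (the names are generic; this is not the fridge’s actual code).

    import socket
    import ssl

    def open_checked(host: str) -> ssl.SSLSocket:
        """Correct: validate the chain and the hostname before trusting the peer."""
        ctx = ssl.create_default_context()   # CERT_REQUIRED plus hostname check
        return ctx.wrap_socket(socket.create_connection((host, 443)),
                               server_hostname=host)

    def open_unchecked(host: str) -> ssl.SSLSocket:
        """The anti-pattern: encrypted, but to whoever answered. Hello, MITM."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # accepts any certificate at all
        return ctx.wrap_socket(socket.create_connection((host, 443)),
                               server_hostname=host)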

Again, Chapter 5 will consider these trust management issues when we scale from the IoC to the IoT.

Instance: Aging of Cryptography and Protocols

Looking over the last several decades, one can see many instances where cryptography did not hold up with the passage of time. Keys that were considered long enough to last decades weakened much more quickly than expected. Algorithms such as DES slowly weakened; algorithms such as MD5 or ISO 9796-1 weakened dramatically.

The potential long lifetime of IoT devices, coupled with the difficulty of updating their software, suggests a potential new trouble pattern: devices that last longer than their cryptography.

A version of this pattern has already emerged in both the IoC and the IoT: protocols designed for flexible compatibility across many protocol versions end up backward-compatible with versions now considered insecure. In the IoT, the backend servers with which the “Hello Barbie” toys communicate turned out to be using an SSL implementation that could be tricked (via the POODLE attack) into falling back to the obsolete SSL 3.0 protocol. Researchers have found millions of instantiations of mobile applications vulnerable to the similar FREAK attack, which tricks implementations into using weak export-grade key lengths [14].
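One concrete defense is to refuse the obsolete variants outright rather than negotiate down to them. In Python’s ssl module, for example, this is a one-line setting (a sketch; the hard part in the IoT is getting such a change onto devices already in the field):

    import ssl

    ctx = ssl.create_default_context()
    # Refuse every protocol version below TLS 1.2: a peer that can only
    # speak SSL 3.0 (POODLE) or export-grade suites (FREAK) gets no
    # connection at all, rather than a quietly downgraded one.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2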

Today, one would not trust commodity encryption considered secure in 1990. Will the world of 2045 trust commodity encryption considered secure in 2017? If not, what will we do about the forever-day IoT devices we release in 2017 that are still out there in 2045? Furthermore, problems in a backend server in a data center should easily be fixable by standard IoC patch techniques—but what about IoT toys distributed in homes internationally?

A Better Future

IoT systems will almost certainly suffer from the design patterns for insecurity that have plagued the IoC.

Managing this problem will likely require a mixture of more thorough application of good engineering principles that are already known and development of new techniques and paradigms.

One straightforward approach to this problem is a renewed effort at better security awareness and education for IoT developers—crucial since their products’ attack surfaces and physical reach are so substantial.

When it comes to input validation problems, we might try more principled approaches to specifying and recognizing valid input (e.g., [46]). We might also try new kinds of fuzz testing to discover where validation has failed; for instance, for power-grid SCADA, our lab needed to develop a new kind of fuzzing tool that learned quickly from live input, since a corpus of archived canned input acceptable to a client was not available. To mitigate general blunders, we might learn from application domains such as the telephone network (which developed industry-strength formal model checking tools after an unexpected global effect from local action took down the network in the 1980s), or from the design and testing regimen used in high-reliability software such as fly-by-wire aircraft (Chapter 3).

To mitigate forever-days, we might want to consider combinations of:

  • Making sure the vulnerabilities can be discovered and patches can be created.

  • Making sure patches can be pushed.

  • Making sure someone still exists to push the patches.

  • Making sure the patches do not introduce worse, unknown bugs.

  • Making systems automatically die off, as telomeres enforce in cell biology, if they are not patched (although this feature might have the problem of annoying consumers).

Adding authentication to IoT systems will require a more careful enumeration not only of the policy requirements (revocation? delegation?), but also of the performance requirements. It may very well be that in the case of the CAN bus, as with other specialized control systems (e.g., [51]), timing and data requirements may in fact make textbook security techniques infeasible, and we will need new ways of thinking.

Surveying the recent literature for IoT security design flaws also reveals hints of mitigation techniques. For example, the researcher who discovered the debug exfiltration in the Netatmo Weather Station did so because he had an automatic guard in place looking for his WPA passphrase being transmitted in plain text. Discussions of authentication failures in home routers also include an article noting that the US Federal Trade Commission has taken action against one vendor [26]. Analyses of the weaknesses of smart home devices also can include some good-sense advice on tightening things up [31].

Fixing the future is going to take a combination of big battles and little details, including future-proofing cryptographic authentication (Chapter 5), balancing economic forces (Chapter 7), and crafting public policy and law (Chapter 8) to promote more careful software engineering.

Works Cited

  1. Aleph One, “Smashing the stack for fun and profit,” Phrack, November 8, 1996.

  2. K.-H. Baek and others, “Attacking and defending networked embedded devices,” in Proceedings of the 2nd Workshop on Embedded Systems Security, October 2007.

  3. M. Ballano Barcena and C. Wueest, Insecurity in the Internet of Things. Symantec, March 12, 2015.

  4. D. Bisson, “Attackers can pwn 60% of Android phones using critical flaw,” Graham Cluley, May 23, 2016.

  5. D. Bisson, “New Stagefright exploit threatens unpatched Android devices,” Graham Cluley, March 18, 2016.

  6. O. Bobrov and A. Bashan, Certifi-gate: Front Door Access to Pwning Millions of Android Devices. CheckPoint, July 18, 2015.

  7. C. Brook, “Authentication vulnerabilities identified in projector firmware,” ThreatPost, April 28, 2015.

  8. A. Chapman, “Hacking into internet connected light bulbs,” ConCon Blog, July 4, 2014.

  9. S. Checkoway and others, “Comprehensive experimental analysis of automotive attack surfaces,” in Proceedings of the 20th USENIX Security Symposium, 2011.

  10. C. Cimpanu, “Police body cameras shipped with pre-installed Conficker virus,” Softpedia, November 15, 2015.

  11. C. Cimpanu, “RSI videofied security alarm protocol flawed, attackers can intercept alarms,” Softpedia, November 30, 2015.

  12. C. Cimpanu, “Company takes two years to remove hard-coded root passwords from IoT thermostat,” Softpedia, February 8, 2016.

  13. C. Cimpanu, “Vulnerability in 70 CCTV DVRs traced back to Chinese firm who ignores researcher,” Softpedia, March 23, 2016.

  14. A. Connolly, “Thousands of Android and iOS apps are still vulnerable to the FREAK bug,” The Next Web, March 18, 2015.

  15. L. Constantin, “Researchers show that IoT devices are not designed with security in mind,” PC World, April 7, 2015.

  16. L. Constantin, “Cisco patches serious flaws in cable modems and home gateways,” CSO Online, March 10, 2016.

  17. L. Constantin, “FortiGuard SSH backdoor found in more Fortinet security appliances,” CSO Online, January 22, 2016.

  18. A. Costin and others, “A large-scale analysis of the security of embedded firmwares,” in Proceedings of the 23rd USENIX Security Symposium, 2014.

  19. E-ISAC, “Analysis of the cyber attack on the Ukrainian power grid,” SANS Industrial Control Systems, March 18, 2016.

  20. D. Fisher, “Default SSH key found in many Cisco security appliances,” ThreatPost, June 25, 2015.

  21. D. Fisher, “Plaintext credentials threaten RLE wind turbine HMI,” ThreatPost, June 17, 2015.

  22. J. Foster, “Someone gained access to private PLQ meetings, very easily,” CJAD News, June 17, 2016.

  23. T. Fox-Brewster, “Hacker says attacks on ‘insecure’ progressive insurance dongle in 2 million US cars could spawn road carnage,” Forbes, January 15, 2015.

  24. D. Goodin, “Backdoor in mission-critical hardware threatens power, traffic-control systems,” Ars Technica, April 25, 2012.

  25. D. Goodin, “Crypto weakness in smart LED lightbulbs exposes Wi-Fi passwords,” Ars Technica, July 7, 2014.

  26. D. Goodin, “Asus lawsuit puts entire industry on notice over shoddy router security,” Ars Technica, February 23, 2016.

  27. D. Goodin, “Samsung Smart Home flaws let hackers make keys to front door,” Ars Technica, May 2, 2016.

  28. T. Goodspeed and others, “Packets in packets: Orson Welles’ in-band signaling attacks for modern radios,” in Proceedings of the 5th USENIX Conference on Offensive Technologies, 2011.

  29. A. Greenberg, “Hackers remotely kill a Jeep on the highway—with me in it,” Wired, July 21, 2015.

  30. N. Heninger and others, “Mining your Ps and Qs: Detection of widespread weak keys in network devices (extended version),” in Proceedings of the 21st USENIX Security Symposium, 2012.

  31. S. Higginbotham, “When it comes to smart home security, cameras are the worst,” Gigaom, February 11, 2015.

  32. K. Hill, “This guy’s light bulb performed a DoS attack on his entire smart house,” Fusion, March 3, 2015.

  33. I. R. Jenkins and others, “Short paper: Speaking the local dialect: Exploiting differences between IEEE 802.15.4 receivers with commodity radios for fingerprinting, targeted attacks, and WIDS evasion,” in Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless and Mobile Networks, 2014.

  34. D. Kaminsky and others, “PKI layer cake: New collision attacks against the global X.509 infrastructure,” in Financial Cryptography, 2010.

  35. J. Kirk, “Netcore, Netis routers at serious risk from hardcoded passwords,” InfoWorld, August 26, 2014.

  36. J. Kirk, “Researchers find same RSA encryption key used 28,000 times,” ITworld, March 16, 2015.

  37. J. Kirk, “Tens of thousands of home routers at risk with duplicate SSH keys,” Network World, February 18, 2015.

  38. J. Leyden, “Samsung smart fridge leaves Gmail logins open to attack,” The Register, August 25, 2015.

  39. D. Maas, “The world’s most dangerous search engine,” San Diego CityBeat, February 6, 2013.

  40. C. Masone and S. W. Smith, “ABUSE: PKI for real-world email trust,” in EuroPKI ’09 Proceedings of the 6th European Conference on Public Key Infrastructures, Services and Applications, 2009.

  41. R. Merriam, “Software update destroys $286 million Japanese satellite,” Hackaday, May 2, 2016.

  42. C. Neagle, “Smart home hacking is easier than you think,” InfoWorld, April 3, 2015.

  43. J. Newman, “Internet-connected Hello Barbie doll can be hacked,” PC World, December 7, 2015.

  44. J. C. Norte, “Hacking industrial vehicles from the internet,” Jose Carlos Norte Personal Blog, March 6, 2016.

  45. M. Reel and J. Robertson, “It’s way too easy to hack the hospital,” Bloomberg Businessweek, November 2015.

  46. L. Sassaman and others, “Security applications of formal language theory,” IEEE Systems Journal, 2013.

  47. R. Shapiro and others, “Identifying vulnerabilities in SCADA systems via fuzz-testing,” in Critical Infrastructure Protection V, Volume 367, 2011.

  48. M. Smith, “Peeping into 73,000 unsecured security cameras thanks to default passwords,” Network World, November 6, 2014.

  49. S. Smith and J. Marchesini, The Craft of System Security. Addison-Wesley, 2008.

  50. S. W. Smith, “Pretending that systems are secure,” IEEE Security and Privacy, November/December 2005.

  51. S. W. Smith, “Room at the bottom: Authenticated encryption on slow legacy networks,” IEEE Security and Privacy, July/August 2011.

  52. S. W. Smith and R. Koppel, “Healthcare information technology’s relativity problems: A typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ,” Journal of the American Medical Informatics Association, June 2013.

  53. S. W. Smith and others, Mismorphism: A Semiotic Model of Computer Security Circumvention (Extended Version). Dartmouth Computer Science Technical Report, March 2015.

  54. P. Svenda and others, “The million-key question—Investigating the origins of RSA public keys,” in Proceedings of the 25th USENIX Security Symposium, 2016.

  55. todb, “Advantech EKI Dropbear authentication bypass,” Rapid7 Community, January 12, 2016.

  56. J. Ullrich, “Did you remove that debug code? Netatmo Weather Station sending WPA passphrase in the clear,” SANS ISC InfoSec Forums, February 12, 2015.

  57. C. Vallance, “Car hack uses digital-radio broadcasts to seize control,” BBC News, July 22, 2015.

  58. Z. Whittaker, “Widely-used patient care app found to include hidden ‘backdoor’ access,” ZDNet, May 27, 2016.

  59. V. Zhang, “High-profile mobile apps at risk due to three-year-old vulnerability,” TrendLabs Security Intelligence Blog, December 8, 2015.
