Secure coding is about increasing the complexity demanded for an attack against the application to succeed. No application can ever be truly secure. With the right resources and time, any application, including those utilizing strong encryption, can be broken. The determination of how secure an application is depends on the trade-off between the time and complexity of an attack versus the value of the resource when it is breached. For example, a list of stolen credit card numbers is very useful to an attacker—if that list is only 10 minutes old. After 24 hours, the value of this data becomes increasingly diminished, and after a week it is virtually worthless. Securing an application is about increasing the complexity needed to attack it, so that the resource—when breached—will have a significantly diminished value to the attacker. Increasing the complexity needed for an attack also reduces the pool size of potential attackers. That is, attacks requiring higher skillsets reduce the number of people capable of attacking your application.
The term mobile security, as used in the marketplace today, has fallen out of sync with this premise. For many, security has become less about attack complexity and more about reducing overhead by depending on a monoculture to provide secure interfaces. As it pertains to iOS, this monoculture consists of a common set of code classes from the manufacturer that provide password encryption routines, user interface security, file system encryption, and so on. In spite of the many great advancements in security that Apple has made, the overall dependence on the operating system has unfortunately had the opposite effect on the security of applications: it has made attacks against them less complex, and given out the keys to every single application whenever the monoculture is breached.
We use words like “encryption” as if they are inherently secure solutions to the decades-old problem of data theft, yet countless millions of seemingly encrypted credit card numbers, social security numbers, and other personal records have been stolen over the years. Application developers are taught to write secure applications, but never told that they can’t even trust their own runtime. Bolting on SSL has become the norm, even though a number of attacks against SSL have been successfully used to rip off credentials and later to empty bank accounts. Everything we are taught about security is wrong, because the implementation is usually wrong. Even well thought out implementations, such as Apple’s, have suffered from chinks in their armor, making them vulnerable to many kinds of attacks. A lot of good ideas have been put in place to protect applications on iOS devices, but at each stage are weakened by critical vulnerabilities. Because most software manufacturers operate within this monoculture, they are at risk of a breach whenever Apple is—and that is often.
Implementation is hard to get right. This is why credit card numbers are stolen millions at a time. The time and effort required for a proper implementation can increase costs and add maintenance overhead. To compensate, many developers look to the manufacturer’s implementation to handle security while they focus on the product itself. Managing data loss, however, is a disaster-recovery expense, one that is often far more costly than implementing and maintaining your own application-level security. Typically, the manufacturer isn’t held liable in the event of a security breach either, meaning your company will have to absorb the enormous cost of code fixes, mitigation of media and PR fallout, and lawsuits by your users. Isn’t it much cheaper then, in the long run, to write more secure code?
As is the case with most monocultures, security monocultures fail, and fail hard. Numerous security weaknesses have emerged on iOS-based devices over the past few years, leaving the App Store’s roughly half million applications exposed to a number of vulnerabilities inherited through the reuse of the manufacturer’s code. This isn’t a new problem either, mind you. Ever since the introduction of enterprise-grade encryption and other security features into iOS, both criminal and security enterprises have found numerous flaws in the mechanisms used to protect private data, putting the data on millions of devices at risk.
Unfortunately, the copyright engine in the United States has made it difficult to expose many of these security flaws. Apple took an aggressive legal stance against opening up the device’s otherwise private APIs and attempted to squash much of the ongoing community research into the device, claiming that methods such as jailbreaking were illegal, a violation of copyright. The Electronic Frontier Foundation (EFF) helped win new legal protections, which have allowed security researchers to divulge much of what they knew about iOS without having to hide under a rock to do it. In the wake of this battle over copyright, the forced secrecy weakened security and bred many myths and misconceptions about iOS.
As is the case with any monoculture, having millions of instances of an application relying on the same central security framework makes the framework a considerably lucrative target: hack the security, and you hack every application using it.
Since the release of the original iPhone in 2007, Apple has engaged in a cat-and-mouse game with hackers to secure its suite of devices for what has grown to nearly 100 million end users. Over this time, many improvements have been made to the security of the device, and the stakes have been raised by its introduction into circles with far greater security requirements than the device and its operating system have thus far delivered. Hardware-accelerated encryption, introduced with the iPhone 3GS along with many other features, began to address the requirements of these environments.
Software engineering principles tell us that code reuse is one of the keys to writing good software. Many managers and engineers alike also generally assume that, if a given device (or a security module within that device) is certified or validated by a government agency or consortium, its security mechanisms should be trusted for conducting secure transactions. As a developer, you may put your trust in the suite of classes provided in the iOS SDK to develop secure applications because that’s what you’re led to believe is the best approach. While code reuse has its benefits, a security-oriented monoculture creates a significant amount of risk in any environment. The thought process that typically starts this kind of monoculture seems to follow this pattern:
1. A third party validates a device’s security features and claims that they meet a certain set of requirements for certification. These requirements are generally broad enough and generic enough to focus on their conceptual function rather than their implementation.
2. The manufacturer uses this certification as an endorsement for large enterprises and government agencies, which trust in the certification.
3. Enterprises and government agencies establish requirements using the manufacturer’s interfaces as a trusted means of security, mistakenly believing that deviating from the manufacturer’s recommendation can compromise security, rather than possibly improve it.
4. Developers write their applications according to the manufacturer’s APIs, believing they are trusted because the module is certified.
Certifications of secure modules, such as those outlined in the National Institute of Standards and Technology’s FIPS 140-2 standards, operate primarily from a conceptual perspective; that is, requirements dictate how the device or module must be designed to function. When a device is hacked, the device is caused to malfunction—that is, operate in a way in which it was not designed. As a result, most certifications do not cover penetration testing, nor do they purport to certify that any given device or module is secure at all, but only that the manufacturer has conceptually designed the security module to be capable of meeting the requirements in the specification. In other words, FIPS 140-2 is about compliance, and not security.
FIPS 140-2 is a standards publication titled Security Requirements for Cryptographic Modules that outlines the requirements of four different levels of security compliance to which a cryptographic module can adhere. The FIPS certification standards were never intended, however, to certify that a given module was hacker-proof—in fact, low-level penetration testing isn’t even considered part of the standard certification process. So why do we, as developers, allow ourselves to be pigeonholed into relying on the manufacturer’s security framework when it was never certified to be secure?
The real engineering-level testing of these devices is left up to independent agencies and their red teams to perform penetration testing and auditing long after the certification process is complete. A red team is a group of penetration testers that assesses the security of a target. Historically, the target has been an organization that often doesn’t even know that its security is being tested. In recent use of the term, red teams have also been assembled to conduct technical penetration tests against devices, cryptographic modules, or other equipment. Many times, the results of such tests aren’t made publicly available, nor are they even available to the manufacturer in some cases. This can be due to information being classified, confidentiality agreements in place, or for other reasons.
Due to the confidential nature of private penetration testing (especially in the intelligence world), a security module may be riddled with holes that the manufacturer may never hear about until a hacker exploits them—perhaps years after its device is certified. If a manufacturer doesn’t embrace full disclosure and attempts to hide these flaws, or if they are not quick enough to address flaws in its operating system, the entire monoculture stands to leave hundreds of thousands of applications, spanning millions of users, exposed to vulnerabilities. This leads us to our first myths about secure computing monocultures.
Myth 1: Certifications mean a device is secure and can be trusted.
Most certifications, including FIPS 140-2 certification, are not intended to make the manufacturer responsible for a device or module being hacker-proof, and do not make that claim. They are designed only to certify that a module conforms to conceptual functional requirements that give it the capability to deliver a certain level of functionality. The certification process does not generally involve penetration testing, nor does it necessarily involve a review of the same application programming interfaces used by developers.
Myth 2: Depending on a central set of manufacturer’s security mechanisms improves the overall security of your application by reducing points of failure and using mechanisms that have been tested across multiple platforms, in multiple attack scenarios.
Relying on a monoculture actually just makes you a bigger target and your code a simpler mark for an attacker. Whether a particular security mechanism is secure today is irrelevant: in a monoculture, the payoff is much bigger, so its mechanisms will be targeted more often. When they are cracked, so are all of the applications relying on them. In addition, you’ll have to wait for the manufacturer to fix the flaw, which could take months, before the data your application uses is secure again.
Apple has incorporated four layers of security in iOS to protect the user and their data:

1. Device security: techniques to prevent an unauthorized individual from using the device
2. Data security: techniques to protect the data stored on the device, even if the device is stolen
3. Network security: tools to encrypt data while it is in transit across a network
4. Application security: mechanisms to secure the operating system and isolate applications while they are running
Apple’s device security mechanisms help ensure that a user’s device can’t be used by an unauthorized party. The most common device security mechanism is the device’s PIN lock or passcode. Apple allows these locks to be forced on as part of an enterprise policy or set manually by individual users. Enterprises can force a passcode to have a minimum length, alphanumeric composition, and complex characters, and can even set maximum age and history policies for a passcode. Users can additionally set the device to wipe itself automatically if the wrong passcode is entered too many times.
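The policy knobs just described can be modeled as a simple validator. This is an illustrative sketch only; the function name and policy fields are hypothetical, not Apple’s actual configuration-profile keys:

```python
# Illustrative sketch of an enterprise passcode policy check.
# The parameter names are hypothetical, not Apple's actual
# configuration-profile keys.

def passcode_ok(passcode, min_length=8, require_alphanumeric=True,
                require_complex=True):
    """Return True if the passcode satisfies the (hypothetical) policy."""
    if len(passcode) < min_length:
        return False
    if require_alphanumeric and not (any(c.isalpha() for c in passcode)
                                     and any(c.isdigit() for c in passcode)):
        return False
    # "Complex characters" here means at least one non-alphanumeric symbol.
    if require_complex and not any(not c.isalnum() for c in passcode):
        return False
    return True

print(passcode_ok("1234"))          # too short, digits only: False
print(passcode_ok("Tr1cky-pass"))   # satisfies all three checks: True
```

A real deployment would express these rules in a signed configuration profile rather than application code; the sketch only shows why each policy dimension (length, composition, complexity) prunes the space of weak passcodes.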
In addition to passcode locks, Apple’s device security strategy also includes the use of signed configuration profiles, allowing large enterprises to centrally distribute VPN, WiFi, email, and other configurations to devices in a secure fashion. Central configurations can restrict the device from using certain insecure functionality, such as disabling YouTube or the device’s camera. Installation of third-party applications can also be restricted, further mitigating the risk from unsanctioned applications on the device.
Data security is a primary focus of secure applications, and therefore a primary focus of this book. Apple has incorporated a number of data security approaches to protect sensitive data on the device, with the goal of protecting data even if the device is stolen. These mechanisms include a remote wipe function, encryption, and data protection.
Apple’s remote wipe feature allows the device to be wiped once the owner discovers it has been stolen, or if too many passcode attempts fail. The device can also be wiped locally by the user within a very short amount of time (usually less than 30 seconds).
The encryption feature causes all data on the device to be encrypted, a feature requirement for many types of certifications. In addition to the data being encrypted, data backed up through iTunes can also be encrypted. A password is set through iTunes, and stored on the device. Whenever a backup is made, the password on the device is used to encrypt the data. Regardless of what desktop computer is performing the backup, the mobile device itself retains the original encryption key that was set when it was activated.
Apple’s data protection mechanisms are one of the most notable (and most targeted) security mechanisms on iOS devices. Data protection uses a hardware encryption accelerator shipped with all iPhone 3GS and newer devices to encrypt selected application data; this functionality is used by Apple itself as well as made available to developers. By combining certain encryption keys stored on the device with a passcode set by the user, the system can ensure that certain protected files on the filesystem can be decrypted only after the user enters her passcode. The concept behind the passcode is that a device can be trusted only until a user puts it down. Protecting certain files in this manner helps to ensure that the user of the device knows something an authorized user would know.
The effectiveness of Apple’s data protection encryption largely depends on the complexity of the passcode selected by the user. Simple four-digit PIN codes, as one might surmise, can be easily broken, as can passwords using dictionary words or other patterns attacked by password cracking tools. There are also a number of ways to hijack data without knowing the passcode at all.
Although the entire filesystem is encrypted, only certain files receive Apple’s data protection. The only data files protected on a new device are the user’s email and email attachments. Third-party applications must explicitly add code to enable data protection for the specific data files they wish to protect.
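The idea of entangling a file’s encryption key with both a hardware key and the user’s passcode can be sketched conceptually. This is a toy model: iOS actually wraps per-file AES keys, while the sketch below uses PBKDF2 plus an XOR wrap purely for brevity, and `device_uid` is a stand-in value, not a real hardware key:

```python
import hashlib
import secrets

# Toy model of passcode-entangled data protection. Real iOS wraps
# per-file AES keys; XOR wrapping is used here only to keep the
# sketch short. The device_uid value is a stand-in.

device_uid = b"unique-per-device-hardware-key"

def passcode_key(passcode: str) -> bytes:
    # Entangle the passcode with the device key so the derived key
    # can only be computed on (or with access to) this device.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_uid, 10_000)

file_key = secrets.token_bytes(32)    # random per-file key
wrapped = bytes(a ^ b for a, b in zip(file_key, passcode_key("0000")))

# Unwrapping with the right passcode recovers the file key...
assert bytes(a ^ b for a, b in zip(wrapped, passcode_key("0000"))) == file_key
# ...while a wrong passcode yields garbage.
assert bytes(a ^ b for a, b in zip(wrapped, passcode_key("1111"))) != file_key
```

The point of the model is the dependency chain: without the passcode, the wrapped file key cannot be recovered, which is exactly why only passcode-entangled files survive theft of the device.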
Network security has been around as long as networking, and Apple has incorporated many of the same solutions used in secure networking into iOS. These include VPN, SSL/TLS transport encryption, and WEP/WPA/WPA2 wireless encryption and authentication. We will touch on some of the techniques used to penetrate network security in this book, but a number of books exist solely on this topic, as they apply to nearly every device and operating system connected to the Internet.
On an application level, App Store applications are run in a sandbox. Sandboxing refers to an environment where code is deemed untrusted and is therefore isolated from other processes and resources available to the operating system. Apple’s sandbox limits the amount of memory and CPU cycles an application can use, and also restricts it from accessing files from outside of its dedicated home directory. Apple provides classes to interface with the camera, GPS, and other resources on the device, but prevents the application from accessing many components directly. Applications running in the sandbox cannot access other applications or their data, nor can they access system files and other resources.
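One of the sandbox rules above, confinement to the application’s home directory, can be modeled as a path check. iOS enforces this in the kernel rather than with a userland path comparison, and the home directory path below is hypothetical; the sketch only illustrates the confinement rule:

```python
import os

# Conceptual model of one sandbox rule: an app may only open files
# under its own home directory. Illustrative only; the path is
# hypothetical and iOS enforces this in the kernel.

APP_HOME = "/var/mobile/Applications/com.example.app"

def sandbox_allows(path):
    # Resolve the request relative to the app's home directory and
    # verify it did not escape (e.g., via "../" traversal).
    resolved = os.path.normpath(os.path.join(APP_HOME, path))
    return os.path.commonpath([APP_HOME, resolved]) == APP_HOME

print(sandbox_allows("Documents/notes.txt"))   # inside home dir: True
print(sandbox_allows("../../../etc/passwd"))   # escapes home dir: False
```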
In addition to restricting the resources an application can access on the device, Apple has incorporated application signing to police the binary code allowed to run on the device. In order for an application to be permitted to run under iOS, it must be signed by Apple or with a certificate issued by Apple. This was done to ensure that applications have not been modified from their original binary. Apple also performs runtime checks to test the integrity of an application to ensure that unsigned code hasn’t been injected into it.
As part of application security, Apple has incorporated an encrypted keychain providing a central facility for storing and retrieving encrypted passwords, networking credentials, and other information. Apple’s Security framework provides low-level functionality for reading and writing data to and from the keychain and performing encryption and decryption. Data in the keychain is logically zoned so that an application cannot access encrypted data stored by a different application.
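The logical zoning of keychain data can be modeled as items partitioned by the identifier of the application that stored them. This sketch models the behavior only; the class and method names are hypothetical and are not Apple’s Security framework API:

```python
# Conceptual model of keychain zoning: items are partitioned by the
# application identifier that stored them. Hypothetical names; this
# is not Apple's Security framework API.

class Keychain:
    def __init__(self):
        self._items = {}   # (app_id, service, account) -> secret

    def set_item(self, app_id, service, account, secret):
        self._items[(app_id, service, account)] = secret

    def get_item(self, app_id, service, account):
        # An app can only retrieve items stored under its own identifier.
        return self._items.get((app_id, service, account))

kc = Keychain()
kc.set_item("com.example.bank", "api", "alice", "s3cret")
print(kc.get_item("com.example.bank", "api", "alice"))   # s3cret
print(kc.get_item("com.example.game", "api", "alice"))   # None
```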
Apple’s Common Crypto architecture provides common cryptographic APIs for developers who want to add custom encryption to their applications. The Common Crypto architecture includes AES, 3DES, and RC4 encryption. Apple has also married this framework to the device’s hardware-accelerated encryption capabilities, providing accelerated AES encryption and SHA1 hashing, both of which are used by Apple internally as part of their underlying data security framework.
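Common Crypto itself is a C API and isn’t available off-device; as a rough illustration of one operation it accelerates, the same SHA1 digest can be computed with Python’s hashlib:

```python
import hashlib

# Illustrative equivalent of one operation Common Crypto accelerates:
# a SHA1 digest. (hashlib stands in here; it is not Common Crypto.)
digest = hashlib.sha1(b"hello").hexdigest()
print(digest)  # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```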
Securing data at rest comes down to the effectiveness of the encryption protecting it. The effectiveness of the encryption largely depends on the strength and secrecy of the key. The filesystem encryption used in iOS as of versions 4 and 5 rests entirely on these keys. Only select files, such as the user’s email and attachments, are encrypted in a way that takes the device passcode into consideration. The rest of the user’s data is at risk from the classic problem of storing the lock with the key.
All iOS-based devices ship with two built-in keys: a GID key, which is shared by all devices of the same model, and a UID key, which is unique to the device (a hardware key). Additional keys are computed when the device is booted. These derived keys depend on the GID and UID keys, and not on a passcode or PIN; they must be available before the user even enters a passcode, so that the device can boot and operate. A key hierarchy is built upon all of these keys, with the UID and GID keys at the top. Keys at the top are used to calculate other keys, which in turn protect randomly generated keys used to encrypt data. One important key, called the Dkey, is the master encryption key used to encrypt all files that are not specifically protected with Apple’s data protection. This is nearly every user data file, except for email and attachments, or any files that third-party applications specifically protect. The Dkey is stored in effaceable storage to make wiping the filesystem a quick process. Effaceable storage is a region of flash memory on the device that allows small amounts of data to be stored and quickly erased (for example, during a wipe). The Dkey sits in a locker in the effaceable storage along with other keys used to encrypt the underlying filesystem.
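The hierarchy described in this paragraph can be sketched as follows. The key names (UID, GID, Dkey) mirror the text, but the derivation function and labels are illustrative assumptions, not Apple’s actual algorithms:

```python
import hashlib
import hmac
import secrets

# Toy model of the key hierarchy described above. Key names mirror
# the text; the HMAC derivation and labels are illustrative, not
# Apple's actual algorithms.

uid_key = secrets.token_bytes(32)                 # unique per device
gid_key = b"shared-by-every-device-of-a-model"    # shared per model

def derive(parent: bytes, label: bytes) -> bytes:
    # Child keys depend only on their parents, not on any passcode,
    # so they are available as soon as the device boots.
    return hmac.new(parent, label, hashlib.sha256).digest()

boot_key = derive(uid_key, b"boot")         # UID-derived, device-bound
model_key = derive(gid_key, b"firmware")    # GID-derived, model-wide

# The Dkey (master key for files without data protection) lives in
# effaceable storage, a small flash region erasable almost instantly.
effaceable_storage = {"Dkey": secrets.token_bytes(32)}

# A "wipe" only has to erase this locker; the bulk of the encrypted
# filesystem then becomes unrecoverable ciphertext.
effaceable_storage.clear()
assert "Dkey" not in effaceable_storage
```

Note what is absent from the model: no passcode appears anywhere in the chain above, which is exactly why everything protected only by the Dkey is at risk when the device falls into the wrong hands.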
You may have the most secure deadbolt on the market protecting your front door. Perhaps this $799 lock is pick-proof, tool-proof, and built to extreme tolerances making it impossible to open without the key. Now take a spare key and hide it under your doormat. You’ve now made all of the expensive security you paid for entirely irrelevant. This is much the same problem in the digital world that we used to see with digital rights management, which has now made its way into mobile security. People who pay for expensive locks shouldn’t place a spare key under the mat.
Apple has a lot of experience with digital rights management, much more than with mobile security, in fact. The iTunes store existed for years prior to the iPhone, and allows songs to be encrypted and distributed to the user, providing them the keys to play the music only after authenticating. Over time, those who didn’t like to be told what they could and couldn’t do with their music ended up writing many tools to free their music. These tools removed the encryption from songs downloaded through iTunes so that the user could copy it to another machine, back it up, or play it with third-party software. Such tools depend largely on two things the user already has: the encrypted music, and the keys to each song.
The filesystem encryption in iOS is very similar to iTunes Digital Rights Management (DRM), in that the master keys to the filesystem’s encryption are stored on the device—the lock and key together, just as they are in DRM. The key to decrypting the filesystem, therefore, is in knowing where to find the keys. It’s much simpler than that, as you’ll see in this book. In fact, we aren’t dealing with a $799 lock that is pick-proof, and there are many ways to convince the operating system to decrypt the filesystem for you, without even looking for a key. Think “open sesame”.
Myth 3: The iOS file system encryption prevents data on the device from being stolen.
Because iOS filesystem encryption (up to and including iOS 5) stores both the keys and the data on the same device, an attacker needs only to gain the privilege to run code on the device with escalated permissions in order to compute the keys and steal data. And because these keys are digital, whoever has possession of the device has both the lock and the key.
With a mobile device, the trade-off between security and convenience is more pronounced than it is on a desktop machine with a full keyboard. The device’s smaller on-screen keyboard, combined with its mobile form factor, makes unlocking it a productivity nightmare for an enterprise. An average user works on a mobile device in short bursts, perhaps a text message or an email at a time, before placing it back in his pocket. To adequately secure a device, it must be unlocked by a password on each and every use, or at the very least every 15 minutes. This generally leads to one inevitable result: weak passwords.
As a result of the inconvenience of unlocking a device several hundred times per day, many enterprises resort to allowing a simple four-digit PIN, a simple word, or a password mirroring an easy-to-type pattern on the keyboard (dot-space-mzlapq, anyone?). All of these have historically been cracked by password cracking tools in a fraction of the time a complex password would take. And while only a few select files are encrypted using Apple’s data protection APIs, even the ones that are protected aren’t protected much better.
Consider a four-digit PIN, which is the “simple passcode” default when using iOS. A four-digit numeric PIN has only 10,000 possibilities. Existing tools, which you’ll learn about in this book, can iterate through all 10,000 codes in a little less than 20 minutes. Whether you’ve stolen a device or just borrowed it for a little while, this is an extremely short amount of time to steal all of the device’s encryption keys. The problem, however, is that most users will defer to a four-digit PIN, or the simplest complex passcode they can get away with. Why? Because it’s not their job to understand how the iOS passcode is tied to the encryption of their credit card information.
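The numbers above work out as simple arithmetic. The comparison with a longer passcode is hypothetical and assumes an attacker guessing at the same rate:

```python
# Back-of-the-envelope arithmetic for the brute-force claim above.
# The ~20-minute figure implies roughly this per-guess rate:

pin_space = 10 ** 4                  # four-digit numeric PINs
total_seconds = 20 * 60              # ~20 minutes, per the text
per_guess = total_seconds / pin_space
print(f"{per_guess:.2f} s per guess")            # 0.12 s per guess

# Hypothetical comparison: a 6-character mixed-case alphanumeric
# passcode at the same guessing rate.
space = (26 + 26 + 10) ** 6
years = space * per_guess / 86400 / 365
print(f"{years:.0f} years worst case")           # 216 years worst case
```

The asymmetry is the whole point: adding length and character classes grows the search space exponentially, while the attacker’s per-guess cost stays fixed.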
Your users are going to use weak passwords, so you’ll need to either accept this as a fact of life, or prevent it from happening. Unless they’re bound to an enterprise policy forbidding their use, the average user is going to stick with what’s convenient. The inconvenience of corporately owned devices, in fact, is precisely why more employees are using personal devices in the workplace.
Myth 4: Users who are interested in security will use a complex passcode.
Most users, including many criminals, still use a simple four-digit PIN or an easy-to-crack complex passcode to protect the device. A significant reason for this is that users don’t make the association between the passcode they set and the strength of the encryption on the device. They assume that the mere requirement to enter a passcode is enough of a barrier to discourage others from breaking into the device. This is true for casual passersby and nosy TSA agents needing a little intimacy, but not nearly enough for serious criminals. Because of the impediment to productivity when using a complex passcode, expect that your users will, in general, defer to simple PIN codes or easily breakable passcodes.
Myth 5: Using a strong passcode ensures the user’s data will be safe.
As you’ve just learned, the passcode is incorporated into the encryption for only a very few files, even in iOS 5. These include email, attachments, and any files specifically designated by third-party applications to use Apple’s data protection. The vast majority of user data on the device can still be stolen even if the strongest, most complex passcode is used. Chapter 5 will introduce you to methods that can steal these protected files, as well, without ever cracking the passcode.
Your application might be the most secure application ever written, but unbeknownst to you, the operating system is unintentionally working against your security. I’ve tested many applications that were otherwise securely written, but leaked clear text copies of confidential information into the operating system’s caches. You’ll learn about the different caches in Chapter 4. From web caches that store web page data, to keyboard caches that store everything you type, much of the information that goes through the device can be recovered from cached copies on disk, regardless of how strong your encryption of the original files was.
In addition to forensic trace data, you might also be surprised to find that deleted data can still be carved out of the device. Apple has made some significant improvements to its encrypted filesystem, where each file now has its own encryption key; making a file unrecoverable is as easy as destroying that key. Unfortunately for developers, traces of these keys can still be recovered, allowing the files they once protected to be decrypted. You’ll learn more about journal carving in Chapter 6.
Myth 6: If an application implements encryption securely, data cannot be recovered from the device.
Copies of some of the data your application works with, including information typed into the keyboard, and your application’s screen contents, can be cached unencrypted in other portions of disk, making it difficult to guarantee any of your application’s data is truly secure.
Myth 7: Once data is deleted on an encrypted filesystem, it’s gone forever.
Even if you’re familiar with how deleted data can be recovered from most filesystems, you may be surprised to know that encryption keys used to encrypt files in iOS can be recovered, even after the file has been deleted. Again, the operating system itself is working against the device’s encryption by caching these transactions in other places.
Even the strongest safe deposit box can be opened with the right key. Your valuables might be safe in the strongest, most fortified bank in the world, but if the key is sitting on the bar with your car keys, it only takes a simple and quick attack to defeat every layer of the bank’s multimillion dollar security. Swiping your key, watching you sign your bill, and forging a fake identification is much easier than defeating a bank’s security system, drilling through six-inch steel walls, and breaking into the right safe deposit box.
Not all the data you wish to protect is on the device, but usernames, passwords, and URLs to remote resources can be. All too often, developers make the painstaking effort to encrypt all of the user’s confidential data on the device, but then compile in strings containing URLs, global usernames and passwords, or other back doors, such as credentials for credit card processing systems or other global systems. Another common mistake is to write a thin client that stores no user data on the device, yet makes an exception for the user’s password and/or session cookies, or contains common bugs that leave the application susceptible to a man-in-the-middle attack. This makes the nightmare worse, because once credentials are stolen (possibly unbeknownst to the device’s owner), the remote resources tied to those credentials can be accessed from anywhere.
Myth 8: If I don’t store any data on the device, the user’s data is safe.
Mitigating a data breach is much easier when the data is isolated on the stolen device. When credentials to resources spread out across the world are stolen, however, mitigation becomes a high-maintenance nightmare. If your application includes hardcoded “back door” credentials to remote systems, for example, the breach can sometimes require a massive interruption and redeployment of services to fix, in addition to waiting for software updates to be approved.
When a device is stolen, you have a considerable breach on your hands; possibly an even bigger breach if server credentials are exposed. Securing remote data is just as important as securing the data on the device.
Apart from the most paranoid users (of which you will be, if you are reading this book), most inherently trust the networks their traffic runs across, especially if the network is a cellular network. In spite of the many cellular hacking tools and how-tos widely available today, many still believe that seeing their carrier name at the top of the device’s menu bar is secure enough. You’ll learn how easy it is to redirect traffic bound for the user’s cellular network to your own proxy in Chapter 9.
Myth 9: Only extremely elite hackers can hack a cellular network to intercept traffic.
Chapter 9 will show you how simple it is to redirect all of a device’s traffic to a malicious server transparently; even when a device is used over a cellular network. No network should be trusted, especially if the device’s provisioning can be changed by simply clicking on a link or sending an email.
As you may have guessed, having physical access to a device greatly increases the risk posed to a user’s data. Developers will even dismiss more secure approaches to development in the belief that a user will know if her device has been stolen and can issue a remote wipe or change passwords before the data can be cracked. This is a dangerous assumption.
The problem is this: there is no time! Data can be stolen very quickly on an iOS device—in as little as a couple of minutes alone with the device. Your encrypted keychain credentials can be lifted almost instantly—this includes website passwords, session data, and other information. Depending on the amount of data stored on a device, it could take as little as 5 or 10 minutes to steal the entire filesystem. You’ll learn how to do this in Chapter 3.
Because it takes so little time to steal data off a device, it’s also very easy to do without the owner’s knowledge. Imagine a pickpocket who could easily swipe the device, steal its data, and return it to the owner’s pocket, all before leaving the coffee shop.
Another popular attack, which you’ll also learn about in this book, involves simple social engineering with another iPhone. It’s very easy to swap phones with a target and steal their PIN or passcode, image their device, or even inject spyware all within minutes and without their knowledge.
Once a device is stolen, it’s easy to disable a remote wipe: simply turn it off. This can be done with or without a passcode. Everything a data thief needs to steal data off the device can be done without the device’s operating system even booting up.
Myth 10: A criminal would have to steal and hack on your device for days or months to access your personal data, which may be obsolete by then.
In as little as a couple of minutes, a criminal can steal all of your website and application passwords. Given a few more minutes, a criminal can steal a decrypted copy of the data on the device. Data can be ripped so fast that it often happens without the user’s knowledge. Spyware, as you’ll learn, is not difficult to inject, and can siphon personal data for months without the user ever knowing.
Myth 11: Remote wipe and data erasure features will protect your data in the event of a theft.
Remote wipe can be easily thwarted by simply turning the device off or placing it in airplane mode. In fact, the device’s operating system doesn’t even need to boot in order to steal data from it. When stealing data from iOS devices using many of the methods in this book, the passcode does not need to be entered at all, rendering the iOS “Erase Data” feature dormant.
If you can’t trust your own application, who can you trust? After all, Apple has digitally signed your application, and if any modifications are made to it (say, causing it to bypass certain security checks), the application should cease to run. Not so, and this is a dangerous assumption made by many developers. I’ve seen it time and time again in applications I review: passcode screens that serve only as a weak GUI lock, methods that check whether certain features are enabled, and, more importantly, security checks on financial transactions that should take place on a remote server instead of on the phone. All of these and more can be easily manipulated. App Store developers have even found ways to manipulate their own applications to sneak in code that Apple hasn’t reviewed.
You’ll learn as early as Chapter 2 that Apple’s signing mechanism can be disabled, either by a criminal hacker or by jailbreaking your device, allowing any modifications to be made to the binary itself or, more importantly, to the runtime. In fact, manipulating an application’s runtime has never been easier than in the Objective-C environment. Objective-C is a reflective language, able to perceive and modify its own state as the application runs. You’ll learn about tools in Chapters 7 and 8 to manipulate the runtime of an application, allowing a hacker to bypass UIViewController screens (or any other screen), throw new objects onto the key window, instantiate and manipulate objects of any kind, change the values of variables, and even override methods in your application to inject their own.
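Python is reflective in much the same way, so the principle can be shown in a short Python analogy to Objective-C method swizzling; the class and method names below are invented for illustration. Any check that lives inside the client process can be replaced at runtime, which is why a GUI passcode screen is only a suggestion:

```python
class PasscodeViewController:
    """Stand-in for a GUI passcode screen; the check lives in the client."""
    def __init__(self, passcode):
        self._passcode = passcode

    def validate(self, attempt):
        return attempt == self._passcode

screen = PasscodeViewController("1337")
assert screen.validate("0000") is False   # locked out, as designed

# An attacker with control of the runtime simply swaps the method out,
# the moral equivalent of Objective-C method swizzling:
PasscodeViewController.validate = lambda self, attempt: True
assert screen.validate("0000") is True    # the "lock" is gone
```

Nothing in the data was decrypted and no passcode was guessed; the gatekeeper itself was rewritten in memory, which is exactly what the tools in Chapters 7 and 8 let an attacker do to a live Objective-C process.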
Why would a user hack her own application? Well, that is possible, but think more in terms of a criminal running a copy of a victim’s stolen application, with her stolen data. Another common scenario involves malware running on a device to hijack an application. You’ll see many examples in the chapters to come. One of the most notable examples includes manipulating a stolen copy of a merchant’s credit card application to refund the attacker thousands of dollars in products she did not purchase from the merchant, which would be transferred from the merchant’s account, still linked to the stolen application.
Myth 12: Applications can securely manage access control and enforce process rules.
Applications can be easily manipulated to bypass any kind of access control or sanity check, whether on the victim’s device or on a copy running on an attacker’s device at a later time. Manipulating Objective-C applications is very easy, and much more is at risk than just hacking free hours into your Internet music player.
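The practical defense is to treat the client as untrusted and enforce process rules on the server. A minimal sketch of the refund scenario described above, with hypothetical class and method names: the server authorizes refunds only against purchases it recorded itself, so a manipulated client cannot invent one.

```python
class RefundServer:
    """Server-side ledger; the client never decides what is refundable."""
    def __init__(self):
        self._purchases = {}   # transaction_id -> refundable amount

    def record_purchase(self, txn_id, amount):
        self._purchases[txn_id] = amount

    def refund(self, txn_id, amount):
        # Enforce the rule here, not in the (modifiable) client app.
        recorded = self._purchases.get(txn_id)
        if recorded is None or amount > recorded:
            return False
        self._purchases[txn_id] = recorded - amount
        return True

server = RefundServer()
server.record_purchase("txn-001", 25.00)
assert server.refund("txn-001", 25.00) is True      # legitimate refund
assert server.refund("txn-999", 5000.00) is False   # invented transaction: denied
```

No matter how thoroughly the attacker swizzles the client, the ledger and the rule both live out of reach on the server.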
We’ve established that stolen or “borrowed” devices are easy to hack. Physical security is the most common reason developers dismiss the threat of stolen data; after all, if someone can steal your wallet with your credit cards, you’re also in for a considerable headache. Historically, a limited number of remote code injection vulnerabilities have been discovered and exploited on iOS. Fortunately, the good guys have found the ones we presently know about, but that is not to say criminal hackers won’t find future remote code injection exploits. The most notable of these exploits include the following:
A TIFF image processing vulnerability, several years old, was discovered to exist in an older copy of the libraries used by applications in earlier versions of iOS. It allowed an attacker to load and execute code whenever the device loaded a resource from the Safari web browser, and could also have been used to exploit the Mail application. Fortunately, it was the jailbreaking community that discovered the vulnerability. Their response was the website http://www.jailbreakme.com, which users could visit to exploit their own devices. For a time, this exploit was used to let users jailbreak their mobile devices so that third-party software could run on them. The downloaded software also fixed the vulnerability months before Apple did, so that more malicious groups couldn’t exploit it.
An SSH worm was released into the wild, taking advantage of jailbroken devices running SSH whose owners had not changed the default password. The worm turned every infected device into a node on AT&T’s network that sought out and infected other iPhones. It has since been added to Metasploit, where anyone can turn it into a tool to steal private data from an iOS device, install a rootkit to provide remote access, or mount any number of other attacks.
In 2009, Charlie Miller presented a talk at DefCon demonstrating how a malformed SMS text message could execute code remotely on a device. What made this exploit unique was that it could be pushed to the user; the user did not need to visit a URL or open an email attachment. Miller told Forbes, “This is serious. The only thing you can do to prevent it is turn off your phone. Someone could pretty quickly take over every iPhone in the world with this.” Fortunately, Apple released a firmware update the very next day, unlike other vulnerabilities, which have taken months to patch. Had the bad guys known about this first, they could have stolen every iPhone user’s personal data simply by texting one user with a worm payload.
In 2011, a remote code injection exploit was crafted from a PDF processing vulnerability, allowing an attacker to load and execute code on any iOS device simply by viewing a PDF in the Safari web browser or opening one as an attachment in the Mail application. This exploit, too, was posted on the popular website http://www.jailbreakme.com, where the hacking community both delivered a patch that fixed the vulnerability months before Apple did and used it to let users jailbreak their devices. The vulnerability affected firmware up to and including version 4.3.3.
Also in 2011, Charlie Miller discovered a vulnerability in the way the Nitro JIT compiler was implemented in iOS, allowing an otherwise innocuous-looking application to download and run malicious, unsigned code from a server, presumably with escalated privileges. Miller released an application into the App Store to demonstrate this, subjecting millions of end users to a potential malware infection. He was subsequently banned from the App Store for one year.
Myth 13: Physical possession combined with Apple’s existing security mechanisms are enough to prevent data theft.
Although remote code injection exploits typically surface only once or twice a year, they are capable of affecting a very large number of devices across a worldwide network, causing irreparable damage in the event of a data breach. When these exploits drop, they hit hard; imagine millions of your users all exploited in the same week. This has been the case with recent 0-day exploits. Fortunately, the security community has released them first, in order to provoke a quick response from Apple. Your application might not be so lucky next time, and we really have no idea how many code injection exploits are being quietly used to attack devices right now.
Apple has implemented some great security mechanisms in its operating system, but like any mechanism, they are subject to attack. By depending solely on solutions such as the keychain, passcode keys, and encrypted filesystems, the collective pool of applications stands at risk from any one of many points of failure within Apple’s opaque architecture. Implementation is key to making any form of security effective. Without a flawless implementation, terms like “hardware encryption” mean nothing to criminal hackers, and provide no real-world protection against those who can find flaws in it. Application security can be improved only by soberly understanding the shortcomings of the current implementations and either coding to compensate for them or writing our own implementations that work better.
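One way to code around the monoculture is to layer your own encryption on top of the operating system’s, keyed from a secret the OS never stores: a key derived from a user-supplied passphrase. A minimal sketch of the derivation step, using PBKDF2 from Python’s standard library as a stand-in for whatever KDF your platform provides (the passphrase, salt handling, and iteration count here are illustrative and should be tuned to the hardware):

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit encryption key from a user passphrase. Because the
    key exists only while the user is present to enter the passphrase, a
    stolen device image contains ciphertext that the OS alone, even fully
    compromised, cannot decrypt."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)           # not secret; stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32           # 256-bit key, suitable for, e.g., AES-256
assert key != derive_key("wrong passphrase", salt)
```

An attacker who lifts the filesystem now faces an offline brute-force of the passphrase rather than a single point of failure in the manufacturer’s implementation.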
Apple has done a good job with what is an otherwise sophisticated implementation of a security framework, but iOS still suffers from flaws. With nearly 100 million iPhone devices sold and over a half million applications in Apple’s App Store, many different interest groups, ranging from forensic software manufacturers to criminal hackers, have targeted iOS security. By relying on the manufacturer’s implementation alone, many developers have consigned the customer data stored within their applications to an untimely demise.
It’s easier to shoot a big fish in a little pond than the opposite. The chapters to follow will teach you how criminals can hack into iOS to steal data and hijack applications, but more importantly will teach you, the developer, how to better secure your applications to lower the risk of exposure.