Chapter 1. Current Application Threats and Challenges

Code Complexity, Microservices, and Third-Party Libraries

The growth of open source code has been exponential over the past decade. For developers, this means an abundance of choices about which libraries to use to minimize development effort. If developers can use a library that manages the ugly underpinnings of encryption, they will. Ultimately, developers are tasked with facilitating business outcomes. Any digital plumbing that they can take advantage of in the form of third-party libraries is a boon for productivity.

Savvy attackers are keenly aware of this and are constantly looking for zero-day or even published vulnerabilities that they can exploit in commonly used libraries. OpenSSL is a prime example. The OpenSSL library handles core encryption functions so that developers don’t need to, and it is one of the most commonly used third-party libraries. As a result, any vulnerability discovered in such a widely deployed core security library carries serious security ramifications.

The Heartbleed Bug was a serious vulnerability in the OpenSSL cryptographic software library. The vulnerability allowed for the theft of information protected by the popular Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols that are used to secure much of the communication on the web.

Initially, keeping track of these libraries wasn’t that difficult because there was a core set that most developers used. Now, however, the number of third-party libraries used in common development projects has grown so dramatically that keeping track of all of them from a security and vulnerability standpoint has become a significant challenge. For the moment, this involves in-house developed applications in particular, but it can affect shrink-wrapped applications or cloud services that use these libraries, as well.
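
To make the tracking problem concrete, here is a minimal Python sketch, using only the standard library (Python 3.8 or later), that inventories installed packages and their versions so the list can be cross-referenced against a vulnerability database. Real software composition analysis tools do far more than this; the sketch is just the starting point:

    # A minimal dependency inventory: list every installed Python
    # package and its version so the output can be checked against
    # vulnerability databases. Requires Python 3.8+ for importlib.metadata.
    from importlib.metadata import distributions

    def inventory():
        """Return sorted (package, version) pairs for installed packages."""
        return sorted(
            (dist.metadata["Name"], dist.version) for dist in distributions()
        )

    if __name__ == "__main__":
        for name, version in inventory():
            print(f"{name}=={version}")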

One of the newer trends in the development world is the use of microservices. Microservices, as the name suggests, involve deploying multiple small, discrete services. Microservices allow development teams to deploy new functionality iteratively and in quick, small sprints. This is in contrast to Waterfall development, which was traditionally a “big bang” release model. Each microservice potentially represents its own unique attack surface that can be exploited. Developers can and will incorporate third-party libraries in these respective microservices as needed. In modern DevOps environments, separate development teams often operate independently and publish their microservice APIs to others, which allows for a loose-coupling methodology. The security ramification here is that more individual attack surfaces and vulnerable third-party libraries can be introduced, exposing your organization to additional risk.
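
To ground the idea, here is a toy microservice sketch. It assumes the popular Flask library is installed, and the /orders endpoint is purely hypothetical; the point is that every endpoint a team publishes is one more piece of attack surface to defend:

    # A toy microservice sketch, assuming the Flask library is installed.
    # Each such service publishes a small API and is deployed
    # independently -- and each one is its own attack surface.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/orders/<int:order_id>")
    def get_order(order_id):
        # A real service would query a datastore; a canned response
        # keeps the sketch self-contained.
        return jsonify({"order_id": order_id, "status": "shipped"})

    if __name__ == "__main__":
        app.run(port=5000)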

Let’s assume for now that we are just focusing on in-house development. Application development is the realm of the Wild West. Traditionally, anything goes if it helps rush functional bits of software out the door via Agile development methodologies. Developers have full freedom to pull third-party libraries from anywhere on the web. What if they are downloading and using versions of these libraries that have been modified with backdoors or other malicious code? What if they are downloading and using older versions with known vulnerabilities? Cybersecurity engineers and Security Operations Center (SOC) analysts wake up in cold sweats over issues like these. They are very real, and they need to be addressed either in the software development life cycle or by way of patching and remediation: essentially, prevention, or detection and correction.

Some good news here is that, with the advent of DevOps, the ability to lock down source libraries through programmatically managed pipelines and build processes has greatly increased. But, even then, it’s all about design and process. Best practices in DevOps that help address the introduction of security risks via third-party libraries include the following:

  1. Compiling the libraries from source code.

  2. Ensuring that the source code is pulled from a trusted or authoritative source (see the integrity-check sketch after this list).

  3. Always using the latest versions of third-party libraries.

  4. Using DevOps best practices such as constant environmental refresh or “repaving” to rebuild images (operating system images or containers) that are immutable in nature.
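
To illustrate the second practice, here is a minimal Python sketch of an integrity check that a build step might perform before using a downloaded archive. The URL and digest below are hypothetical placeholders; in practice, the known-good digest would come from the project’s signed release artifacts or your internal artifact repository:

    # A minimal sketch of integrity checking for a third-party library
    # archive. The URL and expected digest are hypothetical placeholders;
    # a real build would source the digest from signed release notes or
    # an internal artifact repository.
    import hashlib
    import urllib.request

    LIBRARY_URL = "https://example.com/libs/somelib-1.2.3.tar.gz"  # hypothetical
    EXPECTED_SHA256 = "0123456789abcdef..."  # hypothetical known-good digest

    def fetch_and_verify(url, expected_sha256):
        """Download an archive and refuse to use it if the digest differs."""
        with urllib.request.urlopen(url) as response:
            data = response.read()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(f"Digest mismatch: got {actual}")
        return data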

Although these are great principles, many development teams are still in the early phases of adopting mature DevOps deployments, and in the meantime this needs to be balanced with compensating controls. The compensating controls I’m referencing here involve detection and correction. Detection might be achieved by way of regular vulnerability scanning, or through virtual patching and attack detection using Web Application Firewalls (WAFs).
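
To make virtual patching a bit more concrete, here is a deliberately naive Python sketch of a WSGI middleware that drops requests matching a known exploit signature. A real WAF uses curated, constantly updated rule sets; the single regex here is purely illustrative:

    # A deliberately naive sketch of "virtual patching": a WSGI
    # middleware that blocks requests whose query string matches a
    # known exploit signature before they reach the application.
    import re

    EXPLOIT_SIGNATURE = re.compile(r"(\.\./|<script|union\s+select)", re.IGNORECASE)

    class VirtualPatchMiddleware:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            query = environ.get("QUERY_STRING", "")
            if EXPLOIT_SIGNATURE.search(query):
                # Reject the request before the application sees it.
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch\n"]
            return self.app(environ, start_response)

Wiring it in is a single line, app = VirtualPatchMiddleware(app). A WAF appliance performs this kind of screening at the network edge instead of inside the application process.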

Microservices and Container Security

Figure 1-1 depicts an Amazon Web Services (AWS) Elastic Container Service (ECS) implementation. ECS is a container orchestration engine similar in function to Kubernetes. Rather than requiring you to build out your own orchestration engine such as Kubernetes, Amazon provides container orchestration as a service.

In Figure 1-1, you are looking at an unprotected set of microservices running in a single Virtual Private Cloud (VPC). A VPC is a logically isolated virtual network, an overlay network within the AWS cloud. You can see in this example that the VPC spans two availability zones (Availability Zones 1 and 2 in the diagram).

Figure 1-1. An AWS ECS implementation

Figure 1-1 exhibits simple protections in place in the form of IP/port-based security lists. But as we know, this is very rudimentary and does not address security above Layers 3 and 4.
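
For reference, this is roughly what creating such a Layer 3/4 rule looks like with the boto3 SDK, assuming boto3 is installed and AWS credentials are configured. The security group ID and CIDR range are hypothetical placeholders, and note that nothing here inspects the application payload:

    # A sketch of the kind of IP/port rule shown in Figure 1-1, created
    # with the boto3 SDK. The group ID and CIDR are hypothetical
    # placeholders. This filters only on addresses and ports; it
    # inspects nothing above Layers 3 and 4.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # placeholder CIDR
            }
        ],
    )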

There are two ECS private subnets that house the Docker instances. A single ECS cluster spans the two private ECS subnets. Each ECS instance in the cluster can run multiple Docker container images.

Notice that there are two Network Address Translation (NAT) instance subnets, one in each availability zone. These are required because the Amazon Elastic Compute Cloud (EC2) instances sit in private subnets and do not have direct access to the internet; any outbound traffic must pass through the NAT instances. Lastly, in this “unprotected” environment there is an internal Application Load Balancer (ALB) distributing traffic across the ECS container instances. In this setup, the Amazon EC2 instances are not directly accessible from the internet.

Now, I’m going to walk you through an example of a WAF-protected AWS EC2 container deployment. First, let’s take a look at Figure 1-2.

Figure 1-2. A WAF-protected AWS EC2 container deployment

In Figure 1-2 we’ve added some additional components to make the container-based microservices accessible from the internet. First, we’ve created two new subnets that will be allocated exclusively to two WAF virtual appliances. Next, we’ve placed these two virtual WAFs into their respective subnets. We are routing traffic from the WAFs to the internal ALB.

We’ve also added an external-facing ALB, which translates public IPs to the private IP address targets of the internal WAF appliances. Our external DNS will resolve to IP addresses bound to the external interface of the external ALB.

Now that the WAFs have been introduced, all of the respective microservices running in Amazon ECS are protected from OWASP Top 10 attacks, account takeover, and many other threats that we cover in greater detail throughout this book.

It’s worth noting that because these WAFs are virtual appliances, their deployment can be completely automated using AWS CloudFormation templates.
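
As a sketch of what that automation might look like from the orchestration side, the following boto3 call launches a CloudFormation stack. The stack name, template URL, and parameter names are hypothetical; the actual template would come from the WAF vendor or be written in-house:

    # A minimal sketch of automating the WAF appliance deployment with
    # a CloudFormation template via boto3. The stack name, template
    # URL, and parameters are hypothetical placeholders.
    import boto3

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName="waf-appliances",  # hypothetical stack name
        TemplateURL="https://example.com/templates/waf.yaml",  # placeholder
        Parameters=[
            {"ParameterKey": "SubnetIds", "ParameterValue": "subnet-aaa,subnet-bbb"},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )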

Note also the presence of a WAF management server. This server is placed in one of the WAF subnets and can be accessed securely for management purposes via a jump-box to limit administrative access.

Any seasoned security professional will tell you that there is no silver bullet. In this case, we are using WAFs to address the “detect” and “correct” aspects of runtime application security that might not have been addressed in the continuous integration (CI)/continuous deployment (CD) pipeline. Having a defense-in-depth strategy can help you to ensure that whatever you don’t catch in development doesn’t come back to haunt you in production.

Industrialization of Attacks Using Botnets

Automation in and of itself is not necessarily a new tool for attackers. However, several trends have converged that allow attackers to be much more efficient in compromising application data across the web. One of these trends is the use of botnets.

Enterprising hackers are entrepreneurial in nature. This means that they are looking for gains in efficiency, impact, and results. Hackers will take advantage of the newest innovations to better scale their efforts. Botnets are usually geographically dispersed groups of machines that have been compromised and backdoored by some sort of malware and are now under the remote control of a centralized group of botnet controllers. Initially, botnets were largely used to facilitate Distributed Denial of Service (DDoS) attacks.

Botnets proved effective for DDoS because the typical defense is to blacklist individual IP addresses or groups of them. If all of the attacks are coming from one identifiable region, blocking that range of addresses will likely have little impact on your ecommerce sales. DDoS attacks utilizing botnets have become game changers for attackers because an attacker can command and control geographically disparate zombies (compromised machines under the control of the botnet commander) that generate traffic to bog down a website to the point that it is inaccessible to legitimate customers. As you can imagine, these zombies can potentially do much more than simply inundate a website with traffic. Attackers have now industrialized botnets in such a way that they can use them to automate attacks for different purposes.

One outcome of repurposing these botnets might simply be to grow the size and scope of the botnet itself. Think of botnets as groups of finite resources ready to be instructed for a given activity. Many times these “activities” are described as campaigns. Now, wait a minute: don’t you usually hear the word campaign in the context of an “advertising campaign”? Well, as I mentioned earlier, hackers are entrepreneurial in nature, and their campaigns are trying to get the word out, too, just in a more forceful and malicious manner.

Building botnets takes time and resources, so an emerging trend in this space is the notion of botnets-as-a-service. On the dark web, hackers can pay a subscription fee in Bitcoin or Ethereum to have these services do their bidding. Again, if this sounds like a structured, legitimate business, you are correct, except for the legitimacy. This means that attackers can essentially rent botnets to execute their own campaigns.

Another potential use of botnets, beyond simply growing their own networks or executing DDoS attacks, is to execute surgical strikes against website properties and applications. Botnet zombies can be orchestrated to achieve complex tasks in tandem with one another.

One regiment within a botnet might be commanded to create a diversion via a DDoS attack. This is a very common tactic as part of a larger orchestrated attack. The purpose of staging a DDoS attack in this case is to create a diversion that might overwhelm Intrusion Detection Systems (IDSs), firewalls, and security logs with noise. Think of this noise as a haystack. The attacker’s goal is to begin surgically implanting needles into these DDoS haystacks.

As part of this industrialized, botnet-as-a-service orchestrated attack, the attacker might have a separate regiment of the botnet carrying out surgical scans of the target network under the cover of the DDoS attack, all the while collecting useful information about the attack surface of the application. Subsequently, another regiment within the botnet can be directed to carry out an exploit against an identified component of the site. For example, suppose that a web server is exposed to a new vulnerability in a third-party library that allows one of the botnet zombies to reach the underlying operating system (OS). The zombie succeeds in gaining access to a shell and alerts the botnet commander (the human). This is where a real human hacker might take over and begin moving laterally or deeper into the network to exploit other systems by hand.

Theoretically, the activities within the network could be further automated by yet another botnet regiment, but that would likely be noisy; a skilled human hacker will be more effective at this point.

If this example sounded a lot like a historical account of the D-Day landing in World War II, you are beginning to get the picture: a well-orchestrated and industrialized system with a defined command-and-control substrate. This is the industrialization of cyberattacks through botnets.

Gaining Access to Data Through Code Manipulation or Sensitive Credential Compromise

It’s estimated that 50% of cyberattacks involve compromised credentials. The system of using usernames and passwords to gain access to websites is fundamentally broken, yet the paradigm persists. The issue, of course, is the fact that users reuse the same username and password information for multiple sites. Therefore, when one of these sites is hacked, bad actors can reuse the credentials to attack other sites.

For attackers, using compromised credentials is the simplest way in the front door. Hackers don’t care how elaborate an attack is; they are interested in the end result, and they want to expend the least amount of effort.

You can categorize account compromise into two key buckets. Let’s take a look at those.

End-User Accounts

The first bucket is end-user accounts. This is in line with the compromised-credential scenario just described. When a site like Yahoo! is hacked, bots can use those credentials by way of credential stuffing attacks. Basically, the botnet is configured so that the username and password variables are replaced, in succession, with compromised username and password data. These repositories of hacked usernames and passwords can be found on the dark web, for sale to anyone willing to pay the going rate in Bitcoin. And they are not just being sold to one buyer; they are resold over and over again to anyone who wants to pay pennies on the dollar. This, of course, isn’t limited to usernames and passwords: it can also include Social Security numbers, credit card information, and so on.
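
On the defensive side, a site can screen submitted passwords against known breach corpora. Here is a Python sketch using the Have I Been Pwned “Pwned Passwords” range API, which relies on k-anonymity: only the first five hex characters of the password’s SHA-1 digest ever leave the client:

    # A sketch of screening a password against the Have I Been Pwned
    # "Pwned Passwords" corpus via its k-anonymity range API. Only the
    # first five hex characters of the SHA-1 digest are sent; matching
    # against the returned suffixes happens locally.
    import hashlib
    import urllib.request

    def times_pwned(password):
        """Return how many times a password appears in known breaches."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as response:
            body = response.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        print(times_pwned("password123"))  # a famously reused password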

Sensitive and Privileged Accounts

Another category of account compromise is sensitive or privileged accounts. These are accounts that have administrative privileges over the OS, databases, and network devices. Let’s revisit the “industrialized attack-bot” example from the previous section. The botnet had gained a foothold on an internal system and had shell access, and at that point the remainder of the attack was to be handed off to a skilled human hacker. The account the hacker has gained access to is not a privileged account, so it’s incumbent upon the hacker to escalate their privileges. The attacker does have some advantages by way of having local access to the system. If you’ve ever read through a vulnerability report, you’ve probably noticed that vulnerabilities are typically classified by attack vector, such as “local” or “remote.” Network and system administrators typically prioritize patching remotely accessible vulnerabilities over local ones. A likely next step for our hacker is to enumerate as much information as possible about the system to which they have shell access: the OS and version, the username they are logged in with, and version information for any other software packages running on the system. Armed with this information, an attacker can reconcile vulnerable software versions and identify potential locally exploitable vulnerabilities that allow for escalation of privilege. After a vulnerable library or package has been identified, an attacker can research the web for known exploits, gathering this type of information from sites such as Packet Storm and CVE Details, among others.
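
That enumeration step can be as simple as a few standard-library calls. The following Python sketch gathers the same first-pass facts a penetration tester (or an intruder) would collect on landing in a shell:

    # A sketch of first-pass local enumeration using only the standard
    # library: who am I, what OS and version, what platform details.
    import getpass
    import platform

    print("User:     ", getpass.getuser())
    print("OS:       ", platform.system(), platform.release())
    print("Platform: ", platform.platform())
    print("Python:   ", platform.python_version())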

An attacker has many potential avenues to explore, and the point here is not to identify all the permutations but to illustrate the process. Suppose that the attacker is in a Linux shell. The attacker might look for SUID (Set User ID) and SGID (Set Group ID) vulnerabilities. These are the bits that are set when a given application or binary needs to execute with the privileges of a particular user or group. Without getting into too much detail, the attacker can potentially exploit the fact that these applications run with escalated privileges and use a flaw in one of them to elevate their own privileges to those of “root.” The attacker has then effectively gained full control of the system to which they have shell access and can use this machine as a launchpad to gain lateral or forward access into the network, further compromising other systems and data throughout the environment.
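
Defenders can hunt for the same bits. Here is a Python sketch that audits a directory tree for SUID/SGID binaries; scanning all of “/” can be slow, so the sketch defaults to a narrower root:

    # A sketch of auditing for SUID/SGID binaries -- the same bits an
    # attacker hunts for. Walking "/" can be slow, so a narrower root
    # such as /usr/bin keeps the sketch quick.
    import os
    import stat

    def find_setid_files(root="/usr/bin"):
        """Yield paths whose SUID or SGID bit is set."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue  # broken symlink or permission denied
                if mode & (stat.S_ISUID | stat.S_ISGID):
                    yield path

    if __name__ == "__main__":
        for path in find_setid_files():
            print(path)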

There are many other ways by which attackers can gain access to sensitive credentials. I just walked through an example of escalating one user’s privilege to that of another user. Remember that attackers are looking for the easiest way to achieve their objectives, and they have numerous means by which they can actually steal sensitive credentials as opposed to going through the process of privilege escalation.

Attackers can use botnet-driven social engineering methods, including those that deliver malicious payloads such as keyloggers, to harvest sensitive credentials. All an attacker needs to do is fool a user who has privileged access into giving up their username and password information. That might sound difficult, but it really isn’t. If an attacker can harvest the email addresses of database and system administrators of Company X from a publicly accessible source such as LinkedIn, they can program their botnets to perform automated spearphishing campaigns. These emails might masquerade as messages from the help desk team that require the administrator to log in to a portal page and change their password. All the attacker needs is for one administrator to fall for it.

In this chapter, we covered current application threats and challenges, such as the use of third-party code, the industrialization of botnets, and how sensitive credentials can introduce vulnerabilities into your compute environment. In Chapter 2, I break down the types of attacks in detail using various classifications to help you better understand the details of the current threat landscape.
