Chapter 4. Diving Deeper into Serverless Security Threats and Risks

In the first few chapters of this book, we discussed what serverless is and what serverless applications look like. At this point, you should have a good understanding of how serverless applications are built and how the different components of the system interact with each other.

Using serverless architectures and services makes systems secure against certain traditional attacks and techniques. For one thing, cloud providers such as AWS, Azure, and GCP perform patch management on behalf of the user behind the scenes. This allows for the automatic rollout of security updates, which reduces the risk of vulnerabilities that could be exploited in unpatched systems. In addition to this, serverless architectures make use of ephemeral, stateless functions instead of long-standing servers. This limits the scope of a security breach and significantly reduces the attack surface, since each function execution is short-lived and operates in isolation. However, this does not mean that utilizing serverless architectures and services makes your system completely immune to attacks. It only means that attackers have to adjust their strategies and focus on exploiting weaknesses specific to serverless environments.

Note

In some cases, serverless systems are more prone to certain types of attacks because developers are unfamiliar with how to manage the security of distributed serverless architectures.

In this chapter, you will delve deeper into the unique security threats and risks associated with serverless architectures. For each of the threats and risks, you’ll examine how attackers adapt their strategies to target these environments and explore effective measures to safeguard against such vulnerabilities.

Leaked Credentials

When building applications and systems, developers may utilize and integrate with various third-party services to avoid building entire modules from scratch and to significantly speed up the development cycle. Here are some examples of third-party APIs and services usually integrated when building applications:

  • Payment processing APIs and services for handling online payments and financial transactions

  • Identity management APIs and services for authentication and authorization

  • Email APIs and services for marketing and email delivery

  • Web analytics APIs and services for tracking and reporting website traffic

  • SMS APIs and services for sending text messages and enabling mobile notifications

In order to interact and integrate with these external APIs and services, developers must use the access credentials provided by these platforms. Given that developers are expected to meet aggressive application development timelines, they sometimes skip steps and assume that ignoring some security best practices won’t have major negative consequences. One such mistake is hard-coding credentials directly in the source code files.

Here’s an example of how developers may store and use these credentials within their application code:

Example 4-1. Credentials hard-coded in the application code
# Hard-coded credential shipped with the source code (bad practice)
YOUR_3RD_PARTY_API_KEY = 'XXXXXXXXXXXX'

def send_email(params):
    # Anyone with a copy of this file also has the API key
    client = SomeAPIClient(api_key=YOUR_3RD_PARTY_API_KEY)
    client.send(...)

    ...

Unfortunately, once attackers are able to get access to these credentials, they could easily perform actions using your identity. Imagine the users and customers of your application receiving malicious email messages from your company’s official account!

When using version control systems (VCS) like Git, developers sometimes add all the files in the repository using the git add . command. Some of these files could be configuration files that contain credentials used for staging and production environments.

Figure 4-1. Credentials still present in the VCS history

These configuration files are ideally not included in the code repository, as other developers could gain access to resources and accounts they shouldn’t have access to. In addition to this, developers may accidentally leave credentials in the VCS history. While configuration files containing the credentials may not be present in the current version of the code stored in the repository, earlier commits may still include them.

That said, the repository commit history should be reviewed and checked for:

  • third-party integration credentials

  • database dumps containing production data

  • production or staging configuration files

A bad actor who has access to the code repository can easily check out an older commit and retrieve the access keys and credentials from an older version of the application code even if these credentials have been removed already in the latest version of the code.

Note

For more information on how to remove sensitive data from a code repository, feel free to check the following link.

In some cases, developers may accidentally push a copy of the entire codebase to a public repository. You might be surprised that it takes only a few seconds to a few minutes before attackers are able to scan the public codebase and extract the credentials stored in the repository. It’s best to assume that once credentials have been included in a commit and pushed to a public repository, they are already compromised. Once exposed, even if only for a brief moment, malicious actors could access and exploit these credentials using automated tools and scanners.

When working with third-party APIs and services, developers may have no choice but to work with the credentials provided by these external platforms. That being said, developers must adopt best practices such as using environment variables or secure vaults to handle these credentials safely. Otherwise, an attacker able to steal your application code would gain access to these credentials as well.
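As a minimal sketch of the environment-variable approach (the variable name and failure behavior here are illustrative, not prescriptive), the hard-coded key from Example 4-1 can be replaced with a lookup from the function's environment:

```python
import os

def get_api_key():
    # Read the key from the environment instead of the source code;
    # fail fast if the deployment is missing the configuration.
    api_key = os.environ.get("THIRD_PARTY_API_KEY")
    if not api_key:
        raise RuntimeError("THIRD_PARTY_API_KEY is not configured")
    return api_key
```

With this, the key never appears in the repository, and rotating it becomes a deployment configuration change rather than a code change.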

Over-Privileged Permissions & Roles

When working with cloud platform APIs, cloud providers like AWS, Azure, and GCP allow your application to utilize various APIs without having to hardcode credentials in the application code. Behind the scenes, the libraries and SDKs provided by these platforms simply work with the credentials stored in the environment of the cloud resource where the application is deployed. Cloud resources such as serverless functions and servers can have entities like IAM roles or policies attached to them that allow the code running inside these resources to automatically assume these roles and securely access other services and resources without developers having to include the credentials in the application code.

Figure 4-2. IAM Role attached to cloud resources

Imagine having a serverless function resource (like an AWS Lambda Function) with an over-privileged IAM role attached to it. Unfortunately, this IAM role has the following permission configuration:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

This permission configuration presents a significant security risk, as it grants the Lambda function unrestricted access to (almost) all actions on all resources within the cloud account. Such over-privileged permissions expose the cloud environment to potential misuse or compromise, especially if the Lambda function is exploited by an attacker.

That being said, asterisks (*) in these types of security configurations should be avoided. They should be replaced with more granular permissions that adhere to the principle of least privilege. This principle dictates that entities (such as IAM roles or users) should only have the permissions necessary to perform their intended functions, and no more. Specifying explicit resources and actions can significantly reduce the attack surface and help in safeguarding against unauthorized access or actions. As you have probably guessed, it takes a fair amount of time to prepare security configurations with granular permissions. If your serverless application makes use of multiple cloud resources, you might be tempted to reuse an IAM role attached to one resource and attach it to other resources as well. If this shared IAM role is too restrictive for some of the resources attached to it, you may be forced to extend its permissions scope beyond what is strictly necessary for each resource in order to meet the requirements of all of them.
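For comparison, a least-privilege version of the wildcard policy above might scope the role down to a single action on a single resource (the bucket name and path below are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/uploads/*"
        }
    ]
}
```

If the function is later compromised, an attacker holding this role can only read objects under that one prefix, instead of performing any action on any resource in the account.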

Figure 4-3. IAM Roles attached to cloud resources

As you can see in Figure 4-3, it is best if we set up and configure a separate IAM role for each of the resources in our cloud account. This allows us to configure these roles separately with granular permissions and enforce the principle of least privilege. With this, even if an attacker is able to compromise one of the resources running in the cloud account, the potential impact of the breach is limited due to the strict security configuration associated with each resource.

Broken Authentication

In serverless applications which involve user interaction and user accounts, authentication mechanisms are critical for securing applications and their resources against unauthorized access. However, when these mechanisms are poorly implemented or configured, the system becomes vulnerable to broken authentication attacks. Attackers can exploit these weaknesses to assume the identities of real users and gain unauthorized access to sensitive information and application features and functionalities. This becomes particularly concerning in serverless applications, where individual functions, often isolated and independently secured, might not uniformly enforce strong authentication, leading to potential security breaches across the system.

Developers generally start with batteries-included web frameworks such as Laravel (PHP), Django (Python), and Rails (Ruby) before they work on their first serverless application. These frameworks have mature authentication and authorization libraries built in, which took multiple iterations and collective experience to reach an acceptable level of maturity from a security standpoint. Once developers start building serverless applications, they may carry over the same assumptions about convenience and security. Unfortunately, they are now working with a distributed setup instead of a monolithic architecture. That being said, they now have to worry about the secure communication of the resources involved as well as the overall security of the distributed system. While there are libraries that help provide the much-needed authentication and authorization features, developers just starting out with serverless application development generally do not have the knowledge and experience to secure these types of applications.
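As an illustrative sketch (not a complete authentication system), the snippet below shows the kind of consistent token check that every function in a distributed setup would need to apply. The token format and shared secret are hypothetical, and production systems would typically rely on a vetted library or a managed identity service instead:

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would come from a secret store,
# never from the source code.
SECRET = b"replace-with-a-shared-secret"

def sign(user_id):
    # Issue a token of the form "<user_id>.<hex signature>"
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify(token):
    # Every function in the distributed system must apply this same check;
    # a single function that skips it becomes the weak link.
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during comparison
    return user_id if hmac.compare_digest(sig, expected) else None
```

The point is uniformity: in a monolith, one middleware enforces this for every route, while in a serverless setup each function is an independent entry point that must enforce it explicitly.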

Insecure VPC Network Configuration

An insecure VPC configuration might inadvertently expose serverless resources and systems to the public internet. In addition to this, it might allow unrestricted traffic between resources (that is, internal and external to the network), which can be exploited by malicious actors.

Insecure VPC network configuration in a serverless environment can lead to several types of security risks. One common issue is the improper segmentation of resources, where sensitive data or critical functions are not isolated from less secure elements of the network. This can allow an attacker who gains access to one part of the network to easily move laterally to more sensitive areas. Additionally, inadequate access controls or monitoring can leave the network open to unauthorized access or data exfiltration.

Note

For example, developers may incorrectly or accidentally deploy database cloud resources in a public subnet of a VPC network environment. Why is this a problem? Attackers may be able to attack and compromise these resources directly when they are deployed inside public subnets. These cloud resources are ideally deployed inside private subnets so that attackers would first have to compromise the resources in the public subnet before they can reach the resources in the private subnet.

In addition to this, if network access controls are too permissive, it could allow an attacker to gain access to specific resources, steal the custom code deployed in the serverless resources, and even send a copy of the code to an external resource outside of the network environment prepared ahead of time by the attacker.

Credentials Exfiltration

Credentials exfiltration in serverless environments can occur in several ways. An attacker might exploit vulnerabilities in the code that allow them to run arbitrary commands that copy the entire codebase (including any hardcoded credentials) to the attacker’s machine. It is also possible for an attacker to run arbitrary commands that exfiltrate the credentials stored in the environment where the code is running. Once the credentials are compromised, attackers can gain unauthorized access to other resources in the same account, especially if the credentials map to an over-privileged IAM role.

Injection

What makes security tricky in serverless applications is the variety of sources from which events can originate, each presenting unique security challenges. Imagine a serverless architecture where multiple functions are deployed that interact with databases and other resources. These functions, while efficient and scalable, become prime targets for injection attacks. In such an environment, an attacker can exploit vulnerabilities in the function code that processes user input, leading to unauthorized data access or manipulation.

Figure 4-4. Malicious input

Injection attacks in serverless architectures occur when an attacker sends malicious code through user inputs, which are not properly sanitized or validated by the serverless function. This can result in the execution of the injected code within the serverless environment.

Figure 4-5. Malicious input (SQL Injection)

For example, an SQL injection could occur if a function resource takes user input and directly uses it to construct a database query without adequate checks. The attacker’s code can manipulate the query, leading to unauthorized data access, data theft, or even database corruption.
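A minimal sketch of the difference, using SQLite as a stand-in for whatever database the function actually talks to (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def lookup_unsafe(name):
    # VULNERABLE: user input is concatenated into the SQL statement,
    # so input like "' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT email FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Passing the classic payload `' OR '1'='1` to the first function returns rows the caller should never see, while the parameterized version simply finds no user with that literal name.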

Note

Modern Generative AI-powered systems should be checked for potential injection attacks in the form of prompt injection. This involves validating and sanitizing user inputs to ensure that malicious input cannot manipulate the AI’s output in unintended ways (for example, making the AI backend send spam emails or revealing sensitive information). That being said, developers should implement filtering mechanisms to detect and block harmful or exploitative prompts for these types of systems.

The risks associated with these injection attacks are amplified in serverless architectures due to their distributed nature and the potential for escalated privileges. Since serverless functions often have access to a wide range of resources and services within the cloud environment, a successful injection attack could lead to a broader compromise of the system.

Vulnerable App Dependencies

A serverless application may have a few cloud functions relying on various third-party libraries and frameworks. These dependencies can introduce vulnerabilities if they are outdated or poorly maintained. Left unresolved, these vulnerabilities expose the serverless application to potential security breaches.

Figure 4-6. Malicious Dependency

To mitigate these risks, developers should adopt a proactive approach to dependency management, which includes regular audits and checks of the third-party libraries and frameworks in use for known vulnerabilities. Automated tools can be employed to scan dependencies for issues and automatically alert teams so that they can update or patch the affected packages to address the security issues.
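As a simplified illustration of what such scanners do under the hood (the package names and advisory data below are made up, and real tools handle far more complex version schemes and advisory feeds):

```python
def parse_version(v):
    # Simplistic parser for dotted numeric versions (illustrative only).
    return tuple(int(part) for part in v.split("."))

def find_vulnerable(installed, advisories):
    """Return packages installed at a version below the first patched release.

    installed:  {"package": "installed version"}
    advisories: {"package": "first patched version"}
    """
    flagged = {}
    for package, fixed_in in advisories.items():
        current = installed.get(package)
        if current and parse_version(current) < parse_version(fixed_in):
            flagged[package] = (current, fixed_in)
    return flagged
```

Running a comparison like this against every deployment (for example, in a CI pipeline) ensures that a vulnerable pin cannot silently reach production.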

Note

Security services like Amazon Inspector can be used to scan serverless function services like AWS Lambda. In the past, it supported only the scanning of cloud servers and container images. However, as more developers use AWS Lambda for application development, AWS has added support for scanning that service as well. For more information, feel free to check: https://docs.aws.amazon.com/inspector/latest/user/scanning-lambda.html

Security Misconfiguration and Insecure Defaults

When using cloud services while building an application, developers and engineers may forget to configure security settings properly or may inadvertently leave them at insecure defaults. This can lead to vulnerabilities such as:

  • unencrypted and unprotected data storage

  • open access permissions

  • verbose error messages and debugging interfaces leaking sensitive information

That being said, an attacker may take advantage of these security misconfigurations to gain unauthorized access and even execute malicious actions within the cloud environment.

Insecure Deserialization

Applications often serialize and deserialize data (potentially from configuration files) without sufficient input validation, assuming that the data or configuration/input values are trustworthy.

Figure 4-7. Malicious Configuration File

This assumption can be exploited by attackers who are able to inject malicious data or configuration that, when deserialized, can execute arbitrary code or manipulate application logic to the attacker’s benefit.

Note

For instance, if an application deserializes data from cookies, remote APIs, or other external sources without proper sanitization or security checks, an attacker can use this to compromise certain cloud resources which are part of the serverless application.
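The contrast can be sketched as follows. Python's `pickle` is shown here as a representative format that can execute code during loading, while the JSON path validates the structure before the values are used (the allowed keys are hypothetical):

```python
import json

def load_config_unsafe(blob):
    # DANGEROUS on untrusted input: pickle can execute arbitrary code
    # while deserializing a crafted payload.
    import pickle
    return pickle.loads(blob)

ALLOWED_KEYS = {"timeout", "retries"}  # hypothetical schema for this example

def load_config_safe(text):
    # JSON cannot encode executable objects, and the schema check rejects
    # unexpected fields before anything downstream consumes them.
    config = json.loads(text)
    if not isinstance(config, dict) or set(config) - ALLOWED_KEYS:
        raise ValueError("unexpected configuration fields")
    return config
```

The general rule: never deserialize untrusted input with a format that can instantiate arbitrary objects, and always validate the shape of the data even when using a safe format.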

Denial of Service & Denial of Wallet

Imagine a serverless architecture where functions dynamically scale to meet demand, processing numerous requests per second. In a Denial of Service (DoS) attack, these functions are bombarded with an overwhelming number of requests, depleting system resources and rendering the service unavailable to legitimate users.

Figure 4-8. Distributed Denial of Service

This surge in demand can lead to a Denial-of-Wallet (DoW) scenario, where the auto-scaling nature of serverless services results in unexpected and significant cost escalations for the cloud resource usage.

These past few years, more companies and organizations around the world have started building Generative AI-powered applications using various serverless services such as Amazon Bedrock and Azure OpenAI.

Figure 4-9. A sample of a serverless Generative AI-powered application

Given that these services generally follow a pay-as-you-go pricing model and charge cloud account owners based on usage (in most cases, based on the number of tokens involved in each request and response), it is important that developers using these services are aware of the potential cost implications when the services are not secured against potential Denial-of-Wallet attacks.

Of course, there are ways to mitigate these types of attacks. Cloud platforms generally have built-in mechanisms and services, such as request rate limiting, as well as internal security mechanisms included as part of the service. Unfortunately, developers and engineers may not be aware of these features and may assume that internal cloud resources don’t need any sort of security protection from a configuration standpoint.
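Rate limiting itself can be sketched with a simple token bucket. Managed services implement this kind of logic for you through configuration, so the snippet below is only illustrative (the capacity and refill numbers are arbitrary):

```python
import time

class TokenBucket:
    """Allow at most `capacity` requests per `refill_period` seconds."""

    def __init__(self, capacity, refill_period, clock=time.monotonic):
        self.capacity = capacity
        self.refill_rate = capacity / refill_period  # tokens per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request should be rejected or queued
```

Applying a limit like this per caller (or per API key) caps how fast any single source can drive up invocation counts, which blunts both DoS and DoW attempts.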

Insecure Storage of Credentials and Secret Keys

There are various ways for serverless architectures to store and manage credentials and secret keys. This includes making use of environment variables, configuration files, or cloud-based secret management services. However, insecure practices like hardcoding these details directly in function code or improperly securing configuration files can lead to their exposure. This vulnerability allows attackers to gain unauthorized access, potentially leading to significant data breaches and compromise of the entire cloud environment.

That said, it’s advisable to utilize credential storage or management services provided by cloud platforms. While these may not solve all security requirements and challenges related to the insecure storage of credentials and secret keys, these would improve the overall security posture of the application by:

  • centralizing the management of secrets

  • encrypting sensitive data both in transit and at rest and

  • providing mechanisms for rotating secrets regularly

In addition to this, these services often offer detailed access logs and audit trails, which can be invaluable for detecting unauthorized access or breaches. By leveraging such services, developers can abstract the complexities of securely managing credentials away from the application code and significantly reduce the risk of accidental exposure.
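As a hedged sketch of the caching pattern such services encourage: the client object is injected so the code stays generic, and the `get_secret_value(SecretId=...)` shape below merely mirrors the AWS Secrets Manager API without depending on it (in production the client would come from the provider's SDK):

```python
import json

class SecretCache:
    """Fetch secrets through an injected client and cache them in memory.

    The client only needs a get_secret_value(SecretId=...) method returning
    a dict with a "SecretString" key, mirroring the Secrets Manager response.
    """

    def __init__(self, client):
        self.client = client
        self._cache = {}

    def get(self, secret_id):
        # Fetch once per container lifetime; warm invocations reuse the cache
        # instead of calling the secrets service on every request.
        if secret_id not in self._cache:
            response = self.client.get_secret_value(SecretId=secret_id)
            self._cache[secret_id] = json.loads(response["SecretString"])
        return self._cache[secret_id]
```

Caching matters in serverless because a naive implementation would hit the secrets service on every invocation, adding both latency and cost.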

Insufficient Tracing, Logging, Monitoring, and Alerting

In serverless architectures, a poorly designed tracing and logging setup can lead to blind spots in understanding how the different resources and serverless components interact with each other. For example, if someone is trying to compromise your system, the lack of detailed logs makes it difficult to trace the origin of the attack as well as the extent of the compromise. This also delays the response when dealing with different types of threats.

In addition to this, the lack of properly configured monitoring and alerting resources makes these attacks hard to detect and manage. Without monitoring and alerting set up before the first production release, unusual activities in the account that could indicate security breaches often go unnoticed until attackers have already caused significant damage.
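As one small illustration, emitting logs as structured JSON makes them parseable, filterable, and correlatable by downstream monitoring and alerting tooling; the fields below (such as `request_id`) are examples rather than a standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    # Emit one JSON object per log line so downstream tooling can filter
    # events and correlate them across functions (e.g., by request_id).
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })
```

Attaching a correlation identifier to every log line is what lets you trace a single request across the many short-lived functions it touches.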

Business Logic Vulnerabilities

Imagine a serverless application designed for e-commerce, where functions are deployed to manage tasks like user authentication, payment processing, and order fulfillment. In this setup, each function is coded to perform its specific role, relying on the assumption that users will interact with the application as intended. However, attackers may take advantage of this assumption and manipulate the application’s business logic to their advantage.

Business logic vulnerabilities in serverless architectures arise when an attacker identifies and exploits a flaw in the way the application is supposed to function. For instance, an attacker might manipulate the process of a promotional code application in an e-commerce checkout function, enabling them to reuse a single-use discount code multiple times. This type of attack does not rely on traditional application vulnerabilities; instead, it abuses the normal, expected operations of the application in ways that were not anticipated by the developers.
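The single-use discount code flaw above can be prevented by making the check-and-redeem step atomic. The in-memory sketch below stands in for what would be a conditional write against shared storage in a real serverless deployment (where separate function instances cannot share process memory):

```python
import threading

class PromoCodes:
    """Track single-use discount codes.

    An in-memory stand-in for an atomic conditional write against shared
    storage; the locking illustrates why the check must not be split
    from the removal.
    """

    def __init__(self, valid_codes):
        self.unused = set(valid_codes)
        self.lock = threading.Lock()

    def redeem(self, code):
        # Check-and-remove must happen as one atomic step; otherwise two
        # concurrent checkouts could both pass the check and redeem the
        # same code twice.
        with self.lock:
            if code in self.unused:
                self.unused.remove(code)
                return True
            return False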

Serverless Security Mechanism Limitations

When a new serverless service is released and announced by a cloud platform, it may take a few months for some of its other features to be completed. These may include security features used for managing access and even helping with governance. During this period, early adopters often have to navigate these gaps by implementing workarounds or using third-party tools to fill in the missing features. In other cases, these gaps remain unsolved for a longer period of time. Of course, as more users adopt these services, additional features get requested, and the cloud platform may accelerate the development and release of the features requested by organizations and users.

That being said, when choosing the services to be used in your serverless application, it is important that the current feature set and roadmap of each service are taken into account before using it in a production environment. In addition to this, it is critical that the documentation and examples for these services are complete and well written to ensure that compliance requirements and security considerations are properly addressed.

Summary

In this chapter, we dived deeper into various serverless security risks and threats and discussed how each of these could be used by attackers against your serverless applications and systems. In the next chapter, we will delve into how serverless functions work as a prerequisite to help us understand how to hack and secure these types of cloud resources.

Get Learning Serverless Security now with the O’Reilly learning platform.
