Chapter 1. Introduction to Serverless Computing

Serverless computing has gained immense popularity in recent years, revolutionizing the way applications are developed and deployed. In this chapter, we will delve into the fundamentals of serverless computing, covering key concepts, benefits, and practical use cases. I will also address common myths and misconceptions associated with serverless computing.

Serverless applications have their own set of security challenges that differ from those of traditional application architectures. A solid understanding of what serverless is (and what it is not) will go a long way toward helping you secure serverless applications and systems.

Demystifying Serverless Computing

Serverless is an operational model that allows developers to build applications without having to worry about managing the underlying infrastructure. It abstracts away the responsibilities of server management, scaling, patching, and resource provisioning. This model enables developers to focus on delivering business value using a variety of interconnected tools and services that enable automatic scaling of applications based on demand.

Note

In this book, I will use serverless and serverless computing interchangeably.

With serverless computing, developers can focus on writing code and working on the custom business logic of their applications, since they no longer have to manage the underlying infrastructure of the resources used. It leverages managed services and cloud resources that can automatically scale up or down based on the workload or traffic. In addition to this, its payment model is generally based on the actual execution of the cloud resources. This brings several advantages, such as enhanced scalability, cost-effectiveness, and increased productivity.

Here are a few scenarios where it is generally a good idea to utilize serverless solutions and strategies:

  1. Event-driven workloads where cloud resources are dynamically allocated and scaled based on incoming events, allowing the system to respond and scale in real time

  2. Short-lived and bursty workloads where the application may experience unpredictable and sudden spikes in demand

  3. Rapid prototyping and development where updates can be made to specific components of a system without disrupting the entire application

  4. Modular and decoupled architectures where modules and components of an application can be developed, tested, and deployed independently

  5. Cost-conscious applications where the right amount of resources need to be provisioned at any given time

  6. Resilient and fault-tolerant applications where the impact of failures on the overall application is minimized and managed automatically in order to maintain high availability
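As a minimal illustration of the first scenario, here is a hedged Python sketch of an event-driven handler that reacts to object-upload notifications. The event shape mirrors Amazon S3 notification records, and the processing logic is a placeholder:

```python
import json

def handle_upload_event(event, context=None):
    """Event-driven handler sketch: reacts to object-upload events
    (shaped like Amazon S3 notification records) and returns the
    object keys it processed. The event shape here is illustrative."""
    processed = []
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # Real business logic (resizing, indexing, etc.) would go here.
        processed.append(f"{bucket}/{key}")
    return {'statusCode': 200, 'body': json.dumps({'processed': processed})}
```

Because the platform invokes one handler execution per batch of events, the system scales out automatically as upload volume grows, with no capacity planning on the developer's part.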

Now that you have a better idea of what serverless is, let’s discuss the common myths and misconceptions, which will help us understand what it is not.

Common Myths and Misconceptions on Serverless Computing

In this section, we will go through a number of relevant myths and misconceptions concerning serverless computing. These myths are as follows:

  • Myth # 1: Serverless == FaaS

  • Myth # 2: Serverless computing and containerization don’t work well together

  • Myth # 3: Serverless applications only support a limited number of languages

  • Myth # 4: Serverless applications are difficult to manage

  • Myth # 5: Serverless applications are immune to security attacks

Without further ado, let’s begin mythbusting!

Myth # 1: Serverless == FaaS

One of the first things that comes to mind when talking about serverless computing is Functions-as-a-Service (FaaS), which includes services such as AWS Lambda, Azure Functions, and Google Cloud Functions. With FaaS, developers only have to focus on writing function code that gets triggered by events from sources such as HTTP requests or other cloud resources.

Inside these functions, you can:

  • Write and implement custom business logic code

  • Use libraries and packages included and installed in the application runtime environment

  • Utilize other services and capabilities of the cloud platform using the Application Programming Interfaces (APIs) and Software Development Kits (SDKs) to perform specialized tasks. These specialized tasks may involve processing and transforming data, using AI-powered services to analyze images, or even training machine learning models using a managed cloud service.

  • Use APIs and SDKs to create, manage, modify, or delete other resources in the cloud platform. An example of this would be a serverless function that automatically creates a DevOps pipeline from a configuration file using a number of Infrastructure-as-Code (IaC) services.

  • Trigger other serverless functions

  • Work with other non-serverless resources such as virtual machine (VM) instances and databases
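The bullet on triggering other serverless functions can be sketched with the AWS SDK for Python (boto3). This is a hedged example, not a prescribed pattern: the function name and payload shape are illustrative, and the client is passed in as a parameter so the logic can be exercised without AWS credentials.

```python
import json

def trigger_downstream(lambda_client, function_name, payload):
    """Asynchronously invoke another serverless function.
    `lambda_client` is expected to behave like boto3's Lambda client;
    InvocationType='Event' requests fire-and-forget invocation, for
    which the service returns HTTP status 202."""
    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType='Event',  # async: don't wait for the result
        Payload=json.dumps(payload).encode('utf-8'),
    )
    return response['StatusCode']
```

In a deployed function you would typically create the client with `boto3.client('lambda')`; injecting it as a parameter also makes the logic easy to unit test with a stub.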

In case you are wondering how these functions are implemented, here is a quick example of an AWS Lambda function implementation in Python.

Example 1-1. Sample AWS Lambda function implementation in Python
import json

# ... (insert imports here) ...

def lambda_handler(event, context):
    role = event.get('role')
    endpoint_name = event.get('endpoint_name')
    package_arn = event.get('package_arn')

    model_name = random_string()
    create_model(model_name, package_arn, role)
    endpoint_config_name = create_endpoint_config(model_name)

    create_endpoint(endpoint_name, endpoint_config_name)

    return {
        'statusCode': 200,
        'body': json.dumps(event),
        'model': model_name
    }

Here, we have a function that automatically configures and provisions a serverless machine learning powered endpoint using a managed machine learning service called Amazon SageMaker. It makes use of other custom utility functions (e.g., random_string, create_model, create_endpoint_config, and create_endpoint) imported from another file. The function executes when triggered by an event from another cloud resource. This cloud resource could be an Amazon API Gateway HTTP API that accepts HTTP requests from the browser and “converts” the requests into events (containing the required set of input parameters) that trigger the serverless function, similar to what we have in Figure 1-1.

Figure 1-1. How serverless functions are triggered and executed

After the function has finished executing, it sends the function return value back to the Amazon API Gateway HTTP API. This function return value is then converted by the HTTP API to an HTTP response.
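This request/response mapping can be sketched in Python. With the commonly used Lambda proxy integration, API Gateway maps the statusCode, headers, and body fields of the return value onto the HTTP response; the query parameter name below is an illustrative assumption.

```python
import json

def lambda_handler(event, context=None):
    """Sketch of a handler whose return value an API Gateway proxy
    integration maps directly onto an HTTP response: statusCode
    becomes the status line, headers become response headers, and
    body (a string) becomes the response body."""
    params = event.get('queryStringParameters') or {}
    name = params.get('name', 'world')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': f'Hello, {name}!'}),
    }
```

A request to the HTTP API with `?name=Ada` would arrive as an event carrying `queryStringParameters`, and the returned dictionary would come back to the browser as a JSON HTTP response.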

Serverless computing is frequently misunderstood as being limited to FaaS. Although FaaS is a popular approach to serverless computing, it is important to note that serverless computing encompasses more than just FaaS. In other words, FaaS is a type of serverless computing, but not all serverless computing is necessarily FaaS.

Figure 1-2. Serverless != FaaS

In the context of serverless architectures, various services can be utilized as building blocks that can be interconnected to create intricate and scalable applications. These services may offer features such as event-driven computing, API management, object storage triggers, fully managed databases, workflow coordination, and distributed messaging. By harnessing the interconnected nature of these services, developers have the ability to craft serverless applications that are customized to their unique use cases, requirements, and business needs.

Note

We will dive deeper into the different serverless services along with the common serverless architecture patterns in Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) in Chapter 2: Understanding Serverless Architectures and Implementation Patterns. Having a solid understanding of how serverless systems are implemented will help us secure these systems better.

That said, when talking about serverless security, you will have to take into account all services that fall under the serverless umbrella, not just FaaS. At the same time, you have to consider non-serverless resources as well, since both serverless and non-serverless resources will most likely coexist in the same cloud environment or account.

Myth # 2: Serverless computing and containerization don’t work well together

In the previous section, I defined serverless computing as an operational model that abstracts the underlying infrastructure so that developers and engineers can focus on writing code. Containerization, on the other hand, provides a portable way to package and run applications across different environments. There is a common misconception that serverless and containerization solutions do not blend well together in the world of cloud computing. This is definitely not the case, as serverless computing and containerization solutions can be effectively combined to build modern applications.

With serverless containers, developers can package their applications in containers, such as Docker containers, and deploy them to container orchestration platforms like AWS Fargate, Azure Container Apps, or Google Cloud Run.
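As an illustration of this workflow, here is a minimal sketch of a Dockerfile that packages a Python function into a Lambda-compatible container image using the base images AWS publishes. The image tag and file names (requirements.txt, app.py, app.lambda_handler) are illustrative assumptions:

```dockerfile
# Base image published by AWS for Python Lambda functions
# (the tag and file names below are illustrative)
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the runtime's task root
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the function code
COPY app.py ${LAMBDA_TASK_ROOT}

# The handler the runtime should invoke, in module.function form
CMD ["app.lambda_handler"]
```

The resulting image is built and pushed like any other container image, yet the platform still handles provisioning and scaling, combining the portability of containers with the operational model of serverless.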

Note

There are a number of services, capabilities, and features in cloud platforms that leverage the strengths of both the serverless and containerization paradigms. We will discuss these in the Serverless Containers section of Chapter 2, Understanding Serverless Architectures and Implementation Patterns.

Not all serverless implementations involve container solutions and strategies (at least from the point of view of the developer). However, for serverless implementations allowing developers and engineers to provide their own set of custom container images, you will need to take into account the security of the containers and container images as well.

For one thing, it is possible for container images to have older versions of libraries that could be vulnerable to specific types of attacks. It is also possible for developers to accidentally install malicious packages (with malware) in the container image that steal the credentials used by the developers in their applications running inside the container. For these reasons, the containers and container images used in the serverless system must be audited and checked as part of your vulnerability management program.

Note

In Chapter 4: Diving Deeper into Serverless Security Threats and Risks, we will dive deep into a variety of threats and risks affecting both containerized and non-containerized serverless applications.

Myth # 3: Serverless applications only support a limited number of languages

There is a misconception that serverless services have limited language support. This was definitely the case years ago, when these services were still in their infancy and supported only two or three languages at launch. Of course, as more developers requested support for additional languages and language versions, cloud providers expanded the language support in these services to accommodate a wider range of languages, addressing this limitation of early serverless platforms.

It’s important to note that language support in serverless platforms is constantly evolving, with the addition of new languages and updates to existing ones. This reflects the dynamic nature of the serverless computing landscape as cloud platforms strive to meet the needs of developers and make serverless computing more accessible and versatile.

Note

Serverless services such as AWS Lambda and Azure Functions allow developers to use any language and language version through a variety of ways such as custom runtime environments, custom container images, or custom handlers.

The continuously evolving landscape of language support in serverless platforms can have implications on how engineers deal with security requirements when building and managing serverless applications. Different programming languages have their own set of security considerations — including the best practices for securing application code written in those languages. For one thing, the maturity and ecosystem of programming languages can have an impact on how library implementations differ across languages. A library developed for a relatively new programming language may have security vulnerabilities that counterpart libraries in other more mature languages do not have (since these vulnerabilities may have been detected and remediated years ago). Engineers need to be aware of these language-specific security considerations and ensure that their serverless applications are designed, coded, and configured securely regardless of the language used.

Myth # 4: Serverless applications are difficult to manage

There are some scenarios where managing serverless applications can become challenging. This can be the case if the serverless application makes use of multiple functions or microservices interacting with each other in a highly coordinated manner. Managing the coordination and sequencing of 2 functions is definitely much easier compared to managing the coordination and sequencing of 10 different functions (similar to what is shown in Figure 1-3)!

Figure 1-3. Managing simple and complex serverless applications

In addition to this, even if serverless services and platforms provide built-in monitoring and debugging capabilities, monitoring and debugging relatively large serverless systems (say, around 30 serverless functions) can become very challenging if the application has complex event-driven flows. This may also be the case when issues need to be traced and debugged across multiple functions or services. At the same time, preparing granular Identity and Access Management (IAM) roles and policies used by each of the serverless resources (to allow these resources to perform actions) can be very time consuming and hard to manage.
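To make the IAM point concrete, here is a hedged sketch of a least-privilege policy for a single function that consumes messages from one queue. The account ID, Region, and queue name are placeholders; a policy like this would need to be authored for each resource:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowConsumeSingleQueue",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-queue"
    }
  ]
}
```

Scoping the policy to a single queue ARN (rather than `"Resource": "*"`) is what makes it granular, and also what makes maintaining dozens of such policies by hand tedious without tooling.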

Wait a minute! So are we saying that serverless applications being difficult to manage is not a myth? Like all types of engineering systems, serverless applications and systems with a relatively large number of resources and components can be difficult to manage. Years ago, there were very few tools and services available to support complex requirements using serverless services. However, at this point in time, the ecosystem and tooling available to help engineers manage more complex serverless applications are mature enough to support most scenarios and cases.

The following is a list of recommended solutions that can be used when dealing with relatively complex serverless projects:

  1. Adoption of an event-driven architecture

  2. Leveraging managed services such as message queues and event streams

  3. Adoption of a modular and microservices-oriented approach in serverless application design

  4. Usage of serverless frameworks that provide practical abstractions and automation for deploying and managing serverless applications

  5. Usage of error tracing, debugging, and monitoring tools specializing on serverless systems

  6. Implementation and enforcement of infrastructure-as-code (IaC) practices

  7. Usage of orchestration and workflow tools and services for managing the coordination, sequence, and flow of functions along with other resources in the serverless application

  8. Usage of deployment tools to automate and ensure the consistency of the deployment and release management of serverless applications
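As an example of the framework and infrastructure-as-code recommendations above, here is a minimal sketch of an AWS SAM template that declares a function and its API trigger as code. The resource names, paths, handler, and runtime are illustrative:

```yaml
# Minimal AWS SAM template sketch (resource names are illustrative);
# `sam deploy` turns this declaration into deployed resources.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      CodeUri: src/
      Events:
        OrdersApi:
          Type: Api
          Properties:
            Path: /orders
            Method: post
```

Keeping the function, its trigger, and (eventually) its IAM permissions in one versioned template is what makes deployments repeatable and reviewable as the application grows.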

Of course, there is no need to use all of these recommendations at the same time. Engineering teams should choose the right set of solutions and processes depending on the complexity level of the serverless application involved. That said, the myth that serverless applications are difficult to manage is not entirely accurate. With proper planning along with the use of the right set of tools, services, and strategies, working with complex serverless applications should be more manageable and predictable.

Myth # 5: Serverless applications are immune to security attacks

Developers, engineers, and other technology professionals may think that serverless applications are immune to security attacks. For one thing, the term “serverless” can be misleading, as it implies that there are no servers involved in the application architecture. In reality, serverless applications still run on servers that are managed by cloud providers. Like any other application, serverless applications are susceptible to security vulnerabilities. In addition to this, technology professionals may believe that the managed security features provided by the cloud platforms are sufficient to make serverless applications immune to security attacks, without realizing that additional security measures are still required at the application and configuration level. Some developers and engineers may also be unaware of the potential security risks associated with serverless applications. This lack of awareness or knowledge could also result in a false belief that serverless applications are immune to security attacks.

It’s important to note that while serverless services and architectures may provide certain security advantages, they are not inherently immune to security attacks. Serverless applications utilizing FaaS services still rely on the developer’s code to execute custom business logic. If there are vulnerabilities in the code, these vulnerabilities can be exploited by attackers similar to how other application-level attacks are performed. Serverless systems can also be vulnerable to Denial-of-Service (DoS) attacks, where an attacker floods the application with requests that overwhelm the deployed resources and cause the application to become unresponsive or unavailable. It is also possible for these systems to be vulnerable to Denial-of-Wallet (DoW) attacks, where an attacker floods the application with requests that inflict financial damage on the owner of the account where the application is running (since the account owner will have a significantly higher bill to pay due to the unreasonable increase in resource usage and execution of the deployed serverless resources).
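To make the application-level risk concrete, here is a hedged before-and-after sketch in Python; the event field names are illustrative. The first handler passes attacker-controlled input to eval (a classic injection vulnerability), while the second parses the same input as data and validates it:

```python
import json

def unsafe_handler(event, context=None):
    """DON'T do this: eval() executes arbitrary attacker-supplied
    Python, so any caller who controls 'expression' controls the
    function's execution environment (and its IAM permissions)."""
    return {'result': eval(event['expression'])}

def safe_handler(event, context=None):
    """Parse untrusted input as data, not code, and validate it
    before use. Field names here are illustrative."""
    try:
        payload = json.loads(event['expression'])
    except (KeyError, TypeError, ValueError):
        return {'statusCode': 400, 'body': 'invalid input'}
    if not isinstance(payload, dict) or 'quantity' not in payload:
        return {'statusCode': 400, 'body': 'invalid input'}
    return {'statusCode': 200,
            'body': json.dumps({'quantity': payload['quantity']})}
```

The same discipline (treat every event field as untrusted, validate before acting) applies whether the event comes from an HTTP API, a queue, or another function.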

Note

We will discuss these security threats in more detail in Chapter 4: Diving Deeper into Serverless Security Threats and Risks.

That said, the myth that serverless applications are immune to security attacks is incorrect. It is essential for developers and engineers to be aware of the potential vulnerabilities in serverless applications and follow the recommended best practices for securing these systems to prevent potential security attacks.

In this chapter, we discussed what serverless is and learned the myths and misconceptions associated with it. In the next chapter, we will dive deep into the common implementation patterns, architectures, and strategies for building serverless applications in the cloud.
