Chapter 1. Declarative Deployment

The heart of the Declarative Deployment pattern is Kubernetes’ Deployment resource. This abstraction encapsulates the upgrade and rollback processes of a group of containers and makes its execution a repeatable and automated activity.

Problem

We can provision isolated environments as namespaces in a self-service manner and have the services placed in these environments with minimal human intervention through the scheduler. But with a growing number of microservices, continually updating and replacing them with newer versions becomes an increasing burden too.

Upgrading a service to the next version involves activities such as starting the new version of the Pod, stopping the old version of a Pod gracefully, waiting and verifying that the new version has launched successfully, and sometimes rolling it all back to the previous version in the case of failure. These activities are performed either by allowing some downtime but not running concurrent service versions, or with no downtime but increased resource usage, because both versions of the service run during the update process. Performing these steps manually can lead to human errors, and scripting them properly can require a significant amount of effort, both of which quickly turn the release process into a bottleneck.

Solution

Luckily, Kubernetes has automated application upgrades as well. Using the concept of Deployment, we can describe how our application should be updated, using different strategies and tuning the various aspects of the update process. If you consider that you perform multiple Deployments for every microservice in each release cycle (which, depending on the team and project, can span from minutes to several months), this is another effort-saving automation by Kubernetes.

In Chapter 2, we have seen that, to do its job effectively, the scheduler requires sufficient resources on the host system, appropriate placement policies, and containers with adequately defined resource profiles. Similarly, for a Deployment to do its job correctly, it expects the containers to be good cloud-native citizens. At the very core of a Deployment is the ability to start and stop a set of Pods predictably. For this to work as expected, the containers themselves have to listen for and honor lifecycle events (such as SIGTERM; see Chapter 3, “Managed Lifecycle”) and also provide health-check endpoints as described in Chapter 2, “Health Probe”, which indicate whether they started successfully.

If a container covers these two areas accurately, the platform can cleanly shut down old containers and replace them by starting updated instances. Then all the remaining aspects of an update process can be defined in a declarative way and executed as one atomic action with predefined steps and an expected outcome. Let’s look at the options for container update behavior.

Rolling Deployment

The declarative way of updating applications in Kubernetes is through the concept of Deployment. Behind the scenes, the Deployment creates a ReplicaSet that supports set-based label selectors. Also, the Deployment abstraction allows shaping the update process behavior with strategies such as RollingUpdate (default) and Recreate. Example 1-1 shows the important bits for configuring a Deployment for a rolling update strategy.

Example 1-1. Deployment for a rolling update
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator
spec:
  replicas: 3              1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          2
      maxUnavailable: 1    3
  minReadySeconds: 60      4
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - image: k8spatterns/random-generator:1.0
        name: random-generator
        readinessProbe:    5
          exec:
            command: [ "stat", "/random-generator-ready" ]
1. Declaration of three replicas. You need more than one replica for a rolling update to make sense.

2. Number of Pods that can run temporarily in addition to the specified replicas during an update. In this example, there can be a maximum of four replicas in total.

3. Number of Pods that can be unavailable during the update. Here it could be that only two Pods are available at a time during the update.

4. Duration in seconds for which all readiness probes of a rolled-out Pod need to be healthy before the rollout continues.

5. Readiness probes are essential for a rolling deployment to provide zero downtime; don’t forget them (see Chapter 2, “Health Probe”).

The RollingUpdate strategy behavior ensures there is no downtime during the update process. Behind the scenes, the Deployment creates a new ReplicaSet and replaces the old containers with new ones step by step. With a Deployment, it is possible to control the rate of the new container rollout: the maxSurge and maxUnavailable fields let you control the range of excess and unavailable Pods during the update.

These two fields can be either absolute numbers of Pods or relative percentages that are applied to the configured number of replicas for the Deployment and are rounded up (maxSurge) or down (maxUnavailable) to the next integer value. By default, both maxSurge and maxUnavailable are set to 25%.
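
As a minimal sketch of the percentage form: with three replicas, a value of 25% rounds up to one extra Pod for maxSurge and rounds down to zero unavailable Pods for maxUnavailable, so this variant is slightly stricter than Example 1-1 and never drops below the declared replica count.

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # 25% of 3 replicas, rounded up: at most 1 extra Pod
      maxUnavailable: 25%    # 25% of 3 replicas, rounded down: 0 Pods may be unavailable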

Another important parameter that influences the rollout behavior is minReadySeconds. This field specifies the duration in seconds for which the readiness probes of a Pod need to be successful before the Pod itself is considered available in a rollout. Increasing this value ensures that your application Pod has already been running successfully for some time before the rollout continues. Also, a larger minReadySeconds interval helps in debugging and exploring the new version, and a kubectl rollout pause is easier to issue when the intervals between the update steps are larger.

Figure 1-1 shows the rolling update process.

Figure 1-1. Rolling deployment

To trigger a declarative update, you have three options:

  • Replace the whole Deployment with the new version’s Deployment by using kubectl replace.

  • Patch (kubectl patch) or interactively edit (kubectl edit) the Deployment to set the container image of the new version (a sketch of such a patch follows this list).

  • Use kubectl set image to set the new image in the Deployment.
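
For the patch option, a strategic merge patch that carries only the changed field is enough. The following fragment is a sketch that assumes a hypothetical 2.0 tag as the next version of the example image; the container is identified by its name so the patch merges into the right entry of the containers list.

# Applied with: kubectl patch deployment random-generator --patch-file image-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: random-generator
        image: k8spatterns/random-generator:2.0   # assumed next version tag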

See also the full example in our example repository, which demonstrates the usage of these commands, and shows you how you can monitor or roll back an upgrade with kubectl rollout.

In addition to addressing the previously mentioned drawbacks of the imperative way of deploying services, the Deployment brings the following benefits:

  • Deployment is a Kubernetes resource object whose status is entirely managed by Kubernetes internally. The whole update process is performed on the server side without client interaction.

  • The declarative nature of Deployment lets you specify how the deployed state should look rather than the steps necessary to get there.

  • The Deployment definition is an executable object, tried and tested on multiple environments before reaching production.

  • The update process is also wholly recorded and versioned, with options to pause, continue, and roll back to previous versions.

Fixed Deployment

A RollingUpdate strategy is useful for ensuring zero downtime during the update process. However, the side effect of this approach is that during the update process, two versions of the container are running at the same time. That may cause issues for the service consumers, especially when the update process has introduced backward-incompatible changes in the service APIs and the client is not capable of dealing with them. For this kind of scenario, there is the Recreate strategy, which is illustrated in Figure 1-2.

Figure 1-2. Fixed deployment using a Recreate strategy

The Recreate strategy has the effect of setting maxUnavailable to the number of declared replicas. This means it first kills all containers from the current version and then starts all new containers simultaneously once the old containers have been terminated. The result of this sequence of actions is that there is some downtime while all containers with old versions are stopped and no new containers are ready to handle incoming requests. On the positive side, there won’t be two versions of the containers running at the same time, which simplifies life for service consumers, as they have to handle only one version at a time.
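
Configuring this behavior is a small change relative to Example 1-1: only the strategy type differs, and the rollingUpdate parameters are dropped because they do not apply here. A minimal sketch:

spec:
  replicas: 3
  strategy:
    type: Recreate   # all old Pods are stopped before any new Pod is started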

Blue-Green Release

The Blue-Green deployment is a release strategy used for deploying software in a production environment while minimizing downtime and reducing risk. Kubernetes’ Deployment abstraction is a fundamental concept that lets you define how Kubernetes transitions immutable containers from one version to another. We can use the Deployment primitive as a building block, together with other Kubernetes primitives, to implement this more advanced release strategy of a Blue-Green deployment.

A Blue-Green deployment has to be done manually if no extension such as a service mesh or Knative is used, though. Technically, it works by creating a second Deployment with the latest version of the containers (let’s call it green) that is not serving any requests yet. At this stage, the old Pod replicas (called blue) from the original Deployment are still running and serving live requests.

Once we are confident that the new version of the Pods is healthy and ready to handle live requests, we switch the traffic from old Pod replicas to the new replicas. This activity in Kubernetes can be done by updating the Service selector to match the new containers (tagged as green). As demonstrated in Figure 1-3, once the green containers handle all the traffic, the blue containers can be deleted and the resources freed for future Blue-Green deployments.
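
A minimal sketch of that traffic switch, assuming the blue and green Deployments put an additional version label on their Pods (and on their selectors, so they manage disjoint Pod sets): the Service below initially selects the blue Pods, and changing its version selector to green redirects all new traffic at once. The container port is an assumption about the example image.

apiVersion: v1
kind: Service
metadata:
  name: random-generator
spec:
  selector:
    app: random-generator
    version: blue        # change to "green" to switch all traffic to the new Pods
  ports:
  - port: 80
    targetPort: 8080     # assumed container port of the random-generator image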

Figure 1-3. Blue-Green release

A benefit of the Blue-Green approach is that only one version of the application serves requests at any time, which reduces the complexity for service consumers of having to handle multiple concurrent versions. The downside is that it requires twice the application capacity while both blue and green containers are up and running. Also, there can be significant complications with long-running processes and database state drifts during the transitions.

Canary Release

Canary release is a way to softly deploy a new version of an application into production by replacing only a small subset of old instances with new ones. This technique reduces the risk of introducing a new version into production by letting only some of the consumers reach the updated version. When we are happy with the new version of our service and how it performed with a small sample of users, we can replace all the old instances with the new version in an additional step after this canary release. Figure 1-4 shows a canary release in action.

Figure 1-4. Canary release

In Kubernetes, this technique can be implemented by creating a new ReplicaSet for the new container version (preferably using a Deployment) with a small replica count that serves as the canary instance. At this stage, the Service should direct some of the consumers to the updated Pod instances. After the canary release, and once we are confident that everything with the new ReplicaSet works as expected, we scale the new ReplicaSet up and the old ReplicaSet down to zero. In a way, we are performing a controlled and user-tested incremental rollout.
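
A minimal sketch of such a canary Deployment, assuming a version label alongside the app label, and assuming the stable Deployment from Example 1-1 is given a version: "1.0" label in the same way so the two Deployments select disjoint Pod sets. Because the Service keeps selecting only on the shared app label, roughly one in four requests reaches the single canary Pod next to the three stable replicas; the 2.0 image tag is an assumed next version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator-canary   # hypothetical name for the canary Deployment
spec:
  replicas: 1                     # small replica count used as the canary
  selector:
    matchLabels:
      app: random-generator
      version: "2.0"
  template:
    metadata:
      labels:
        app: random-generator     # shared label the Service selects on
        version: "2.0"            # distinguishes canary Pods from stable Pods
    spec:
      containers:
      - image: k8spatterns/random-generator:2.0   # assumed next version tag
        name: random-generator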

Discussion

The Deployment primitive is an example of where Kubernetes turns the tedious process of manually updating applications into a declarative activity that can be repeated and automated. The out-of-the-box deployment strategies (rolling and recreate) control the replacement of old containers by new ones, and the release strategies (blue-green and canary) control how the new version becomes available to service consumers. The latter two release strategies are based on a human decision for the transition trigger and as a consequence are not fully automated but require human interaction. Figure 1-5 shows a summary of the deployment and release strategies, showing instance counts during transitions.

Figure 1-5. Deployment and release strategies

Every piece of software is different, and deploying complex systems usually requires additional steps and checks. The techniques discussed in this chapter cover the Pod update process, but do not include updating and rolling back other Pod dependencies such as ConfigMaps, Secrets, or other dependent services.

One approach that works today is to create a script to manage the update process of services and their dependencies using the Deployment and other primitives discussed in this book. However, this imperative approach that describes the individual update steps does not match the declarative nature of Kubernetes.

As an alternative, higher-level declarative approaches have emerged on top of Kubernetes. The most important platforms are described in “Higher-level Deployments”. Those techniques work with Operators (see Chapter 23) that take a declarative description of the rollout process and perform the necessary actions on the server side, some of them also including automatic rollbacks in case of an update error. For advanced, production-ready rollout scenarios, it is recommended to look at one of those extensions.

Regardless of the deployment strategy you are using, it is essential for Kubernetes to know when your application Pods are up and running so that it can perform the required sequence of steps to reach the defined target deployment state. The next pattern, Health Probe, in Chapter 2 describes how your application can communicate its health state to Kubernetes.
