Chapter 11. Containers/Microservices

Introduction

Containers offer a layer of abstraction at the application layer, shifting the installation of packages and dependencies from deploy time to build time. This is important because engineers can ship units of code that run and deploy uniformly regardless of environment. Promoting containers as runnable units reduces the risk of dependency and configuration snafus between environments. As a result, organizations have moved quickly to deploy their applications on container platforms. When running applications on a container platform, it’s common to containerize as much of the stack as possible, including the proxy or load balancer. NGINX and NGINX Plus containerize and ship with ease, and they include many features that make delivering containerized applications fluid. This chapter focuses on building NGINX and NGINX Plus container images, on features that make working in a containerized environment easier, and on deploying your image on Kubernetes and OpenShift.

DNS SRV Records

Problem

You’d like to use your existing DNS SRV record implementation as the source for upstream servers with NGINX Plus.

Solution

Specify the service parameter with a value of http on an upstream server directive, along with the resolve parameter, to instruct NGINX Plus to use DNS SRV records as the load-balancing pool. A resolver directive is also required so that NGINX Plus knows which DNS server to query:

http {
    # DNS server NGINX Plus queries at runtime
    resolver 10.0.0.2;

    upstream backend {
        # Shared-memory zone; required when using the resolve parameter
        zone backends 64k;
        # service=http resolves the SRV record _http._tcp.api.example.internal;
        # resolve re-queries DNS as the record's TTL expires
        server api.example.internal service=http resolve;
    }
}
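To put the resolved pool to use, a server block can proxy to the upstream by name. The following sketch is illustrative and not part of the recipe; the listen port and location are assumptions. NGINX Plus honors the port, weight, and priority fields returned in the SRV records when distributing requests:

```nginx
server {
    listen 80;

    location / {
        # Requests are balanced across the hosts and ports
        # returned by the SRV lookup for the backend upstream
        proxy_pass http://backend;
    }
}
```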
