Chapter 11. Containers/Microservices

11.0 Introduction

Containers provide a layer of abstraction at the application level, shifting the installation of packages and dependencies from deploy time to build time. This is important because engineers now ship units of code that run and deploy in a uniform way regardless of the environment. Promoting containers as runnable units reduces the risk of dependency and configuration snafus between environments. As a result, there has been a large drive for organizations to deploy their applications on container platforms. When running applications on a container platform, it's common to containerize as much of the stack as possible, including your proxy or load balancer. NGINX containerizes and ships with ease, and it includes many features that make delivering containerized applications fluid. This chapter focuses on building NGINX container images, on features that make working in a containerized environment easier, and on deploying your image on Kubernetes and OpenShift.
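As a minimal sketch of that build-time packaging idea (the base image tag and file names here are illustrative assumptions, not the book's recipes), an NGINX image can be built by layering your configuration onto the official image:

# Illustrative Dockerfile: configuration is baked in at build time,
# so the same image runs unchanged in every environment.
FROM nginx:stable

# Copy site configuration into the image during the build.
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf.d/ /etc/nginx/conf.d/

EXPOSE 80

# The official image already runs NGINX in the foreground by default,
# so no CMD override is needed.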

When containerizing, it's common to decompose services into smaller applications, which are then tied back together by an API gateway. This chapter provides a common scenario of using NGINX as an API gateway to secure, validate, authenticate, and route requests to the appropriate service.
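As a rough sketch of the routing portion of that pattern (the server name, certificate paths, upstream names, and ports below are assumptions for illustration, not the chapter's recipes), NGINX can map URI prefixes to separate backend services:

# Minimal API gateway sketch: route URI prefixes to separate services.
upstream inventory_service {
    server inventory:8080;
}

upstream pricing_service {
    server pricing:8080;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/certs/api.example.com.crt;
    ssl_certificate_key /etc/ssl/private/api.example.com.key;

    # Route requests by URI prefix to the appropriate service.
    location /api/inventory/ {
        proxy_pass http://inventory_service;
    }

    location /api/pricing/ {
        proxy_pass http://pricing_service;
    }

    # Reject anything that doesn't match a published API route.
    location / {
        return 404;
    }
}

Validation and authentication layer on top of this routing, which the recipes in this chapter cover in more detail.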

A couple of architectural considerations about running NGINX in a container should be called out. When containerizing a service, to make use of the Docker log driver, the NGINX access and error logs should be directed to the container's standard output and standard error streams so the container platform can collect them ...
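For example, the log directives can point straight at the standard streams; this is a sketch of the common practice, mirroring what the official NGINX image achieves by symlinking its log files to /dev/stdout and /dev/stderr:

# Send NGINX logs to the container's standard streams so the
# Docker log driver can collect them.
error_log /dev/stderr warn;

http {
    access_log /dev/stdout combined;

    # ... server blocks ...
}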
