Chapter 7. Container Orchestration

Throughout this book, you have run many different Docker containers on your development machine. Each time, you did so using the same mechanism: manually running docker commands in your terminal. That is fine for local development, and it can even work for running a single service instance in production, but when it comes to running an entire fleet of services, this approach quickly becomes unmanageable.
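For reference, starting a single service instance by hand might look something like the following (the container name, port, and image tag here are illustrative, not taken from this book's examples):

    # Run one instance of a service by hand; if it crashes or the host is lost,
    # nothing automatically replaces it.
    $ docker run -d --name web-api -p 3000:3000 my-org/web-api:1.0.0

Every additional instance, machine, and new version means more commands like this one, typed and tracked by hand.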

This is where a container orchestration tool comes into play. Loosely put, a container orchestration tool manages the lifetimes of many ephemeral containers. Such a tool has its own set of responsibilities and must account for situations like the following (a sketch of how an orchestrator handles several of these concerns appears after the list):

  • Containers need to scale up and down as load increases and decreases.

  • New containers are occasionally added as additional services are created.

  • New versions of containers need to be deployed to replace old versions.

  • A single machine may not be able to handle all the containers required by an organization.

  • Containers of the same service should be spread across multiple machines for redundancy.

  • Containers should be able to communicate with one another.

  • Incoming requests should be load balanced across containers of the same service.

  • If a container is deemed unhealthy, it should be replaced by a healthy one.
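
Orchestrators differ in how they express these concerns, but as a loose sketch, Docker's built-in Swarm mode covers several of them with a few commands (the service name and image tags below are illustrative):

    # Create a service with three replicas; the orchestrator spreads them across
    # the machines in the cluster and load balances incoming requests among them.
    # If a replica exits or reports unhealthy, it is replaced automatically.
    $ docker service create --name web-api --replicas 3 -p 3000:3000 my-org/web-api:1.0.0

    # Scale the number of replicas up or down as load changes.
    $ docker service scale web-api=5

    # Deploy a new version; old containers are replaced incrementally.
    $ docker service update --image my-org/web-api:1.1.0 web-api

Other orchestrators, such as Kubernetes, express the same ideas declaratively in configuration files rather than imperative commands, but the underlying responsibilities are the same.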

Container orchestration works great with stateless services, like a typical Node.js service where instances can be destroyed or re-created with few side effects. Stateful ...
