Chapter 5. Deploying Microservices at Scale with Docker and Kubernetes

In the previous chapters we first talked about microservices at a high level, covering organizational agility, designing with dependency thinking, domain-driven design, and promise theory. Then we took a deep dive into the weeds with three popular Java frameworks for developing microservices: Spring Boot, MicroProfile/Thorntail, and Apache Camel. We saw how easily we can leverage the powerful out-of-the-box capabilities these frameworks provide: exposing and consuming REST endpoints, utilizing environment configuration options, packaging as all-in-one executable JAR files, and exposing metrics. These concepts all revolve around a single instance of a microservice. But what happens when you need to manage dependencies, get consistent startup and shutdown behavior, do health checks, and load balance your microservices at scale? In this chapter, we discuss those high-level concepts so you understand more about the challenges of deploying microservices, regardless of language, at scale.
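The at-scale concerns listed above (consistent startup, health checks, load balancing across many instances) map directly onto the Kubernetes primitives covered later in this chapter. As a rough sketch only, assuming a hypothetical `hola-service` image, port 8080, and a `/health` endpoint (none of which come from the text), a Deployment plus a Service might look like:

```yaml
# Deployment: Kubernetes keeps three replicas of the service running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hola-service
spec:
  replicas: 3                         # scale out: desired instance count
  selector:
    matchLabels:
      app: hola-service
  template:
    metadata:
      labels:
        app: hola-service
    spec:
      containers:
      - name: hola-service
        image: example/hola-service:1.0   # assumed image name
        ports:
        - containerPort: 8080
        readinessProbe:               # no traffic until the app reports ready
          httpGet:
            path: /health             # assumed health endpoint
            port: 8080
        livenessProbe:                # restart the container if it stops responding
          httpGet:
            path: /health
            port: 8080
---
# Service: a stable virtual IP that load-balances across the replicas
apiVersion: v1
kind: Service
metadata:
  name: hola-service
spec:
  selector:
    app: hola-service
  ports:
  - port: 80
    targetPort: 8080
```

With this in place, the platform, not the application, owns restart, health-check, and load-balancing behavior, which is the shift this chapter explores.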

When we start to break out applications and services into microservices, we end up, by definition, with more moving pieces: more services, more binaries, more configuration, more interaction points, and so on. We've traditionally dealt with deploying Java applications by building binary artifacts (JARs, WARs, and EARs), staging them somewhere (shared disks, JIRAs, and artifact repositories), opening a ticket, and hoping ...
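The container approach contrasts with staging JARs and opening tickets: the all-in-one executable JAR is baked into an immutable image that ships with its runtime. A minimal sketch, assuming a Temurin JRE base image and an artifact name that are illustrative only:

```dockerfile
# Package the all-in-one executable JAR into a container image
FROM eclipse-temurin:17-jre            # assumed JRE base image
WORKDIR /app
COPY target/hola-service.jar app.jar   # assumed artifact name
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The same image then runs identically on a developer laptop, in CI, and in the cluster, removing the "hoping" step from the hand-off.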
