Chapter 5. Deploy Microservices at Scale with Docker and Kubernetes

Up to now, we’ve talked about microservices at a high level, covering organizational agility, designing with dependency thinking, domain-driven design, and promise theory. Then we took a deep dive into the weeds with three popular Java frameworks for developing microservices: Spring Boot, Dropwizard, and WildFly Swarm. These frameworks make it easy to leverage powerful out-of-the-box capabilities: exposing and consuming REST endpoints, utilizing environment configuration options, packaging as all-in-one executable JAR files, and exposing metrics. All of these concepts revolve around a single instance of a microservice. But what happens when you need to manage dependencies, get consistent startup and shutdown, do health checks, and load balance your microservices at scale? In this chapter, we’re going to discuss those high-level concepts to understand more about the challenges of deploying microservices, regardless of language, at scale.

When we start to break out applications and services into microservices, we end up with more moving pieces by definition: we have more services, more binaries, more configuration, more interaction points, etc. We’ve traditionally dealt with deploying Java applications by building binary artifacts (JARs, WARs, and EARs), staging them somewhere (shared disks, JIRAs, and artifact repositories), opening a ticket, and hoping the operations team deploys them into an application server as we intended, with ...
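One way to sidestep that traditional hand-off is to package the executable JAR itself into a container image, which is where this chapter is headed. A minimal sketch of what that might look like (the base image, paths, and JAR name here are illustrative assumptions, not a prescribed setup):

```dockerfile
# Illustrative Dockerfile for an all-in-one executable JAR.
# The base image and the JAR path (target/app.jar) are assumptions;
# adjust them to match your build tool's output.
FROM eclipse-temurin:17-jre
WORKDIR /app

# Copy the fat JAR produced by your Maven/Gradle build
COPY target/app.jar app.jar

# Port is an assumption; expose whatever your service listens on
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "app.jar"]
```

Built once, an image like this becomes the deployable artifact: the same bits run on a developer laptop, in CI, and in production, rather than depending on an operations team to assemble the runtime as we intended.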