Chapter 6. Hands-on Cluster Management, Failover, and Load Balancing

In Chapter 5, we had a quick introduction to Linux containers and cluster management. Let’s now use those technologies to solve the issues that arise when running microservices at scale. For reference, we’ll be using the microservice projects we developed in Chapters 2, 3, and 4 (Spring Boot, MicroProfile, and Apache Camel, respectively). The following steps can be accomplished with any of those three Java frameworks.

Getting Started

To deploy our microservices, we will assume that a Docker image exists. Each microservice described here already has a Docker image available at the Docker Hub registry, ready to be consumed. However, if you want to craft your own Docker image, this chapter will cover the steps to make it available inside your Kubernetes/OpenShift cluster.

Each microservice uses the same base Docker image provided by the Fabric8 team. The fabric8/java-alpine-openjdk8-jdk image ships OpenJDK 8.0 on the Alpine Linux distribution, which keeps the image as small as 74 MB.
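If you want to confirm this locally, you can pull the base image and list it with the standard Docker CLI (the size reported may vary slightly depending on the tag you pull):

docker pull fabric8/java-alpine-openjdk8-jdk
docker images fabric8/java-alpine-openjdk8-jdk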

This image also provides convenient features, such as adjusting the JVM arguments -Xmx and -Xms, and it makes running fat JARs really simple.
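As a minimal sketch, assuming the image’s startup script honors the JAVA_OPTIONS environment variable (as the Fabric8 Java images document), you could override the heap settings when starting a container built from a Dockerfile like the one shown next. The image tag here is a hypothetical placeholder:

# "myorg/hola-microservice:1.0" is a placeholder for an image you have built yourself
docker run -e JAVA_OPTIONS="-Xms64m -Xmx256m" myorg/hola-microservice:1.0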

An example Dockerfile to build a Java fat JAR image can be as simple as this:

FROM fabric8/java-alpine-openjdk8-jdk
# Name of the fat JAR that the image's startup script should run
ENV JAVA_APP_JAR <your-fat-jar-name>
# Turn off the agent-bond agent bundled with the image (optional)
ENV AB_OFF true
# Copy the built artifact into the image's deployments directory
ADD target/<your-fat-jar-name> /deployments/

The environment variable JAVA_APP_JAR specifies the name of the JAR file that the image’s startup script should run when the container starts.
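To try such an image locally before pushing it to a registry, the standard Docker build-and-run workflow applies. The image tag and port below are illustrative placeholders, not values from the projects themselves:

# Build the image from the project root, where the Dockerfile lives
docker build -t myorg/hola-microservice:1.0 .
# Run it locally, exposing the application's HTTP port (8080 is an assumption)
docker run -p 8080:8080 myorg/hola-microservice:1.0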
