Chapter 4. Dissecting the Monolith

The ultimate goal should be to improve the quality of human life through digital innovation.

Pony Ma Huateng

Throughout history, humans have been obsessed with deconstructing ideas and concepts into simple or composite parts. It is by combining analysis and synthesis that we can achieve a higher level of understanding.

Aristotle called analytics “the resolution of every compound into those things out of which the synthesis is made. For analysis is the converse of synthesis. Synthesis is the road from the principles to those things that derive from the principles, and analysis is the return from the end to the principles.”

Software development follows a similar approach: analyzing a system into its composite parts, identifying inputs, desired outputs, and the detailed functions in between. During this analytic process, we have realized that non-business-specific functionality is always required to process inputs and to communicate or persist outputs. It follows that we could benefit from reusable, well-defined, context-bound, atomic functionality that can be shared, consumed, or interconnected to simplify building software.

Allowing developers to focus primarily on implementing business logic to fulfill purposes—like meeting well-defined needs of a client/business, meeting a perceived need of some set of potential users, or using the functionality for personal needs (to automate tasks)—has been a long-held desire. Too much time is wasted every day reinventing one of the most reinvented wheels: reliable boilerplate code.

The microservices pattern has gained prominence and momentum in recent years because the promised benefits are outstanding. Avoiding known antipatterns, adopting best practices, and understanding core concepts and definitions are paramount in achieving the benefits of this architectural pattern while reducing the drawbacks of adopting it. This chapter covers antipatterns and contains code examples of microservices written with popular frameworks such as Spring Boot, Micronaut, Quarkus, and Helidon.

Traditionally, software has been delivered and deployed as a single unit or system, addressing all requirements from a single source application. Two concepts can be identified here: the monolith application and the monolithic architecture.

A monolith application has only one deployed instance, responsible for performing all steps needed for a specific function. One characteristic of such an application is a unique interface point of execution.

A monolithic architecture refers to an application for which all requirements are addressed from a single source and all parts are delivered as one unit. Components may have been designed to restrict interaction with external clients in order to explicitly limit access of private functionality. Components in the monolith may be interconnected or interdependent rather than loosely coupled. In other words, from the outside or user perspective, there is little knowledge of the definitions, interfaces, data, and services of other separate components.

Granularity is the aggregation level exposed by a component to other external cooperating or collaborating parts of software. The level of granularity in software depends on several factors, such as the level of confidentiality that must be maintained within a series of components and not be exposed or available to other consumers.

Modern software architectures increasingly focus on delivering functionality by bundling or combining software components from different sources, resulting in a finer level of granularity. The functionality then exposed to different components, customers, or consumers is greater than that of a monolithic application.

To qualify how independent or interchangeable a module is, we should look closely at the following characteristics:

  • Number of dependencies

  • Strength of these dependencies

  • Stability of the modules it depends on

Any high score assigned to the previous characteristics should trigger a second review of the modeling and definition of the module.
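
One way to make these characteristics measurable is an instability score in the spirit of Robert C. Martin’s I = Ce / (Ca + Ce), where Ce counts outgoing dependencies and Ca counts incoming ones. The following minimal Java sketch (module names and numbers are hypothetical) shows the idea:

package com.example.demo;

// A minimal sketch of an instability score: I = Ce / (Ca + Ce), where
// Ce is the number of outgoing dependencies (modules this one depends on)
// and Ca the number of incoming ones (modules that depend on this one).
// A value close to 1 means the module depends heavily on others.
public final class ModuleMetrics {

    public static double instability(int outgoing, int incoming) {
        if (outgoing + incoming == 0) {
            return 0.0;
        }
        return (double) outgoing / (outgoing + incoming);
    }

    public static void main(String[] args) {
        // A hypothetical module with 8 outgoing and 2 incoming dependencies
        System.out.println(instability(8, 2)); // prints 0.8: review its modeling
    }
}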

Cloud Computing

Cloud computing has several definitions. Peter Mell and Tim Grance define it as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (such as networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

In recent years, spending on cloud computing has increased considerably. For example, cloud infrastructure services spending increased 32% to $39.9 billion in the last quarter of 2020. Total expenditure was more than $3 billion higher than the previous quarter and nearly $10 billion more than Q4 2019, according to Canalys data.

Several providers exist, but the market share is not evenly distributed. The three leading service providers are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. AWS is the leading cloud service provider, accounting for 31% of total spending in Q4 2020. Azure’s growth rate accelerated, up by 50%, with a share close to 20%, whereas Google Cloud accounts for a 7% share of the total market.

Utilization of cloud computing services has been lagging. Cinar Kilcioglu and Aadharsh Kannan reported in 2017 in “Proceedings of the 26th International World Wide Web Conference” that usage of cloud resources in data centers shows a substantial gap between the resources that cloud customers allocate and pay for (leasing VMs), and actual resource utilization (CPU, memory, and so on). Perhaps customers are just leaving their VMs on but not actually using them.

Cloud services are divided into categories used for different types of computing:

Software as a service (SaaS)

The client can use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser, or a program interface. The client does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a service (PaaS)

The client can deploy onto the cloud infrastructure client-made or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but does have control over the deployed applications and possibly configuration settings for the application-hosting environment.

Infrastructure as a service (IaaS)

The client is able to provision processing, storage, networks, and other fundamental computing resources. They can deploy and run arbitrary software, which can include operating systems and applications. The client does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications—and possibly limited control of select networking components.

Microservices

The term microservice is not a recent one. Peter Rodgers introduced the term micro-web services in 2005 while championing the idea of software as micro-web-services. Microservice architecture—an evolution of service-oriented architecture (SOA)—arranges an application as a collection of relatively lightweight modular services. Technically, microservices are a specialization of an implementation approach for SOA.

Microservices are small and loosely coupled components. In contrast to monoliths, they can be deployed, scaled, and tested independently, and they have a single responsibility, bounded by context, and are autonomous and decentralized. They are usually built around business capabilities, are easy to understand, and may be developed using different technology stacks.

How small should a microservice be? It should be micro enough to allow small, self-contained, and rigidly enforced atoms of functionality that can coexist, evolve, or replace the previous ones according to business needs.

Each component or service has little or no knowledge of the definitions of other separate components, and all interaction with a service is via its API, which encapsulates its implementation details. The messaging between these microservices uses simple protocols and usually is not data intensive.

Antipatterns

The microservice pattern results in significant complexity and is not ideal in all situations. The system is made up of many parts that work independently, and its very nature makes it harder to predict how it will perform in the real world.

This increased complexity is mainly due to the (potentially) thousands of microservices running asynchronously in the distributed computer network. Keep in mind that programs that are difficult to understand are also difficult to write, modify, test, and measure. All these concerns will increase the time teams need to spend on understanding, discussing, tracking, and testing interfaces and message formats.

Several books, articles, and papers are available on this particular topic. I recommend a visit to Microservices.io, Mark Richards’s report Microservices AntiPatterns and Pitfalls (O’Reilly), and “On the Definition of Microservice Bad Smells” by Davide Taibi and Valentina Lenarduzzi (published in IEEE Software in 2018).

Some of the most common antipatterns include the following:

API versioning (static contract pitfall)

APIs need to be semantically versioned to allow services to know whether they are communicating with the right version of the service or whether they need to adapt their communication to a new contract.
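
As an illustration, here is a minimal, hypothetical JAX-RS sketch of path-based versioning (header- or media-type-based versioning are common alternatives; the resource name is made up for this example):

package com.example.demo;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

// The major version is part of the path, so existing consumers stay
// pinned to /v1 while /v2 is free to evolve the contract.
@Path("/v1/greeting")
public class GreetingResourceV1 {

    @GET
    public String greeting() {
        return "Hello, World!";
    }
}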

Inappropriate service privacy interdependency

The microservice requires private data from other services instead of dealing with its own data, a problem that usually is related to a modeling-the-data issue. One solution to consider is merging the microservices.

Multipurpose megaservice

Several business functions are implemented in the same service.

Logging

Errors and microservice information are hidden inside each microservice container. Adopting a distributed logging system should be a priority, because issues surface at all stages of the software lifecycle.
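
As a small illustration of what each service can contribute to such a system, the following sketch (assuming SLF4J on the classpath; the correlation-ID name is hypothetical) tags every log entry with a correlation ID via the mapped diagnostic context (MDC), so a log aggregator can stitch together entries emitted by different services:

package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderHandler {
    private static final Logger log = LoggerFactory.getLogger(OrderHandler.class);

    public void handle(String orderId, String correlationId) {
        // Entries logged while the MDC value is set carry the correlation ID,
        // letting a centralized logging system group them across services.
        MDC.put("correlationId", correlationId);
        try {
            log.info("Processing order {}", orderId);
            // ... business logic ...
        } finally {
            MDC.remove("correlationId");
        }
    }
}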

Complex interservice or circular dependencies

A circular service relationship is defined as a relationship between two or more services that are interdependent. Circular dependencies can harm the ability of services to scale or deploy independently, as well as violate the acyclic dependencies principle (ADP).

Missing API gateway

When microservices communicate directly with each other, or when service consumers communicate directly with each microservice, complexity increases and maintainability decreases. The best practice in this case is to use an API gateway.

An API gateway receives all API calls from clients and then directs them to the appropriate microservices through request routing, composition, and protocol translation. It typically handles a request by invoking multiple microservices and aggregating the results. It is also able to translate between web protocols and the web-unfriendly protocols that are used internally.

An application may use an API gateway to provide a single endpoint for mobile customers to query all product data with a single request. The API gateway consolidates various services, such as product information and reviews, and combines and exposes the results.

The API gateway is the gatekeeper for applications to access data, business logic, or functions (RESTful APIs or WebSocket APIs that allow real-time two-way communication). The API gateway typically handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, cross-origin resource sharing (CORS) support, authorization and access control, throttling, monitoring, and API version management.
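
As a sketch of how such routing may be declared, here is a hypothetical route configuration using Spring Cloud Gateway (one gateway implementation among many; the service names and hostnames are placeholders):

package com.example.demo;

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Each route forwards a path prefix to the service that owns it,
    // keeping the gateway as the single entry point for all clients.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
            .route("products", r -> r.path("/products/**")
                .uri("http://product-service:8080"))
            .route("reviews", r -> r.path("/reviews/**")
                .uri("http://review-service:8080"))
            .build();
    }
}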

Sharing too much

A thin line lies between sharing enough functionality to not repeat yourself and creating a tangled mess of dependencies that prevents service changes from being separated. If an overshared service needs to be changed, evaluating proposed changes in the interfaces will eventually lead to an organizational task involving more development teams.

At some point, you will need to weigh duplicating the code against extracting it into a new shared library that related microservices can consume and evolve independently of one another.

DevOps and Microservices

Microservices fit perfectly into the DevOps ideal of utilizing small teams to create functional changes to the enterprise’s services one step at a time—the idea of breaking large problems into smaller pieces and tackling them systematically. To reduce friction between the development, testing, and deployment of smaller independent services, a series of continuous delivery pipelines must be in place to maintain a steady flow through these stages.

DevOps is a key factor in the success of this architectural style, providing the necessary organizational changes to minimize coordination between teams responsible for each component and to remove barriers to effective, reciprocal interaction between development and operations teams.

Caution

I strongly dissuade any team from adopting the microservices pattern without a robust CI/CD infrastructure in place or without a widespread understanding of the basic concepts of pipelines.

Microservice Frameworks

The JVM ecosystem is vast and provides plenty of alternatives for a particular use case. Dozens of microservice frameworks and libraries are available, to the point that it can be tricky to pick a winner among candidates.

That said, certain candidate frameworks have gained popularity for several reasons: developer experience, time to market, extensibility, resource (CPU, memory) consumption, startup speed, failure recovery, documentation, third-party integrations, and more. These frameworks—Spring Boot, Micronaut, Quarkus, and Helidon—are covered in the following sections. Take into account that some of the instructions may require additional tweaks based on newer versions, as some of these technologies evolve quite rapidly. I strongly recommend reviewing the documentation of each framework.

Additionally, these examples require Java 11 as a minimum, and trying out Native Image also requires an installation of GraalVM. There are many ways to get these versions installed in your environment. I recommend using SDKMAN! to install and manage them. For brevity, I concentrate on production code alone—a single framework could fill a whole book! It goes without saying that you should take care of tests as well. The goal for each example is to build a trivial “Hello World” REST service that can take an optional name parameter and reply with a greeting.

If you have not worked with GraalVM before, it’s an umbrella project for a handful of technologies that enable the following features:

  • A just-in-time (JIT) compiler written in Java, which compiles code on the fly, transforming interpreted code into executable code. The Java platform has had a handful of JITs, most written using a combination of C and C++. Graal happens to be the most modern one, written in Java.

  • A virtual machine named Substrate VM that’s capable of running hosted languages such as Python, JavaScript, and R on top of the JVM in such a way that the hosted language benefits from tighter integration with JVM capabilities and features.

  • Native Image, a utility that relies on ahead-of-time (AOT) compilation, which transforms bytecode into machine-executable code. The resulting transformation produces a platform-specific binary executable.

All four candidate frameworks covered here provide support for GraalVM in one way or another, chiefly relying on GraalVM Native Image to produce platform-specific binaries with the goal of reducing deployment size and memory consumption. Be aware that there’s a trade-off between using the Java mode and the GraalVM Native Image mode. The latter can produce binaries with a smaller memory footprint and faster startup time but requires longer compilation time; long-running Java code will eventually become more optimized (that’s one of the key features of the JVM), whereas native binaries cannot be optimized while running. Development experience also varies, as you may need to use additional tools for debugging, monitoring, measuring, and so forth.

Spring Boot

Spring Boot is perhaps the most well-known among the four candidates, as it builds on top of the legacy laid out by the Spring Framework. If developer surveys are to be taken at face value, more than 60% of Java developers have some sort of experience interacting with Spring-related projects, making Spring Boot the most popular choice.

The Spring way lets you assemble applications (or microservices, in our case) by composing existing components, customizing their configuration, and promising low-cost code ownership, as your custom logic is supposedly smaller in size than what the framework brings to the table, and for most organizations that’s true. The trick is to find an existing component that can be tweaked and configured before writing your own. The Spring Boot team makes a point of adding as many useful integrations as needed, from database drivers to monitoring services, logging, journaling, batch processing, report generation, and more.

The typical way to bootstrap a Spring Boot project is by browsing to the Spring Initializr, selecting the features you require in your application, and clicking the Generate button. This action creates a ZIP file that you can download to your local environment to get started. In Figure 4-1, I’ve selected the Web and Spring Native features. The first feature adds components that let you expose data via REST APIs; the second enhances the build with an extra packaging mechanism that can create Native Images with Graal.

Unpacking the ZIP file and running the ./mvnw verify command at the root directory of the project ensures a sound starting point. You’ll notice the command will download a set of dependencies if you’ve not built a Spring Boot application before on your target environment. This is normal Apache Maven behavior. These dependencies won’t be downloaded again the next time you invoke a Maven command—unless dependency versions are updated in the pom.xml file.

Figure 4-1. Spring Initializr

The project structure should look like this:

.
├── HELP.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── demo
    │   │               ├── DemoApplication.java
    │   │               ├── Greeting.java
    │   │               └── GreetingController.java
    │   └── resources
    │       ├── application.properties
    │       ├── static
    │       └── templates
    └── test
        └── java

Our current task requires two additional sources that were not created by the Spring Initializr website: Greeting.java and GreetingController.java. These two files can be created using your text editor or IDE of choice. The first, Greeting.java, defines a data object that will be used to render content as JavaScript Object Notation (JSON), a typical format used to expose data via REST. Additional formats are also supported, but JSON support comes out of the box without any additional dependencies required. This file should look like this:

package com.example.demo;

public class Greeting {
    private final String content;

    public Greeting(String content) {
        this.content = content;
    }

    public String getContent() {
        return content;
    }
}

There’s nothing special about this data holder except that it’s immutable; depending on your use case, you might want to switch to a mutable implementation, but for now this will suffice. Next is the REST endpoint itself, defined as a GET call on a /greeting path. Spring Boot prefers the controller stereotype for this kind of component, no doubt harkening back to the days when Spring MVC (yes, that’s model-view-controller) was the preferred option to create web applications. Feel free to use a different filename, but the component annotation must remain untouched:

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {
    private static final String template = "Hello, %s!";

    @GetMapping("/greeting")
    public Greeting greeting(@RequestParam(value = "name",
        defaultValue = "World") String name) {
        return new Greeting(String.format(template, name));
    }
}

The controller may take a name parameter as input and will use the value World when this parameter is not supplied. Notice that the return type of the mapped method is a plain Java type; it’s the data type we just defined in the previous step. Spring Boot will automatically marshal data from and to JSON based on the annotations applied to the controller and its methods, as well as sensible defaults put in place. If we leave the code as is, the return value of the greeting() method will be automatically transformed into a JSON payload. This is the power of Spring Boot’s developer experience, relying on defaults and predefined configuration that may be tweaked as needed.

You can run the application by either invoking the ./mvnw spring-boot:run command, which runs the application as part of the build process, or by generating the application JAR and running it manually—that is, ./mvnw package followed by java -jar target/demo-0.0.1-SNAPSHOT.jar. Either way, an embedded web server will start listening on port 8080, and the /greeting path will be mapped to an instance of GreetingController. All that’s left is to issue a couple of queries, such as the following:

// using the default name parameter
$ curl http://localhost:8080/greeting
{"content":"Hello, World!"}

// using an explicit value for the name parameter
$ curl http://localhost:8080/greeting?name=Microservices
{"content":"Hello, Microservices!"}

Take note of the output generated by the application while running. On my local environment, it shows (on average) that the JVM takes 1.6 seconds to start up, while the application takes 600 milliseconds to initialize. The size of the generated JAR is roughly 17 MB. You may also want to take notes on the CPU and memory consumption of this trivial application. For some time now, it’s been suggested that the use of GraalVM Native Image can reduce startup time and binary size. Let’s see how we can make that happen with Spring Boot.

Remember how we selected the Spring Native feature when the project was created? Unfortunately, as of version 2.5.0 the generated project does not include all required instructions in the pom.xml file, so we must make a few tweaks. To begin with, the JAR created by spring-boot-maven-plugin requires a classifier; otherwise, the resulting Native Image may not be properly created. That’s because the application JAR already contains all dependencies inside a Spring Boot-specific path that’s not handled by native-image-maven-plugin, which we also have to configure. The updated pom.xml file should look like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
    https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.5.0</version>
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>11</java.version>
        <spring-native.version>0.10.0-SNAPSHOT</spring-native.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.experimental</groupId>
            <artifactId>spring-native</artifactId>
            <version>${spring-native.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <classifier>exec</classifier>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.experimental</groupId>
                <artifactId>spring-aot-maven-plugin</artifactId>
                <version>${spring-native.version}</version>
                <executions>
                    <execution>
                        <id>test-generate</id>
                        <goals>
                            <goal>test-generate</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>generate</id>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <repositories>
        <repository>
            <id>spring-release</id>
            <name>Spring release</name>
            <url>https://repo.spring.io/release</url>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>spring-release</id>
            <name>Spring release</name>
            <url>https://repo.spring.io/release</url>
        </pluginRepository>
    </pluginRepositories>

    <profiles>
        <profile>
            <id>native-image</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.graalvm.nativeimage</groupId>
                        <artifactId>native-image-maven-plugin</artifactId>
                        <version>21.1.0</version>
                        <configuration>
                            <mainClass>
                                com.example.demo.DemoApplication
                            </mainClass>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>native-image</goal>
                                </goals>
                                <phase>package</phase>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>

One more step before we can give it a try: make sure to have a version of GraalVM installed as your current JDK. The selected version should closely match the version of native-image-maven-plugin found in the pom.xml file. The native-image executable must also be installed in your system; you can do that by invoking gu install native-image. The gu command is provided by the GraalVM installation.

With all settings in place, we can generate a native executable by invoking ./mvnw -Pnative-image package. You’ll notice a flurry of text going through the screen as new dependencies may be downloaded, and perhaps a few warnings related to missing classes—that’s normal. The build also takes longer than usual, and here lies the trade-off of this packaging solution: we are increasing development time to speed up execution time at production. Once the command finishes, you’ll notice a new file com.example.demo.demoapplication inside the target directory. This is the native executable. Go ahead and run it.

Did you notice how fast the startup was? On my environment, I get on average a startup time of 0.06 seconds, while the application takes 30 milliseconds to initialize itself. You may recall these numbers were 1.6 seconds and 600 milliseconds when running in Java mode. That’s a serious speed boost! Now have a look at the size of the executable; in my case, it’s around 78 MB. Oh well, looks like some things have grown for the worse—or have they? This executable is a single binary that provides everything needed to run the application, whereas the JAR we used earlier requires a Java runtime to run. The size of a Java runtime is typically in the 200 MB range and is composed of multiple files and directories. Of course, smaller Java runtimes may be created with jlink, in which case that adds another step during the build process. There’s no free lunch.

Let’s stop with Spring Boot for now, keeping in mind that there’s a whole lot more to it than what has been shown here. On to the next framework.

Micronaut

Micronaut began life in 2017 as a reimagination of the Grails framework but with a modern look. Grails is one of the few successful “clones” of the Ruby on Rails (RoR) framework, leveraging the Groovy programming language. Grails made quite the splash for a few years, until the rise of Spring Boot took it out of the spotlight, prompting the Grails team to find alternatives, which resulted in Micronaut. On the surface, Micronaut provides a similar user experience to Spring Boot, as it also allows developers to compose applications based on existing components and sensible defaults.

One of Micronaut’s key differentiators is the use of compile-time dependency injection for assembling the application, as opposed to runtime dependency injection, which is the preferred way of assembling applications with Spring Boot so far. This seemingly trivial change lets Micronaut exchange a bit of development time for a speed boost at runtime as the application spends less time bootstrapping itself; this can also lead to less memory consumption and less reliance on Java reflection, which historically has been slower than direct method invocations.

There are a handful of ways to bootstrap a Micronaut project, but the preferred one is to browse to Micronaut Launch and select the settings and features you’d like to see added to the project. The default application type defines the minimum settings to build a REST-based application such as the one we’ll go through in a few minutes. Once satisfied with your selection, click the Generate Project button, as shown in Figure 4-2, which results in a ZIP file that can be downloaded to your local development environment.

Figure 4-2. Micronaut Launch

Just as we did for Spring Boot, unpacking the ZIP file and running the ./mvnw verify command at the root directory of the project ensures a sound starting point. This command invocation will download plug-ins and dependencies as needed; the build should succeed after a few seconds if everything goes right. The project structure should look like the following after adding a pair of additional source files:

.
├── README.md
├── micronaut-cli.yml
├── mvnw
├── mvnw.bat
├── pom.xml
└── src
    └── main
        ├── java
        │   └── com
        │       └── example
        │           └── demo
        │               ├── Application.java
        │               ├── Greeting.java
        │               └── GreetingController.java
        └── resources
            ├── application.yml
            └── logback.xml

The Application.java source file defines the entry point, which we’ll leave untouched for now as there’s no need to make any updates. Similarly, we’ll leave the application.yml resource file unchanged as well; this resource supplies configuration properties that don’t require changes at this point.

We need two additional source files: the data object defined by Greeting.java, whose responsibility is to contain a message sent back to the consumer, and the actual REST endpoint defined by GreetingController.java. The controller stereotype goes all the way back to the conventions laid out by Grails, also followed by pretty much every RoR clone. You can certainly change the filename to anything that suits your domain, though you must leave the @Controller annotation in place. The source for the data object should look like this:

package com.example.demo;

import io.micronaut.core.annotation.Introspected;

@Introspected
public class Greeting {
    private final String content;

    public Greeting(String content) {
        this.content = content;
    }

    public String getContent() {
        return content;
    }
}

Once more we rely on an immutable design for this class. Note the use of the @Introspected annotation, which signals Micronaut to inspect the type at compile time and include it as part of the dependency-injection procedure. Usually, the annotation can be left out, as Micronaut will figure out that the class is required. But its use is paramount when it comes to generating the native executable with GraalVM Native Image; otherwise, the executable won’t be complete. The second file should look like this:

package com.example.demo;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.QueryValue;

@Controller("/")
public class GreetingController {
    private static final String template = "Hello, %s!";

    @Get(uri = "/greeting")
    public Greeting greeting(@QueryValue(value = "name",
        defaultValue = "World") String name) {
        return new Greeting(String.format(template, name));
    }
}

We can appreciate that the controller defines a single endpoint mapped to /greeting, takes an optional parameter named name, and returns an instance of the data object. By default, Micronaut will marshal the return value as JSON, so no extra configuration is required to make it happen. Running the application can be done in a couple of ways. You can either invoke ./mvnw mn:run, which runs the application as part of the build process, or invoke ./mvnw package, which creates a demo-0.1.jar in the target directory that can be launched in the conventional way—that is, with java -jar target/demo-0.1.jar. Invoking a couple of queries to the REST endpoint may result in output similar to this:

// using the default name parameter
$ curl http://localhost:8080/greeting
{"content":"Hello, World!"}

// using an explicit value for the name parameter
$ curl http://localhost:8080/greeting?name=Microservices
{"content":"Hello, Microservices!"}

Either command launches the application quite quickly. On my local environment, the application is ready to process requests within 500 milliseconds on average, about three times faster than Spring Boot for equivalent behavior. The size of the JAR file is also a bit smaller, at 14 MB in total. As impressive as these numbers may be, we can get a further boost if the application is transformed into a native executable with GraalVM Native Image. Fortunately for us, the Micronaut way is friendlier with this kind of setup, and everything we require is already configured in the generated project. That’s it. No need to update the build file with additional settings—it’s all there.

You do require an installation of GraalVM and its native-image executable, though, as we did before. Creating a native executable is as simple as invoking ./mvnw -Dpackaging=native-image package, and after a few minutes we should get an executable named demo (as a matter of fact, it’s the project’s artifactId, if you were wondering) inside the target directory. Launching the application with the native executable results in a 20-millisecond startup time on average, a third of the time we measured for Spring Boot’s native executable. The executable size is 60 MB, which correlates with the reduced size of the JAR file.

Let’s stop exploring Micronaut and move to the next framework: Quarkus.

Quarkus

Although Quarkus was announced in early 2019, work on it began much earlier. Quarkus has a lot of similarities with the two candidates we’ve seen so far: it offers a great development experience based on components, convention over configuration, and productivity tools. Even more, Quarkus also uses compile-time dependency injection, like Micronaut, allowing it to reap the same benefits, such as smaller binaries, faster startup, and less runtime magic. At the same time, Quarkus adds its own flavor and distinctiveness, and perhaps most important for some developers, it relies more on standards than the other two candidates: Quarkus implements the MicroProfile specifications, which are standards that come from JakartaEE (previously known as JavaEE), as well as additional standards developed under the MicroProfile project umbrella.

You can get started with Quarkus by browsing to the Quarkus Configure Your Application page to configure values and download a ZIP file. This page is loaded with plenty of goodies, including many extensions to choose from to configure specific integrations such as databases, REST capabilities, monitoring, and more. The RESTEasy Jackson extension must be selected, allowing Quarkus to seamlessly marshal values to and from JSON. Clicking the “Generate your application” button should prompt you to save a ZIP file into your local system, the contents of which should look similar to this:

.
├── README.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    ├── main
    │   ├── docker
    │   │   ├── Dockerfile.jvm
    │   │   ├── Dockerfile.legacy-jar
    │   │   ├── Dockerfile.native
    │   │   └── Dockerfile.native-distroless
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── demo
    │   │               ├── Greeting.java
    │   │               └── GreetingResource.java
    │   └── resources
    │       ├── META-INF
    │       │   └── resources
    │       │       └── index.html
    │       └── application.properties
    └── test
        └── java

We can appreciate that Quarkus adds Docker configuration files out of the box, as it was designed to tackle microservice architectures in the cloud via containers and Kubernetes. But as time has passed, its range has grown wider by supporting additional application types and architectures. The GreetingResource.java file is also created by default, and it’s a typical Jakarta RESTful Web Services (JAX-RS) resource. We’ll have to make some adjustments to that resource to enable it to handle the Greeting.java data object. Here’s the source for that:

package com.example.demo;

public class Greeting {
    private final String content;

    public Greeting(String content) {
        this.content = content;
    }

    public String getContent() {
        return content;
    }
}

The code is pretty much identical to what we’ve seen before in this chapter. There’s nothing new or surprising about this immutable data object. Now, in the case of the JAX-RS resource, things will look similar yet different, as the behavior we seek is the same as before, though the way we instruct the framework to perform its magic is via JAX-RS annotations. Thus the code looks like this:

package com.example.demo;

import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("/greeting")
public class GreetingResource {
    private static final String template = "Hello, %s!";

    @GET
    public Greeting greeting(@QueryParam("name")
        @DefaultValue("World") String name) {
        return new Greeting(String.format(template, name));
    }
}

If you’re familiar with JAX-RS, this code should be no surprise to you. But if you’re not familiar with the JAX-RS annotations, what we do here is mark the resource with the REST path we’d like to react to; we also indicate that the greeting() method will handle a GET call, and that its name parameter has a default value. Nothing more needs to be done to instruct Quarkus to marshal the return value into JSON, as that will happen by default.

Running the application can be done in a couple of ways as well, for example using the developer mode as part of the build. This is one of the features that has a unique Quarkus flavor: it lets you run the application and pick up any changes you made automatically, without having to manually restart the application. You can activate this mode by invoking ./mvnw compile quarkus:dev. If you make any changes to the source files, you’ll notice that the build automatically recompiles and reloads the application.

You may also run the application using the java interpreter as we’ve seen before, which results in a command such as java -jar target/quarkus-app/quarkus-run.jar. Note that we’re using a different JAR, although the demo-1.0.0-SNAPSHOT.jar does exist in the target directory; the reason to do it this way is that Quarkus applies custom logic to speed up the boot process even in the Java mode.

Running the application should result in startup times of 600 milliseconds on average, which is pretty close to what Micronaut achieves. Also, the size of the full application is in the 13 MB range. Sending a couple of GET requests to the application, without and with a name parameter, results in output similar to the following:

// using the default name parameter
$ curl http://localhost:8080/greeting
{"content":"Hello, World!"}

// using an explicit value for the name parameter
$ curl http://localhost:8080/greeting?name=Microservices
{"content":"Hello, Microservices!"}

It should be no surprise that Quarkus also supports generating native executables via GraalVM Native Image, given that it targets cloud environments where small binary size is recommended. Because of this, Quarkus comes with batteries included, just like Micronaut, and generates everything you need from the get-go. There’s no need to update the build configuration to get started with native executables. As with the other examples, you must ensure that the current JDK points to a GraalVM distribution and that the native-image executable is found in your path. Once this step has been cleared, all that’s left is to package the application as a native executable by invoking ./mvnw -Pnative package. This activates the native profile, which instructs the Quarkus build tools to generate the native executable.

After a couple of minutes, the build should have produced an executable named demo-1.0.0-SNAPSHOT-runner inside the target directory. Running this executable shows that the application starts up in 15 milliseconds on average. The size of the executable is close to 47 MB, which makes Quarkus the framework that yields the fastest startup and smallest executable size so far when compared to previous candidate frameworks.

We’re done with Quarkus for the time being, leaving us with the fourth candidate framework: Helidon.

Helidon

Last but not least, Helidon is a framework specifically crafted for building microservices with two flavors: SE and MP. The MP flavor stands for MicroProfile and lets you build applications by harnessing the power of standards; this flavor is a full implementation of the MicroProfile specifications. The SE flavor, on the other hand, does not implement MicroProfile, yet delivers similar functionality using a different set of APIs. Pick a flavor based on the APIs you’d like to interact with and your preference for standards; either way, Helidon gets the job done.

Given that Helidon implements MicroProfile, we can use yet another site to bootstrap a Helidon project. The MicroProfile Starter site (Figure 4-3) can be used to create projects for all supported implementations of the MicroProfile specification by versions.

Figure 4-3. MicroProfile Starter

Browse to the site, select which MP version you’re interested in, choose the MP implementation (in our case, Helidon), and perhaps customize some of the available features. Then click the Download button to download a ZIP file containing the generated project. The ZIP file contains a project structure similar to the following, except of course I’ve already updated the sources with the two files required to make the application work as we want it:

.
├── pom.xml
├── readme.md
└── src
    └── main
        ├── java
        │   └── com
        │       └── example
        │           └── demo
        │               ├── Greeting.java
        │               └── GreetingResource.java
        └── resources
            ├── META-INF
            │   ├── beans.xml
            │   └── microprofile-config.properties
            ├── WEB
            │   └── index.html
            ├── logging.properties
            └── privateKey.pem

As it happens, the source files Greeting.java and GreetingResource.java are nearly identical to the sources we saw in the Quarkus example. How is that possible? First because the code is definitely trivial, but also (and more important) because both frameworks rely on the power of standards. As a matter of fact, the GreetingResource.java file is identical across the Quarkus and Helidon examples, and the Greeting.java file is pretty much identical across all frameworks, except that Micronaut and Helidon each require an additional annotation, and only if you’re interested in generating native executables. If you decided to jump ahead to this section before browsing the others, here’s what the Greeting.java file looks like:

package com.example.demo;

import io.helidon.common.Reflected;

@Reflected
public class Greeting {
    private final String content;

    public Greeting(String content) {
        this.content = content;
    }

    public String getContent() {
        return content;
    }
}

It’s just a regular immutable data object with a single accessor. The GreetingResource.java file, which defines the REST mappings needed for the application, follows:

package com.example.demo;

import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("/greeting")
public class GreetingResource {
    private static final String template = "Hello, %s!";

    @GET
    public Greeting greeting(@QueryParam("name")
        @DefaultValue("World") String name) {
        return new Greeting(String.format(template, name));
    }
}

We can appreciate the use of JAX-RS annotations here, as there’s no need for Helidon-specific APIs at this point. The preferred way to run a Helidon application is to package the binaries and run them with the java interpreter. That is, we lose a bit of build tool integration (for now), yet we can still use the command line to perform iterative development. Thus, invoking mvn package followed by java -jar target/demo.jar compiles, packages, and runs the application with an embedded web server listening on port 8080. We can send a couple of queries to it, such as these:

// using the default name parameter
$ curl http://localhost:8080/greeting
{"content":"Hello, World!"}

// using an explicit value for the name parameter
$ curl http://localhost:8080/greeting?name=Microservices
{"content":"Hello, Microservices!"}

If you look at the output where the application process is running, you’ll see that the application starts in 2.3 seconds on average, which makes it the slowest candidate we have seen so far, while the binaries’ size is close to 15 MB, putting it in the middle of all measurements. But as the adage goes, you can’t judge a book by its cover. Helidon provides more features out of the box, automatically configured, which accounts for the extra startup time and the larger deployment size.

If startup speed and deployment size were issues, you could reconfigure the build to remove those features that may not be needed, as well as switch to native executable mode. Fortunately, the Helidon team has embraced GraalVM Native Image as well, and every Helidon project, bootstrapped as we’ve done ourselves, comes with the required configuration to create native binaries. There’s no need to tweak the pom.xml file if you follow the conventions. Execute the mvn -Pnative-image package command, and you’ll find a binary executable named demo inside the target directory. This executable weighs about 94 MB, the largest so far, while its startup time is 50 milliseconds on average, in the same range as the previous frameworks.

Up to now, we’ve caught a glimpse of what each framework has to offer, from base features to build tool integration. As a reminder, there are several reasons to pick one candidate framework over another. I encourage you to write down a matrix for each relevant feature/aspect that affects your development requirements and assess each one of those items with every candidate.

Serverless

This chapter began by looking at monolithic applications and architectures, usually pieced together by components and tiers clumped together into a single, cohesive unit. Changes or updates to a particular piece require updating and deploying the whole. Failure at one particular place could bring down the whole as well. Then we moved on to microservices. Breaking the monolith into smaller chunks that can be updated and deployed individually and independently of one another should take care of the previously mentioned issues, but microservices bring a host of other issues.

Before, it was enough to run the monolith inside an application server hosted on big iron, with a handful of replicas and a load balancer for good measure. This setup has scalability issues. With the microservices approach, we can grow or collapse the mesh of services depending on the load. That boosts elasticity, but now we have to coordinate multiple instances and provision runtime environments, load balancers become a must, API gateways are needed, network latency rears its ugly head, and did I mention distributed tracing? Yes, those are a lot of things to be aware of and manage. But what if you didn’t have to? What if someone else took care of the infrastructure, monitoring, and other “minutiae” required to run applications at scale? This is where the serverless approach comes in: where you concentrate on the business logic at hand and let the serverless provider deal with everything else.

While distilling a component into smaller pieces, one thought should come to mind: “What’s the smallest reusable piece of code I can turn this component into?” If your answer is a Java class with a handful of methods and perhaps a couple of injected collaborators/services, you’re close, but you’re not there yet. The smallest piece of reusable code is, as a matter of fact, a single method. Picture a microservice defined as a single class that performs the following steps:

  1. Reads the input arguments and transforms them into a consumable format as required by the next step

  2. Performs the actual behavior required by the service, such as issuing a query to a database, indexing, or logging

  3. Transforms the processed data into an output format

Now, each of these steps may be organized in separate methods. You may soon realize that some of these methods are reusable as is or parameterized. A typical way to solve this would be to provide a common super type among microservices. This creates a strong dependency among types, and for some use cases, that’s all right. But for others, updates to the common code have to happen as soon as possible, in a versioned fashion, without disrupting currently running code, so I’m afraid we may need an alternative.

With this scenario in mind, if the common code were to be provided instead as a set of methods that can be invoked independently of one another, with their inputs and outputs composed in such a way that you establish a pipeline of data transformations, then we arrive at what are now known as functions. Offerings such as function as a service (FaaS) are a common subject among serverless providers.
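
The composition idea can be expressed directly with java.util.function.Function. The following self-contained sketch (all names hypothetical) wires the three steps listed earlier into a single pipeline:

package com.example.demo;

import java.util.function.Function;

public class Pipeline {

    public static void main(String[] args) {
        // Step 1: transform the raw input into a consumable format
        Function<String, String> parse = raw -> raw.trim().toLowerCase();
        // Step 2: perform the actual behavior of the service
        Function<String, Integer> process = String::length;
        // Step 3: transform the processed data into an output format
        Function<Integer, String> format = len -> "{\"length\":" + len + "}";

        // Composing the independent steps yields the deployable function
        Function<String, String> function = parse.andThen(process).andThen(format);
        System.out.println(function.apply("  Microservices  ")); // {"length":13}
    }
}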

In summary, FaaS is a fancy way to say that you compose applications based on the smallest deployment unit possible and let the provider figure out all the infrastructure details for you. In the following sections, we’ll build and deploy a simple function to the cloud.

Setting Up

Nowadays every major cloud provider has an FaaS offering at your disposal, with add-ons that hook into other tools for monitoring, logging, disaster recovery, and more; just pick the one that meets your needs. For the sake of this chapter, we’ll pick AWS Lambda, which was, after all, the originator of the FaaS idea. We’ll also pick Quarkus as the implementation framework, as it’s the one that currently provides the smallest deployment size. Be aware that the configuration shown here may need some tweaks or might be totally outdated; always review the latest versions of the tools required to build and run the code. We’ll use Quarkus 1.13.7 for now.

Setting up a function with Quarkus and AWS Lambda requires having an AWS account, the AWS CLI installed on your system, and the AWS Serverless Application Model (SAM) CLI if you’d like to run local tests.

Once you have that covered, the next step is to bootstrap the project. We would be inclined to use Quarkus as before, except that a function project requires a different setup, so it’s better to switch to a Maven archetype:

mvn archetype:generate \
    -DarchetypeGroupId=io.quarkus \
    -DarchetypeArtifactId=quarkus-amazon-lambda-archetype \
    -DarchetypeVersion=1.13.7.Final

Invoking this command in interactive mode will ask you a few questions, such as the group, artifact, version (GAV) coordinates for the project, and the base package. For this demo, let’s go with these:

  • groupId: com.example.demo

  • artifactId: demo

  • version: 1.0-SNAPSHOT (the default)

  • package: com.example.demo (same as groupId)

This results in a project structure suitable to build, test, and deploy a Quarkus project as a function deployable to AWS Lambda. The archetype creates build files for both Maven and Gradle, but we don’t need the latter for now; it also creates three function classes, but we need only one. Our aim is to have a file structure similar to this one:

.
├── payload.json
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── demo
    │   │               ├── GreetingLambda.java
    │   │               ├── InputObject.java
    │   │               ├── OutputObject.java
    │   │               └── ProcessingService.java
    │   └── resources
    │       └── application.properties
    └── test
        ├── java
        │   └── com
        │       └── example
        │           └── demo
        │               └── LambdaHandlerTest.java
        └── resources
            └── application.properties

The gist of the function is to capture inputs with the InputObject type, process them with the ProcessingService type, and then transform the results into another type (OutputObject). The GreetingLambda type puts everything together. Let’s have a look at both input and output types first—after all, they are simple types that are concerned with only containing data, with no logic whatsoever:

package com.example.demo;

public class InputObject {
    private String name;
    private String greeting;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getGreeting() {
        return greeting;
    }

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }
}

The lambda expects two input values: a greeting and a name. We’ll see how they get transformed by the processing service in a moment:

package com.example.demo;

public class OutputObject {
    private String result;
    private String requestId;

    public String getResult() {
        return result;
    }

    public void setResult(String result) {
        this.result = result;
    }

    public String getRequestId() {
        return requestId;
    }

    public void setRequestId(String requestId) {
        this.requestId = requestId;
    }
}

The output object holds the transformed data and a reference to the requestId. We’ll use this field to show how we can get data from the running context.

All right, the processing service is next; this class is responsible for transforming the inputs into outputs. In our case, it concatenates both input values into a single string, as shown here:

package com.example.demo;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ProcessingService {
    public OutputObject process(InputObject input) {
        OutputObject output = new OutputObject();
        output.setResult(input.getGreeting() + " " + input.getName());
        return output;
    }
}

What’s left is to have a look at GreetingLambda, the type used to assemble the function itself. This class implements RequestHandler, a known interface from the AWS Lambda Java API; its dependency is already configured in the pom.xml file created by the archetype. The interface is parameterized with input and output types; luckily, we have those already. Every lambda must have a unique name and may access its running context, as shown next:

package com.example.demo;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import javax.inject.Inject;
import javax.inject.Named;

@Named("greeting")
public class GreetingLambda
    implements RequestHandler<InputObject, OutputObject> {
    @Inject
    ProcessingService service;

    @Override
    public OutputObject handleRequest(InputObject input, Context context) {
        OutputObject output = service.process(input);
        output.setRequestId(context.getAwsRequestId());
        return output;
    }
}

All the pieces fall into place. The lambda defines input and output types and invokes the data processing service. For demonstration purposes, this example shows the use of dependency injection, but you could reduce the code by moving the behavior of ProcessingService into GreetingLambda. We can quickly verify the code by running local tests with mvn test or, if you prefer, mvn verify, which also packages the function.
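For reference, here’s a minimal sketch of what a handler test might look like; it assumes Quarkus’s quarkus-test-amazon-lambda support, which feeds the input through the same handler the real deployment would use (the archetype generates a similar LambdaHandlerTest for you):

package com.example.demo;

import io.quarkus.amazon.lambda.test.LambdaClient;
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

@QuarkusTest
public class LambdaHandlerTest {

    @Test
    public void testGreetingLambda() {
        // Build the same payload found in payload.json
        InputObject input = new InputObject();
        input.setGreeting("hello");
        input.setName("Bill");

        // Invokes the lambda locally through the Quarkus runtime
        OutputObject output = LambdaClient.invoke(OutputObject.class, input);
        Assertions.assertEquals("hello Bill", output.getResult());
    }
}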

Note that packaging the function places additional files in the target directory, notably a script named manage.sh, which relies on the AWS CLI to create, update, and delete the function in the AWS account associated with your credentials. The following files support these operations:

function.zip

The deployment file containing the binary bits

sam.jvm.yaml

Template for local testing with the AWS SAM CLI (Java mode)

sam.native.yaml

Template for local testing with the AWS SAM CLI (native mode)

The next step requires you to have an execution role configured, for which it’s best to refer to the AWS Lambda Developer Guide in case the procedure has been updated. The guide shows you how to configure the AWS CLI (if you have not done so already) and how to create an execution role, whose ARN must then be added as an environment variable in your running shell. For example:

LAMBDA_ROLE_ARN="arn:aws:iam::1234567890:role/lambda-ex"

In this case, 1234567890 stands for your AWS account ID, and lambda-ex is a role name of your choosing. We can proceed with executing the function, for which we have two modes (Java, native) and two execution environments (local, production); let’s tackle Java mode first in both environments and then follow up with native mode.
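Before running anything, make sure the role actually exists. The guide’s procedure roughly amounts to the following two commands, which create a role with a trust policy that lets Lambda assume it and then attach the basic execution policy; treat this as a sketch and defer to the guide:

$ aws iam create-role --role-name lambda-ex \
    --assume-role-policy-document '{"Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name lambda-ex \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole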

Running the function in a local environment requires a running Docker daemon, which by now should be commonplace in a developer’s toolbox, as well as the AWS SAM CLI to drive the execution. Remember the set of additional files found inside the target directory? We’ll use the sam.jvm.yaml file alongside payload.json, a file created by the archetype when the project was bootstrapped. It’s located at the root of the project, and its contents should look like this:

{
  "name": "Bill",
  "greeting": "hello"
}

This file defines values for the inputs accepted by the function. Given that the function is already packaged, we just have to invoke it, like so:

$ sam local invoke --template target/sam.jvm.yaml --event payload.json
Invoking io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest
(java11)
Decompressing /work/demo/target/function.zip
Skip pulling image and use local one:
amazon/aws-sam-cli-emulation-image-java11:rapid-1.24.1.

Mounting /private/var/folders/p_/3h19jd792gq0zr1ckqn9jb0m0000gn/T/tmppesjj0c8 as
/var/task:ro,delegated inside runtime container
START RequestId: 0b8cf3de-6d0a-4e72-bf36-232af46145fa Version: $LATEST
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
[io.quarkus] (main) quarkus-lambda 1.0-SNAPSHOT on
JVM (powered by Quarkus 1.13.7.Final) started in 2.680s.
[io.quarkus] (main) Profile prod activated.
[io.quarkus] (main) Installed features: [amazon-lambda, cdi]
END RequestId: 0b8cf3de-6d0a-4e72-bf36-232af46145fa
REPORT RequestId: 0b8cf3de-6d0a-4e72-bf36-232af46145fa	Init Duration: 1.79 ms
Duration: 3262.01 ms Billed Duration: 3300 ms
Memory Size: 256 MB	Max Memory Used: 256 MB
{"result":"hello Bill","requestId":"0b8cf3de-6d0a-4e72-bf36-232af46145fa"}

The command will pull a Docker image suitable for running the function. Take note of the reported values, which may differ depending on your setup. In my local environment, this function would cost me 3.3 seconds and 256 MB per execution. Since Lambda bills on billed duration times allocated memory, that’s roughly 3.3 s at 0.25 GB, or about 0.825 GB-seconds per invocation (check current AWS pricing for the actual rate); this can give you an idea of how much you’ll be billed when running your system as a set of functions. However, local is not the same as production, so let’s deploy the function to the real deal. We’ll use the manage.sh script to accomplish this feat, invoking the following commands:

$ sh target/manage.sh create
$ sh target/manage.sh invoke
Invoking function
++ aws lambda invoke response.txt --cli-binary-format raw-in-base64-out
++ --function-name QuarkusLambda --payload file://payload.json
++ --log-type Tail --query LogResult
++ --output text base64 --decode
START RequestId: df8d19ad-1e94-4bce-a54c-93b8c09361c7 Version: $LATEST
END RequestId: df8d19ad-1e94-4bce-a54c-93b8c09361c7
REPORT RequestId: df8d19ad-1e94-4bce-a54c-93b8c09361c7	Duration: 273.47 ms
Billed Duration: 274 ms	Memory Size: 256 MB
Max Memory Used: 123 MB	Init Duration: 1635.69 ms
{"result":"hello Bill","requestId":"df8d19ad-1e94-4bce-a54c-93b8c09361c7"}

As you can see, the billed duration and memory usage decreased, which is good for our wallet, although the init duration went up to 1.6 seconds, which delays the response and increases the total execution time across the system. Let’s see how these numbers change when we switch from Java mode to native mode. As you may recall, Quarkus lets you package projects as native executables out of the box, but remember that Lambda requires Linux executables, so if you happen to be running in a non-Linux environment, you’ll need to tweak the packaging command. Here’s what needs to be done:

# for linux
$ mvn -Pnative package

# for non-linux
$ mvn package -Pnative -Dquarkus.native.container-build=true \
 -Dquarkus.native.container-runtime=docker

The first command executes the build directly on your machine, whereas the second runs it inside a Docker container and places the generated executable at the expected location on your system. With the native executable now in place, we can execute the new function in both local and production environments. Let’s see the local environment first:

$ sam local invoke --template target/sam.native.yaml --event payload.json
Invoking not.used.in.provided.runtime (provided)
Decompressing /work/demo/target/function.zip
Skip pulling image and use local one:
amazon/aws-sam-cli-emulation-image-provided:rapid-1.24.1.

Mounting /private/var/folders/p_/3h19jd792gq0zr1ckqn9jb0m0000gn/T/tmp1zgzkuhy as
/var/task:ro,delegated inside runtime container
START RequestId: 27531d6c-461b-45e6-92d3-644db6ec8df4 Version: $LATEST
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
[io.quarkus] (main) quarkus-lambda 1.0-SNAPSHOT native
(powered by Quarkus 1.13.7.Final) started in 0.115s.
[io.quarkus] (main) Profile prod activated.
[io.quarkus] (main) Installed features: [amazon-lambda, cdi]
END RequestId: 27531d6c-461b-45e6-92d3-644db6ec8df4
REPORT RequestId: 27531d6c-461b-45e6-92d3-644db6ec8df4	Init Duration: 0.13 ms
Duration: 218.76 ms	Billed Duration: 300 ms Memory Size: 128 MB
Max Memory Used: 128 MB
{"result":"hello Bill","requestId":"27531d6c-461b-45e6-92d3-644db6ec8df4"}

The billed duration decreased by one order of magnitude, going from 3300 ms to just 300 ms, and the memory used was halved; this looks promising compared to its Java counterpart. Will we get better numbers when running in production? Let’s look:

$ sh target/manage.sh native create
$ sh target/manage.sh native invoke
Invoking function
++ aws lambda invoke response.txt --cli-binary-format raw-in-base64-out
++ --function-name QuarkusLambdaNative
++ --payload file://payload.json --log-type Tail --query LogResult --output text
++ base64 --decode
START RequestId: 19575cd3-3220-405b-afa0-76aa52e7a8b5 Version: $LATEST
END RequestId: 19575cd3-3220-405b-afa0-76aa52e7a8b5
REPORT RequestId: 19575cd3-3220-405b-afa0-76aa52e7a8b5	Duration: 2.55 ms
Billed Duration: 187 ms Memory Size: 256 MB	Max Memory Used: 54 MB
Init Duration: 183.91 ms
{"result":"hello Bill","requestId":"19575cd3-3220-405b-afa0-76aa52e7a8b5"}

The billed duration is about 30% lower (187 ms versus 274 ms), and memory usage is less than half of what it was before; but the real winner is the initialization time, which takes roughly 10% of the previous time. Running your function in native mode results in faster startup and better numbers across the board.

Now it’s up to you to decide the combination of options that will give you the best results. Sometimes staying in Java mode is good enough even for production, or going native all the way may give you the edge. Whichever way it may be, measurements are key—don’t guess!

Summary

We covered a lot of ground in this chapter, starting with a traditional monolith, breaking it into smaller parts with reusable components that can be deployed independently, known as microservices, and going all the way to the smallest deployment unit possible: a function. Trade-offs occur along the way: microservice architectures are inherently more complex, composed as they are of more moving parts; network latency becomes a real issue and must be tackled accordingly; and other aspects, such as data transactions, grow more complex because their span may cross service boundaries, depending on the case. Java mode and native executable mode yield different results and each requires a customized setup, with its own pros and cons. My recommendation, dear reader, is to evaluate, measure, and then select a combination; keep tabs on numbers and service-level agreements (SLAs), because you may need to reevaluate decisions along the road and make adjustments.

Table 4-1 summarizes the measurements obtained by running the sample application in both Java and native image modes, in my local environment and remotely, for each of the candidate frameworks. The size columns show the deployment-unit size, while the time columns show the time from startup to the first request.

Table 4-1. Measurement summary
Framework     Java - size   Java - time   Native - size   Native - time
Spring Boot   17 MB         2200 ms       78 MB           90 ms
Micronaut     14 MB         500 ms        60 MB           20 ms
Quarkus       13 MB         600 ms        47 MB           13 ms
Helidon       15 MB         2300 ms       94 MB           50 ms

As a reminder, you are encouraged to take your own measurements. Changes to the hosting environment, JVM version and settings, framework version, network conditions, and other environment characteristics will yield different results. The numbers shown should be taken with a grain of salt, never as authoritative values.
