Chapter 1. Introduction

What It Means to Be Cloud Native

Cloud native applications can be described in a single line: applications that are built for, and run natively in, cloud computing environments. To fully understand what this means, you must understand cloud computing and how it differs from traditional monolithic software development. Software professionals, to ensure their companies remain competitive, must adopt a modern style of development and deployment that uses the compute and management infrastructure available in cloud environments. In this section, we will discuss cloud native in depth to prepare you for the rest of this book.

Microservice Oriented

First, cloud native architectures break from the traditional design of monoliths, relying instead on containers (e.g., Docker) and serverless compute platforms. This means that applications are smaller and composed at a higher level. We no longer extend an existing application’s functionality by writing or importing a library into the application, which makes the application binary larger, slower to start and execute, and more memory-intensive. Instead, with cloud native we build a new microservice to create a new feature and integrate it with the rest of the application through endpoint-based interfaces (such as HTTP) and event-based interfaces (such as a messaging platform).

For example, say we needed to add image upload capability to our application. In the past, we would have imported a library to implement this functionality, or we would have written an endpoint where we accept a binary type through a web form and then saved the image locally to our server’s disk. In a cloud native architecture, however, we would create a new microservice to encapsulate our image services (upload, retrieve, etc.). We would then save and retrieve this image, not to disk, but to an object storage service in the cloud (either one we would create or an off-the-shelf service provided by our cloud platform).

This microservice also exposes an HTTP endpoint, but it is isolated from the rest of the application. This isolation allows it to be developed and tested without having to involve the rest of the application—giving us the ability to develop and deploy faster. As it is not tightly coupled with the rest of the application, we can also easily add another way to invoke the routine(s): hooking it into an event-driven messaging system, such as Kafka.
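To make the shape of such a microservice concrete, here is a minimal sketch using only the JDK's built-in `com.sun.net.httpserver`. The in-memory map stands in for the cloud object storage service, and the class name, endpoint path, and port handling are our own illustrative choices, not code from a particular platform:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ImageService {

    // Stand-in for a cloud object storage bucket; a real microservice would
    // call the platform's object storage API instead of keeping bytes in memory.
    static final Map<String, byte[]> objectStore = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 8080;
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        server.createContext("/images", exchange -> {
            if ("POST".equals(exchange.getRequestMethod())) {
                // Upload: store the request body under a generated key.
                byte[] body = exchange.getRequestBody().readAllBytes();
                String key = UUID.randomUUID().toString();
                objectStore.put(key, body);
                byte[] resp = key.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(201, resp.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(resp); }
            } else {
                // Retrieve: look up the key from the path /images/{key}.
                String key = exchange.getRequestURI().getPath().replace("/images/", "");
                byte[] img = objectStore.get(key);
                if (img == null) {
                    exchange.sendResponseHeaders(404, -1);
                    exchange.close();
                } else {
                    exchange.sendResponseHeaders(200, img.length);
                    try (OutputStream os = exchange.getResponseBody()) { os.write(img); }
                }
            }
        });

        server.start();
        System.out.println("image service listening on port " + port);
    }
}
```

Because the service owns its own endpoint and storage, swapping the map for a real object store, or adding a Kafka consumer as a second entry point, touches only this one class.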

Loosely Coupled

This brings us to our second main discussion point on cloud native: we rely more on services that are loosely coupled, rather than on tightly coupled monolithic silos. For example, we use an authentication microservice to do the initial authentication. We then use JSON Web Tokens (JWT) to provide the necessary credentials to the rest of our microservices suite to meet the security requirements of our application.
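To illustrate how a downstream microservice might check a token issued by the authentication service, here is a sketch that signs and verifies the HMAC-SHA256 (HS256) signature of a compact JWT using only JDK classes. In practice you would use a maintained JWT library rather than hand-rolled code, and the secret and claims here are purely illustrative:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class JwtCheck {

    /** Verifies the HS256 signature of a compact JWT (header.payload.signature). */
    public static boolean verify(String jwt, String secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, provided);  // constant-time comparison
    }

    /** Builds a signed token, as the authentication microservice would. */
    public static String sign(String payloadJson, String secret) throws Exception {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String sig = enc.encodeToString(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.US_ASCII)));
        return header + "." + payload + "." + sig;
    }

    public static void main(String[] args) throws Exception {
        String token = sign("{\"sub\":\"alice\"}", "shared-secret");
        System.out.println(verify(token, "shared-secret"));  // true
        System.out.println(verify(token, "wrong-secret"));   // false
    }
}
```

The point of the pattern is the loose coupling: any microservice holding the shared secret (or, with RS256, the public key) can validate credentials without calling back to the authentication service on every request.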

The loose coupling of these small, independent microservices provides immense benefits to us software developers and the businesses that run on these platforms:

Cost

We are able to adapt our compute needs to demand (known as elastic computing).

Maintainability

We are able to update or bug-fix one small part of our application without affecting the entire app.

Flexibility

We can introduce new features as new microservices and do staged rollouts.

Speed of development

As we are not doing low-level management of servers (and dynamic provisioning), we can focus on delivering features.

Security

As we are more nimble, we can patch parts of our application that need urgent fixes without extensive downtime.

Twelve-Factor Methodology

Along with these high-level cloud native traits, we should also discuss the twelve-factor application methodology, a set of guidelines for building applications in cloud native environments. You can read about the factors in detail on the methodology’s website, but we’ll summarize them for you here:

  1. A versioned codebase (like a Git repository) matches a deployed service, and the codebase can be used for multiple deployments.

  2. All dependencies should be explicitly declared and should not rely on the presence of system-level tools or libraries. Explicitly declaring and isolating dependencies ensures portability from developer machine to continuous integration/continuous delivery (CI/CD) to production server.

  3. Configuration should be stored in the environment for things that vary between deployments (e.g., environment variables).

  4. Backing services are treated as attached resources, and there is no distinction made between on-premises and third-party resources; all are addressed via locator/credentials or URL, provided via environment configuration.

  5. Strict separation between the stages of build, release, and run ensures reproducibility.

  6. Deploy applications as one or more stateless processes. Shared state should be portable and loadable from a backing service.

  7. Share and export services via a declared port.

  8. Scaling is achieved using horizontal scaling.

  9. Fast startup and graceful shutdown maximize robustness and scaling.

  10. Different environments (dev/test/prod) should be as similar as possible. They must be reproducible, so do not rely on external inputs in their construction.

  11. Logs are to be emitted as event streams (stdout/stderr) for easy aggregation and collection by the cloud platform.

  12. Admin tasks must be in source control, packaged with the application, and able to run in all environments.
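Several of these factors show up directly in application startup code. The following sketch touches factors 3, 4, 7, and 9; the environment variable names `PORT` and `DATABASE_URL` follow common platform conventions but are our assumptions, not names mandated by the twelve-factor text:

```java
public class TwelveFactorConfig {

    // Factor 3: read configuration from the environment, never hardcode it
    // per deployment; fall back to a development default when unset.
    static String env(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // Factor 7: the platform tells the service which port to export.
        int port = Integer.parseInt(env("PORT", "8080"));

        // Factor 4: a backing service is an attached resource located by URL,
        // so swapping an on-premises database for a managed one is a config change.
        String dbUrl = env("DATABASE_URL", "jdbc:postgresql://localhost:5432/app");

        // Factor 9: shut down gracefully when the platform stops the process.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.out.println("draining connections, then exiting")));

        // Factor 11: write logs to stdout and let the platform aggregate them.
        System.out.println("starting on port " + port + ", database at " + dbUrl);
    }
}
```

Because all deployment-specific values arrive through the environment, the same build artifact can run unchanged in dev, test, and production (factors 5 and 10).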

Following these best practices will help developers succeed and will reduce manual tasks and “hacks” that can impede the speed of development. It will also help ensure the long-term maintainability of your application.

Rapid Evolution

Cloud native development brings new challenges; for example, developers often see the loss of direct access to the “server” on which their application is running as overly burdensome. However, the tools available for building and managing microservices, as well as cloud provider tools, help developers to detect and troubleshoot warnings and errors. In addition, technologies such as Kubernetes enable developers to manage the additional complexity of more instances of their microservices and containers. The combination of microservices required to build a full, large application, often referred to as a service mesh, can be managed with a tool such as Istio.

Cloud native is rapidly evolving as the developer community better understands how to build applications on cloud computing platforms. Many companies have invested heavily in cloud native and are reaping the benefits outlined in this section: faster time to market, lower overall cost of ownership, and the ability to scale with customer demand.

It’s clear that cloud native is becoming the way to create modern business applications. As the pace of change is fast, it is important to understand how to get the best out of the technology choices available.

Why Java and the Java Virtual Machine for Cloud Native Applications?

In principle any programming language can be used to create microservices. In reality, though, there are several factors that should influence your choice of a programming language.

Innovation and Insight

The first consideration is simply the pace of innovation and where it is taking place. The Java community is probably the place where those with the deepest knowledge and experience in using the internet for business gather. The community that created Enterprise Java and has made it the de facto business platform is also the community that is leading the evolution of cloud native thinking. They are exploring all the aspects of what it means to be cloud native—whether it is serverless, reactive, or even event driven. Cloud native continues to evolve, so it’s important to keep abreast of its direction and utilize the best capabilities as they’re developed in the Java community.

Performance and Economics

Next, consider that a cloud environment has a significantly different profile from a traditional one that runs on a local server. In a cloud environment, the amount of compute resource is usually lower and, of course, you pay for what you use. That means that cloud native applications need to be frugal and yet still performant. In general, developers need runtimes that are fast to start, consume less memory, and still perform at a high level. Couple this need with cloud’s rapid evolution, and you are looking for a runtime with a pedigree of performance and innovation. The Java platform and Java Virtual Machine (JVM) are the perfect mix of performance and innovation. Two decades’ worth of performance improvements and steady evolution have made Java an excellent general-purpose programming language. Cloud native Java runtimes like Eclipse OpenJ9 offer substantial benefits in runtime costs while maintaining maximum throughput.

Software Design and Cloud Solutions

Finally, it’s important to understand that a modern cloud native application is more complex than traditional applications. This complexity arises because a cloud native solution operates in a world where scale, demand, and availability are increasingly significant factors. Cloud native applications have to be highly available, scale enormously, and handle wide-ranging and dynamic demand. When creating a solution, you must look carefully at what the programming language offers in terms of reducing design issues and bugs. The Java runtime, with its object-oriented approach and built-in memory management, helps remove problems that are challenging to analyze locally, let alone in a highly dynamic cloud environment.

Java and the JVM address these challenges by enabling developers to create applications that are easier to debug, easier to share, and less prone to failure in challenging environments like the cloud.

Summary

In this chapter we outlined the key principles of being cloud native, including being microservice oriented, loosely coupled, and responsive to the fast pace of change. We summarized how following the twelve-factor methodology helps you succeed in being cloud native and why Java is the right choice for building cloud native applications.

In the next chapter we explore the importance of an open approach when choosing what to use for your cloud native applications. An open approach consists of open standards to help you interoperate and insulate your code from vendor lock-in, open source to help you reduce costs and innovate faster, and open governance to help grow communities and ensure technology choices remain independent of undue influence from any one company. We’ll also outline our technology choices for the cloud native microservices shown in the remainder of the book.
