Chapter 4. Cloud Native Development
In the previous chapter we saw how to develop REST services that are backed by a database. We discussed how Enterprise Java makes this simple, allowing the developer to focus largely on the business logic, using a small set of annotations to define database persistence and to provide and call REST services with JSON payloads.
This ability to use annotations to reduce the amount of coding required is great for small numbers of simple services, but you’ll soon encounter limitations if you’re scaling to tens or hundreds of services for which individual teams are responsible. Your business logic is now split across processes with remote APIs. These APIs will need to be appropriately secured. A client request potentially passes through tens of services, all managed independently and with networks in between, which adds the potential for latency and reliability problems. Your independent teams now need to be able to collaborate and communicate their service APIs. These are just some of the costs associated with cloud native development, but thankfully there are APIs and technologies available to help.
Securing REST Services
Background
It’s important to start by recalling that by default the HTTP protocol is stateless. The protocol supports various “verbs” for requests (`GET`, `POST`, `PUT`, `DELETE`, etc.). These are the building blocks for the RESTful approach underpinning microservices. All calls are stateless, as they are simply requests to retrieve or modify state on the server. There is no capability within the protocol to define any sort of relationship between these calls. This design approach means that HTTP services can balance workload effectively across multiple servers (and the like) because any call can be routed to any available responder.
This stateless design is effective for public data where the caller can remain anonymous, but at some point it becomes essential to differentiate one client from another.
As mentioned before, prior to a client authenticating themselves, a service does not need to be able to differentiate between callers. They can remain anonymous and undifferentiated. Once a client is authenticated to the server, however, then they are no longer anonymous. The client may have particular powers to modify the state of the server; hence, the server must ensure there are appropriate controls in place to prevent hijacking of the communications between the user and the server.
Application architectures therefore face a continuous challenge in determining how to communicate securely and statefully with an authenticated client when the underlying protocol is stateless.
The Common Approach
Most application server frameworks provide a basic mechanism to achieve this via a session mechanism that stores a unique identifier in a cookie called `JSESSIONID` that is sent to the client. In this model the session ID is simply a randomly generated key that can be used by the client to show that its request is part of a previous conversation with the server.
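The security of this scheme rests entirely on the session ID being unguessable. As a hedged illustration (not any particular server's implementation; the class and method names are our own), generating such an identifier takes only a cryptographically strong random source:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate an unguessable session identifier:
    // 16 random bytes (128 bits), Base64url-encoded without padding.
    public static String newSessionId() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(SessionIds.newSessionId());
    }
}
```

With 128 bits of entropy, brute-force guessing of a live session ID is infeasible, which is what rules out the "simple numeric value" attack described below.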
This approach does work, but it has some significant weaknesses:
- Server affinities

  The session ID is a key to more important data stored on the server. This is fine when there is only one server. When there are multiple servers that could handle the request from the client, though, they must somehow communicate to each other which session IDs are valid and what the important related data is. Otherwise, a client request routed to a new server would find that its `JSESSIONID` is not recognized, and hence the request would be rejected.

- Session ID hacking or spoofing

  The session ID on its own merely indicates to the server that the client has been seen before. If it were a simple numeric value, it’s easy to see how a malicious actor could create a fake session ID and try to break into an existing conversation between client and server. The random nature of the session ID prevents simple numerical attacks but does not prevent stolen session IDs from being reused.

- Lack of access granularity

  Since the session ID is just a key to identifying the client, it does not restrict the request’s capability. A stolen or hijacked session ID could be used to access the server in any way the client is authorized to do, even if unrelated to the original request from the client.

- Multiple authentication

  `JSESSIONID` is a server concept. It is created by the application server and shared across related instances. If the client needs to talk to different services, then it will need to authenticate separately with each of them. If they too provide `JSESSIONID`s, then the client will need to manage multiple conversations to prevent repeated authentication. Reauthentication is both a waste of resources and a potential security risk.

- Access propagation

  When the application itself is making a request to a service on behalf of the client, the `JSESSIONID` is not suitable to pass on to the next service. The application will need to authenticate with the new service on the client’s behalf. In this case the application has to request additional authentication information from the client or use some preloaded authentication data. Neither of these options is particularly optimal, and both carry a risk of being potential security exposures.
Introducing JSON Web Tokens
In light of all the aforementioned weaknesses, much effort has been applied to creating an improved approach that addresses or reduces these concerns. As one example, in 2015 the IETF published RFC 7519, which proposed a compact solution called JSON Web Token (JWT).
The JWT approach is based on providing an encoded, signed token that the client can use to access an application securely without needing the application to hold session state. The token is not specific to any application and can be passed to downstream services with no need for reauthentication. JWT tokens are readable and verifiable by anyone but, because they are signed, cannot be modified without detection.
A JWT token looks like this:

```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
.eyJzdWIiOiIxMjM0NTY3ODkwIiwiZW1haWwiOiJqb2VAbWFpbHNlcnZlci5jb20iLCJleHAiOjEyMDAxMTY0OTAsImdyb3VwcyI6WyJtZW1iZXIiXX0
.MR_71yUi80M9b_Hb9MnCrquuosvanX2hwggNsgVcMe0
```
Looking closely, we can see the token is separated into three parts by periods (dots). Logically the token consists of a header, a payload, and a signature. The header and payload are base64 encoded. Once split and decoded, the token is more easily understandable.
Header

```json
{
  "alg": "HS256",
  "typ": "JWT"
}
```

The JSON that makes up the header typically has two properties. The `"alg"` field defines the signing algorithm used for the signature. The `"typ"` field specifies the type of the token, which by definition is `"JWT"`.
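Because the header and payload are only base64url encoded, not encrypted, they can be read with nothing more than the JDK's `Base64` support. As a small illustration (the class and method names here are our own), the segments of the example token above decode like so:

```java
import java.util.Base64;

public class JwtDecode {
    // Decode one Base64url-encoded segment of a JWT into its JSON text.
    public static String decodeSegment(String segment) {
        return new String(Base64.getUrlDecoder().decode(segment));
    }

    public static void main(String[] args) {
        String token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
                + ".eyJzdWIiOiIxMjM0NTY3ODkwIiwiZW1haWwiOiJqb2VAbWFpbHNlcnZlci5jb20i"
                + "LCJleHAiOjEyMDAxMTY0OTAsImdyb3VwcyI6WyJtZW1iZXIiXX0"
                + ".MR_71yUi80M9b_Hb9MnCrquuosvanX2hwggNsgVcMe0";
        String[] parts = token.split("\\.");
        System.out.println(decodeSegment(parts[0])); // header JSON
        System.out.println(decodeSegment(parts[1])); // payload JSON
    }
}
```

This readability is by design: any recipient can inspect the claims, while the signature (covered below) guarantees they have not been tampered with.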
Payload
This section contains claims, which are optional. There are multiple predefined claims, some of which, although technically optional, are generally essential.
Claims are logically grouped into three types:
- Registered claims

  Claims that are the most obviously useful or essential. The list includes the token expiration time, the token’s issuer, and the subject or principal of the token.

- Public claims

  Claims intended to be shared between organizations and to be in some way domain specific. Public claims are registered in the IANA JSON Web Token Registry.

- Private claims

  All other claims that individuals and groups want for their own usage.

Here is a typical JWT claims example:
```json
{
  "sub": "joe",
  "email": "joe@mailserver.com",
  "exp": 1200116490,
  "groups": ["member"]
}
```
This example shows the following claims:

- `"sub"`, or subject: the value that will be returned via the MicroProfile `JsonWebToken.getCallerPrincipal()` method.
- `"email"`: a private claim that can be accessed by using `JsonWebToken.getClaim("email")`.
- `"exp"`, or expiration date: the date and time after which this token is considered to be invalid.
- `"groups"`: the list of groups or roles the subject is a member of. This can be automatically checked with the `@RolesAllowed` annotation.
Signature
The signature is computed over the header and payload joined together. It is created by base64 encoding the header and payload, concatenating them with a period, and then signing the result.

The signature can be created with either a public or private key. In either case, the token recipient can easily assert that the data has not been modified. Of course, if the issuer signed the token with a public key, then any potential bad actor could create a fraudulent token by using the public key. So while it is technically feasible to use public keys, it is best to use a private key. Doing so provides additional proof that the token issuer is who they claim to be, as only they have the private key.
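For the `HS256` algorithm named in the example header, the signature is an HMAC-SHA256 over `header.payload` using a shared secret rather than a key pair. A minimal sketch, assuming a made-up secret and our own class and method names:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSign {
    // Compute the HS256 signature segment for "header.payload" with a shared secret.
    public static String sign(String headerDotPayload, byte[] secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] sig = mac.doFinal(headerDotPayload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Verification recomputes the signature and compares it to the received one.
    public static boolean verify(String headerDotPayload, String signature, byte[] secret) {
        return sign(headerDotPayload, secret).equals(signature);
    }

    public static void main(String[] args) {
        String hp = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJqb2UifQ";
        System.out.println(sign(hp, "s3cret".getBytes(StandardCharsets.UTF_8)));
    }
}
```

A production verifier would use a constant-time comparison such as `MessageDigest.isEqual` rather than `String.equals` to avoid timing side channels; the sketch keeps the comparison simple for clarity.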
JWT with MicroProfile
Eclipse MicroProfile provides first-class support for generating and consuming JWT elements. There are easy-to-use annotations and methods for setting and reading group access, identities of participants, times of token issue and expiration, and, of course, setting and reading custom claims.
Enabling JWT as the authentication method
Using JWT with MicroProfile is straightforward. Use an annotation on the `Application` class to enable JWT as the login method:

```java
@LoginConfig(authMethod = "MP-JWT")
public class CoffeeShopApplication extends Application {
```
Consuming JWT
On each endpoint class, use CDI to inject the current JWT instance:
```java
@Path("/orders")
public class OrdersResource {

    @Inject
    private JsonWebToken jwtPrincipal;
```
Each endpoint using JWT support looks similar to the following example:
```java
@GET
@RolesAllowed({"member"})
@Path("coffeeTypes")
public Response listSpecialistCoffeeTypes() {
    ...
```
Notice how by using the `@RolesAllowed` annotation we can check that the client user is in the required `member` group. If the client is not a member, then the server will automatically reject the request.
Additional benefits of using JWT
JWT can be used to store important information about the client that otherwise might have had to be cached on the server. Being able to reduce sensitive data stored on the server greatly helps in situations where the server has been compromised. It is hard to steal sensitive data if it is not actually there!
Imagine that in our coffee-shop example members can order special, super-strong coffees if they are 18 or over. Under normal circumstances, the server will ask for the user’s date of birth, which will be stored in a database. Subsequent interactions with the user will require the server to retrieve the user’s record to check their age and whether they are a coffee club member.
By including private claims in the token that confirm the user is 18 or over and a club member, the server can quickly check that the user is eligible for the special coffees without having to retrieve sensitive data.
This capability is particularly powerful. If the information is not something that should be revealed to the user or a third party, it can be encrypted inside the token.
In this example, the JWT token claims would look something like this:

```json
{
  "sub": "joe",
  "name": "Joe Black",
  "exp": 1200116490,
  "groups": ["member"],
  "adult": true
}
```
The Java code checking the claim during the order would simply be:
```java
@POST
@RolesAllowed({"member"})
@Path("/orderMemberCoffee")
public Response orderMemberCoffee(CoffeeOrder order) {
    JsonValue claim = jwtPrincipal.getClaim("adult");
    if (claim == null || claim != JsonValue.TRUE) {
        return Response.status(Response.Status.FORBIDDEN).build();
    }
    // normal processing of order
```
Encrypting claims
Since JWT contents are essentially public, if the claim information is sensitive, then it can be worthwhile to encrypt the contents of the claim and even obscure the claim name itself. In this example, the actual age of the user is needed:
```
"age": "25"
```

Once encrypted and obfuscated, it appears as follows:

```
"a7a6a43128392fc": "Bd+vK2AnxSNZduoGxFdbpBOfZ3mkPfBcw14t4uU29nA="
```
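A minimal sketch of how a claim value might be encrypted before being placed in the token, using AES-GCM from the JDK. This is an illustration only: how the issuer and consumer share the key, and how the obfuscated claim name is derived, are out of scope, and all names here are our own.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ClaimCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Encrypt a claim value; the fresh 12-byte IV is prepended to the ciphertext.
    public static String encrypt(String value, SecretKey key) {
        try {
            byte[] iv = new byte[12];
            RANDOM.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = cipher.doFinal(value.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypt: split off the IV, then decrypt and authenticate the remainder.
    public static String decrypt(String encoded, SecretKey key) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            byte[] pt = cipher.doFinal(in, 12, in.length - 12);
            return new String(pt, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because each encryption uses a fresh IV, the same plaintext produces a different ciphertext each time, so an observer cannot even tell that two tokens carry the same claim value.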
Final Thoughts on JWT
JWT provides a strong mechanism for validating that the client has not forged any of the claims and is who they say they are. However, like all publicly shared data in an HTTP or HTTPS request, the data in the JWT token can potentially be stolen and used as is against the service. Detecting this kind of spoofing is beyond the scope of this book, but it’s important to know that it can and does occur.
Handling Service Faults
System outages in large businesses can cost them tens of thousands of dollars per minute in lost revenue. High availability (HA) is therefore critical to business success. Many businesses make significant investments with the goal of achieving four-nines availability (99.99% available, or less than 52 minutes, 36 seconds of outage per year) or even five-nines availability (99.999% available, or less than 5 minutes, 15 seconds of outage per year). You may be wondering what this has to do with microservices, and to answer that, we need to do some sums.
Consider a company running a monolithic application with five-nines availability. They split the application up into microservices, each deployed and managed independently. They calculate that on average each request to their application passes through 10 microservices and each individual microservice has five-nines availability. What’s the overall availability of their microservice-based application? It’s actually now only four nines.
This is the probability of a request being successful:

```
0.99999^10 ≈ 0.9999 (99.99%, four nines)
```

If a request passes through 100 microservices, the availability of that request is actually only three nines, meaning 1 in 1,000 requests will encounter a problem, which is not great for customer satisfaction. This is the probability of a request being successful:

```
0.99999^100 ≈ 0.999 (99.9%, three nines)
```
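These sums follow from treating each service as independently available with probability p, so a request that must traverse n services succeeds with probability p raised to the n. The numbers can be checked directly:

```java
public class Availability {
    // Overall availability of a request that must pass through n services,
    // each independently available with probability p.
    public static double chained(double p, int n) {
        return Math.pow(p, n);
    }

    public static void main(String[] args) {
        System.out.printf("10 services:  %.5f%n", chained(0.99999, 10));
        System.out.printf("100 services: %.5f%n", chained(0.99999, 100));
    }
}
```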
The solution to this problem is to expect issues and handle them gracefully. If, as a client of a service, you can be tolerant of its faults and not propagate those issues back to your clients, then your availability is not impacted. Do this for all your service dependencies, and your overall availability is not impacted at all.
Fault tolerance is the concept of designing into a system the ability to gracefully handle faults. There are a number of different strategies for handling faults, and the approach you choose depends on the types of problems you might encounter and the purpose of the service being called. For example, a slow service might require a different strategy from a service that suffers intermittent outages.
MicroProfile Fault Tolerance implements a number of strategies, as summarized here:
- Retry

  The Retry strategy is useful for short-lived transient failures. You can configure the number of times a service request will be retried and the time interval between retries.

- Timeout

  Timeout allows you to time a request out before it completes. This is useful if you are calling a service that might not respond in a reasonable amount of time, for example, within your service’s service level agreement (SLA) response time.

- Fallback

  Fallback allows you to define an alternative action to take in the event of a failure, for example, calling an alternative service or returning cached data.

- Circuit breaker

  A circuit breaker helps prevent repeated calls to a failing service. If a service begins to have issues, the circuit is opened and requests immediately fail until the service becomes stable and the circuit is closed.

- Bulkhead

  Bulkhead is useful when you are calling a service that is at risk of being overloaded. You can restrict the number of concurrent requests to a service and queue up or fail requests over this limit.

- Asynchronous

  Asynchronous allows you to offload requests to separate threads and then use Futures to handle the responses. A Future, an object representing the result of an asynchronous computation, can be used to retrieve the result once the computation has completed.
It’s also possible to combine these policies for the same microservice dependency. For example, you can use Retry along with Fallback so that if the retries ultimately fail, you can call a fallback operation to return something useful.
Let’s look at an example of MicroProfile Fault Tolerance. The following code is for a client of the `Barista` service:

```java
@Retry
@Fallback(fallbackMethod = "unknownBrewStatus")
public OrderStatus retrieveBrewStatus(CoffeeOrder order) {
    Response response = getBrewStatus(order.getId().toString());
    return readStatus(response);
}

private OrderStatus unknownBrewStatus(CoffeeOrder order) {
    return OrderStatus.UNKNOWN;
}

private Response getBrewStatus(String id) {
    return target.resolveTemplate("id", id)
                 .request()
                 .get();
}
```
This code makes a remote call to the `Barista` service to retrieve the status of an order. The client may throw an exception, for example, if it is unable to connect to the `Barista` service. If this occurs, the `@Retry` annotation will cause the request to be retried, and in the event that none of the retries is successful, the `@Fallback` annotation causes the `unknownBrewStatus` method to be called, which returns `OrderStatus.UNKNOWN`.
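Conceptually, the retry-then-fallback behavior that the annotations provide can be sketched in plain Java. This is an illustration of the pattern only, not how MicroProfile implements it, and the names are our own:

```java
import java.util.function.Supplier;

public class RetryWithFallback {
    // Try the primary operation up to maxRetries + 1 times; if every attempt
    // throws, invoke the fallback instead of propagating the failure.
    public static <T> T call(Supplier<T> primary, Supplier<T> fallback, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return primary.get();
            } catch (RuntimeException e) {
                // swallow and retry; a real implementation would log and often delay
            }
        }
        return fallback.get(); // all attempts failed: fall back
    }

    public static void main(String[] args) {
        String status = call(() -> { throw new RuntimeException("Barista down"); },
                             () -> "UNKNOWN", 2);
        System.out.println(status);
    }
}
```

The key property, as in the `retrieveBrewStatus` example above, is that the caller never sees the dependency's failure: it receives either a real result or a deliberate, safe substitute.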
Publishing and Consuming APIs
A common characteristic of companies that succeed with microservices is how they organize their teams. Spotify, for example, has squads that are responsible for each microservice; they manage the microservice from concept to development, from test to production, and finally to end of life.
Services don’t live in isolation, so it’s important for teams to be able to communicate to potential users what their services do and how to call them. Ideally that communication should be both human- and machine-readable, enabling a person to understand the service and a service client to easily call it, for example, by generating a service proxy at build time.
OpenAPI is an open specification at the Linux Foundation designed to do just that. OpenAPI is a standardization of Swagger, contributed by SmartBear. It describes service APIs in either YAML or JSON format, and there are a number of tools that take these definitions and generate proxies or service stubs for various languages, including Java.
Rather than write OpenAPI definitions from scratch, it’s preferable to generate them from the service implementation. This is simpler for developers, as they don’t need to be familiar with the OpenAPI format or restate things already said in the code. It also ensures that the API definition is in sync with the code and that tests can be used to quickly flag breaking changes. It’s possible to generate a machine-readable OpenAPI definition directly from service implementations, such as those using JAX-RS and JSON-B. To augment the service definition with documentation, MicroProfile provides additional annotations that cover things such as operation documentation and API documentation URLs. The following example shows a JAX-RS/JSON-B service using the `@Operation` annotation to add a human-readable description:
```java
@GET
@Path("{id}")
@Operation(summary = "Get a coffee order",
           description = "Returns a CoffeeOrder object for the given order id.")
public CoffeeOrder getOrder(@PathParam("id") UUID id) {
    return coffeeShop.getOrder(id);
}
```
The resulting OpenAPI YAML definition would be as follows. For the sake of brevity, only the definitions relating to the `getOrder` method are shown:
```yaml
openapi: 3.0.0
info:
  title: Deployed APIs
  version: 1.0.0
servers:
- url: http://localhost:9080/coffee-shop
- url: https://localhost:9443/coffee-shop
paths:
  /resources/orders/{id}:
    get:
      summary: Get a coffee order
      description: Returns a CoffeeOrder object for the given...
      operationId: getOrder
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
          format: uuid
      responses:
        default:
          description: default response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/CoffeeOrder'
components:
  schemas:
    CoffeeOrder:
      type: object
      properties:
        id:
          type: string
          format: uuid
        type:
          type: string
          enum:
          - ESPRESSO
          - LATTE
          - POUR_OVER
        orderStatus:
          type: string
          enum:
          - PREPARING
          - FINISHED
          - COLLECTED
```
Some environments also provide a UI so that you can try out the API. Figure 4-1 shows the OpenAPI UI for retrieving a coffee order. Clicking “Try it out” allows you to enter the order `id` and get back the `CoffeeOrder` JSON.
Summary
In this chapter we’ve discussed a number of areas you need to focus on when developing cloud native microservices: end-to-end security through your microservices flow, graceful handling of network and service availability issues to prevent cascading failures, and simple sharing and use of microservices APIs between teams. While these areas aren’t unique to the microservices world, they’re essential to success within it. Without these approaches, your microservice teams will struggle to share and collaborate while remaining autonomous.
We’ve shown how using the open standards of JWT and OpenAPI and their integration into Enterprise Java through MicroProfile, along with MicroProfile’s easy-to-use Fault Tolerance strategies, makes it relatively easy to address these requirements. For additional step-by-step instructions on how to build a cloud native microservices application in Java, please visit ibm.biz/oreilly-cloud-native-start.
In the next chapter we’ll move on to cloud native microservice deployment and how to make your services observable so you can detect, analyze, and resolve problems encountered in production.