Chapter 4. Backing Services

In Chapter 3 we built our first microservice with ASP.NET Core. This service exposed some simple endpoints backed by an in-memory repository to provide consumers with the ability to query and manipulate teams and team membership. While it was enough to get started, it’s far from an example of a production-grade service.

In this chapter we’re going to make our first foray into the world of microservice ecosystems. Services never exist in a vacuum, and most of them need to communicate with other services in order to do their jobs. We call these supporting services backing services, and we’ll explore how to create and consume them by creating a new service and modifying the original team service to communicate with it.

Microservice Ecosystems

As we saw in Chapter 3, it’s pretty easy to fire up a quick set of middleware to host some RESTful resources on an HTTP server. These are just implementation details. The real work lies in designing ecosystems of microservices, where, within a larger community of interconnected services, each service can have its own release cadence, can be deployed on its own, and can scale horizontally on demand.

To achieve this, we need to put a little thought into what we’re doing. While classic “hello world” samples all exist in a vacuum and rely on no other services, we’re rarely going to see a lone service in production (with a few exceptions). This was the driving factor behind the discussion of the concept of API First in the previous chapter.

Once we accept the idea that we’re going to need multiple services, it becomes far too easy to oversimplify the problem. We assume that we’ll have a nice, direct, easy-to-follow dependency chain of services like the one in Figure 4-1.

Figure 4-1. An overly simplistic microservice ecosystem

In this completely unrealistic scenario, service A depends on B, which in turn depends on C. With this clear hierarchy in mind, organizations can often make assumptions about processes for developing, deploying, and supporting services like these. These assumptions are dangerous because they can worm their way through an organization until they are no longer assumptions—they’ve become requirements.

Never assume that there is ever going to be a clear dependency chain or hierarchy of services. Instead, plan for something that looks more like Figure 4-2.

Figure 4-2. A more realistic microservice ecosystem

In this ecosystem, we have a better representation of reality. However, even this diagram is trivial compared to some large enterprises that build and maintain hundreds or even thousands of services. To further complicate things, some of these lines might represent traditional HTTP calls while others might represent asynchronous, Event Sourcing–style communication (discussed in Chapter 6).

Bound Resources

Every application we build needs resources. In the traditional world of deploying apps and services to specific servers (virtual or physical), we’re used to our applications needing things like files on disk. These apps also have configuration, credentials, and URLs for accessing other services, and any number of other dependencies that often tightly couple the application to the server on which it is supposed to run.

When we’re running our services in the cloud, we need to build our applications with a slightly more abstract notion. Every resource needed by our application should be considered a bound resource, and accessed in a way that doesn’t violate any of the rules of cloud-native applications.

For example, if our application needs to read and write binary files, we can’t assume that we can use System.IO.File to read and write bytes to disk. This is because the disk in the cloud must be considered ephemeral. It is subject to complete removal without our application knowing. This is part of what allows our services to rapidly and dynamically scale—instances can be brought up and shut down anywhere in the world on demand. If it expects a file to exist on a local disk between requests or process starts, our app is going to fail in unpredictable and potentially catastrophic ways.

The solution is to assume that everything, including the filesystem, is a service. Backing services are bound to our application through some mechanism likely facilitated by a cloud provider (PaaS, or Platform as a Service). Instead of opening a file, we communicate with a generalized persistence service. This could be something we build ourselves, or it could be an Amazon Web Services S3 bucket, or it could be any number of other brokered persistence services available.
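To make this concrete, here is a minimal sketch of what a bound persistence abstraction might look like. The `IBlobStore` name and its members are illustrative, invented for this example; they are not part of the chapter's sample code:

```csharp
using System.IO;
using System.Threading.Tasks;

// Hypothetical abstraction over a brokered persistence service.
// An implementation might talk to an S3 bucket, Azure Blob Storage,
// or an in-memory fake for tests; the consuming code never touches
// System.IO.File directly, so it makes no assumptions about local disk.
public interface IBlobStore
{
    Task SaveAsync(string key, Stream content);
    Task<Stream> OpenReadAsync(string key);
    Task<bool> ExistsAsync(string key);
}
```

Because callers depend only on the interface, swapping a local-disk development implementation for a cloud-backed one becomes a configuration concern rather than a code change.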

One of the most common types of bound resources is likely the database connection. The binding for this resource contains things we should all be familiar with, such as a connection string and credentials. We’ll see more about bound resources and database connections in Chapter 5.

Lastly, as we’ll see in the samples in this chapter, other microservices are also bound resources. The URLs and credentials to the services on which our own service depends should be considered part of the resource binding.

I should note that the concept of a resource binding is an abstraction, and the implementation of it will vary depending on what cloud platform is hosting your application. These service bindings could just as easily come from environment variables injected by your platform or from an external configuration provider.

Whether you’re using Google Cloud Platform, AWS, Azure, Heroku, or just running a bunch of Docker images manually, the key to enabling communications between services is the combination of externalized configuration and treating everything as a bound resource.

Strategies for Sharing Models Between Services

There are a few things that are required for an environment to be considered a microservice ecosystem. The first, obviously, is that you need more than one service. The second is that the services within this ecosystem communicate with each other. Without the latter, you’re just standing up an array of isolated and unrelated services.

If we’re being diligent about following some cloud-native best practices like API First, then all of our services will have documented, versioned, well-understood public APIs. We might be using a YAML standard like Swagger to document our APIs, or we could be using one based on Markdown, like API Blueprint. The mechanism of defining and documenting our APIs is not nearly as important as the discipline we put into designing our APIs before we write our code.

With a well-defined, versioned API that we know isn’t going to change out from underneath us, the services within our ecosystem can be built by different teams. Consuming the API from those services then becomes merely a matter of writing simple REST clients.

If it’s so simple, then why are we dedicating a section of the chapter to the concept of model sharing? The reason is that as people build ecosystems following the API First rule, once they get into writing the code, they often allow the API boundary to become soft or blurred.

Teams frequently make some architectural decisions early on during a project that won’t cause trouble until far into the future, when the cost of untangling the mess can get exorbitant.

As an example, let’s say that you’ve got two services in your suite that both operate on invoices. One accepts an invoice from a queue, performs processing, and then submits an updated invoice to another downstream service.

When we look at this solution on paper, it’s very easy to say something like, “Let’s just extract the invoice model and share it among services.” Seems like a great idea, and it’s used frequently enough that it is a named pattern, often called the canonical model pattern.

Fast-forward a few months, and developers on both service teams have been adding features. The invoice model and its validation rules and (de)serialization code have been factored out into a nice shared module. Because it’s easy and gets the job done, both services eventually end up performing their internal processing against the canonical or public model.

Now when one service changes the model in order to accommodate what should be an internal concern, the other service is affected and potentially has builds and tests broken as a result. They’ve lost the flexibility of true independence, and instead of being a source of flexibility, the canonical model is now a source of tight coupling and is preventing independent team deployment schedules.

It is entirely possible to maintain a canonical model without internal pollution, but you absolutely must be ruthless about maintaining a “pure” canonical model and forcing the internals of a service to use an internal model and convert back and forth with an anti-corruption layer (ACL). This is often perceived as a lot of work for little to no benefit, so many teams skip this discipline and lapse into tightly coupling internals to a public model, the consequences of which grow worse exponentially as more and more services adopt this anti-pattern.

Put another way, two services that are tightly coupled to the same shared internal model are as tightly coupled as if they resided within the same monolith.

In my opinion, based on years of building new software and untangling the messes of legacy software, the real answer is to share nothing. A microservice is an embodiment of the Single Responsibility Principle (SRP) and the Liskov Substitution Principle (LSP). A change to one service should never have any impact on any other service. A change to the internal model should be possible without corrupting the service’s public API or any external models.

Lastly, before getting into the nuts and bolts of the code for this chapter, I leave you with this quote from Sam Newman about the perils of sharing in microservices:

The evils of too much coupling between services are far worse than the problems caused by code duplication

Sam Newman, Building Microservices

Building the Location Service

In Chapter 3, we wrote some code as a simplified example of a service designed to manage team information. It allowed for the querying of teams and team members, as well as assignation of members to teams.

We’ve decided that we also want to maintain and query the locations of all of our team members. We’re hoping to eventually build in some map integrations, so as a first step we want to upgrade the team service to contain locations.

But is that really the right way to go? On the surface, it would be really simple to just add a location field to the data store we’re using for members. We could probably have that change written in short order.

What happens, however, if we decide in the near future to change how we manage locations without changing how we manage team memberships? Someone could decide they want to convert all of the location data to a graph database. If location and team membership are part of the same microservice, we’re violating the SRP and forcing the team service to change every time we modify location management.

It makes more sense to put the responsibility of location management into its own service. This service will manage the location history of individuals (without regard for their team membership). We can add location events to a person, query location history, and as a convenience we can also query for the current location of any individual for whom we have location data.

In keeping with our policy of API and test-first development, Table 4-1 describes the public API for the location service. In our domain, a member is a user of the team management application.

Table 4-1. REST API for the location service
Resource Method Description
/locations/{memberID}/latest GET Retrieves the most current location of a member
/locations/{memberID} POST Adds a location record to a member
/locations/{memberID} GET Retrieves the location history of a member

If you want to browse the full code for this solution, check it out on GitHub by looking at the no-database branch.

First, let’s create a model class to hold location records, which are records of events in which a team member was “spotted” at a location or his mobile device reported his current location (Example 4-1).

Example 4-1. src/StatlerWaldorfCorp.LocationService/Models/LocationRecord.cs
public class LocationRecord {
    public Guid ID { get; set; }
    public float Latitude { get; set; }
    public float Longitude { get; set; }
    public float Altitude { get; set; }
    public long Timestamp { get; set; }
    public Guid MemberID { get; set; }
}

Each location record is uniquely identified by a GUID called ID. This record contains a set of coordinates for latitude, longitude, and altitude; the timestamp for when the location event took place; and the GUID of the individual involved (memberID).

Next we need an interface representing the contract for a location repository (Example 4-2). For this chapter our repository is just going to be a simple in-memory system. In the next chapter we’ll talk about replacing it with a real database.

Example 4-2. src/StatlerWaldorfCorp.LocationService/Models/ILocationRecordRepository.cs
public interface ILocationRecordRepository {
    LocationRecord Add(LocationRecord locationRecord);
    LocationRecord Update(LocationRecord locationRecord);
    LocationRecord Get(Guid memberId, Guid recordId);
    LocationRecord Delete(Guid memberId, Guid recordId);
    LocationRecord GetLatestForMember(Guid memberId);
    ICollection<LocationRecord> AllForMember(Guid memberId);
}

Now that we have a model, an interface for a repository, and a repository implementation (it’s just a wrapper around a collection, so to save space in the book I left that code on GitHub), we’re going to create a controller that exposes this public API. As with all controllers, it is extremely lightweight and defers all of the real work to separately testable components. The code in Example 4-3 illustrates that the controller accepts an ILocationRecordRepository instance via constructor injection.

Example 4-3. src/StatlerWaldorfCorp.LocationService/Controllers/LocationRecordController.cs
[Route("locations/{memberId}")]
public class LocationRecordController : Controller {
    private ILocationRecordRepository locationRepository;

    public LocationRecordController(
       ILocationRecordRepository repository) {
        this.locationRepository = repository;
    }

    [HttpPost]
    public IActionResult AddLocation(Guid memberId,
       [FromBody]LocationRecord locationRecord) {
        locationRepository.Add(locationRecord);
        return this.Created(
            $"/locations/{memberId}/{locationRecord.ID}",
            locationRecord);
    }

    [HttpGet]
    public IActionResult GetLocationsForMember(Guid memberId) {
        return this.Ok(locationRepository.AllForMember(memberId));
    }

    [HttpGet("latest")]
    public IActionResult GetLatestForMember(Guid memberId) {
        return this.Ok(
            locationRepository.GetLatestForMember(memberId));
    }
}

Making the repository available for dependency injection is just a matter of adding it as a scoped service during startup, as in Example 4-4.

Example 4-4. Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<ILocationRecordRepository,
        InMemoryLocationRecordRepository>();
    services.AddMvc();
}

Before moving on to the next section, I suggest you build and test out the location service yourself. Grab the latest code from GitHub and issue the following commands:

$ cd src/StatlerWaldorfCorp.LocationService
$ dotnet restore
$ dotnet build

Note that the code in GitHub has more than one branch. The code for this chapter contains only an in-memory repository and is under the no-database branch. If you check out the master branch, you’ll be peeking ahead at the code for the next chapter.

You can run the application as shown here:

$ dotnet run
Hosting environment: Production
Content root path: [...]
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

With the server running, we can POST a new location record using the following syntax. Note that I’ve added newlines to this to make it more readable. The curl command you type will all be on a single line:

$ curl -H "Content-Type: application/json" -X POST \
   -d '{"id": "55bf35ba-deb7-4708-abc2-a21054dbfa13", \
        "latitude": 12.56, "longitude": 45.567, \
        "altitude": 1200, "timestamp" : 1476029596, \
        "memberId": "0edaf3d2-5f5f-4e13-ae27-a7fbea9fccfb" }' \
   http://localhost:5000/locations/0edaf3d2-5f5f-4e13-ae27-a7fbea9fccfb

We receive back the location record we submitted to indicate that the new record was created. Now we can query the location history for our member (the same memberId we used in the preceding command) with the following command:

$ curl http://localhost:5000/locations/0edaf3d2-5f5f-4e13-ae27-a7fbea9fccfb


Satisfied that our location service is working, we can move on to updating the team service.

Enhancing the Team Service

Now that we’ve created a location service, let’s extend the team service we created in the previous chapter. We’ll modify the service so that when we query the details for a particular team member, we will also include their most current location and when they were spotted or checked into that location.

To do this, we have two main tasks:

  1. Bind the URL for the location service to our team service.
  2. Consume the location service once we have the URL.

To see the full implementation of the enhanced team service, check out the location branch of the team service repository.

Configuring Service URLs with Environment Variables

As mentioned, there are a number of different ways we can “bind” connection information for backing services to our application. The most important thing for us to remember when doing this is that this information must come from the environment. It cannot be information checked in with our codebase.

The simplest way to do this is to set some reasonable defaults in an appsettings.json file, and then allow those defaults to be overridden with environment variables. The defaults are here only to make it easier to work on the code from our workstations, and should never be left intact in real environments:

{
    "location": {
        "url": "http://localhost:5001"
    }
}

With this in place, we can override this setting with an environment variable called LOCATION__URL. Note that there are two underscores in this environment variable. Regardless of how the variable was set by the environment, we can query it by checking for the "location:url" configuration setting, thanks to ASP.NET Core’s configuration system creating a universal abstraction around the data hierarchy.
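For illustration, this is roughly how that order of precedence is typically wired up in the Startup class. This is a sketch; the exact builder configuration in your project may differ:

```csharp
// JSON file first, environment variables last: later sources win,
// so a LOCATION__URL environment variable overrides the
// "location:url" default read from appsettings.json.
var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables();
Configuration = builder.Build();
```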

We can modify our startup so that we register an HttpLocationClient instance with the appropriate URL (we’ll see the implementation of this class shortly):

var locationUrl = Configuration.GetSection("location:url").Value;
logger.LogInformation("Using {0} for location service URL.",
    locationUrl);
services.AddSingleton<ILocationClient>(
    new HttpLocationClient(locationUrl));

With just a single URL that never changes, this kind of environment-fed configuration is pretty easy. We’ll talk about more robust methods of configuring your applications later in the book.

Consuming a RESTful Service

Now that we know how to set up the order of precedence for configuration settings, allowing our file-based defaults to be overridden with environment variables, we can focus on implementing a location client that talks to our location service.

Since we want to be able to unit test a controller method in our team service without making HTTP calls, we know we’re going to start off with creating an interface for our location client (Example 4-5).

Example 4-5. src/StatlerWaldorfCorp.TeamService/LocationClient/ILocationClient.cs
public interface ILocationClient
{
    Task<LocationRecord> GetLatestForMember(Guid memberId);
}

And Example 4-6 is our implementation of a location client that makes simple HTTP requests. Note that the URL to which this client connects is passed in the constructor we saw in our Startup class earlier.

Example 4-6. src/StatlerWaldorfCorp.TeamService/LocationClient/HttpLocationClient.cs
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using StatlerWaldorfCorp.TeamService.Models;
using Newtonsoft.Json;

namespace StatlerWaldorfCorp.TeamService.LocationClient
{
    public class HttpLocationClient : ILocationClient
    {
        public String URL { get; set; }

        public HttpLocationClient(string url)
        {
            this.URL = url;
        }

        public async Task<LocationRecord>
          GetLatestForMember(Guid memberId)
        {
            LocationRecord locationRecord = null;

            using (var httpClient = new HttpClient())
            {
                httpClient.BaseAddress = new Uri(this.URL);
                httpClient.DefaultRequestHeaders.Accept.Clear();
                httpClient.DefaultRequestHeaders.Accept.Add(
                  new MediaTypeWithQualityHeaderValue("application/json"));

                HttpResponseMessage response =
                    await httpClient.GetAsync(
                      String.Format("/locations/{0}/latest", memberId));

                if (response.IsSuccessStatusCode) {
                  string json =
                    await response.Content.ReadAsStringAsync();
                  locationRecord =
                    JsonConvert.DeserializeObject<LocationRecord>(json);
                }
            }
            return locationRecord;
        }
    }
}

With a location service client available, we can now modify the controller method in the team service responsible for querying member details. I didn’t explicitly cover the code for this controller in the previous chapter, so if you didn’t write your own you can find a copy in the GitHub repository.

We’ll modify the controller to invoke the location client so we can append the most recent location for the member to the response (Example 4-7).

Example 4-7. src/StatlerWaldorfCorp.TeamService/Controllers/MembersController.cs
public async virtual Task<IActionResult>
   GetMember(Guid teamID, Guid memberId)
{
   Team team = repository.GetTeam(teamID);
   if (team == null) {
     return this.NotFound();
   } else {
     var q = team.Members.Where(m => m.ID == memberId);
     if (q.Count() < 1) {
       return this.NotFound();
     } else {
       Member member = (Member)q.First();

       return this.Ok(new LocatedMember {
         ID = member.ID,
         FirstName = member.FirstName,
         LastName = member.LastName,
         LastLocation =
           await this.locationClient.GetLatestForMember(member.ID)
       });
     }
   }
}

It’s also worth pointing out here that the LocationRecord model class we’re using is private to the team service. Per the earlier discussion on model sharing, the team service and location service are not sharing models, which allows the team service to remain coupled only to the public API of the location service, and not the internal implementation.

We’re getting away with what could almost be called an implicit anti-corruption layer here, relying on the fact that the two JSON payloads look the same on both sides of the conversation.

In more typical scenarios, we would invoke some form of translation utility to convert from the location service’s public API format to the type of information we need for our own internal model.
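Such a translation utility might look something like the following sketch. Here, `MemberLocation` and `GeoCoordinate` are hypothetical internal types invented for this example; only `LocationRecord`'s shape comes from the service's public API:

```csharp
using System;

// Anti-corruption layer sketch: converts the location service's public
// payload into the team service's own internal representation, so that
// internal code never depends on the external model's shape.
public static class LocationTranslator
{
    public static MemberLocation ToInternal(LocationRecord record)
    {
        return new MemberLocation
        {
            MemberID = record.MemberID,
            Position = new GeoCoordinate(
                record.Latitude, record.Longitude, record.Altitude),
            RecordedAt = DateTimeOffset
                .FromUnixTimeSeconds(record.Timestamp)
        };
    }
}
```

If the location service later renames or restructures its payload, only this translator has to change; the rest of the team service is insulated.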

Running the Services

Before we continue, here’s a quick recap of what we’ve done so far. We decided that we wanted to add the ability to maintain a history of location check-ins for each person using our application. To do this, we created a location service that is the sole owner of location data and exposes a convenience endpoint for checking a member’s most recent location.

The new location service is in GitHub, in the no-database branch. The team service we modified to consume the location service can be found at the team service’s location branch.

You can also run both of these branches directly from tagged Docker images on Docker Hub:

  • Team service: dotnetcoreservices/teamservice:location
  • Location service: dotnetcoreservices/locationservice:nodb

First let’s start up the team service. We need to give it two different configuration parameters via environment variables:

  • Port number, using the PORT variable. We will have to give our services two different ports if we’re running them locally so they don’t collide.
  • Location URL, using the LOCATION__URL variable (remember, two underscores).

Run the following command:

$ docker run -p 5000:5000 -e PORT=5000 \
  -e LOCATION__URL=http://localhost:5001 \
  dotnetcoreservices/teamservice:location
info: Startup[0]
      Using http://localhost:5001 for location service URL.
Hosting environment: Production
Content root path: /pipeline/source/app/publish
Now listening on:
Application started. Press Ctrl+C to shut down.

Backslashes in Mac Terminal Listings

If you’re a Windows user and you’re wondering why there are lots of backslashes in the various listings for terminal commands issued at a Mac or Linux command prompt, this is a line continuation character. It lets the user type multiple lines of input, delaying processing until the final carriage return.

This starts the team service on port 5000, maps port 5000 inside the container to port 5000 on localhost, and tells the team service that it can find the location service at http://localhost:5001.

If you’re on a Mac, you can also pass a one-time environment variable straight through to the dotnet run command as shown here:

LOCATION__URL=http://localhost:5001 dotnet run

With the team service up and running, let’s start the location service:

$ docker run -p 5001:5001 -e PORT=5001 \
  dotnetcoreservices/locationservice:nodb
Status: Downloaded newer image for dotnetcoreservices/locationservice:nodb
Hosting environment: Production
Content root path: /pipeline/source/app/publish
Now listening on:
Application started. Press Ctrl+C to shut down.

Now we’ve got two services running. You can see the Docker configuration for both of those services by using the docker ps command. Next we’re going to need to run a series of commands to see everything working:

  1. Create a new team.
  2. Add a member to that team.
  3. Query the team details to see the member.
  4. Add a location to that member’s location history.
  5. Query the member’s details from the team service to see their location added to the response.

If you’re using Windows, you can achieve the same effect by using your favorite REST client.

Create a new team:

$ curl -H "Content-Type:application/json" -X POST -d \
 '{"id":"e52baa63-d511-417e-9e54-7aab04286281", \
    "name":"Team Zombie"}' http://localhost:5000/teams

Add a new member by posting to the /teams/{id}/members resource:

$ curl -H "Content-Type:application/json" -X POST -d \
 '{"id":"63e7acf8-8fae-42ce-9349-3c8593ac8292", \
   "firstName":"Al", \
   "lastName":"Foo"}' \
 http://localhost:5000/teams/e52baa63-d511-417e-9e54-7aab04286281/members

To confirm that everything has worked so far, query the team details resource:

$ curl http://localhost:5000/teams/e52baa63-d511-417e-9e54-7aab04286281

{"name":"Team Zombie",
 "id":"e52baa63-d511-417e-9e54-7aab04286281",
 "members":[{"id":"63e7acf8-8fae-42ce-9349-3c8593ac8292",
             "firstName":"Al","lastName":"Foo"}]}

Now that the team service has been properly primed with a new team and a new member, we can add a location to the location service. Note that we could have just added an arbitrary location, but the team service wouldn’t be able to find it without at least one team with one member with a location record:

$ curl -H "Content-Type:application/json" -X POST -d \
 '{"id":"64c3e69f-1580-4b2f-a9ff-2c5f3b8f0e1f", \
   "latitude":12.0,"longitude":12.0,"altitude":10.0, \
   "timestamp":0, \
   "memberId":"63e7acf8-8fae-42ce-9349-3c8593ac8292"}' \
 http://localhost:5001/locations/63e7acf8-8fae-42ce-9349-3c8593ac8292

Finally everything is set up to truly test the integration of both the team and the location service. To do this, we’ll query for the member details from the teams/{id}/members/{id} resource:

$ curl http://localhost:5000/teams/e52baa63-d511-417e-9e54-7aab04286281\
/members/63e7acf8-8fae-42ce-9349-3c8593ac8292

I apologize for the lack of a shiny interface to these services. This book is all about building services and not about presentation to users. Additionally, given my lack of artistic ability, you really are better off not being subjected to my user interfaces and sticking with curl or generic REST clients.


Summary

Microservices are services that do one thing. This implies that services are going to have to talk to each other in order to accomplish multiple things or to join forces to accomplish a “big thing.” While there are those who dislike the idea of deploying dozens or hundreds of tiny services, the payoff is worth it when you are able to build, update, and release services independently without affecting others.

In this chapter we talked about some of the complexities of building ecosystems of microservices, and discussed at length some of the technical challenges involved in allowing one service to communicate with another while not violating any of the rules of cloud-native application development.

In the coming chapters, we’ll start looking into more complexities and more challenges with microservice ecosystems and discuss patterns and code to solve those problems.
