Chapter 4. Backend Patterns for Micro-Frontends
You may think that micro-frontends are a possible architecture only when combined with microservices, because that combination gives us end-to-end technology autonomy.
Maybe you’re thinking that your monolith architecture would never support micro-frontends, or even that having a monolith on the API layer would mean mirroring the architecture on the frontend as well.
However, that’s not the case. There are several nuances to take into consideration, and micro-frontends can definitely be used in combination with both microservices and monoliths.
In this chapter, we review some possible integrations between the frontend and backend layers. In particular, we analyze how micro-frontends can work in combination with a monolith, with microservices, and even with the backend for frontend (BFF) pattern.
Also, we will discuss the best patterns to integrate with different micro-frontends implementations, such as the vertical split, the horizontal split with a client-side composition, and the horizontal split with server-side composition.
Finally, we will explore how GraphQL can be a valid solution for micro-frontends as a single entry point for our APIs.
API integration and micro-frontends
Let’s start by defining the different API approaches we may have in a web application. As shown in Figure 4-1, we focus our journey on the most used and well-known patterns.
This doesn’t mean micro-frontends work only with these implementations. By learning how to deal with the BFF, API gateway, and service dictionary patterns, you can devise the right approach for other styles too, such as WebSocket (a two-way communication protocol over a single TCP connection) or hypermedia (REST responses enriched with hypermedia links, so the client consuming the API can dynamically navigate to the appropriate resources by traversing those links).
The patterns we analyze in this chapter are:
- Service dictionary. The service dictionary is just a list of services available for the client to consume. It’s used mainly when we develop an API layer with a monolith or modular monolith architecture; however, it can also be implemented with a microservices architecture and an API gateway, among other architectures. A service dictionary avoids the need to create shared libraries, environment variables, or configurations injected during the CI process, or to hardcode all the endpoints inside the frontend codebase.
The dictionary is loaded for the first time when the micro-frontend loads, allowing the client to retrieve the URLs to consume directly from the service dictionary.
- API gateway. Well known in the microservices community, an API gateway is a single entry point for a microservices architecture. The clients can consume the APIs developed inside microservices through one gateway.
The API gateway also allows centralizing a set of capabilities, like:
- Token validation: validating the signature of a token before passing the request to a microservice
- Visibility and reporting: a centralized way to verify all the inbound and outbound traffic
- Rate limiting: the API gateway rejects requests after a specific threshold is exceeded. For instance, we can set a limit of 100 requests per second from a client; when the limit is exceeded, the API gateway returns errors instead of calling the microservice to fulfill the request.
- BFF. The BFF is an extension of the API gateway pattern, creating a single entry point per client type. For instance, we may have a BFF for the web application, another for mobile, and a third for the Internet of Things (IoT) devices we are commercializing.
The BFF reduces the chattiness between client and server by aggregating API responses and returning a data structure that is easy for the client to parse and render inside a user interface. This allows a great degree of freedom in shaping APIs dedicated to a client while reducing the round trips between the client and the backend layer.
These patterns are not mutually exclusive, either; they can be combined to work together.
An additional possibility worth mentioning is writing an API endpoints library for the client side. However, I discourage this practice with micro-frontends because we risk embedding an older library version in some of them; as a result, the user interface may show outdated information or even trigger API errors when endpoints have been decommissioned. Without strong governance and discipline around this library, we risk having certain micro-frontends using the wrong version of an API. It is far better to rely on the service dictionary pattern or similar mechanisms that provide a list of endpoints at runtime.
Domain-driven design (DDD) also influences architecture and infrastructure decisions. Especially with end-to-end distributed systems, we can divide an application into multiple business domains, using the right approach for each business domain.
This level of flexibility provides architects and developers with a variety of choices not possible before. At the same time, however, we need to be careful not to fragment the client-server communication too much, instead introducing a new pattern when it provides a real benefit for our application. A beneficial approach I’ve observed over the years is to start by developing independent solutions for each team and then gradually consolidate them into unified entry points. As teams deploy multiple micro-frontends into production, the necessity to consolidate the API layer becomes apparent, and governance naturally emerges from practical experience. Platform teams that attempt to design everything upfront run a higher risk of overcomplicating the entire process, leading to friction and hindering the swift flow that a distributed system requires, particularly at the onset of the journey.
Working with a Service Dictionary
A service dictionary is nothing more than a list of endpoints available in the API layer provided to a micro-frontend. This allows the API to be consumed without the need to bake the endpoints inside the client-side code, inject them during a continuous integration pipeline, or ship them in a shared library.
Usually, a service dictionary is provided via a static JSON file or an API that should be consumed as the first request for a micro-frontend (in the case of a vertical-split architecture) or an application shell (in the case of a horizontal split).
A service dictionary may also be integrated into existing configuration files or APIs to reduce the round trips to the server and optimize the client startup.
In this case, we can have a JSON object containing a list of configurations needed for our clients, where one of the elements is the service dictionary.
An example of service dictionary structure would be:
{
  "my_amazing_api": {
    "v1": "https://api.acme.com/v1/my_amazing_api",
    "v2": "https://api.acme.com/v2/my_amazing_api",
    "v3": "https://api.acme.com/v3/my_amazing_api"
  },
  "my_super_awesome_api": {
    "v1": "https://api.acme.com/v1/my_super_awesome_api"
  }
}
As you can see, we are listing all the APIs supported by the backend. Thanks to API versioning, we can handle cross-platform applications without introducing breaking changes because each client can use the API version that suits it better.
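To sketch how a micro-frontend might consume this structure, here is a small helper. The dictionary object mirrors the example above; the fallback-to-latest behavior is an assumption for illustration, not part of the pattern itself.

```javascript
// Sample dictionary, mirroring the structure shown above.
const serviceDictionary = {
  my_amazing_api: {
    v1: "https://api.acme.com/v1/my_amazing_api",
    v2: "https://api.acme.com/v2/my_amazing_api",
    v3: "https://api.acme.com/v3/my_amazing_api",
  },
  my_super_awesome_api: {
    v1: "https://api.acme.com/v1/my_super_awesome_api",
  },
};

// Resolve an endpoint by name and version. When no version is
// requested, fall back to the latest one available (a lexical sort
// is enough here because versions stay below v10).
function getEndpoint(dictionary, apiName, version) {
  const api = dictionary[apiName];
  if (!api) throw new Error(`Unknown API: ${apiName}`);
  if (version) {
    const url = api[version];
    if (!url) throw new Error(`Unknown version ${version} for ${apiName}`);
    return url;
  }
  const latest = Object.keys(api).sort().pop();
  return api[latest];
}
```

A web client could pin `v3` while an older mobile release keeps calling `v2`, both resolved from the same dictionary.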
One thing we can’t control in such scenarios is whether every mobile device runs the latest application version. When we release a new version of a mobile application, the update may take several days, if not weeks, to propagate, and in some situations it may take even longer.
Therefore, versioning the APIs is important to ensure we don’t harm our user experience.
Reviewing the cadence at which we retire API versions, then, is important. One of the main reasons is security: old versions left running may expose our platform to attacks that harm its stability. Usually, when we upgrade an API to a new version, we improve not only the business logic but also the security. Unless those improvements can be applied to all versions of a specific API, it is better to assess whether the older versions are still serving legitimate users and then decide whether to dismiss support for them.
To create a frictionless experience for our users, implementing a forced upgrade in every application released as an executable (think of React Native applications, for instance) may be a solution, preventing users from accessing older application versions after drastic changes in our APIs or even in our business model.
Therefore, we must think about how to mitigate these scenarios in order to create a smooth user experience for our customers.
Endpoint discoverability is another reason to use a service dictionary.
Not all companies work with cross-functional teams; many still work with components teams, with some teams fully responsible for the frontend of an application and others for the backend.
Using a service dictionary allows every frontend team to be aware of what’s happening in other teams. If a new version of an API is available or a brand-new API is exposed in the service dictionary, the frontend team will be aware.
This is also a valid argument for cross-functional teams when we develop a cross-platform application.
In fact, it’s very unlikely that a “two-pizza team” would hold all the knowledge needed for developing web, backend, mobile (iOS and Android), and maybe even smart TV and console applications, considering many of these devices support HTML and JavaScript.
Using a service dictionary allows every team to have a list of available APIs in every environment just by checking the dictionary.
We often think the problem is just a communication issue that can be resolved with better communication. However, consider the number of communication links in a 12-person team: forgetting to update a team about a new API version may happen more often than not.
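The combinatorics behind this claim are simple: with n people there are n(n-1)/2 possible pairwise links. A quick sketch makes it concrete:

```javascript
// Number of pairwise communication links in a team of n people.
// The overhead grows quadratically with team size.
function communicationLinks(people) {
  return (people * (people - 1)) / 2;
}

// A 12-person team already has 66 possible links, which is why
// "just communicate better" rarely scales on its own.
```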
A service dictionary aids in initiating discussions with the team responsible for the API, particularly in large organizations with distributed teams.
Last but not least, a service dictionary is also helpful for testing micro-frontends with new endpoint versions while in production.
A company that uses a testing-in-production strategy can extend it to its micro-frontends architecture, thanks to the service dictionary, all without affecting the standard user experience.
We can test new endpoints in production by providing a specific header recognized by our service dictionary service. The service will interpret the header value and respond with a custom service dictionary used for testing new endpoints directly in production.
We choose a header instead of a token or any other type of authentication because it covers both authenticated and unauthenticated use cases. Let’s see a high-level design of what the implementation would look like (Figure 4-2).
In Figure 4-2 we can see that the application shell consumes the service dictionary API as the first step. But this time, the application shell passes a header with an ID related to the configuration that needs to be loaded.
In this example, the ID was generated at runtime by the application shell.
When the service dictionary service receives the call, it checks for the header in the request. If present, it loads the associated configuration from the database and returns the response to the application shell with the specific service dictionary requested. The application shell is now ready to load the micro-frontends that compose the page.
Finally, the custom endpoint configuration associated with the client ID is produced via a dashboard (top right corner of the diagram) used only by the company’s employees.
In this way we may even extend this mechanism for other use cases inside our backend, providing a great level of flexibility for micro-frontends and beyond.
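The server-side logic behind Figure 4-2 can be sketched as a small resolution function. The header name (`x-dictionary-id`) and the in-memory store are assumptions for illustration; a real service would read custom configurations from the database populated via the internal dashboard.

```javascript
// Header used by internal testers to request a custom dictionary.
// Illustrative name, not a standard.
const DICTIONARY_HEADER = "x-dictionary-id";

// Return the custom dictionary associated with the header's ID when
// one exists; otherwise fall back to the default dictionary served
// to regular users, leaving their experience untouched.
function resolveDictionary(headers, store) {
  const id = headers[DICTIONARY_HEADER];
  if (id && store.custom[id]) {
    return store.custom[id]; // testing-in-production configuration
  }
  return store.default; // standard user experience
}
```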
The service dictionary can be implemented with either a monolith or a modular monolith. The important thing to remember is to allow categorization of the endpoints list based on the micro-frontend that requests the endpoints.
For instance, we can group the endpoints related to a business subdomain or a bounded context. This is the strategic goal we should aim for.
A service dictionary makes more sense with micro-frontends composed on the client side rather than on the server side. BFFs and API gateways are better suited for the server-side composition, considering the coupling between a micro-frontend and its data layer.
Let’s now explore how to implement the service dictionary in a micro-frontend architecture.
Implementing a Service Dictionary in a Vertical-Split Architecture
The service dictionary pattern can easily be implemented in a vertical-split micro-frontends architecture, where every micro-frontend requests the dictionary related to its business domain.
However, it’s not always possible to implement a service dictionary per domain, such as when we are transitioning from an existing SPA to micro-frontends and the SPA requires the full list of endpoints because it won’t reload its JavaScript logic until the next user session.
In this case, we may decide to implement a tactical solution, providing the full list of endpoints to the application shell instead of a business domain endpoints list to every single micro-frontend. With this tactical solution, we assume the application shell exposes or injects the list of endpoints for every micro-frontend.
When we are in a position to divide the services list by domain, it will take minimal effort to remove the logic from the application shell and move it into every micro-frontend, as displayed in Figure 4-3.
The service dictionary approach may also be used with a monolith backend. If we determine that our API layer will never move to microservices, we can still implement a service dictionary divided by domain per every micro-frontend, especially if we implement a modular monolith.
Taking into account Figure 4-3, we can derive a sample of sequence diagrams like the one in Figure 4-4. Bear in mind there may be additional steps to perform either in the application shell or in the micro-frontend loaded, depending on the context we operate in. Take the following sequence diagram just as an example.
As the first step, the application shell loads the micro-frontend requested, in this example the catalogue micro-frontend.
After mounting the micro-frontend, the catalogue initializes and consumes the service dictionary API for rendering the view. It can consume any additional APIs, as necessary.
From this moment on, the catalogue micro-frontend has access to the list of endpoints available and uses the dictionary to retrieve the endpoints to call.
In this way we are loading only the endpoints needed for a micro-frontend, reducing the payload of our configuration and maintaining control of our business domain.
Implementing a Service Dictionary in a Horizontal-Split Architecture
To implement the service dictionary pattern with a micro-frontends architecture using a horizontal split, we have to pay attention to where the service dictionary API is consumed and how to expose it for the micro-frontends inside a single view.
When the composition is managed client side, the recommended way to consume a service dictionary API is inside the application shell or host page. Because the container has visibility into every micro-frontend to load, we can perform just one round trip to the API layer to retrieve the APIs available for a given view and expose or inject the endpoints list to every loaded micro-frontend.
Consuming the service dictionary APIs from every micro-frontend would negatively impact our applications’ performance, so it’s strongly recommended to stick the logic in the micro-frontends container as shown in Figure 4-5.
The application shell should expose the endpoints list via the window object, making it accessible to all the micro-frontends when the technical implementation allows us to do it. Another option is injecting the service dictionary, alongside other configurations, after loading every micro-frontend.
For example, when using module federation in a React application, we can share the data using the React Context API. The Context API allows you to expose a value, in our case the service dictionary, to the component tree without having to pass props down manually at every level.
The decision to inject or expose our configurations is driven by the technical implementation.
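To make the two options concrete, here is a minimal sketch. The global name `__SERVICE_DICTIONARY__` and the `mount` signature are illustrative assumptions, not a standard API (`globalThis` stands in for `window` so the sketch also runs outside a browser).

```javascript
// Option 1: the application shell exposes the dictionary globally,
// making it readable by every micro-frontend in the view. Freezing
// it prevents a micro-frontend from mutating shared configuration.
function exposeDictionary(dictionary) {
  globalThis.__SERVICE_DICTIONARY__ = Object.freeze(dictionary);
}

// Option 2: the shell injects the dictionary at mount time, keeping
// micro-frontends free of globals at the cost of a mount contract.
function mountMicroFrontend(microFrontend, dictionary) {
  return microFrontend.mount({ serviceDictionary: dictionary });
}
```

Which option to pick follows from the technical implementation: globals are simpler with iframes or runtime-loaded scripts, while injection fits component-based loaders better.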
Let’s see how we can express this use case with the sequence diagram in Figure 4-6.
In this sequence diagram, the request from the host application, or application shell, to the service dictionary is at the very top of the diagram.
The host application then exposes the endpoints list via the window object and starts loading the micro-frontends that compose the view.
Again, in real scenarios we may have a more complex situation. Adapt the technical implementation and business logic to your project needs accordingly.
Working with an API gateway
An API gateway pattern represents a unique entry point for the outside world to consume APIs in a microservices architecture.
Not only does an API gateway simplify access by providing any frontend with a unique entry point for consuming APIs, but it’s also responsible for request routing, API composition and validation, and other edge functions, such as authorization, logging, and rate limiting, plus any other centralized functionality we need before the gateway sends the request to a specific microservice.
An API gateway also allows us to keep the same communication protocol between clients and the backend, while the gateway routes a request in the background in the format requested by a microservice (see Figure 4-7).
Imagine a microservices architecture composed of services speaking HTTP and gRPC. Without an API gateway, the client would need to be aware of every API and all the communication protocol details. Using the API gateway pattern, instead, we can hide the communication protocols behind the gateway and leave the client’s implementation to deal with the API contracts and the business logic needed on the user interface.
Other edge-function capabilities include rate limiting, caching, metrics collection, and request logging.
Without an API gateway, all these functionalities would need to be replicated in every microservice instead of being centralized in a single entry point.
Still, the API gateway also has some downsides.
As a unique entry point, it could be a single point of failure, so we need to have a cluster of API gateways to add resilience to our application. Cloud providers typically offer services that easily address this resilience challenge, providing solutions designed to handle high traffic and well-architected for resilience.
Another challenge is more operational. In a large organization, where we have hundreds of developers working on the same project, we may have many services behind a single API gateway. We’ll need solid governance for adding, changing, or removing APIs in the API gateway to prevent teams from becoming frustrated with a cumbersome flow.
Finally, we’ll add some latency to the system if we implement an additional layer between the client and the microservice consumed.
The process for updating the API gateway must be as lightweight as possible, making investing in the governance around this process a mandatory step. Otherwise, developers will be forced to wait in line to update the gateway with a new version of their endpoint.
The API gateway can work in combination with a service dictionary, adding the benefits of a service dictionary to those of the API gateway pattern.
Finally, micro-architectures open up a new scenario that may be easier to manage and control, because we can split our APIs by domain and have multiple API gateways, each gathering a group of APIs, for instance.
One API entry point per business domain
Another opportunity to consider is creating one API entry point per business domain instead of having one entry point for all the APIs, as with an API gateway.
Multiple API gateways enable you to partition your APIs and policies by solution type and business domain.
In this way, we avoid having a single point of failure in our infrastructure. Part of the application can fail without impacting the rest of the infrastructure. Another important characteristic of this approach is that we can use the best entry point strategy per bounded context based on the requirements needed, as shown in Figure 4-8.
So let’s say we have a bounded context that needs to aggregate multiple APIs from different microservices and return a subset of the body response of every microservice. In this case, a BFF would be a better fit for consumption by a micro-frontend, rather than leaving the client to make multiple round trips to the server and filter the API body responses to display the final result to the user.
But in the same application, we may have a bounded context that doesn’t need a BFF.
Let’s go one step further and say that in this subdomain, we have to validate the user token in every call to the API layer to check whether the user is entitled to access the data.
In this case, using an API gateway pattern with validation at the API gateway level will allow you to fulfill the requirements in a simple way.
With infrastructure ownership, choosing different entry points for our API layer means every team is responsible for building and maintaining its chosen entry point, reducing potential external dependencies across teams and allowing them to own their subdomain end to end. Therefore, we can potentially have a one-to-one relationship between subdomain and entry point.
This approach may require more work to build, but it allows a fine-grained control of identifying the right tool for the job instead of experiencing a trade-off between flexibility and functionalities. It also allows the team to really be independent end to end, allowing engineers to change the frontend, backend, and infrastructure without affecting any other business domain.
A client-side composition, with an API gateway and a service dictionary
Using an API gateway with a client-side micro-frontends composition (either vertical or horizontal split) is not that different from implementing the service dictionary in a monolith backend.
In fact, we can use the service dictionary to provide our micro-frontends with the endpoints to consume, with the same suggestions we provided previously.
The main difference, in this case, will be that the endpoints list will be provided by a microservice responsible for serving the service dictionary or a more generic client-side configuration, depending on our use case.
Another interesting option is that with an API gateway, authorization may happen at the API-gateway level, removing the risk of introducing libraries at the API level, as we can see in Figure 4-9.
Based on the concepts shared for the service dictionary, the backend infrastructure changes, but the client-side implementation does not. As a result, the same implementations applicable to the service dictionary are also applicable in this scenario with the API gateway.
Let’s look at one more interesting use case for the API gateway.
Some applications allow us to use a micro-frontends architecture to provide different flavors of the same product to multiple customers, such as customizing certain micro-frontends on a customer-by-customer basis.
In such cases, we tend to reuse the API layer for all the customers, using part or all of the microservices based on the user entitlement. But in a shared infrastructure, we risk having some customers consume more of our backend resources than others.
In such scenarios, using API throttling at the API gateway will mitigate this problem by assigning the right limits per customer or per product.
At the micro-frontends level we won’t need to do much more than handle the errors triggered by the API gateway for this use case.
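A fixed-window throttle keyed by customer is one way to sketch what the gateway does here. The per-customer limits and one-second window are assumptions for illustration; managed gateways typically offer this as configuration rather than code.

```javascript
// Create a per-customer throttle: `limits` maps a customer ID to its
// allowed requests per one-second window, with a `default` fallback.
function createThrottle(limits) {
  const windows = new Map(); // customer -> { windowStart, count }
  return function allow(customer, now = Date.now()) {
    const limit = limits[customer] ?? limits.default;
    const current = windows.get(customer);
    if (!current || now - current.windowStart >= 1000) {
      // New window: reset the counter and admit the request.
      windows.set(customer, { windowStart: now, count: 1 });
      return true;
    }
    current.count += 1;
    // Beyond the limit, the gateway would return an error to the
    // client instead of calling the microservice.
    return current.count <= limit;
  };
}
```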
A server-side composition with an API gateway
A microservices architecture opens up the possibility of using a micro-frontends architecture with a server-side composition as explained in chapter 6.
As we can see in Figure 4-10, after the browser’s request reaches the API gateway, the gateway first handles user authentication/authorization and then passes the client request to the UI composition service, which is responsible for calling the microservices needed to aggregate multiple micro-frontends inside a template, with their relative content fetched from the microservices layer.
For the microservices layer, we use a second API gateway to expose the API for internal services, in this case, used by the micro-frontends services for fetching the related API.
Figure 4-11 illustrates a hypothetical implementation with the sequence diagram related to this scenario.
After the API gateway token validation, the client-side request lands at the UI composition service, which calls the micro-frontend to load. The micro-frontend service is then responsible for fetching the data from the API layer and the relative template for the UI and serving a fragment to the UI composition layer that will compose the final result for the user.
This diagram presents an example with a micro-frontend, but it’s applicable for all the others that should be retrieved for composing a user interface.
Usually, the microservice used for fetching the data from the API layer should have a one-to-one relation with the API it consumes, which allows an end-to-end team’s ownership of a specific micro-frontend and microservice.
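The composition step in Figure 4-11 can be sketched as follows: the UI composition service asks each micro-frontend service for a fragment, then stitches the fragments into a page template. The service shape, fragment structure, and `{{name}}` placeholder syntax are illustrative assumptions.

```javascript
// Compose a page server side: fetch every fragment in parallel, then
// replace each placeholder like {{catalogue}} with its markup.
async function composePage(template, fragmentServices) {
  const fragments = await Promise.all(
    fragmentServices.map((service) => service.render())
  );
  return fragments.reduce(
    (html, fragment) => html.replace(`{{${fragment.name}}}`, fragment.markup),
    template
  );
}
```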
Working with the BFF pattern
Although the API gateway pattern is a very powerful solution for providing a unique entry point to our APIs, in some situations we have views that require aggregating several APIs to compose the user interface, such as a financial dashboard that may require several endpoints for gathering the data to display inside a unique view.
Sometimes, we aggregate this data on the client side, consuming multiple endpoints and interpolating data for updating our view with the diagrams, tables, and useful information that our application should display. Can we do something better than that? BFF comes to the rescue.
Another interesting scenario where an API gateway may not be suitable is in a cross-platform application where our API layer is consumed by web and mobile applications.
Moreover, the mobile platforms often require displaying the data in a completely different way from the web application, especially taking into consideration screen size.
In this case, many visual components and relative data may be hidden on mobile in favor of providing a more general high-level overview and allowing a user to drill down to a specific metric or information that interests them instead of waiting for all the data to download.
Finally, mobile applications often require a different method for aggregating data and exposing it in a meaningful way to the user. When the backend APIs are the same for all clients, mobile applications need to consume different endpoints and compute the final result on the device, because the API responses can’t change based on the device that consumes the endpoint.
In all these cases, BFF, as described by Phil Calçado (former employee of SoundCloud), comes to the rescue.
The BFF pattern develops niche backends for each user experience.
This pattern will only make sense if and when you have a significant amount of data coming from different endpoints that must be aggregated for improving the client’s performance or when you have a cross-platform application that requires different experiences for the user based on the device used.
This pattern can also help solve the challenge of introducing a layer between the API and the clients, as we can see in Figure 4-12.
Thanks to BFF we can create a unique entry point for a given device group, such as one for mobile and another for a web application.
However, this time we also have the option of aggregating API responses before serving them to the client and, therefore, generating less chatter between clients and the backend because the BFF aggregates the data and serves only what is needed for a client with a structure reflecting the view to populate.
Interestingly, the microservices architecture’s complexity sits behind the BFF, creating a unique entry point for the client to consume the APIs without needing to understand the complexity of a microservices architecture.
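The aggregation a BFF performs can be sketched in a few lines. The service names, endpoints, and field names here are assumptions for illustration; the point is that the client receives one response already shaped for the view.

```javascript
// A BFF endpoint for a product details view: call several
// microservices in parallel, then return only what the view needs,
// in a single round trip for the client.
async function productDetailsBFF(services, productId) {
  const [product, price, reviews] = await Promise.all([
    services.catalogue.getProduct(productId),
    services.pricing.getPrice(productId),
    services.reviews.getSummary(productId),
  ]);
  return {
    title: product.title,
    price: `${price.currency} ${price.amount}`,
    rating: reviews.average,
  };
}
```

Without the BFF, the client would make three round trips and reimplement this shaping logic on every platform.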
BFF can also be used when we want to migrate a monolith to microservices. In fact, thanks to the separation between clients and APIs, we can use the strangler pattern for killing the monolith in an iterative way, as illustrated in Figure 4-13. This technique is also applicable to the API gateway pattern.
Another use case that often comes to mind when we combine BFF and micro-frontends is aggregating APIs by domain, similar to what we saw for the API gateway.
Following our subdomain decomposition, we can identify a unique entry point for each subdomain, grouping all the microservices for a specific domain together instead of taking into consideration the type of device that should consume the APIs.
This would allow us to control the response to the clients in a more cohesive way, and allow the application to fail more gracefully than having a single layer responsible for serving all the APIs, as in the previous examples.
Figure 4-14 illustrates how we can have two BFFs, one for the catalogue and one for the Account section, for aggregating and exposing these APIs to different clients. In this way, we can scale the BFFs based on their traffic.
Gathering all the APIs behind a unique layer, however, may be problematic when an application’s popular subdomains require a different treatment compared to less-accessed subdomains.
Dividing by subdomain, then, allows us to apply different infrastructure requirements based on the traffic and characteristics of each domain.
Sometimes BFF raises concerns due to some inherent pitfalls, such as limited reusability, code duplication, and cross-boundary APIs.
In fact, we may need to duplicate some code to implement similar functionalities across different BFFs, especially when we create one per device family. In these cases, we need to assess whether the burden of having teams implement similar code twice is greater than that of abstracting (and maintaining) shared code.
It is no surprise that identifying domain boundaries for completely self-sufficient APIs is difficult. Think about a service that is needed by multiple domains: imagine an e-commerce site where a products service is used in multiple domains. In this case, we need to be careful to make sure that every BFF implements the latest version of the products API. Moreover, every time a new version of the products service is released, we will need to coordinate the release of the BFF layers. Alternatively, we can support multiple versions of the products API for a period of time, allowing each BFF to update independently at its own pace.
A client-side composition, with a BFF and a service dictionary
Because a BFF is an evolution of the API gateway, many of the implementation details for an API gateway are valid for a BFF layer as well, plus we can aggregate multiple endpoints, reducing client chatter with the server.
It’s important to reiterate this capability because it can drastically improve application performance.
Yet there are some caveats when we implement either a vertical split or a horizontal one.
For instance, in Figure 4-15, we have a product details page that has to fetch the data for composing the view.
When we want to implement a vertical-split architecture, we may design the BFF to fetch all the data needed for composing this view, as we can see in Figure 4-16.
In this example, we assume the micro-frontend has already retrieved the endpoint for performing the request via a service dictionary and that it consumes the endpoints, leaving the BFF layer to compose the final response.
In this use case, we can also easily use a service dictionary for exposing the endpoints available in our BFF to our micro-frontends, similar to the way we do it for the API gateway solution.
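To make this concrete, here is a minimal sketch of a service dictionary exposing BFF endpoints to a micro-frontend. The dictionary shape, endpoint names, and `resolveEndpoint` helper are illustrative assumptions, not a prescribed format:

```typescript
// Hypothetical service dictionary fetched by a micro-frontend at bootstrap;
// the shape and entries are illustrative, not a standard.
type ServiceDictionary = Record<string, { uri: string; version: string }>;

const dictionary: ServiceDictionary = {
  productDetails: { uri: "/bff/catalog/product-details", version: "v2" },
  recommendations: { uri: "/bff/catalog/recommendations", version: "v1" },
};

// The micro-frontend resolves endpoints by logical name instead of
// hardcoding URLs, so the BFF can move or version services freely.
function resolveEndpoint(dict: ServiceDictionary, name: string): string {
  const entry = dict[name];
  if (!entry) throw new Error(`Unknown service: ${name}`);
  return `${entry.uri}?version=${entry.version}`;
}

console.log(resolveEndpoint(dictionary, "productDetails"));
// e.g. "/bff/catalog/product-details?version=v2"
```

Because the dictionary is the only shared contract, a team can rename or reversion a backend route without redeploying the micro-frontends that consume it.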
However, when we have a horizontal split composed on the client side, things become trickier because we need to maintain the micro-frontends’ independence, as well as keep the host page as domain-unaware as possible.
In this case, we need to combine the APIs in a different way, delegating to each micro-frontend the consumption of its related APIs. Otherwise, we would make the host page responsible for fetching the data for all the micro-frontends, creating a coupling that would force us to deploy the host page together with the micro-frontends, breaking the intrinsic independence between micro-frontends.
Considering that these micro-frontends and the host page may be developed by different teams, this setup would slow down feature development rather than leveraging the benefits of this architecture.
Moreover, this might lead to creating a global state, implemented at the micro-frontends’ container for simplifying the access for all the micro-frontends present in the view, creating unnecessary coupling.
BFF with a horizontal split composed on the client side could create more challenges than benefits in this case. It’s wise to analyze whether this pattern’s benefits will outweigh the challenges.
A server-side composition, with a BFF and service dictionary
When we implement a horizontal-split architecture with server-side composition and we have a BFF layer, our micro-frontends implementation resembles the API gateway one.
The BFF exposes all the APIs available for every micro-frontend, so using the service dictionary pattern will allow us to retrieve the endpoints for rendering our micro-frontends ready to be composed by a UI composition layer.
Using GraphQL with micro-frontends
In a chapter about APIs and micro-frontends, we couldn’t avoid mentioning GraphQL.
GraphQL is a query language for APIs and a server-side runtime for executing queries by using a type system you define for your data.
GraphQL was created by Facebook and released in 2015. Since then it has gained a lot of traction inside the developers’ community.
Especially for frontend developers, GraphQL represents a great way to retrieve the data needed for rendering a view, decoupling the complexity of an API layer, rationalizing the API response in a graph, and allowing any client to reduce the number of round trips to the server for composing the UI.
The paradigm for designing an API schema with GraphQL should be based on how the view we need to render looks, rather than on the data exposed by the API layer.
This is a very key distinction compared to how we design our database schemas or our REST APIs.
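As a sketch of this view-driven mindset, here is a hypothetical query for a product details view; the field names, the query, and the `ProductDetailsView` type are illustrative assumptions, not a schema from a real project:

```typescript
// The query shape mirrors what the UI renders, not how the
// backend stores the data (hypothetical fields).
const PRODUCT_DETAILS_QUERY = /* GraphQL */ `
  query ProductDetails($id: ID!) {
    product(id: $id) {
      name
      price { amount currency }
      gallery { url alt }
      reviews(first: 3) { author rating excerpt }
    }
  }
`;

// The type the view consumes maps one-to-one onto the query above,
// so the component never reshapes backend payloads.
interface ProductDetailsView {
  name: string;
  price: { amount: number; currency: string };
  gallery: { url: string; alt: string }[];
  reviews: { author: string; rating: number; excerpt: string }[];
}

console.log(PRODUCT_DETAILS_QUERY.trim().split("\n")[0]);
```

Notice there is no join logic on the client: the graph already returns data in the shape the view needs.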
Two projects in the GraphQL community, Apollo and Relay, stand out for the great support and productivity of their open source tools.
Both projects leverage GraphQL, adding an opinionated view on how to implement this layer inside our application, increasing our productivity thanks to the features available in one or both, like authentication, rate limiting, caching, and schema federations.
GraphQL can be used as a proxy for microservices, orchestrating the requests to multiple endpoints and aggregating the final response for the client.
Remember that GraphQL acts as a unique entry point for your entire API layer. By design, GraphQL exposes a unique endpoint where the clients can perform queries against the GraphQL server. Because of this, we tend not to version our GraphQL entry point, although, if the project requires versioning because we don’t have full control of the clients that consume our data, we can version the GraphQL endpoint. Shopify does this by adding the date in the URL and supporting all the versions up to a certain period.
It’s important to highlight that GraphQL works best when it’s created as a unique entry point for the client, not split by domain as we saw with BFFs. The graph implementation allows every client to query whatever part of the graph is exposed. Splitting it into multiple domains would just make life harder for developers, who would have to integrate with multiple graphs and compose the different responses on the client side. You might wonder how to scale the development of GraphQL across multiple teams; the answer is schema federation.
The schema federation
Schema federation is a feature that allows multiple GraphQL schemas to be composed declaratively into a single data graph.
When we work with GraphQL in a midsize to large organization, we risk creating a bottleneck because all the teams are contributing to the same schema.
But with schema federation, we can have individual teams working on their own schemas and exposing them to the client as a unique entry point, just like a traditional data graph.
Apollo Server exposes a gateway with all associated schemas from other services, allowing each team to be independent and not change the way the frontend consumes the data graph.
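As a rough illustration of how two teams might own pieces of the same graph, here are two hypothetical subgraph schemas (the types, fields, and `@key` usage are illustrative examples; a real setup would use the `@apollo/subgraph` tooling rather than plain strings):

```typescript
// Subgraph owned by the catalog team: defines the Product entity.
// @key tells the federation gateway how to join Product across subgraphs.
const catalogSubgraph = /* GraphQL */ `
  type Product @key(fields: "id") {
    id: ID!
    name: String!
    price: Float!
  }
`;

// Subgraph owned by the reviews team: extends Product with reviews
// without owning its definition, resolving it by the shared key.
const reviewsSubgraph = /* GraphQL */ `
  type Review {
    author: String!
    rating: Int!
  }

  type Product @key(fields: "id") {
    id: ID!
    reviews: [Review!]!
  }
`;

console.log([catalogSubgraph, reviewsSubgraph].length, "subgraphs defined");
```

The gateway composes these into one supergraph, so a client querying `product { name reviews { rating } }` never knows two teams are behind the response.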
This technique comes in handy when we work with microservices, though it comes with a caveat.
A GraphQL schema should be designed with the UI in mind, so it’s essential to avoid silos inside the organization. We must facilitate the initial analysis engaging with multiple teams and follow all improvements in order to have the best implementation possible.
Figure 4-17 shows how a schema federation works using the gateway as an entry point for all the implementing services and providing a unique entry point and data graph to query for the clients.
Schema federation represents the evolution of schema stitching, which has been used by many large organizations for similar purposes. It wasn’t well designed, however, which led Apollo to deprecate schema stitching in favor of schema federation.
More information regarding the schema federation is available on Apollo’s documentation website.
Using GraphQL with micro-frontends and client-side composition
Integrating GraphQL with micro-frontends is a trivial task, especially after reviewing the implementation of the API gateway and BFF.
With schema federations, we can have the teams who are responsible for a specific domain’s APIs create and maintain the schema for their domain and then merge all the schemas into a unique data graph for our client applications.
This approach allows the team to be independent, maintaining their schema and exposing what the clients would need to consume.
When we integrate GraphQL with a vertical split and a client-side composition, the integration resembles the others described above: the micro-frontend is responsible for consuming the GraphQL endpoint and rendering the content inside every component present in a view.
Applying such scenarios with microservices becomes easier thanks to schema federation, as shown in Figure 4-18.
In this case, thanks to the schema federation, we can compose the graph with all the schemas needed and expose a supergraph for a micro-frontend to consume.
Interestingly, with this approach, every micro-frontend will be responsible for consuming the same endpoint. Optionally, we may want to split the BFF into different domains, creating a one-to-one relation with the micro-frontend. This would reduce the scope of work and make our application easier to manage, considering the domain scope is smaller than having a unique data graph for all the applications.
Applying a similar backend architecture to horizontal-split micro-frontends with a client-side composition isn’t too different from other implementations we have discussed in this chapter.
As we see in Figure 4-19, the application shell exposes or injects the GraphQL endpoint to all the micro-frontends and all the queries related to a micro-frontend will be performed by every micro-frontend.
When we have multiple micro-frontends in the same or different views performing the same query, it’s wise to look at query and response cacheability at different levels, such as the CDN, or otherwise leverage the GraphQL server and client caches.
Caching is a very important concept that has to be leveraged properly; doing so protects your origin from burst traffic, so spend the time to get it right. Even with dynamic data, caching for tens of seconds or a few minutes helps reduce the strain on the origin and the risk of failures.
Using GraphQL with micro-frontends and a server-side composition
The final approach involves using a GraphQL server with a micro-frontends architecture featuring a horizontal split and server-side composition.
When the UI composition layer requests multiple micro-frontends from their respective microservices, every microservice queries the graph and prepares the view for the final page composition (see Figure 4-20).
In this scenario, every microservice that queries the GraphQL server needs to have the unique entry point accessible, authenticate itself, and retrieve the data needed for rendering the micro-frontend requested by the UI composition layer.
This implementation overlaps quite nicely with the others we have seen so far on API gateway and BFF patterns.
Best practices
After discussing how micro-frontends can fit with multiple backend architectures, we must address some topics that are architecture-agnostic but could help with the successful integration of a micro-frontends architecture.
Multiple micro-frontends consuming the same API
When working with a horizontal-split architecture, we might encounter situations where similar micro-frontends exist within the same view, consuming identical APIs with the same payload. This scenario could lead to an increase in backend traffic, necessitating more complex solutions for managing the traffic and its associated costs.
In such instances, it’s crucial to question whether maintaining separate micro-frontends truly adds value to our system. Is grouping them into a single, unified micro-frontend a more effective approach?
A possible solution is transforming the micro-frontends into components and consolidating them within a single micro-frontend, as depicted in Figure 4-21.
In this scenario, the micro-frontend will execute a single request to the API and then inject the response into the components within the page, as we are accustomed to doing with other architectures. These components can be easily imported by the micro-frontend as an NPM library, maintaining clear boundaries and reducing redundant API calls.
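A minimal sketch of this consolidation might look like the following; `fetchProductData`, the component functions, and the data shape are hypothetical, and the fetch is stubbed for illustration:

```typescript
// One micro-frontend performs a single request and fans the response
// out to its child components, replacing the duplicate calls the
// separate micro-frontends used to make.
type ProductData = { summary: string; price: number; stock: number };

async function fetchProductData(): Promise<ProductData> {
  // Stubbed; in a real application this is the single API call.
  return { summary: "Espresso machine", price: 249, stock: 12 };
}

// Each child component receives only the slice of the response it renders.
function renderSummary(data: Pick<ProductData, "summary">): string {
  return `<h1>${data.summary}</h1>`;
}

function renderPrice(data: Pick<ProductData, "price" | "stock">): string {
  return `<p>$${data.price} (${data.stock} left)</p>`;
}

async function mount(): Promise<string> {
  const data = await fetchProductData(); // single request for the view
  return renderSummary(data) + renderPrice(data);
}

mount().then(console.log);
```

The components stay independently developed (for instance, as packages in a shared registry) while the network traffic collapses to one request per view.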
Additionally, consider reassessing team ownership. Implementing this solution may increase the team’s cognitive load, because the new micro-frontend contains additional components and handles more business requirements.
Typically, such situations should prompt consideration for architectural enhancement. Do not overlook this signal; instead, reassess the decisions made at the project’s outset with the available information and context, ensuring that making duplicate API requests within the same view is acceptable. If not, be prepared to reevaluate the boundaries of the micro-frontends.
APIs come first, then the implementation
Independently of the architecture we will implement in our projects, we should apply API-first principles to ensure all teams are working with the same understanding of the desired result.
An API-first approach means that for any given development project, your APIs are treated as “first-class citizens.”
As discussed at the beginning of this book, we need to make sure the API identified for communicating between micro-frontends or for client-server communication are defined up front to enable our teams to work in parallel and generate more value in a shorter time.
In fact, investing time at the beginning for analyzing the API contract with different teams will reduce the risk of developing a solution not suitable for achieving the business goals or a smooth integration within the system.
Gathering all the teams involved in the creation and consumption of new APIs can save a lot of time further down the line when the integration starts.
At the end of these meetings, producing an API spec with mock data will allow teams to work in parallel.
The team that has to develop the business logic will have clarity on what to produce and can create tests for making sure they will produce the expected result, and the teams that consume this API will be able to start the integration, evolving or developing the business logic using the mocks defined during the initial meeting.
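A minimal sketch of working against such an agreed contract might look like this; the interface, the mock values, and the `formatOrder` helper are hypothetical examples of the spec both teams sign off on, not a prescribed format:

```typescript
// The contract agreed up front: producers test against this shape,
// consumers develop against the mock until the real API ships.
interface OrderSummary {
  orderId: string;
  status: "pending" | "shipped" | "delivered";
  totalCents: number;
}

// Mock data defined during the initial contract meeting.
const mockOrderSummary: OrderSummary = {
  orderId: "ord_123",
  status: "pending",
  totalCents: 4200,
};

// The consuming team can build and test UI logic in parallel.
function formatOrder(order: OrderSummary): string {
  return `${order.orderId}: ${order.status}, $${(order.totalCents / 100).toFixed(2)}`;
}

console.log(formatOrder(mockOrderSummary)); // "ord_123: pending, $42.00"
```

When the real endpoint arrives, only the data source changes; everything built against the mock keeps working because the contract was fixed first.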
Moreover, when we have to introduce a breaking change in an API, sharing a request for comments (RFC) with the teams consuming the API may help to update the contract in a collaborative way. This will provide visibility on the business requirements to everyone and allow them to share their thoughts and collaborate on the solution using a standard document for gathering comments.
RFCs are very popular in the software industry. Using them for documenting API changes will allow us to scale the knowledge and reasoning behind certain decisions, especially with distributed teams where it is not always possible to schedule a face-to-face meeting in front of a whiteboard.
RFCs are also used when we want to change part of the architecture, introduce new patterns, or change part of the infrastructure.
API consistency
Another challenge we need to overcome when we work with multiple teams on the same project is creating consistent APIs, standardizing several aspects of an API, such as error handling.
API standardization allows developers to easily grasp the core concepts of new APIs, minimizes the learning curve, and makes the integration of APIs from other domains easier.
A clear example would be standardizing error handling so that every API returns a similar error code and description for common issues like wrong body requests, service not available, or API throttling.
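A standardized error envelope could be sketched as follows; the `ApiError` shape and the code values are illustrative assumptions, not a prescribed catalogue:

```typescript
// A shared error envelope every API in the organization returns,
// so clients branch on `code` and `retryable` the same way everywhere.
interface ApiError {
  code: "BAD_REQUEST" | "SERVICE_UNAVAILABLE" | "THROTTLED";
  message: string;
  retryable: boolean;
}

// Example producer helper for an API-throttling error.
function throttledError(retryAfterSeconds: number): ApiError {
  return {
    code: "THROTTLED",
    message: `Rate limit exceeded, retry after ${retryAfterSeconds}s`,
    retryable: true,
  };
}

const err = throttledError(30);
console.log(err.code, err.retryable);
```

Because every team emits the same envelope, a client written against one domain's API can handle failures from any other domain without new error-handling code.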
This is true not only for client-server communication but for micro-frontends too. Think about the communication between a component and a micro-frontend, or between micro-frontends in the same view. Identifying the events schema and the capabilities we grant inside our system is fundamental for the consistency of our application and for speeding up the development of new features.
There are very interesting insights available online for client-server communication, some of which may also be applicable to micro-frontends. Google and Microsoft API guidelines share a well-documented section on this topic, with many details on how to structure a consistent API inside their ecosystems.
WebSockets and micro-frontends
In some projects, we need to implement a WebSocket connection for notifying the frontend that something is happening, like a video chat application or an online game.
Using WebSockets with micro-frontends requires a bit of attention because we may be tempted to create multiple socket connections, one per micro-frontend. Instead, we should create a unique connection for the entire application and inject or make available the WebSocket instance to all the micro-frontends loaded during a user session.
When working with horizontal-split architectures, create the socket connection in the application shell and communicate any message or status change (error, exit, and so on) to the micro-frontends in the same view via an event emitter or custom events for managing their visual update.
In this way, the socket connection is managed once instead of multiple times during a user session. There are some challenges to take into consideration, however.
Imagine that some messages are communicated to the client while a micro-frontend is loaded inside the application shell. In this case, creating a message buffer may help to replay the last few messages and allow the micro-frontend to catch up once fully loaded.
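The shell-owned connection with a replay buffer could be sketched like this; the `SharedSocketBus` class, the buffer size, and the message names are illustrative assumptions, with the actual WebSocket wiring stubbed out:

```typescript
import { EventEmitter } from "node:events";

// The application shell owns the single socket connection and fans
// messages out to micro-frontends; a bounded buffer lets late-loading
// micro-frontends replay what they missed.
class SharedSocketBus extends EventEmitter {
  private buffer: string[] = [];
  constructor(private maxBuffered = 50) {
    super();
  }

  // Called by the shell's single WebSocket `onmessage` handler.
  receive(message: string): void {
    this.buffer.push(message);
    if (this.buffer.length > this.maxBuffered) this.buffer.shift();
    this.emit("message", message);
  }

  // A micro-frontend replays buffered messages, then listens live.
  subscribe(handler: (msg: string) => void): void {
    this.buffer.forEach(handler); // catch up on missed messages
    this.on("message", handler); // then receive live ones
  }
}

const bus = new SharedSocketBus();
bus.receive("price-update:1");
bus.receive("price-update:2");

const seen: string[] = [];
bus.subscribe((msg) => seen.push(msg)); // late subscriber catches up
bus.receive("price-update:3");
console.log(seen); // all three messages, in order
```

The bus is framework-agnostic, so micro-frontends built with different technologies can all subscribe through the same injected instance.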
Finally, if only one micro-frontend has to listen to a WebSocket connection, encapsulating this logic inside the micro-frontend would not cause any harm because the connection will live naturally inside its subdomain.
For vertical-split architectures, the approach is less definitive. We may want to create the socket connection inside every micro-frontend instead of in the application shell, simplifying the lifecycle management of the socket connection.
The right approach for the right subdomain
Working with micro-frontends and microservices provides a level of flexibility we didn’t have before.
To leverage this new quality inside our architecture we need to identify the right approach for the job.
For instance, in some parts of an application, we may want to have some micro-frontends communicating with a BFF instead of a regular service dictionary because that specific domain requires an aggregation of data retrievable by existing microservices but the data should be aggregated in a completely different way.
With micro-architectures, these decisions are easier to embrace thanks to the architecture’s intrinsic flexibility. To grant this flexibility, we must invest time at the beginning of the project analyzing the boundaries of every business domain and then refine them every time we see complications in the API implementation.
In this way, every team will be entitled to use the right approach for the job instead of following a standard approach that may not be applicable for the solution they are developing.
This is not a one-off decision; it has to be revisited with a regular cadence to support the business evolution.
Summary
We have covered how micro-frontends can be integrated with multiple API layers.
Micro-frontends are suitable for not only microservices but also monolith architecture.
There may be strong reasons why we cannot change the monolithic architecture on the backend but we want to create a new interface with multiple teams. Micro-frontends may be the solution to this challenge.
We discussed the service dictionary approach, which can help cross-platform applications and reduce the need for a shared client-side library that gathers all the endpoints. We also discussed how BFF can be implemented with micro-frontends, and a different twist on BFF using API gateways.
In the last part of this chapter, we reviewed how to implement GraphQL with micro-frontends, discovering that the implementation overlaps quite nicely with the one described in the API gateway and BFF patterns.
Finally, we closed the chapter with some best practices, like approaching API design with an API-first approach and leveraging DDD at the infrastructure level for using the right technical approach for a subdomain.
As we have seen, micro-frontends have different implementation models based on the backend architecture we choose.
The quickest approach for starting the integration in a new micro-frontends project is the service dictionary, which can evolve over time into more sophisticated solutions like BFF or GraphQL.
Remember that every solution shared in this chapter brings a fair amount of complexity if not analyzed and contextualized inside the organization structure and communication flow. Don’t focus your attention only on the technical implementation but move a step further by looking into the governance for future API integrations or breaking changes of an API.