The benefits of federated load balancing for cloud application resiliency

Federated load balancing makes hosting resilient applications and operating at scale in the public cloud manageable.

By Mark Wilkins
May 9, 2018
Rock stack (source: Wendy Cutler on Flickr)

As you move critical infrastructure to the cloud, your load balancing strategy must evolve to meet new demands. Your organization likely already has load balancers that manage available resources within your data center. These load balancers are aptly called “site” (or local) load balancers. Local load balancers operate at the site level and work with available resources, such as servers and storage devices. They may also manage resources across the enterprise, often balancing loads over multiple data centers. For both application servers and data centers, local load balancers distribute traffic and workloads across the cluster based on health, availability, and performance, delivering each user’s request to the appropriate resource. However, in modern-day cloud computing, that alone is not enough.
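To make that decision concrete, here is a minimal sketch of the kind of choice a site load balancer makes: send the request to a healthy server with capacity to spare. The `Backend` class, health flags, and least-connections rule are illustrative assumptions for this example, not a description of any particular product.

```python
# A minimal sketch of a site (local) load balancer's routing decision.
# Names, health values, and the selection rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    healthy: bool         # result of the most recent health check
    active_requests: int  # current workload on this server

def pick_backend(backends: list[Backend]) -> Backend:
    """Route to the healthy server with the least active work."""
    healthy = [b for b in backends if b.healthy]
    if not healthy:
        raise RuntimeError("no healthy backends at this site")
    return min(healthy, key=lambda b: b.active_requests)

pool = [
    Backend("app-01", healthy=True, active_requests=12),
    Backend("app-02", healthy=False, active_requests=3),
    Backend("app-03", healthy=True, active_requests=7),
]
print(pick_backend(pool).name)  # -> app-03
```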

DNS-based global load balancing (GLB) is the other pivotal component of a load balancing strategy. These load balancers sit at the edge, where users first connect to the internet. Geolocation policies determine the proximity of the end user to the application, service, or website being requested, and the GLB steers traffic accordingly.
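One simple way to picture a geolocation policy is as a lookup from the client’s region to the nearest serving endpoint. The regions and addresses below are invented purely for illustration.

```python
# A sketch of a geolocation policy at a DNS-based global load balancer:
# map the client's region to the closest endpoint, with a fallback.
# Region names and endpoint addresses are assumptions for the example.
REGION_ENDPOINTS = {
    "us-east": "198.51.100.10",
    "eu-west": "203.0.113.20",
    "ap-southeast": "192.0.2.30",
}

def resolve(client_region: str, default_region: str = "us-east") -> str:
    """Return the endpoint IP the DNS answer should contain for this client."""
    return REGION_ENDPOINTS.get(client_region, REGION_ENDPOINTS[default_region])

print(resolve("eu-west"))  # -> 203.0.113.20
print(resolve("sa-east"))  # no regional endpoint, falls back to the default
```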


Resiliency policies applied at the edge ensure that if the primary endpoint is not available or is experiencing high latency, requests will go to a secondary endpoint. If the shortest path is not available, then once again, the DNS-based global load balancer will find a healthy path to the desired endpoint. But global load balancing alone also isn’t enough. Its primary job is to direct the user to the appropriate resource location; it doesn’t factor in detailed site information.
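A resiliency policy like the one described above can be sketched as a simple rule: prefer the primary endpoint, but answer with the secondary when the primary is down or slow. The endpoints and the 250 ms latency threshold here are assumptions for the example, not a vendor default.

```python
# A simplified sketch of an edge resiliency (failover) policy.
# Addresses and the latency threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Endpoint:
    address: str
    up: bool
    latency_ms: float

LATENCY_THRESHOLD_MS = 250.0

def choose_endpoint(primary: Endpoint, secondary: Endpoint) -> Endpoint:
    """Use the primary unless it is down or its latency exceeds the threshold."""
    if primary.up and primary.latency_ms <= LATENCY_THRESHOLD_MS:
        return primary
    return secondary

primary = Endpoint("203.0.113.10", up=True, latency_ms=480.0)    # slow
secondary = Endpoint("198.51.100.10", up=True, latency_ms=90.0)
print(choose_endpoint(primary, secondary).address)  # -> 198.51.100.10
```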

Neither site-based nor global load balancing alone addresses the bigger picture, nor does either provide full application resiliency on its own as the user’s request travels from the edge to the endpoint. True resiliency requires that an unavailable or slow primary endpoint be detected and routed around wherever along that path the problem arises, not just at one layer.

Federated load balancing combines both GLB and site load balancing to reliably deliver users from the network edge to the resource they are trying to reach—finding the best path to the best endpoint. In a federated architecture, the DNS-based global load balancer at the edge and the site load balancer in the data center work together as a federated system, with associated policies and rules that manage the path to the application and efficient access to the desired asset. Each load balancer has a different sphere of knowledge and control over resources. Federated load balancing ensures that site, enterprise, and edge load balancers work in concert.
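Putting the two tiers together, a federated decision might look roughly like the sketch below: the global tier picks the best healthy site for the client, and that site’s local tier picks the least-loaded server. Site names, preference order, and health data are all illustrative assumptions.

```python
# A sketch of the two federated tiers working in concert.
# The global tier chooses a site; the local tier chooses a server at that site.
# All sites, servers, and health data are invented for illustration.
SITES = {
    "us-east": {"healthy": True,  "servers": {"app-01": 12, "app-03": 7}},
    "eu-west": {"healthy": False, "servers": {"app-11": 4}},
}
CLIENT_PREFERENCE = {"us": ["us-east", "eu-west"], "eu": ["eu-west", "us-east"]}

def route(client_geo: str) -> str:
    """Global tier: first healthy site in proximity order.
    Local tier: least-loaded server at that site."""
    for site_name in CLIENT_PREFERENCE[client_geo]:
        site = SITES[site_name]
        if site["healthy"] and site["servers"]:
            server = min(site["servers"], key=site["servers"].get)
            return f"{site_name}/{server}"
    raise RuntimeError("no healthy site available")

print(route("eu"))  # eu-west is unhealthy here, so -> us-east/app-03
```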

By creating a complete and cohesive load balancing strategy via a federated architecture, you can better meet the demands of today’s cloud-hosted applications. Federated load balancing makes hosting resilient applications that operate at scale in the public cloud manageable by providing a robust DNS traffic management strategy from the public internet to the site location, allowing you to optimize the user experience from source to destination.

This post is a collaboration between O’Reilly and Oracle Dyn. See our statement of editorial independence.
