What Is Load Balancing and How Does It Work?
Load balancing is the distribution of requests over a network to a pool of shared computing resources. The underlying concept is simple but powerful. Imagine you’re working with a website that needs to serve thousands or even millions of users. Currently, the domain points to the IP address of a single web server. Responding to each request consumes some fraction of the server’s resources. When the server is using all of its resources, it will either take longer to respond to requests or the requests will fail entirely and the user experience will suffer. You can add more RAM, more storage capacity, and, in some cases, additional CPUs, but you can’t scale forever. Enter load balancers.
If our hypothetical website has a load balancer implementation, then the domain name—instead of pointing to a single server—points to the address of the load balancer. Behind the load balancer is a pool of servers, all serving the site content. When a request comes in, the load balancer routes the request to one of the back end servers. In this manner, the load balancer ensures an even distribution of requests to all servers, improving site performance and reliability.
Types of Load Balancers
Load balancers are generally distinguished by the type of load balancing they perform. They are offered in a hardware form-factor by vendors like F5 and Citrix and as software by open-source and cloud vendors. Software load balancers are applications that can be installed and provisioned on more traditional compute resources like servers. Cloud load balancers, a newer paradigm of software load balancing, are offered by cloud vendors like AWS and its Elastic Load Balancer (ELB).
Load balancer types vary according to the OSI model layer at which the load balancer “operates.”
Classic load balancers, also known as “plain old load balancers” (POLB), operate at layer 4 of the OSI model. They take client requests, which arrive as TCP or UDP packets, and route them based on some common algorithms covered later in this article.
Network load balancers also operate at layer 4, but they can scale to handle very large volumes of requests and can route traffic using hashing algorithms based on information like port and IP address.
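To make the hashing idea concrete, here is a minimal Python sketch of deterministic, hash-based backend selection. The `pick_backend` function and the addresses are illustrative, not any particular load balancer's API:

```python
import hashlib

def pick_backend(client_ip: str, client_port: int, backends: list) -> str:
    """Hash the client's IP and port to deterministically pick a backend.

    The same source tuple always maps to the same server, which keeps a
    given client's connections "sticky" without any shared state.
    """
    key = "{}:{}".format(client_ip, client_port).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# Repeated calls with the same source tuple route to the same backend.
assert pick_backend("203.0.113.7", 51234, backends) == \
       pick_backend("203.0.113.7", 51234, backends)
```

Real network load balancers often use more elaborate schemes (such as consistent hashing) so that adding or removing a server remaps as few clients as possible, but the core decision looks like this.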
Application load balancers operate at layer 7 of the OSI model, making routing decisions based on the actual content of the application traffic, such as HTTP headers, queries, and URLs. Choosing which type of load balancer to implement depends heavily on your use case.
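To illustrate what a layer 7 routing decision looks like, here is a minimal Python sketch. The pool names and the path/header rules are hypothetical, just enough to show that the decision reads application data rather than packet metadata:

```python
def route(request_path: str, request_headers: dict) -> str:
    """Pick a backend pool by inspecting layer 7 data (URL path, headers)."""
    if request_path.startswith("/api/"):
        return "api-pool"       # API calls go to the application servers
    if request_headers.get("Accept", "").startswith("image/"):
        return "static-pool"    # image requests go to static-content servers
    return "web-pool"           # everything else goes to the default pool

assert route("/api/users", {}) == "api-pool"
```

A layer 4 load balancer could not make these distinctions, because it never parses the HTTP request at all.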
Load Balancing Methods
How does a load balancer “decide” where to send requests? While application load balancers make sophisticated decisions based on application traffic, common algorithms form the backbone of most load balancing implementations. These algorithms include:
- Round robin: The load balancer distributes connection requests to a pool of servers in a repeating loop, regardless of relative load or capacity. Server A, server B, server C, server A, server B, etc.
- Weighted round robin: This works like standard round robin, except that certain back-end servers can be assigned a higher priority, receiving a disproportionately larger share of traffic/requests. Server A, server A, server B, server C, server A, server A, server B, server C, etc.
- Least connections: This algorithm is fairly self-explanatory; the load balancer sends each new request to the back-end server with the fewest active connections.
- Weighted least connections: This algorithm is like least connections, but certain back-end servers can be assigned a higher priority, receiving a disproportionately larger share of traffic/requests. In a scenario where some back-end servers have a larger or more performant resource configuration, you would use weighted least connections to route a greater share of the traffic to them.
- Random: Requests are sent to back-end servers in a completely random fashion, with no consideration for load levels, connection counts, etc.
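The algorithms above can be sketched in a few lines of Python. The class names and the `weighted_pool` helper are illustrative, not any particular load balancer's API:

```python
import itertools

class RoundRobin:
    """Cycle through the server pool in a fixed, repeating order."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

def weighted_pool(weights):
    """Expand {"A": 2, "B": 1} into ["A", "A", "B"] for weighted round robin."""
    return [server for server, w in weights.items() for _ in range(w)]

class LeastConnections:
    """Send each request to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def next_server(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        """Call when a connection closes so counts stay accurate."""
        self.active[server] -= 1

rr = RoundRobin(["A", "B", "C"])
assert [rr.next_server() for _ in range(4)] == ["A", "B", "C", "A"]
```

Weighted round robin falls out of the plain version by repeating each server in the pool according to its weight (`RoundRobin(weighted_pool({"A": 2, "B": 1}))`), and weighted least connections would similarly divide each connection count by the server's weight before comparing.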
Now that you have a basic understanding of load balancing, we’ll evaluate some of the more popular load balancing options as well as some best practices that apply to any load balancer solution.
The Top 5 Open Source Load Balancers
In this section, we’ll look at some of the most popular open-source load balancers. GitHub stars may be an oversimplified measure of popularity; however, since they are widely known, they’ve been included below.
Traefik
- 27.7k GitHub stars
- Application / Layer 7
Traefik bills itself as the “cloud native edge router.” It’s a modern microservices-focused application load balancer and reverse proxy written in Golang. With its emphasis on support for several modern container orchestration platforms, batteries-included logging, and several popular metric formats, Traefik is a top choice for container-based microservices architectures.
Nginx
- 11.3k GitHub stars
- Application / Layer 7
Nginx is a name that should be instantly recognizable to anyone involved in web application engineering. This tool offers load balancing capabilities via its ngx_http_upstream_module. A well-established, widely supported option, Nginx offers highly scalable performance out of the box and can be extended with additional modules like Lua.
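As a sketch of what this looks like in practice, a minimal configuration using ngx_http_upstream_module might resemble the following (the server names are placeholders):

```nginx
# Define a pool of back-end servers. least_conn selects the server with
# the fewest active connections; weight skews the share of traffic.
upstream backend {
    least_conn;
    server app1.example.com weight=2;
    server app2.example.com;
}

server {
    listen 80;
    location / {
        # Forward incoming requests to the pool defined above.
        proxy_pass http://backend;
    }
}
```

Omitting the `least_conn` directive gives you the module's default round-robin behavior.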
Seesaw
- 5k GitHub stars
- Network / Layer 4
Seesaw is another open-source load balancer written in Golang. It was originally created by Google SREs to provide a robust solution for load balancing internal Google infrastructure traffic. When choosing Seesaw, you’re getting the collective engineering acumen of Google’s powerful SRE cohort in an open-source ecosystem.
HAProxy
- 1.1k GitHub stars
- Network and application / Layers 4 and 7
HAProxy is another common name in the web ecosystem. HAProxy offers reverse proxying and load balancing of TCP and HTTP traffic. When you choose HAProxy, you’re choosing a high-performance, well-established solution.
Neutrino
- 265 GitHub stars
- Network and application / Layers 4 and 7
A relatively lesser-known offering, Neutrino is a Scala-based software load balancer originally developed by eBay. Neutrino’s strength lies in the broad compatibility of its runtime environment, the JVM.
Choosing the Right Type of Load Balancer
Choosing a load balancer solution depends heavily upon your use case.
If you’re running a containerized, microservices-based architecture, a layer 7 application load balancer is probably your best choice.
Need to route millions of requests to your back-end servers in a performant manner? A network load balancer is the way to go.
And, if you’re a small, nimble development team that just needs to get your application to as many users as possible with as little configuration as possible, a cloud provider like AWS provides you with tight integration and a batteries-included solution, the Elastic Load Balancer.
Choosing the Right Scheduling Algorithm
Choosing the right scheduling algorithm depends on a pragmatic evaluation of the kind of traffic you expect for your application:
IF client requests are small and result in short-lived sessions, THEN a round robin algorithm is probably fine.
IF sessions are longer-lived and more stateful, requiring careful management of back-end resources, THEN least connections is the more appropriate choice.
Storing and Analyzing Your Logs
If you want to have a complete and granular look at what’s going on in your load balancer infrastructure, you need to be storing and analyzing the logs that it generates. Making informed decisions about your application performance depends on this data. The load balancers mentioned in this article all offer different mechanisms for logging and metrics, as do the various cloud-provided solutions. It’s up to you to ingest, store, and analyze them. The ELK stack provides a powerful mechanism for evaluating the performance and security of your load balancing.
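As a starting point for that analysis, here is a minimal Python sketch that parses an access-log line in the common log format, which many proxies and load balancers can emit; the sample line is fabricated for illustration:

```python
import re

# Fields of the common log format: client IP, timestamp, request line,
# status code, and response size in bytes.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

line = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
match = LOG_PATTERN.match(line)
entry = match.groupdict() if match else {}

assert entry["status"] == "200"
assert entry["path"] == "/index.html"
```

Structured fields like these (status codes, paths, response sizes) are exactly what a pipeline such as the ELK stack aggregates to surface error rates and latency trends across your back-end pool.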
Load balancers are a powerful piece of any infrastructure. They have enabled the modern web to scale to incredible sizes. Their ability to intelligently route requests to a pool of computing resources has significant implications for the performance of your web application. Making the right load balancing decisions now will pay off with significant dividends in the future.