How Load Balancer Servers Work
What are load balancers? Let's first demystify the term. A load balancer (LB) is a server that distributes incoming requests (of any type) across two or more backend servers. To understand what LB servers are and how they work, let's imagine a scenario:
We have an underpowered hosting server that is expected to serve three times its own network capacity. What we could do, as was the practice in the old days, is get a higher-capacity dedicated server. However, we can be smarter than that. A better approach is to set up an infrastructure with one server (or more, achieved through DNS load balancing) whose sole purpose is to handle the traffic for mirrored hosting servers in the back. Let's check out a diagram of this kind of setup:
The diagram above tells the story of spreading the load across three backend servers with mirrored content. Requests from clients hit the LB server, which then distributes them to the backend servers accordingly. This way, each server is responsible for approximately 33% of the total traffic.
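The even split described above is what a round-robin strategy produces. Here is a minimal sketch in Python; the backend addresses are placeholders, not real servers:

```python
from itertools import cycle

# Hypothetical backend pool; the addresses are illustrative placeholders.
backends = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]
pool = cycle(backends)

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(pool)

# Simulate nine incoming requests and count how they are spread:
# each backend ends up with exactly three, i.e. one third of the traffic.
counts = {}
for _ in range(9):
    chosen = pick_backend()
    counts[chosen] = counts.get(chosen, 0) + 1

print(counts)
```

Real load balancers offer more strategies (least connections, source-IP hashing, weighted round-robin), but the round-robin idea above is the simplest way to reason about the 33% split.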
The LB server can handle traffic at the application layer or the transport layer of the OSI model. Application-layer load balancers are slower than those that operate at the transport layer, but they can route based on URI and port.
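URI-based routing is only possible at the application layer because the LB must parse the HTTP request to see the path. The toy sketch below shows the routing decision itself; the route prefixes and pool addresses are invented for illustration:

```python
# Toy application-layer routing table: map a URI prefix to a backend pool.
# All prefixes and addresses here are hypothetical examples.
ROUTES = {
    "/api":    ["10.0.1.11:8080", "10.0.1.12:8080"],
    "/static": ["10.0.2.11:80"],
}
DEFAULT_POOL = ["10.0.0.11:80", "10.0.0.12:80"]

def route(path: str) -> list:
    """Pick a backend pool by matching the request path against known prefixes."""
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/api/users"))   # matched the "/api" prefix
print(route("/index.html"))  # no prefix matched, falls back to the default pool
```

A transport-layer (L4) balancer never sees the path at all; it forwards TCP segments based only on IP and port, which is why it is faster but less flexible.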
One of the most common and recognizable load balancers is the HAProxy Server.
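For a taste of what configuring HAProxy looks like, here is a minimal round-robin setup sketch; the backend names and IP addresses are placeholder assumptions, not a production configuration:

```
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    # Placeholder backend addresses; "check" enables health checking.
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
    server web3 10.0.0.13:80 check
```

This mirrors the three-mirror diagram above: one frontend accepting client traffic and a pool of three checked backends sharing the load.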
Related article: How HTTPS Protocol works