Load balancing (computing)

Load balancing is the practice of distributing communications and processing evenly across a network of computers so that no single device is overloaded or overwhelmed. This is especially important when it is difficult to predict how many requests a server will receive. A busy website will usually employ at least two web servers so that, if one server begins to receive too many requests, the excess requests can be forwarded to a server with more available capacity. In some cases, the term load balancing is also applied to the communications channels themselves.

Load balancing makes it possible for a computing system to do more work within a given timeframe and allows servers to respond to clients faster. It is the main reason computer servers are clustered, and it can be implemented in software, in hardware, or in a combination of the two.

Approaches to load balancing

There are many different approaches, also referred to as load balancing algorithms or load balancing methods, that may be used to balance workload across a set of systems. One of the most popular uses of load balancing is employing multiple servers, sometimes collectively called a server farm, to provide a single Internet service. Systems that typically use load balancing include databases, Domain Name System (DNS) servers, Network News Transfer Protocol (NNTP) servers, high-bandwidth File Transfer Protocol (FTP) sites, large Internet Relay Chat (IRC) networks, and high-traffic websites.

  • Round-robin DNS

This approach to load balancing requires neither a dedicated hardware node nor dedicated software: the DNS server itself rotates through the list of server addresses. The first request is directed to the first server on the list, subsequent requests continue down the list, and once the end of the list is reached the rotation starts over at the top.
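
As an illustration, here is a minimal Python sketch of that rotation, assuming a hypothetical pool of three server addresses; a real round-robin DNS server performs the equivalent reordering when answering queries.

    from itertools import cycle

    # Hypothetical address pool; the IP addresses are assumptions for illustration.
    SERVER_IPS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
    _offsets = cycle(range(len(SERVER_IPS)))

    def resolve_round_robin():
        """Return the address list rotated by one position per query, mimicking
        the way a round-robin DNS server cycles through the servers on its list."""
        offset = next(_offsets)
        return SERVER_IPS[offset:] + SERVER_IPS[:offset]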

  • Weighted Round Robin

The weighted round robin method builds on the round robin approach. Each server is assigned a static numerical weighting, and servers with higher weights receive more requests than the others in the pool.
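
A minimal Python sketch of the idea is shown below, assuming a hypothetical pool whose weights are small integers; one simple (if naive) way to honour the weights is to give each server a number of turns equal to its weight.

    import itertools

    def weighted_round_robin(weights):
        """Yield server names in proportion to their static weights.
        `weights` maps a server name to an integer weight (an assumed input shape)."""
        # Expand each server into as many slots as its weight, then cycle the slots.
        slots = [name for name, weight in weights.items() for _ in range(weight)]
        return itertools.cycle(slots)

    # Example: "app2" (weight 2) receives twice as many requests as the other servers.
    picker = weighted_round_robin({"app1": 1, "app2": 2, "app3": 1})
    print(next(picker))  # -> "app1"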

  • Least Connections

As the name implies, a load balancer configured to use least connections will choose the server that has fewer active connections to clients than all the others. Each server's computing capacity is among the factors considered when determining which server has the fewest connections. This method is generally recommended when the incoming traffic is expected to involve longer sessions.
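
The sketch below illustrates one way that choice could be made, assuming a hypothetical pool where each server reports its active connection count and its capacity; the server with the lightest load relative to capacity wins.

    def pick_least_connections(servers):
        """Pick the server with the fewest active connections relative to its capacity.
        `servers` is an assumed mapping of name -> (active_connections, capacity)."""
        return min(servers, key=lambda name: servers[name][0] / servers[name][1])

    # Example: "app2" is chosen because 5 connections against a capacity of 100
    # is the lightest relative load in the pool.
    print(pick_least_connections({"app1": (12, 100), "app2": (5, 100), "app3": (30, 200)}))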

  • Least Response Time

In this method of load balancing, the server with the lowest average response time and fewest active connections is selected. The least response time method is used only for HTTP and SSL (Secure Sockets Layer) services. The response time, also known as TTFB or Time to First Byte, is the interval between when a request packet is sent to the server and when the first byte of the response is returned by the server.
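
One simple way to combine the two criteria is to prefer the fewest active connections and break ties on average TTFB, as in the hypothetical Python sketch below; real load balancers may weight the two factors differently.

    def pick_least_response_time(servers):
        """Pick the server with the fewest active connections, breaking ties on
        the lowest average response time (TTFB).
        `servers` is an assumed mapping of name -> (active_connections, avg_ttfb_seconds)."""
        return min(servers, key=lambda name: servers[name])

    # Example: "app1" and "app3" tie on connections, but "app3" answers faster.
    print(pick_least_response_time({"app1": (4, 0.20), "app2": (9, 0.05), "app3": (4, 0.12)}))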

  • Source IP hash

The source IP hash method of load balancing uses an algorithm that combines the source and destination IP addresses of the client and server into a unique hash key, which is then used to assign the client to a specific server. Because the key can be regenerated if the session is broken, this method makes it possible to direct a client's request to the same server it used previously.
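
A minimal Python sketch of the key generation, assuming a hypothetical pool of three backends and SHA-256 as the hash function (the choice of hash is an assumption for illustration, not a standard):

    import hashlib

    # Hypothetical backend pool; the addresses are assumptions for illustration.
    POOL = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    def pick_by_ip_hash(client_ip, destination_ip):
        """Combine the client and server IP addresses into a hash key and use it
        to select a backend, so the same client keeps landing on the same server."""
        key = hashlib.sha256(f"{client_ip}|{destination_ip}".encode()).hexdigest()
        return POOL[int(key, 16) % len(POOL)]

    # The same pair of addresses always regenerates the same key, and therefore
    # the same backend, even if the session is interrupted and re-established.
    print(pick_by_ip_hash("203.0.113.7", "198.51.100.1"))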

DNS delegation

This is another method of load balancing in which DNS is used to delegate the domain as a subdomain whose zone is served by each of the same servers that serve the website; each of those servers then answers DNS queries for the name with its own address. The DNS delegation approach works best when the individual servers are spread across different locations on the Internet.

Load balancers

A load balancer is a device that distributes application or network traffic across the servers in a pool. In addition to maintaining and managing network and application sessions, load balancers perform application-specific tasks to raise the performance of applications. Load balancers are generally classified as either Layer 7 or Layer 4: Layer 7 load balancers act on data found in application layer protocols such as HTTP, while Layer 4 load balancers act on data in transport and network layer protocols such as TCP, UDP, and IP.
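
To make the distinction concrete, the hypothetical Python sketch below makes a Layer 7 decision: it inspects the HTTP request itself (the URL path, in this case) before choosing a backend pool, something a Layer 4 balancer cannot do because it never examines the application payload. The pool names, addresses, and routing rule are assumptions for illustration.

    # Hypothetical backend pools; names and addresses are assumptions for illustration.
    POOLS = {
        "api":    ["10.0.2.1", "10.0.2.2"],
        "static": ["10.0.1.1", "10.0.1.2"],
    }

    def choose_pool_layer7(path):
        """Layer 7 decision: route based on the content of the HTTP request.
        A Layer 4 balancer would only see the TCP/IP connection, not the path."""
        if path.startswith("/api/"):
            return POOLS["api"]
        return POOLS["static"]

    print(choose_pool_layer7("/api/orders"))  # -> the "api" pool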

There are four major kinds of traffic for which load balancer administrators can create forwarding rules: HTTP, HTTPS, TCP, and UDP. With HTTP, requests are directed based on standard HTTP mechanisms; HTTPS works the same way but with encryption, which may be handled either by SSL termination at the load balancer or by SSL pass-through to the backend.
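
For TCP traffic, a forwarding rule amounts to accepting a connection and relaying raw bytes to a chosen backend. The sketch below, assuming a hypothetical pool of two backends on port 8080 and a listener on port 8000, shows a minimal Layer 4 style TCP forwarder that picks backends round-robin; it is an illustrative sketch, not a production load balancer.

    import asyncio
    import itertools

    # Hypothetical backend pool and listening port; both are assumptions for illustration.
    BACKENDS = itertools.cycle([("10.0.0.1", 8080), ("10.0.0.2", 8080)])

    async def pipe(reader, writer):
        # Relay bytes in one direction until the sending side closes the connection.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        # Layer 4 behaviour: pick a backend and forward raw TCP bytes in both
        # directions without inspecting the application-layer payload.
        host, port = next(BACKENDS)
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        await asyncio.gather(
            pipe(client_reader, backend_writer),
            pipe(backend_reader, client_writer),
        )

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())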
