Load Balancing Algorithms and Their Applications
WARP from FatPipe Networks provides four well-known methods of load balancing:
Round Robin, Response Time, Fastest Route, and Weighted. Depending on an organization's connections, any of these can be the best option for business continuity.
Round Robin configures FatPipe WARP to send sessions down lines in rotating order. This method is recommended for similar speed connections to the Internet, even if the connections are not of the same ISP (e.g., combining two similar speed fractional T1s and a DSL line).
Response Time configures FatPipe WARP to balance traffic based on each line’s average response time for Internet requests. This method is recommended for unequal speed connections. The fastest line is used more often with Response Time.
Weighted configures FatPipe WARP to balance traffic in proportion to the WAN weights you define. Each interface must be assigned a weight (the default value for each interface is 1). The ratio of these weights determines the ratio of downloaded traffic on the respective Internet lines, which the load balancing algorithm maintains. For each new outbound session, the algorithm finds the interface whose ratio of current throughput to total throughput is farthest below the ratio determined by its weight, and sends the session on that interface.
If weights for WAN1, WAN2, WAN3 are 1, 2, 3, respectively, and total download traffic amounts to 600kbps, the traffic will be balanced over respective lines as 100, 200, 300 kbps. Because FatPipe WARP balances sessions rather than packets, real world results will rarely achieve this ideal. In general, the greater the number of sessions, the closer the distribution of traffic will be to the specified weights.
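The weighted selection described above can be sketched as follows. Function and variable names are illustrative, and this is a minimal reading of the text, not FatPipe's actual implementation:

```python
# Sketch of weighted session balancing, assuming per-line throughput
# counters are available. Hypothetical names, not FatPipe's code.

def pick_interface(weights, throughput):
    """Pick the interface whose share of total traffic is farthest
    below its weighted target share."""
    total_weight = sum(weights.values())
    total_tput = sum(throughput.values())

    def deficit(iface):
        target = weights[iface] / total_weight
        actual = throughput[iface] / total_tput if total_tput else 0.0
        return target - actual  # larger deficit = more under-used line

    return max(weights, key=deficit)

# Example matching the text: weights 1:2:3 over WAN1..WAN3,
# targets 100/200/300 kbps out of 600 kbps total.
weights = {"WAN1": 1, "WAN2": 2, "WAN3": 3}
throughput = {"WAN1": 100, "WAN2": 200, "WAN3": 250}  # current kbps
print(pick_interface(weights, throughput))  # WAN3 (below its 300 kbps target)
```

With these numbers, WAN3 carries 250 kbps against a 300 kbps target, so the next session goes out on WAN3.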
MPVPN's latest version, 3.0, adds a "Fastest Route" load balancing algorithm for faster routing, as well as dynamic load balancing between MPVPN peers that determines the packet ratio of each line to a given destination.
MPVPN is currently available in three versions: a 2 Mbit/s unit, a 50 Mbit/s unit, and a high-end 155 Mbit/s unit. Most customers prefer the 50 Mbit/s version for their main offices and the 2 Mbit/s version for branch offices.
Fastest Route configures FatPipe WARP to balance traffic on a per-destination basis. Each session will go over the fastest line for its destination.
Users can opt for this method when they want to make sure each session goes out on the line with the fastest route to its destination. (There is slight overhead with this algorithm, since SYN packets are sent out on all lines at the start of each session.)
- Round Robin
What is Round Robin?
Round Robin is a technique for load distribution, fault tolerance, and load balancing across multiple, redundant Internet Protocol service hosts.
The Technology of Round Robin:
Round Robin DNS balances the load of web servers over a geographically distributed area. For example, suppose a company has one domain name and three identical web sites on three different servers with three different IP addresses. When a user accesses the home page, the request is sent to the first IP address. The next user who accesses the home page is sent to the second IP address, and the third user to the third IP address.
How does it work? The key points are as follows.
- Responding to DNS requests with a list of IP addresses of different servers that host identical services
- Basic IP clients attempt connections with the first address returned from a DNS query; this lets different clients receive service from different servers, distributing the load.
- Round Robin is used when there are lines of the same speed, for example 3 T1s or 3 DSLs; this splits traffic evenly across the lines.
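The rotation itself is simple. A minimal sketch using Python's `itertools.cycle`, with illustrative example addresses, covers both DNS-style server rotation and per-session line rotation:

```python
# Minimal round-robin rotation sketch: each new request or session
# takes the next entry in rotating order.
from itertools import cycle

servers = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]  # example addresses
rotation = cycle(servers)

first_six = [next(rotation) for _ in range(6)]
print(first_six)  # each address is used exactly twice across six requests
```

Real round-robin DNS achieves the same effect by reordering the address list in successive responses rather than by sharing an in-process iterator.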
Response time refers to the time a system or functional unit takes to respond to an input; in other words, the time from submission of a request until the response is returned. With reference to load balancers, this method is used when the lines are of dissimilar speed, such as a DSL and a T1.
The response time algorithm combined with per-node connection limits is a good option. One drawback is that, as load increases in a heterogeneous environment, it can result in unequal distribution; however, it favors the higher-performing servers. This means that at peaks of traffic, the effectiveness of the algorithm can degrade.
Uses and benefits
This algorithm sends more traffic out on the line that has more bandwidth, so users actually get the benefit of the added bandwidth of the second line, instead of all traffic going out at the speed of the slowest line.
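Response-time selection can be sketched as follows, assuming a simple exponential moving average of observed response times per line. Class and parameter names are illustrative, not the product's implementation:

```python
# Sketch: route each new session down the line with the lowest
# moving-average response time. The fastest line is used most often.

class ResponseTimeBalancer:
    def __init__(self, lines, alpha=0.2):
        # Start all lines at an equal baseline average (in ms).
        self.avg = {line: 100.0 for line in lines}
        self.alpha = alpha

    def record(self, line, rtt_ms):
        # Exponential moving average of observed response times.
        self.avg[line] = (1 - self.alpha) * self.avg[line] + self.alpha * rtt_ms

    def pick(self):
        # Choose the line with the lowest average response time.
        return min(self.avg, key=self.avg.get)

lb = ResponseTimeBalancer(["DSL", "T1"])
lb.record("DSL", 180.0)  # slow observation on the DSL line
lb.record("T1", 40.0)    # fast observation on the T1
print(lb.pick())  # T1
```

After one observation each, the T1's average (88 ms) beats the DSL's (116 ms), so new sessions go out on the T1.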
Fastest Route picks the fastest line by sending SYN packets across all lines to each site and then using the line on which the ACK packet returns first. Traffic to each destination therefore goes out on the line with the best connection to that site.
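The SYN race can be sketched as below. The per-line handshake is simulated with sleeps and the line names and timings are illustrative; a real implementation would bind a probe socket to each WAN interface and race actual TCP connects:

```python
# Conceptual sketch of the SYN race: probe a destination on every line
# and use whichever line answers first.
import queue
import threading
import time

def syn_race(lines_rtt):
    """lines_rtt: {line_name: simulated handshake time in seconds}.
    Returns the line whose handshake completes first."""
    results = queue.Queue()
    for line, rtt in lines_rtt.items():
        def probe(line=line, rtt=rtt):
            time.sleep(rtt)   # stand-in for the SYN -> ACK round trip
            results.put(line)
        threading.Thread(target=probe, daemon=True).start()
    return results.get()      # first line to answer wins the race

print(syn_race({"WAN1": 0.05, "WAN2": 0.01, "WAN3": 0.08}))  # WAN2
```

This also makes the overhead mentioned above visible: every new session costs one probe per line.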
Weighted balances traffic in proportion to the WAN weights defined by the user. Each interface is assigned a weight, and the ratio of these weights determines the ratio of downloaded traffic on the respective Internet lines, which the load balancing algorithm maintains.
Spillover Priority Level Algorithm
This algorithm provides a solution for users that are charged for some of their lines proportionally to the traffic they generate.
It is most useful as a backup mechanism for times when the user's network is carrying a high load. Traffic is sent over the lines with the highest priorities, as set by the user.
If traffic reaches a line's threshold level and more remains, the excess spills over to the next line in priority order. This is called spillover priority level.
By assigning a lower priority to a metered line, the user achieves optimal usage and minimizes cost. Spillover priority level thus allows assigning different priorities to WAN connections to prevent line saturation.
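The spillover behaviour described above amounts to a fill-then-overflow loop. A minimal sketch, with illustrative line names and thresholds:

```python
# Sketch of spillover priority: fill the highest-priority line up to
# its threshold, then spill the remainder to the next line, and so on.

def spillover(demand_kbps, lines):
    """lines: list of (name, threshold_kbps) in priority order.
    Returns {name: assigned_kbps}."""
    assigned = {}
    remaining = demand_kbps
    for name, threshold in lines:
        take = min(remaining, threshold)
        assigned[name] = take
        remaining -= take
    return assigned

# Example: a metered backup line only carries traffic once the
# flat-rate primary line is saturated.
lines = [("Primary", 1500), ("MeteredBackup", 1000)]
print(spillover(2000, lines))  # {'Primary': 1500, 'MeteredBackup': 500}
```

At 2000 kbps of demand, the primary carries its full 1500 kbps and only the 500 kbps excess reaches the metered backup; below 1500 kbps, the backup carries nothing, which is what keeps the metered cost down.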