Nobody likes a slow website, do they? Internet users grow more impatient by the day and don't want to wait even a few seconds for a website to load on their smartphone or desktop.
With the advancement of technology, the exponential growth of internet users, and the sheer number of businesses going online, running a successful online business (website) is getting tougher.
In online business, milliseconds matter: according to Kissmetrics, 40% of consumers abandon a website that takes more than 3 seconds to load!
On top of that, search engines like Google favor fast websites in their search result pages, so having a fast website is critical these days.
So how do you make your website load faster? Good question.
In this article we are going to show you how to make your website load faster using a (slightly technical) method called "load balancing" of your web server(s).
What is Load Balancing?
Load balancing is the process of distributing network traffic across multiple resources/servers hosting the same application content. This ensures the load is spread evenly across the servers, which greatly improves application responsiveness.
There are two types of load balancers: Layer 4 and Layer 7. The former is usually implemented with dedicated hardware, while the latter is a software-based load balancer such as NGINX or HAProxy.
In this article, we will confine our discussion to implementing a Layer 7 load balancer using NGINX on CentOS 7.
Prerequisite
You have already configured at least two web servers hosting the same application content, and they are ready to accept connections.
Why NGINX?
NGINX, pronounced "engine-ex", is an open-source, high-performance web server that can also act as a reverse proxy, HTTP cache, and load balancer.
Thanks to NGINX's non-blocking I/O and event-driven model, it can handle a huge number of concurrent requests without spikes in memory usage or degradation in server performance.
This makes NGINX an excellent load balancer apart from its primary role as a web server.
Install NGINX
The standard repositories shipped with CentOS 7 do not contain the NGINX package. Hence, if it is not already on your system, you need to install the EPEL repository before proceeding with installing NGINX.
To do so, update the system to the latest CentOS release, then install the EPEL package with the following two commands:
# sudo yum update
# sudo yum install epel-release
Now that the EPEL repository has been added to your system, install NGINX by issuing the following yum command:
# sudo yum install nginx
Start NGINX and enable it to start automatically at boot:
# sudo systemctl start nginx
# sudo systemctl enable nginx
Finally, add firewall rules for "http/https" traffic. This assumes a firewall has been installed and enabled on the system; otherwise you can skip this step.
# sudo firewall-cmd --zone=public --permanent --add-service=http
# sudo firewall-cmd --zone=public --permanent --add-service=https
# sudo firewall-cmd --reload
The iptables service is not installed on CentOS 7 by default; it ships with an alternative called firewalld that serves the same function. In fact, firewalld itself uses the iptables command under the hood.
Open your favorite web browser and enter the IP address or fully qualified domain name of the CentOS 7 system; you will be greeted by the NGINX welcome page.
Configure NGINX as a Load Balancer
Now that NGINX has been installed and is accessible, let us configure it to act as a load balancer. This configuration tells NGINX how to handle incoming requests and distribute them across a set of web servers sitting in the internal private network, otherwise known as upstream or back-end servers.
Navigate to the NGINX configuration directory "/etc/nginx/conf.d" and create a configuration file for the load balancer. It is also possible to reuse the existing "default" config for this, but here we will place the load balancer configuration in a new file named "load_balancer.conf".
# cd /etc/nginx/conf.d
# vi load_balancer.conf
upstream wordpress_apps {
    server 172.18.0.2;
    server 172.18.0.3;
    server 172.18.0.4;
    # add more servers here
}

server {
    listen 80;
    server_name test.best-web-hosting.org;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://wordpress_apps;
    }
}
The above configuration uses the "upstream" module, which enables load balancing by defining a group of servers. Whenever traffic arrives on port 80 for the domain "test.best-web-hosting.org", NGINX passes it to the web servers defined in the "upstream" block in round-robin fashion, thereby balancing the load.
Make sure the hostname in the "proxy_pass" directive matches the name of the "upstream" block.
Now, from the terminal, check the above configuration file for syntax errors and restart NGINX:
# sudo nginx -t
# sudo systemctl restart nginx
Point your browser to "test.best-web-hosting.org" and keep refreshing; you will get responses from each upstream server in turn, served through the load balancer.
I have configured three upstream servers, each with a slightly modified index page. Curling the load balancer's URL (test.best-web-hosting.org) returns responses from the three upstream servers one by one.
Load Balancing over HTTPS
It is always a good idea to encrypt the communication between visitors and your site, for example by leveraging free SSL certificates from Let's Encrypt. You need to obtain a certificate before proceeding with enabling load balancing over "https".
Once the Let's Encrypt certificate has been fetched, all you need to do is add another server block to the above load balancer configuration that listens on port 443 and proxies the traffic to the upstream servers.
Open the load balancer configuration file in your favorite editor and append another server block to enable load balancing over "https":
# cd /etc/nginx/conf.d # vi load_balancer.conf
server {
    listen 443 ssl;
    server_name test.best-web-hosting.org;

    ssl_certificate /etc/letsencrypt/live/test.best-web-hosting.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.best-web-hosting.org/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://wordpress_apps;
    }
}
The above configuration assumes the Let's Encrypt certificate has been fetched into the "/etc/letsencrypt/live/test.best-web-hosting.org" directory. Note the "ssl" parameter on the "listen" directive, which tells NGINX to serve TLS on that port, and the use of "fullchain.pem" so that clients receive the full certificate chain.
Remember that SSL encryption applies only between the client side (browser) and the load balancer. Since the upstream servers are in the private network, there is no security risk in terminating SSL at the load balancer.
Like before, check for any syntax error in the load balancer config and restart NGINX.
# sudo nginx -t # sudo systemctl restart nginx
You can now access the domain over both "https" and "http", but the above configuration does not enforce "https".
To enforce "https", remove or comment out the "location" section in the port 80 server block and add a redirect for "http" connections in the load balancer config, like below:
server {
    listen 80;
    server_name test.best-web-hosting.org;

    # All http traffic will be redirected to https
    return 301 https://$server_name$request_uri;
}
Restart NGINX. From now on, all "http" connections will be redirected to "https", thereby protecting your site visitors' data.
Choosing a Load Balancing Method
NGINX chooses an upstream server for each incoming request using one of several algorithms. By default, it uses round robin to pass requests to the upstream servers.
No extra configuration or options are needed for this basic setup to work. However, NGINX offers other load balancing methods as well, described below.
1. Round Robin
Round robin is the default algorithm for choosing an upstream server. With this scheme, the upstream servers are selected one by one, in turns, in the order you list them in the configuration file. Each client request is thus served by the next listed upstream server in the rotation. The load balancer we configured earlier uses this scheme.
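The rotation is easy to picture with a short Python sketch (an illustration of the scheme only, not how NGINX implements it internally):

```python
from itertools import cycle

# The three upstream addresses from the load balancer configuration.
upstreams = ["172.18.0.2", "172.18.0.3", "172.18.0.4"]

# cycle() yields the servers one by one and wraps around,
# which is exactly the round-robin order of requests.
picker = cycle(upstreams)
first_six = [next(picker) for _ in range(6)]
print(first_six)
```

Six consecutive requests visit each server twice, in the order they are listed.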
2. Least Connected
The least-connected scheme avoids proxying traffic to a busy upstream server: a new connection is always passed to the upstream server with the fewest active connections.
This scheme is useful when active connections to upstream servers take some time to complete, or when an upstream server is overloaded with active connections.
To configure least-connected load balancing, add the "least_conn" directive as the first line inside the "upstream" block, like below:
upstream wordpress_apps {
    least_conn;
    server 172.18.0.2;
    server 172.18.0.3;
    server 172.18.0.4;
    # add more servers here
}
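The selection rule can be sketched in a few lines of Python (illustration only; NGINX tracks the real connection counts itself):

```python
# Hypothetical active-connection counts for the three upstreams.
active = {"172.18.0.2": 4, "172.18.0.3": 1, "172.18.0.4": 7}

def pick_least_connected(active):
    # A new request always goes to the server with the
    # fewest active connections.
    return min(active, key=active.get)

chosen = pick_least_connected(active)
active[chosen] += 1  # the new request is now an active connection
print(chosen)
```

Here the second server wins because it has only one active connection, even though it comes later in the list.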
3. IP Hash
The IP hash algorithm chooses an upstream server based on the client's IP address. With this scheme, NGINX applies a hash function to each client's IP address and, based on the result, assigns an upstream server. The end result is that requests from a given client are always served by the same upstream server, which makes IP hash useful when session information must persist across subsequent connections from the same client.
To configure IP hash load balancing, add the "ip_hash" directive as the first line inside the "upstream" block, like below:
upstream wordpress_apps {
    ip_hash;
    server 172.18.0.2;
    server 172.18.0.3;
    server 172.18.0.4;
    # add more servers here
}
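The idea behind "ip_hash" can be sketched like this (a simplified illustration with example IPs; NGINX's actual hash differs in detail):

```python
import hashlib

upstreams = ["172.18.0.2", "172.18.0.3", "172.18.0.4"]

def pick_by_ip(client_ip, servers):
    # Hash the client address and map the result onto the server
    # list, so the same IP always lands on the same server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same (example) client IP is always routed to the same upstream.
print(pick_by_ip("203.0.113.10", upstreams) == pick_by_ip("203.0.113.10", upstreams))
```

Because the mapping depends only on the client address, it survives across requests, which is what keeps a client "stuck" to one upstream.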
4. Weighted
Upstream servers may not all have equal resources or hardware specifications. Capable servers with better resources may sit idle while the less powerful ones are clogged with connections.
In this situation, you can assign a weight to each upstream server so that the more capable servers are picked more often, leaving the less capable ones to serve fewer requests. To configure weighted load balancing, add the "weight" parameter after the server address in the upstream block, like below:
upstream wordpress_apps {
    server 172.18.0.2 weight=2;
    server 172.18.0.3 weight=3;
    server 172.18.0.4 weight=5;
    # add more servers here
}
With the above configuration, out of every 10 requests, 2 are forwarded to the first server, 3 to the second, and 5 to the third.
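NGINX spreads weighted traffic using a "smooth" weighted round-robin scheme rather than sending all of a server's share in a burst. The following Python sketch (illustration only) shows how the 2/3/5 split comes about over ten requests:

```python
from collections import Counter

def smooth_weighted_rr(servers, n):
    """Pick n servers using smooth weighted round robin:
    each round, every server gains its weight; the highest
    current value wins and is penalized by the total weight."""
    total = sum(w for _, w in servers)
    current = {name: 0 for name, _ in servers}
    picks = []
    for _ in range(n):
        for name, w in servers:
            current[name] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# Weights mirror the upstream block above.
servers = [("172.18.0.2", 2), ("172.18.0.3", 3), ("172.18.0.4", 5)]
picks = smooth_weighted_rr(servers, 10)
print(Counter(picks))
```

Over any ten consecutive picks the counts match the weights exactly, while the heavier server's turns stay interleaved with the others.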
Conclusions
In this tutorial, we have shown how to install NGINX and configure it to act as a load balancer for a cluster of upstream servers.
The configurations are simple yet powerful, providing a lot of flexibility for scaling and balancing the load of your applications.
Even better, enable SSL on the load balancer with a free certificate from Let's Encrypt to protect your visitors' data!