Load balancing and failover with LXC containers
A lot of the advanced networking, clustering and deployment guides are not specific to containers; they can be used across platforms. That's what makes LXC containers so flexible and powerful: they can be used like virtual machines, need no special tools and fit into any environment.
There are many ways to deploy VM and container app instances. Networking offers endless possibilities and scope for creativity. Please visit the LXC networking guide and advanced networking guide for a quick refresher. We will look at a typical scenario where the app instances and the load balancer are in a network behind a public IP.
In a load balanced or reverse proxied architecture the load balancer itself is a point of failure, as there is only one instance of the load balancer directing traffic to the various applications.
For instance, an Nginx reverse proxy could be serving multiple container application instances like WordPress, Drupal, a mail server, webmail, Discourse etc, or multiple instances of a single app spread across your network. As you scale you can add more application instances along with the necessary db, data, session and state management. However the load balancer remains a single point of failure.
An easy way to fix this is to run multiple instances of the load balancer with failover. The basic idea is you have 2 or more load balancers, and if one fails another takes over. For the failover to work the load balancer (lb) instances need to be on the same network.
There are a number of tools you can use for failover; the 3 most popular ones are Keepalived, Vrrpd and Ucarp. All 3 are lightweight and available in most Linux distributions. Since most Flockport containers are Debian based you can apt-get install any of these. We are going to use Vrrpd and Keepalived for this exercise.
Keepalived and Vrrpd are based on the VRRP protocol, which achieves high availability by assigning two or more nodes a virtual IP and monitoring those nodes. VRRP ensures that one of the participating nodes is the master, and if the master node fails a backup node takes over. Ucarp is a userland implementation of CARP, a similar protocol that originated in BSD, and is also available on Linux.
Failover works on the concept of a virtual IP. There are a number of ways failover can be configured. For this exercise we are doing load balancing at layer 7 with Nginx or HAProxy. First pick a virtual IP for the failover exercise; we will go with 10.0.3.100. For failover to work, lb1 and lb2 keep their normal IPs but also share the virtual IP 10.0.3.100; clients connect to the virtual IP, which is held by whichever lb is currently the master. That's all it takes and it is a cinch to configure.
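As a quick sanity check, the virtual IP should sit in the same subnet as the lb containers and should not already be in use. A minimal check from the host, assuming the default LXC bridge lxcbr0 on 10.0.3.0/24 (which is where the 10.0.3.x addresses in this guide come from):
# confirm the bridge subnet on the host
ip addr show lxcbr0
# no replies here means 10.0.3.100 is free to use as the virtual IP
ping -c 2 10.0.3.100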
For this exercise let's use one of the Nginx instances available on Flockport and clone it. If you are using the flockport utility
flockport get nginx
Or download the Nginx container directly to your system and untar it.
tar -xpJf nginx.tar.xz -C /var/lib/lxc --numeric-owner
Then clone it to create the second lb instance.
lxc-clone -o nginx -n nginx2
Let lb1 be the original nginx container and lb2 be the nginx2 clone. Start both containers and run lxc-ls -f to get their IPs.
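For reference, assuming the containers are named nginx and nginx2 as above, the commands look like this:
# start both containers in the background
lxc-start -n nginx -d
lxc-start -n nginx2 -d
# list containers with state and IPs
lxc-ls -f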
To observe the failover in action it's a good idea to edit the /usr/share/nginx/html/index.html file in each container and add the container name along with its IP.
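One way to do this from the host is with lxc-attach. The sketch below assumes the stock "Welcome to nginx!" page with its closing body tag, and uses the example IPs presumed later in this guide; substitute the addresses lxc-ls -f reported for your containers.
lxc-attach -n nginx -- sed -i 's|</body>|<p>lb1 nginx 10.0.3.5</p></body>|' /usr/share/nginx/html/index.html
lxc-attach -n nginx2 -- sed -i 's|</body>|<p>lb2 nginx2 10.0.3.6</p></body>|' /usr/share/nginx/html/index.html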
Failover with Vrrpd
We are going to presume lb1 (nginx) is on IP 10.0.3.5 and lb2 (nginx2) is on IP 10.0.3.6. In both containers install vrrpd.
apt-get install vrrpd
That's all! Now launch vrrpd in both containers like this.
vrrpd -i eth0 -v 1 10.0.3.100
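Here -i names the network interface, -v sets the virtual router ID (use the same value on both lbs), and the final argument is the virtual IP. To see which container currently holds the address, check eth0 in each one from the host; the current master should list 10.0.3.100 as an additional address.
lxc-attach -n nginx -- ip addr show eth0
lxc-attach -n nginx2 -- ip addr show eth0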
Now launch a browser on the host and enter the failover IP, in this case 10.0.3.100. You should see the default Nginx page, and since we edited it you should also be able to see the name and actual IP of the container instance that served the request on the failover IP.
Now shut down the lb instance that served the request and load the virtual IP in the browser again. Voila! Failover has worked transparently and 10.0.3.100 will now give you the other lb instance.
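The same test can be run from the host shell instead of a browser. This sketch assumes the original nginx container is currently the master and that curl is installed on the host.
# fetch the page via the virtual IP, stop the current master, fetch again
curl http://10.0.3.100
lxc-stop -n nginx
curl http://10.0.3.100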
We used Vrrpd to show you how easy it is. For production it's better to use Keepalived, which implements the same VRRP protocol but is more actively developed and offers a few more options.
Failover with Keepalived
First install keepalived on both Nginx lb instances
apt-get install keepalived
Head to the /etc/keepalived/ directory and with your favourite editor create a new keepalived.conf file. There are a number of lines to pay attention to in keepalived.conf - state, priority, virtual_router_id, auth_pass and virtual_ipaddress.
Keepalived works in an active/passive configuration, and the state setting (MASTER/BACKUP) is one way to assign roles. However, the priority setting decides the actual election: the node with the higher priority becomes the master and holds the virtual IP. In this exercise we use a priority of 100 on lb1 and 150 on lb2, so lb2 will act as the master until it fails. The virtual_router_id must be identical on both keepalived instances, you should set a shared password in auth_pass, and change virtual_ipaddress to match your network.
nano keepalived.conf
Let's configure the failover on lb1.
! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pass
    }
    virtual_ipaddress {
        10.0.3.100
    }
}
Now let's configure failover on lb2. For the second lb change the priority to 150 like below.
! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pass
    }
    virtual_ipaddress {
        10.0.3.100
    }
}
Now start keepalived on both lbs
service keepalived start
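As with Vrrpd, you can confirm from the host which lb has claimed the virtual IP; with the priorities above, nginx2 should win the election and show 10.0.3.100 as an extra address on eth0.
lxc-attach -n nginx2 -- ip addr show eth0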
Head to the virtual IP in your browser, in this case 10.0.3.100, and because we edited the Nginx default index page you will be able to see which container served the request on the failover IP.
Now stop keepalived on the instance that served the request, and go to the failover IP again. You should see that failover to the other lb instance has happened successfully.
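The whole check can also be run from the host shell; this sketch assumes nginx2 (priority 150) is the current master and that curl is installed on the host.
# served by the current master, nginx2
curl http://10.0.3.100
# stop keepalived on the master so the backup takes over
lxc-attach -n nginx2 -- service keepalived stop
# now served by nginx
curl http://10.0.3.100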
Once you are happy with the setup, configure keepalived to start on boot like this
update-rc.d keepalived defaults
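The update-rc.d command applies to the sysvinit-based Debian containers used here; if your lb containers run systemd, the equivalent is:
systemctl enable keepalived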
That's it, you are all set and keepalived should take care of failover for your lb instances.
Continue to Part II of this guide where we look at load balancing with Keepalived and LVS.