Deploying LXC containers with Nginx: Reverse proxying and load balancing containers

Nginx is the web server that powers most Flockport containers. It's a critical component of the Internet, powering a large number of websites. Nginx is the Swiss army knife of web servers: lightweight, extremely efficient, highly scalable and feature rich.

Nginx is hugely popular and there are tons of guides online. One of the most useful Nginx features in the context of LXC containers is its reverse proxy capability. To understand why this is helpful, let's look at a typical website deployment.

LXC containers, unless configured otherwise, use the default NAT bridge lxcbr0 with the 10.0.3.x subnet for their networking. This gives them private IPs on a NAT network on the host. If you don't know what this means, please brush up with our LXC networking guide.
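
On most distributions lxc-net sets this bridge up for you; the typical defaults look something like the snippet below. Treat this as a sketch, since the file location and values can differ across distributions.

# /etc/default/lxc-net (typical defaults)
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"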

Large scale installations have dedicated servers, multiple public IPs and a lot of freedom to architect their deployments. In these scenarios a reverse proxy is typically used to serve content from an instance that is deliberately put behind a NAT for security reasons. But in this article we are going to look at scenarios where the user has a single public IP and needs to deploy multiple apps.

In a cloud KVM instance with a single public IP you will typically have a web server like Nginx or Apache configured to serve multiple apps/websites from that single IP. In this case the apps are installed directly in the cloud instance.

With containers you could do the same: install the web server and the various apps in a single container, port forward port 80 of the public IP to port 80 of the container IP, and serve multiple websites and apps. The advantage you get with containers is of course decoupling from the host: the freedom to move your container across servers easily, and the ability to back up, clone and snapshot far more easily than if your apps were installed directly on the KVM or bare metal server, all with performance gains.

But installing everything in one container takes away the flexibility containers give you to break up your workloads. However, if you install multiple applications in multiple containers, how will you access all the apps with a single public IP? You can only port forward port 80 of the public IP to one container's port 80. This is where a reverse proxy becomes extremely useful.

Let's see how. Take a typical KVM cloud instance provided by Amazon with a single public IP. Install LXC and download 3 Flockport app containers. Let's take WordPress, Drupal and Joomla for this example, plus the Nginx Flockport container to serve the trio. The 4 containers will have their own private IPs behind the NAT, let's say 10.0.3.5 (Nginx), 10.0.3.27 (WordPress), 10.0.3.28 (Drupal), 10.0.3.29 (Joomla), and your public IP is 1.2.3.4.
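
You can check the IPs your containers actually got with lxc-ls. The container names below are just what we've called them in this example, and the column layout varies a little across LXC versions.

lxc-ls --fancy
NAME       STATE    IPV4       IPV6  AUTOSTART
----------------------------------------------
nginx      RUNNING  10.0.3.5   -     NO
wordpress  RUNNING  10.0.3.27  -     NO
drupal     RUNNING  10.0.3.28  -     NO
joomla     RUNNING  10.0.3.29  -     NO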

First port forward the public IP 1.2.3.4 port 80 to the Nginx container 10.0.3.5 port 80. Now you can configure the Nginx instance as a reverse proxy to serve the WordPress, Drupal and Joomla containers.
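
The port forward itself is a simple DNAT rule on the host. Here is a minimal sketch with iptables, assuming the host's public interface is eth0; adjust the interface and IPs to your setup. lxc-net already handles outbound masquerading for the containers.

# forward incoming web traffic on the public interface to the Nginx container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.5:80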

Breaking up your workloads gives you a boatload of flexibility in architecture, scaling, and management. You can move these workloads independently to different servers, scale as required and manage backup and snapshot operations with more flexibility.

Nginx reverse proxy

A typical Nginx reverse proxy configuration looks like this:

upstream wordpress  {
      server 10.0.3.27:80; #wordpress
}

server {
        listen  80;
        server_name     mywordpress.org;

        location / {
                proxy_pass      http://wordpress;
                proxy_redirect  off;
                proxy_set_header   Host $host;
                proxy_set_header   X-Real-IP $remote_addr;
                proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

In 'upstream' we define a server and associate it with an IP, in this case the WordPress container's internal IP 10.0.3.27. Nginx must be able to reach these IPs. You can use hostnames in place of IPs if your DNS is configured accordingly.
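
For example, if the Nginx container can resolve a name for the WordPress container (say via an /etc/hosts entry or the dnsmasq instance lxc-net runs on lxcbr0), the upstream could use a hostname instead. The name below is just an assumption for illustration; Nginx resolves it when the configuration is loaded.

upstream wordpress  {
      # hypothetical hostname; must be resolvable from the Nginx container
      server wordpress.lxc:80;
}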

In 'server' we define a domain name, mywordpress.org. Let's pause here for a second. If you remember, we have already forwarded our public IP 1.2.3.4:80 to our Nginx container 10.0.3.5 port 80, so any web request reaching the public IP on port 80 will actually reach our Nginx instance in the container on port 80.

Now Nginx is listening on port 80 of the public IP and it gets a request for mywordpress.org. It looks at its config, sees this should be directed to the WordPress container at 10.0.3.27, forwards the request to that container, gets the response and serves it to the requester. And it can do this for any number of containers. Similarly, if it gets a request for mydrupal.org it will send the request to the Drupal container IP defined, get the response and serve it back to the requester.

This is what a reverse proxy does. For LXC users it allows us to deploy individual applications in separate containers and use Nginx to serve all the containers from a single public IP.
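
For completeness, here's a sketch of what the Drupal site's block would look like alongside the WordPress one above, using the container IP from our example; a Joomla block for 10.0.3.29 would follow the exact same pattern with its own domain name. All the server blocks listen on port 80 and are distinguished only by server_name.

upstream drupal  {
      server 10.0.3.28:80; #drupal
}

server {
        listen  80;
        server_name     mydrupal.org;

        location / {
                proxy_pass      http://drupal;
                proxy_redirect  off;
                proxy_set_header   Host $host;
                proxy_set_header   X-Real-IP $remote_addr;
                proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}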

Before we proceed, note that reverse proxying can work in many ways. For instance, in the above configuration the WordPress container has its own web server listening on port 80 inside the container. The 'frontend' Nginx will send a request to port 80 of the container, and the 'backend' web server in the container will serve it to the frontend, which in turn serves it to the world.

In many cases you don't need both a frontend and a backend web server and can configure the frontend to serve the app directly. For instance, Node.js or Ruby apps are usually served on a port, e.g. 3001 or 4567. The frontend Nginx instance can easily be configured to serve the Ruby or Node.js app directly from the container, as in the Discourse example below.

upstream discourse  {
      server 10.0.3.28:3001; #discourse
}

server {
        listen  80;
        server_name     mydiscourse.org;

        location / {
                proxy_pass      http://discourse;
                proxy_redirect  off;
                proxy_set_header   Host $host;
                proxy_set_header   X-Real-IP $remote_addr;
                proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

The upstream directive is also useful for load balancing. You can easily specify multiple IPs that Nginx can access to load balance across multiple instances, as below. In this config, requests for mywordpress.org will be split between the two IPs defined.

upstream wordpress  {
      server 10.0.3.27:80; #wordpress
      server 10.0.4.37:80;
}

server {
        listen  80;
        server_name     mywordpress.org;

        location / {
                proxy_pass      http://wordpress;
                proxy_redirect  off;
                proxy_set_header   Host $host;
                proxy_set_header   X-Real-IP $remote_addr;
                proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

You can configure this further for basic round robin, least connected or IP hash based load balancing. Learn more about Nginx load balancing at the Nginx website.
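
As a sketch of those options: these are standard Nginx directives, round robin is the default and needs no directive, and the weight parameter is optional.

upstream wordpress  {
      # round robin is the default; 'weight' skews the distribution
      server 10.0.3.27:80 weight=2;
      server 10.0.4.37:80;

      # uncomment one of the following to switch methods:
      # least_conn;   # prefer the backend with the fewest active connections
      # ip_hash;      # pin each client IP to the same backend (sticky sessions)
}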

This is the first in a series of Nginx guides on Flockport. We will cover optimization and the two important Nginx cache modules, the proxy cache and FastCGI cache, which can dramatically increase your web server's capacity to handle load, and also look at security features like rate limiting.
