LXC advanced networking guide - Connecting LXC containers across hosts with L3 overlays
In this article we look at options for multi-host connectivity. If you already connect multiple VM hosts, the same approaches apply to LXC containers; there are no special requirements. Containers behave like VMs, and you can continue to use those options.
Typically, in your own network, connecting multiple LXC or VM hosts is easily done by bridging the containers or VMs directly to the LXC host's physical network, which is connected to a router or switch. This way all containers are on the same network. The network can be managed at a higher level than the LXC/VM hosts, and it handles IPs, DHCP, subnets and inter-host connectivity.
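For illustration, a container bridged to the host network typically just points its config at the host bridge. This is a minimal sketch assuming LXC 1.x style configuration keys and a pre-existing host bridge named br0 that enslaves the host's physical interface:
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx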
Alternatively, if the LXC hosts are on the same network and you do not wish to bridge the containers to the host network, you can simply add a route. For this to work the LXC subnets, ie 10.0.3.0/24 by default, must be different on each LXC host.
For instance, if LXC Host 1 has IP 192.168.1.2 and LXC Host 2 has IP 192.168.1.3, the containers on Host 2 can be moved to subnet 10.0.4.0/24. The containers on the 2 hosts can then reach each other with a simple routing rule on each host.
On Host 1
ip route add 10.0.4.0/24 via 192.168.1.3
On Host 2
ip route add 10.0.3.0/24 via 192.168.1.2
Now containers on both Hosts 1 and 2 should be able to ping each other.
Connecting multiple LXC hosts in the cloud
A lot of users in the cloud do not have the same freedom or choice. They may have dedicated servers or KVM instances with little control over the networking layer. So how do you connect LXC containers across multiple LXC hosts?
Here we are going to look at 4 ways to create a private network of LXC hosts across networks in the cloud.
GRE tunnel
This is the simplest solution. A GRE tunnel connects 2 LXC hosts and lets containers on both sides connect to each other. You can build multiple GRE tunnels to connect more hosts.
Tinc VPN
Tinc is a fantastic little program for creating private networks. We will use Tinc to let containers on both sides talk to each other. The improvement over a plain GRE tunnel is that Tinc provides an encrypted connection. Tinc is a bit special and among the Internet's best kept secrets. It's a wonderful tool for creating distributed private mesh networks with tremendous flexibility. Unlike other VPN solutions, Tinc is not a hub and spoke model with single choke or failure points. And we are just scratching the surface here. Tinc affirms the old adage: simplicity works, simplicity scales.
Tinc container to container connection
We use Tinc to connect 2 containers behind NAT networks across the Internet. This is unique because both containers are behind NAT and we try not to touch the host networking too much. The mesh network is enabled within the containers' own network namespaces, and the only setting on the host is a single port forward of TCP/UDP port 655 on one of the hosts to a container. In this scenario containers will have 2 IPs: one on the container's basic lxcbr0 NAT network and one on the Tinc VPN network.
Open vSwitch (OVS) bridges connected by GRE tunnels
LXC supports multiple virtual network cards, so you can create a new bridge interface and a second network for your LXC containers. You can create a second OVS bridge on the LXC hosts and connect the bridges with GRE tunnels. We are going to link to a guide by Serge Hallyn, the LXC lead developer, and another by Scott Lowe. OVS is what the networking gurus use to build networks with things like OpenFlow, VXLAN and other fancy functionality, and Open vSwitch is a major component of the OpenStack networking layer. Options 1, 2 and 3 are easier to set up and use, but for more advanced functionality OVS is a great choice.
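To give a flavour of the OVS option, a rough sketch on Host A could look like the below, assuming Open vSwitch is installed and using lxcovs0 as an example bridge name; Host B would mirror this with remote_ip=1.2.3.4, and containers attach by pointing lxc.network.link at the OVS bridge. See the linked guides for a complete walkthrough.
ovs-vsctl add-br lxcovs0
ovs-vsctl add-port lxcovs0 gre0 -- set interface gre0 type=gre options:remote_ip=2.3.4.5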
GRE tunnel
Setting it up
Connecting 2 LXC hosts with a GRE tunnel will enable your LXC containers to access each other. LXC containers will be using their default NAT bridge lxcbr0.
We are going to change the subnet provided by lxcbr0 on one side, which is normally 10.0.3.0/24, to 10.0.4.0/24 so there is no clash of IPs on either side and we can route both ways.
So here is the network map. We are going to call our GRE tunnel 'neta' on Host A and 'netb' on Host B (You can call this anything you want)
Host A has public IP 1.2.3.4
Host B has public IP 2.3.4.5
Containers in Host A are on subnet 10.0.4.0/24 via the default lxcbr0 NAT bridge
Containers in Host B are on subnet 10.0.3.0/24 via the default lxcbr0 NAT bridge
Change the subnet on Host A
To change the subnet edit the /etc/init.d/lxc-net script. Change the subnet entries from 10.0.3.0/24 to 10.0.4.0/24.
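The exact variable names depend on your distribution (on some systems they live in /etc/default/lxc-net rather than the init script), but after editing, the relevant entries should look something like this:
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.4.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.4.0/24"
LXC_DHCP_RANGE="10.0.4.2,10.0.4.254"
LXC_DHCP_MAX="253"
Stop your containers and restart the lxc-net service after making the change so lxcbr0 comes up with the new subnet.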
Create the GRE tunnel on Host A and Host B
On Host A
ip tunnel add neta mode gre remote 2.3.4.5 local 1.2.3.4 ttl 255
On Host B
ip tunnel add netb mode gre remote 1.2.3.4 local 2.3.4.5 ttl 255
On Host A
ip link set neta up
ip addr add 10.0.4.254 dev neta
ip route add 10.0.3.0/24 dev neta
On Host B
ip link set netb up
ip addr add 10.0.3.254 dev netb
ip route add 10.0.4.0/24 dev netb
Congratulations! Your spanking new GRE tunnel is up and your containers on both sides can ping each other. If you run a traceroute you will notice the IP address we added to the tunnel on each side, 10.0.3.254 or 10.0.4.254, being used as the gateway to reach the other side. These are arbitrary link IPs; you can use anything, 10.0.0.1 and 10.0.0.2 for instance.
To remove the tunnel
ip link set netb down
ip tunnel del netb
Tinc VPN
For the networking gurus: Tinc can operate as a router in layer 3 mode or as a switch in layer 2 mode. For this example we are using Tinc in its default router mode.
To avoid a container IP clash we are going to change the default lxcbr0 subnet 10.0.3.0/24 on one side. Let's do it on Host A.
Change the subnet on Host A
Edit the /etc/init.d/lxc-net script to change the LXC subnet on the lxcbr0 NAT network from 10.0.3.0/24 to 10.0.4.0/24, the same change shown in the GRE section above. Before doing this, stop the containers on Host A, stop the lxc-net service, make the change and then restart the lxc-net service.
service lxc-net stop
Edit the lxc-net script
service lxc-net start
So here is the network map.
Host A has public IP 1.2.3.4
Host B has public IP 2.3.4.5
Containers in Host A are on subnet 10.0.4.0/24 via the default lxcbr0 NAT bridge
Containers in Host B are on subnet 10.0.3.0/24 via the default lxcbr0 NAT bridge
We are going to use 10.0.0.1 and 10.0.0.2 as the interface IPs for Tinc.
Install Tinc on both Host A and B
apt-get install tinc
Tinc operates on a concept of network names for the private VPN. Let's call our network 'flockport'.
In /etc/tinc/ on both Host A and Host B create a folder called 'flockport' and do the following.
mkdir /etc/tinc/flockport
This will hold our configuration for this network.
Create a 'hosts' folder in the flockport folder
mkdir /etc/tinc/flockport/hosts
Create the following files in the flockport folder - tinc.conf, tinc-up, tinc-down
touch tinc.conf tinc-up tinc-down
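Tinc executes tinc-up and tinc-down when the network is brought up and torn down, so make sure both scripts are executable:
chmod +x /etc/tinc/flockport/tinc-up /etc/tinc/flockport/tinc-down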
Configure Tinc on Host A
nano /etc/tinc/flockport/tinc.conf
Name = hosta (You can use any name for your hosts)
AddressFamily = ipv4
Interface = tun0
nano tinc-up
#!/bin/bash
ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0
ip route add 10.0.3.0/24 dev $INTERFACE
nano tinc-down
#!/bin/bash
ifconfig $INTERFACE down
ip route del 10.0.3.0/24 dev $INTERFACE
nano /etc/tinc/flockport/hosts/hosta
Address = 1.2.3.4
Subnet = 10.0.4.0/24
Configure Tinc on Host B
nano /etc/tinc/flockport/tinc.conf
Name = hostb
AddressFamily = ipv4
Interface = tun0
ConnectTo = hosta
nano tinc-up
#!/bin/bash
ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0
ip route add 10.0.4.0/24 dev $INTERFACE
nano tinc-down
#!/bin/bash
ifconfig $INTERFACE down
ip route del 10.0.4.0/24 dev $INTERFACE
nano /etc/tinc/flockport/hosts/hostb
Subnet = 10.0.3.0/24
Generate keys on both Host A and Host B
tincd -n flockport -K
This will generate private key files in the flockport folder and append the public key to the host file in /etc/tinc/flockport/hosts/ (hosta on Host A, hostb on Host B).
Exchange host files on either side
Copy the host file with the public key from /etc/tinc/flockport/hosts/ on Host A to the hosts folder on Host B and vice versa.
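For example, with scp run from Host A (assuming root SSH access between the two hosts):
scp /etc/tinc/flockport/hosts/hosta root@2.3.4.5:/etc/tinc/flockport/hosts/
scp root@2.3.4.5:/etc/tinc/flockport/hosts/hostb /etc/tinc/flockport/hosts/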
So now the /etc/tinc/flockport/hosts folder on both Host A and Host B should have both the 'hosta' and 'hostb' files in it.
The moment of truth! Run the tincd command on both Host A and Host B
tincd -n flockport
If you followed the guide accurately, your containers on both Host A and B should now be able to ping each other.
To ensure the Tinc private network starts on reboot, edit the /etc/tinc/nets.boot file on Host A and B and add the network name, in this case flockport. This ensures the Tinc network starts up on boot and is available.
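The nets.boot file simply lists one network name per line, so on both hosts you can do
echo flockport >> /etc/tinc/nets.boot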
You can easily add more LXC hosts to the network. Tinc has a number of options for optimizing connectivity, such as compression, and for choosing the security cipher. Please visit the Tinc website and go through the documentation for more options.
Tinc container to container VPN
The setup is the same as setting up the Tinc LXC network in option 2, only in this case we are going to do it in the containers. Choose 2 random containers from Host A and Host B; let's call them 'debian' from Host A and 'ubuntu' from Host B for this guide.
Installing Tinc in containers needs the tun device to be available, so let's create a tun device in both containers first. Default LXC configurations allow the creation of tun devices in containers. Please ensure you are using the default LXC templates for this guide.
mkdir /dev/net
mknod /dev/net/tun c 10 200
chmod 666 /dev/net/tun
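This should work out of the box with the default templates. If your container config restricts device access and the mknod fails, you may need to allow the tun character device (major 10, minor 200) in the container's config on the host, along the lines of
lxc.cgroup.devices.allow = c 10:200 rwm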
With the devices created on both containers we are ready to proceed. This is the network map.
Host A has public IP 1.2.3.4
Host B has public IP 2.3.4.5
Containers in Host A are on subnet 10.0.3.0/24 via the default lxcbr0 NAT bridge
Containers in Host B are on subnet 10.0.3.0/24 via the default lxcbr0 NAT bridge
The debian container has IP 10.0.3.4
The ubuntu container has IP 10.0.3.26
Normally we would put the two sides on different subnets to avoid IP conflicts and facilitate routing, but here we are building an overlay mesh network: the containers will have 2 sets of IPs and will actually reach each other over the overlay subnet 10.0.0.0/24.
We are going to use 10.0.0.1 and 10.0.0.2 as the interface IPs for the Tinc network.
Install Tinc in both the debian and ubuntu containers
apt-get install tinc
Tinc operates on a concept of network names for the private VPN. Let's call our network 'flockport'.
In /etc/tinc/ on both debian and ubuntu containers create a folder called 'flockport' and do the following.
mkdir /etc/tinc/flockport
This will hold our configuration for this network.
Create a 'hosts' folder in the flockport folder
mkdir /etc/tinc/flockport/hosts
Create the following files in the flockport folder - tinc.conf, tinc-up, tinc-down
touch tinc.conf tinc-up tinc-down
Configure Tinc on debian container
nano tinc.conf
Name = debian (You can use any name)
AddressFamily = ipv4
Interface = tun0
nano tinc-up
#!/bin/bash
ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0
nano tinc-down
#!/bin/bash
ifconfig $INTERFACE down
nano /etc/tinc/flockport/hosts/debian
Address = 1.2.3.4
Subnet = 10.0.0.1
Configure Tinc on ubuntu container
nano /etc/tinc/flockport/tinc.conf
Name = ubuntu
AddressFamily = ipv4
Interface = tun0
ConnectTo = debian
nano tinc-up
#!/bin/bash
ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0
nano tinc-down
#!/bin/bash
ifconfig $INTERFACE down
nano /etc/tinc/flockport/hosts/ubuntu
Subnet = 10.0.0.2
Generate keys in both the debian and ubuntu containers
tincd -n flockport -K
This will generate a private key in the flockport folder and append the public key to the container's host file in /etc/tinc/flockport/hosts/ (debian in the debian container, ubuntu in the ubuntu container).
Exchange host files on either container
Copy the host file with the public key from /etc/tinc/flockport/hosts/ on the debian container to the hosts folder in the ubuntu container and vice versa.
So now the /etc/tinc/flockport/hosts folder in both the debian and ubuntu containers should have both the 'debian' and 'ubuntu' files in it.
Since we are in containers, how is Tinc in ubuntu going to connect to Tinc in debian when both are behind NAT networks? Tinc uses TCP/UDP port 655, so we are going to forward port 655 UDP and TCP on the debian container's host, which is Host A with public IP 1.2.3.4, to the container. This is the ONLY network state we are putting on the host in this configuration.
iptables -t nat -I PREROUTING -i eth0 -p TCP -d 1.2.3.4/32 --dport 655 -j DNAT --to-destination 10.0.3.4:655
iptables -t nat -I PREROUTING -i eth0 -p UDP -d 1.2.3.4/32 --dport 655 -j DNAT --to-destination 10.0.3.4:655
The moment of truth! Run the tincd command on both debian and ubuntu containers
tincd -n flockport
If you followed the guide accurately debian and ubuntu containers should now be able to ping each other.
Add more containers to the network
To add more containers to this network, follow the same process. Let's say you want to add a third container, 'debclone' on Host A, to the VPN network. We will assign debclone the Tinc network IP 10.0.0.3. All the steps remain the same; only add the config below to debclone's tinc.conf.
Since debclone is on Host A we use the ConnectTo setting to connect to the debian container. For a container on Host B we would use this setting to connect to the ubuntu container.
ConnectTo = debian
And in debclone's copy of the debian container's host file (/etc/tinc/flockport/hosts/debian) add the debian container's NAT address
Address = 10.0.3.4
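Putting it together, debclone's Tinc files would look roughly like this, following the naming and the 10.0.0.3 address used above.
In /etc/tinc/flockport/tinc.conf
Name = debclone
AddressFamily = ipv4
Interface = tun0
ConnectTo = debian
In /etc/tinc/flockport/tinc-up
#!/bin/bash
ifconfig $INTERFACE 10.0.0.3 netmask 255.255.255.0
In /etc/tinc/flockport/hosts/debclone
Subnet = 10.0.0.3
In the copied /etc/tinc/flockport/hosts/debian, with the NAT address added
Address = 10.0.3.4
Generate debclone's keys with tincd -n flockport -K as before.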
Copy the debclone host file to the debian and ubuntu containers, and copy their host files to debclone.
At this point you have added debclone to the network and debian, ubuntu and debclone will be able to ping each other.
To ensure the Tinc private network starts on reboot, edit the /etc/tinc/nets.boot file in each container and add the network name, in this case flockport. This ensures the Tinc network starts up on boot and is available.
Follow the same process to add more containers from other hosts to the Tinc mesh.
Tinc has a number of options for optimizing connectivity, such as compression, and for choosing the security cipher. Please visit the Tinc website for more options.
With Tinc and GRE tunnels you create layer 3 overlay networks. You can also use IPSEC tunnels to create layer 3 overlays, and since IPSEC is in kernel space it could be more efficient. See our IPSEC guides below.
Connect LXC hosts with IPSEC VPNs
Connect LXC containers across hosts with IPSEC VPNs
You can also create layer 2 overlay networks as we show in the guide below.