Building distributed mesh networks of LXC containers

This setup is unique in that we are going to connect 2 (or more) containers behind NAT networks with Tinc. With this configuration we try not to touch the host networking layer: the mesh network lives entirely in the containers' networking space, and the only setting on the host is a single port forward of tcp/udp 655 on one of the hosts to a container. In this scenario the containers will have 2 IPs: one on the default lxcbr0 NAT network and one on the Tinc VPN network.

The setup is the same as setting up the Tinc network across LXC hosts, only in this case we are going to do it inside the containers. Choose two containers, one from Host A and one from Host B; let's call them 'debian' from Host A and 'ubuntu' from Host B for this guide.

This guide presumes you have gone through the basic LXC networking guide and are familiar with LXC and VM networking basics.

Tinc needs the tun device to be available in the containers, so let's create a tun device in both containers first. Default LXC configurations allow the creation of tun devices in containers, so please ensure you are using the default LXC templates for this guide.

mkdir /dev/net
mknod /dev/net/tun c 10 200
chmod 666 /dev/net/tun
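To confirm the device was created correctly, a quick check inside each container; the output should show a character device with major number 10 and minor number 200, matching the mknod command above.

ls -l /dev/net/tun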

With the devices created in both containers we are ready to proceed. This is the network map.

Host A has public IP 1.2.3.4
Host B has public IP 2.3.4.5
Containers in Host A are on subnet 10.0.3.0/24 via default lxcbr0 nat bridge
Containers in Host B are on subnet 10.0.3.0/24 via default lxcbr0 nat bridge
Debian container is IP 10.0.3.4
Ubuntu container is IP 10.0.3.26

Normally we would put them on different subnets to avoid IP conflicts and facilitate routing, but here we are building an overlay mesh network: the containers will have 2 sets of IPs and will actually be reachable on the overlay subnet 10.0.0.0/24.

We are going to use 10.0.0.1 and 10.0.0.2 as the interface IPs for the Tinc network.

Install Tinc in both the debian and ubuntu containers

apt-get install tinc

Tinc operates on a concept of network names for the private VPN. Let's call our network 'flockport'.
In /etc/tinc/ on both the debian and ubuntu containers, create a folder called 'flockport':

mkdir /etc/tinc/flockport

This will hold our configuration for this network.

Create a 'hosts' folder in the flockport folder

mkdir /etc/tinc/flockport/hosts

Create the following files in the flockport folder - tinc.conf, tinc-up and tinc-down. The tinc-up and tinc-down scripts are run by Tinc when the network comes up and goes down, so they must be executable. From inside /etc/tinc/flockport:

touch tinc.conf tinc-up tinc-down
chmod +x tinc-up tinc-down
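Once everything below is in place, the layout inside each container should look like this, with the file contents differing per node:

/etc/tinc/flockport/tinc.conf
/etc/tinc/flockport/tinc-up
/etc/tinc/flockport/tinc-down
/etc/tinc/flockport/hosts/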

Configure Tinc on the debian container

nano /etc/tinc/flockport/tinc.conf

Name = debian
AddressFamily = ipv4
Interface = tun0

The Name can be anything, but it must match this node's file name in the hosts folder.

nano tinc-up

#!/bin/bash
ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0

nano tinc-down

#!/bin/bash
ifconfig $INTERFACE down
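If your container image does not ship net-tools (ifconfig), the iproute2 equivalents work just as well; a minimal sketch of the same two scripts (use 10.0.0.2 on ubuntu):

tinc-up:

#!/bin/bash
ip addr add 10.0.0.1/24 dev $INTERFACE
ip link set $INTERFACE up

tinc-down:

#!/bin/bash
ip link set $INTERFACE down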

nano /etc/tinc/flockport/hosts/debian

Address = 1.2.3.4
Subnet = 10.0.0.1/32

The Address is Host A's public IP, where port 655 will be forwarded to the debian container later; the Subnet is the overlay IP this node owns.

Configure Tinc on the ubuntu container

nano /etc/tinc/flockport/tinc.conf

Name = ubuntu
AddressFamily = ipv4
Interface = tun0
ConnectTo = debian

The ConnectTo setting is what makes this work behind NAT: ubuntu initiates the outbound connection to the debian node, the only node reachable from outside via the port forward.

nano tinc-up

#!/bin/bash
ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0

nano tinc-down

#!/bin/bash
ifconfig $INTERFACE down

nano /etc/tinc/flockport/hosts/ubuntu

Subnet = 10.0.0.2/32

Note there is no Address line here: the ubuntu node sits behind NAT with no port forward, so it always dials out and other nodes never need to reach it directly.

Generate keys in both the debian and ubuntu containers

tincd -n flockport -K

This will generate a private key (rsa_key.priv) in the flockport folder and append the public key to this node's file in /etc/tinc/flockport/hosts/ - hosts/debian on debian and hosts/ubuntu on ubuntu.
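After key generation the debian host file should look something like this - the key block below is a placeholder, yours will differ:

Address = 1.2.3.4
Subnet = 10.0.0.1/32

-----BEGIN RSA PUBLIC KEY-----
...your generated public key...
-----END RSA PUBLIC KEY-----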

Exchange host files between the containers
Copy the host file with the public key from /etc/tinc/flockport/hosts/ on debian to the hosts folder on ubuntu and vice versa.

The /etc/tinc/flockport/hosts folder on both debian and ubuntu should now contain both the 'debian' and 'ubuntu' files.
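How you exchange the files is up to you. One way, assuming root SSH access between the hosts and the default LXC rootfs path (/var/lib/lxc/NAME/rootfs), is to copy through the hosts; on Host A:

scp /var/lib/lxc/debian/rootfs/etc/tinc/flockport/hosts/debian root@2.3.4.5:/var/lib/lxc/ubuntu/rootfs/etc/tinc/flockport/hosts/

And the reverse from Host B:

scp /var/lib/lxc/ubuntu/rootfs/etc/tinc/flockport/hosts/ubuntu root@1.2.3.4:/var/lib/lxc/debian/rootfs/etc/tinc/flockport/hosts/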

Since we are in containers, how is Tinc in ubuntu going to connect to Tinc in debian when both are behind NAT networks? Tinc uses tcp/udp port 655, so we are going to forward port 655 (tcp and udp) on debian's host, which is Host A with public IP 1.2.3.4, to the debian container. This is the ONLY network state we are putting on the host in this configuration.

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 1.2.3.4/32 --dport 655 -j DNAT --to-destination 10.0.3.4:655
iptables -t nat -I PREROUTING -i eth0 -p UDP -d 1.2.3.4/32 --dport 655 -j DNAT --to-destination 10.0.3.4:655
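Rules added this way do not survive a reboot of Host A. One common way to persist them, assuming a Debian or Ubuntu host, is the iptables-persistent package:

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4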

The moment of truth! Run the tincd command in both the debian and ubuntu containers

tincd -n flockport

If you followed the guide accurately, the debian and ubuntu containers should now be able to ping each other on their 10.0.0.x addresses.
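To verify, from the debian container:

ping 10.0.0.2
ip addr show tun0

If the ping fails, stop the daemon and run it in the foreground with debugging to watch the connection attempts - the debug level can go up to 5:

tincd -n flockport -k
tincd -n flockport -D -d3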

Add more containers to the network
To add more containers to this network, follow the same process. Let's say you want to add a third container, 'debclone', from Host A to the VPN network and assign it the Tinc network IP 10.0.0.3. All the steps remain the same; only the configuration below differs. A consolidated sketch of debclone's files follows these steps.

Since debclone is on Host A, we use the ConnectTo setting to connect to the debian container. For a container on Host B we would point this setting at the ubuntu container instead.

ConnectTo = debian

And in debclone's copy of the debian host file, set the debian container's NAT address rather than the public IP, since both containers share Host A's lxcbr0 bridge:

Address = 10.0.3.4

Copy the debclone host file to the debian and ubuntu containers, and copy their host files to debclone.

At this point you have added debclone to the network, and debian, ubuntu and debclone will be able to ping each other.
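Putting it together, debclone's files would look roughly like this, with the names and IPs assumed above:

/etc/tinc/flockport/tinc.conf:

Name = debclone
AddressFamily = ipv4
Interface = tun0
ConnectTo = debian

/etc/tinc/flockport/tinc-up:

#!/bin/bash
ifconfig $INTERFACE 10.0.0.3 netmask 255.255.255.0

/etc/tinc/flockport/hosts/debclone:

Subnet = 10.0.0.3/32

Its local copy of hosts/debian carries Address = 10.0.3.4 as described above.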

To ensure the Tinc private network starts on reboot, edit the /etc/tinc/nets.boot file in each container and add the network name, in this case flockport. This ensures the Tinc network starts up on boot and is available.
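One network name per line; for example:

echo flockport >> /etc/tinc/nets.boot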

Follow the same process to add more containers from other hosts to the Tinc mesh.

Tinc has a number of options to optimize connectivity, such as compression and the choice of security cipher. Please visit the Tinc website for more options.
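These are set per node in its host file; a sketch with illustrative values - check the Tinc documentation for what your version supports:

Compression = 9
Cipher = aes-256-cbc
Digest = sha256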

More from the Flockport LXC networking series

Connect LXC hosts with Tinc VPNs

Connect LXC hosts with GRE tunnels

Connect LXC hosts with IPSEC VPNs

Connect LXC containers across hosts with IPSEC VPNs

LXC networking deep dive - Extending layer 2 across hosts
