LXC networking guide

Containers need an IP to be reachable on the network. In Linux, 'bridges' are used to connect containers and VMs to a network. Think of a bridge as a sort of software switch created within the host that containers and VMs connect to. Bridges are basic functionality of the Linux kernel and are usually managed with the bridge-utils package.

Depending on your environment you can configure two main types of bridges.

    • Host Bridge - your containers/VMs are directly connected to the host network and appear and behave like other physical machines on your network
    • NAT - a private network (subnet) with private IPs is created within your host for your containers. This is a NAT bridge. To learn more please see the NAT and Autostart sections below.

There are two types of IPs: public IPs, usually provided by your ISP or server provider, which can be routed from the internet, and private IPs that are private to your network and cannot be accessed from the internet. A NAT network is a subnet of private IPs. For instance, the network set up by your home wifi router, usually in the 192.168.1.0/24 subnet for all your computers, mobiles and tablets, is a NAT network.

A NAT network for your containers is similar, only your host acts as the router, with the containers and VMs connected to a software switch or bridge on a private subnet created within the host.

Note: The Flockport Debian and Ubuntu LXC packages automatically set up and enable container networking with LXC's default 'lxcbr0' bridge and DHCP, so nothing more needs to be done.

This section is a reference for setting up container networking and covers configuring bridged mode networking, NAT, static IPs, public IPs, deploying in cloud VMs and enabling autostart for containers.

Bridged mode

This bridges containers or VMs to your host network so they appear like other physical machines on your network. This type of bridge is created by bridging the host's physical interface, usually 'eth0'.

In Linux, eth0 is typically the name of the first physical network interface. On systems with multiple network cards the interfaces will typically be eth0, eth1, eth2 and so on.

In this type of bridge, containers/VMs are on the same layer 2 network as the host and have all the network capabilities of the host: they can connect directly to the internet, connect to and be reached from other machines on the network, and if assigned public IPs can be reached directly from the internet.

For instance, if you are a home user with a wifi router that assigns IPs to your devices, containers on a bridged network will get their IPs directly from the router.

To create a direct bridge, edit the /etc/network/interfaces file and make sure it looks like the example below

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
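After editing the file, restart networking or reboot the host for the bridge to come up. As a quick check (assuming the bridge-utils package is installed), something like the below should show br0 with eth0 attached. Note that restarting networking over an SSH session can drop your connection, so a reboot may be safer.

service networking restart
brctl show br0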

Now you can configure containers to connect to the br0 bridge and they will get their IPs from the router your host is connected to and be on the same network as your host.

You can set static IPs for your containers either from the router settings, by pinning IPs to MAC addresses, or in the container's network settings.

If you are on a network with public IPs, or a dedicated server or cloud provider gives you multiple public IPs (and allows you to bridge), you can bridge your host's eth0 to br0 and assign static public IPs to the containers via the container's networking config.
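As a sketch of what that config could look like, the lxc.network.ipv4 keys below set a static public IP directly in the container's config. Here 203.0.113.10/24 and the 203.0.113.1 gateway are placeholder values standing in for whatever your provider assigns; br0 is the host bridge created above.

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 203.0.113.10/24
lxc.network.ipv4.gateway = 203.0.113.1

With IPs set this way it's usually best to set the container's own /etc/network/interfaces eth0 stanza to manual, so DHCP inside the container does not override them.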

NAT

A NAT bridge is a private network that is not bridged to the host eth0 or physical network. It exists as a private subnet.

In many cases a user may have little control over the network's DHCP, so getting IPs assigned automatically is impossible, or may not want to bridge the host's physical interface. In these cases a NAT bridge is the best option. It has other uses too, but let's leave that for later.

A NAT bridge is a standalone bridge with no physical interfaces attached to it that basically creates a private network within the host computer with the containers/VMs getting private IPs.

These IPs are not directly accessible from the outside world. Only the host and other containers on the same private subnet can access them. The containers need to use NAT (network address translation) to access the internet.

LXC ships with a default NAT bridge called lxcbr0. It is set up by the lxc-net script, which creates the bridge and basic networking in the 10.0.3.0/24 subnet, including internet connectivity for containers. If you have LXC installed you can check the bridge is up

brctl show

You should see an lxcbr0 entry. The containers are configured to connect to this bridge via their config files, and get IPs on this private subnet. Here is part of the container's config file that configures networking. Notice the lxc.network.link value is set to lxcbr0. If you were using the host bridge this would be br0.

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:f9:d3:03
lxc.network.mtu = 1500

The lxcbr0 bridge is set up by the lxc-net script, which is part of the LXC installation in most distributions. Have a look at the script in /etc/init.d/lxc-net in Debian, or its configuration in /etc/default/lxc-net in Ubuntu.

The script does three things: it creates the standalone lxcbr0 bridge, sets up dnsmasq to serve DHCP on this bridge on the 10.0.3.0/24 subnet, and adds a few iptables rules (masquerade) so the containers have internet access.
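For reference, the rough manual equivalent of what the script does looks like the below. This is only a sketch; the real script adds checks, cleanup and configurable variables.

brctl addbr lxcbr0
ifconfig lxcbr0 10.0.3.1 netmask 255.255.255.0 up
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
dnsmasq --bind-interfaces --interface=lxcbr0 --except-interface=lo --listen-address=10.0.3.1 --dhcp-range=10.0.3.2,10.0.3.254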

In the next section we give a quick overview of how the lxc-net script works, which should help those who choose to compile LXC on Debian. Note: those installing LXC from our repo or on Ubuntu DO NOT need to do any of the below.

For those compiling on Debian, you can download this lxc-net init script. We have customised the script to fix a few bugs and enable autostart to work out of the box. You can use it as a reference to create init files for other Linux distributions.

Copy it to your /etc/init.d/ folder and enable it.

chmod +x /etc/init.d/lxc-net
update-rc.d lxc-net defaults
service lxc-net start

Add a file named lxc to /etc/default/ with the line below

nano /etc/default/lxc
USE_LXC_BRIDGE="true"

Now we just need a DHCP server to assign IPs to containers/VMs.

Install dnsmasq-base

apt-get install dnsmasq-base

With dnsmasq-base it's a good idea to add a dnsmasq user.

adduser --system --home /var/lib/misc --gecos "dnsmasq" --no-create-home --disabled-password --quiet dnsmasq

In case you already have the full dnsmasq package installed (quite possible, as it's used by a lot of apps), you will need the extra configuration below.

Create an lxc config file in /etc/dnsmasq.d/ with nano or your favourite text editor

nano /etc/dnsmasq.d/lxc

Add the lines below to the lxc file

bind-interfaces
except-interface=lxcbr0

And restart dnsmasq

service dnsmasq restart

The above configuration is only needed if your Dnsmasq installation is not configured to bind to a specific interface. If Dnsmasq binds to all interfaces the lxcbr0 bridge will fail to come up.

A security tip for existing dnsmasq users: it's a good idea to ensure you bind dnsmasq to a specific interface so you are not running an open DNS relay. This can even be a fake interface like 'abc' for instance.
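For example, adding the two lines below to /etc/dnsmasq.conf restricts dnsmasq to the loopback interface only, so it is not reachable from outside the host.

interface=lo
bind-interfaces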

If you have not already done so above, move the lxc-net script to /etc/init.d/ and enable it.

update-rc.d lxc-net defaults

Start the lxc-net service

service lxc-net start

Congratulations! The lxcbr0 bridge with NAT is now enabled and will come up automatically on reboot.
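As a quick sanity check, the bridge should be listed and hold the 10.0.3.1 gateway address:

brctl show lxcbr0
ip addr show lxcbr0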

Deploy containers in cloud KVMs

In a cloud KVM the average user may not have access to the network DHCP or enough public IPs, so a NAT bridge is often the only option.

To set up NAT networking for your containers please refer to the NAT section above.

With the default LXC NAT bridge 'lxcbr0' set up, containers will have access to the internet, but if you need to make any services on a container available to the world you need to configure port forwarding from the host to the container.

For instance, to forward port 80 from the host IP 1.1.1.1 to a container with IP 10.0.3.165 you can use the iptables rule below.

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 1.1.1.1/32 --dport 80 -j DNAT --to-destination 10.0.3.165:80

This will make, for instance, an Nginx web server on port 80 of the container available on port 80 of the host.
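Keep in mind iptables rules added on the command line do not survive a reboot. One way to persist them on Debian or Ubuntu (an assumption; adapt this to your distribution) is the iptables-persistent package, which loads saved rules at boot:

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4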

Advanced users who do have access to the network and would prefer to use a bridged network and public IPs for containers can follow the instructions in the bridged networking section above.

If you already have a bridge, you can connect LXC containers to your current bridge by specifying it in the LXC container config file.

Configure static IPs for containers

LXC containers have MAC addresses. If you are using the default lxcbr0 network, to assign a static IP to an LXC container create a dnsmasq.conf file in /etc/lxc/ and add a line like the one below, mapping a container name to a specific IP.

dhcp-host=containername,10.0.3.21
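Depending on your setup, the lxc-net script may need to be told to read this file; the stock script uses a LXC_DHCP_CONFILE variable for this. Assuming that is the case for your script, add the line below to /etc/default/lxc and restart lxc-net.

LXC_DHCP_CONFILE="/etc/lxc/dnsmasq.conf"
service lxc-net restart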

If you are not using the default LXC network, you can pin an LXC container's MAC address to a specific IP in your router or DHCP server configuration.

You can also use the container OS's networking configuration to set a static IP. For instance, in a Debian or Ubuntu container, to set a static IP of 10.0.3.150 on eth0 you can use a configuration like the one below in the /etc/network/interfaces file.

auto eth0
iface eth0 inet static
address 10.0.3.150
gateway 10.0.3.1
netmask 255.255.255.0
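With a static IP the container no longer receives DNS servers over DHCP, so you may also need to set a nameserver inside the container. Pointing it at the host's dnsmasq on the bridge is one option (assuming the default 10.0.3.1 gateway):

echo "nameserver 10.0.3.1" > /etc/resolv.conf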

Set up container autostart

LXC can start single containers or groups of containers automatically on boot, which is important for servers hosting services.

The Flockport and Ubuntu LXC packages already enable autostart capability for containers by default, so nothing needs to be done.

The lxc init script is responsible for autostarting containers.

PLEASE NOTE: The Flockport Debian LXC and Ubuntu LXC packages enable container autostart by default. The next section is meant to help those on other Linux distributions and those compiling LXC from source.

For Debian you can download the lxc init script linked below. We have customised the script to enable autostart to work out of the box for LXC containers on Debian. You can use it as a reference to create an autostart script for other Linux distributions.

Download the Flockport LXC Debian init script, move it to /etc/init.d/ and enable it.

mv lxc /etc/init.d/
chmod +x /etc/init.d/lxc
update-rc.d lxc defaults
service lxc start

Add a default.conf file to /etc/lxc/ with the lines below. Change the lxcbr0 line if you are using another bridge.

lxc.network.type = veth
lxc.network.link = lxcbr0

You can now configure containers to autostart on boot.

To configure a container to autostart on reboot, add the following line to the container's config file at containername/config

lxc.start.auto = 1

You can also group containers and autostart groups of containers by adding a group name to the container's config.

lxc.group = groupname

You can also stagger container starts, as in the sketch below. Please see our LXC advanced guide for more details.
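As a sketch, the keys below (added to a container's config) put the container in a hypothetical 'web' group, give it a start order and make LXC wait 5 seconds before starting the next container; the lxc-autostart tool can then list or start the whole group.

lxc.group = web
lxc.start.order = 10
lxc.start.delay = 5

lxc-autostart -g web -L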
