LXC advanced user guide
Cgroups
Check memory usage
cat /sys/fs/cgroup/lxc/mycontainer/memory.usage_in_bytes
For instance to limit memory on container p1 to 1 GB you would run
lxc-cgroup -n p1 memory.limit_in_bytes 1G
You can check the cgroup to see if the setting is applied
cat /sys/fs/cgroup/lxc/p1/memory.limit_in_bytes
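The lxc-cgroup tool can also read values: if you leave out the value it should print the current setting, so the limit above can be checked with
lxc-cgroup -n p1 memory.limit_in_bytes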
You can also directly echo the setting to the cgroup.
echo 1G > /sys/fs/cgroup/lxc/p1/memory.limit_in_bytes
Set it in the container config file for persistence.
lxc.cgroup.memory.limit_in_bytes = 1G
See the available cgroups for a container
ls -1 /sys/fs/cgroup/lxc/containername
Suppose you have a 4 core cpu and would like to limit a container to 2 specific cores, 0 and 3. You can set it like this in the container config file.
lxc.cgroup.cpuset.cpus = 0,3
You can also set cpu shares per container. For example, if you have 4 containers and would like to allocate each a specific share of cpu time, you can give container A 500 shares, container B 250 shares, container C 100 shares and container D 50 shares. This means A will get 5 times the cpu of container C and 10 times the cpu of container D.
lxc.cgroup.cpu.shares = 512
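As a sketch of the scenario above (the container names are illustrative), you would set one line in each container's config file:
# in container A's config
lxc.cgroup.cpu.shares = 500
# in container B's config
lxc.cgroup.cpu.shares = 250
# in container C's config
lxc.cgroup.cpu.shares = 100
# in container D's config
lxc.cgroup.cpu.shares = 50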
To limit swap use you can set the cgroup memsw limit, which caps memory plus swap combined; for example, to cap it at 192M
lxc.cgroup.memory.memsw.limit_in_bytes = 192M
LXC doesn't directly support disk quotas but supports LVM and filesystems like btrfs and zfs that do.
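For example, to cap a container's disk usage you could create it on an LVM backingstore with a fixed size. This is a sketch; it assumes the host has a volume group named 'lxc' (the default) available.
lxc-create -t download -n p1 -B lvm --fssize 5G -- -d ubuntu -r trusty -a amd64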
Cloning
lxc-clone -o mycontainer -n mycontainer-clone
-o - original container name
-n - new container name
This creates a clone of your container. LXC is filesystem neutral but supports btrfs, ZFS, LVM, Overlayfs, Aufs and can use functions specific to those file systems for cloning and snapshot operations.
For instance on a btrfs file system lxc-create and lxc-clone use btrfs subvolumes to create and clone containers.
You can also use the -B option to specify a backingstore.
You can make snapshots with lxc-clone command on supported backingstores.
Suppose you want a temporary snapshot to work on:
lxc-clone -o mycontainer -n mycontainer-snap -B overlayfs -s
-B - backingstore, specifies the supported backingstore file system
-s - snapshot, make a snapshot
You can now make any changes to the container and any change will be stored in the 'delta0' directory of mycontainer-snap.
Snapshots
lxc-snapshot -n mycontainer -c snap-comment
-n - name of container to be snapshotted
-c - comment to be attached to snapshot
This takes a snapshot of mycontainer and attaches the comment specified in the file snap-comment.
To list your snapshots and comments
lxc-snapshot -n mycontainer -LC
This will give you a list of snapshots available for mycontainer and show you any attached comments.
To revert to a snapshot
lxc-snapshot -n mycontainer -r snap0
Or if you want to restore a snapshot as its own container
lxc-snapshot -n mycontainer -r snap0 mycontainer-snap0
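You can also delete snapshots you no longer need with the -d option:
lxc-snapshot -n mycontainer -d snap0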
Have a look at this post from one of the LXC developers on how to use snapshotting with overlayfs
Container storage
Container file systems live in the default LXC folder, usually /var/lib/lxc. Each container's file system is in the rootfs directory inside its 'containername' folder.
To access host data from inside a container you can simply bind mount a host folder or drive via the container's fstab file, located in the container's directory.
/var/www var/www none bind,create=dir
This will mount the /var/www folder from the host in the container at /var/www. You can mount the same location in multiple containers to share data.
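If you prefer to keep everything in one place, the same bind mount can also be declared directly in the container's config file instead of the fstab file, along these lines:
lxc.mount.entry = /var/www var/www none bind,create=dir 0 0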
This is a big deal and the ease with which it is done makes it hugely useful.
You can also mount a shared folder from a separate container so it acts as a sort of portable storage volume outside your container. For instance
/var/lib/lxc/myvolume/rootfs/var/lib/mysql var/lib/mysql none bind,create=dir
will mount myvolume's /var/lib/mysql folder in the mysql container, so you can separate the application container from its data. This storage container can also be shared across containers if required.
LXC passthrough devices
By default LXC prevents containers from accessing host devices, using the devices cgroup as a filtering mechanism. The default LXC config allows certain devices to pass through. This is per container and is set in the container config file.
You can edit the individual container configuration to allow the additional devices and then restart the container. You can see this in our LXC Gluster guide where we use this to pass through the fuse device.
For one-off things, there’s a very convenient tool called 'lxc-device'. With it, you can simply run
lxc-device add -n p1 /dev/ttyUSB0 /dev/ttyS0
This will add (mknod) /dev/ttyS0 in the container with the same type/major/minor as /dev/ttyUSB0 and then add the matching cgroup entry, allowing access from the container.
The same tool also allows moving network devices from the host to within the container.
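For example, assuming the host has a spare interface called eth1, something like this should move it into the container:
lxc-device add -n p1 eth1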
Autostart
lxc-autostart
The lxc-autostart command is used to start containers which have autostart enabled in their config files. You can also make a group of containers and set the group to autostart.
There are a number of autostart related options that can be set in the individual container config file.
The autostart command is typically run by the LXC init scripts at boot to start containers that have autostart enabled in their config file. You can stagger autostarts for containers that depend on services in other containers using the delay and order options below.
lxc.start.auto = 0 (disabled) or 1 (enabled)
lxc.start.delay = 0 (delay in seconds to wait after starting the container)
lxc.start.order = 0 (priority of the container, higher value means starts earlier)
lxc.group = group1,group2,group3,… (groups the container is a member of)
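With groups defined you can, for example, list what would be started or start only a particular group (the group name 'web' below is just an illustration):
lxc-autostart -L
lxc-autostart -g web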
Using kernel modules in containers
Containers share the host kernel and cannot load kernel modules themselves, so any modules a containerized application needs must be loaded on the host. For our load balancing with LVS and building IPsec VPNs guides, for instance, we load the LVS and IPsec modules on the host.
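For instance, before starting the LVS container you would load the standard LVS/IPVS module on the host, and a module loaded this way is then available to containers running on that kernel:
modprobe ip_vs
You can confirm it is loaded on the host with lsmod | grep ip_vs.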
Creating containers - LXC download templates
These are designed for unprivileged containers but also work with normal privileged containers.
To use a 'download' template
lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64
-d distribution
-r release
-a architecture
To get a list of available images, run the download template without specifying a distribution; it will show you the list interactively.
lxc-create -t download -n test
When creating a container as above, container networking is set up in the container config file in /var/lib/lxc/containername/config, depending on the LXC networking configuration in /etc/lxc/default.conf. If that file is empty the container will have no networking enabled.
Usually the values below are what you need to enable the default container networking with the lxcbr0 bridge.
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
But it's much better to add these values to /etc/lxc/default.conf so lxc-create populates the networking values in the container config file automatically every time it creates a container.
Note: If you add the values manually to the container config file you need to replace the MAC address 'xx' bits with random hexadecimal values. If you add it to /etc/lxc/default.conf lxc-create is smart enough to automatically generate values for the 'xx' bits.
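If you do need to fill in the 'xx' bits by hand, here is a quick sketch using standard tools to generate a random suffix:
openssl rand -hex 3 | sed 's/\(..\)\(..\)\(..\)/00:16:3e:\1:\2:\3/'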
With that done you can start the container
lxc-start -n containername -d
Check if networking is enabled by using the lxc-ls -f command. Normally you should see the container name with an IP against it, like below.
NAME      STATE    IPV4       IPV6  AUTOSTART
-----------------------------------------------
alpine    STOPPED  -          -     NO
p1        RUNNING  10.0.3.62  -     NO
debian32  STOPPED  -          -     NO
debian    STOPPED  -          -     NO
ubuntu    STOPPED  -          -     NO
Now you can log in to the container with ssh. Note that the latest LXC container OS images, unlike earlier ones, do not ship with ssh installed by default, so you need to install it first. To do that, log in to the container using lxc-console or lxc-attach
lxc-console -n p1
Quick tip: to exit lxc-console use Ctrl-a q
Once you are inside the container use the 'passwd' command to set a root password. Then, if you are in an Ubuntu or Debian container for instance, run apt-get update and install OpenSSH. Once this is done you can power off or exit the container and log in to it with ssh.
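Inside an Ubuntu or Debian container, for instance, the steps would look something like this (openssh-server is the Debian/Ubuntu package name):
passwd
apt-get update
apt-get install openssh-server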
Useful LXC commands
lxc-info gives you a quick overview of a container's state and resource usage.
lxc-info -n mycontainer
Name:        mycontainer
State:       RUNNING
PID:         6162
IP:          10.0.3.199
CPU use:     38.13 seconds
BlkIO use:   132.70 MiB
Memory use:  293.71 MiB
Link:        vethM2G070
TX bytes:    1.64 MiB
RX bytes:    632.05 KiB
Total bytes: 2.26 MiB
lxc-monitor is used to monitor container state changes when required. See sample output below.
lxc-monitor -n mycontainer
'mycontainer' changed state to [STOPPING]
'mycontainer' changed state to [STOPPED]
'mycontainer' changed state to [STARTING]
'mycontainer' changed state to [RUNNING]
lxc-freeze and lxc-unfreeze are used to freeze and unfreeze a container's processes.
lxc-freeze -n mycontainer
This will freeze the container's processes; you can unfreeze it by using the lxc-unfreeze command.
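To resume the container:
lxc-unfreeze -n mycontainer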
Finally lxc-destroy is used to destroy unneeded containers.
lxc-destroy -n mycontainer
It supports btrfs and will destroy a btrfs subvolume if the container was created in one.
The Flockport Team.
Further reading and resources.
Flockport LXC Networking Guide
Understanding the key differences between LXC and Docker
Stephane Graber's excellent 10 part post on LXC.
Stephane and Serge work for Ubuntu and are the lead maintainers of the LXC project.