LXC containers or extremely fast virtualization

Update: Added a Hardy i386 template, mentioned the need for bridge-utils, and fixed a typo (s/addbr/brctl addbr/g)

This (quite long) post is about LXC (Linux Containers); an example of its usage on Karmic follows the introduction to contextualization.

Most of you are probably already familiar with “usual” virtualization such as KVM/VirtualBox/VMware/… These are now extremely fast ways to do “full” virtualization of an OS on a host running either the same OS or a completely different one.
In Ubuntu, the most widely used is probably KVM, with libvirt and virt-manager as frontends.

At Revolution Linux, we have literally hundreds of virtual machines for each of our customers, and we noticed that they are all Ubuntu virtual machines running on Ubuntu hosts. Running them in a “full” virtualization environment therefore adds unneeded overhead and makes resource assignment quite difficult (you can’t easily change the CPU/RAM/disk/NIC of a running virtual machine).

So, what we are currently doing is using contextualization instead of regular virtualization.
Contextualization can, to simplify a lot, be seen as improved chroots. These “chroots” are called containers and work just like regular virtual machines: inside one you have your own network interface, you can apply disk/CPU/RAM quotas, and you can start/stop/suspend as many of them as you want.
All the quotas and restrictions can be changed on the fly without any restart, because a container is technically just a set of processes running on the host, not a single process as with virtualization.
It also means that you can list, kill, or execute a process in any of these containers directly from the host (a container obviously can’t access another container’s processes).
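
As a sketch of what “on the fly” means in practice (this assumes the LXC userspace tools are installed, the cgroup hierarchy is mounted as described below, and a container named “ubuntu” is running; run as root):

```shell
# Cap the running container's memory at 256 MB; takes effect immediately,
# no restart of the container needed
lxc-cgroup -n ubuntu memory.limit_in_bytes 256M

# Lower its relative CPU weight (the cgroup default is 1024)
lxc-cgroup -n ubuntu cpu.shares 512

# List the processes of all containers, directly from the host
lxc-ps --lxc
```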

The technology we have been using for more than a year now is OpenVZ (an open source implementation of Virtuozzo), which is basically a huge patchset on top of the Linux kernel and, in Ubuntu, only exists in Hardy (8.04 LTS).

What I’ve been looking at more recently, and hope to have working correctly in Lucid (10.04 LTS), is LXC. LXC is basically the same as OpenVZ except that it’s in the upstream kernel and uses already existing kernel features such as “cgroups”.
LXC is also supported by libvirt (although that support isn’t working in Karmic), which will let users play with it just like any other virtualization technology, using their existing scripts and interfaces.

Here’s a quick howto to make it work on Karmic with an Ubuntu 8.04 amd64 container (I’ve had issues making Karmic work in a container):

  • Install bridge-utils: sudo apt-get install bridge-utils
  • Install LXC from my PPA (upstream snapshot): https://launchpad.net/~stgraber/+archive/ppa/+packages
  • Create /var/lib/lxc/: sudo mkdir -p /var/lib/lxc/
  • amd64 template (if your computer is running Ubuntu 64bit): Get http://www.stgraber.org/download/lxc-ubuntu-8.04-amd64.tar.gz (Hardy amd64 image)
  • i386 users (if your computer is running Ubuntu 32bit): Get http://www.stgraber.org/download/lxc-ubuntu-8.04-i386.tar.gz (Hardy i386 image)
  • Uncompress it in /var/lib/lxc/ (this will create an “ubuntu” directory containing a configuration file and a root directory)
  • Mount cgroups somewhere: sudo mkdir /dev/cgroup && sudo mount -t cgroup none /dev/cgroup
  • Create a bridge with: sudo brctl addbr br0
  • Set an IP on the bridge: sudo ifconfig br0 192.168.2.1 up (the VE will be 192.168.2.2 by default)
  • Start the VE: sudo lxc-start -d -n ubuntu
  • Enter the VE: “lxc-console -n ubuntu” or “ssh root@192.168.2.2” (the root password is “password”)
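
For convenience, the steps above can be collected into a single script. This is only a sketch: it assumes a root shell on a Karmic amd64 host, and the download URL is the one given above.

```shell
#!/bin/sh -e
# Sketch of the howto above; run as root on a Karmic amd64 host.

apt-get install -y bridge-utils lxc

# Directory holding the container configs and root filesystems
mkdir -p /var/lib/lxc
cd /var/lib/lxc
wget http://www.stgraber.org/download/lxc-ubuntu-8.04-amd64.tar.gz
tar xzf lxc-ubuntu-8.04-amd64.tar.gz   # creates the "ubuntu" directory

# LXC drives resource limits through cgroups; mount the hierarchy once
mkdir -p /dev/cgroup
mount -t cgroup none /dev/cgroup

# Bridge the container's veth interface will attach to
brctl addbr br0
ifconfig br0 192.168.2.1 up

# Start the container detached, then attach to its console
lxc-start -d -n ubuntu
lxc-console -n ubuntu
```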

The VE (virtual environment) configuration file is in: /var/lib/lxc/ubuntu/config
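
For reference, a minimal configuration of that kind looks roughly like the following (the values here are illustrative; the file shipped in the tarball is authoritative):

```
lxc.utsname = ubuntu
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.2.2/24
lxc.rootfs = /var/lib/lxc/ubuntu/rootfs
```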

Additional information can be found on:

Also, I plan to have a session about it at UDS-Lucid in Dallas.

This entry was posted in LXC, Planet Ubuntu.

19 Responses to LXC containers or extremely fast virtualization

  1. ronen says:

    Hey there,
    I’ve been trying the small howto and didn’t manage to get it to work; will the provided amd64 image work on a 32-bit install (Karmic desktop)?

    I’d be interested in creating an image from scratch (I’ve managed to create a rootfs from a mirror); how can I approach that?

    Thanks!

    • stgraber says:

      Hi Ronen, thanks for your comment.

      The amd64 image can’t run on a 32bit install.
      I uploaded another image which is 32bit.

      If you want to create your own image, the easiest is to use debootstrap (from the package of the same name) to generate the initial installation, then install an SSH server inside it and use the result as the root directory for the LXC container.
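
      A sketch of that approach (run as root; the target path, suite, and mirror here are only examples):

```shell
# Build a minimal Hardy installation to use as a container rootfs
debootstrap --arch amd64 hardy /var/lib/lxc/myve/rootfs \
    http://archive.ubuntu.com/ubuntu

# Install an SSH server inside the new tree so the container is reachable
chroot /var/lib/lxc/myve/rootfs apt-get install -y openssh-server
```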

      We have developed a few scripts for OpenVZ (vz-utils on Launchpad) but these don’t seem to work well for LXC and so will need to be updated.

  2. Are you going to make sure the lxc userspace is in shape for Lucid?

    • stgraber says:

      My plan is to make sure LXC works correctly with libvirt and is included in Main so everyone can use it as easily as KVM.

      The actual userspace for Lucid doesn’t worry me too much. The main issue at the moment is the relation between an LXC container and upstart/udev. That’s something I had to work around for OpenVZ in the past; unfortunately, the same workaround doesn’t work for LXC.

      Once LXC starts to be maintained and integrated in Ubuntu (in the very near future, I hope), fixing mountall and upstart so they don’t hang in containers will probably be the next step, so we don’t have to “patch” the OS to work in a container.

  3. This is good news, because OpenVZ seems to be a dead end… We are currently searching for a viable open-source replacement. Right now KVM seems to be the only viable option for the future, but for our needs, full virtualization is overkill. I hope LXC will fill the void.

  4. Ernad says:

    Hi Stephane,

    I have read your posts about LXC. I have successfully set up the LXC guests (Hardy amd64, i386) which you provided.

    Thank you for sharing this.

    I have tried to set up an LXC Ubuntu Karmic/Lucid guest but with no success (stuck in upstart’s mountall procedure; already reported at https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/461438).

    On your microblogging site I found that you have had success running a Lucid guest. How did you resolve the udev/upstart issues? Can you share that configuration with us?

  5. jean-Marc Pigeon says:

    Hi guys,

    Just to let you know, a little LXC experiment of mine, named ‘vzgot’, has reached a working stage.

    I was able to run a wide range of distributions (RH7.3, RH8.0, RH9, FC2 -> FC12, CentOS-4.[6,7,8], CentOS-5.[2,3,4] and RHEL-4) on a recent unmodified kernel (2.6.31.6-162.fc12).

    I was able to rpmbuild clamav on those 33 distributions, so I am confident enough about reliability (the load reached 70.0 on the hardware host, a Dell-2800).

    VE networking is working fine (VE yum usage is fully working) as long as you have set up the bridge interface (br0) and use quagga.
    The next step is to work on resource contention (cgroups).

    You can access the RPM at:
    ftp://ftp.safe.ca/pub/linux/vzgot/
    Theoretically the RPM is comprehensive enough to allow you to duplicate these results.

    Please give me feedback.

    • Jamie Nicholson says:

      Hi,
      I’ve followed this blog tutorial on LXC and got it working; however, I notice my container cannot access the internet. I’ve updated /etc/hosts and /etc/resolv.conf and still no luck.

      I’m assuming quagga will be able to route the packets from 192.168.2.2 through the bridge (192.168.2.1) to the router (say 192.168.1.1) and out to the web.

      Do you have a quick example on how to get quagga going?

      Cheers
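
      For outbound-only access, a simpler alternative to quagga is to have the host NAT the container’s traffic. A sketch, assuming the howto’s addressing (bridge br0 on 192.168.2.1/24) and that eth0 is the host’s internet-facing interface:

```shell
# On the host: allow forwarding between br0 and eth0
sysctl -w net.ipv4.ip_forward=1

# Masquerade the container subnet behind the host's own address
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE
```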

  6. boblin says:

    It’s looking very nice. Thanks for the good work!

  7. boblin says:

    Your templates are working fine. I tried to convert a Xen VM into an LXC container, but it does not work.

    lxc-info -n guest
    'guest' is RUNNING

    ping to guest working

    but I can’t get a console:
    # lxc-console -n guest

    Type <Ctrl+a q> to exit the console

    in the syslog of the guest system I see only:
    init: tty2 main process (609) killed by TERM signal
    init: tty3 main process (610) killed by TERM signal
    init: tty1 main process (763) killed by TERM signal
    init: tty6 main process (612) killed by TERM signal

    I also cannot connect via SSH. Port 22 is not open.

    How can I convert a VM to LXC?

    Thanks for the help.

  8. bodhi.zazen says:

    I have been working with LXC and am happy to learn it is under development.

    I posted a few blog posts on LXC:

    http://blog.bodhizazen.net/linux/lxc-linux-containers/

    http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-karmic-containers/

    http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-lucid-containers/

    I have space on a server for LXC containers, similar to OpenVZ:

    http://bodhizazen.fivebean.net/LXC/

    Right now I only have a few, but as time allows I will add to it.

    Hopefully some of this information will help others.

  9. Lux says:

    Hi,
    with LXC, is it possible (now or in the future) to have 3D acceleration?

  10. osvaldo says:

    Ubuntu 10.04 Beta 1

    Why do I get this?

    root@srv01:~# lxc-start -n proxy1
    lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rdqdyBc'
    lxc-start: failed to set rootfs for 'proxy1'
    lxc-start: failed to setup the container
    root@srv01:~# lxc-start -n proxy1
    lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rXeJjg8'
    lxc-start: failed to set rootfs for 'proxy1'
    lxc-start: failed to setup the container
    root@srv01:~# lxc-start -n proxy1
    lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rs2TxjR'
    lxc-start: failed to set rootfs for 'proxy1'

  11. Osvaldo says:

    SSH: PTY allocation request failed on channel 0

    Where is my tty?

    lxc-ps --lxc

    CONTAINER PID TTY TIME CMD
    prizm1 1330 ? 00:00:00 init
    prizm1 1406 ? 00:00:00 upstart-udev-br
    prizm1 1420 ? 00:00:00 udevd
    prizm1 1427 ? 00:00:00 getty
    prizm1 1434 ? 00:00:00 getty
    prizm1 1700 ? 00:00:00 sshd
    prizm1 1702 ? 00:00:00 apache2
    prizm1 1722 ? 00:00:00 getty
    prizm1 1723 ? 00:00:00 apache2
    prizm1 1724 ? 00:00:00 apache2
    prizm1 1725 ? 00:00:00 apache2
    prizm1 1726 ? 00:00:00 apache2
    prizm1 1727 ? 00:00:00 apache2
    prizm1 1903 ? 00:00:00 apache2

    —————————————————

    Script for /dev

    #!/bin/bash

    # bodhi.zazen’s lxc-config
    # Makes default devices needed in lxc containers
    # modified from http://lxc.teegra.net/

    ROOT=$(pwd)
    DEV=${ROOT}/rootfs/dev
    if [ "$ROOT" = "/" ]; then
    printf "\033[22;35m\nDO NOT RUN ON THE HOST NODE\n\n"
    tput sgr0
    exit 1
    fi
    if [ ! -d "$DEV" ]; then
    printf "\033[01;33m\nRun this script in rootfs\n\n"
    tput sgr0
    exit 1
    fi
    rm -rf ${DEV}
    mkdir ${DEV}
    mknod -m 666 ${DEV}/null c 1 3
    mknod -m 666 ${DEV}/zero c 1 5
    mknod -m 666 ${DEV}/random c 1 8
    mknod -m 666 ${DEV}/urandom c 1 9
    mkdir -m 755 ${DEV}/pts
    mkdir -m 1777 ${DEV}/shm
    mknod -m 666 ${DEV}/tty c 5 0
    mknod -m 666 ${DEV}/tty0 c 4 0
    mknod -m 666 ${DEV}/tty1 c 4 1
    mknod -m 666 ${DEV}/tty2 c 4 2
    mknod -m 666 ${DEV}/tty3 c 4 3
    mknod -m 600 ${DEV}/console c 5 1
    mknod -m 666 ${DEV}/full c 1 7
    mknod -m 600 ${DEV}/initctl p
    mknod -m 666 ${DEV}/ptmx c 5 2

    exit 0

    ———————————————–
    My config.lxc

    lxc.utsname = prizm1
    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.link = br0
    lxc.network.ipv4 = 192.168.1.13/24
    lxc.network.hwaddr = 00:16:EC:7E:E5:19
    lxc.network.name = eth0
    lxc.mount = /home/lxc/prizm/fstab
    lxc.rootfs = /home/lxc/prizm/rootfs

    lxc.tty = 0
    #lxc.pseudo = 1024

    #lxc.cgroup.devices.deny = a # Deny all access to devices
    # /dev/null and zero
    lxc.cgroup.devices.allow = c 1:3 rwm
    lxc.cgroup.devices.allow = c 1:5 rwm

    # consoles
    lxc.cgroup.devices.allow = c 5:1 rwm
    lxc.cgroup.devices.allow = c 5:0 rwm
    lxc.cgroup.devices.allow = c 4:0 rwm
    lxc.cgroup.devices.allow = c 4:1 rwm
    lxc.cgroup.devices.allow = c 4:2 rwm
    lxc.cgroup.devices.allow = c 4:3 rwm

    # /dev/{,u}random
    lxc.cgroup.devices.allow = c 1:9 rwm
    lxc.cgroup.devices.allow = c 1:8 rwm
    # /dev/pts/* – pts namespaces are “coming soon”
    lxc.cgroup.devices.allow = c 136:* rwm
    lxc.cgroup.devices.allow = c 5:2 rwm
    # rtc
    lxc.cgroup.devices.allow = c 254:0 rwm

  12. Paul says:

    Hey, has anyone got this working with libvirt? There is a container driver (http://www.libvirt.org/drvlxc.html) but I can’t get it to work. I tried prepping the root fs with these instructions, but when I go to start the domain nothing happens. I suspect that the /sbin/init process is crashing. I can get a console, however, if the start process is /bin/bash. Any ideas?

  13. Pingback: State of LXC in Ubuntu 11.04 | Stéphane Graber's website
