LXC 1.0: Advanced container usage [3/10]

This is post 3 out of 10 in the LXC 1.0 blog post series.

Exchanging data with a container

Because containers directly share their filesystem with the host, there are a lot of things that can be done to pass data into a container or to get stuff out.

The first obvious one is that you can access the container’s root at:
/var/lib/lxc/<container name>/rootfs/
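
For example, with the “p1” container used throughout this series, you can read a file from its rootfs directly on the host:

sudo cat /var/lib/lxc/p1/rootfs/etc/hostname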

That’s great, but sometimes you need to access data that’s in the container and on a filesystem which was mounted by the container itself (such as a tmpfs). In those cases, you can use this trick:

sudo ls -lh /proc/$(sudo lxc-info -n p1 -p -H)/root/run/

Which will show you what’s in /run of the running container “p1”.

Now, that’s great for accessing the container from the host, but what about having the container access and write data to the host?
Well, let’s say we want to share our host’s /var/cache/lxc with “p1”. We can edit /var/lib/lxc/p1/fstab and append:

/var/cache/lxc var/cache/lxc none bind,create=dir

This line means: mount “/var/cache/lxc” from the host as “/var/cache/lxc” in the container (the lack of an initial / makes the path relative to the container’s root), mount it as a bind-mount (“none” fstype and “bind” option) and create any directory that’s missing in the container (“create=dir”).

Now restart “p1” and you’ll see /var/cache/lxc in there, showing the same thing as you have on the host. Note that if you want the container to only be able to read the data, you can simply add “ro” as a mount flag in the fstab.
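
For example, a read-only version of the entry above (a sketch combining the “ro” flag with the same bind-mount) would be:

/var/cache/lxc var/cache/lxc none bind,ro,create=dir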

Container nesting

One pretty cool feature of LXC (though admittedly not very useful to most people) is support for nesting. That is, you can run LXC within LXC with pretty much no overhead.

By default this is blocked in Ubuntu, as allowing it at the moment requires letting the container mount cgroupfs, which would let it escape any cgroup restrictions that are applied to it. It’s not an issue in most environments, but if you don’t trust your containers at all, then you shouldn’t be using nesting at this point.

So to enable nesting for our “p1” container, edit /var/lib/lxc/p1/config and add:

lxc.aa_profile = lxc-container-default-with-nesting

And then restart “p1”. Once that’s done, install lxc inside the container. I usually recommend using the same version as the host, though that’s not strictly required.
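
If you want the exact commands, one way to do it looks like this (a sketch; lxc-attach is just one way of getting a shell in “p1”):

sudo lxc-stop -n p1
sudo lxc-start -n p1 -d
sudo lxc-attach -n p1
apt-get update && apt-get install lxc    # run from the shell inside the container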

Once LXC is installed in the container, run:

sudo lxc-create -t ubuntu -n p1

As you’ve previously bind-mounted /var/cache/lxc inside the container, this should be very quick (it shouldn’t have to bootstrap the whole environment again). Then start that new container as usual.
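
That is, from a shell inside the outer “p1”:

sudo lxc-start -n p1 -d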

At that point, you may now run lxc-ls on the host in nesting mode to see exactly what’s running on your system:

stgraber@castiana:~$ sudo lxc-ls --fancy --nesting
NAME    STATE    IPV4                 IPV6   AUTOSTART  
------------------------------------------------------
p1      RUNNING  10.0.3.82, 10.0.4.1  -      NO       
 \_ p1  RUNNING  10.0.4.7             -      NO       
p2      RUNNING  10.0.3.128           -      NO

There’s no real limit to the number of levels you can go, though as fun as it may be, it’s hard to imagine why 10 levels of nesting would be of much use to anyone 🙂

Raw network access

In the previous post I mentioned passing raw devices from the host into the container. One setup where I use this relatively often is a container for working with a remote network over a VPN. That network uses OpenVPN and a raw ethernet tap device.

I needed a completely isolated system to access that VPN so that I wouldn’t get mixed routes and so that it would appear just like any other machine to the machines on the remote site.

All I had to do to make this work was set my container’s network configuration to:

lxc.network.type = phys
lxc.network.hwaddr = 00:16:3e:c6:0e:04
lxc.network.flags = up
lxc.network.link = tap0
lxc.network.name = eth0

Then all I have to do is start OpenVPN on my host, which will connect and set up tap0, then start the container, which will steal that interface and use it as its own eth0. The container will then use DHCP to grab an IP and will behave just as if it were a physical machine connected directly to the remote network.
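
Putting it together, the sequence looks something like this (the OpenVPN config path is just a placeholder for your own):

sudo openvpn --daemon --config /etc/openvpn/remote-site.conf    # connects and creates tap0
sudo lxc-start -n p1 -d                                         # the container takes tap0 as its eth0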

20 Responses to LXC 1.0: Advanced container usage [3/10]

  1. bob says:

    Can you publish a similar set of tutorials for FreeBSD jails?

    1. Not really. My knowledge of BSD jails is limited to knowing that they exist. I’m not a BSD user nor do I have any interest in becoming one.

      I also doubt that, as one of the main developers of LXC, I would be able to provide an unbiased review of another container technology 🙂

  2. Seth Arnold says:

    Nice articles! This was exactly what I felt I was missing about LXC from only reading portions of the source code. It’s wonderful to see it in action.

    I was surprised that your last example has OpenVPN running on the host rather than a container — why did you decide to run it a different way than my intuition would suggest? Was there any rhyme or reason to the MAC address chosen?

    Thanks! Looking forward to the rest!

    1. I’m running OpenVPN on the host because the container’s default gateway (well, the whole eth0) will be the VPN, so it wouldn’t be able to connect to it in the first place (chicken and egg kind of problem).

      My goal was to have a way to simulate a standard client on the remote network but running on my laptop and without affecting my laptop’s networking in any way.

      As for the MAC address, I just didn’t want the remote DHCP server to hand me a different IP address every time I booted the container (as the default is a kernel-generated random address), so I just set a random MAC address from our usual range (the same range as Xen, short of us having our own…).

  3. hello_world says:

    Wow,
    finally there is a great tutorial.
    Thanks

  4. Avi says:

    Hi Stephane,
    I have to implement a VRF (Virtual Routing and Forwarding) in a router. The application should create (on user request) multiple VRFs, each with its own network interfaces, routing table, sockets, etc.
    I did some research and thought to implement it with network namespaces,
    but now, after reading some LXC articles, I’m not sure how to implement it:
    with network namespaces or with LXC containers? (I understand that LXC uses the Linux namespace feature, but probably more than that.)
    I’d appreciate it if you can elaborate on the functional differences (if any) between network namespaces and LXC networking.
    Thanks, Avi.

    1. If all you are interested in is moving interfaces between namespaces, having multiple routing tables, firewalling tables and a separate abstract socket namespace, you can probably just stick to using network namespaces directly, either using the netns commands and parameters of iproute2 or directly by using clone(CLONE_NEWNET) and setns() to attach to them.

      LXC is mostly useful when you want to run a standard workload, because it will use all the available namespaces to create an environment as close to a normal system as possible. It can also apply extra restrictions such as integrating with apparmor/selinux/seccomp, dropping some capabilities, or limiting I/O, memory and CPU using cgroups, … but if you are only interested in the network side of things, it may be a bit overkill, with the ip commands being enough for your needs.
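
      As a minimal sketch of the namespace-only approach with iproute2 (the interface name and addresses are made up for the example):

      # create a namespace and move a physical interface into it
      sudo ip netns add vrf1
      sudo ip link set eth1 netns vrf1
      # the namespace has its own routing table, so configure it from inside
      sudo ip netns exec vrf1 ip addr add 192.168.10.2/24 dev eth1
      sudo ip netns exec vrf1 ip link set eth1 up
      sudo ip netns exec vrf1 ip route add default via 192.168.10.1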

  5. Sineer says:

    Salut Stephane,

    Thank you very much for this very useful blog posts and all your hard work on LXC!

    I’ve been a huge fan of FreeBSD Jails for years. This is the first time I get to do something similar on Linux. I’m impressed by what you and your team were able to achieve!

    Regards,

    J.

  6. I need to provide NAT networking to my LXC containers, much like using virbr0 from libvirt, except that nesting a libvirt-lxc container does not work. So if I nest an LXC container, how would you create a bridge that would nat-forward the packets to the network?

  7. Jake says:

    Hi Stephane,

    I’m a long time openvz user and I’ve started following along on your blog series, hoping to come up to speed on lxc. I’ve run into a problem with your nested container example.

    Running ubuntu 14.04, I can install lxc inside the ubuntu CT and I can create the nested CT but when I try to start it as in your example, it fails with the following output:

    root@p1:~# lxc-start -n p1
    lxc-start: cgroupfs failed to detect cgroup metadata
    lxc-start: failed initializing cgroup support
    lxc-start: failed to spawn 'p1'
    root@p1:~#

    Has something changed upstream to make this example no longer valid?

  8. Jake says:

    Hi Stephane,

    I was able to work around the issue by setting

    lxc.aa_profile = unconfined

    in /var/lib/lxc/p1/config, which allowed me to start a nested CT.

    However, there are still some issues – e.g. things like this:

    root@pandora:~# lxc-ls --fancy --nesting
    lxc_container: call to cgmanager_move_pid_sync failed: invalid request
    lxc_container: Failed to enter group lxc/p1-1
    lxc_container: error communicating with child process

  9. Phi-Ho says:

    Hi Jake,

    > I was able to work around the issue by setting
    > lxc.aa_profile = unconfined
    > in /var/lib/lxc/p1/config, which allowed me to start a nested CT.

    Thanks for the tip.

    For lxc 1.0.3 on Ubuntu 14.04 (with latest update/upgrade):

    $ sudo lxc-ls --fancy --nesting
    lxc_container: call to cgmanager_move_pid_sync failed: invalid request
    lxc_container: Failed to enter group lxc/p1-1
    lxc_container: error communicating with child process
    NAME    STATE    IPV4                  IPV6  AUTOSTART
    ------------------------------------------------------
    p1      RUNNING  10.0.3.155, 10.0.4.1  -     YES (1)
     \_ p1  RUNNING  10.0.4.75             -     NO
    p2      RUNNING  10.0.3.204            -     YES
    $

  10. Shailesh says:

    Hi Jake,
    I am still hitting this error,
    lxc-start: cgroupfs failed to detect cgroup metadata
    lxc-start: failed initializing cgroup support
    lxc-start: failed to spawn 'container'

    The workaround suggested above is not helping me.
    Any other solutions?

    Thanks,
    Shailesh

    1. jnvui says:

      Hi
      Did you find a solution? I have the same problem.

      thx

  11. chris says:

    Hello Everyone,

    Have you guys had any issues with the rsyslog service in a CentOS 7 container? I’ve found that my rsyslog service doesn’t always start successfully. It usually starts fine on the first boot but may fail on the second; sometimes it starts successfully the second or third time but fails on other boots.

    I haven’t changed anything between boots, but the result is always different, and it’s driving me nuts.

    regards

    Chris

  12. Ludwig says:

    Hi,

    I managed to get the vpn-setup running but it has one problem:
    when I establish the VPN connection on the host (tap0), it gets a random IP address assigned, so after I start the container, I have to manually set the same IP address on the interface inside the container.
    Is there a solution where I don’t have to do that?

    regards
    Ludwig

  13. Bill says:

    This seems a tangent, but I don’t know where else to start but with this article. I’m being forced to move from Solaris zones, and LXC is very very new to me.

    I have a need for the containers NOT to use NAT nor DHCP, and to have separate containers be the DNS and DHCP servers for our network (what our zones currently do). Is this possible with LXC without NAT or on-LXC-host routing? I can use dedicated interfaces or VNICs, if that is supported by LXC. Zones do this quite brilliantly, but I digress….

    Without getting into details that would get us off track, using docker is not our answer. To summarize, I need each container to appear as a unique host with a static IP, reachable from our network without routing by the LXC host and without NAT.

    I really hope this makes sense. I’m tired. Thanks for any pointers or help.

  14. CHANDAN MOHANTY says:

    Hi,
    I have created an LXC container (Ubuntu 20.04). However, I am unable to mount the host directory /run/vpp inside the container.
    I have the config below:

    # Container specific configuration
    lxc.rootfs.path = dir:/var/lib/lxc/chandan/rootfs
    lxc.uts.name = chandan

    lxc.mount.entry = /run/vpp run/vpp none rw,bind 0 0

    Host:
    root@osboxes:/run/vpp# pwd
    /run/vpp
    root@osboxes:/run/vpp# ls
    root@osboxes:/run/vpp#

    LXC rootfs dir on host:
    root@osboxes:/var/lib/lxc/chandan/rootfs/run/vpp# pwd
    /var/lib/lxc/chandan/rootfs/run/vpp
    root@osboxes:/var/lib/lxc/chandan/rootfs/run/vpp# ls
    root@osboxes:/var/lib/lxc/chandan/rootfs/run/vpp#

    Request your kind input
