Merge lp:~serge-hallyn/serverguide/serverguide-lxc into lp:~ubuntu-core-doc/serverguide/precise

Proposed by Serge Hallyn
Status: Merged
Merged at revision: 46
Proposed branch: lp:~serge-hallyn/serverguide/serverguide-lxc
Merge into: lp:~ubuntu-core-doc/serverguide/precise
Diff against target: 1772 lines (+1764/-0)
1 file modified
serverguide/C/virtualization.xml (+1764/-0)
To merge this branch: bzr merge lp:~serge-hallyn/serverguide/serverguide-lxc
Reviewer Review Type Date Requested Status
Peter Matulis Approve
Serge Hallyn (community) Needs Resubmitting
Review via email: mp+97238@code.launchpad.net

Description of the change

This merge introduces a new LXC section. Some subsections are yet to be written, because they are contingent on work still going into precise.

Revision history for this message
Peter Matulis (petermatulis) wrote :

This is a very significant contribution to the guide. Thank you!

Technical:

All tests performed using default settings and with a simple ubuntu-based container. I did not check all commands.

1. I tried to start a container (cn1) on a KVM guest. I was able to log in but shutting down threw warnings/errors. Normal?

------------------------
$ sudo poweroff
[sudo] password for ubuntu:
$
Broadcast message from ubuntu@cn1
        (/dev/lxc/console) at 18:17 ...

The system is going down for power off NOW!
 * Asking all remaining processes to terminate...
   ...done.
 * All processes ended within 1 seconds....
   ...done.
 * Deconfiguring network interfaces...
   ...done.
 * Deactivating swap...
   ...fail!
umount: /run/lock: not mounted
umount: /dev/shm: not mounted
mount: / is busy
 * Will now halt
------------------------

2. This command output doesn't look right. Normal?:

------------------------
$ sudo lxc-start -n cn1 -d
$ lxc-ls
cn1
cn1

$ lxc-list
RUNNING

STOPPED
------------------------

3. Should include how to escape from a container console (Ctrl-a q).

Style:

Under "Host Setup", for /etc/default/lxc, since you say "true by default" for value of USE_LXC_BRIDGE, it makes sense to also say "true by default" for value of LXC_AUTO.

In general, consider using italics when introducing new terms/commands/package_names or when trying to emphasize a word. Example: "a package was introduced called ``lxcguest''..." or "...have various ``leaks'' which allow...". This quoting style is awkward.

We should encourage proper practice and prepend all commands requiring privileged access with 'sudo'.

I question the section title of "Container Introspection". The term introspection pertains to the fields of philosophy and psychology. Maybe "Inspection" is better.

Under "Advanced namespace usage", there is a block of code that is not formatted properly. It shows '<pre>' tags and I don't think the red colour is called for. I also think you should provide an external resource/link to 'private namespaces' as well as giving a one-line description of a basic use-case.

Standardize on using "LXC" and not "lxc" or "Lxc" except when referring to a package name? Dunno, 'lxc.sourceforge.net' shows kind of the reverse: "LXC is the userspace control package for Linux Containers" and "Linux Containers (lxc) implement:". Can be confusing to readers. Proceed as you see fit.

"IP address" instead of "ip address".

Awkward: "The type can be one of several types."

Missing a 'The'? "Following is an example configuration file..."

Extra character? "...which require cap_sys_admin}."

You've made bold man page references before. "See capabilities(7) for a list..."

Rework: "For instance, if a package's postinst fails if it cannot open a block device..."

The standard is to capitalize release codenames: "In the natty and oneiric releases of Ubuntu..."

Add resources section (external links, man pages) at end of page. See end of https://help.ubuntu.com/11.10/serverguide/C/openldap-server.html for an example.

review: Needs Fixing
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

On 03/16/2012 02:52 PM, Peter Matulis wrote:
> Review: Needs Fixing
>
> This is a very significant contribution to the guide. Thank you!
>
> Technical:
>
> All tests performed using default settings and with a simple ubuntu-based container. I did not check all commands.
>
> 1. I tried to start a container (cn1) on a KVM guest. I was able to log in but shutting down threw warnings/errors. Normal?

Yes, the errors are normal. Should that be explained somewhere?

> ------------------------
> $ sudo poweroff
> [sudo] password for ubuntu:
> $
> Broadcast message from ubuntu@cn1
> (/dev/lxc/console) at 18:17 ...
>
> The system is going down for power off NOW!
> * Asking all remaining processes to terminate...
> ...done.
> * All processes ended within 1 seconds....
> ...done.
> * Deconfiguring network interfaces...
> ...done.
> * Deactivating swap...
> ...fail!
> umount: /run/lock: not mounted
> umount: /dev/shm: not mounted
> mount: / is busy
> * Will now halt
> ------------------------
>
>
> 2. This command output doesn't look right. Normal?:
>
> ------------------------
> $ sudo lxc-start -n cn1 -d
> $ lxc-ls
> cn1
> cn1
>
> $ lxc-list
> RUNNING
>
> STOPPED
> ------------------------

The lxc-list output doesn't look right - cn1 should show up in both
lists. The lxc-ls output shouldn't have the third (empty) line but
otherwise looks fine. I can't reproduce this.

I will address the rest in a merge proposal update. Thanks for the
comments!

50. By Serge Hallyn

Address a number of pmatulis' comments.

51. By Serge Hallyn

sudo

52. By Serge Hallyn

standardize use of LXC

53. By Serge Hallyn

remove <pre>

54. By Serge Hallyn

fix parse errors

55. By Serge Hallyn

namespac

56. By Serge Hallyn

write security section

57. By Serge Hallyn

remove udev comment

58. By Serge Hallyn

comment out virt-install libvirt-lxc section (it never worked for me)

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

I believe the comments are now addressed, thanks.

review: Needs Resubmitting
Revision history for this message
Peter Matulis (petermatulis) wrote :

All good. Very nice!

review: Approve

Preview Diff

=== modified file 'serverguide/C/virtualization.xml'
--- serverguide/C/virtualization.xml 2012-03-11 16:42:45 +0000
+++ serverguide/C/virtualization.xml 2012-03-18 23:46:47 +0000
@@ -2215,4 +2215,1768 @@
2215
2216 </sect2>
2217 </sect1>
2218 <sect1 id='lxc' status='review'>
2219 <title>LXC</title>
2220 <para>
2221 Containers are a lightweight virtualization technology. They are
2222 more akin to an enhanced chroot than to full virtualization like
2223 Qemu or VMware, both because they do not emulate hardware and
2224 because containers share the same operating system as the host.
2225 Therefore containers are better compared to Solaris zones or BSD
2226 jails. Linux-vserver and OpenVZ are two pre-existing, independently
2227 developed implementations of containers-like functionality for
2228 Linux. In fact, containers came about as a result of the work to
2229 upstream the vserver and OpenVZ functionality. Some vserver and
2230 OpenVZ functionality is still missing in containers, however
2231 containers can <emphasis>boot</emphasis> many Linux distributions and have the
2232 advantage that they can be used with an unmodified upstream kernel.
2233 </para>
2234
2235 <para>
2236 There are two user-space implementations of containers, each
2237 exploiting the same kernel features. Libvirt allows the use of
2238 containers through the LXC driver by connecting to 'lxc:///'. This
2239 can be very convenient as it supports the same usage as its other
2240 drivers. The other implementation, called simply 'LXC', is not
2241 compatible with libvirt, but is more flexible with more userspace
2242 tools. It is possible to switch between the two, though there are
2243 peculiarities which can cause confusion.
2244 </para>
2245
2246 <para>
2247 In this document we will mainly describe the <application>lxc</application> package. Toward
2248 the end, we will describe how to use the libvirt LXC driver.
2249 </para>
2250
2251 <para>
2252 In this document, a container name will be shown as CN, C1, or C2.
2253 </para>
2254
2255 <sect2 id="lxc-installation" status="review">
2256 <title>Installation</title>
2257 <para>
2258 The <application>lxc</application> package can be installed using
2259 </para>
2260
2261<screen>
2262<command>
2263sudo apt-get install lxc
2264</command>
2265</screen>
2266
2267 <para>
2268 This will pull in the required and recommended dependencies, including
2269 cgroup-lite, lvm2, and debootstrap. To use libvirt-lxc, install libvirt-bin.
2270 LXC and libvirt-lxc can be installed and used at the same time.
2271 </para>
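<para>
For example, to add the libvirt LXC driver alongside the LXC tools:
</para>

<screen>
<command>
sudo apt-get install libvirt-bin
</command>
</screen>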
2272 </sect2>
2273
2274 <sect2 id="lxc-hostsetup" status="review">
2275 <title>Host Setup</title>
2276 <sect3 id="lxc-layout" status="review">
2277 <title>Basic layout of LXC files</title>
2278 <para>
2279 Following is a description of the files and directories which
2280 are installed and used by LXC.
2281 </para>
2282
2283 <itemizedlist>
2284 <listitem>
2285 <para>There are two upstart jobs:</para>
2286
2287 <itemizedlist> <!-- nested list -->
2288 <listitem>
2289 <para>
2290 <filename>/etc/init/lxc-net.conf:</filename> is an optional job which
2291 only runs if <filename> /etc/default/lxc</filename> specifies
2292 USE_LXC_BRIDGE (true by default). It sets up a NATed bridge for
2293 containers to use.
2294 </para>
2295 </listitem>
2296
2297 <listitem>
2298 <para>
2299 <filename>/etc/init/lxc.conf:</filename> runs if LXC_AUTO (true by
2300 default) is set to
2301 true in <filename>/etc/default/lxc</filename>. It looks for entries
2302 under <filename>/etc/lxc/auto/</filename> which are symbolic links to
2303 configuration files for the containers which should be started at boot.
2304 </para>
2305 </listitem>
2306 </itemizedlist>
2307
2308 </listitem>
2309 <listitem>
2310 <para>
2311 <filename>/etc/lxc/lxc.conf:</filename>
2312 There is a default container creation configuration file,
2313 <filename>/etc/lxc/lxc.conf</filename>, which directs containers to use
2314 the LXC bridge created by the lxc-net upstart job. If no configuration
2315 file is specified when creating a container, then this one will be used.
2316 </para>
2317 </listitem>
2318
2319 <listitem>
2320 <para>
2321 Examples of other container creation configuration files are
2322 found under <filename>/usr/share/doc/lxc/examples</filename>. These show how to
2323 create containers without a private network, or using macvlan,
2324 vlan, or other network layouts.
2325 </para>
2326 </listitem>
2327
2328 <listitem>
2329 <para>
2330 The various container administration tools are found under
2331 <filename>/usr/bin</filename>.
2332 </para>
2333 </listitem>
2334
2335 <listitem>
2336 <para>
2337 <filename>/usr/lib/lxc/lxc-init</filename> is a very minimal and lightweight init
2338 binary which is used by lxc-execute. Rather than <emphasis>booting</emphasis> a
2339 full container, it manually mounts a few filesystems, especially
2340 <filename>/proc</filename>, and executes its arguments. You are not likely to need to
2341 manually refer to this file.
2342 </para>
2343 </listitem>
2344
2345 <listitem>
2346 <para>
2347 <filename>/usr/lib/lxc/templates/</filename> contains the <emphasis>templates</emphasis> which can be
2348 used to create new containers of various distributions and
2349 flavors. Not all templates are currently supported.
2350 </para>
2351 </listitem>
2352
2353 <listitem>
2354 <para>
2355 <filename>/etc/apparmor.d/usr.bin.lxc-start</filename> contains the (active by default)
2356 apparmor MAC policy which works to protect the host from containers.
2357 Please see the <link linkend="lxc-security">Security</link> section for more information.
2358 </para>
2359 </listitem>
2360
2361 <listitem>
2362 <para>
2363 There are various man pages for the LXC administration tools as well
2364 as the <filename>lxc.conf</filename> container configuration file.
2365 </para>
2366 </listitem>
2367
2368 <listitem>
2369 <para>
2370 <filename>/var/lib/lxc</filename> is where containers and their configuration information
2371 are stored.
2372 </para>
2373 </listitem>
2374
2375 <listitem>
2376 <para>
2377 <filename>/var/cache/lxc</filename> is where caches of distribution data are stored to
2378 speed up multiple container creations.
2379 </para>
2380 </listitem>
2381 </itemizedlist>
2382 </sect3>
2383
2384 <sect3 id="lxcbr0" status="review">
2385 <title>lxcbr0</title>
2386 <para>
2387 When USE_LXC_BRIDGE is set to true in /etc/default/lxc (as it is by
2388 default), a bridge called lxcbr0 is created at startup. This bridge is
2389 given the private address 10.0.3.1, and containers using this bridge will
2390 have a 10.0.3.0/24 address. A dnsmasq instance is run listening on that
2391 bridge, so if another dnsmasq has bound all interfaces before the lxc-net
2392 upstart job runs, lxc-net will fail to start and lxcbr0 will not exist.
2393 </para>
2394
2395 <para>
2396 If you have another bridge - libvirt's default virbr0, or a br0
2397 bridge for your default NIC - you can use that bridge in place of
2398 lxcbr0 for your containers.
2399 </para>
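<para>
As a sketch, pointing a container at an existing br0 bridge instead of
lxcbr0 only requires naming that bridge in the container's configuration
file (these network keys are described in the Configuration File section
below):
</para>

<screen>
<command>
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
</command>
</screen>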
2400 </sect3>
2401
2402 <sect3 id="lxc-partitions" status="review">
2403 <title>Using a separate filesystem for the container store</title>
2404 <para>
2405 LXC stores container information and (with the default backing store) root
2406 filesystems under <filename>/var/lib/lxc</filename>. Container creation
2407 templates also tend to store cached distribution information under
2408 <filename>/var/cache/lxc</filename>.
2409 </para>
2410
2411 <para>
2412 If you wish to use another filesystem than
2413 <filename>/var</filename>, you can mount a filesystem which has more space into those
2414 locations. If you have a disk dedicated for this, you can simply
2415 mount it at <filename>/var/lib/lxc</filename>. If you'd like to use another location, like
2416 <filename>/srv</filename>, you can bind mount it or use a symbolic link. For instance, if
2417 <filename>/srv</filename> is a large mounted filesystem, create and symlink two directories:
2418 </para>
2419
2420<screen>
2421<command>
2422sudo mkdir /srv/lxclib /srv/lxccache
2423sudo rm -rf /var/lib/lxc /var/cache/lxc
2424sudo ln -s /srv/lxclib /var/lib/lxc
2425sudo ln -s /srv/lxccache /var/cache/lxc
2426</command>
2427</screen>
2428
2429 <para>
2430 or, using bind mounts:
2431 </para>
2432
2433<screen>
2434<command>
2435sudo mkdir /srv/lxclib /srv/lxccache
2436sudo sed -i '$a \
2437/srv/lxclib /var/lib/lxc none defaults,bind 0 0 \
2438/srv/lxccache /var/cache/lxc none defaults,bind 0 0' /etc/fstab
2439sudo mount -a
2440</command>
2441</screen>
2442
2443 </sect3>
2444
2445 <sect3 id="lxc-lvm" status="review">
2446 <title>Containers backed by lvm</title>
2447
2448 <para>
2449 It is possible to use LVM partitions as the backing stores for
2450 containers. Advantages of this include flexibility in storage
2451 management and fast container cloning. The tools
2452 default to using a VG (volume group) named <emphasis>lxc</emphasis>, but another
2453 VG can be used through command line options. When a LV is used
2454 as a container backing store, the container's configuration file
2455 is still <filename>/var/lib/lxc/CN/config</filename>, but the root fs
2456 entry in that file (<emphasis>lxc.rootfs</emphasis>) will point to the LV block
2457 device name, i.e. <filename>/dev/lxc/CN</filename>.
2458 </para>
2459
2460 <para>
2461 Containers with directory tree and LVM backing stores can
2462 co-exist.
2463 </para>
2464 </sect3>
2465
2466 <sect3 id="lxc-btrfs" status="review">
2467 <title>Btrfs</title>
2468 <para>
2469 If your host has a btrfs <filename>/var</filename>, the LXC administration
2470 tools will detect this and automatically exploit it by
2471 cloning containers using btrfs snapshots.
2472 </para>
2473 </sect3>
2474
2475 <sect3 id="lxc-apparmor" status="review">
2476 <title>Apparmor</title>
2477 <para>
2478 LXC ships with an apparmor profile intended to protect the host
2479 from accidental misuses of privilege inside the container. For
2480 instance, the container will not be able to write to
2481 <filename>/proc/sysrq-trigger</filename> or to most <filename>/sys</filename> files.
2482 </para>
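<para>
One way to confirm that the profile is loaded on the host (assuming the
standard apparmor utilities are installed) is:
</para>

<screen>
<command>
sudo apparmor_status | grep lxc-start
</command>
</screen>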
2483 </sect3>
2484
2485 <sect3 id="lxc-cgroups" status="review">
2486 <title>Control Groups</title>
2487 <para>
2488 Control groups (cgroups) are a kernel feature providing hierarchical
2489 task grouping and per-cgroup resource accounting and limits. They are
2490 used in containers to limit block and character device access and to
2491 freeze (suspend) containers. They can be further used to limit memory
2492 use and block i/o, guarantee minimum cpu shares, and to lock containers
2493 to specific cpus. By default, LXC depends on the cgroup-lite package to be installed, which
2494 provides the proper cgroup initialization at boot. The cgroup-lite
2495 package mounts each cgroup subsystem separately under
2496 <filename>/sys/fs/cgroup/SS</filename>, where SS is the subsystem name. For instance
2497 the freezer subsystem is mounted under <filename>/sys/fs/cgroup/freezer</filename>.
2498 LXC cgroups are kept under <filename>/sys/fs/cgroup/SS/INIT/lxc</filename>, where
2499 INIT is the init task's cgroup. This is <filename>/</filename> by default, so
2500 in the end the freezer cgroup for container CN would be
2501 <filename>/sys/fs/cgroup/freezer/lxc/CN</filename>.
2502 </para>
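<para>
For example, the control files of a running container CN's freezer cgroup
can be listed directly from the host:
</para>

<screen>
<command>
sudo ls /sys/fs/cgroup/freezer/lxc/CN
</command>
</screen>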
2503 </sect3>
2504
2505 <sect3 id="lxc-privs" status="review">
2506 <title>Privilege</title>
2507 <para>
2508 The container administration tools must be run with root user
2509 privilege. A utility called <filename>lxc-setup</filename> was written with the
2510 intention of providing the tools with the needed file capabilities to
2511 allow non-root users to run the tools with sufficient privilege.
2512 However, as root in a container cannot yet be reliably contained, this
2513 is not worthwhile. It is therefore recommended to not use
2514 <filename>lxc-setup</filename>, and to provide the LXC administrators the needed
2515 sudo privilege.
2516 </para>
2517
2518 <para>
2519 The user namespace, which is expected to be available in the next Long Term
2520 Support (LTS) release, will allow containment of the container root user, as
2521 well as reduce the amount of privilege required for creating and administering
2522 containers.
2523 </para>
2524 </sect3>
2525
2526 <sect3 id="lxc-upstart" status="review">
2527 <title>LXC Upstart Jobs</title>
2528 <para>
2529 As listed above, the <application>lxc</application> package includes two upstart jobs. The
2530 first, <filename>lxc-net</filename>, is always started when the other,
2531 <filename>lxc</filename>, is about to begin, and stops when it stops. If the
2532 USE_LXC_BRIDGE variable is set to false in <filename>/etc/default/lxc</filename>,
2533 then it will immediately exit. If it is true, and an error occurs
2534 bringing up the LXC bridge, then the <filename>lxc</filename> job will not start.
2535 <filename>lxc-net</filename> will bring down the LXC bridge when stopped, unless
2536 a container is running which is using that bridge.
2537 </para>
2538
2539 <para>
2540 The <filename>lxc</filename> job starts on runlevel 2-5. If the LXC_AUTO variable
2541 is set to true, then it will look under <filename>/etc/lxc/auto</filename> for containers
2542 which should be started automatically. When the <filename>lxc</filename> job is
2543 stopped, either manually or by entering runlevel 0, 1, or 6, it will
2544 stop those containers.
2545 </para>
2546
2547 <para>
2548 To register a container to start automatically, create a symbolic
2549 link <filename>/etc/lxc/auto/name.conf</filename> pointing to the container's
2550 config file. For instance, the configuration file for a container
2551 <filename>CN</filename> is <filename>/var/lib/lxc/CN/config</filename>. To make that container
2552 auto-start, use the command:
2553 </para>
2554
2555<screen>
2556<command>
2557sudo ln -s /var/lib/lxc/CN/config /etc/lxc/auto/CN.conf
2558</command>
2559</screen>
2560 </sect3>
2561
2562 </sect2>
2563
2564 <sect2 id="lxc-admin" status="review">
2565 <title>Container Administration</title>
2566 <sect3 id="lxc-creation" status="review">
2567 <title>Creating Containers</title>
2568
2569 <para>
2570 The easiest way to create containers is using <command>lxc-create</command>. This
2571 script uses distribution-specific templates under
2572 <filename>/usr/lib/lxc/templates/</filename> to set up container-friendly chroots under
2573 <filename>/var/lib/lxc/CN/rootfs</filename>, and initialize the configuration in
2574 <filename>/var/lib/lxc/CN/fstab</filename> and
2575 <filename>/var/lib/lxc/CN/config</filename>, where CN is the container name.
2576 </para>
2577
2578 <para>
2579 The simplest container creation command would look like:
2580 </para>
2581
2582<screen>
2583<command>
2584sudo lxc-create -t ubuntu -n CN
2585</command>
2586</screen>
2587
2588 <para>
2589 This tells lxc-create to use the ubuntu template (-t ubuntu) and to call
2590 the container CN (-n CN). Since no configuration file was specified
2591 (which would have been done with <emphasis>-f file</emphasis>), it will use the default
2592 configuration file under <filename>/etc/lxc/lxc.conf</filename>. This gives the container
2593 a single veth network interface attached to the lxcbr0 bridge.
2594 </para>
2595
2596 <para>
2597 The container creation templates can also accept arguments. These can
2598 be listed after --. For instance
2599 </para>
2600
2601<screen>
2602<command>
2603sudo lxc-create -t ubuntu -n oneiric1 -- -r oneiric
2604</command>
2605</screen>
2606
2607 <para>
2608 passes the arguments <emphasis>-r oneiric</emphasis> to the ubuntu template.
2609 </para>
2610
2611 <sect4 id="lxc-help" status="review">
2612 <title>Help</title>
2613 <para>
2614 Help on the lxc-create command can be seen by using <command>lxc-create -h</command>.
2615 However, the templates also take their own options. If you do
2616 </para>
2617
2618<screen>
2619<command>
2620sudo lxc-create -t ubuntu -h
2621</command>
2622</screen>
2623
2624 <para>
2625 then the general <command>lxc-create</command> help will be followed by help output
2626 specific to the ubuntu template. If no template is specified, then only
2627 help for <command>lxc-create</command> itself will be shown.
2628 </para>
2629 </sect4>
2630
2631 <sect4 id="lxc-ubuntu" status="review">
2632 <title>Ubuntu template</title>
2633
2634 <para>
2635 The ubuntu template can be used to create Ubuntu system containers with any
2636 release at least as new as 10.04 LTS. It uses debootstrap to create
2637 a cached container filesystem which gets copied into place each time a
2638 container is created. The cached image is saved and only re-generated
2639 when you create a container
2640 using the <emphasis>-F</emphasis> (flush) option to the template, i.e.:
2641 </para>
2642
2643<screen>
2644<command>
2645sudo lxc-create -t ubuntu -n CN -- -F
2646</command>
2647</screen>
2648
2649 <para>
2650 The Ubuntu release installed by the template will be the same as that on
2651 the host, unless otherwise specified with the <emphasis>-r</emphasis> option, i.e.
2652 </para>
2653
2654<screen>
2655<command>
2656sudo lxc-create -t ubuntu -n CN -- -r lucid
2657</command>
2658</screen>
2659
2660 <para>
2661 If you want to create a 32-bit container on a 64-bit host, pass <emphasis>-a i386</emphasis>
2662 to the template. If you have the qemu-user-static package installed, then you can
2663 create a container using any architecture supported by qemu-user-static.
2664 </para>
2665
2666 <para>
2667 The container will have a user named <emphasis>ubuntu</emphasis> whose password is <emphasis>ubuntu</emphasis>
2668 and who is a member of the <emphasis>sudo</emphasis> group. If you wish to inject a public ssh
2669 key for the <emphasis>ubuntu</emphasis> user, you can do so with <emphasis>-S sshkey.pub</emphasis>.
2670 </para>
2671
2672 <para>
2673 You can also <emphasis>bind</emphasis> user jdoe from the host into the container using
2674 the <emphasis>-b jdoe</emphasis> option. This will copy jdoe's password and shadow
2675 entries into the container, make sure his default group and shell are
2676 available, add him to the sudo group, and bind-mount his home directory
2677 into the container when the container is started.
2678 </para>
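<para>
Putting several of these options together (the key path here is only an
example), the following creates an oneiric container, injects an ssh key for
the <emphasis>ubuntu</emphasis> user, and binds the host user jdoe into it:
</para>

<screen>
<command>
sudo lxc-create -t ubuntu -n CN -- -r oneiric -S /home/jdoe/.ssh/id_rsa.pub -b jdoe
</command>
</screen>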
2679
2680 <para>
2681 When a container is created, the <filename>release-updates</filename> archive is added
2682 to the container's <filename>sources.list</filename>, and its package archive will be
2683 updated. If the container release is older than 12.04 LTS, then the
2684 lxcguest package will be automatically installed. Alternatively, if the <emphasis>--trim</emphasis>
2685 option is specified, then the lxcguest package will not be installed,
2686 and many services will be removed from the container. This will result
2687 in a faster-booting, but less upgradeable, container.
2688 </para>
2689 </sect4>
2690
2691 <sect4 id="lxc-ubuntu-cloud" status="review">
2692 <title>Ubuntu-cloud template</title>
2693
2694 <para>
2695 The ubuntu-cloud template creates Ubuntu containers by downloading and
2696 extracting the published Ubuntu cloud images. It accepts some of the same
2697 options as the ubuntu template, namely <emphasis>-r release</emphasis>, <emphasis>-S sshkey.pub</emphasis>,
2698 <emphasis>-a arch</emphasis>, and <emphasis>-F</emphasis> to flush the cached image. It also accepts a few
2699 extra options. The <emphasis>-C</emphasis> option will create a <emphasis>cloud</emphasis> container,
2700 configured for use with a metadata service. The <emphasis>-u</emphasis> option accepts a
2701 cloud-init user-data file to configure the container on start. If <emphasis>-L</emphasis>
2702 is passed, then no locales will be installed. The <emphasis>-T</emphasis> option can be
2703 used to choose a tarball location to extract in place of the published
2704 cloud image tarball. Finally the <emphasis>-i</emphasis> option sets a host id for
2705 cloud-init, which by default is set to a random string.
2706 </para>
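<para>
As a sketch (the release, key, and user-data file names below are
placeholders), an ubuntu-cloud container could be created with:
</para>

<screen>
<command>
sudo lxc-create -t ubuntu-cloud -n CN -- -r precise -S /home/jdoe/.ssh/id_rsa.pub -u my-user-data
</command>
</screen>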
2707 </sect4>
2708
2709 <sect4 id="lxc-other-templates" status="review">
2710 <title> Other templates</title>
2711
2712 <para>
2713 The ubuntu and ubuntu-cloud templates are well supported. Other
2714 templates are available however. The debian template creates a
2715 Debian based container, using debootstrap much as the ubuntu
2716 template does. By default it installs a <emphasis>Debian Squeeze</emphasis>
2717 image. An alternate release can be chosen by setting the SUITE
2718 environment variable, i.e.:
2719 </para>
2720
2721<screen>
2722<command>
2723sudo SUITE=sid lxc-create -t debian -n d1
2724</command>
2725</screen>
2726
2727 <para>
2728 Since Debian cannot be safely booted inside a container, Debian
2729 containers will be trimmed as with the <emphasis>--trim</emphasis> option to
2730 the ubuntu template.
2731 </para>
2732
2733 <para>
2734 To purge the container image cache, call the template directly
2735 and pass it the <emphasis>--clean</emphasis> option.
2736 </para>
2737
2738<screen>
2739<command>
2740sudo SUITE=sid /usr/lib/lxc/templates/lxc-debian --clean
2741</command>
2742</screen>
2743
2744 <para>
2745 A Fedora template exists, which creates containers based on
2746 Fedora releases &lt;= 14. Fedora releases 15 and higher are
2747 based on systemd, which the template is not yet able to convert
2748 into a container-bootable setup. Before the fedora template is
2749 able to run, you'll need to make sure that <command>yum</command> and <command>curl</command>
2750 are installed. A Fedora 12 container can be created with
2751 </para>
2752
2753<screen>
2754<command>
2755sudo lxc-create -t fedora -n fedora12 -- -R 12
2756</command>
2757</screen>
2758
2759 <para>
2760 An openSUSE template exists, but it requires the <command>zypper</command> program,
2761 which is not yet packaged. The openSUSE template is therefore
2762 not supported.
2763 </para>
2764
2765 <para>
2766 Two more templates exist mainly for experimental purposes. The
2767 busybox template creates a very small system container based
2768 entirely on busybox. The sshd template creates an application
2769 container running sshd in a private network namespace. The
2770 host's library and binary directories are bind-mounted into the
2771 container, though not its <filename>/home</filename> or
2772 <filename>/root</filename>. To create, start, and ssh into an ssh
2773 container, you might:
2774 </para>
2775
2776<screen>
2777<command>
2778sudo lxc-create -t sshd -n ssh1
2779ssh-keygen -f id
2780sudo mkdir /var/lib/lxc/ssh1/rootfs/root/.ssh
2781sudo cp id.pub /var/lib/lxc/ssh1/rootfs/root/.ssh/authorized_keys
2782sudo lxc-start -n ssh1 -d
2783 ssh -i id root@ssh1
2784</command>
2785</screen>
2786
2787 </sect4>
2788
2789 <sect4 id="lxc-backing-stores" status="review">
2790 <title> Backing Stores</title>
2791
2792 <para>
2793By default, <command>lxc-create</command> places the container's root
2794filesystem as a directory tree at <filename>/var/lib/lxc/CN/rootfs.</filename>
2795Another option is to use LVM logical volumes. If a volume group named <emphasis>lxc</emphasis>
2796exists, you can create an lvm-backed container called CN using:
2797 </para>
2798
2799<screen>
2800<command>
2801sudo lxc-create -t ubuntu -n CN -B lvm
2802</command>
2803</screen>
2804
2805 <para>
2806 If you want to use a volume group named schroots, with a 5G xfs
2807 filesystem, then you would use
2808 </para>
2809
2810<screen>
2811<command>
2812sudo lxc-create -t ubuntu -n CN -B lvm --vgname schroots --fssize 5G --fstype xfs
2813</command>
2814</screen>
2815 </sect4>
2816
2817 </sect3>
2818
2819 <sect3 id="lxc-cloning" status="review">
2820 <title>Cloning</title>
2821
2822 <para>
2823 For rapid provisioning, you may wish to customize a canonical
2824 container according to your needs and then make multiple copies of it.
2825 This can be done with the <command>lxc-clone</command> program. Given an existing
2826 container called C1, a new container called C2 can be created
2827 using
2828 </para>
2829
2830
2831<screen>
2832<command>
2833sudo lxc-clone -o C1 -n C2
2834</command>
2835</screen>
2836
2837 <para>
2838 If <filename>/var/lib/lxc</filename> is a btrfs filesystem, then
2839 <command>lxc-clone</command> will create C2's filesystem as a snapshot of
2840 C1's. If the container's root filesystem is lvm backed, then you can
2841 specify the <emphasis>-s</emphasis> option to create the new rootfs as an LVM snapshot of the
2842 original as follows:
2843 </para>
2844
2845<screen>
2846<command>
2847sudo lxc-clone -s -o C1 -n C2
2848</command>
2849</screen>
2850
2851 <para>
2852 Both lvm and btrfs snapshots will provide fast cloning with very
2853 small initial disk usage.
2854 </para>
2855 </sect3>
2856
2857 <sect3 id="lxc-start-stop" status="review">
2858 <title>Starting and stopping</title>
2859
2860 <para>
2861 To start a container, use <command>lxc-start -n CN</command>. By default
2862 <command>lxc-start</command> will execute <filename>/sbin/init</filename>
2863 in the container. You can provide a different program to execute, plus
2864 arguments, as further arguments to <command>lxc-start</command>:
2865 </para>
2866
2867<screen>
2868<command>
2869sudo lxc-start -n container /sbin/init loglevel=debug
2870</command>
2871</screen>
2872
2873 <para>
2874 If you do not specify the <emphasis>-d</emphasis> (daemon) option, then you will see a
2875 console (on the container's <filename>/dev/console</filename>, see
2876 <xref linkend="lxc-consoles"/> for more information) on the terminal. If
2877 you specify the <emphasis>-d</emphasis> option, you will not see that console, and lxc-start
2878 will immediately exit successfully - even if a later part of container startup
2879 has failed. You can use <command>lxc-wait</command> or
2880 <command>lxc-monitor</command> (see <xref
2881 linkend="lxc-monitoring"/>) to check on the success or failure of the
2882 container startup.
2883 </para>
2884
2885 <para>
2886 To obtain LXC debugging information, use <emphasis>-o filename -l debuglevel</emphasis>,
2887 for instance:
2888 </para>
2889
2890<screen>
2891<command>
2892sudo lxc-start -o lxc.debug -l DEBUG -n container
2893</command>
2894</screen>
2895
2896 <para>
2897 Finally, you can specify configuration parameters inline using <emphasis>-s</emphasis>.
2898 However, it is generally recommended to place them in the container's
2899 configuration file instead. Likewise, an entirely alternate config
2900 file can be specified with the <emphasis>-f</emphasis> option, but this is not
2901 generally recommended.
2902 </para>
2903
2904 <para>
2905 While <command>lxc-start</command> runs the container's
2906 <filename>/sbin/init</filename>, <command>lxc-execute</command> uses a
2907 minimal init program called <command>lxc-init</command>, which attempts to
2908 mount <filename>/proc</filename>, <filename>/dev/mqueue</filename>, and
2909 <filename>/dev/shm</filename>, executes the programs specified on the
2910 command line, and waits for those to finish executing.
2911 <command>lxc-start</command> is intended to be used for <emphasis>system containers</emphasis>,
2912 while <command>lxc-execute</command> is intended for <emphasis>application
2913 containers</emphasis> (see <ulink url="https://www.ibm.com/developerworks/linux/library/l-lxc-containers/">
2914 this article</ulink> for more).
2915 </para>
2916
2917 <para>
2918 You can stop a container in several ways. You can use <command>shutdown</command>,
2919 <command>poweroff</command> and <command>reboot</command> while logged into
2920 the container. To cleanly shut down a container externally (i.e. from the host), you can issue
2921 the <command>sudo lxc-shutdown -n CN</command> command. This takes an optional
2922 timeout value. If not specified, the command issues a SIGPWR signal to the
2923 container and immediately returns. If the option is used, as in
2924 <command>sudo lxc-shutdown -n CN -t 10</command>, then the command will wait the
2925 specified number of seconds for the container to cleanly shut down. Then,
2926 if the container is still running, it will kill it (and any running
2927 applications). You can also immediately kill the container (without any
2928 chance for applications to cleanly shut down) using
2929 <command>sudo lxc-stop -n CN</command>. Finally,
2930 <command>lxc-kill</command> can be used more generally to send any signal
2931 number to the container's init.
2932 </para>
2933
2934 <para>
2935 While the container is shutting down, you can expect to see some (harmless)
2936 error messages, as follows:
2937 </para>
2938
2939<screen>
2940$ sudo poweroff
2941[sudo] password for ubuntu:
2942
2943$
2944
2945Broadcast message from ubuntu@cn1
2946 (/dev/lxc/console) at 18:17 ...
2947
2948The system is going down for power off NOW!
2949 * Asking all remaining processes to terminate...
2950 ...done.
2951 * All processes ended within 1 seconds....
2952 ...done.
2953 * Deconfiguring network interfaces...
2954 ...done.
2955 * Deactivating swap...
2956 ...fail!
2957umount: /run/lock: not mounted
2958umount: /dev/shm: not mounted
2959mount: / is busy
2960 * Will now halt
2961</screen>
2962
2963 <para>
2964 A container can be frozen with <command>sudo lxc-freeze -n CN</command>. This
2965 will block all its processes until the container is later unfrozen using
2966 <command>sudo lxc-unfreeze -n CN</command>.
2967 </para>
2968
2969 </sect3>
2970
2971 <sect3 id="lxc-monitoring" status="review">
2972 <title>Monitoring container status </title>
2973
2974 <para>
2975 Two commands are available to monitor container state changes.
2976 <command>lxc-monitor</command> monitors one or more containers for any
2977 state changes. It takes a container name as usual with the <emphasis>-n</emphasis> option,
2978 but in this case the container name can be a posix regular expression to
2979 allow monitoring desirable sets of containers.
2980 <command>lxc-monitor</command> continues running as it prints container
2981 changes. <command>lxc-wait</command> waits for a specific state change and
2982 then exits. For instance,
2983 </para>
2984
2985
2986<screen>
2987<command>
2988sudo lxc-monitor -n cont[0-5]*
2989</command>
2990</screen>
2991
2992 <para>
2993 would print all state changes of any containers matching the
2994 listed regular expression, whereas
2995 </para>
2996
2997<screen>
2998<command>
2999sudo lxc-wait -n cont1 -s 'STOPPED|FROZEN'
3000</command>
3001</screen>
3002
3003 <para>
3004 will wait until container cont1 enters state STOPPED or state FROZEN
3005 and then exit.
3006 </para>
3007 </sect3>
3008
3009 <sect3 id="lxc-consoles" status="review">
3010 <title>Consoles</title>
3011
3012 <para>
3013 Containers have a configurable number of consoles. One always exists on
3014 the container's <filename>/dev/console.</filename> This is shown on the
3015 terminal from which you ran <command>lxc-start</command>, unless the <emphasis>-d</emphasis>
3016 option is specified. The output on <filename>/dev/console</filename> can
3017 be redirected to a file using the <emphasis>-c console-file</emphasis> option to
3018 <command>lxc-start</command>. The number of extra consoles is specified by
3019 the <command>lxc.tty</command> variable, and is usually set to 4. Those
3020 consoles are shown on <filename>/dev/ttyN</filename> (for 1 &lt;= N &lt;=
3021 4). To log into console 3 from the host, use
3022 </para>
3023
3024<screen>
3025<command>
3026sudo lxc-console -n container -t 3
3027</command>
3028</screen>
3029
3030 <para>
3031 or if the <emphasis>-t N</emphasis> option is not specified, an unused console will be
3032 automatically chosen. To exit the console, use the escape sequence
3033 Ctrl-a q. Note that the escape sequence does not work in the console
3034 resulting from <command>lxc-start</command> without the <emphasis>-d</emphasis>
3035 option.
3036 </para>
3037
3038 <para>
3039 Each container console is actually a Unix98 pty in the host's (not the
3040 guest's) pty mount, bind-mounted over the guest's
3041 <filename>/dev/ttyN</filename> and <filename>/dev/console</filename>.
3042 Therefore, if the guest unmounts those or otherwise tries to access the
3043 actual character device <command>4:N</command>, it will not be serving
3044 getty to the LXC consoles. (With the default settings, the container will
3045 not be able to access that character device and getty will therefore fail.)
3046 This can easily happen when a boot script blindly mounts a new
3047 <filename>/dev</filename>.
3048 </para>
3049 </sect3>
3050
3051 <sect3 id="lxc-introspection" status="review">
3052 <title>Container Inspection</title>
3053
3054 <para>
3055 Several commands are available to gather information on existing
3056 containers. <command>lxc-ls</command> will report all existing containers
3057 in its first line of output, and all running containers in the second line.
3058 <command>lxc-list</command> provides the same information in a more verbose
3059 format, listing running containers first and stopped containers next.
3060 <command>lxc-ps</command> will provide lists of processes in containers.
3061 To provide <command>ps</command> arguments to <command>lxc-ps</command>,
3062 prepend them with <command>--</command>. For instance, for listing of all
3063 processes in container plain,
3064 </para>
3065
3066<screen>
3067<command>
3068sudo lxc-ps -n plain -- -ef
3069</command>
3070</screen>
3071
3072 <para>
3073 <command>lxc-info</command> provides the state of a container and the pid of its init
3074 process. <command>lxc-cgroup</command> can be used to query or set the values of a
3075 container's control group limits and information. This can be more convenient
3076 than interacting with the <command>cgroup</command> filesystem. For instance, to query
3077 the list of devices which a running container is allowed to access,
3078 you could use
3079 </para>
3080
3081<screen>
3082<command>
3083sudo lxc-cgroup -n CN devices.list
3084</command>
3085</screen>
3086
3087 <para>
3088 or to add mknod, read, and write access to block devices with major number 8 (such as <filename>/dev/sda</filename>),
3089 </para>
3090
3091<screen>
3092<command>
3093sudo lxc-cgroup -n CN devices.allow "b 8:* rwm"
3094</command>
3095</screen>
3096
3097 <para>
3098 and, to limit it to 300M of RAM,
3099 </para>
3100
3101<screen>
3102<command>
3103sudo lxc-cgroup -n CN memory.limit_in_bytes 300000000
3104</command>
3105</screen>
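<para>
Similarly, a quick summary of a container's state and the pid of its init
process can be obtained with:
</para>

<screen>
<command>
sudo lxc-info -n CN
</command>
</screen>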
3106
3107 <para>
3108 <command>lxc-netstat</command> executes <command>netstat</command> in the
3109 running container, giving you a glimpse of its network state.
3110 </para>
3111
3112 <para>
3113 <command>lxc-backup</command> will create backups of the root filesystems
3114 of all existing containers (except lvm-based ones), using
3115 <command>rsync</command> to back the contents up under
3116 <filename>/var/lib/lxc/CN/rootfs.backup.1</filename>. These backups can be
3117 restored using <command>lxc-restore.</command> However,
3118 <command>lxc-backup</command> and <command>lxc-restore</command> are
3119 fragile with respect to customizations and therefore their use is not
3120 recommended.
3121 </para>
3122
3123 </sect3>
3124
3125 <sect3 id="lxc-destroying" status="review">
3126 <title>Destroying containers</title>
3127
3128 <para>
3129 Use <command>lxc-destroy</command> to destroy an existing container.
3130 </para>
3131
3132<screen>
3133<command>
3134sudo lxc-destroy -n CN
3135</command>
3136</screen>
3137
3138 <para>
3139 If the container is running, <command>lxc-destroy</command> will exit with a message
3140 informing you that you can force stopping and destroying the container
3141 with
3142 </para>
3143
3144<screen>
3145<command>
3146sudo lxc-destroy -n CN -f
3147</command>
3148</screen>
3149
3150 </sect3>
3151
3152 <sect3 id="lxc-namespaces" status="review">
3153 <title>Advanced namespace usage</title>
3154
3155 <para>
3156 One of the Linux kernel features used by LXC to create containers is
3157 private namespaces. Namespaces allow a set of tasks to have private
3158 mappings of names to resources for things like pathnames and process
3159 IDs. (See <link linkend="lxc-resources">Resources</link> for a link
3160 to more information). Unlike control groups and other mount features which
3161 are also used to create containers, namespaces cannot be manipulated using
3162 a filesystem interface. Therefore, LXC ships with the <command>lxc-unshare</command>
3163 program, which is mainly for testing. It provides the ability to create
3164 new tasks in private namespaces. For instance,
3165 </para>
3166
3167<screen>
3168<command>
3169sudo lxc-unshare -s 'MOUNT|PID' /bin/bash
3170</command>
3171</screen>
3172
3173 <para>
3174 creates a bash shell with private pid and mount namespaces.
3175 In this shell, you can do
3176 </para>
3177
3178<screen>
3179root@ubuntu:~# mount -t proc proc /proc
3180root@ubuntu:~# ps -ef
3181UID PID PPID C STIME TTY TIME CMD
3182root 1 0 6 10:20 pts/9 00:00:00 /bin/bash
3183root 110 1 0 10:20 pts/9 00:00:00 ps -ef
3184</screen>
3185
3186 <para>
3187 so that <command>ps</command> shows only the tasks in your new namespace.
3188 </para>
3189 </sect3>
3190
3191 <sect3 id="lxc-ephemeral" status="review">
3192 <title>Ephemeral containers</title>
3193
3194 <para>
3195 Ephemeral containers are one-time containers. Given an existing
3196 container CN, you can run a command in an ephemeral container
3197 created based on CN, with the host's jdoe user bound into the
3198 container, using:
3199 </para>
3200
3201<screen>
3202<command>
3203sudo lxc-start-ephemeral -b jdoe -o CN -- /home/jdoe/run_my_job
3204</command>
3205</screen>
3206
3207 <para>
3208 When the job is finished, the container will be discarded.
3209 </para>
3210
3211 </sect3>
3212 <sect3 id="lxc-commands" status="review">
3213 <title>Container Commands</title>
3214
3215<para>The following is a table of all container commands:</para>
3216
3217<table>
3218<title> Container commands</title>
3219<tgroup cols="2" rowsep="1">
3220<thead>
3221 <row>
3222 <entry valign="left"><para>Command</para></entry>
3223 <entry valign="left"><para>Synopsis</para></entry>
3224 </row>
3225</thead>
3226<tbody>
3227 <row>
3228 <entry><para>lxc-attach </para></entry>
3229 <entry><para>(NOT SUPPORTED) Run a command in a running container</para></entry>
3230 </row>
3231 <row>
3232 <entry><para>lxc-backup </para></entry>
3233 <entry><para>Back up the root filesystems for all lvm-backed containers</para></entry>
3234 </row>
3235 <row>
3236 <entry><para>lxc-cgroup </para></entry>
3237 <entry><para>View and set container control group settings</para></entry>
3238 </row>
3239 <row>
3240 <entry><para>lxc-checkconfig </para></entry>
3241 <entry><para>Verify host support for containers</para></entry>
3242 </row>
3243 <row>
3244 <entry><para>lxc-checkpoint </para></entry>
3245 <entry><para>(NOT SUPPORTED) Checkpoint a running container</para></entry>
3246 </row>
3247 <row>
3248 <entry><para>lxc-clone </para></entry>
3249 <entry><para>Clone a new container from an existing one</para></entry>
3250 </row>
3251 <row>
3252 <entry><para>lxc-console </para></entry>
3253 <entry><para>Open a console in a running container</para></entry>
3254 </row>
3255 <row>
3256 <entry><para>lxc-create </para></entry>
3257 <entry><para>Create a new container</para></entry>
3258 </row>
3259 <row>
3260 <entry><para>lxc-destroy </para></entry>
3261 <entry><para>Destroy an existing container</para></entry>
3262 </row>
3263 <row>
3264 <entry><para>lxc-execute </para></entry>
3265 <entry><para>Run a command in a (not running) application container</para></entry>
3266 </row>
3267 <row>
3268 <entry><para>lxc-freeze </para></entry>
3269 <entry><para>Freeze a running container</para></entry>
3270 </row>
3271 <row>
3272 <entry><para>lxc-info </para></entry>
3273 <entry><para>Print information on the state of a container</para></entry>
3274 </row>
3275 <row>
3276 <entry><para>lxc-kill </para></entry>
3277 <entry><para>Send a signal to a container's init</para></entry>
3278 </row>
3279 <row>
3280 <entry><para>lxc-list </para></entry>
3281 <entry><para>List all containers</para></entry>
3282 </row>
3283 <row>
3284 <entry><para>lxc-ls </para></entry>
3285 <entry><para>List all containers with shorter output than lxc-list</para></entry>
3286 </row>
3287 <row>
3288 <entry><para>lxc-monitor </para></entry>
3289 <entry><para>Monitor state changes of one or more containers</para></entry>
3290 </row>
3291 <row>
3292 <entry><para>lxc-netstat </para></entry>
3293 <entry><para>Execute netstat in a running container</para></entry>
3294 </row>
3295 <row>
3296 <entry><para>lxc-ps </para></entry>
3297 <entry><para>View process info in a running container</para></entry>
3298 </row>
3299 <row>
3300 <entry><para>lxc-restart </para></entry>
3301 <entry><para>(NOT SUPPORTED) Restart a checkpointed container</para></entry>
3302 </row>
3303 <row>
3304 <entry><para>lxc-restore </para></entry>
3305 <entry><para>Restore containers from backups made by lxc-backup</para></entry>
3306 </row>
3307 <row>
3308 <entry><para>lxc-setcap </para></entry>
3309 <entry><para>(NOT RECOMMENDED) Set file capabilities on LXC tools</para></entry>
3310 </row>
3311 <row>
3312 <entry><para>lxc-setuid </para></entry>
3313 <entry><para>(NOT RECOMMENDED) Set or remove setuid bits on LXC tools</para></entry>
3314 </row>
3315 <row>
3316 <entry><para>lxc-shutdown </para></entry>
3317 <entry><para>Safely shut down a container</para></entry>
3318 </row>
3319 <row>
3320 <entry><para>lxc-start </para></entry>
3321 <entry><para>Start a stopped container</para></entry>
3322 </row>
3323 <row>
3324 <entry><para>lxc-start-ephemeral </para></entry>
3325 <entry><para>Start an ephemeral (one-time) container</para></entry>
3326 </row>
3327 <row>
3328 <entry><para>lxc-stop </para></entry>
3329 <entry><para>Immediately stop a running container</para></entry>
3330 </row>
3331 <row>
3332 <entry><para>lxc-unfreeze </para></entry>
3333 <entry><para>Unfreeze a frozen container</para></entry>
3334 </row>
3335 <row>
3336 <entry><para>lxc-unshare </para></entry>
3337 <entry><para>Testing tool to manually unshare namespaces</para></entry>
3338 </row>
3339 <row>
3340 <entry><para>lxc-version </para></entry>
3341 <entry><para>Print the version of the LXC tools</para></entry>
3342 </row>
3343 <row>
3344 <entry><para>lxc-wait </para></entry>
3345 <entry><para>Wait for a container to reach a particular state</para></entry>
3346 </row>
3347 </tbody>
3348 </tgroup>
3349</table>
3350
3351 </sect3>
3352 </sect2>
3353
3354 <sect2 id="lxc-conf" status="review">
3355 <title>Configuration File</title>
3356
3357 <para>
3358 LXC containers are very flexible. The Ubuntu <application>lxc</application> package sets defaults
3359 to make creation of Ubuntu system containers as simple as possible.
3360 If you need more flexibility, this chapter will show how to fine-tune
3361 your containers as you need.
3362 </para>
3363
3364 <para>
3365 Detailed information is available in the <command>lxc.conf(5)</command> man page.
3366 Note that the default configurations created by the ubuntu templates
3367 are reasonable for a system container and usually do not need
3368 customization.
3369 </para>
3370
3371 <sect3 id="lxc-conf-options" status="review">
3372 <title>Choosing configuration files and options</title>
3373
3374 <para>
3375 The container setup is controlled by the LXC configuration options.
3376 Options can be specified at several points:
3377 </para>
3378
3379 <itemizedlist>
3380 <listitem><para>
3381 During container creation, a configuration file can be specified.
3382 However, creation templates often insert their own configuration
3383 options, so we usually specify only network configuration options at
3384 this point. For other configuration, it is usually better to edit the
3385 configuration file after container creation.
3386 </para></listitem>
3387
3388 <listitem><para>
3389 The file <filename>/var/lib/lxc/CN/config</filename> is used at
3390 container startup by default.
3391 </para></listitem>
3392
3393 <listitem><para>
3394 <command>lxc-start</command> accepts an alternate configuration file with
3395 the <emphasis>-f filename</emphasis> option.
3396 </para></listitem>
3397
3398 <listitem><para>
3399 Specific configuration variables can be overridden at <command>lxc-start</command>
3400 using <emphasis>-s key=value</emphasis> (see the example following this list). It is
3401 generally better to edit the container configuration file.
3402 </para></listitem>
3403
3404 </itemizedlist>
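<para>
For example, a single variable can be overridden for one run of the
container (the cgroup value here is only illustrative):
</para>

<screen>
<command>
sudo lxc-start -n CN -s lxc.cgroup.memory.limit_in_bytes=320000000 -d
</command>
</screen>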
3405
3406 </sect3>
3407
3408 <sect3 id="lxc-conf-net" status="review">
3409 <title>Network Configuration</title>
3410
3411 <para>
3412 Container networking in LXC is very flexible. It is triggered by
3413 the <command>lxc.network.type</command> configuration file entries.
3414 If no such entries exist, then the container will share the host's
3415 networking stack. Services and connections started in the container
3416 will use the host's IP address.
3417 If at least one <command>lxc.network.type</command> entry is present, then the container
3418 will have a private (layer 2) network stack. It will have its own
3419 network interfaces and firewall rules. There are several options
3420 for <command>lxc.network.type</command>:
3421 </para>
3422
3423 <itemizedlist>
3424 <listitem><para>
3425 <command>lxc.network.type=empty</command>:
3426 The container will have no network interfaces other than loopback.
3427 </para></listitem>
3428
3429 <listitem><para>
3430 <command>lxc.network.type=veth</command>:
3431 This is the default when using the ubuntu or ubuntu-cloud templates,
3432 and creates a veth network tunnel. One end of this tunnel
3433 becomes the network interface inside the container. The other end
3434 is attached to a bridge on the host. Any number of such tunnels
3435 can be created by adding more <command>lxc.network.type=veth</command>
3436 entries in the container configuration file. The bridge to which the
3437 host end of the tunnel will be attached is specified with
3438 <command>lxc.network.link = lxcbr0</command>.
3439 </para></listitem>
3440
3441 <listitem><para>
3442 <command>lxc.network.type=phys</command>:
3443 A physical network interface (e.g. eth2) is passed into the container.
3444 </para></listitem>
3445 </itemizedlist>
3446
3447 <para>
3448 Two other options are to
3449 use vlan or macvlan, however their use is more complicated and is
3450 not described here. A few other networking options exist (an example
3450 combining several of them follows this list):
3451 </para>
3452
3453 <itemizedlist>
3454 <listitem><para>
3455 <command>lxc.network.flags</command> can only be set to <emphasis>up</emphasis> and ensures that the network interface is up.
3456 </para></listitem>
3457
3458 <listitem><para>
3459 <command>lxc.network.hwaddr</command> specifies a MAC address to assign to the
3460 NIC inside the container.
3461 </para></listitem>
3462
3463 <listitem><para>
3464 <command>lxc.network.ipv4</command> and <command>lxc.network.ipv6</command>
3465 set the respective IP addresses, if those should be static.
3466 </para></listitem>
3467
3468 <listitem><para>
3469 <command>lxc.network.name</command> specifies a name to assign inside the
3470 container. If this is not specified, a good default (e.g. eth0 for the
3471 first NIC) is chosen.
3472 </para></listitem>
3473
3474 <listitem><para>
3475 <command>lxc.network.script.up</command> specifies a script to be called
3476 after the host side of the networking has been set up. See the
3477 <command>lxc.conf(5)</command> manual page for details.
3478 </para></listitem>
3479 </itemizedlist>
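<para>
Putting these together, a typical veth interface attached to lxcbr0 with a
fixed MAC address and a static IP (the address values here are illustrative)
might be configured as:
</para>

<screen>
<command>
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:01:02:03
lxc.network.ipv4 = 10.0.3.100/24
lxc.network.name = eth0
</command>
</screen>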
3480
3481 </sect3>
3482
3483 <sect3 id="lxc-conf-cgroup" status="review">
3484 <title>Control group configuration</title>
3485
3486 <para>
3487 Cgroup options can be specified using <command>lxc.cgroup</command>
3488 entries. <command>lxc.cgroup.subsystem.item = value</command> instructs
3489 LXC to set cgroup <command>subsystem</command>'s <command>item</command> to
3490 <command>value</command>. It is perhaps simpler to realize that this will
3491 simply write <command>value</command> to the file <command>item</command>
3492 for the container's control group for subsystem
3493 <command>subsystem</command>. For instance, to set the memory limit to
3494 320M, you could add
3495 </para>
3496
3497<screen>
3498<command>
3499lxc.cgroup.memory.limit_in_bytes = 320000000
3500</command>
3501</screen>
3502
3503 <para>
3504 which will cause 320000000 to be written to the file
3505 <filename>/sys/fs/cgroup/memory/lxc/CN/memory.limit_in_bytes</filename>.
3506 </para>
3507 </sect3>
3508
3509 <sect3 id="lxc-conf-mounts" status="review">
3510 <title>Rootfs, mounts and fstab</title>
3511
3512 <para>
3513 An important part of container setup is the mounting of various
3514 filesystems into place. The following is an example configuration file
3515 excerpt demonstrating the commonly used configuration options:
3516 </para>
3517
3518<screen>
3519<command>
3520lxc.rootfs = /var/lib/lxc/CN/rootfs
3521lxc.mount.entry=proc /var/lib/lxc/CN/rootfs/proc proc nodev,noexec,nosuid 0 0
3522lxc.mount = /var/lib/lxc/CN/fstab
3523</command>
3524</screen>
3525
3526 <para>
3527 The first line says that the container's root filesystem is already mounted
3528 at <filename>/var/lib/lxc/CN/rootfs</filename>. If the filesystem is a
3529 block device (such as an LVM logical volume), then the path to the block
3530 device must be given instead.
3531 </para>
3532
3533 <para>
3534 Each <command>lxc.mount.entry</command> line should contain an item to
3535 mount in valid fstab format. The target directory should be prefixed by
3536 <filename>/var/lib/lxc/CN/rootfs</filename>, even if
3537 <command>lxc.rootfs</command> points to a block device.
3538 </para>
3539
3540 <para>
3541 Finally, <command>lxc.mount</command> points to a file, in fstab format,
3542 containing further items to mount. Note that all of these entries will be
3543 mounted by the host before the container init is started. In this way it
3544 is possible to bind mount various directories from the host into the
3545 container.
3546 </para>
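<para>
For example, a line like the following in the container's fstab file (the
host path is only an example) bind mounts a host directory into the
container:
</para>

<screen>
<command>
/srv/data /var/lib/lxc/CN/rootfs/srv/data none bind 0 0
</command>
</screen>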
3547 </sect3>
3548
3549 <sect3 id="lxc-conf-other" status="review">
3550 <title>Other configuration options</title>
3551
3552 <itemizedlist>
3553
3554 <listitem>
3555 <para>
3556 <command>lxc.cap.drop</command> can be used to prevent the container from having
3557 or ever obtaining the listed capabilities. For instance, including
3558 </para>
3559
3560<screen>
3561<command>
3562lxc.cap.drop = sys_admin
3563</command>
3564</screen>
3565
3566 <para>
3567 will prevent the container from mounting filesystems, as well as all other
3568 actions which require cap_sys_admin. See the <command>capabilities(7)</command>
3569 manual page for a list of capabilities and their meanings.
3570 </para>
3571 </listitem>
3572
3573 <listitem><para>
3574 <command>lxc.console=/path/to/consolefile</command> will cause console
3575 messages to be written to the specified file.
3576 </para></listitem>
3577
3578 <listitem><para>
3579 <command>lxc.arch</command> specifies the architecture for the container, for instance
3580 x86 or x86_64 (see the combined example after this list).
3581 </para></listitem>
3582
3583 <listitem><para>
3584 <command>lxc.tty=5</command> specifies that 5 consoles (in addition to
3585 <filename>/dev/console</filename>) should be created. That is, consoles
3586 will be available on <filename>/dev/tty1</filename> through
3587 <filename>/dev/tty5</filename>. The Ubuntu templates set this value to 4.
3588 </para></listitem>
3589
3590 <listitem>
3591 <para>
3592 <command>lxc.pts=1024</command> specifies that the container should have a
3593 private (Unix98) devpts filesystem mount. If this is not specified, then
3594 the container will share <filename>/dev/pts</filename> with the host, which
3595 is rarely desired. The number 1024 means that 1024 ptys should be allowed
3596 in the container; however, this number is currently ignored. Before
3597 starting the container init, LXC will do (essentially) a
3598 </para>
3599
3600<screen>
3601<command>
3602sudo mount -t devpts -o newinstance devpts /dev/pts
3603</command>
3604</screen>
3605
3606 <para>
3607 inside the container. It is important to realize that the container should
3608 not mount devpts filesystems of its own. It may safely do bind or move
3609 mounts of its mounted <filename>/dev/pts</filename>. But if it does
3610 </para>
3611
3612<screen>
3613<command>
3614sudo mount -t devpts devpts /dev/pts
3615</command>
3616</screen>
3617
3618 <para>
3619 it will remount the host's devpts
3620 instance. If it adds the newinstance mount option, then it will mount a new
3621 private (empty) instance. In neither case will it remount the instance
3622 which was set up by LXC. For this reason, and to prevent the container
3623 from using the host's ptys, the default apparmor policy will not allow
3624 containers to mount devpts filesystems after the container's init has been
3625 started.
3626 </para>
3627 </listitem>
3628
3629 <listitem><para>
3630 <command>lxc.devttydir</command> specifies a directory under
3631 <filename>/dev</filename> in which LXC will create its console devices. If
3632 this option is not specified, then the ptys will be bind-mounted over
3633 <filename>/dev/console</filename> and <filename>/dev/ttyN</filename>.
3634 However, rare package updates may try to blindly <emphasis>rm -f</emphasis> and then
3635 <emphasis>mknod</emphasis> those devices. They will fail (because the file has been
3636 bind-mounted), causing the package update to fail. When
3637 <command>lxc.devttydir</command> is set to LXC, for instance, then LXC will
3638 bind-mount the console ptys onto <filename>/dev/lxc/console</filename> and
3639 <filename>/dev/lxc/ttyN</filename>, and subsequently symbolically link them
3640 to <filename>/dev/console</filename> and <filename>/dev/ttyN</filename>.
3641 This allows the package updates to succeed, at the risk of making future
3642 gettys on those consoles fail until the next reboot. Ideally, this problem
3643 will be solved by device namespaces.
3644 </para></listitem>
3645
3646 </itemizedlist>
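 <para>
 As an illustrative (not prescriptive) excerpt, a container configuration
 combining several of the options described above might contain:
 </para>

<screen>
<command>
lxc.arch = x86_64
lxc.tty = 4
lxc.pts = 1024
lxc.console = /var/log/lxc/CN.console
lxc.cap.drop = sys_module mac_admin
</command>
</screen>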
3647
3648 </sect3>
3649
3650 </sect2>
3651
3652 <sect2 id="lxc-container-updates" status="review">
3653 <title>Updates in Ubuntu containers</title>
3654
3655 <para>
3656 Because of some of the limitations placed on containers, package upgrades
3657 can at times fail. For instance, a package install or upgrade might fail if it
3658 is not allowed to create or open a block device. This often blocks all further
3659 upgrades until the issue is resolved. In some cases, you can work around this
3660 by chrooting into the container's root filesystem, avoiding the container
3661 restrictions, and completing the upgrade in the chroot (see the example below).
3662 </para>
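 <para>
 As a hypothetical example, assuming a container named CN with its root
 filesystem under <filename>/var/lib/lxc/CN/rootfs</filename>, an interrupted
 upgrade could be finished from the host with something like:
 </para>

<screen>
<command>
sudo chroot /var/lib/lxc/CN/rootfs dpkg --configure -a
</command>
</screen>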
3663
3664 <para>
3665 Some of the specific things known to occasionally impede package
3666 upgrades include:
3667 </para>
3668
3669 <itemizedlist>
3670 <listitem><para>
3671 The container modifications performed when creating containers with the
3672 --trim option.
3673 </para></listitem>
3674 <listitem><para>
3675 Actions performed by lxcguest. For instance, because
3676 <filename>/lib/init/fstab</filename> is bind-mounted from another file,
3677 mountall upgrades which insist on replacing that file can fail.
3678 </para></listitem>
3679 <listitem><para>
3680 The over-mounting of console devices with ptys from the host can
3681 cause trouble with udev upgrades.
3682 </para></listitem>
3683 <listitem><para>
3684 Apparmor policy and devices cgroup restrictions can prevent
3685 package upgrades from performing certain actions.
3686 </para></listitem>
3687 <listitem><para>
3688 Capabilities dropped by use of <command>lxc.cap.drop</command> can likewise stop package
3689 upgrades from performing certain actions.
3690 </para></listitem>
3691 </itemizedlist>
3692 </sect2>
3693
3694 <sect2 id="lxc-libvirt" status="review">
3695 <title>Libvirt LXC</title>
3696
3697 <para>
3698Libvirt is a powerful hypervisor management solution with which you can
3699administer QEMU, Xen and LXC virtual machines, both locally and remotely.
3700The libvirt LXC driver is a separate implementation from what we normally
3701call <emphasis>LXC</emphasis>. A few differences include:
3702 </para>
3703
3704 <itemizedlist>
3705 <listitem><para>
3706 Configuration is stored in XML format
3707 </para></listitem>
3708 <listitem><para>
3709 There are no tools to facilitate container creation
3710 </para></listitem>
3711 <listitem><para>
3712 By default there is no console on <filename>/dev/console</filename>
3713 </para></listitem>
3714 <listitem><para>
3715 There is no support (yet) for container reboot or full shutdown
3716 </para></listitem>
3717 </itemizedlist>
3718
3719<!--
3720 <sect3 id="lxc-libvirt-virtinst" status="review">
3721 <title>virt-install</title>
3722
3723 <para>
3724 virt-install can be used to create an LXC container. (test and
3725 verify). Serge hasn't gotten this to work.
3726 </para>
3727
3728 </sect3>
3729 -->
3730
3731 <sect3 id="lxc-libvirt-convert" status="review">
3732 <title>Converting an LXC container to libvirt-lxc</title>
3733
3734 <para>
3735
3736 <xref linkend="lxc-creation"/> showed how to create LXC containers.
3737 If you've created a valid LXC container in this way, you can
3738 manage it with libvirt. Fetch a sample XML file using
3739 </para>
3740
3741<screen>
3742<command>
3743wget http://people.canonical.com/~serge/o1.xml
3744</command>
3745</screen>
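 <para>
 The exact contents of that file are not reproduced here, but a minimal
 libvirt LXC domain definition generally has the following shape; the name,
 memory size (in KiB) and root filesystem path are placeholders to be edited:
 </para>

<screen><![CDATA[
<domain type='lxc'>
  <name>o1</name>
  <memory>327680</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <console type='pty'/>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/o1/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
]]></screen>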
3746
3747 <para>
3748 Edit this file to replace the container name and root
3749 filesystem locations. Then you can define the container with:
3750 </para>
3751
3752<screen>
3753<command>
3754virsh -c lxc:/// define o1.xml
3755</command>
3756</screen>
3757 </sect3>
3758
3759 <sect3 id="lxc-libvirt-fromcloud" status="review">
3760 <title>Creating a container from a cloud image</title>
3761
3762 <para>
3763If you prefer to create a pristine new container just for LXC, you
3764can download an Ubuntu cloud image, extract it, and point a libvirt
3765LXC XML file to it. For instance, find the URL for a root tarball
3766for the latest daily Ubuntu 12.04 LTS cloud image using the following (replace <command>$arch</command> with the desired architecture, such as amd64 or i386)
3767 </para>
3768
3769<screen>
3770<command>
3771url1=`ubuntu-cloudimg-query precise daily $arch --format "%{url}\n"`
3772url=`echo $url1 | sed -e 's/.tar.gz/-root\0/'`
3773wget $url
3774filename=`basename $url`
3775</command>
3776</screen>
3777
3778 <para>
3779 Extract the downloaded tarball, for instance
3780 </para>
3781
3782<screen>
3783<command>
3784mkdir $HOME/c1
3785cd $HOME/c1
3786sudo tar zxf $filename
3787</command>
3788</screen>
3789
3790 <para>
3791 Download the XML template
3792 </para>
3793
3794<screen>
3795<command>
3796wget http://people.canonical.com/~serge/o1.xml
3797</command>
3798</screen>
3799
3800 <para>
3801 In the XML template, replace the name o1 with c1 and the source directory
3802 <filename>/var/lib/lxc/o1/rootfs</filename> with
3803 <filename>$HOME/c1</filename>. Then define the container using
3804 </para>
3805
3806<screen>
3807<command>
3808virsh -c lxc:/// define o1.xml
3809</command>
3810</screen>
3811
3812 </sect3>
3813
3814 <sect3 id="lxc-libvirt-interacting" status="review">
3815 <title>Interacting with libvirt containers</title>
3816
3817 <para>
3818 As we've seen, you can create a libvirt-lxc container using
3819 </para>
3820
3821<screen>
3822<command>
3823virsh -c lxc:/// define container.xml
3824</command>
3825</screen>
3826
3827 <para>
3828 To start a container called <emphasis>container</emphasis>, use
3829 </para>
3830
3831<screen>
3832<command>
3833virsh -c lxc:/// start container
3834</command>
3835</screen>
3836
3837 <para>
3838 To stop a running container, use
3839 </para>
3840
3841<screen>
3842<command>
3843virsh -c lxc:/// destroy container
3844</command>
3845</screen>
3846
3847 <para>
3848 Note that whereas the <command>lxc-destroy</command> command deletes the
3849 container, the <command>virsh destroy</command> command stops a running
3850 container. To delete the container definition, use
3851 </para>
3852
3853<screen>
3854<command>
3855virsh -c lxc:/// undefine container
3856</command>
3857</screen>
3858
3859 <para>
3860 To get a console to a running container, use
3861 </para>
3862
3863<screen>
3864<command>
3865virsh -c lxc:/// console container
3866</command>
3867</screen>
3868
3869 <para>
3870 Exit the console by simultaneously pressing Ctrl and ].
3871 </para>
3872
3873 </sect3>
3874
3875 </sect2>
3876
3877 <sect2 id="lxc-guest" status="review">
3878 <title>The lxcguest package</title>
3879
3880 <para>
3881 In the 11.04 (Natty) and 11.10 (Oneiric) releases of Ubuntu, a package called
3882 <emphasis role="italic">lxcguest</emphasis> was introduced. An unmodified root image could not be safely booted inside a
3883 container, but an image with the lxcguest package installed could be
3884 booted as a container, on bare hardware, or in a Xen, KVM, or VMware virtual
3885 machine.
3886 </para>
3887
3888 <para>
3889 As of the 12.04 LTS release, the work previously done by the lxcguest package
3890 was pushed into the core packages, and the lxcguest package was removed.
3891 As a result, an unmodified 12.04 LTS image can be booted as a
3892 container, on bare hardware, or in a Xen, KVM, or VMware virtual machine.
3893 Images from older releases, however, still require the lxcguest package in order to run as containers.
3894 </para>
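 <para>
 For example, on an 11.10 (Oneiric) image that is to be booted as a
 container, the package would be installed in the usual way:
 </para>

<screen>
<command>
sudo apt-get install lxcguest
</command>
</screen>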
3895
3896 </sect2>
3897
3898 <sect2 id="lxc-security" status="review">
3899 <title>Security</title>
3900
3901 <para>
3902 A namespace maps IDs to resources. If a container is given no ID with
3903 which to reference a resource, then that resource is effectively protected from it. This
3904 is the basis of some of the security afforded to container users. For
3905 instance, IPC namespaces are completely isolated. Other namespaces,
3906 however, have various <emphasis role="italic">leaks</emphasis> which allow privilege to be
3907 inappropriately exerted from a container into another container or to
3908 the host.
3909 </para>
3910
3911 <para>
3912 By default, LXC containers are started under an AppArmor policy to
3913 restrict some actions. However, while stronger security is a goal
3914 for future releases, in 12.04 LTS the goal of the AppArmor policy is not
3915 to stop malicious actions but rather to stop accidental harm to the
3916 host by the guest.
3917 </para>
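 <para>
 For instance, you can check from the host that an LXC profile is loaded and
 that container tasks are confined by it using <command>aa-status</command>;
 the profile applied by default is typically named
 <emphasis>lxc-container-default</emphasis>, though this may differ on your system:
 </para>

<screen>
<command>
sudo aa-status | grep lxc
</command>
</screen>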
3918
3919 <para>
3920 See the <ulink url="http://wiki.ubuntu.com/LxcSecurity">LXC security</ulink>
3921 wiki page for more up-to-date information.
3922 </para>
3923
3924 <sect3 id="lxc-seccomp" status="review">
3925 <title>Exploitable system calls</title>
3926
3927 <para>
3928 It is a core container feature that containers share a kernel with the
3929 host. Therefore, if the kernel contains any exploitable system calls,
3930 the container can exploit these as well. Once the container controls the
3931 kernel it can fully control any resource known to the host.
3932 </para>
3933
3934 </sect3>
3935 </sect2>
3936
3937 <sect2 id="lxc-resources" status="review">
3938 <title>Resources</title>
3939 <itemizedlist>
3940
3941 <listitem>
3942 <para>
3943 The DeveloperWorks article <ulink url="https://www.ibm.com/developerworks/linux/library/l-lxc-containers/">LXC: Linux container tools</ulink> was an early introduction to the use of containers.
3944 </para>
3945 </listitem>
3946
3947 <listitem>
3948 <para>
3949 The <ulink url="http://www.ibm.com/developerworks/linux/library/l-lxc-security/index.html"> Secure Containers Cookbook</ulink> demonstrated the use of security modules to make containers more secure.
3950 </para>
3951 </listitem>
3952
3953 <listitem>
3954 <para>
3955 Manual pages referenced above can be found at:
3956<programlisting>
3957<ulink url="http://manpages.ubuntu.com/manpages/en/man7/capabilities.7.html">capabilities</ulink>
3958<ulink url="http://manpages.ubuntu.com/manpages/en/man5/lxc.conf.5.html">lxc.conf</ulink>
3959</programlisting>
3960 </para>
3961 </listitem>
3962
3963 <listitem>
3964 <para>
3965 The upstream LXC project is hosted at <ulink url="http://lxc.sf.net">Sourceforge</ulink>.
3966 </para>
3967 </listitem>
3968
3969 <listitem>
3970 <para>
3971 LXC security issues are listed and discussed at <ulink url="http://wiki.ubuntu.com/LxcSecurity">the LXC Security wiki page</ulink>.
3972 </para>
3973 </listitem>
3974
3975 <listitem>
3976 <para> For more on namespaces in Linux, see: S. Bhattiprolu, E. W. Biederman, S. E. Hallyn, and D. Lezcano. Virtual Servers and Checkpoint/Restart in Mainstream Linux. SIGOPS Operating Systems Review, 42(5), 2008.</para>
3977 </listitem>
3978
3979 </itemizedlist>
3980 </sect2>
3981 </sect1>
2218 3982 </chapter>
