Merge lp:~serge-hallyn/serverguide/serverguide-lxc into lp:~ubuntu-core-doc/serverguide/precise

Proposed by Serge Hallyn
Status: Merged
Merged at revision: 46
Proposed branch: lp:~serge-hallyn/serverguide/serverguide-lxc
Merge into: lp:~ubuntu-core-doc/serverguide/precise
Diff against target: 1772 lines (+1764/-0)
1 file modified
serverguide/C/virtualization.xml (+1764/-0)
To merge this branch: bzr merge lp:~serge-hallyn/serverguide/serverguide-lxc
Reviewer Review Type Date Requested Status
Peter Matulis Approve
Serge Hallyn Needs Resubmitting
Review via email: mp+97238@code.launchpad.net

Description of the change

This merge introduces a new LXC section. Some subsections are yet to be written, because they are contingent on work still going into precise.

Revision history for this message
Peter Matulis (petermatulis) wrote :

This is a very significant contribution to the guide. Thank you!

Technical:

All tests performed using default settings and with a simple ubuntu-based container. I did not check all commands.

1. I tried to start a container (cn1) on a KVM guest. I was able to log in but shutting down threw warnings/errors. Normal?

------------------------
$ sudo poweroff
[sudo] password for ubuntu:
$
Broadcast message from ubuntu@cn1
        (/dev/lxc/console) at 18:17 ...

The system is going down for power off NOW!
 * Asking all remaining processes to terminate...
   ...done.
 * All processes ended within 1 seconds....
   ...done.
 * Deconfiguring network interfaces...
   ...done.
 * Deactivating swap...
   ...fail!
umount: /run/lock: not mounted
umount: /dev/shm: not mounted
mount: / is busy
 * Will now halt
------------------------

2. This command output doesn't look right. Normal?:

------------------------
$ sudo lxc-start -n cn1 -d
$ lxc-ls
cn1
cn1

$ lxc-list
RUNNING

STOPPED
------------------------

3. Should include how to escape from a container console (Ctrl-a q).

Style:

Under "Host Setup", for /etc/default/lxc, since you say "true by default" for value of USE_LXC_BRIDGE, it makes sense to also say "true by default" for value of LXC_AUTO.

In general, consider using italics when introducing new terms/commands/package_names or when trying to emphasize a word. Example: "a package was introduced called ``lxcguest''..." or "...have various ``leaks'' which allow...". This quoting style is awkward.

We should encourage proper practice and prepend all commands requiring privileged access with 'sudo'.

I question the section title of "Container Introspection". The term introspection pertains to the fields of philosophy and psychology. Maybe "Inspection" is better.

Under "Advanced namespace usage", there is a block of code that is not formatted properly. It shows '<pre>' tags and I don't think the red colour is called for. I also think you should provide an external resource/link to 'private namespaces' as well as giving a one-line description of a basic use-case.

Standardize on using "LXC" and not "lxc" or "Lxc" except when referring to a package name? Dunno, 'lxc.sourceforge.net' shows kind of the reverse: "LXC is the userspace control package for Linux Containers" and "Linux Containers (lxc) implement:". Can be confusing to readers. Proceed as you see fit.

"IP address" instead of "ip address".

Awkward: "The type can be one of several types."

Missing a 'The'? "Following is an example configuration file..."

Extra character? "...which require cap_sys_admin}."

You've made bold man page references before. "See capabilities(7) for a list..."

Rework: "For instance, if a package's postinst fails if it cannot open a block device..."

The standard is to capitalize release codenames: "In the natty and oneiric releases of Ubuntu..."

Add resources section (external links, man pages) at end of page. See end of https://help.ubuntu.com/11.10/serverguide/C/openldap-server.html for an example.

review: Needs Fixing
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

On 03/16/2012 02:52 PM, Peter Matulis wrote:
> Review: Needs Fixing
>
> This is a very significant contribution to the guide. Thank you!
>
> Technical:
>
> All tests performed using default settings and with a simple ubuntu-based container. I did not check all commands.
>
> 1. I tried to start a container (cn1) on a KVM guest. I was able to log in but shutting down threw warnings/errors. Normal?

Yes, the errors are normal. Should that be explained somewhere?

> ------------------------
> $ sudo poweroff
> [sudo] password for ubuntu:
> $
> Broadcast message from ubuntu@cn1
> (/dev/lxc/console) at 18:17 ...
>
> The system is going down for power off NOW!
> * Asking all remaining processes to terminate...
> ...done.
> * All processes ended within 1 seconds....
> ...done.
> * Deconfiguring network interfaces...
> ...done.
> * Deactivating swap...
> ...fail!
> umount: /run/lock: not mounted
> umount: /dev/shm: not mounted
> mount: / is busy
> * Will now halt
> ------------------------
>
>
> 2. This command output doesn't look right. Normal?:
>
> ------------------------
> $ sudo lxc-start -n cn1 -d
> $ lxc-ls
> cn1
> cn1
>
> $ lxc-list
> RUNNING
>
> STOPPED
> ------------------------

The lxc-list output doesn't look right - cn1 should show up in both
lists. The lxc-ls output shouldn't have the third (empty) line but
otherwise looks fine. I can't reproduce this.

I will address the rest in a merge proposal update. Thanks for the
comments!

50. By Serge Hallyn

Address a number of pmatulis' comments.

51. By Serge Hallyn

sudo

52. By Serge Hallyn

standardize use of LXC

53. By Serge Hallyn

remove <pre>

54. By Serge Hallyn

fix parse errors

55. By Serge Hallyn

namespac

56. By Serge Hallyn

write security section

57. By Serge Hallyn

remove udev comment

58. By Serge Hallyn

comment out virt-install libvirt-lxc section (it never worked for me)

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

I believe the comments are now addressed, thanks.

review: Needs Resubmitting
Revision history for this message
Peter Matulis (petermatulis) wrote :

All good. Very nice!

review: Approve

Preview Diff

1=== modified file 'serverguide/C/virtualization.xml'
2--- serverguide/C/virtualization.xml 2012-03-11 16:42:45 +0000
3+++ serverguide/C/virtualization.xml 2012-03-18 23:46:47 +0000
4@@ -2215,4 +2215,1768 @@
5
6 </sect2>
7 </sect1>
8+ <sect1 id='lxc' status='review'>
9+ <title>LXC</title>
10+ <para>
11+ Containers are a lightweight virtualization technology. They are
12+ more akin to an enhanced chroot than to full virtualization like
13+ Qemu or VMware, both because they do not emulate hardware and
14+ because containers share the same operating system as the host.
15+ Therefore containers are better compared to Solaris zones or BSD
16+ jails. Linux-vserver and OpenVZ are two pre-existing, independently
17+ developed implementations of containers-like functionality for
18+ Linux. In fact, containers came about as a result of the work to
19+ upstream the vserver and OpenVZ functionality. Some vserver and
20+ OpenVZ functionality is still missing in containers, however
21+ containers can <emphasis>boot</emphasis> many Linux distributions and have the
22+ advantage that they can be used with an un-modified upstream kernel.
23+ </para>
24+
25+ <para>
26+ There are two user-space implementations of containers, each
27+ exploiting the same kernel features. Libvirt allows the use of
28+ containers through the LXC driver by connecting to 'lxc:///'. This
29+ can be very convenient as it supports the same usage as its other
30+ drivers. The other implementation, called simply 'LXC', is not
31+ compatible with libvirt, but is more flexible with more userspace
32+ tools. It is possible to switch between the two, though there are
33+ peculiarities which can cause confusion.
34+ </para>
35+
36+ <para>
37+ In this document we will mainly describe the <application>lxc</application> package. Toward
38+ the end, we will describe how to use the libvirt LXC driver.
39+ </para>
40+
41+ <para>
42+ In this document, a container name will be shown as CN, C1, or C2.
43+ </para>
44+
45+ <sect2 id="lxc-installation" status="review">
46+ <title>Installation</title>
47+ <para>
48+ The <application>lxc</application> package can be installed using
49+ </para>
50+
51+<screen>
52+<command>
53+sudo apt-get install lxc
54+</command>
55+</screen>
56+
57+ <para>
58+ This will pull in the required and recommended dependencies, including
59+ cgroup-lite, lvm2, and debootstrap. To use libvirt-lxc, install libvirt-bin.
60+ LXC and libvirt-lxc can be installed and used at the same time.
61+ </para>
62+ </sect2>
63+
64+ <sect2 id="lxc-hostsetup" status="review">
65+ <title>Host Setup</title>
66+ <sect3 id="lxc-layout" status="review">
67+ <title>Basic layout of LXC files</title>
68+ <para>
69+ The following is a description of the files and directories which
70+ are installed and used by LXC.
71+ </para>
72+
73+ <itemizedlist>
74+ <listitem>
75+ <para>There are two upstart jobs:</para>
76+
77+ <itemizedlist> <!-- nested list -->
78+ <listitem>
79+ <para>
80+ <filename>/etc/init/lxc-net.conf:</filename> is an optional job which
81+ only runs if <filename> /etc/default/lxc</filename> specifies
82+ USE_LXC_BRIDGE (true by default). It sets up a NATed bridge for
83+ containers to use.
84+ </para>
85+ </listitem>
86+
87+ <listitem>
88+ <para>
89+ <filename>/etc/init/lxc.conf:</filename> runs if LXC_AUTO (true by
90+ default) is set to
91+ true in <filename>/etc/default/lxc</filename>. It looks for entries
92+ under <filename>/etc/lxc/auto/</filename> which are symbolic links to
93+ configuration files for the containers which should be started at boot.
94+ </para>
95+ </listitem>
96+ </itemizedlist>
97+
98+ </listitem>
99+ <listitem>
100+ <para>
101+ <filename>/etc/lxc/lxc.conf:</filename>
102+ There is a default container creation configuration file,
103+ <filename>/etc/lxc/lxc.conf</filename>, which directs containers to use
104+ the LXC bridge created by the lxc-net upstart job. If no configuration
105+ file is specified when creating a container, then this one will be used.
106+ </para>
107+ </listitem>
108+
109+ <listitem>
110+ <para>
111+ Examples of other container creation configuration files are
112+ found under <filename>/usr/share/doc/lxc/examples</filename>. These show how to
113+ create containers without a private network, or using macvlan,
114+ vlan, or other network layouts.
115+ </para>
116+ </listitem>
117+
118+ <listitem>
119+ <para>
120+ The various container administration tools are found under
121+ <filename>/usr/bin</filename>.
122+ </para>
123+ </listitem>
124+
125+ <listitem>
126+ <para>
127+ <filename>/usr/lib/lxc/lxc-init</filename> is a very minimal and lightweight init
128+ binary which is used by lxc-execute. Rather than `booting' a
129+ full container, it manually mounts a few filesystems, especially
130+ <filename>/proc</filename>, and executes its arguments. You are not likely to need to
131+ manually refer to this file.
132+ </para>
133+ </listitem>
134+
135+ <listitem>
136+ <para>
137+ <filename>/usr/lib/lxc/templates/</filename> contains the <emphasis>templates</emphasis> which can be
138+ used to create new containers of various distributions and
139+ flavors. Not all templates are currently supported.
140+ </para>
141+ </listitem>
142+
143+ <listitem>
144+ <para>
145+ <filename>/etc/apparmor.d/usr.bin.lxc-start</filename> contains the (active by default)
146+ apparmor MAC policy which works to protect the host from containers.
147+ Please see <xref linkend="lxc-security"/> for more information.
148+ </para>
149+ </listitem>
150+
151+ <listitem>
152+ <para>
153+ There are various man pages for the LXC administration tools as well
154+ as the <filename>lxc.conf</filename> container configuration file.
155+ </para>
156+ </listitem>
157+
158+ <listitem>
159+ <para>
160+ <filename>/var/lib/lxc</filename> is where containers and their configuration information
161+ are stored.
162+ </para>
163+ </listitem>
164+
165+ <listitem>
166+ <para>
167+ <filename>/var/cache/lxc</filename> is where caches of distribution data are stored to
168+ speed up multiple container creations.
169+ </para>
170+ </listitem>
171+ </itemizedlist>
172+ </sect3>
173+
174+ <sect3 id="lxcbr0" status="review">
175+ <title>lxcbr0</title>
176+ <para>
177+ When USE_LXC_BRIDGE is set to true in /etc/default/lxc (as it is by
178+ default), a bridge called lxcbr0 is created at startup. This bridge is
179+ given the private address 10.0.3.1, and containers using this bridge will
180+ have a 10.0.3.0/24 address. A dnsmasq instance is run listening on that
181+ bridge, so if another dnsmasq has bound all interfaces before the lxc-net
182+ upstart job runs, lxc-net will fail to start and lxcbr0 will not exist.
183+ </para>
184+
185+ <para>
186+ If you have another bridge - libvirt's default virbr0, or a br0
187+ bridge for your default NIC - you can use that bridge in place of
188+ lxcbr0 for your containers.
189+ </para>
190+ </sect3>
191+
192+ <sect3 id="lxc-partitions" status="review">
193+ <title>Using a separate filesystem for the container store</title>
194+ <para>
195+ LXC stores container information and (with the default backing store) root
196+ filesystems under <filename>/var/lib/lxc</filename>. Container creation
197+ templates also tend to store cached distribution information under
198+ <filename>/var/cache/lxc</filename>.
199+ </para>
200+
201+ <para>
202+ If you wish to use another filesystem than
203+ <filename>/var</filename>, you can mount a filesystem which has more space into those
204+ locations. If you have a disk dedicated for this, you can simply
205+ mount it at <filename>/var/lib/lxc</filename>. If you'd like to use another location, like
206+ <filename>/srv</filename>, you can bind mount it or use a symbolic link. For instance, if
207+ <filename>/srv</filename> is a large mounted filesystem, create and symlink two directories:
208+ </para>
209+
210+<screen>
211+<command>
212+sudo mkdir /srv/lxclib /srv/lxccache
213+sudo rm -rf /var/lib/lxc /var/cache/lxc
214+sudo ln -s /srv/lxclib /var/lib/lxc
215+sudo ln -s /srv/lxccache /var/cache/lxc
216+</command>
217+</screen>
218+
219+ <para>
220+ or, using bind mounts:
221+ </para>
222+
223+<screen>
224+<command>
225+sudo mkdir /srv/lxclib /srv/lxccache
226+sudo sed -i '$a \
227+/srv/lxclib /var/lib/lxc none defaults,bind 0 0 \
228+/srv/lxccache /var/cache/lxc none defaults,bind 0 0' /etc/fstab
229+sudo mount -a
230+</command>
231+</screen>
232+
233+ </sect3>
234+
235+ <sect3 id="lxc-lvm" status="review">
236+ <title>Containers backed by lvm</title>
237+
238+ <para>
239+ It is possible to use LVM partitions as the backing stores for
240+ containers. Advantages of this include flexibility in storage
241+ management and fast container cloning. The tools
242+ default to using a VG (volume group) named <emphasis>lxc</emphasis>, but another
243+ VG can be used through command line options. When a LV is used
244+ as a container backing store, the container's configuration file
245+ is still <filename>/var/lib/lxc/CN/config</filename>, but the root fs
246+ entry in that file (<emphasis>lxc.rootfs</emphasis>) will point to the LV block
247+ device name, i.e. <filename>/dev/lxc/CN</filename>.
248+ </para>
249+
250+ <para>
251+ Containers with directory tree and LVM backing stores can
252+ co-exist.
253+ </para>
254+ </sect3>
255+
256+ <sect3 id="lxc-btrfs" status="review">
257+ <title>Btrfs</title>
258+ <para>
259+ If your host has a btrfs <filename>/var</filename>, the LXC administration
260+ tools will detect this and automatically exploit it by
261+ cloning containers using btrfs snapshots.
262+ </para>
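+
+    <para>
+      For example, one quick way to check whether <filename>/var</filename>
+      is on btrfs is:
+    </para>
+
+<screen>
+<command>
+df -T /var
+</command>
+</screen>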
263+ </sect3>
264+
265+ <sect3 id="lxc-apparmor" status="review">
266+ <title>Apparmor</title>
267+ <para>
268+ LXC ships with an apparmor profile intended to protect the host
269+ from accidental misuses of privilege inside the container. For
270+ instance, the container will not be able to write to
271+ <filename>/proc/sysrq-trigger</filename> or to most <filename>/sys</filename> files.
272+ </para>
273+ </sect3>
274+
275+ <sect3 id="lxc-cgroups" status="review">
276+ <title>Control Groups</title>
277+ <para>
278+ Control groups (cgroups) are a kernel feature providing hierarchical
279+ task grouping and per-cgroup resource accounting and limits. They are
280+ used in containers to limit block and character device access and to
281+ freeze (suspend) containers. They can be further used to limit memory
282+ use and block i/o, guarantee minimum cpu shares, and to lock containers
283+ to specific cpus. By default, LXC depends on the cgroup-lite package to be installed, which
284+ provides the proper cgroup initialization at boot. The cgroup-lite
285+ package mounts each cgroup subsystem separately under
286+ <filename>/sys/fs/cgroup/SS</filename>, where SS is the subsystem name. For instance
287+ the freezer subsystem is mounted under <filename>/sys/fs/cgroup/freezer</filename>.
288+ LXC cgroups are kept under <filename>/sys/fs/cgroup/SS/INIT/lxc</filename>, where
289+ INIT is the init task's cgroup. This is <filename>/</filename> by default, so
290+ in the end the freezer cgroup for container CN would be
291+ <filename>/sys/fs/cgroup/freezer/lxc/CN</filename>.
292+ </para>
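+
+    <para>
+      As an illustration, for a running container CN you can inspect the
+      freezer cgroup directly (a sketch; adjust the path if the init task is
+      not in the root cgroup, as described above):
+    </para>
+
+<screen>
+<command>
+ls /sys/fs/cgroup/freezer/lxc/CN
+cat /sys/fs/cgroup/freezer/lxc/CN/freezer.state
+</command>
+</screen>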
293+ </sect3>
294+
295+ <sect3 id="lxc-privs" status="review">
296+ <title>Privilege</title>
297+ <para>
298+ The container administration tools must be run with root user
299+ privilege. A utility called <filename>lxc-setup</filename> was written with the
300+ intention of providing the tools with the needed file capabilities to
301+ allow non-root users to run the tools with sufficient privilege.
302+ However, as root in a container cannot yet be reliably contained, this
303+ is not worthwhile. It is therefore recommended to not use
304+ <filename>lxc-setup</filename>, and to provide the LXC administrators the needed
305+ sudo privilege.
306+ </para>
307+
308+ <para>
309+ The user namespace, which is expected to be available in the next Long Term
310+ Support (LTS) release, will allow containment of the container root user, as
311+ well as reduce the amount of privilege required for creating and administering
312+ containers.
313+ </para>
314+ </sect3>
315+
316+ <sect3 id="lxc-upstart" status="review">
317+ <title>LXC Upstart Jobs</title>
318+ <para>
319+ As listed above, the <application>lxc</application> package includes two upstart jobs. The
320+ first, <filename>lxc-net</filename>, is always started when the other,
321+ <filename>lxc</filename>, is about to begin, and stops when it stops. If the
322+ USE_LXC_BRIDGE variable is set to false in <filename>/etc/default/lxc</filename>,
323+ then it will immediately exit. If it is true, and an error occurs
324+ bringing up the LXC bridge, then the <filename>lxc</filename> job will not start.
325+ <filename>lxc-net</filename> will bring down the LXC bridge when stopped, unless
326+ a container is running which is using that bridge.
327+ </para>
328+
329+ <para>
330+ The <filename>lxc</filename> job starts on runlevels 2-5. If the LXC_AUTO variable
331+ is set to true, then it will look under <filename>/etc/lxc/auto</filename> for containers
332+ which should be started automatically. When the <filename>lxc</filename> job is
333+ stopped, either manually or by entering runlevel 0, 1, or 6, it will
334+ stop those containers.
335+ </para>
336+
337+ <para>
338+ To register a container to start automatically, create a symbolic
339+ link <filename>/etc/lxc/auto/name.conf</filename> pointing to the container's
340+ config file. For instance, the configuration file for a container
341+ <filename>CN</filename> is <filename>/var/lib/lxc/CN/config</filename>. To make that container
342+ auto-start, use the command:
343+ </para>
344+
345+<screen>
346+<command>
347+sudo ln -s /var/lib/lxc/CN/config /etc/lxc/auto/CN.conf
348+</command>
349+</screen>
350+ </sect3>
351+
352+ </sect2>
353+
354+ <sect2 id="lxc-admin" status="review">
355+ <title>Container Administration</title>
356+ <sect3 id="lxc-creation" status="review">
357+ <title>Creating Containers</title>
358+
359+ <para>
360+ The easiest way to create containers is using <command>lxc-create</command>. This
361+ script uses distribution-specific templates under
362+ <filename>/usr/lib/lxc/templates/</filename> to set up container-friendly chroots under
363+ <filename>/var/lib/lxc/CN/rootfs</filename>, and initialize the configuration in
364+ <filename>/var/lib/lxc/CN/fstab</filename> and
365+ <filename>/var/lib/lxc/CN/config</filename>, where CN is the container name
366+ </para>
367+
368+ <para>
369+ The simplest container creation command would look like:
370+ </para>
371+
372+<screen>
373+<command>
374+sudo lxc-create -t ubuntu -n CN
375+</command>
376+</screen>
377+
378+ <para>
379+ This tells lxc-create to use the ubuntu template (-t ubuntu) and to call
380+ the container CN (-n CN). Since no configuration file was specified
381+ (which would have been done with <emphasis>-f file</emphasis>), it will use the default
382+ configuration file under <filename>/etc/lxc/lxc.conf</filename>. This gives the container
383+ a single veth network interface attached to the lxcbr0 bridge.
384+ </para>
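+
+    <para>
+      The default file is small. It should contain entries along the
+      following lines (a sketch; check <filename>/etc/lxc/lxc.conf</filename>
+      on your host for the exact contents):
+    </para>
+
+<screen>
+<command>
+lxc.network.type = veth
+lxc.network.link = lxcbr0
+lxc.network.flags = up
+</command>
+</screen>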
385+
386+ <para>
387+ The container creation templates can also accept arguments. These can
388+ be listed after --. For instance
389+ </para>
390+
391+<screen>
392+<command>
393+sudo lxc-create -t ubuntu -n oneiric1 -- -r oneiric
394+</command>
395+</screen>
396+
397+ <para>
398+ passes the arguments <emphasis>-r oneiric</emphasis> to the ubuntu template.
399+ </para>
400+
401+ <sect4 id="lxc-help" status="review">
402+ <title>Help</title>
403+ <para>
404+ Help on the lxc-create command can be seen by using <command>lxc-create -h</command>.
405+ However, the templates also take their own options. If you do
406+ </para>
407+
408+<screen>
409+<command>
410+sudo lxc-create -t ubuntu -h
411+</command>
412+</screen>
413+
414+ <para>
415+ then the general <command>lxc-create</command> help will be followed by help output
416+ specific to the ubuntu template. If no template is specified, then only
417+ help for <command>lxc-create</command> itself will be shown.
418+ </para>
419+ </sect4>
420+
421+ <sect4 id="lxc-ubuntu" status="review">
422+ <title>Ubuntu template</title>
423+
424+ <para>
425+ The ubuntu template can be used to create Ubuntu system containers with any
426+ release at least as new as 10.04 LTS. It uses debootstrap to create
427+ a cached container filesystem which gets copied into place each time a
428+ container is created. The cached image is saved and only re-generated
429+ when you create a container
430+ using the <emphasis>-F</emphasis> (flush) option to the template, i.e.:
431+ </para>
432+
433+<screen>
434+<command>
435+sudo lxc-create -t ubuntu -n CN -- -F
436+</command>
437+</screen>
438+
439+ <para>
440+ The Ubuntu release installed by the template will be the same as that on
441+ the host, unless otherwise specified with the <emphasis>-r</emphasis> option, i.e.
442+ </para>
443+
444+<screen>
445+<command>
446+sudo lxc-create -t ubuntu -n CN -- -r lucid
447+</command>
448+</screen>
449+
450+ <para>
451+ If you want to create a 32-bit container on a 64-bit host, pass <emphasis>-a i386</emphasis>
452+ to the template. If you have the qemu-user-static package installed, then you can
453+ create a container using any architecture supported by qemu-user-static.
454+ </para>
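+
+    <para>
+      For instance, to create a 32-bit container on a 64-bit host (the name
+      CN32 is only an example):
+    </para>
+
+<screen>
+<command>
+sudo lxc-create -t ubuntu -n CN32 -- -a i386
+</command>
+</screen>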
455+
456+ <para>
457+ The container will have a user named <emphasis>ubuntu</emphasis> whose password is <emphasis>ubuntu</emphasis>
458+ and who is a member of the <emphasis>sudo</emphasis> group. If you wish to inject a public ssh
459+ key for the <emphasis>ubuntu</emphasis> user, you can do so with <emphasis>-S sshkey.pub</emphasis>.
460+ </para>
461+
462+ <para>
463+ You can also <emphasis>bind</emphasis> user jdoe from the host into the container using
464+ the <emphasis>-b jdoe</emphasis> option. This will copy jdoe's password and shadow
465+ entries into the container, make sure his default group and shell are
466+ available, add him to the sudo group, and bind-mount his home directory
467+ into the container when the container is started.
468+ </para>
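+
+    <para>
+      For example, the following combines both options to bind jdoe into the
+      container and inject his public key (the key path is illustrative):
+    </para>
+
+<screen>
+<command>
+sudo lxc-create -t ubuntu -n CN -- -b jdoe -S /home/jdoe/.ssh/id_rsa.pub
+</command>
+</screen>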
469+
470+ <para>
471+ When a container is created, the <filename>release-updates</filename> archive is added
472+ to the container's <filename>sources.list</filename>, and its package archive will be
473+ updated. If the container release is older than 12.04 LTS, then the
474+ lxcguest package will be automatically installed. Alternatively, if the <emphasis>--trim</emphasis>
475+ option is specified, then the lxcguest package will not be installed,
476+ and many services will be removed from the container. This will result
477+ in a faster-booting, but less upgradeable, container.
478+ </para>
479+ </sect4>
480+
481+ <sect4 id="lxc-ubuntu-cloud" status="review">
482+ <title>Ubuntu-cloud template</title>
483+
484+ <para>
485+ The ubuntu-cloud template creates Ubuntu containers by downloading and
486+ extracting the published Ubuntu cloud images. It accepts some of the same
487+ options as the ubuntu template, namely <emphasis>-r release</emphasis>, <emphasis>-S sshkey.pub</emphasis>,
488+ <emphasis>-a arch</emphasis>, and <emphasis>-F</emphasis> to flush the cached image. It also accepts a few
489+ extra options. The <emphasis>-C</emphasis> option will create a <emphasis>cloud</emphasis> container,
490+ configured for use with a metadata service. The <emphasis>-u</emphasis> option accepts a
491+ cloud-init user-data file to configure the container on start. If <emphasis>-L</emphasis>
492+ is passed, then no locales will be installed. The <emphasis>-T</emphasis> option can be
493+ used to choose a tarball location to extract in place of the published
494+ cloud image tarball. Finally the <emphasis>-i</emphasis> option sets a host id for
495+ cloud-init, which by default is set to a random string.
496+ </para>
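+
+    <para>
+      For example, the following creates an oneiric container from the cloud
+      image and configures it at start with a cloud-init user-data file (the
+      file name <filename>my-user-data</filename> is illustrative):
+    </para>
+
+<screen>
+<command>
+sudo lxc-create -t ubuntu-cloud -n CN -- -r oneiric -u my-user-data
+</command>
+</screen>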
497+ </sect4>
498+
499+ <sect4 id="lxc-other-templates" status="review">
500+ <title> Other templates</title>
501+
502+ <para>
503+ The ubuntu and ubuntu-cloud templates are well supported. Other
504+ templates are available however. The debian template creates a
505+ Debian based container, using debootstrap much as the ubuntu
506+ template does. By default it installs a Debian <emphasis>squeeze</emphasis>
507+ image. An alternate release can be chosen by setting the SUITE
508+ environment variable, i.e.:
509+ </para>
510+
511+<screen>
512+<command>
513+sudo SUITE=sid lxc-create -t debian -n d1
514+</command>
515+</screen>
516+
517+ <para>
518+ Since Debian cannot be safely booted inside a container, Debian
519+ containers will be trimmed as with the <emphasis>--trim</emphasis> option to
520+ the ubuntu template.
521+ </para>
522+
523+ <para>
524+ To purge the container image cache, call the template directly
525+ and pass it the <emphasis>--clean</emphasis> option.
526+ </para>
527+
528+<screen>
529+<command>
530+sudo SUITE=sid /usr/lib/lxc/templates/lxc-debian --clean
531+</command>
532+</screen>
533+
534+ <para>
535+ A Fedora template exists, which creates containers based on
536+ Fedora releases &lt;= 14. Fedora 15 and higher are
537+ based on systemd, which the template is not yet able to convert
538+ into a container-bootable setup. Before the fedora template is
539+ able to run, you'll need to make sure that <command>yum</command> and <command>curl</command>
540+ are installed. A Fedora 12 container can be created with
541+ </para>
542+
543+<screen>
544+<command>
545+sudo lxc-create -t fedora -n fedora12 -- -R 12
546+</command>
547+</screen>
548+
549+ <para>
550+ An OpenSuSE template exists, but it requires the <command>zypper</command> program,
551+ which is not yet packaged. The OpenSuSE template is therefore
552+ not supported.
553+ </para>
554+
555+ <para>
556+ Two more templates exist mainly for experimental purposes. The
557+ busybox template creates a very small system container based
558+ entirely on busybox. The sshd template creates an application
559+ container running sshd in a private network namespace. The
560+ host's library and binary directories are bind-mounted into the
561+ container, though not its <filename>/home</filename> or
562+ <filename>/root</filename>. To create, start, and ssh into an ssh
563+ container, you might:
564+ </para>
565+
566+<screen>
567+<command>
568+sudo lxc-create -t sshd -n ssh1
569+ssh-keygen -f id
570+sudo mkdir /var/lib/lxc/ssh1/rootfs/root/.ssh
571+sudo cp id.pub /var/lib/lxc/ssh1/rootfs/root/.ssh/authorized_keys
572+sudo lxc-start -n ssh1 -d
573+ssh -i id root@ssh1
574+</command>
575+</screen>
576+
577+ </sect4>
578+
579+ <sect4 id="lxc-backing-stores" status="review">
580+ <title> Backing Stores</title>
581+
582+ <para>
583+By default, <command>lxc-create</command> places the container's root
584+filesystem as a directory tree at <filename>/var/lib/lxc/CN/rootfs.</filename>
585+Another option is to use LVM logical volumes. If a volume group named <emphasis>lxc</emphasis>
586+exists, you can create an lvm-backed container called CN using:
587+ </para>
588+
589+<screen>
590+<command>
591+sudo lxc-create -t ubuntu -n CN -B lvm
592+</command>
593+</screen>
594+
595+ <para>
596+ If you want to use a volume group named schroots, with a 5G xfs
597+ filesystem, then you would use
598+ </para>
599+
600+<screen>
601+<command>
602+sudo lxc-create -t ubuntu -n CN -B lvm --vgname schroots --fssize 5G --fstype xfs
603+</command>
604+</screen>
605+ </sect4>
606+
607+ </sect3>
608+
609+ <sect3 id="lxc-cloning" status="review">
610+ <title>Cloning</title>
611+
612+ <para>
613+ For rapid provisioning, you may wish to customize a canonical
614+ container according to your needs and then make multiple copies of it.
615+ This can be done with the <command>lxc-clone</command> program. Given an existing
616+ container called C1, a new container called C2 can be created
617+ using
618+ </para>
619+
620+
621+<screen>
622+<command>
623+sudo lxc-clone -o C1 -n C2
624+</command>
625+</screen>
626+
627+ <para>
628+ If <filename>/var/lib/lxc</filename> is a btrfs filesystem, then
629+ <command>lxc-clone</command> will create C2's filesystem as a snapshot of
630+ C1's. If the container's root filesystem is lvm backed, then you can
631+ specify the <emphasis>-s</emphasis> option to create the new rootfs as an LVM snapshot of the
632+ original as follows:
633+ </para>
634+
635+<screen>
636+<command>
637+sudo lxc-clone -s -o C1 -n C2
638+</command>
639+</screen>
640+
641+ <para>
642+ Both lvm and btrfs snapshots will provide fast cloning with very
643+ small initial disk usage.
644+ </para>
645+ </sect3>
646+
647+ <sect3 id="lxc-start-stop" status="review">
648+ <title>Starting and stopping</title>
649+
650+ <para>
651+ To start a container, use <command>lxc-start -n CN</command>. By default
652+ <command>lxc-start</command> will execute <filename>/sbin/init</filename>
653+ in the container. You can provide a different program to execute, plus
654+ arguments, as further arguments to <command>lxc-start</command>:
655+ </para>
656+
657+<screen>
658+<command>
659+sudo lxc-start -n container /sbin/init loglevel=debug
660+</command>
661+</screen>
662+
663+ <para>
664+ If you do not specify the <emphasis>-d</emphasis> (daemon) option, then you will see a
665+ console (on the container's <filename>/dev/console</filename>, see
666+ <xref linkend="lxc-consoles"/> for more information) on the terminal. If
667+ you specify the <emphasis>-d</emphasis> option, you will not see that console, and lxc-start
668+ will immediately exit with success - even if a later part of container startup
669+ has failed. You can use <command>lxc-wait</command> or
670+ <command>lxc-monitor</command> (see <xref
671+ linkend="lxc-monitoring"/>) to check on the success or failure of the
672+ container startup.
673+ </para>
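+
+    <para>
+      For example, to start a container in the background and then confirm
+      that it reached the RUNNING state, you might use:
+    </para>
+
+<screen>
+<command>
+sudo lxc-start -n CN -d
+sudo lxc-wait -n CN -s RUNNING
+</command>
+</screen>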
674+
675+ <para>
676+ To obtain LXC debugging information, use <emphasis>-o filename -l debuglevel</emphasis>,
677+ for instance:
678+ </para>
679+
680+<screen>
681+<command>
682+sudo lxc-start -o lxc.debug -l DEBUG -n container
683+</command>
684+</screen>
685+
686+ <para>
687+ Finally, you can specify configuration parameters inline using <emphasis>-s</emphasis>.
688+ However, it is generally recommended to place them in the container's
689+ configuration file instead. Likewise, an entirely alternate config
690+ file can be specified with the <emphasis>-f</emphasis> option, but this is not
691+ generally recommended.
692+ </para>
693+
694+ <para>
695+ While <command>lxc-start</command> runs the container's
696+ <filename>/sbin/init</filename>, <command>lxc-execute</command> uses a
697+ minimal init program called <command>lxc-init</command>, which attempts to
698+ mount <filename>/proc</filename>, <filename>/dev/mqueue</filename>, and
699+ <filename>/dev/shm</filename>, executes the programs specified on the
700+ command line, and waits for those to finish executing.
701+ <command>lxc-start</command> is intended to be used for <emphasis>system containers</emphasis>,
702+ while <command>lxc-execute</command> is intended for <emphasis>application
703+ containers</emphasis> (see <ulink url="https://www.ibm.com/developerworks/linux/library/l-lxc-containers/">
704+ this article</ulink> for more).
705+ </para>
706+
707+ <para>
708+ You can stop a container several ways. You can use <command>shutdown</command>,
709+ <command>poweroff</command> and <command>reboot</command> while logged into
710+ the container. To cleanly shut down a container externally (i.e. from the host), you can issue
711+ the <command>sudo lxc-shutdown -n CN</command> command. This takes an optional
712+ timeout value. If not specified, the command issues a SIGPWR signal to the
713+ container and immediately returns. If the option is used, as in
714+ <command>sudo lxc-shutdown -n CN -t 10</command>, then the command will wait the
715+ specified number of seconds for the container to cleanly shut down. Then,
716+ if the container is still running, it will kill it (and any running
717+ applications). You can also immediately kill the container (without any
718+ chance for applications to cleanly shut down) using
719+ <command>sudo lxc-stop -n CN</command>. Finally,
720+ <command>lxc-kill</command> can be used more generally to send any signal
721+ number to the container's init.
722+ </para>
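+
+    <para>
+      For instance, to ask CN to shut down cleanly, waiting up to 10 seconds
+      before it is killed, or to stop it immediately:
+    </para>
+
+<screen>
+<command>
+sudo lxc-shutdown -n CN -t 10
+sudo lxc-stop -n CN
+</command>
+</screen>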
723+
724+ <para>
725+ While the container is shutting down, you can expect to see some (harmless)
726+ error messages, as follows:
727+ </para>
728+
729+<screen>
730+$ sudo poweroff
731+[sudo] password for ubuntu:
732+
733+$
734+
735+Broadcast message from ubuntu@cn1
736+ (/dev/lxc/console) at 18:17 ...
737+
738+The system is going down for power off NOW!
739+ * Asking all remaining processes to terminate...
740+ ...done.
741+ * All processes ended within 1 seconds....
742+ ...done.
743+ * Deconfiguring network interfaces...
744+ ...done.
745+ * Deactivating swap...
746+ ...fail!
747+umount: /run/lock: not mounted
748+umount: /dev/shm: not mounted
749+mount: / is busy
750+ * Will now halt
751+</screen>
752+
753+ <para>
754+ A container can be frozen with <command>sudo lxc-freeze -n CN</command>. This
755+ will block all its processes until the container is later unfrozen using
756+ <command>sudo lxc-unfreeze -n CN</command>.
757+ </para>
758+
759+ </sect3>
760+
761+ <sect3 id="lxc-monitoring" status="review">
762+ <title>Monitoring container status </title>
763+
764+ <para>
765+ Two commands are available to monitor container state changes.
766+ <command>lxc-monitor</command> monitors one or more containers for any
767+ state changes. It takes a container name as usual with the <emphasis>-n</emphasis> option,
768+ but in this case the container name can be a posix regular expression to
769+ allow monitoring desirable sets of containers.
770+ <command>lxc-monitor</command> continues running as it prints container
771+ changes. <command>lxc-wait</command> waits for a specific state change and
772+ then exits. For instance,
773+ </para>
774+
775+
776+<screen>
777+<command>
778+sudo lxc-monitor -n cont[0-5]*
779+</command>
780+</screen>
781+
782+ <para>
783+ would print all state changes to any containers matching the
784+ listed regular expression, whereas
785+ </para>
786+
787+<screen>
788+<command>
789+sudo lxc-wait -n cont1 -s 'STOPPED|FROZEN'
790+</command>
791+</screen>
792+
793+ <para>
794+ will wait until container cont1 enters state STOPPED or state FROZEN
795+ and then exit.
796+ </para>
797+ </sect3>
798+
799+ <sect3 id="lxc-consoles" status="review">
800+ <title>Consoles</title>
801+
802+ <para>
803+ Containers have a configurable number of consoles. One always exists on
804+ the container's <filename>/dev/console.</filename> This is shown on the
805+ terminal from which you ran <command>lxc-start</command>, unless the <emphasis>-d</emphasis>
806+ option is specified. The output on <filename>/dev/console</filename> can
807+ be redirected to a file using the <emphasis>-c console-file</emphasis> option to
808+ <command>lxc-start</command>. The number of extra consoles is specified by
809+ the <command>lxc.tty</command> variable, and is usually set to 4. Those
810+ consoles are shown on <filename>/dev/ttyN</filename> (for 1 &lt;= N &lt;=
811+ 4). To log into console 3 from the host, use
812+ </para>
813+
814+<screen>
815+<command>
816+sudo lxc-console -n container -t 3
817+</command>
818+</screen>
819+
820+ <para>
821+ or if the <emphasis>-t N</emphasis> option is not specified, an unused console will be
822+ automatically chosen. To exit the console, use the escape sequence
823+ Ctrl-a q. Note that the escape sequence does not work in the console
824+ resulting from <command>lxc-start</command> without the <emphasis>-d</emphasis>
825+ option.
826+ </para>
827+
828+ <para>
829+ Each container console is actually a Unix98 pty in the host's (not the
830+ guest's) pty mount, bind-mounted over the guest's
831+ <filename>/dev/ttyN</filename> and <filename>/dev/console</filename>.
832+ Therefore, if the guest unmounts those or otherwise tries to access the
833+ actual character device <command>4:N</command>, it will not be serving
834+ getty to the LXC consoles. (With the default settings, the container will
835+ not be able to access that character device and getty will therefore fail.)
836+ This can easily happen when a boot script blindly mounts a new
837+ <filename>/dev</filename>.
838+ </para>
839+ </sect3>
840+
841+ <sect3 id="lxc-introspection" status="review">
842+ <title>Container Inspection</title>
843+
844+ <para>
845+ Several commands are available to gather information on existing
846+ containers. <command>lxc-ls</command> will report all existing containers
847+ in its first line of output, and all running containers in the second line.
848+ <command>lxc-list</command> provides the same information in a more verbose
849+ format, listing running containers first and stopped containers next.
850+ <command>lxc-ps</command> will provide lists of processes in containers.
851+ To provide <command>ps</command> arguments to <command>lxc-ps</command>,
852+ prepend them with <command>--</command>. For instance, for a listing of all
853+ processes in the container named plain, use
854+ </para>
855+
856+<screen>
857+<command>
858+sudo lxc-ps -n plain -- -ef
859+</command>
860+</screen>
861+
862+ <para>
863+ <command>lxc-info</command> provides the state of a container and the pid of its init
864+ process. <command>lxc-cgroup</command> can be used to query or set the values of a
865+ container's control group limits and information. This can be more convenient
866+ than interacting with the <command>cgroup</command> filesystem. For instance, to query
867+ the list of devices which a running container is allowed to access,
868+ you could use
869+ </para>
870+
871+<screen>
872+<command>
873+sudo lxc-cgroup -n CN devices.list
874+</command>
875+</screen>
876+
877+ <para>
878+ or to add mknod, read, and write access to all major-8 block devices (such as <filename>/dev/sda</filename>),
879+ </para>
880+
881+<screen>
882+<command>
883+sudo lxc-cgroup -n CN devices.allow "b 8:* rwm"
884+</command>
885+</screen>
886+
887+ <para>
888+ and, to limit it to 300M of RAM,
889+ </para>
890+
891+<screen>
892+<command>
893+sudo lxc-cgroup -n CN memory.limit_in_bytes 300000000
894+</command>
895+</screen>
896+
897+ <para>
898+ <command>lxc-netstat</command> executes <command>netstat</command> in the
899+ running container, giving you a glimpse of its network state.
900+ </para>
901+
902+ <para>
903+ <command>lxc-backup</command> will create backups of the root filesystems
904+ of all existing containers (except lvm-based ones), using
905+ <command>rsync</command> to back the contents up under
906+ <filename>/var/lib/lxc/CN/rootfs.backup.1</filename>. These backups can be
907+ restored using <command>lxc-restore.</command> However,
908+ <command>lxc-backup</command> and <command>lxc-restore</command> are
909+ fragile with respect to customizations and therefore their use is not
910+ recommended.
911+ </para>
912+
913+ </sect3>
914+
915+ <sect3 id="lxc-destroying" status="review">
916+ <title>Destroying containers</title>
917+
918+ <para>
919+ Use <command>lxc-destroy</command> to destroy an existing container.
920+ </para>
921+
922+<screen>
923+<command>
924+sudo lxc-destroy -n CN
925+</command>
926+</screen>
927+
928+ <para>
929+ If the container is running, <command>lxc-destroy</command> will exit with a message
930+ informing you that you can force stopping and destroying the container
931+ with
932+ </para>
933+
934+<screen>
935+<command>
936+sudo lxc-destroy -n CN -f
937+</command>
938+</screen>
939+
940+ </sect3>
941+
942+ <sect3 id="lxc-namespaces" status="review">
943+ <title>Advanced namespace usage</title>
944+
945+ <para>
946+ One of the Linux kernel features used by LXC to create containers is
947+ private namespaces. Namespaces allow a set of tasks to have private
948+ mappings of names to resources for things like pathnames and process
949+ IDs. (See <xref linkend="lxc-resources"/> for a link
950+ to more information). Unlike control groups and other mount features which
951+ are also used to create containers, namespaces cannot be manipulated using
952+ a filesystem interface. Therefore, LXC ships with the <command>lxc-unshare</command>
953+ program, which is mainly for testing. It provides the ability to create
954+ new tasks in private namespaces. For instance,
955+ </para>
956+
957+<screen>
958+<command>
959+sudo lxc-unshare -s 'MOUNT|PID' /bin/bash
960+</command>
961+</screen>
962+
963+ <para>
964+ creates a bash shell with private pid and mount namespaces.
965+ In this shell, you can do
966+ </para>
967+
968+<screen>
969+root@ubuntu:~# mount -t proc proc /proc
970+root@ubuntu:~# ps -ef
971+UID        PID  PPID  C STIME TTY          TIME CMD
972+root         1     0  6 10:20 pts/9    00:00:00 /bin/bash
973+root       110     1  0 10:20 pts/9    00:00:00 ps -ef
974+</screen>
975+
976+ <para>
977+ so that <command>ps</command> shows only the tasks in your new namespace.
978+ </para>
979+ </sect3>
980+
981+ <sect3 id="lxc-ephemeral" status="review">
982+ <title>Ephemeral containers</title>
983+
984+ <para>
985+ Ephemeral containers are one-time containers. Given an existing
986+ container CN, you can run a command in an ephemeral container
987+ created based on CN, with the host's jdoe user bound into the
988+ container, using:
989+ </para>
990+
991+<screen>
992+<command>
993+sudo lxc-start-ephemeral -b jdoe -o CN -- /home/jdoe/run_my_job
994+</command>
995+</screen>
996+
997+ <para>
998+ When the job is finished, the container will be discarded.
999+ </para>
1000+
1001+ </sect3>
1002+ <sect3 id="lxc-commands" status="review">
1003+ <title>Container Commands</title>
1004+
1005+<para>The following is a table of all container commands:</para>
1006+
1007+<table>
1008+<title> Container commands</title>
1009+<tgroup cols="2" rowsep="1">
1010+<thead>
1011+ <row>
1012+ <entry valign="left"><para>Command</para></entry>
1013+ <entry valign="left"><para>Synopsis</para></entry>
1014+ </row>
1015+</thead>
1016+<tbody>
1017+ <row>
1018+ <entry><para>lxc-attach </para></entry>
1019+ <entry><para>(NOT SUPPORTED) Run a command in a running container</para></entry>
1020+ </row>
1021+ <row>
1022+ <entry><para>lxc-backup </para></entry>
1023+ <entry><para>Back up the root filesystems of all containers (except lvm-backed ones)</para></entry>
1024+ </row>
1025+ <row>
1026+ <entry><para>lxc-cgroup </para></entry>
1027+ <entry><para>View and set container control group settings</para></entry>
1028+ </row>
1029+ <row>
1030+ <entry><para>lxc-checkconfig </para></entry>
1031+ <entry><para>Verify host support for containers</para></entry>
1032+ </row>
1033+ <row>
1034+ <entry><para>lxc-checkpoint </para></entry>
1035+ <entry><para>(NOT SUPPORTED) Checkpoint a running container</para></entry>
1036+ </row>
1037+ <row>
1038+ <entry><para>lxc-clone </para></entry>
1039+ <entry><para>Clone a new container from an existing one</para></entry>
1040+ </row>
1041+ <row>
1042+ <entry><para>lxc-console </para></entry>
1043+ <entry><para>Open a console in a running container</para></entry>
1044+ </row>
1045+ <row>
1046+ <entry><para>lxc-create </para></entry>
1047+ <entry><para>Create a new container</para></entry>
1048+ </row>
1049+ <row>
1050+ <entry><para>lxc-destroy </para></entry>
1051+ <entry><para>Destroy an existing container</para></entry>
1052+ </row>
1053+ <row>
1054+ <entry><para>lxc-execute </para></entry>
1055+ <entry><para>Run a command in a (not running) application container</para></entry>
1056+ </row>
1057+ <row>
1058+ <entry><para>lxc-freeze </para></entry>
1059+ <entry><para>Freeze a running container</para></entry>
1060+ </row>
1061+ <row>
1062+ <entry><para>lxc-info </para></entry>
1063+ <entry><para>Print information on the state of a container</para></entry>
1064+ </row>
1065+ <row>
1066+ <entry><para>lxc-kill </para></entry>
1067+ <entry><para>Send a signal to a container's init</para></entry>
1068+ </row>
1069+ <row>
1070+ <entry><para>lxc-list </para></entry>
1071+ <entry><para>List all containers</para></entry>
1072+ </row>
1073+ <row>
1074+ <entry><para>lxc-ls </para></entry>
1075+ <entry><para>List all containers with shorter output than lxc-list</para></entry>
1076+ </row>
1077+ <row>
1078+ <entry><para>lxc-monitor </para></entry>
1079+ <entry><para>Monitor state changes of one or more containers</para></entry>
1080+ </row>
1081+ <row>
1082+ <entry><para>lxc-netstat </para></entry>
1083+ <entry><para>Execute netstat in a running container</para></entry>
1084+ </row>
1085+ <row>
1086+ <entry><para>lxc-ps </para></entry>
1087+ <entry><para>View process info in a running container</para></entry>
1088+ </row>
1089+ <row>
1090+ <entry><para>lxc-restart </para></entry>
1091+ <entry><para>(NOT SUPPORTED) Restart a checkpointed container</para></entry>
1092+ </row>
1093+ <row>
1094+ <entry><para>lxc-restore </para></entry>
1095+ <entry><para>Restore containers from backups made by lxc-backup</para></entry>
1096+ </row>
1097+ <row>
1098+ <entry><para>lxc-setcap </para></entry>
1099+ <entry><para>(NOT RECOMMENDED) Set file capabilities on LXC tools</para></entry>
1100+ </row>
1101+ <row>
1102+ <entry><para>lxc-setuid </para></entry>
1103+ <entry><para>(NOT RECOMMENDED) Set or remove setuid bits on LXC tools</para></entry>
1104+ </row>
1105+ <row>
1106+ <entry><para>lxc-shutdown </para></entry>
1107+ <entry><para>Safely shut down a container</para></entry>
1108+ </row>
1109+ <row>
1110+ <entry><para>lxc-start </para></entry>
1111+ <entry><para>Start a stopped container</para></entry>
1112+ </row>
1113+ <row>
1114+ <entry><para>lxc-start-ephemeral </para></entry>
1115+ <entry><para>Start an ephemeral (one-time) container</para></entry>
1116+ </row>
1117+ <row>
1118+ <entry><para>lxc-stop </para></entry>
1119+ <entry><para>Immediately stop a running container</para></entry>
1120+ </row>
1121+ <row>
1122+ <entry><para>lxc-unfreeze </para></entry>
1123+ <entry><para>Unfreeze a frozen container</para></entry>
1124+ </row>
1125+ <row>
1126+ <entry><para>lxc-unshare </para></entry>
1127+ <entry><para>Testing tool to manually unshare namespaces</para></entry>
1128+ </row>
1129+ <row>
1130+ <entry><para>lxc-version </para></entry>
1131+ <entry><para>Print the version of the LXC tools</para></entry>
1132+ </row>
1133+ <row>
1134+ <entry><para>lxc-wait </para></entry>
1135+ <entry><para>Wait for a container to reach a particular state</para></entry>
1136+ </row>
1137+ </tbody>
1138+ </tgroup>
1139+</table>
1140+
1141+ </sect3>
1142+ </sect2>
1143+
1144+ <sect2 id="lxc-conf" status="review">
1145+ <title>Configuration File</title>
1146+
1147+ <para>
1148+ LXC containers are very flexible. The Ubuntu <application>lxc</application> package sets defaults
1149+ to make creation of Ubuntu system containers as simple as possible.
1150+ If you need more flexibility, this chapter will show how to fine-tune
1151+ your containers as you need.
1152+ </para>
1153+
1154+ <para>
1155+ Detailed information is available in the <command>lxc.conf(5)</command> man page.
1156+ Note that the default configurations created by the ubuntu templates
1157+ are reasonable for a system container and usually do not need
1158+ customization.
1159+ </para>
1160+
1161+ <sect3 id="lxc-conf-options" status="review">
1162+ <title>Choosing configuration files and options</title>
1163+
1164+ <para>
1165+ The container setup is controlled by the LXC configuration options.
1166+ Options can be specified at several points:
1167+ </para>
1168+
1169+ <itemizedlist>
1170+ <listitem><para>
1171+ During container creation, a configuration file can be specified.
1172+ However, creation templates often insert their own configuration
1173+ options, so we usually specify only network configuration options at
1174+ this point. For other configuration, it is usually better to edit the
1175+ configuration file after container creation.
1176+ </para></listitem>
1177+
1178+ <listitem><para>
1179+ The file <filename>/var/lib/lxc/CN/config</filename> is used at
1180+ container startup by default.
1181+ </para></listitem>
1182+
1183+ <listitem><para>
1184+ <command>lxc-start</command> accepts an alternate configuration file with
1185+ the <emphasis>-f filename</emphasis> option.
1186+ </para></listitem>
1187+
1188+ <listitem><para>
1189+ Specific configuration variables can be overridden at <command>lxc-start</command>
1190+ using <emphasis>-s key=value</emphasis>, as shown below. It is generally better to edit the container
1191+ configuration file.
1192+ </para></listitem>
1193+
1194+ </itemizedlist>
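+
+   <para>
+     For example, to start CN with a one-off memory limit without editing
+     its configuration file (a sketch; the key is described in
+     <xref linkend="lxc-conf-cgroup"/>):
+   </para>
+
+<screen>
+<command>
+sudo lxc-start -n CN -s lxc.cgroup.memory.limit_in_bytes=320000000
+</command>
+</screen>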
1195+
1196+ </sect3>
1197+
1198+ <sect3 id="lxc-conf-net" status="review">
1199+ <title>Network Configuration</title>
1200+
1201+ <para>
1202+ Container networking in LXC is very flexible. It is triggered by
1203+ the <command>lxc.network.type</command> configuration file entries.
1204+ If no such entries exist, then the container will share the host's
1205+ networking stack. Services and connections started in the container
1206+ will use the host's IP address.
1207+ If at least one <command>lxc.network.type</command> entry is present, then the container
1208+ will have a private (layer 2) network stack. It will have its own
1209+ network interfaces and firewall rules. There are several options
1210+ for <command>lxc.network.type</command>:
1211+ </para>
1212+
1213+ <itemizedlist>
1214+ <listitem><para>
1215+ <command>lxc.network.type=empty</command>:
1216+ The container will have no network interfaces other than loopback.
1217+ </para></listitem>
1218+
1219+ <listitem><para>
1220+ <command>lxc.network.type=veth</command>:
1221+ This is the default when using the ubuntu or ubuntu-cloud templates,
1222+ and creates a veth network tunnel. One end of this tunnel
1223+ becomes the network interface inside the container. The other end
1224+ is attached to a bridge on the host. Any number of such tunnels
1225+ can be created by adding more <command>lxc.network.type=veth</command>
1226+ entries in the container configuration file. The bridge to which the
1227+ host end of the tunnel will be attached is specified with
1228+ <command>lxc.network.link = lxcbr0</command>.
1229+ </para></listitem>
1230+
1231+ <listitem><para>
1232+ <command>lxc.network.type=phys</command>:
1233+ A physical network interface (e.g. eth2) is passed into the container.
1234+ </para></listitem>
1235+ </itemizedlist>
1236+
1237+ <para>
1238+ Two other options are to
1239+ use vlan or macvlan, however their use is more complicated and is
1240+ not described here. A few other networking options exist, which are combined in the example following this list:
1241+ </para>
1242+
1243+ <itemizedlist>
1244+ <listitem><para>
1245+ <command>lxc.network.flags</command> can only be set to <emphasis>up</emphasis> and ensures that the network interface is up.
1246+ </para></listitem>
1247+
1248+ <listitem><para>
1249+ <command>lxc.network.hwaddr</command> specifies a MAC address to assign to the
1250+ NIC inside the container.
1251+ </para></listitem>
1252+
1253+ <listitem><para>
1254+ <command>lxc.network.ipv4</command> and <command>lxc.network.ipv6</command>
1255+ set the respective IP addresses, if those should be static.
1256+ </para></listitem>
1257+
1258+ <listitem><para>
1259+ <command>lxc.network.name</command> specifies a name to assign inside the
1260+ container. If this is not specified, a good default (e.g. eth0 for the
1261+ first NIC) is chosen.
1262+ </para></listitem>
1263+
1264+ <listitem><para>
1265+ <command>lxc.network.script.up</command> specifies a script to be called
1266+ after the host side of the networking has been set up. See the
1267+ <command>lxc.conf(5)</command> manual page for details.
1268+ </para></listitem>
1269+ </itemizedlist>
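+
+   <para>
+     Putting these options together, a typical veth-based configuration
+     might look like the following sketch (the MAC and IP values are
+     examples only):
+   </para>
+
+<screen>
+<command>
+lxc.network.type = veth
+lxc.network.link = lxcbr0
+lxc.network.flags = up
+lxc.network.name = eth0
+lxc.network.hwaddr = 00:16:3e:12:34:56
+lxc.network.ipv4 = 10.0.3.100/24
+</command>
+</screen>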
1270+
1271+ </sect3>
1272+
1273+ <sect3 id="lxc-conf-cgroup" status="review">
1274+ <title>Control group configuration</title>
1275+
1276+ <para>
1277+ Cgroup options can be specified using <command>lxc.cgroup</command>
1278+ entries. <command>lxc.cgroup.subsystem.item = value</command> instructs
1279+ LXC to set cgroup <command>subsystem</command>'s <command>item</command> to
1280+ <command>value</command>. It is perhaps simpler to realize that this will
1281+ simply write <command>value</command> to the file <command>item</command>
1282+ for the container's control group for subsystem
1283+ <command>subsystem</command>. For instance, to set the memory limit to
1284+ 320M, you could add
1285+ </para>
1286+
1287+<screen>
1288+<command>
1289+lxc.cgroup.memory.limit_in_bytes = 320000000
1290+</command>
1291+</screen>
1292+
1293+ <para>
1294+ which will cause 320000000 to be written to the file
1295+ <filename>/sys/fs/cgroup/memory/lxc/CN/memory.limit_in_bytes</filename>.
1296+ </para>
1297+ </sect3>
1298+
1299+ <sect3 id="lxc-conf-mounts" status="review">
1300+ <title>Rootfs, mounts and fstab</title>
1301+
1302+ <para>
1303+ An important part of container setup is the mounting of various
1304+ filesystems into place. The following is an example configuration file
1305+ excerpt demonstrating the commonly used configuration options:
1306+ </para>
1307+
1308+<screen>
1309+<command>
1310+lxc.rootfs = /var/lib/lxc/CN/rootfs
1311+lxc.mount.entry=proc /var/lib/lxc/CN/rootfs/proc proc nodev,noexec,nosuid 0 0
1312+lxc.mount = /var/lib/lxc/CN/fstab
1313+</command>
1314+</screen>
1315+
1316+ <para>
1317+ The first line says that the container's root filesystem is already mounted
1318+ at <filename>/var/lib/lxc/CN/rootfs</filename>. If the filesystem is a
1319+ block device (such as an LVM logical volume), then the path to the block
1320+ device must be given instead.
1321+ </para>
1322+
1323+ <para>
1324+ Each <command>lxc.mount.entry</command> line should contain an item to
1325+ mount in valid fstab format. The target directory should be prefixed by
1326+ <filename>/var/lib/lxc/CN/rootfs</filename>, even if
1327+ <command>lxc.rootfs</command> points to a block device.
1328+ </para>
1329+
1330+ <para>
1331+ Finally, <command>lxc.mount</command> points to a file, in fstab format,
1332+ containing further items to mount. Note that all of these entries will be
1333+ mounted by the host before the container init is started. In this way it
1334+ is possible to bind mount various directories from the host into the
1335+ container.
1336+ </para>
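+
+   <para>
+     For example, to bind mount the host's <filename>/srv/data</filename>
+     into container CN (the directory name is illustrative; the target must
+     exist under the container's rootfs), you could add this line to
+     <filename>/var/lib/lxc/CN/fstab</filename>:
+   </para>
+
+<screen>
+<command>
+/srv/data /var/lib/lxc/CN/rootfs/srv/data none bind 0 0
+</command>
+</screen>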
1337+ </sect3>
1338+
1339+ <sect3 id="lxc-conf-other" status="review">
1340+ <title>Other configuration options</title>
1341+
1342+ <itemizedlist>
1343+
1344+ <listitem>
1345+ <para>
1346+ <command>lxc.cap.drop</command> can be used to prevent the container from having
1347+ or ever obtaining the listed capabilities. For instance, including
1348+ </para>
1349+
1350+<screen>
1351+<command>
1352+lxc.cap.drop = sys_admin
1353+</command>
1354+</screen>
1355+
1356+ <para>
1357+ will prevent the container from mounting filesystems, as well as all other
1358+ actions which require cap_sys_admin. See the <command>capabilities(7)</command>
1359+ manual page for a list of capabilities and their meanings.
1360+ </para>
1361+ </listitem>
1362+
1363+ <listitem><para>
1364+ <command>lxc.console=/path/to/consolefile</command> will cause console
1365+ messages to be written to the specified file (see the combined example after this list).
1366+ </para></listitem>
1367+
1368+ <listitem><para>
1369+ <command>lxc.arch</command> specifies the architecture for the container, for instance
1370+ x86, or x86_64.
1371+ </para></listitem>
1372+
1373+ <listitem><para>
1374+ <command>lxc.tty=5</command> specifies that 5 consoles (in addition to
1375+ <filename>/dev/console</filename>) should be created. That is, consoles
1376+ will be available on <filename>/dev/tty1</filename> through
1377+ <filename>/dev/tty5</filename>. The Ubuntu templates set this value to 4.
1378+ </para></listitem>
1379+
1380+ <listitem>
1381+ <para>
1382+ <command>lxc.pts=1024</command> specifies that the container should have a
1383+ private (Unix98) devpts filesystem mount. If this is not specified, then
1384+ the container will share <filename>/dev/pts</filename> with the host, which
1385+ is rarely desired. The number 1024 means that 1024 ptys should be allowed
1386+ in the container; however, this number is currently ignored. Before
1387+ starting the container init, LXC will do (essentially) a
1388+ </para>
1389+
1390+<screen>
1391+<command>
1392+mount -t devpts -o newinstance devpts /dev/pts
1393+</command>
1394+</screen>
1395+
1396+ <para>
1397+ inside the container. It is important to realize that the container should
1398+ not mount devpts filesystems of its own. It may safely do bind or move
1399+ mounts of its mounted <filename>/dev/pts</filename>. But if it does
1400+ </para>
1401+
1402+<screen>
1403+<command>
1404+mount -t devpts devpts /dev/pts
1405+</command>
1406+</screen>
1407+
1408+ <para>
1409+ it will remount the host's devpts
1410+ instance. If it adds the newinstance mount option, then it will mount a new
1411+ private (empty) instance. In neither case will it remount the instance
1412+ which was set up by LXC. For this reason, and to prevent the container
1413+ from using the host's ptys, the default apparmor policy will not allow
1414+ containers to mount devpts filesystems after the container's init has been
1415+ started.
1416+ </para>
1417+ </listitem>
1418+
1419+ <listitem><para>
1420+ <command>lxc.devttydir</command> specifies a directory under
1421+ <filename>/dev</filename> in which LXC will create its console devices. If
1422+ this option is not specified, then the ptys will be bind-mounted over
1423+ <filename>/dev/console</filename> and <filename>/dev/ttyN</filename>.
1424+ However, rare package updates may try to blindly <command>rm -f</command> and
1425+ then <command>mknod</command> those devices. They will fail (because the file
1426+ has been bind-mounted), causing the package update to fail. When
1427+ <command>lxc.devttydir</command> is set to <filename>lxc</filename>, for
1428+ instance, LXC will bind-mount the console ptys onto
1429+ <filename>/dev/lxc/console</filename> and <filename>/dev/lxc/ttyN</filename>,
1430+ and subsequently symbolically link them to <filename>/dev/console</filename>
1431+ and <filename>/dev/ttyN</filename>. This allows the package updates to
1432+ succeed, at the risk of making future gettys on those consoles fail until
1433+ the next reboot. This problem will ideally be solved with device namespaces.
1434+ </para></listitem>
1435+
1436+ </itemizedlist>
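+
+ <para>
+ Putting several of these options together, a hypothetical configuration
+ excerpt (the console file path is illustrative) might look like:
+ </para>
+
+<screen>
+<command>
+# the console log path below is illustrative
+lxc.console = /var/log/lxc/CN.console
+lxc.arch = x86_64
+lxc.tty = 4
+lxc.pts = 1024
+lxc.devttydir = lxc
+</command>
+</screen>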
1437+
1438+ </sect3>
1439+
1440+ </sect2>
1441+
1442+ <sect2 id="lxc-container-updates" status="review">
1443+ <title>Updates in Ubuntu containers</title>
1444+
1445+ <para>
1446+ Because of the limitations placed on containers, package upgrades can
1447+ sometimes fail. For instance, a package install or upgrade might fail if
1448+ it is not allowed to create or open a block device. This often blocks all
1449+ future upgrades until the issue is resolved. In some cases, you can work
1450+ around this by chrooting into the container, to avoid the container
1451+ restrictions, and completing the upgrade in the chroot.
1452+ </para>
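+
+ <para>
+ For instance, assuming a container named CN under the default
+ <filename>/var/lib/lxc</filename> path, a minimal sketch of such a
+ workaround is:
+ </para>
+
+<screen>
+<command>
+sudo chroot /var/lib/lxc/CN/rootfs
+# now inside the chroot, free of the container restrictions:
+apt-get update
+apt-get dist-upgrade
+exit
+</command>
+</screen>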
1453+
1454+ <para>
1455+ Some of the specific things known to occasionally impede package
1456+ upgrades include:
1457+ </para>
1458+
1459+ <itemizedlist>
1460+ <listitem><para>
1461+ The container modifications performed when creating containers with the
1462+ <command>--trim</command> option.
1463+ </para></listitem>
1464+ <listitem><para>
1465+ Actions performed by <emphasis role="italic">lxcguest</emphasis>. For instance, because
1466+ <filename>/lib/init/fstab</filename> is bind-mounted from another file,
1467+ mountall upgrades which insist on replacing that file can fail.
1468+ </para></listitem>
1469+ <listitem><para>
1470+ The over-mounting of console devices with ptys from the host can
1471+ cause trouble with udev upgrades.
1472+ </para></listitem>
1473+ <listitem><para>
1474+ AppArmor policy and devices cgroup restrictions can prevent
1475+ package upgrades from performing certain actions.
1476+ </para></listitem>
1477+ <listitem><para>
1478+ Capabilities dropped by use of <command>lxc.cap.drop</command> can likewise stop package
1479+ upgrades from performing certain actions.
1480+ </para></listitem>
1481+ </itemizedlist>
1482+ </sect2>
1483+
1484+ <sect2 id="lxc-libvirt" status="review">
1485+ <title>Libvirt LXC</title>
1486+
1487+ <para>
1488+Libvirt is a powerful hypervisor management solution with which you can
1489+administer QEMU, Xen, and LXC virtual machines, both locally and remotely.
1490+The libvirt LXC driver is a separate implementation from what we normally
1491+call <emphasis>LXC</emphasis>. A few differences include:
1492+ </para>
1493+
1494+ <itemizedlist>
1495+ <listitem><para>
1496+ Configuration is stored in XML format
1497+ </para></listitem>
1498+ <listitem><para>
1499+ There are no tools to facilitate container creation
1500+ </para></listitem>
1501+ <listitem><para>
1502+ By default there is no console on <filename>/dev/console</filename>
1503+ </para></listitem>
1504+ <listitem><para>
1505+ There is no support (yet) for container reboot or full shutdown
1506+ </para></listitem>
1507+ </itemizedlist>
1508+
1509+<!--
1510+ <sect3 id="lxc-libvirt-virtinst" status="review">
1511+ <title>virt-install</title>
1512+
1513+ <para>
1514+ virt-install can be used to create an LXC container. (test and
1515+ verify). Serge hasn't gotten this to work.
1516+ </para>
1517+
1518+ </sect3>
1519+ -->
1520+
1521+ <sect3 id="lxc-libvirt-convert" status="review">
1522+ <title>Converting a LXC container to libvirt-lxc</title>
1523+
1524+ <para>
1525+
1526+ <xref linkend="lxc-creation"/> showed how to create LXC containers.
1527+ If you've created a valid LXC container in this way, you can
1528+ manage it with libvirt. Fetch a sample XML file:
1529+ </para>
1530+
1531+<screen>
1532+<command>
1533+wget http://people.canonical.com/~serge/o1.xml
1534+</command>
1535+</screen>
1536+
1537+ <para>
1538+ Edit this file to replace the container name and root
1539+ filesystem locations. Then you can define the container with:
1540+ </para>
1541+
1542+<screen>
1543+<command>
1544+virsh -c lxc:/// define o1.xml
1545+</command>
1546+</screen>
1547+ </sect3>
1548+
1549+ <sect3 id="lxc-libvirt-fromcloud" status="review">
1550+ <title>Creating a container from cloud image</title>
1551+
1552+ <para>
1553+If you prefer to create a pristine new container just for libvirt LXC,
1554+you can download an Ubuntu cloud image, extract it, and point a libvirt
1555+LXC XML file at it. For instance, find the URL for a root tarball
1556+for the latest daily Ubuntu 12.04 LTS cloud image using
1557+ </para>
1558+
1559+<screen>
1560+<command>
1561+url1=`ubuntu-cloudimg-query precise daily amd64 --format "%{url}\n"`  # or i386
1562+url=`echo $url1 | sed -e 's/\.tar\.gz$/-root&/'`
1563+wget $url
1564+filename=`basename $url`
1565+</command>
1566+</screen>
1567+
1568+ <para>
1569+ Extract the downloaded tarball into a new directory, for instance
1570+ </para>
1571+
1572+<screen>
1573+<command>
1574+mkdir $HOME/c1
1575+sudo tar zxf $filename -C $HOME/c1
1577+</command>
1578+</screen>
1579+
1580+ <para>
1581+ Download the XML template
1582+ </para>
1583+
1584+<screen>
1585+<command>
1586+wget http://people.canonical.com/~serge/o1.xml
1587+</command>
1588+</screen>
1589+
1590+ <para>
1591+ In the XML template, replace the name o1 with c1 and the source directory
1592+ <filename>/var/lib/lxc/o1/rootfs</filename> with
1593+ <filename>$HOME/c1</filename>. Then define the container using
1594+ </para>
1595+
1596+<screen>
1597+<command>
1598+virsh -c lxc:/// define o1.xml
1599+</command>
1600+</screen>
1601+
1602+ </sect3>
1603+
1604+ <sect3 id="lxc-libvirt-interacting" status="review">
1605+ <title>Interacting with libvirt containers</title>
1606+
1607+ <para>
1608+ As we've seen, you can create a libvirt-lxc container using
1609+ </para>
1610+
1611+<screen>
1612+<command>
1613+virsh -c lxc:/// define container.xml
1614+</command>
1615+</screen>
1616+
1617+ <para>
1618+ To start a container called <emphasis>container</emphasis>, use
1619+ </para>
1620+
1621+<screen>
1622+<command>
1623+virsh -c lxc:/// start container
1624+</command>
1625+</screen>
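+
+ <para>
+ To list the defined containers and their current state, use
+ </para>
+
+<screen>
+<command>
+virsh -c lxc:/// list --all
+</command>
+</screen>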
1626+
1627+ <para>
1628+ To stop a running container, use
1629+ </para>
1630+
1631+<screen>
1632+<command>
1633+virsh -c lxc:/// destroy container
1634+</command>
1635+</screen>
1636+
1637+ <para>
1638+ Note that whereas the <command>lxc-destroy</command> command deletes the
1639+ container, the <command>virsh destroy</command> command stops a running
1640+ container. To delete the container definition, use
1641+ </para>
1642+
1643+<screen>
1644+<command>
1645+virsh -c lxc:/// undefine container
1646+</command>
1647+</screen>
1648+
1649+ <para>
1650+ To get a console to a running container, use
1651+ </para>
1652+
1653+<screen>
1654+<command>
1655+virsh -c lxc:/// console container
1656+</command>
1657+</screen>
1658+
1659+ <para>
1660+ Exit the console by pressing Ctrl-].
1661+ </para>
1662+
1663+ </sect3>
1664+
1665+ </sect2>
1666+
1667+ <sect2 id="lxc-guest" status="review">
1668+ <title>The lxcguest package</title>
1669+
1670+ <para>
1671+ In the 11.04 (Natty) and 11.10 (Oneiric) releases of Ubuntu, a package was introduced called
1672+ <emphasis role="italic">lxcguest</emphasis>. An unmodified root image could not be safely booted inside a
1673+ container, but an image with the lxcguest package installed could be
1674+ booted as a container, on bare hardware, or in a Xen, KVM, or VMware virtual
1675+ machine.
1676+ </para>
1677+
1678+ <para>
1679+ As of the 12.04 LTS release, the work previously done by the lxcguest package
1680+ was pushed into the core packages, and the lxcguest package was removed.
1681+ As a result, an unmodified 12.04 LTS image can be booted as a
1682+ container, on bare hardware, or in a Xen, KVM, or VMware virtual machine.
1683+ To run an older release as a container, lxcguest must still be installed.
1684+ </para>
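+
+ <para>
+ For instance, to prepare an 11.10 image, install the package inside the
+ booted guest (or in a chroot of its root filesystem) with:
+ </para>
+
+<screen>
+<command>
+sudo apt-get install lxcguest
+</command>
+</screen>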
1685+
1686+ </sect2>
1687+
1688+ <sect2 id="lxc-security" status="review">
1689+ <title>Security</title>
1690+
1691+ <para>
1692+ A namespace maps IDs to resources. If a container is given no ID with
1693+ which to reference a resource, then the resource is protected. This
1694+ is the basis of some of the security afforded to container users. For
1695+ instance, IPC namespaces are completely isolated. Other namespaces,
1696+ however, have various <emphasis role="italic">leaks</emphasis> which allow privilege to be
1697+ inappropriately exerted from a container into another container or to
1698+ the host.
1699+ </para>
1700+
1701+ <para>
1702+ By default, LXC containers are started under an AppArmor policy to
1703+ restrict some actions. However, while stronger security is a goal
1704+ for future releases, in 12.04 LTS the goal of the AppArmor policy is not
1705+ to stop malicious actions but rather to stop accidental harm to the
1706+ host by the guest.
1707+ </para>
1708+
1709+ <para>
1710+ See the <ulink url="http://wiki.ubuntu.com/LxcSecurity">LXC security</ulink>
1711+ wiki page for more up-to-date information.
1712+ </para>
1713+
1714+ <sect3 id="lxc-seccomp" status="review">
1715+ <title>Exploitable system calls</title>
1716+
1717+ <para>
1718+ It is a core container feature that containers share a kernel with the
1719+ host. Therefore, if the kernel contains any exploitable system calls,
1720+ the container can exploit these as well. Once the container controls the
1721+ kernel, it can fully control any resource known to the host.
1722+ </para>
1723+
1724+ </sect3>
1725+ </sect2>
1726+
1727+ <sect2 id="lxc-resources" status="review">
1728+ <title>Resources</title>
1729+ <itemizedlist>
1730+
1731+ <listitem>
1732+ <para>
1733+ The DeveloperWorks article <ulink url="https://www.ibm.com/developerworks/linux/library/l-lxc-containers/">LXC: Linux container tools</ulink> was an early introduction to the use of containers.
1734+ </para>
1735+ </listitem>
1736+
1737+ <listitem>
1738+ <para>
1739+ The <ulink url="http://www.ibm.com/developerworks/linux/library/l-lxc-security/index.html">Secure Containers Cookbook</ulink> demonstrated the use of security modules to make containers more secure.
1740+ </para>
1741+ </listitem>
1742+
1743+ <listitem>
1744+ <para>
1745+ Manual pages referenced above can be found at:
1746+<programlisting>
1747+<ulink url="http://manpages.ubuntu.com/manpages/en/man7/capabilities.7.html">capabilities</ulink>
1748+<ulink url="http://manpages.ubuntu.com/manpages/en/man5/lxc.conf.5.html">lxc.conf</ulink>
1749+</programlisting>
1750+ </para>
1751+ </listitem>
1752+
1753+ <listitem>
1754+ <para>
1755+ The upstream LXC project is hosted at <ulink url="http://lxc.sf.net">Sourceforge</ulink>.
1756+ </para>
1757+ </listitem>
1758+
1759+ <listitem>
1760+ <para>
1761+ LXC security issues are listed and discussed at <ulink url="http://wiki.ubuntu.com/LxcSecurity">the LXC Security wiki page</ulink>.
1762+ </para>
1763+ </listitem>
1764+
1765+ <listitem>
1766+ <para> For more on namespaces in Linux, see: S. Bhattiprolu, E. W. Biederman, S. E. Hallyn, and D. Lezcano. Virtual Servers and Checkpoint/Restart in Mainstream Linux. SIGOPS Operating Systems Review, 42(5), 2008.</para>
1767+ </listitem>
1768+
1769+ </itemizedlist>
1770+ </sect2>
1771+ </sect1>
1772 </chapter>
