Merge lp:~serge-hallyn/serverguide/lxc-trusty-update into lp:serverguide/trunk

Proposed by Serge Hallyn on 2014-02-10
Status: Merged
Approved by: Doug Smythies on 2014-02-12
Approved revision: 185
Merged at revision: 185
Proposed branch: lp:~serge-hallyn/serverguide/lxc-trusty-update
Merge into: lp:serverguide/trunk
Diff against target: 2377 lines (+654/-1605)
1 file modified
serverguide/C/virtualization.xml (+654/-1605)
To merge this branch: bzr merge lp:~serge-hallyn/serverguide/lxc-trusty-update
Reviewer: Doug Smythies
Date Requested: 2014-02-10
Status: Approve (2014-02-12)
Review via email: mp+205671@code.launchpad.net

Description of the change

This updates the lxc server guide for trusty. I'll continue to proofread and update, but don't want to hang onto my changes out of tree for too long.

Doug Smythies (dsmythies) wrote :

Serge, thanks for your work on this. In the end, I am not a subject matter expert on this, so can only check some things.

Currently, the code does not validate. It must validate. Please use this as a quick how-to reference:

https://wiki.ubuntu.com/DocumentationTeam/SystemDocumentation/UbuntuServerGuide

Here is a cut and paste from my computer:

doug@s15:~/sguide-1404/saucy$ bzr pull lp:serverguide
Enter passphrase for key '/home/doug/.ssh/id_rsa':
No revisions or tags to pull.
doug@s15:~/sguide-1404/saucy$ scripts/validate.sh serverguide/C/serverguide.xml
 --Validating serverguide/C/serverguide.xml ...
doug@s15:~/sguide-1404/saucy$ bzr merge lp:~serge-hallyn/serverguide/lxc-trusty-update
Enter passphrase for key '/home/doug/.ssh/id_rsa':
 M serverguide/C/virtualization.xml
All changes applied successfully.
doug@s15:~/sguide-1404/saucy$ scripts/validate.sh serverguide/C/serverguide.xml
 --Validating serverguide/C/serverguide.xml ...
virtualization.xml:2168: element xref: validity error : IDREF attribute linkend references an unknown ID "lxc-conf-other"
virtualization.xml:2382: element xref: validity error : IDREF attribute linkend references an unknown ID "lxc-conf-other"
Document serverguide/C/serverguide.xml does not validate

review: Needs Fixing
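
(For anyone chasing the same class of failure: the dangling cross-references named by the validator can be located with a quick grep before re-running validate.sh. The command below is illustrative; the validator output above already gives the two offending line numbers.)

$ grep -n 'linkend="lxc-conf-other"' serverguide/C/virtualization.xml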
184. By Serge Hallyn on 2014-02-11

remove dead links

Serge Hallyn (serge-hallyn) wrote :

Thanks, Doug. Updated.

Doug Smythies (dsmythies) wrote :

Hi Serge, This is really great work. I really wish all subject matter experts would help to make the serverguide great and up to date.

I do have some typos and such to point out. However, if you agree and you are busy, I can fix them.

Line 1511: lxc commands a sthe root user; or unprivileged, by running the
"as the"

Line 1584: in Ubuntu Trusty, they are by default offered a range of userids.
Can "Trusty" be deleted. Why? Because it will become stale in future and nobody wants the burden of changing it. Also, I don't think the use of the automatic release name subsitution is appropriate here. Maybe it needs something similar to further below "Starting with Ubuntu 14.04..."

Line 1583: which both container "0 0 429496729". When new users are created
"contain" and the number seems strange. Shouldn't it be 4294967295? (I do not know, I'm just asking.)

Line 2039: (XXX point to some page listing the bugs; ask apw)
Huh??? I think this must have been a note-to-self type of thing. Do we want a link to a page that might soon become obsolete or unmaintained?

Line 2351: by mapping root in the container to un unprivileged host userid. This
"an"

Line 2427: point/Restart in Mainstream Linux. SIGOPS Op- erating Systems
"Operating" (I realize that this one actually wasn't newly added.)

review: Needs Fixing
185. By Serge Hallyn on 2014-02-12

address typos found by Doug

Serge Hallyn (serge-hallyn) wrote :

Thanks, Doug. Update pushed.

Doug Smythies (dsmythies) wrote :

Serge: Do you have any comment on the /proc/self/uid_map and /proc/self/gid_map range comment I made above? I only ask again because I get hundreds of hits for a search of "/proc/self/uid_map 4294967295", but zero hits for a search of "/proc/self/uid_map 429496729".

For example:
https://lists.linuxcontainers.org/pipermail/lxc-devel/2013-October/005854.html
(craftily selected because you wrote it.)

If it is supposed to be 4294967295, I'll change it.

Do you mind if I change this:

When new users are created
in Ubuntu 14.04, they are by default offered a range of userids.
The list of assigned ids can be seen in the files

To This:

As of Ubuntu 14.04, when new users are created they are by default offered a range of userids.
The list of assigned ids can be seen in the files

Serge Hallyn (serge-hallyn) wrote :

Quoting Doug Smythies (<email address hidden>):
> Serge: Do you have any comment on the /proc/self/uid_map and
> /proc/self/gid_map range comment I made above? I only ask again
> because I get hundreds of hits for a search of "/proc/self/uid_map
> 4294967295", but zero hits for a search of "/proc/self/uid_map
> 429496729".

Sorry, I misunderstood your comment :) I thought you were asking
whether the '0 0' needed to be dropped.

You're right, it should in fact be 4294967295. Cut-paste has failed
me.

> If it is supposed to be 4294967295, I'll change it.

Thanks.

> Do you mind if I change this:
>
> When new users are created
> in Ubuntu 14.04, they are by default offered a range of userids.
> The list of assigned ids can be seen in the files
>
> To This:
>
> As of Ubuntu 14.04, when new users are created they are by default offered a range of userids.
> The list of assigned ids can be seen in the files

Nope, that's great.

thanks,
-serge
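
(For the record, both the initial-namespace mapping and a user's subordinate-id allotment can be checked from a shell. The output below is what a typical 14.04 host shows; the user name and the 65536-id range are illustrative useradd defaults.)

$ cat /proc/self/uid_map
         0          0 4294967295
$ grep user1 /etc/subuid /etc/subgid
/etc/subuid:user1:100000:65536
/etc/subgid:user1:100000:65536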

Doug Smythies (dsmythies) wrote :

Approving this one, and thanks again. There will be two trivial edits, as discussed above, between this MP and what I actually commit.

review: Approve

Preview Diff

1=== modified file 'serverguide/C/virtualization.xml'
2--- serverguide/C/virtualization.xml 2014-01-19 20:44:48 +0000
3+++ serverguide/C/virtualization.xml 2014-02-12 17:53:19 +0000
4@@ -1464,10 +1464,7 @@
5 are two pre-existing, independently developed implementations of
6 containers-like functionality for Linux. In fact, containers came about as
7 a result of the work to upstream the vserver and OpenVZ functionality.
8- Some vserver and OpenVZ functionality is still missing in containers,
9- however containers can <emphasis>boot</emphasis> many Linux distributions
10- and have the advantage that they can be used with an un-modified upstream
11- kernel.</para>
12+ </para>
13
14 <para>There are two user-space implementations of containers, each
15 exploiting the same kernel features. Libvirt allows the use of containers
16@@ -1479,8 +1476,9 @@
17 confusion.</para>
18
19 <para>In this document we will mainly describe the
20- <application>lxc</application> package. Toward the end, we will describe
21- how to use the libvirt LXC driver.</para>
22+ <application>lxc</application> package. Use of libvirt-lxc is not
23+ generally recommended due to a lack of Apparmor protection for
24+ libvirt-lxc containers.</para>
25
26 <para>In this document, a container name will be shown as CN, C1, or
27 C2.</para>
28@@ -1491,214 +1489,398 @@
29 <para>The <application>lxc</application> package can be installed
30 using</para>
31
32- <screen>
33+<screen>
34 <command>
35 sudo apt-get install lxc
36 </command>
37 </screen>
38
39 <para>This will pull in the required and recommended dependencies,
40- including cgroup-lite, lvm2, and debootstrap. To use libvirt-lxc,
41- install libvirt-bin. LXC and libvirt-lxc can be installed and used at
42- the same time.</para>
43- </sect2>
44-
45- <sect2 id="lxc-hostsetup" status="review">
46- <title>Host Setup</title>
47-
48- <sect3 id="lxc-layout" status="review">
49- <title>Basic layout of LXC files</title>
50-
51- <para>Following is a description of the files and directories which
52- are installed and used by LXC.</para>
53-
54- <itemizedlist>
55- <listitem>
56- <para>There are two upstart jobs:</para>
57-
58- <itemizedlist>
59- <!-- nested list -->
60-
61- <listitem>
62- <para><filename>/etc/init/lxc-net.conf:</filename> is an
63- optional job which only runs if <filename>
64- /etc/default/lxc</filename> specifies USE_LXC_BRIDGE (true by
65- default). It sets up a NATed bridge for containers to
66- use.</para>
67- </listitem>
68-
69- <listitem>
70- <para><filename>/etc/init/lxc.conf:</filename> runs if
71- LXC_AUTO (true by default) is set to true in
72- <filename>/etc/default/lxc</filename>. It looks for entries
73- under <filename>/etc/lxc/auto/</filename> which are symbolic
74- links to configuration files for the containers which should
75- be started at boot.</para>
76- </listitem>
77- </itemizedlist>
78- </listitem>
79-
80- <listitem>
81- <para><filename>/etc/lxc/lxc.conf:</filename> There is a default
82- container creation configuration file,
83- <filename>/etc/lxc/lxc.conf</filename>, which directs containers
84- to use the LXC bridge created by the lxc-net upstart job. If no
85- configuration file is specified when creating a container, then
86- this one will be used.</para>
87- </listitem>
88-
89- <listitem>
90- <para>Examples of other container creation configuration files are
91- found under <filename>/usr/share/doc/lxc/examples</filename>.
92- These show how to create containers without a private network, or
93- using macvlan, vlan, or other network layouts.</para>
94- </listitem>
95-
96- <listitem>
97- <para>The various container administration tools are found under
98- <filename>/usr/bin</filename>.</para>
99- </listitem>
100-
101- <listitem>
102- <para><filename>/usr/lib/lxc/lxc-init</filename> is a very minimal
103- and lightweight init binary which is used by lxc-execute. Rather
104- than `booting' a full container, it manually mounts a few
105- filesystems, especially <filename>/proc</filename>, and executes
106- its arguments. You are not likely to need to manually refer to
107- this file.</para>
108- </listitem>
109-
110- <listitem>
111- <para><filename>/usr/share/lxc/templates/</filename> contains the
112- `templates' which can be used to create new containers of various
113- distributions and flavors. Not all templates are currently
114- supported.</para>
115- </listitem>
116-
117- <listitem>
118- <para><filename>/etc/apparmor.d/lxc/lxc-default</filename>
119- contains the default Apparmor MAC policy which works to protect
120- the host from containers. Please see the <xref
121- linkend="lxc-apparmor"/> for more information.</para>
122- </listitem>
123-
124- <listitem>
125- <para><filename>/etc/apparmor.d/usr.bin.lxc-start</filename>
126- contains a profile to protect the host from
127- <command>lxc-start</command> while it is setting up the
128- container.</para>
129- </listitem>
130-
131- <listitem>
132- <para><filename>/etc/apparmor.d/lxc-containers</filename> causes
133- all the profiles defined under
134- <filename>/etc/apparmor.d/lxc</filename> to be loaded at
135- boot.</para>
136- </listitem>
137-
138- <listitem>
139- <para>There are various man pages for the LXC administration tools
140- as well as the <filename>lxc.conf</filename> container
141- configuration file.</para>
142- </listitem>
143-
144- <listitem>
145- <para><filename>/var/lib/lxc</filename> is where containers and
146- their configuration information are stored.</para>
147- </listitem>
148-
149- <listitem>
150- <para><filename>/var/cache/lxc</filename> is where caches of
151- distribution data are stored to speed up multiple container
152- creations.</para>
153- </listitem>
154- </itemizedlist>
155- </sect3>
156-
157- <sect3 id="lxcbr0" status="review">
158- <title>lxcbr0</title>
159-
160- <para>When USE_LXC_BRIDGE is set to true in /etc/default/lxc (as it is
161- by default), a bridge called lxcbr0 is created at startup. This bridge
162- is given the private address 10.0.3.1, and containers using this
163- bridge will have a 10.0.3.0/24 address. A dnsmasq instance is run
164- listening on that bridge, so if another dnsmasq has bound all
165- interfaces before the lxc-net upstart job runs, lxc-net will fail to
166- start and lxcbr0 will not exist.</para>
167-
168- <para>If you have another bridge - libvirt's default virbr0, or a br0
169- bridge for your default NIC - you can use that bridge in place of
170- lxcbr0 for your containers.</para>
171- </sect3>
172-
173- <sect3 id="lxc-partitions" status="review">
174- <title>Using a separate filesystem for the container store</title>
175-
176- <para>LXC stores container information and (with the default backing
177- store) root filesystems under <filename>/var/lib/lxc</filename>.
178- Container creation templates also tend to store cached distribution
179- information under <filename>/var/cache/lxc</filename>.</para>
180-
181- <para>If you wish to use another filesystem than
182- <filename>/var</filename>, you can mount a filesystem which has more
183- space into those locations. If you have a disk dedicated for this, you
184- can simply mount it at <filename>/var/lib/lxc</filename>. If you'd
185- like to use another location, like <filename>/srv</filename>, you can
186- bind mount it or use a symbolic link. For instance, if
187- <filename>/srv</filename> is a large mounted filesystem, create and
188- symlink two directories:</para>
189-
190- <screen>
191-<command>
192-sudo mkdir /srv/lxclib /srv/lxccache
193-sudo rm -rf /var/lib/lxc /var/cache/lxc
194-sudo ln -s /srv/lxclib /var/lib/lxc
195-sudo ln -s /srv/lxccache /var/cache/lxc
196-</command>
197-</screen>
198-
199- <para>or, using bind mounts:</para>
200-
201- <screen>
202-<command>
203-sudo mkdir /srv/lxclib /srv/lxccache
204-sudo sed -i '$a \
205-/srv/lxclib /var/lib/lxc none defaults,bind 0 0 \
206-/srv/lxccache /var/cache/lxc none defaults,bind 0 0' /etc/fstab
207-sudo mount -a
208-</command>
209-</screen>
210- </sect3>
211-
212- <sect3 id="lxc-lvm" status="review">
213- <title>Containers backed by lvm</title>
214-
215- <para>It is possible to use LVM partitions as the backing stores for
216- containers. Advantages of this include flexibility in storage
217- management and fast container cloning. The tools default to using a VG
218- (volume group) named <emphasis>lxc</emphasis>, but another VG can be
219- used through command line options. When a LV is used as a container
220- backing store, the container's configuration file is still
221- <filename>/var/lib/lxc/CN/config</filename>, but the root fs entry in
222- that file (<emphasis>lxc.rootfs</emphasis>) will point to the lV block
223- device name, i.e. <filename>/dev/lxc/CN</filename>.</para>
224-
225- <para>Containers with directory tree and LVM backing stores can
226- co-exist.</para>
227- </sect3>
228-
229- <sect3 id="lxc-btrfs" status="review">
230- <title>Btrfs</title>
231-
232- <para>If your host has a btrfs <filename>/var</filename>, the LXC
233- administration tools will detect this and automatically exploit it by
234- cloning containers using btrfs snapshots.</para>
235- </sect3>
236-
237- <sect3 id="lxc-apparmor" status="review">
238+ as well as set up a network bridge for containers to use. If you
239+ wish to use unprivileged containers, you will need to ensure that
240+ users have sufficient allocated subuids and subgids, and will likely
241+ want to allow users to connect containers to a bridge (see
242+ <xref linkend="lxc-unpriv"/>).
243+ </para>
244+ </sect2>
245+
246+ <sect2 id="lxc-basic-usage" status="review">
247+ <title>Basic usage</title>
248+ <para>
249+ LXC can be used in two distinct ways - privileged, by running the
250+ lxc commands as the root user; or unprivileged, by running the
251+ lxc commands as a non-root user. (The starting of unprivileged
252+ containers by the root user is possible, but not described here.)
253+ Unprivileged containers are more limited, for instance being unable
254+ to create device nodes or mount block-backed filesystems. However
255+ they are less dangerous to the host, as the root userid in the
256+ container is mapped to a non-root userid on the host.
257+ </para>
258+
259+ <sect3>
260+ <title>Basic privileged usage</title>
261+ <para>
262+ To create a privileged container, you can simply run
263+ </para>
264+<screen>
265+<command>
266+sudo lxc-create --template download --name u1
267+</command>
268+or, abbreviated
269+<command>
270+sudo lxc-create -t download -n u1
271+</command>
272+</screen>
273+ <para>
274+ This will interactively ask for a container root filesystem type
275+ to download - in particular the distribution, release, and
276+ architecture. To create the container non-interactively, you can
277+ specify these values on the command line:
278+ </para>
279+<screen>
280+<command>
281+sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd64
282+</command>
283+or
284+<command>
285+sudo lxc-create -t download -n u1 -- -d ubuntu -r trusty -a amd64
286+</command>
287+</screen>
288+
289+ <para>
290+ You can now use <command>lxc-ls</command> to list containers,
291+ <command>lxc-info</command> to obtain detailed container information,
292+ <command>lxc-start</command> to start and <command>lxc-stop</command>
293+ to stop the container. <command>lxc-attach</command> and
294+ <command>lxc-console</command> allow you to enter a container, if
295+ ssh is not an option. <command>lxc-destroy</command> removes the
296+ container, including its rootfs. See the manual pages for more
297+ information on each command. An example session might look like:
298+ </para>
299+<screen>
300+<command>
301+sudo lxc-ls --fancy
302+sudo lxc-start --name u1 --daemon
303+sudo lxc-info --name u1
304+sudo lxc-stop --name u1
305+sudo lxc-destroy --name u1
306+</command>
307+</screen>
308+
309+ </sect3>
310+
311+ <sect3>
312+ <title>User namespaces</title>
313+ <para>
314+ Unprivileged containers allow users to create and administer
315+ containers without having any root privilege. The feature
316+ underpinning this is called user namespaces. User namespaces
317+ are hierarchical, with privileged tasks in a parent namespace
318+ being able to map their ids into child namespaces. By default every
319+ task on the host runs in the initial user namespace, where
320+ the full range of ids is mapped onto the full range. This can be
321+ seen by looking at /proc/self/uid_map and /proc/self/gid_map,
322+ which both will show "0 0 429496729" when read from the initial
323+ user namespace. When new users are created
324+ in Ubuntu 14.04, they are by default offered a range of userids.
325+ The list of assigned ids can be seen in the files
326+ <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename>.
327+ See their respective manpages for more information. Subuids and
328+ subgids are by convention started at id 100000 to avoid conflicting
329+ with system users.
330+ </para>
331+ <para>
332+ If a user was created on an earlier release, that user can be granted a
333+ range of ids using <command>usermod</command>, as follows:
334+ </para>
335+<screen>
336+<command>
337+sudo usermod -v 100000-200000 -w 100000-200000 user1
338+</command>
339+</screen>
340+
341+ <para>
342+ The programs <command>newuidmap</command> and <command>
343+ newgidmap</command> are setuid-root programs in the <filename>uidmap</filename>
344+ package, which are used internally by lxc to map subuids and subgids
345+ from the host into the unprivileged container. They ensure that
346+ the user only maps ids which are authorized by the host
347+ configuration.
348+ </para>
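+<!-- A minimal sketch of the mapping in action, assuming the user owns
+     the subordinate range 100000-165535. lxc-usernsexec ships with the
+     lxc package and invokes newuidmap/newgidmap internally; the command
+     prints "0 100000 65536" from inside the new namespace. -->
+<screen>
+<command>
+lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- cat /proc/self/uid_map
+</command>
+</screen>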
349+ </sect3>
350+
351+ <sect3 id="lxc-unpriv">
352+ <title>Basic unprivileged usage</title>
353+ <para>
354+ </para>
355+
356+ <para>
357+ To create unprivileged containers, a few first steps are needed. You
358+ will need to create a default container configuration file, specifying
359+ your desired id mappings and network setup, as well as configure the
360+ host to allow the unprivileged user to hook into the host network. The
361+ example below assumes that your mapped user and group id ranges are
362+ 100000-166000.
363+ </para>
364+<screen>
365+<command>
366+mkdir -p ~/.config/lxc
367+echo "lxc.id_map u 0 100000 66000" > ~/.config/lxc/default.conf
368+echo "lxc.id_map g 0 100000 66000" >> ~/.config/lxc/default.conf
369+echo "lxc.network.type = veth" >> ~/.config/lxc/default.conf
370+echo "lxc.network.link = lxcbr0" >> ~/.config/lxc/default.conf
371+echo "$USER veth lxcbr0 2" | sudo tee -a /etc/lxc/lxc-usernet.conf
372+</command>
373+</screen>
374+
375+ <para>
376+ After this, you can create unprivileged containers the same way as
377+ privileged ones, simply without using sudo.
378+ </para>
379+<screen>
380+<command>
381+lxc-create -t download -n u1 -- -d ubuntu -r trusty -a amd64
382+lxc-start -n u1 -d
383+lxc-attach -n u1
384+lxc-stop -n u1
385+lxc-destroy -n u1
386+</command>
387+</screen>
388+
389+ </sect3>
390+ </sect2>
391+
392+ <sect2 id="lxc-global-conf" status="review">
393+ <title>Global configuration</title>
394+ <para>
395+ The following configuration files are consulted by LXC. For
396+ privileged use, they are found under <filename>/etc/lxc</filename>,
397+ while for unprivileged use they are under <filename>~/.config/lxc</filename>.
398+ <itemizedlist>
399+ <listitem>
400+ <para><filename>lxc.conf</filename> may optionally specify alternate
401+ values for several lxc settings, including the lxcpath,
402+ the default configuration, cgroups to use, a cgroup creation pattern,
403+ and storage backend settings for lvm and zfs.
404+ </para>
405+ </listitem>
406+ <listitem>
407+ <para><filename>default.conf</filename> specifies configuration which
408+ every newly created container should contain. This usually contains
409+ at least a network section, and, for unprivileged users, an id mapping
410+ section.
411+ </para>
412+ </listitem>
413+ <listitem>
414+ <para><filename>lxc-usernet.conf</filename> specifies how unprivileged
415+ users may connect their containers to the host-owned network.
416+ </para>
417+ </listitem>
418+ </itemizedlist>
419+ </para>
420+ <para>
421+ <filename>lxc.conf</filename> and <filename>default.conf</filename>
422+ exist both under <filename>/etc/lxc</filename> and <filename>$HOME/.config/lxc</filename>,
423+ while <filename>lxc-usernet.conf</filename> is only host-wide.
424+ </para>
425+ <para>
426+ By default, containers are located under /var/lib/lxc for the
427+ root user, and $HOME/.local/share/lxc otherwise. The location
428+ can be specified for all lxc commands using the "-P|--lxcpath"
429+ argument.
430+ </para>
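+<!-- For instance, a sketch of relocating the container store host-wide;
+     the /srv/lxc path is illustrative: -->
+<screen>
+<command>
+echo "lxc.lxcpath = /srv/lxc" | sudo tee -a /etc/lxc/lxc.conf
+</command>
+</screen>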
431+ </sect2>
432+
433+ <sect2 id="lxc-network" status="review">
434+ <title>Networking</title>
435+ <para>By default LXC creates a private network namespace for each container,
436+ which includes a layer 2 networking stack. Containers usually connect to the
437+ outside world by either having a physical NIC or a veth tunnel endpoint passed
438+ into the container. LXC creates a NATed bridge, lxcbr0, at host startup.
439+ Containers created using the default configuration will have one veth NIC
440+ with the remote end plugged into the lxcbr0 bridge. A NIC can only exist
441+ in one namespace at a time, so a physical NIC passed into the container
442+ is not usable on the host. </para>
443+ <para>It is possible to create a container without a private network namespace.
444+ In this case, the container will have access to the host networking like
445+ any other application. Note that this is particularly dangerous if the
446+ container is running a distribution with upstart, like Ubuntu, since programs
447+ which talk to init, like <command>shutdown</command>, will talk over the
448+ abstract Unix domain socket to the host's upstart, and shut down the host.</para>
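+<!-- A container sharing the host network would carry, in place of the
+     default veth section, a config entry like the following (illustrative): -->
+<screen>
+<command>
+lxc.network.type = none
+</command>
+</screen>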
449+
450+ <para>There are several ways to determine the ip address for a container.
451+ First, you can use <command>lxc-ls --fancy</command> which will print the ip
452+ addresses for all running containers, or <command>lxc-info -i -H -n C1</command>
453+ which will print C1's ip address. If dnsmasq is installed on the host, you can
454+ also add an entry to <filename>/etc/dnsmasq.conf</filename> as follows
455+<screen>
456+server=/lxc/10.0.3.1
457+</screen>
458+ after which dnsmasq will resolve C1.lxc locally, so that you can do:
459+<screen>
460+ping C1
461+ssh C1
462+</screen>
463+ </para>
464+
465+ <para>For more information, see the lxc.conf manpage as well as the example
466+ network configurations under <filename>/usr/share/doc/lxc/examples/</filename>.
467+ </para>
468+ </sect2>
469+
470+ <sect2 id="lxc-startup" status="review">
471+ <title>LXC startup</title>
472+
473+ <para>LXC does not have a long-running daemon. However it does
474+ have three upstart jobs.</para>
475+
476+ <itemizedlist>
477+
478+ <listitem>
479+ <para><filename>/etc/init/lxc-net.conf:</filename> is an
480+ optional job which only runs if <filename>
481+ /etc/default/lxc-net</filename> specifies USE_LXC_BRIDGE (true by
482+ default). It sets up a NATed bridge for containers to
483+ use.</para>
484+ </listitem>
485+
486+ <listitem>
487+ <para><filename>/etc/init/lxc.conf</filename> loads the
488+ lxc apparmor profiles and optionally starts any autostart
489+ containers. The autostart containers will be ignored if
490+ LXC_AUTO (true by default) is set to false in
491+ <filename>/etc/default/lxc</filename>.
492+ See the lxc-autostart manual page for more information on
493+ autostarted containers.
494+ </para>
495+ </listitem>
496+ <listitem>
497+ <para><filename>/etc/init/lxc-instance.conf:</filename>
498+ is used by <filename>/etc/init/lxc.conf</filename>
499+ to autostart a container.
500+ </para>
501+ </listitem>
502+ </itemizedlist>
503+ </sect2>
504+
505+ <sect2 id="lxc-backinstores" status="review">
506+ <title>Backing Stores</title>
507+ <para>LXC supports several backing stores for container root
508+ filesystems. The default is a simple directory backing store,
509+ because it requires no prior host customization, so long as
510+ the underlying filesystem is large enough. It also requires no root
511+ privilege to create the backing store, so that it is seamless for
512+ unprivileged use. The rootfs for a privileged directory backed
513+ container is located (by default) under
514+ <filename>/var/lib/lxc/C1/rootfs</filename>, while the rootfs for an
515+ unprivileged container is under
516+ <filename>~/.local/share/lxc/C1/rootfs</filename>. If a custom
517+ lxcpath is specified in lxc.system.conf, then the container rootfs
518+ will be under <filename>$lxcpath/C1/rootfs</filename>.
519+ </para>
520+
521+ <para>
522+ A snapshot clone C2
523+ of a directory backed container C1 becomes an overlayfs backed
524+ container, with a rootfs called
525+ <filename>overlayfs:/var/lib/lxc/C1/rootfs:/var/lib/lxc/C2/delta0</filename>.
526+ Other backing store types include loop, btrfs, LVM and zfs.
527+ </para>
528+
529+ <para>
530+ A btrfs backed container mostly looks like a directory backed
531+ container, with its root filesystem in the same location.
532+ However, the root filesystem comprises a subvolume, so that a snapshot
533+ clone is created using a subvolume snapshot.
534+ </para>
535+
536+ <para>The root filesystem for an LVM backed container can be any
537+ separate LV. The default VG name can be specified in lxc.conf.
538+ The filesystem type and size are configurable per-container using
539+ lxc-create.
540+ </para>
541+
542+ <para>
543+ The rootfs for a zfs backed container is a separate zfs filesystem,
544+ mounted under the traditional <filename>/var/lib/lxc/C1/rootfs</filename>
545+ location. The zfsroot can be specified at lxc-create, and a default
546+ can be specified in lxc.system.conf.
547+ </para>
548+
549+ <para> More information on creating containers with the
550+ various backing stores can be found in the lxc-create
551+ manual page.
552+ </para>
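+<!-- For instance, to create an LVM backed container (the volume group
+     name, size and filesystem type here are illustrative): -->
+<screen>
+<command>
+sudo lxc-create -t ubuntu -n CN -B lvm --vgname schroots --fssize 5G --fstype xfs
+</command>
+</screen>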
553+ </sect2>
554+
555+ <sect2 id="lxc-templates" status="review">
556+ <title>Templates</title>
557+ <para>
558+ Creating a container generally involves creating a root filesystem for
559+ the container. <command>lxc-create</command> delegates this work to
560+ <emphasis>templates</emphasis>, which are generally per-distribution.
561+ The lxc templates shipped with lxc can be found under
562+ <filename>/usr/share/lxc/templates</filename>, and include templates
563+ to create Ubuntu, Debian, Fedora, Oracle, CentOS, and Gentoo containers
564+ among others.
565+ </para>
566+ <para>
567+ Creating distribution images in most cases requires the ability to
568+ create device nodes, often requires tools which are not available
569+ in other distributions, and usually is quite time-consuming. Therefore
570+ lxc comes with a special <emphasis>download</emphasis> template,
571+ which downloads pre-built container images from a central lxc server.
572+ The most important use case is to allow simple creation of unprivileged
573+ containers by non-root users, who could not for instance easily run
574+ the <command>debootstrap</command> command.
575+ </para>
576+ <para>
577+ When running <command>lxc-create</command>, all options which come after
578+ <emphasis>--</emphasis> are passed to the template. In the
579+ following command, <emphasis>--name</emphasis>, <emphasis>--template</emphasis>
580+ and <emphasis>--bdev</emphasis> are passed to <command>lxc-create</command>,
581+ while <emphasis>--release</emphasis> is passed to the template:
582+<screen>
583+<command>
584+lxc-create --template ubuntu --name c1 --bdev loop -- --release trusty
585+</command>
586+</screen>
587+ </para>
588+ <para>
589+ You can obtain help for the options supported by any particular
590+ container by passing <emphasis>--help</emphasis> and the template
591+ name to <command>lxc-create</command>. For instance, for help with
592+ the download template,
593+ </para>
594+<screen>
595+<command>
596+lxc-create --template download --help
597+</command>
598+</screen>
599+ </sect2>
600+ <sect2 id="lxc-autostart" status="review">
601+ <title>Autostart</title>
602+ <para>LXC supports marking containers to be started at system boot. Prior to
603+ Ubuntu 14.04, this was done using symbolic links under the directory
604+ <filename>/etc/lxc/auto</filename>. Starting with Ubuntu 14.04, it is done
605+ through the container configuration files. An entry
606+<screen>
607+<command>
608+lxc.start.auto = 1
609+lxc.start.delay = 5
610+</command>
611+</screen>
612+ would mean that the container should be started at boot, and the system should
613+ wait 5 seconds before starting the next container. LXC also supports ordering
614+ and grouping of containers, as well as reboot and shutdown by autostart groups.
615+ See the manual pages for lxc-autostart and lxc.container.conf for more information.
616+ </para>
617+ </sect2>
618+
619+ <sect2 id="lxc-apparmor" status="review">
620 <title>Apparmor</title>
621
622- <para>LXC ships with an Apparmor profile intended to protect the host
623+ <para>LXC ships with a default Apparmor profile intended to protect the host
624 from accidental misuses of privilege inside the container. For
625 instance, the container will not be able to write to
626 <filename>/proc/sysrq-trigger</filename> or to most
627@@ -1715,6 +1897,16 @@
628 prevents the container from accessing many dangerous paths, and from
629 mounting most filesystems.</para>
630
631+ <para>Prior to 14.04, programs in a container could not be further
632+ confined - for instance, MySQL would run under the container
633+ profile (protecting the host) but would not be able to enter the
634+ MySQL profile (to protect the container). As of Ubuntu 14.04,
635+ the container profile starts a new stacked namespace. All tasks
636+ in the container are confined by the container profile. Furthermore
637+ containers can load their own profiles. Programs started under
638+ those profiles are doubly constrained, first by the container profile,
639+ and secondly by the application profile.</para>
640+
641 <para>If you find that <command>lxc-start</command> is failing due to
642 a legitimate access which is being denied by its Apparmor policy, you
643 can disable the lxc-start profile by doing:</para>
644@@ -1763,9 +1955,9 @@
645
646 <para><command>lxc-execute</command> does not enter an Apparmor
647 profile, but the container it spawns will be confined.</para>
648- </sect3>
649+ </sect2>
650
651- <sect3 id="lxc-cgroups" status="review">
652+ <sect2 id="lxc-cgroups" status="review">
653 <title>Control Groups</title>
654
655 <para>Control groups (cgroups) are a kernel feature providing
656@@ -1773,309 +1965,80 @@
657 limits. They are used in containers to limit block and character
658 device access and to freeze (suspend) containers. They can be further
659 used to limit memory use and block i/o, guarantee minimum cpu shares,
660- and to lock containers to specific cpus. By default, LXC depends on
661- the cgroup-lite package to be installed, which provides the proper
662- cgroup initialization at boot. The cgroup-lite package mounts each
663- cgroup subsystem separately under
664- <filename>/sys/fs/cgroup/SS</filename>, where SS is the subsystem
665- name. For instance the freezer subsystem is mounted under
666- <filename>/sys/fs/cgroup/freezer</filename>. LXC cgroup are kept under
667- <filename>/sys/fs/cgroup/SS/INIT/lxc</filename>, where INIT is the
668- init task's cgroup. This is <filename>/</filename> by default, so in
669- the end the freezer cgroup for container CN would be
670- <filename>/sys/fs/cgroup/freezer/lxc/CN</filename>.</para>
671- </sect3>
672-
673- <sect3 id="lxc-privs" status="review">
674- <title>Privilege</title>
675-
676- <para>The container administration tools must be run with root user
677- privilege. A utility called <filename>lxc-setup</filename> was written
678- with the intention of providing the tools with the needed file
679- capabilities to allow non-root users to run the tools with sufficient
680- privilege. However, as root in a container cannot yet be reliably
681- contained, this is not worthwhile. It is therefore recommended to not
682- use <filename>lxc-setup</filename>, and to provide the LXC
683- administrators the needed sudo privilege.</para>
684-
685- <para>The user namespace, which is expected to be available in the
686- next Long Term Support (LTS) release, will allow containment of the
687- container root user, as well as reduce the amount of privilege
688- required for creating and administering containers.</para>
689- </sect3>
690-
691- <sect3 id="lxc-upstart" status="review">
692- <title>LXC Upstart Jobs</title>
693-
694- <para>As listed above, the <application>lxc</application> package
695- includes two upstart jobs. The first, <filename>lxc-net</filename>, is
696- always started when the other, <filename>lxc</filename>, is about to
697- begin, and stops when it stops. If the USE_LXC_BRIDGE variable is set
698- to false in <filename>/etc/defaults/lxc</filename>, then it will
699- immediately exit. If it is true, and an error occurs bringing up the
700- LXC bridge, then the <filename>lxc</filename> job will not start.
701- <filename>lxc-net</filename> will bring down the LXC bridge when
702- stopped, unless a container is running which is using that
703- bridge.</para>
704-
705- <para>The <filename>lxc</filename> job starts on runlevel 2-5. If the
706- LXC_AUTO variable is set to true, then it will look under
707- <filename>/etc/lxc</filename> for containers which should be started
708- automatically. When the <filename>lxc</filename> job is stopped,
709- either manually or by entering runlevel 0, 1, or 6, it will stop those
710- containers.</para>
711-
712- <para>To register a container to start automatically, create a
713- symbolic link <filename>/etc/lxc/auto/name.conf</filename> pointing
714- to the container's config file. For instance, the configuration file
715- for a container <filename>CN</filename> is
716- <filename>/var/lib/lxc/CN/config</filename>. To make that container
717- auto-start, use the command:</para>
718-
719- <screen>
720-<command>
721-sudo ln -s /var/lib/lxc/CN/config /etc/lxc/auto/CN.conf
722-</command>
723-</screen>
724- </sect3>
725- </sect2>
726-
727- <sect2 id="lxc-admin" status="review">
728- <title>Container Administration</title>
729-
730- <sect3 id="lxc-creation" status="review">
731- <title>Creating Containers</title>
732-
733- <para>The easiest way to create containers is using
734- <command>lxc-create</command>. This script uses distribution-specific
735- templates under <filename>/usr/share/lxc/templates/</filename> to set up
736- container-friendly chroots under
737- <filename>/var/lib/lxc/CN/rootfs</filename>, and initialize the
738- configuration in <filename>/var/lib/lxc/CN/fstab</filename> and
739- <filename>/var/lib/lxc/CN/config</filename>, where CN is the container
740- name</para>
741-
742- <para>The simplest container creation command would look like:</para>
743-
744- <screen>
745-<command>
746-sudo lxc-create -t ubuntu -n CN
747-</command>
748-</screen>
749-
750- <para>This tells lxc-create to use the ubuntu template (-t ubuntu) and
751- to call the container CN (-n CN). Since no configuration file was
752- specified (which would have been done with `-f file'), it will use the
753- default configuration file under
754- <filename>/etc/lxc/lxc.conf</filename>. This gives the container a
755- single veth network interface attached to the lxcbr0 bridge.</para>
756-
757- <para>The container creation templates can also accept arguments.
758- These can be listed after --. For instance</para>
759-
760- <screen>
761-<command>
762-sudo lxc-create -t ubuntu -n oneiric1 -- -r oneiric
763-</command>
764-</screen>
765-
766- <para>passes the arguments '-r oneiric1' to the ubuntu
767- template.</para>
768-
769- <sect4 id="lxc-help" status="review">
770- <title>Help</title>
771-
772- <para>Help on the lxc-create command can be seen by using<command>
773- lxc-create -h</command>. However, the templates also take their own
774- options. If you do</para>
775-
776- <screen>
777-<command>
778-sudo lxc-create -t ubuntu -h
779-</command>
780-</screen>
781-
782- <para>then the general <command>lxc-create</command> help will be
783- followed by help output specific to the ubuntu template. If no
784- template is specified, then only help for
785- <command>lxc-create</command> itself will be shown.</para>
786- </sect4>
787-
788- <sect4 id="lxc-ubuntu" status="review">
789- <title>Ubuntu template</title>
790-
791- <para>The ubuntu template can be used to create Ubuntu system
792- containers with any release at least as new as 10.04 LTS. It uses
793- debootstrap to create a cached container filesystem which gets
794- copied into place each time a container is created. The cached image
795- is saved and only re-generated when you create a container using the
796- <emphasis>-F</emphasis> (flush) option to the template, i.e.:</para>
797-
798- <screen>
799-<command>
800-sudo lxc-create -t ubuntu -n CN -- -F
801-</command>
802-</screen>
803-
804- <para>The Ubuntu release installed by the template will be the same
805- as that on the host, unless otherwise specified with the
806- <emphasis>-r</emphasis> option, i.e.</para>
807-
808- <screen>
809-<command>
810-sudo lxc-create -t ubuntu -n CN -- -r lucid
811-</command>
812-</screen>
813-
814- <para>If you want to create a 32-bit container on a 64-bit host,
815- pass <emphasis>-a i386</emphasis> to the container. If you have the
816- qemu-user-static package installed, then you can create a container
817- using any architecture supported by qemu-user-static.</para>
818-
819- <para>The container will have a user named
820- <emphasis>ubuntu</emphasis> whose password is
821- <emphasis>ubuntu</emphasis> and who is a member of the
822- <emphasis>sudo</emphasis> group. If you wish to inject a public ssh
823- key for the <emphasis>ubuntu</emphasis> user, you can do so with
824- <emphasis>-S sshkey.pub</emphasis>.</para>
825-
826- <para>You can also <emphasis>bind</emphasis> user jdoe from the host
827- into the container using the <emphasis>-b jdoe</emphasis> option.
828- This will copy jdoe's password and shadow entries into the
829- container, make sure his default group and shell are available, add
830- him to the sudo group, and bind-mount his home directory into the
831- container when the container is started.</para>
832-
833- <para>When a container is created, the
834- <filename>release-updates</filename> archive is added to the
835- container's <filename>sources.list</filename>, and its package
836- archive will be updated. If the container release is older than
837- 12.04 LTS, then the lxcguest package will be automatically
838- installed. Alternatively, if the <emphasis>--trim</emphasis> option
839- is specified, then the lxcguest package will not be installed, and
840- many services will be removed from the container. This will result
841- in a faster-booting, but less upgrade-able container.</para>
842- </sect4>
843-
844- <sect4 id="lxc-ubuntu-cloud" status="review">
845- <title>Ubuntu-cloud template</title>
846-
847- <para>The ubuntu-cloud template creates Ubuntu containers by
848- downloading and extracting the published Ubuntu cloud images. It
849- accepts some of the same options as the ubuntu template, namely
850- <emphasis>-r release</emphasis>, <emphasis>-S sshkey.pub</emphasis>,
851- <emphasis>-a arch</emphasis>, and <emphasis>-F</emphasis> to flush
852- the cached image. It also accepts a few extra options. The
853- <emphasis>-C</emphasis> option will create a
854- <emphasis>cloud</emphasis> container, configured for use with a
855- metadata service. The <emphasis>-u</emphasis> option accepts a
856- cloud-init user-data file to configure the container on start. If
857- <emphasis>-L</emphasis> is passed, then no locales will be
858- installed. The <emphasis>-T</emphasis> option can be used to choose
859- a tarball location to extract in place of the published cloud image
860- tarball. Finally the <emphasis>-i</emphasis> option sets a host id
861- for cloud-init, which by default is set to a random string.</para>
862- </sect4>
863-
864- <sect4 id="lxc-other-templates" status="review">
865- <title>Other templates</title>
866-
867- <para>The ubuntu and ubuntu-cloud templates are well supported.
868- Other templates are available however. The debian template creates a
869- Debian based container, using debootstrap much as the ubuntu
870- template does. By default it installs a <emphasis>debian
871- squeeze</emphasis> image. An alternate release can be chosen by
872- setting the SUITE environment variable, i.e.:</para>
873-
874- <screen>
875-<command>
876-sudo SUITE=sid lxc-create -t debian -n d1
877-</command>
878-</screen>
879-
880- <para>To purge the container image cache, call the template directly
881- and pass it the <emphasis>--clean</emphasis> option.</para>
882-
883- <screen>
884-<command>
885-sudo SUITE=sid /usr/share/lxc/templates/lxc-debian --clean
886-</command>
887-</screen>
888-
889- <para>A fedora template exists, which creates containers based on
890- fedora releases &lt;= 14. Fedora release 15 and higher are based on
891- systemd, which the template is not yet able to convert into a
892- container-bootable setup. Before the fedora template is able to run,
893- you'll need to make sure that <command>yum</command> and
894- <command>curl</command> are installed. A fedora 12 container can be
895- created with</para>
896-
897- <screen>
898-<command>
899-sudo lxc-create -t fedora -n fedora12 -- -R 12
900-</command>
901-</screen>
902-
903- <para>A OpenSuSE template exists, but it requires the
904- <command>zypper</command> program, which is not yet packaged. The
905- OpenSuSE template is therefore not supported.</para>
906-
907- <para>Two more templates exist mainly for experimental purposes. The
908- busybox template creates a very small system container based
909- entirely on busybox. The sshd template creates an application
910- container running sshd in a private network namespace. The host's
911- library and binary directories are bind-mounted into the container,
912- though not its <filename>/home</filename> or
913- <filename>/root</filename>. To create, start, and ssh into an ssh
914- container, you might:</para>
915-
916- <screen>
917-<command>
918-sudo lxc-create -t sshd -n ssh1
919-ssh-keygen -f id
920-sudo mkdir /var/lib/lxc/ssh1/rootfs/root/.ssh
921-sudo cp id.pub /var/lib/lxc/ssh1/rootfs/root/.ssh/authorized_keys
922-sudo lxc-start -n ssh1 -d
923-ssh -i id root@ssh1.
924-</command>
925-</screen>
926- </sect4>
927-
928- <sect4 id="lxc-backing-stores" status="review">
929- <title>Backing Stores</title>
930-
931- <para>By default, <command>lxc-create</command> places the
932- container's root filesystem as a directory tree at
933- <filename>/var/lib/lxc/CN/rootfs</filename>. Another option is to
934- use LVM logical volumes. If a volume group named
935- <emphasis>lxc</emphasis> exists, you can create an lvm-backed
936- container called CN using:</para>
937-
938- <screen>
939-<command>
940-sudo lxc-create -t ubuntu -n CN -B lvm
941-</command>
942-</screen>
943-
944- <para>If you want to use a volume group named schroots, with a 5G
945- xfs filesystem, then you would use</para>
946-
947- <screen>
948-<command>
949-sudo lxc-create -t ubuntu -n CN -B lvm --vgname schroots --fssize 5G --fstype xfs
950-</command>
951-</screen>
952- </sect4>
953- </sect3>
954-
955- <sect3 id="lxc-cloning" status="review">
956+ and to lock containers to specific cpus.
957+ </para>
958+
959+ <para> By default, a privileged container CN will be assigned a cgroup
960+ called <filename>/lxc/CN</filename>. In the case of name conflicts
961+ (which can occur when using custom lxcpaths) a suffix "-n", where n
962+ is an integer starting at 0, will be appended to the cgroup name.
963+ </para>
964+
965+ <para> By default, an unprivileged container CN will be assigned a cgroup
966+ called <filename>CN</filename> under the cgroup of the task which
967+ started the container, for instance
968+ <filename>/usr/1000.user/1.session/CN</filename>. The container root
969+ will be given group ownership of the directory (but not all files)
970+ so that it is allowed to create new child cgroups.
971+ </para>
972+ <para>
973+ As of Ubuntu 14.04, LXC uses the cgroup manager (cgmanager) to
974+ administer cgroups. The cgroup manager receives D-Bus requests
975+ over the Unix socket <filename>/sys/fs/cgroup/cgmanager/sock</filename>.
976+ To facilitate safe nested containers, the line
977+<screen>
978+<command>
979+lxc.mount.auto = cgroup
980+</command>
981+</screen>
982+ can be added to the container configuration causing the
983+ <filename>/sys/fs/cgroup/cgmanager</filename> directory to be bind-mounted
984+ into the container. The container in turn should start the cgroup
985+ management proxy (done by default if the cgmanager package is installed
986+ in the container) which will move the <filename>/sys/fs/cgroup/cgmanager</filename>
987+ directory to <filename>/sys/fs/cgroup/cgmanager.lower</filename>, then
988+ start listening for requests to proxy on its own socket
989+ <filename>/sys/fs/cgroup/cgmanager/sock</filename>. The host cgmanager
990+ will ensure that nested containers cannot escape their assigned cgroups
991+ or make requests for which they are not authorized.
992+ </para>
993+ </sect2>
994+
995+ <sect2 id="lxc-cloning" status="review">
996 <title>Cloning</title>
997
998 <para>For rapid provisioning, you may wish to customize a canonical
999 container according to your needs and then make multiple copies of it.
1000- This can be done with the <command>lxc-clone</command> program. Given
1001- an existing container called C1, a new container called C2 can be
1002- created using:</para>
1003+ This can be done with the <command>lxc-clone</command> program.
1004+ </para>
1005+ <para>Clones are either snapshots or copies of another container.
1006+ A copy is a new container copied from the original, and takes as
1007+ much space on the host as the original. A snapshot exploits the
1008+ underlying backing store's snapshotting ability to make a
1009+ copy-on-write container referencing the first. Snapshots can be
1010+ created from btrfs, LVM, zfs, and directory backed containers.
1011+ Each backing store has its own peculiarities - for instance, LVM
1012+ containers which are not thinpool-provisioned cannot support snapshots
1013+ of snapshots; zfs containers with snapshots cannot be removed until
1014+ all snapshots are released; LVM containers must be more carefully
1015+ planned as the underlying filesystem may not support growing;
1016+ btrfs does not suffer any of these shortcomings, but suffers from
1017+ reduced fsync performance causing dpkg and apt-get to be slower.
1018+ </para>
1019+ <para>
1020+ Snapshots of directory-backed containers are created using the
1021+ overlay filesystem. For instance, a privileged directory-backed
1022+ container C1 will have its root filesystem under
1023+ <filename>/var/lib/lxc/C1/rootfs</filename>. A snapshot clone of
1024+ C1 called C2 will be started with C1's rootfs mounted readonly
1025+ under <filename>/var/lib/lxc/C2/delta0</filename>. Importantly,
1026+ in this case C1 should not be allowed to run or be removed while
1027+ C2 is running. It is advised instead to consider C1 a <emphasis>
1028+ canonical</emphasis> base container, and to only use its snapshots.
1029+ </para>
1030+
1031+ <para>
1032+ Given an existing container called C1, a copy can be created using:</para>
1033
1034 <screen>
1035 <command>
1036@@ -2083,132 +2046,78 @@
1037 </command>
1038 </screen>
1039
1040- <para>If <filename>/var/lib/lxc</filename> is a btrfs filesystem, then
1041- <command>lxc-clone</command> will create C2's filesystem as a snapshot
1042- of C1's. If the container's root filesystem is lvm backed, then you
1043- can specify the <emphasis>-s</emphasis> option to create the new
1044- rootfs as a lvm snapshot of the original as follows:</para>
1045-
1046- <screen>
1047+ <para>A snapshot can be created using</para>
1048+<screen>
1049 <command>
1050 sudo lxc-clone -s -o C1 -n C2
1051 </command>
1052 </screen>
1053-
1054- <para>Both lvm and btrfs snapshots will provide fast cloning with very
1055- small initial disk usage.</para>
1056- </sect3>
1057-
1058- <sect3 id="lxc-start-stop" status="review">
1059- <title>Starting and stopping</title>
1060-
1061- <note>
1062- <para>The default login/password combination for the newly created
1063- container is ubuntu/ubuntu.</para>
1064- </note>
1065-
1066- <para>To start a container, use <command>lxc-start -n CN</command>. By
1067- default <command>lxc-start</command> will execute
1068- <filename>/sbin/init</filename> in the container. You can provide a
1069- different program to execute, plus arguments, as further arguments to
1070- <command>lxc-start</command>:</para>
1071-
1072- <screen>
1073-<command>
1074-sudo lxc-start -n container /sbin/init loglevel=debug
1075-</command>
1076-</screen>
1077-
1078- <para>If you do not specify the <emphasis>-d</emphasis> (daemon)
1079- option, then you will see a console (on the container's
1080- <filename>/dev/console</filename>, see <xref linkend="lxc-consoles"/>
1081- for more information) on the terminal. If you specify the
1082- <emphasis>-d</emphasis> option, you will not see that console, and
1083- lxc-start will immediately exit success - even if a later part of
1084- container startup has failed. You can use <command>lxc-wait</command>
1085- or <command>lxc-monitor</command> (see <xref
1086- linkend="lxc-monitoring"/>) to check on the success or failure of the
1087- container startup.</para>
1088-
1089- <para>To obtain LXC debugging information, use <emphasis>-o filename
1090- -l debuglevel</emphasis>, for instance:</para>
1091-
1092- <screen>
1093-<command>
1094-sudo lxc-start -o lxc.debug -l DEBUG -n container
1095-</command>
1096-</screen>
1097-
1098- <para>Finally, you can specify configuration parameters inline using
1099- <emphasis>-s</emphasis>. However, it is generally recommended to place
1100- them in the container's configuration file instead. Likewise, an
1101- entirely alternate config file can be specified with the
1102- <emphasis>-f</emphasis> option, but this is not generally
1103- recommended.</para>
1104-
1105- <para>While <command>lxc-start</command> runs the container's
1106- <filename>/sbin/init</filename>, <command>lxc-execute</command> uses a
1107- minimal init program called <command>lxc-init</command>, which
1108- attempts to mount <filename>/proc</filename>,
1109- <filename>/dev/mqueue</filename>, and <filename>/dev/shm</filename>,
1110- executes the programs specified on the command line, and waits for
1111- those to finish executing. <command>lxc-start</command> is intended to
1112- be used for <emphasis>system containers</emphasis>, while
1113- <command>lxc-execute</command> is intended for <emphasis>application
1114- containers</emphasis> (see <ulink
1115- url="https://www.ibm.com/developerworks/linux/library/l-lxc-containers/">
1116- this article</ulink> for more).</para>
1117-
1118- <para>You can stop a container several ways. You can use
1119- <command>shutdown</command>, <command>poweroff</command> and
1120- <command>reboot</command> while logged into the container. To cleanly
1121- shut down a container externally (i.e. from the host), you can issue
1122- the <command>sudo lxc-shutdown -n CN</command> command. This takes an
1123- optional timeout value. If not specified, the command issues a SIGPWR
1124- signal to the container and immediately returns. If the option is
1125- used, as in <command>sudo lxc-shutdown -n CN -t 10</command>, then the
1126- command will wait the specified number of seconds for the container to
1127- cleanly shut down. Then, if the container is still running, it will
1128- kill it (and any running applications). You can also immediately kill
1129- the container (without any chance for applications to cleanly shut
1130- down) using <command>sudo lxc-stop -n CN</command>. Finally,
1131- <command>lxc-kill</command> can be used more generally to send any
1132- signal number to the container's init.</para>
1133-
1134- <para>While the container is shutting down, you can expect to see some
1135- (harmless) error messages, as follows:</para>
1136-
1137- <screen>
1138-$ sudo poweroff
1139-[sudo] password for ubuntu: =
1140-
1141-$ =
1142-
1143-Broadcast message from ubuntu@cn1
1144- (/dev/lxc/console) at 18:17 ...
1145-
1146-The system is going down for power off NOW!
1147- * Asking all remaining processes to terminate...
1148- ...done.
1149- * All processes ended within 1 seconds....
1150- ...done.
1151- * Deconfiguring network interfaces...
1152- ...done.
1153- * Deactivating swap...
1154- ...fail!
1155-umount: /run/lock: not mounted
1156-umount: /dev/shm: not mounted
1157-mount: / is busy
1158- * Will now halt
1159-</screen>
1160-
1161- <para>A container can be frozen with <command>sudo lxc-freeze -n
1162- CN</command>. This will block all its processes until the container is
1163- later unfrozen using <command>sudo lxc-unfreeze -n
1164- CN</command>.</para>
1165- </sect3>
1166-
1167- <sect3 id="lxc-hooks" status="review">
1168+ <para> See the lxc-clone manpage for more information.</para>
1169+
1170+ <sect3>
1171+ <title>Snapshots</title>
1172+ <para>To more easily support the use of snapshot clones for iterative
1173+ container development, LXC supports <emphasis>snapshots</emphasis>.
1174+ When working on a container C1, before making a potentially dangerous
1175+ or hard-to-revert change, you can create a snapshot
1176+<screen>
1177+<command>
1178+sudo lxc-snapshot -n C1
1179+</command>
1180+</screen>
1181+ which is a snapshot-clone called 'snap0' under /var/lib/lxcsnaps
1182+ or $HOME/.local/share/lxcsnaps. The next snapshot will be called
1183+ 'snap1', etc. Existing snapshots can be listed using
1184+ <command>lxc-snapshot -L -n C1</command>, and a snapshot can be
1185+ restored - erasing the current C1 container - using
1186+ <command>lxc-snapshot -r snap1 -n C1</command>. After the restore
1187+ command, the snap1 snapshot continues to exist, and the previous C1
1188+ is erased and replaced with the snap1 snapshot.
1189+ </para>
1190+
1191+ <para>
1192+ Snapshots are supported for btrfs, lvm, zfs, and overlayfs containers.
1193+ If lxc-snapshot is called on a directory-backed container, an error
1194+ will be logged and the snapshot will be created as a copy-clone. The
1195+ reason for this is that if the user creates an overlayfs snapshot of
1196+ a directory-backed container and then makes changes to the directory-backed
1197+ container, then the original container changes will be partially
1198+ reflected in the snapshot. If snapshots of a directory backed container
1199+ C1 are desired, then an overlayfs clone of C1 should be created,
1200+ C1 should not be touched again, and the overlayfs clone can be edited
1201+ and snapshotted at will, as such
1202+<screen>
1203+<command>
1204+lxc-clone -s -o C1 -n C2
1205+lxc-start -n C2 -d # make some changes
1206+lxc-stop -n C2
1207+lxc-snapshot -n C2
1208+lxc-start -n C2 # etc
1209+</command>
1210+</screen>
1211+ </para>
1212+ </sect3>
1213+
1214+ <sect3>
1215+ <title>Ephemeral Containers</title>
1216+ <para>While snapshots are useful for longer-term incremental development
1217+      of images, ephemeral containers use snapshots to create quick, single-use
1218+      throwaway containers. Given a base container C1, you can start an
1219+ ephemeral container using
1220+<screen>
1221+<command>
1222+lxc-start-ephemeral -o C1
1223+</command>
1224+</screen>
1225+ The container begins as a snapshot of C1. Instructions for logging into
1226+ the container will be printed to the console. After shutdown, the ephemeral
1227+ container will be destroyed. See the lxc-start-ephemeral manual page for
1228+ more options.
1229+ </para>
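+      <para>
+      An ephemeral container can also run a single command and then exit.
+      For instance, to run a job with the host user jdoe's home directory
+      bound into the container (the -b and -o options are described in the
+      lxc-start-ephemeral manual page):
+<screen>
+<command>
+lxc-start-ephemeral -b jdoe -o C1 -- /home/jdoe/run_my_job
+</command>
+</screen>
+      When the job finishes, the container is discarded.
+      </para>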
1230+ </sect3>
1231+ </sect2>
1232+
1233+ <sect2 id="lxc-hooks" status="review">
1234 <title>Lifecycle management hooks</title>
1235
1236 <para>Beginning with Ubuntu 12.10, it is possible to define hooks to
1237@@ -2254,44 +2163,13 @@
1238 executed. Any output generated by the script will be logged at the
1239 debug priority.</para>
1240
1241- <para>See <xref linkend="lxc-conf-other"/> for the configuration file
1242+      <para>See the lxc.container.conf manual page for the configuration file
1243 format with which to specify hooks. Some sample hooks are shipped with
1244 the lxc package to serve as an example of how to write and use such
1245 hooks.</para>
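+      <para>
+      As a minimal sketch, a pre-start hook for a container named C1 could
+      be wired up with a configuration file entry such as the following
+      (the script path is hypothetical; any executable will do):
+<screen>
+<command>
+# /var/lib/lxc/C1/pre-start.sh is a hypothetical executable script
+lxc.hook.pre-start = /var/lib/lxc/C1/pre-start.sh
+</command>
+</screen>
+      </para>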
1246- </sect3>
1247-
1248- <sect3 id="lxc-monitoring" status="review">
1249- <title>Monitoring container status</title>
1250-
1251- <para>Two commands are available to monitor container state changes.
1252- <command>lxc-monitor</command> monitors one or more containers for any
1253- state changes. It takes a container name as usual with the
1254- <emphasis>-n</emphasis> option, but in this case the container name
1255- can be a posix regular expression to allow monitoring desirable sets
1256- of containers. <command>lxc-monitor</command> continues running as it
1257- prints container changes. <command>lxc-wait</command> waits for a
1258- specific state change and then exits. For instance,</para>
1259-
1260- <screen>
1261-<command>
1262-sudo lxc-monitor -n cont[0-5]*
1263-</command>
1264-</screen>
1265-
1266- <para>would print all state changes to any containers matching the
1267- listed regular expression, whereas</para>
1268-
1269- <screen>
1270-<command>
1271-sudo lxc-wait -n cont1 -s 'STOPPED|FROZEN'
1272-</command>
1273-</screen>
1274-
1275- <para>will wait until container cont1 enters state STOPPED or state
1276- FROZEN and then exit.</para>
1277- </sect3>
1278-
1279- <sect3 id="lxc-consoles" status="review">
1280+ </sect2>
1281+
1282+ <sect2 id="lxc-consoles" status="review">
1283 <title>Consoles</title>
1284
1285 <para>Containers have a configurable number of consoles. One always
1286@@ -2327,941 +2205,111 @@
1287 container will not be able to access that character device and getty
1288 will therefore fail.) This can easily happen when a boot script
1289 blindly mounts a new <filename>/dev</filename>.</para>
1290- </sect3>
1291-
1292- <sect3 id="lxc-introspection" status="review">
1293- <title>Container Inspection</title>
1294-
1295- <para>Several commands are available to gather information on existing
1296- containers. <command>lxc-ls</command> will report all existing
1297- containers in its first line of output, and all running containers in
1298- the second line. <command>lxc-list</command> provides the same
1299- information in a more verbose format, listing running containers first
1300- and stopped containers next. <command>lxc-ps</command> will provide
1301- lists of processes in containers. To provide <command>ps</command>
1302- arguments to <command>lxc-ps</command>, prepend them with
1303- <command>--</command>. For instance, for listing of all processes in
1304- container plain,</para>
1305-
1306- <screen>
1307-<command>
1308-sudo lxc-ps -n plain -- -ef
1309-</command>
1310-</screen>
1311-
1312- <para><command>lxc-info</command> provides the state of a container
1313- and the pid of its init process. <command>lxc-cgroup</command> can be
1314- used to query or set the values of a container's control group limits
1315- and information. This can be more convenient than interacting with the
1316- <command>cgroup</command> filesystem. For instance, to query the list
1317- of devices which a running container is allowed to access, you could
1318- use</para>
1319-
1320- <screen>
1321-<command>
1322-sudo lxc-cgroup -n CN devices.list
1323-</command>
1324-</screen>
1325-
1326- <para>or to add mknod, read, and write access to
1327- <filename>/dev/sda</filename>,</para>
1328-
1329- <screen>
1330-<command>
1331-sudo lxc-cgroup -n CN devices.allow "b 8:* rwm"
1332-</command>
1333-</screen>
1334-
1335- <para>and, to limit it to 300M of RAM,</para>
1336-
1337- <screen>
1338-<command>
1339-lxc-cgroup -n CN memory.limit_in_bytes 300000000
1340-</command>
1341-</screen>
1342-
1343- <para><command>lxc-netstat</command> executes
1344- <command>netstat</command> in the running container, giving you a
1345- glimpse of its network state.</para>
1346-
1347- <para><command>lxc-backup</command> will create backups of the root
1348- filesystems of all existing containers (except lvm-based ones), using
1349- <command>rsync</command> to back the contents up under
1350- <filename>/var/lib/lxc/CN/rootfs.backup.1</filename>. These backups
1351- can be restored using <command>lxc-restore.</command> However,
1352- <command>lxc-backup</command> and <command>lxc-restore</command> are
1353- fragile with respect to customizations and therefore their use is not
1354- recommended.</para>
1355- </sect3>
1356-
1357- <sect3 id="lxc-destroying" status="review">
1358- <title>Destroying containers</title>
1359-
1360- <para>Use <command>lxc-destroy</command> to destroy an existing
1361- container.</para>
1362-
1363- <screen>
1364-<command>
1365-sudo lxc-destroy -n CN
1366-</command>
1367-</screen>
1368-
1369- <para>If the container is running, <command>lxc-destroy</command> will
1370- exit with a message informing you that you can force stopping and
1371- destroying the container with</para>
1372-
1373- <screen>
1374-<command>
1375-sudo lxc-destroy -n CN -f
1376-</command>
1377-</screen>
1378- </sect3>
1379-
1380- <sect3 id="lxc-namespaces" status="review">
1381- <title>Advanced namespace usage</title>
1382-
1383- <para>One of the Linux kernel features used by LXC to create
1384- containers is private namespaces. Namespaces allow a set of tasks to
1385- have private mappings of names to resources for things like pathnames
1386- and process IDs. (See <xref linkend="lxc-resources"/> for a link to
1387- more information). Unlike control groups and other mount features
1388- which are also used to create containers, namespaces cannot be
1389- manipulated using a filesystem interface. Therefore, LXC ships with
1390- the <command>lxc-unshare</command> program, which is mainly for
1391- testing. It provides the ability to create new tasks in private
1392- namespaces. For instance,</para>
1393-
1394- <screen>
1395-<command>
1396-sudo lxc-unshare -s 'MOUNT|PID' /bin/bash
1397-</command>
1398-</screen>
1399-
1400- <para>creates a bash shell with private pid and mount namespaces. In
1401- this shell, you can do</para>
1402-
1403- <screen>
1404-root@ubuntu:~# mount -t proc proc /proc
1405-root@ubuntu:~# ps -ef
1406-UID PID PPID C STIME TTY TIME CMD
1407-root 1 0 6 10:20 pts/9 00:00:00 /bin/bash
1408-root 110 1 0 10:20 pts/9 00:00:00 ps -ef
1409-</screen>
1410-
1411- <para>so that <command>ps</command> shows only the tasks in your new
1412- namespace.</para>
1413- </sect3>
1414-
1415- <sect3 id="lxc-ephemeral" status="review">
1416- <title>Ephemeral containers</title>
1417-
1418- <para>Ephemeral containers are one-time containers. Given an existing
1419- container CN, you can run a command in an ephemeral container created
1420- based on CN, with the host's jdoe user bound into the container,
1421- using:</para>
1422-
1423- <screen>
1424-<command>
1425-lxc-start-ephemeral -b jdoe -o CN -- /home/jdoe/run_my_job
1426-</command>
1427-</screen>
1428-
1429- <para>When the job is finished, the container will be
1430- discarded.</para>
1431- </sect3>
1432-
1433- <sect3 id="lxc-commands" status="review">
1434- <title>Container Commands</title>
1435-
1436- <para>Following is a table of all container commands:</para>
1437-
1438- <table>
1439- <title>Container commands</title>
1440-
1441- <tgroup cols="2" rowsep="1">
1442- <colspec colname="1" colwidth="1.0*"/>
1443-
1444- <colspec colname="2" colwidth="2.5*"/>
1445-
1446- <thead>
1447- <row>
1448- <entry><para>Command</para></entry>
1449-
1450- <entry><para>Synopsis</para></entry>
1451- </row>
1452- </thead>
1453-
1454- <tbody>
1455- <row>
1456- <entry><para>lxc-attach </para></entry>
1457-
1458- <entry><para>(NOT SUPPORTED) Run a command in a running
1459- container</para></entry>
1460- </row>
1461-
1462- <row>
1463- <entry><para>lxc-backup </para></entry>
1464-
1465- <entry><para>Back up the root filesystems for all except lvm-backed
1466- containers</para></entry>
1467- </row>
1468-
1469- <row>
1470- <entry><para>lxc-cgroup </para></entry>
1471-
1472- <entry><para>View and set container control group
1473- settings</para></entry>
1474- </row>
1475-
1476- <row>
1477- <entry><para>lxc-checkconfig </para></entry>
1478-
1479- <entry><para>Verify host support for containers</para></entry>
1480- </row>
1481-
1482- <row>
1483- <entry><para>lxc-checkpoint </para></entry>
1484-
1485- <entry><para>(NOT SUPPORTED) Checkpoint a running
1486- container</para></entry>
1487- </row>
1488-
1489- <row>
1490- <entry><para>lxc-clone </para></entry>
1491-
1492- <entry><para>Clone a new container from an existing
1493- one</para></entry>
1494- </row>
1495-
1496- <row>
1497- <entry><para>lxc-console </para></entry>
1498-
1499- <entry><para>Open a console in a running
1500- container</para></entry>
1501- </row>
1502-
1503- <row>
1504- <entry><para>lxc-create </para></entry>
1505-
1506- <entry><para>Create a new container</para></entry>
1507- </row>
1508-
1509- <row>
1510- <entry><para>lxc-destroy </para></entry>
1511-
1512- <entry><para>Destroy an existing container</para></entry>
1513- </row>
1514-
1515- <row>
1516- <entry><para>lxc-execute </para></entry>
1517-
1518- <entry><para>Run a command in a (not running) application
1519- container</para></entry>
1520- </row>
1521-
1522- <row>
1523- <entry><para>lxc-freeze </para></entry>
1524-
1525- <entry><para>Freeze a running container</para></entry>
1526- </row>
1527-
1528- <row>
1529- <entry><para>lxc-info </para></entry>
1530-
1531- <entry><para>Print information on the state of a
1532- container</para></entry>
1533- </row>
1534-
1535- <row>
1536- <entry><para>lxc-kill </para></entry>
1537-
1538- <entry><para>Send a signal to a container's
1539- init</para></entry>
1540- </row>
1541-
1542- <row>
1543- <entry><para>lxc-list </para></entry>
1544-
1545- <entry><para>List all containers</para></entry>
1546- </row>
1547-
1548- <row>
1549- <entry><para>lxc-ls </para></entry>
1550-
1551- <entry><para>List all containers with shorter output than
1552- lxc-list</para></entry>
1553- </row>
1554-
1555- <row>
1556- <entry><para>lxc-monitor </para></entry>
1557-
1558- <entry><para>Monitor state changes of one or more
1559- containers</para></entry>
1560- </row>
1561-
1562- <row>
1563- <entry><para>lxc-netstat </para></entry>
1564-
1565- <entry><para>Execute netstat in a running
1566- container</para></entry>
1567- </row>
1568-
1569- <row>
1570- <entry><para>lxc-ps </para></entry>
1571-
1572- <entry><para>View process info in a running
1573- container</para></entry>
1574- </row>
1575-
1576- <row>
1577- <entry><para>lxc-restart </para></entry>
1578-
1579- <entry><para>(NOT SUPPORTED) Restart a checkpointed
1580- container</para></entry>
1581- </row>
1582-
1583- <row>
1584- <entry><para>lxc-restore </para></entry>
1585-
1586- <entry><para>Restore containers from backups made by
1587- lxc-backup</para></entry>
1588- </row>
1589-
1590- <row>
1591- <entry><para>lxc-setcap </para></entry>
1592-
1593- <entry><para>(NOT RECOMMENDED) Set file capabilities on LXC
1594- tools</para></entry>
1595- </row>
1596-
1597- <row>
1598- <entry><para>lxc-setuid </para></entry>
1599-
1600- <entry><para>(NOT RECOMMENDED) Set or remove setuid bits on
1601- LXC tools</para></entry>
1602- </row>
1603-
1604- <row>
1605- <entry><para>lxc-shutdown </para></entry>
1606-
1607- <entry><para>Safely shut down a container</para></entry>
1608- </row>
1609-
1610- <row>
1611- <entry><para>lxc-start </para></entry>
1612-
1613- <entry><para>Start a stopped container</para></entry>
1614- </row>
1615-
1616- <row>
1617- <entry><para>lxc-start-ephemeral </para></entry>
1618-
1619- <entry><para>Start an ephemeral (one-time)
1620- container</para></entry>
1621- </row>
1622-
1623- <row>
1624- <entry><para>lxc-stop </para></entry>
1625-
1626- <entry><para>Immediately stop a running
1627- container</para></entry>
1628- </row>
1629-
1630- <row>
1631- <entry><para>lxc-unfreeze </para></entry>
1632-
1633- <entry><para>Unfreeze a frozen container</para></entry>
1634- </row>
1635-
1636- <row>
1637- <entry><para>lxc-unshare </para></entry>
1638-
1639- <entry><para>Testing tool to manually unshare
1640- namespaces</para></entry>
1641- </row>
1642-
1643- <row>
1644- <entry><para>lxc-version </para></entry>
1645-
1646- <entry><para>Print the version of the LXC tools</para></entry>
1647- </row>
1648-
1649- <row>
1650- <entry><para>lxc-wait </para></entry>
1651-
1652- <entry><para>Wait for a container to reach a particular
1653- state</para></entry>
1654- </row>
1655- </tbody>
1656- </tgroup>
1657- </table>
1658- </sect3>
1659- </sect2>
1660-
1661- <sect2 id="lxc-conf" status="review">
1662- <title>Configuration File</title>
1663-
1664- <para>LXC containers are very flexible. The Ubuntu
1665- <application>lxc</application> package sets defaults to make creation of
1666- Ubuntu system containers as simple as possible. If you need more
1667- flexibility, this chapter will show how to fine-tune your containers as
1668- you need.</para>
1669-
1670- <para>Detailed information is available in the
1671- <command>lxc.conf(5)</command> man page. Note that the default
1672- configurations created by the ubuntu templates are reasonable for a
1673- system container and usually do not need customization.</para>
1674-
1675- <sect3 id="lxc-conf-options" status="review">
1676- <title>Choosing configuration files and options</title>
1677-
1678- <para>The container setup is controlled by the LXC configuration
1679- options. Options can be specified at several points:</para>
1680-
1681- <itemizedlist>
1682- <listitem>
1683- <para>During container creation, a configuration file can be
1684- specified. However, creation templates often insert their own
1685- configuration options, so we usually specify only network
1686- configuration options at this point. For other configuration, it
1687- is usually better to edit the configuration file after container
1688- creation.</para>
1689- </listitem>
1690-
1691- <listitem>
1692- <para>The file <filename>/var/lib/lxc/CN/config</filename> is used
1693- at container startup by default.</para>
1694- </listitem>
1695-
1696- <listitem>
1697- <para><command>lxc-start</command> accepts an alternate
1698- configuration file with the <emphasis>-f filename</emphasis>
1699- option.</para>
1700- </listitem>
1701-
1702- <listitem>
1703- <para>Specific configuration variables can be overridden at
1704- <command>lxc-start</command> using <emphasis>-s
1705- key=value</emphasis>. It is generally better to edit the container
1706- configuration file.</para>
1707- </listitem>
1708- </itemizedlist>
1709- </sect3>
1710-
1711- <sect3 id="lxc-conf-net" status="review">
1712- <title>Network Configuration</title>
1713-
1714- <para>Container networking in LXC is very flexible. It is triggered by
1715- the <command>lxc.network.type</command> configuration file entries. If
1716- no such entries exist, then the container will share the host's
1717- networking stack. Services and connections started in the container
1718- will be using the host's IP address. If at least one
1719- <command>lxc.network.type</command> entry is present, then the
1720- container will have a private (layer 2) network stack. It will have
1721- its own network interfaces and firewall rules. There are several
1722- options for <command>lxc.network.type</command>:</para>
1723-
1724- <itemizedlist>
1725- <listitem>
1726- <para><command>lxc.network.type=empty</command>: The container
1727- will have no network interfaces other than loopback.</para>
1728- </listitem>
1729-
1730- <listitem>
1731- <para><command>lxc.network.type=veth</command>: This is the
1732- default when using the ubuntu or ubuntu-cloud templates, and
1733- creates a veth network tunnel. One end of this tunnel becomes the
1734- network interface inside the container. The other end is attached
1735- to a bridged on the host. Any number of such tunnels can be
1736- created by adding more <command>lxc.network.type=veth</command>
1737- entries in the container configuration file. The bridge to which
1738- the host end of the tunnel will be attached is specified with
1739- <command>lxc.network.link = lxcbr0</command>.</para>
1740- </listitem>
1741-
1742- <listitem>
1743- <para><command>lxc.network.type=phys</command> A physical network
1744- interface (i.e. eth2) is passed into the container.</para>
1745- </listitem>
1746- </itemizedlist>
1747-
1748- <para>Two other options are to use vlan or macvlan, however their use
1749- is more complicated and is not described here. A few other networking
1750- options exist:</para>
1751-
1752- <itemizedlist>
1753- <listitem>
1754- <para><command>lxc.network.flags</command> can only be set to
1755- <emphasis>up</emphasis> and ensures that the network interface is
1756- up.</para>
1757- </listitem>
1758-
1759- <listitem>
1760- <para><command>lxc.network.hwaddr</command> specifies a mac
1761- address to assign to the nic inside the container.</para>
1762- </listitem>
1763-
1764- <listitem>
1765- <para><command>lxc.network.ipv4</command> and
1766- <command>lxc.network.ipv6</command> set the respective IP
1767- addresses, if those should be static.</para>
1768- </listitem>
1769-
1770- <listitem>
1771- <para><command>lxc.network.name</command> specifies a name to
1772- assign inside the container. If this is not specified, a good
1773- default (i.e. eth0 for the first nic) is chosen.</para>
1774- </listitem>
1775-
1776- <listitem>
1777- <para><command>lxc.network.lxcscript.up</command> specifies a
1778- script to be called after the host side of the networking has been
1779- set up. See the <command>lxc.conf(5)</command> manual page for
1780- details.</para>
1781- </listitem>
1782- </itemizedlist>
1783- </sect3>
1784-
1785- <sect3 id="lxc-conf-cgroup" status="review">
1786- <title>Control group configuration</title>
1787-
1788- <para>Cgroup options can be specified using
1789- <command>lxc.cgroup</command> entries.
1790- <command>lxc.cgroup.subsystem.item = value</command> instructs LXC to
1791- set cgroup <command>subsystem</command>'s <command>item</command> to
1792- <command>value</command>. It is perhaps simpler to realize that this
1793- will simply write <command>value</command> to the file
1794- <command>item</command> for the container's control group for
1795- subsystem <command>subsystem</command>. For instance, to set the
1796- memory limit to 320M, you could add</para>
1797-
1798- <screen>
1799-<command>
1800-lxc.cgroup.memory.limit_in_bytes = 320000000
1801-</command>
1802-</screen>
1803-
1804- <para>which will cause 320000000 to be written to the file
1805- <filename>/sys/fs/cgroup/memory/lxc/CN/limit_in_bytes</filename>.</para>
1806- </sect3>
1807-
1808- <sect3 id="lxc-conf-mounts" status="review">
1809- <title>Rootfs, mounts and fstab</title>
1810-
1811- <para>An important part of container setup is the mounting of various
1812- filesystems into place. The following is an example configuration file
1813- excerpt demonstrating the commonly used configuration options:</para>
1814-
1815- <screen>
1816-<command>
1817-lxc.rootfs = /var/lib/lxc/CN/rootfs
1818-lxc.mount.entry=proc /var/lib/lxc/CN/rootfs/proc proc nodev,noexec,nosuid 0 0
1819-lxc.mount = /var/lib/lxc/CN/fstab
1820-</command>
1821-</screen>
1822-
1823- <para>The first line says that the container's root filesystem is
1824- already mounted at <filename>/var/lib/lxc/CN/rootfs</filename>. If the
1825- filesystem is a block device (such as an LVM logical volume), then the
1826- path to the block device must be given instead.</para>
1827-
1828- <para>Each <command>lxc.mount.entry</command> line should contain an
1829- item to mount in valid fstab format. The target directory should be
1830- prefixed by <filename>/var/lib/lxc/CN/rootfs</filename>, even if
1831- <command>lxc.rootfs</command> points to a block device.</para>
1832-
1833- <para>Finally, <command>lxc.mount</command> points to a file, in fstab
1834- format, containing further items to mount. Note that all of these
1835- entries will be mounted by the host before the container init is
1836- started. In this way it is possible to bind mount various directories
1837- from the host into the container.</para>
1838- </sect3>
1839-
1840- <sect3 id="lxc-conf-other" status="review">
1841- <title>Other configuration options</title>
1842-
1843- <itemizedlist>
1844- <listitem>
1845- <para><command>lxc.cap.drop</command> can be used to prevent the
1846- container from having or ever obtaining the listed capabilities.
1847- For instance, including</para>
1848-
1849- <screen>
1850-<command>
1851-lxc.cap.drop = sys_admin
1852-</command>
1853-</screen>
1854-
1855- <para>will prevent the container from mounting filesystems, as
1856- well as all other actions which require cap_sys_admin. See the
1857- <command>capabilities(7)</command> manual page for a list of
1858- capabilities and their meanings.</para>
1859- </listitem>
1860-
1861- <listitem>
1862- <para><command>lxc.aa_profile = lxc-CN-profile</command> specifies
1863- a custom Apparmor profile in which to start the container. See
1864- <xref linkend="lxc-apparmor"/> for more information.</para>
1865- </listitem>
1866-
1867- <listitem>
1868- <para><command>lxc.console=/path/to/consolefile</command> will
1869- cause console messages to be written to the specified file.</para>
1870- </listitem>
1871-
1872- <listitem>
1873- <para><command>lxc.arch</command> specifies the architecture for
1874- the container, for instance x86, or x86_64.</para>
1875- </listitem>
1876-
1877- <listitem>
1878- <para><command>lxc.tty=5</command> specifies that 5 consoles (in
1879- addition to <filename>/dev/console</filename>) should be created.
1880- That is, consoles will be available on
1881- <filename>/dev/tty1</filename> through
1882- <filename>/dev/tty5</filename>. The ubuntu templates set this
1883- value to 4.</para>
1884- </listitem>
1885-
1886- <listitem>
1887- <para><command>lxc.pts=1024</command> specifies that the container
1888- should have a private (Unix98) devpts filesystem mount. If this is
1889- not specified, then the container will share
1890- <filename>/dev/pts</filename> with the host, which is rarely
1891- desired. The number 1024 means that 1024 ptys should be allowed in
1892- the container, however this number is currently ignored. Before
1893- starting the container init, LXC will do (essentially) a</para>
1894-
1895- <screen>
1896-<command>
1897-sudo mount -t devpts -o newinstance devpts /dev/pts
1898-</command>
1899-</screen>
1900-
1901- <para>inside the container. It is important to realize that the
1902- container should not mount devpts filesystems of its own. It may
1903- safely do bind or move mounts of its mounted
1904- <filename>/dev/pts</filename>. But if it does</para>
1905-
1906- <screen>
1907-<command>
1908-sudo mount -t devpts devpts /dev/pts
1909-</command>
1910-</screen>
1911-
1912- <para>it will remount the host's devpts instance. If it adds the
1913- newinstance mount option, then it will mount a new private (empty)
1914- instance. In neither case will it remount the instance which was
1915- set up by LXC. For this reason, and to prevent the container from
1916- using the host's ptys, the default Apparmor policy will not allow
1917- containers to mount devpts filesystems after the container's init
1918- has been started.</para>
1919- </listitem>
1920-
1921- <listitem>
1922- <para><command>lxc.devttydir</command> specifies a directory under
1923- <filename>/dev</filename> in which LXC will create its console
1924- devices. If this option is not specified, then the ptys will be
1925- bind-mounted over <filename>/dev/console</filename> and
1926- <filename>/dev/ttyN.</filename> However, rare package updates may
1927- try to blindly <emphasis>rm -f</emphasis> and then
1928- <emphasis>mknod</emphasis> those devices. They will fail (because
1929- the file has been bind-mounted), causing the package update to
1930- fail. When <command>lxc.devttydir</command> is set to LXC, for
1931- instance, then LXC will bind-mount the console ptys onto
1932- <filename>/dev/lxc/console</filename> and
1933- <filename>/dev/lxc/ttyN,</filename> and subsequently symbolically
1934- link them to <filename>/dev/console</filename> and
1935- <filename>/dev/ttyN.</filename> This allows the package updates to
1936- succeed, at the risk of making future gettys on those consoles
1937- fail until the next reboot. This problem will be ideally solved
1938- with device namespaces.</para>
1939- </listitem>
1940-
1941- <listitem>
1942- <para>The <command>lxc.hook.</command> options specify programs to
1943- run at various points in a container's life cycle. See <xref
1944- linkend="lxc-hooks"/> for more information on these hooks. To have
1945- multiple hooks called at any point, list them in multiple entries.
1946- The possible values, whose precise meanings are described in <xref
1947- linkend="lxc-hooks"/>, are</para>
1948-
1949- <para><itemizedlist>
1950- <listitem>
1951- <para><command>lxc.hook.pre-start</command></para>
1952- </listitem>
1953-
1954- <listitem>
1955- <para><command>lxc.hook.pre-mount</command></para>
1956- </listitem>
1957-
1958- <listitem>
1959- <para><command>lxc.hook.mount</command></para>
1960- </listitem>
1961-
1962- <listitem>
1963- <para><command>lxc.hook.start</command></para>
1964- </listitem>
1965-
1966- <listitem>
1967- <para><command>lxc.hook.post-stop</command></para>
1968- </listitem>
1969- </itemizedlist></para>
1970- </listitem>
1971-
1972- <listitem>
1973- <para>The <command>lxc.include</command> option specifies another
1974- configuration file to be loaded. This allows common configuration
1975- sections to be defined once and included by several containers,
1976- simplifying updates of the common section.</para>
1977- </listitem>
1978-
1979- <listitem>
1980- <para>The <command>lxc.seccomp</command> option (introduced with
1981- Ubuntu 12.10) specifies a file containing a
1982- <emphasis>seccomp</emphasis> policy to load. See <xref
1983- linkend="lxc-security"/> for more information on seccomp in
1984- lxc.</para>
1985- </listitem>
1986- </itemizedlist>
1987- </sect3>
1988- </sect2>
1989-
1990- <sect2 id="lxc-container-updates" status="review">
1991- <title>Updates in Ubuntu containers</title>
1992-
1993- <para>Because of some limitations which are placed on containers,
1994- package upgrades at times can fail. For instance, a package install or
1995- upgrade might fail if it is not allowed to create or open a block
1996- device. This often blocks all future upgrades until the issue is
1997- resolved. In some cases, you can work around this by chrooting into the
1998- container, to avoid the container restrictions, and completing the
1999- upgrade in the chroot.</para>
2000-
2001- <para>Some of the specific things known to occasionally impede package
2002- upgrades include:</para>
2003-
2004- <itemizedlist>
2005- <listitem>
2006- <para>The container modifications performed when creating containers
2007- with the --trim option.</para>
2008- </listitem>
2009-
2010- <listitem>
2011- <para>Actions performed by lxcguest. For instance, because
2012- <filename>/lib/init/fstab</filename> is bind-mounted from another
2013- file, mountall upgrades which insist on replacing that file can
2014- fail.</para>
2015- </listitem>
2016-
2017- <listitem>
2018- <para>The over-mounting of console devices with ptys from the host
2019- can cause trouble with udev upgrades.</para>
2020- </listitem>
2021-
2022- <listitem>
2023- <para>Apparmor policy and devices cgroup restrictions can prevent
2024- package upgrades from performing certain actions.</para>
2025- </listitem>
2026-
2027- <listitem>
2028- <para>Capabilities dropped by use of <command>lxc.cap.drop</command>
2029- can likewise stop package upgrades from performing certain
2030- actions.</para>
2031- </listitem>
2032- </itemizedlist>
2033- </sect2>
2034-
2035- <sect2 id="lxc-libvirt" status="review">
2036- <title>Libvirt LXC</title>
2037-
2038- <para>Libvirt is a powerful hypervisor management solution with which
2039- you can administer Qemu, Xen and LXC virtual machines, both locally and
2040- remote. The libvirt LXC driver is a separate implementation from what we
2041- normally call <emphasis>LXC</emphasis>. A few differences
2042- include:</para>
2043-
2044- <itemizedlist>
2045- <listitem>
2046- <para>Configuration is stored in xml format</para>
2047- </listitem>
2048-
2049- <listitem>
2050- <para>There are no tools to facilitate container creation</para>
2051- </listitem>
2052-
2053- <listitem>
2054- <para>By default there is no console on
2055- <filename>/dev/console</filename></para>
2056- </listitem>
2057-
2058- <listitem>
2059- <para>There is no support (yet) for container reboot or full
2060- shutdown</para>
2061- </listitem>
2062- </itemizedlist>
2063-
2064- <!--
2065- <sect3 id="lxc-libvirt-virtinst" status="review">
2066- <title>virt-install</title>
2067-
2068- <para>
2069- virt-install can be used to create an LXC container. (test and
2070- verify). Serge hasn't gotten this to work.
2071- </para>
2072-
2073- </sect3>
2074- -->
2075-
2076- <sect3 id="lxc-libvirt-convert" status="review">
2077- <title>Converting a LXC container to libvirt-lxc</title>
2078-
2079- <para><xref linkend="lxc-creation"/> showed how to create LXC
2080- containers. If you've created a valid LXC container in this way, you
2081- can manage it with libvirt. Fetch a sample xml file from</para>
2082-
2083- <screen>
2084-<command>
2085-wget http://people.canonical.com/~serge/o1.xml
2086-</command>
2087-</screen>
2088-
2089- <para>Edit this file to replace the container name and root filesystem
2090- locations. Then you can define the container with:</para>
2091-
2092- <screen>
2093-<command>
2094-virsh -c lxc:/// define o1.xml
2095-</command>
2096-</screen>
2097- </sect3>
2098-
2099- <sect3 id="lxc-libvirt-fromcloud" status="review">
2100- <title>Creating a container from cloud image</title>
2101-
2102- <para>If you prefer to create a pristine new container just for LXC,
2103- you can download an ubuntu cloud image, extract it, and point a
2104- libvirt LXC xml file to it. For instance, find the url for a root
2105- tarball for the latest daily Ubuntu 12.04 LTS cloud image using</para>
2106-
2107- <screen>
2108-<command>
2109-url1=`ubuntu-cloudimg-query precise daily $arch --format "%{url}\n"`
2110-url=`echo $url1 | sed -e 's/.tar.gz/-root\0/'`
2111-wget $url
2112-filename=`basename $url`
2113-</command>
2114-</screen>
2115-
2116- <para>Extract the downloaded tarball, for instance</para>
2117-
2118- <screen>
2119-<command>
2120-mkdir $HOME/c1
2121-cd $HOME/c1
2122-sudo tar zxf $filename
2123-</command>
2124-</screen>
2125-
2126- <para>Download the xml template</para>
2127-
2128- <screen>
2129-<command>
2130-wget http://people.canonical.com/~serge/o1.xml
2131-</command>
2132-</screen>
2133-
2134- <para>In the xml template, replace the name o1 with c1 and the source
2135- directory <filename>/var/lib/lxc/o1/rootfs</filename> with
2136- <filename>$HOME/c1</filename>. Then define the container using</para>
2137-
2138- <screen>
2139-<command>
2140-virsh define o1.xml
2141-</command>
2142-</screen>
2143- </sect3>
2144-
2145- <sect3 id="lxc-libvirt-interacting" status="review">
2146- <title>Interacting with libvirt containers</title>
2147-
2148- <para>As we've seen, you can create a libvirt-lxc container
2149- using</para>
2150-
2151- <screen>
2152-<command>
2153-virsh -c lxc:/// define container.xml
2154-</command>
2155-</screen>
2156-
2157- <para>To start a container called <emphasis>container</emphasis>,
2158- use</para>
2159-
2160- <screen>
2161-<command>
2162-virsh -c lxc:/// start container
2163-</command>
2164-</screen>
2165-
2166- <para>To stop a running container, use</para>
2167-
2168- <screen>
2169-<command>
2170-virsh -c lxc:/// destroy container
2171-</command>
2172-</screen>
2173-
2174- <para>Note that whereas the <command>lxc-destroy</command> command
2175- deletes the container, the <command>virsh destroy</command> command
2176- stops a running container. To delete the container definition,
2177- use</para>
2178-
2179- <screen>
2180-<command>
2181-virsh -c lxc:/// undefine container
2182-</command>
2183-</screen>
2184-
2185- <para>To get a console to a running container, use</para>
2186-
2187- <screen>
2188-<command>
2189-virsh -c lxc:/// console container
2190-</command>
2191-</screen>
2192-
2193- <para>Exit the console by simultaneously pressing control and
2194- ].</para>
2195- </sect3>
2196- </sect2>
2197-
2198- <sect2 id="lxc-guest" status="review">
2199- <title>The lxcguest package</title>
2200-
2201- <para>In the 11.04 (Natty) and 11.10 (Oneiric) releases of Ubuntu, a
2202- package was introduced called <emphasis
2203- role="italic">lxcguest</emphasis>. An unmodified root image could not be
2204- safely booted inside a container, but an image with the lxcguest package
2205- installed could be booted as a container, on bare hardware, or in a Xen,
2206- kvm, or VMware virtual machine.</para>
2207-
2208- <para>As of the 12.04 LTS release, the work previously done by the
2209- lxcguest package was pushed into the core packages, and the lxcguest
2210- package was removed. As a result, an unmodified 12.04 LTS image can be
2211- booted as a container, on bare hardware, or in a Xen, kvm, or VMware
2212- virtual machine. To use an older release, the lxcguest package should
2213- still be used.</para>
2214+ </sect2>
2215+
2216+ <sect2 id="lxc-debugging" status="review">
2217+ <title>Troubleshooting</title>
2218+ <sect3>
2219+ <title>Logging</title>
2220+      <para>If something goes wrong when starting a container, the first
2221+ step should be to get full logging from LXC:
2222+<screen>
2223+<command>
2224+sudo lxc-start -n C1 -l trace -o debug.out
2225+</command>
2226+</screen>
2227+      This will cause lxc to log at the most verbose level, <emphasis>trace</emphasis>,
2228+      and to write the log to the file <filename>debug.out</filename>. If
2229+      <filename>debug.out</filename> already exists, the new log
2230+      information will be appended.
2231+ </para>
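+      <para>
+      The resulting file can be large; an ordinary shell filter such as
+      the following is often enough to surface the failing step:
+<screen>
+<command>
+grep -i error debug.out
+</command>
+</screen>
+      </para>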
2232+ </sect3>
2233+
2234+ <sect3 id="lxc-monitoring" status="review">
2235+ <title>Monitoring container status</title>
2236+
2237+ <para>Two commands are available to monitor container state changes.
2238+ <command>lxc-monitor</command> monitors one or more containers for any
2239+ state changes. It takes a container name as usual with the
2240+ <emphasis>-n</emphasis> option, but in this case the container name
2241+      can be a POSIX regular expression to allow monitoring desired sets
2242+ of containers. <command>lxc-monitor</command> continues running as it
2243+ prints container changes. <command>lxc-wait</command> waits for a
2244+ specific state change and then exits. For instance,</para>
2245+
2246+ <screen>
2247+<command>
2248+sudo lxc-monitor -n cont[0-5]*
2249+</command>
2250+</screen>
2251+
2252+ <para>would print all state changes to any containers matching the
2253+ listed regular expression, whereas</para>
2254+
2255+<screen>
2256+<command>
2257+sudo lxc-wait -n cont1 -s 'STOPPED|FROZEN'
2258+</command>
2259+</screen>
2260+
2261+ <para>will wait until container cont1 enters state STOPPED or state
2262+ FROZEN and then exit.</para>
2263+ </sect3>
2264+
2265+ <sect3>
2266+ <title>Attach</title>
2267+ <para>
2268+ As of Ubuntu 14.04, it is possible to attach to a container's
2269+      namespaces. In the simplest case, run
2270+<screen>
2271+<command>
2272+sudo lxc-attach -n C1
2273+</command>
2274+</screen>
2275+ which will start a shell attached to C1's namespaces, or,
2276+      effectively, inside the container. The attach functionality is
2277+ very flexible, allowing attaching to a subset of the container's
2278+ namespaces and security context. See the manual page for
2279+ more information.
2280+ </para>
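+      <para>
+      A single command can also be run inside the container instead of a
+      shell, for example:
+<screen>
+<command>
+sudo lxc-attach -n C1 -- ps aux
+</command>
+</screen>
+      </para>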
2281+ </sect3>
2282+ <sect3>
2283+ <title>Container init verbosity</title>
2284+ <para>
2285+ If LXC completes the container startup, but the container init
2286+ fails to complete (for instance, no login prompt is shown),
2287+ it can be useful to request additional verbosity from the
2288+ init process. For an upstart container, this might be:
2289+<screen>
2290+<command>
2291+sudo lxc-start -n C1 /sbin/init loglevel=debug
2292+</command>
2293+</screen>
2294+ You can also start an entirely different program in place of
2295+ init, for instance
2296+<screen>
2297+<command>
2298+sudo lxc-start -n C1 /bin/bash
2299+sudo lxc-start -n C1 /bin/sleep 100
2300+sudo lxc-start -n C1 /bin/cat /proc/1/status
2301+</command>
2302+</screen>
2303+ </para>
2304+ </sect3>
2305 </sect2>
2306
2307 <sect2 id="python-lxc" status="review">
2308- <title>Python api</title>
2309+ <title>LXC API</title>
2310
2311- <para>As of 12.10 (Quantal) a <application>python3-lxc</application>
2312- package is available which provides a python module, called
2313- <command>lxc</command>, for managing <application>lxc</application>
2314- containers. An example python session to create and start an Ubuntu
2315- container called <filename>C1</filename>, then wait until it has been
2316- shut down, would look like:</para>
2317+ <para>Most of the LXC functionality can now be accessed through an
2318+ API exported by <filename>liblxc</filename> for which bindings are
2319+      available in several languages, including Python, Lua, Ruby, and Go.
2320+ </para>
2321+ <para>
2322+      Below is an example using the Python bindings (available in the
2323+      <application>python3-lxc</application> package) that creates and starts
2324+ a container, then waits until it has been shut down:
2325+ </para>
2326
2327 <programlisting>
2328 # sudo python3
2329@@ -3280,8 +2328,6 @@
2330 True
2331 </programlisting>
2332
2333- <para>Debug information for containers started with the python API will
2334- be placed in <filename>/var/log/lxccontainer.log</filename>.</para>
2335 </sect2>
2336
2337 <sect2 id="lxc-security" status="review">
2338@@ -3297,11 +2343,14 @@
2339 the host.</para>
2340
2341       <para>By default, LXC containers are started under an AppArmor policy to
2342- restrict some actions. However, while stronger security is a goal for
2343- future releases, in 12.04 LTS the goal of the Apparmor policy is not to
2344- stop malicious actions but rather to stop accidental harm of the host by
2345- the guest. The details of AppArmor integration with lxc are in section
2346- <xref linkend="lxc-apparmor"/></para>
2347+ restrict some actions.
2348+ The details of AppArmor integration with lxc are in section
2349+ <xref linkend="lxc-apparmor"/>. Unprivileged containers go further
2350+ by mapping root in the container to an unprivileged host userid. This
2351+ prevents access to <filename>/proc</filename> and <filename>/sys</filename>
2352+ files representing host resources, as well as any other files owned by root
2353+ on the host.
2354+ </para>
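+      <para>
+      As an illustrative sketch, an unprivileged container's configuration
+      maps container ids to unprivileged host ids with entries such as the
+      following (the exact ranges come from the host's
+      <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename>):
+<screen>
+<command>
+# assumes ids 100000-165535 were delegated to this user on the host
+lxc.id_map = u 0 100000 65536
+lxc.id_map = g 0 100000 65536
+</command>
+</screen>
+      </para>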
2355
2356 <sect3 id="lxc-seccomp" status="review">
2357 <title>Exploitable system calls</title>
2358@@ -3327,8 +2376,8 @@
2359 may be possible to reduce the number of available system calls to only
2360       a few. Even for system containers running a full distribution, security
2361 gains may be had, for instance by removing the 32-bit compatibility
2362- system calls in a 64-bit container. See <xref
2363- linkend="lxc-conf-other"/> for details of how to configure a container
2364+ system calls in a 64-bit container. See the lxc.container.conf manual
2365+ page for details of how to configure a container
2366 to use seccomp. By default, no seccomp policy is loaded.</para>
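+      <para>
+      As a sketch, a container is pointed at a seccomp policy with a
+      configuration entry such as the following (the path is hypothetical;
+      the policy file format is described in the manual page):
+<screen>
+<command>
+# /var/lib/lxc/C1/seccomp.policy is a hypothetical policy file
+lxc.seccomp = /var/lib/lxc/C1/seccomp.policy
+</command>
+</screen>
+      </para>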
2367 </sect3>
2368 </sect2>
2369@@ -3373,7 +2422,7 @@
2370 <listitem>
2371 <para>For more on namespaces in Linux, see: S. Bhattiprolu, E. W.
2372 Biederman, S. E. Hallyn, and D. Lezcano. Virtual Servers and Check-
2373- point/Restart in Mainstream Linux. SIGOPS Op- erating Systems
2374+ point/Restart in Mainstream Linux. SIGOPS Operating Systems
2375 Review, 42(5), 2008.</para>
2376 </listitem>
2377 </itemizedlist>
