Merge lp:~tsimonq2/serverguide/lxd into lp:serverguide/trunk

Proposed by Simon Quigley
Status: Merged
Approved by: Doug Smythies
Approved revision: 279
Merged at revision: 279
Proposed branch: lp:~tsimonq2/serverguide/lxd
Merge into: lp:serverguide/trunk
Diff against target: 883 lines (+874/-0)
1 file modified
serverguide/C/virtualization.xml (+874/-0)
To merge this branch: bzr merge lp:~tsimonq2/serverguide/lxd
Reviewer Review Type Date Requested Status
Doug Smythies Approve
Serge Hallyn Pending
Review via email: mp+290540@code.launchpad.net

Description of the change

This includes the LXD addition to the server guide, all credit to Serge Hallyn.

Revision history for this message
Doug Smythies (dsmythies) wrote :

Oh, thanks very much Simon. I was just getting set up to do it. I'll review it shortly (I know you asked for Serge).

Revision history for this message
Simon Quigley (tsimonq2) wrote :

That's fine, I just want to make sure Serge saw it. Merge if you want; I'd
just like his approval, as he took this on. :)

Revision history for this message
Doug Smythies (dsmythies) wrote :

It fails validation in a great many places, but I suspect it's all one thing.
The text within a <listitem> needs to be within <para> bla bla </para>, I think, but am not sure.
I am busy with something else at the moment, but I can fix this a little later (it is mindless drone-type work).

review: Needs Fixing
Revision history for this message
Simon Quigley (tsimonq2) wrote :

Alright, sounds good. Thanks. :)

Revision history for this message
Doug Smythies (dsmythies) wrote :

+ well justified]</ulink> based on the original academic paper. It also

should be:

+ well justified</ulink> based on the original academic paper. It also

+ The LXC API deals with a 'container'. The LXD API deals with 'remotes,'

should be (I think):

+ The LXC API deals with a 'container'. The LXD API deals with 'remotes',

We will want to do entity substitution where we can. I think we'll come back and do that later, and probably not even during this cycle.

Note to self: Some of our spacing in the PDF is ridiculously large.

Revision history for this message
Doug Smythies (dsmythies) wrote :

I'm going to approve with changes on my copy that will be pushed.

review: Approve
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks!

Preview Diff

1=== modified file 'serverguide/C/virtualization.xml'
2--- serverguide/C/virtualization.xml 2016-03-20 21:38:40 +0000
3+++ serverguide/C/virtualization.xml 2016-03-30 23:15:41 +0000
4@@ -786,6 +786,880 @@
5
6 </sect1>
7
8+ <sect1 id="lxd" status="review">
9+ <title>LXD</title>
10+
11+ <para>
12+ LXD (pronounced lex-dee) is the lightervisor, or lightweight container
13+ hypervisor. While this claim has been controversial, it has been <ulink
14+ url="http://blog.dustinkirkland.com/2015/09/container-summit-presentation-and-live.html">quite
15+ well justified]</ulink> based on the original academic paper. It also
16+ nicely distinguishes LXD from <ulink
17+ url="https://help.ubuntu.com/lts/serverguide/lxc.html">LXC</ulink>.
18+ </para>
19+
20+ <para>
21+ LXC (lex-see) is a program which creates and administers "containers" on a
22+ local system. It also provides an API to allow higher level managers, such
23+ as LXD, to administer containers. In a sense, one could compare LXC to
24+ QEMU, while comparing LXD to libvirt.
25+ </para>
26+
27+ <para>
28+ The LXC API deals with a 'container'. The LXD API deals with 'remotes,'
29+ which serve images and containers. This extends the LXC functionality over
30+ the network, and allows concise management of tasks like container
31+ migration and container image publishing.
32+ </para>
33+
34+ <para>
35+ LXD uses LXC under the covers for some container management tasks.
36+ However, it keeps its own container configuration information and has its
37+ own conventions, so that it is best not to use classic LXC commands by hand
38+ with LXD containers. This document will focus on how to configure and
39+ administer LXD on Ubuntu systems.
40+ </para>
41+
42+ <sect2 id="lxd-resources"> <title>Online Resources</title>
43+
44+ <para>
45+ There is excellent documentation for <ulink url="http://github.com/lxc/lxd">getting started with LXD</ulink> in the online LXD README. There is also an online server allowing you to <ulink url="http://linuxcontainers.org/lxd/try-it">try out LXD remotely</ulink>. Stéphane Graber also has an <ulink url="https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/">excellent blog series</ulink> on LXD 2.0. Finally, there is great documentation on how to <ulink url="https://jujucharms.com/docs/devel/config-LXD">drive LXD using Juju</ulink>.
46+ </para>
47+
48+ <para>
49+ This document will offer an Ubuntu Server-specific view of LXD, focusing
50+ on administration.
51+ </para>
52+ </sect2>
53+
54+ <sect2 id="lxd-installation"> <title>Installation</title>
55+
56+ <para>
57+ LXD is pre-installed on Ubuntu Server cloud images. On other systems, the lxd
58+ package can be installed using:
59+ </para>
60+
61+<screen>
62+<command>
63+sudo apt install lxd
64+</command>
65+</screen>
66+
67+ <para>
68+ This will install LXD as well as the recommended dependencies, including the LXC
69+ library and lxcfs.
70+ </para>
71+ </sect2>
72+
73+ <sect2 id="lxd-kernel-prep"> <title> Kernel preparation </title>
74+
75+ <para>
76+ In general, Ubuntu 16.04 should have all the desired features enabled by
77+ default. One exception to this is that in order to enable swap
78+ accounting the boot argument <command>swapaccount=1</command> must be set. This can be
79+ done by appending it to the <command>GRUB_CMDLINE_LINUX_DEFAULT=</command> variable in
80+ /etc/default/grub, then running 'update-grub' as root and rebooting.
81+ </para>
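+
+ <para>
+ For example, assuming no other default boot arguments are set (merge with
+ any existing ones on your system), the line in
+ <filename>/etc/default/grub</filename> would read
+ <command>GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"</command>, after which
+ you would run, as root:
+ </para>
+
+<screen>
+<command>
+update-grub
+</command>
+</screen>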
82+
83+ </sect2>
84+
85+ <sect2 id="lxd-configuration"> <title> Configuration </title>
86+
87+ <para>
88+ By default, LXD is installed listening on a local UNIX socket, which
89+ members of group 'lxd' can talk to. It has no trust password set up, and
90+ it uses the filesystem at <filename>/var/lib/lxd</filename> to store
91+ containers. To configure LXD with different settings, use <command>lxd
92+ init</command>. This will allow you to choose:
93+ </para>
94+
95+ <itemizedlist>
96+ <listitem><para>
97+ Directory or <ulink url="http://open-zfs.org">ZFS</ulink> container
98+ backend. If you choose ZFS, you can choose which block devices to use,
99+ or the size of a file to use as backing store.
100+ </para></listitem>
101+ <listitem><para> Availability over the network
102+ </para></listitem>
103+ <listitem><para> A 'trust password' used by remote clients to vouch for their client certificate
104+ </para></listitem>
105+ </itemizedlist>
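+
+ <para>
+ For example, to walk through these choices interactively:
+ </para>
+
+<screen>
+<command>
+sudo lxd init
+</command>
+</screen>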
106+
107+ <para>
108+ You must run 'lxd init' as root. 'lxc' commands can be run as any
109+ user who is a member of group 'lxd'. If user joe is not a member of group 'lxd',
110+ you may run:
111+ </para>
112+
113+<screen>
114+<command>
115+adduser joe lxd
116+</command>
117+</screen>
118+
119+ <para>
120+ as root to change it. The new membership will take effect on the next login, or after
121+ running 'newgrp lxd' from an existing login.
122+ </para>
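+
+ <para>
+ For example, to pick up the new group membership in a current shell:
+ </para>
+
+<screen>
+<command>
+newgrp lxd
+</command>
+</screen>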
123+
124+ <para>
125+ For more information on server, container, profile, and device configuration,
126+ please refer to the definitive configuration documentation provided with the
127+ source code, which can be found <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">online</ulink>.
128+ </para>
129+
130+ </sect2>
131+
132+ <sect2 id="lxd-first-container"> <title> Creating your first container </title>
133+
134+ <para>
135+ This section will describe the simplest container tasks.
136+ </para>
137+
138+ <sect3> <title> Creating a container </title>
139+
140+ <para>
141+ Every new container is created based on either an image, an existing container,
142+ or a container snapshot. At install time, LXD is configured with the following
143+ image servers:
144+ </para>
145+
146+ <itemizedlist>
147+ <listitem><para>
148+ <filename>ubuntu</filename>: this serves official Ubuntu server cloud image releases.
149+ </para></listitem>
150+ <listitem><para>
151+ <filename>ubuntu-daily</filename>: this serves official Ubuntu server cloud images of the daily
152+ development releases.
153+ </para></listitem>
154+ <listitem><para>
155+ <filename>images</filename>: this is a default-installed alias for images.linuxcontainers.org.
156+ This serves classic LXC images built using the same images which the
157+ LXC 'download' template uses. This includes various distributions and
158+ minimal custom-made Ubuntu images. This is not the recommended
159+ server for Ubuntu images.
160+ </para></listitem>
161+ </itemizedlist>
162+
163+ <para>
164+ The command to create and start a container is
165+ </para>
166+
167+<screen>
168+<command>
169+lxc launch remote:image containername
170+</command>
171+</screen>
172+
173+ <para>
174+ Images are identified by their hash, but are also aliased. The 'ubuntu'
175+ server knows many aliases such as '16.04' and 'xenial'. A list of all
176+ images available from the Ubuntu Server can be seen using:
177+ </para>
178+
179+<screen>
180+<command>
181+lxc image list ubuntu:
182+</command>
183+</screen>
184+
185+ <para>
186+ To see more information about a particular image, including all the aliases it
187+ is known by, you can use:
188+ </para>
189+
190+<screen>
191+<command>
192+lxc image info ubuntu:xenial
193+</command>
194+</screen>
195+
196+ <para>
197+ You can generally refer to an Ubuntu image using the release name ('xenial') or
198+ the release number (16.04). In addition, 'lts' is an alias for the latest
199+ supported LTS release. To choose a different architecture, you can specify
200+ the desired architecture after the release:
201+ </para>
202+
203+<screen>
204+<command>
205+lxc image info ubuntu:lts/arm64
206+</command>
207+</screen>
208+
209+ <para>
210+ Now, let's start our first container:
211+ </para>
212+
213+<screen>
214+<command>
215+lxc launch ubuntu:xenial x1
216+</command>
217+</screen>
218+
219+ <para>
220+ This will download the current official Xenial cloud image for your
221+ architecture, then create a container using that image, and finally start it.
222+ Once the command returns, you can see it using:
223+ </para>
224+
225+<screen>
226+<command>
227+lxc list
228+lxc info x1
229+</command>
230+</screen>
231+
232+ <para>
233+ and open a shell in it using:
234+ </para>
235+
236+<screen>
237+<command>
238+lxc exec x1 bash
239+</command>
240+</screen>
241+
242+ <para>
243+ The try-it page gives a full synopsis of the commands you can use to administer
244+ containers.
245+ </para>
246+
247+ <para>
248+ Now that the 'xenial' image has been downloaded, it will be kept in sync until
249+ no new containers have been created based on it for (by default) 10 days. After
250+ that, it will be deleted.
251+ </para>
252+ </sect3>
253+ </sect2>
254+
255+ <sect2 id="lxd-server-config"> <title> LXD Server Configuration </title>
256+
257+ <para>
258+ By default, LXD is socket activated and configured to listen only on a
259+ local UNIX socket. While LXD may not be running when you first look at the
260+ process listing, any LXC command will start it up. For instance:
261+ </para>
262+
263+<screen>
264+<command>
265+lxc list
266+</command>
267+</screen>
268+
269+ <para>
270+ This will create your client certificate and contact the LXD server for a
271+ list of containers. To make the server accessible over the network you can
272+ set the https port using:
273+ </para>
274+
275+<screen>
276+<command>
277+lxc config set core.https_address :8443
278+</command>
279+</screen>
280+
281+ <para>
282+ This will tell LXD to listen on port 8443 on all addresses.
283+ </para>
284+
285+ <sect3> <title> Authentication</title>
286+
287+ <para>
288+ By default, LXD will allow all members of group 'lxd' (which by default includes
289+ all members of group admin) to talk to it over the UNIX socket. Communication
290+ over the network is authorized using server and client certificates.
291+ </para>
292+
293+ <para>
294+ Before client c1 can use remote r1, r1 must be registered using:
295+ </para>
296+
297+<screen>
298+<command>
299+lxc remote add r1 r1.example.com:8443
300+</command>
301+</screen>
302+
303+ <para>
304+ The fingerprint of r1's certificate will be shown, to allow the user at
305+ c1 to reject a false certificate. The server in turn will verify that
306+ c1 may be trusted in one of two ways. The first is to register it in advance
307+ from any already-registered client, using:
308+ </para>
309+
310+<screen>
311+<command>
312+lxc config trust add r1 certfile.crt
313+</command>
314+</screen>
315+
316+ <para>
317+ Now when the client adds r1 as a known remote, it will not need to provide
318+ a password as it is already trusted by the server.
319+ </para>
320+
321+ <para>
322+ The other is to configure a 'trust password' with r1, either at initial
323+ configuration using 'lxd init', or after the fact using
324+ </para>
325+
326+<screen>
327+<command>
328+lxc config set core.trust_password PASSWORD
329+</command>
330+</screen>
331+
332+ <para>
333+ The password can then be provided when the client registers
334+ r1 as a known remote.
335+ </para>
336+
337+ </sect3>
338+
339+ <sect3> <title> Backing store </title>
340+
341+ <para>
342+LXD supports several backing stores. The recommended backing store is ZFS;
343+however, this is not available on all platforms. Supported backing stores
344+include:
345+ </para>
346+
347+ <itemizedlist>
348+ <listitem>
349+ <para>
350+ ext4: this is the default, and easiest to use. With an ext4 backing store,
351+ containers and images are simply stored as directories on the host filesystem.
352+ Launching new containers requires copying a whole filesystem, and 10 containers
353+ will take up 10 times as much space as one container.
354+ </para>
355+ </listitem>
356+
357+ <listitem>
358+ <para>
359+ ZFS: if ZFS is supported on your architecture (amd64, arm64, or ppc64le), you
360+ can set LXD up to use it using 'lxd init'. If you already have a ZFS pool
361+ configured, you can tell LXD to use it by setting the zfs_pool_name configuration
362+ key:
363+ </para>
364+
365+<screen>
366+<command>
367+lxc config set storage.zfs_pool_name lxd
368+</command>
369+</screen>
370+
371+ <para>
372+ With ZFS, launching a new container
373+ is fast because the filesystem starts as a copy-on-write clone of the image's
374+ filesystem. Note that unless the container is privileged (see below) LXD will
375+ need to change ownership of all files before the container can start; however,
376+ this is fast and changes very little of the actual filesystem data.
377+ </para>
378+ </listitem>
379+
380+ <listitem>
381+ <para>
382+ Btrfs: Btrfs can be used with many of the same advantages as
383+ ZFS. To use Btrfs as an LXD backing store, simply mount a Btrfs
384+ filesystem under <filename>/var/lib/lxd</filename>, as sketched after
385+ this list. LXD will detect this and exploit the Btrfs subvolume feature
386+ whenever launching a new container or snapshotting a container.
387+ </para>
388+ </listitem>
389+
390+ <listitem>
391+ <para>
392+ LVM: To use an LVM volume group called 'lxd', you may tell LXD to use that
393+ for containers and images using the command
394+ </para>
395+
396+<screen>
397+<command>
398+lxc config set storage.lvm_vg_name lxd
399+</command>
400+</screen>
401+
402+ <para>
403+ When launching a new container, its rootfs will start as an LV clone. It is
404+ immediately mounted so that the file uids can be shifted, then unmounted.
405+ Container snapshots are also created as LV snapshots.
406+ </para>
407+ </listitem>
408+ </itemizedlist>
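+
+ <para>
+ As a sketch of the Btrfs setup mentioned above, run as root (assuming a
+ spare block device; <filename>/dev/sdb</filename> here is hypothetical):
+ </para>
+
+<screen>
+<command>
+mkfs.btrfs /dev/sdb    # /dev/sdb is a hypothetical spare device
+mount /dev/sdb /var/lib/lxd
+</command>
+</screen>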
409+ </sect3>
410+ </sect2>
411+
412+ <sect2 id="lxd-container-config"> <title> Container configuration </title>
413+
414+ <para>
415+ Containers are configured according to a set of profiles, described in the
416+ next section, and a set of container-specific configuration. Profiles are
417+ applied first, so that container-specific configuration can override profile
418+ configuration.
419+ </para>
420+
421+ <para>
422+ Container configuration includes properties like the architecture, limits
423+ on resources such as CPU and RAM, security details including apparmor
424+ restriction overrides, and devices to apply to the container.
425+ </para>
426+
427+ <para>
428+ Devices can be of several types, including UNIX character, UNIX block,
429+ network interface, or 'disk'. In order to insert a host mount into a
430+ container, a 'disk' device type would be used. For instance, to mount
431+ /opt in container c1 at /opt, you could use:
432+ </para>
433+
434+<screen>
435+<command>
436+lxc config device add c1 opt disk source=/opt path=/opt
437+</command>
438+</screen>
439+
440+ <para>
441+ See:
442+ </para>
443+
444+<screen>
445+<command>
446+lxc help config
447+</command>
448+</screen>
449+
450+ <para>
451+ for more information about editing container configurations. You may
452+ also use:
453+ </para>
454+
455+<screen>
456+<command>
457+lxc config edit c1
458+</command>
459+</screen>
460+
461+ <para>
462+ to edit the whole of c1's configuration in your specified $EDITOR.
463+ Comments at the top of the configuration will show examples of
464+ correct syntax to help administrators hit the ground running. If
465+ the edited configuration is not valid when the $EDITOR is exited,
466+ then $EDITOR will be restarted.
467+ </para>
468+
469+ </sect2>
470+
471+ <sect2 id="lxd-profiles"> <title> Profiles </title>
472+
473+ <para>
474+ Profiles are named collections of configurations which may be applied
475+ to more than one container. For instance, all containers created with
476+ 'lxc launch', by default, include the 'default' profile, which provides a
477+ network interface 'eth0'.
478+ </para>
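+
+ <para>
+ For example, to create and edit a custom profile and then launch a
+ container using it (a sketch; 'myprofile' and 'c2' are hypothetical names):
+ </para>
+
+<screen>
+<command>
+lxc profile create myprofile    # 'myprofile' is a hypothetical name
+lxc profile edit myprofile
+lxc launch ubuntu:xenial c2 -p default -p myprofile
+</command>
+</screen>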
479+
480+ <para>
481+ To mask a device which would be inherited from a profile but which should
482+ not be in the final container, define a device by the same name but of
483+ type 'none':
484+ </para>
485+
486+<screen>
487+<command>
488+lxc config device add c1 eth1 none
489+</command>
490+</screen>
491+
492+ </sect2>
493+ <sect2 id="lxd-nesting"> <title> Nesting </title>
494+
495+ <para>
496+ Containers all share the same host kernel. This means that there is always
497+ an inherent trade-off between features exposed to the container and host
498+ security from malicious containers. Containers by default are therefore
499+ restricted from features needed to nest child containers. In order to
500+ run LXC or LXD containers under an LXD container, the
501+ 'security.nesting' feature must be set to true:
502+ </para>
503+
504+<screen>
505+<command>
506+lxc config set container1 security.nesting true
507+</command>
508+</screen>
509+
510+ <para>
511+ Once this is done, container1 will be able to start sub-containers.
512+ </para>
513+
514+ <para>
515+ In order to run unprivileged (the default in LXD) containers nested under an
516+ unprivileged container, you will need to ensure a wide enough UID mapping.
517+ Please see the 'UID mapping' section below.
518+ </para>
519+
520+ <sect3> <title> Docker </title>
521+
522+ <para>
523+ In order to facilitate running docker containers inside an LXD container,
524+ a 'docker' profile is provided. To launch a new container with the
525+ docker profile, you can run:
526+ </para>
527+
528+<screen>
529+<command>
530+lxc launch ubuntu:xenial container1 -p default -p docker
531+</command>
532+</screen>
533+
534+ <para>
535+ Note that currently the docker package in Ubuntu 16.04 is patched to
536+ facilitate running in a container. This support is expected to land
537+ upstream soon.
538+ </para>
539+
540+ <para>
541+ Note that 'cgroup namespace' support is also required. This is
542+ available in the 16.04 kernel as well as in the 4.6 upstream
543+ source.
544+ </para>
545+
546+ </sect3>
547+ </sect2>
548+
549+ <sect2 id="lxd-limits"> <title> Limits </title>
550+
551+ <para>
552+ LXD supports flexible constraints on the resources which containers
553+ can consume. The limits come in the following categories:
554+ </para>
555+
556+ <itemizedlist>
557+ <listitem><para>
558+ CPU: limit the CPU available to the container in several ways.
559+ </para></listitem>
560+ <listitem><para>
561+ Disk: configure the priority of I/O requests under load.
562+ </para></listitem>
563+ <listitem><para>
564+ RAM: configure memory and swap availability.
565+ </para></listitem>
566+ <listitem><para>
567+ Network: configure the network priority under load.
568+ </para></listitem>
569+ <listitem><para>
570+ Processes: limit the number of concurrent processes in the container.
571+ </para></listitem>
572+ </itemizedlist>
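+
+ <para>
+ For instance, to cap a container's memory and restrict it to two CPUs (a
+ sketch using the 'limits.memory' and 'limits.cpu' configuration keys):
+ </para>
+
+<screen>
+<command>
+lxc config set c1 limits.memory 256MB
+lxc config set c1 limits.cpu 2
+</command>
+</screen>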
573+
574+ <para>
575+ For a full list of limits known to LXD, see
576+ <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">
577+ the configuration documentation</ulink>.
578+ </para>
579+
580+ </sect2>
581+
582+ <sect2 id="lxd-uid"> <title> UID mappings and Privileged containers </title>
583+
584+ <para>
585+ By default, LXD creates unprivileged containers. This means that root
586+ in the container is a non-root UID on the host. It is privileged against
587+ the resources owned by the container, but unprivileged with respect to
588+ the host, making root in a container roughly equivalent to an unprivileged
589+ user on the host. (The main exception is the increased attack surface
590+ exposed through the system call interface.)
591+ </para>
592+
593+ <para>
594+ Briefly, in an unprivileged container, 65536 UIDs are 'shifted' into the
595+ container. For instance, UID 0 in the container may be 100000 on the host,
596+ UID 1 in the container is 100001, and so on, up to 165535. The starting
597+ values for UIDs and GIDs, respectively, are determined by the 'root' entry in
598+ the <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename> files. (See the
599+ <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/subuid.5.html">
600+ subuid(5) manual page</ulink>.)
601+ </para>
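+
+ <para>
+ To inspect the ranges allotted to root on your host, you can run:
+ </para>
+
+<screen>
+<command>
+grep root /etc/subuid /etc/subgid
+</command>
+</screen>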
602+
603+ <para>
604+ It is possible to request a container to run without a UID mapping by
605+ setting the security.privileged flag to true:
606+ </para>
607+
608+<screen>
609+<command>
610+lxc config set c1 security.privileged true
611+</command>
612+</screen>
613+
614+ <para>
615+ Note however that in this case the root user in the container is the
616+ root user on the host.
617+ </para>
618+
619+ </sect2>
620+
621+ <sect2 id="lxd-aa"> <title> Apparmor </title>
622+
623+ <para>
624+ LXD confines containers by default with an apparmor profile which protects
625+ containers from each other and the host from containers. For instance,
626+ this will prevent root in one container from signaling root in another
627+ container, even though they have the same uid mapping. It also prevents
628+ writing to dangerous, un-namespaced files such as many sysctls and
629+ <filename>/proc/sysrq-trigger</filename>.
630+ </para>
631+
632+ <para>
633+ If the apparmor policy needs to be modified for a container
634+ c1, specific apparmor policy lines can be added in the 'raw.apparmor'
635+ configuration key.
636+ </para>
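+
+ <para>
+ For example, to append a deny rule to c1's policy (an illustrative,
+ hypothetical rule; use lines appropriate to your workload):
+ </para>
+
+<screen>
+<command>
+lxc config set c1 raw.apparmor "deny /sys/kernel/** rwk,"    # hypothetical rule
+</command>
+</screen>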
637+
638+ </sect2>
639+
640+ <sect2 id="lxd-seccomp"> <title> Seccomp </title>
641+
642+ <para>
643+ All containers are confined by a default seccomp policy. This policy
644+ prevents some dangerous actions such as forced umounts, kernel module
645+ loading and unloading, kexec, and the open_by_handle_at system call.
646+ The seccomp configuration cannot be modified; however, a completely
647+ different seccomp policy, or none at all, can be requested using raw.lxc
648+ (see below).
649+ </para>
650+
651+ </sect2>
652+ <sect2> <title> Raw LXC configuration </title>
653+
654+ <para>
655+ LXD configures containers for the best balance of host safety and
656+ container usability. Whenever possible it is highly recommended to
657+ use the defaults, and to use the LXD configuration keys to request the
658+ needed changes. Sometimes, however, it may be necessary to talk
659+ to the underlying lxc driver itself. This can be done by specifying
660+ LXC configuration items in the 'raw.lxc' LXD configuration key. These
661+ must be valid items as documented in
662+ <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/lxc.container.conf.5.html">
663+ the lxc.container.conf(5) manual page</ulink>.
664+ </para>
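+
+ <para>
+ For example, to run c1 unconfined by apparmor (a sketch;
+ 'lxc.aa_profile' is the LXC key naming the apparmor profile to use):
+ </para>
+
+<screen>
+<command>
+lxc config set c1 raw.lxc lxc.aa_profile=unconfined
+</command>
+</screen>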
665+
666+ </sect2>
667+<!-- TODO
668+[//]: # (## Networking)
669+
670+[//]: # (Todo Once the ipv6 changes are implemented.)
671+-->
672+
673+ <sect2> <title> Images and containers </title>
674+
675+ <para>
676+LXD is image based. When you create your first container, you will
677+generally do so using an existing image. LXD comes pre-configured
678+with three default image remotes:
679+ </para>
680+
681+ <itemizedlist>
682+ <listitem><para>
683+ ubuntu: This is a <ulink url="https://launchpad.net/simplestreams">simplestreams-based</ulink>
684+ remote serving released Ubuntu cloud images.
685+ </para></listitem>
686+
687+ <listitem><para>
688+ ubuntu-daily: This is another simplestreams-based remote which serves
689+ 'daily' Ubuntu cloud images. These provide quicker but potentially less
690+ stable images.
691+ </para></listitem>
692+
693+ <listitem><para>
694+ images: This is a remote publishing best-effort container images for
695+ many distributions, created using community-provided build scripts.
696+ </para></listitem>
697+ </itemizedlist>
698+
699+ <para>
700+ To view the images available on one of these servers, you can use:
701+ </para>
702+
703+<screen>
704+<command>
705+lxc image list ubuntu:
706+</command>
707+</screen>
708+
709+ <para>
710+ Most of the images are known by several aliases for easier reference. To
711+ see the full list of aliases, you can use:
712+ </para>
713+
714+<screen>
715+<command>
716+lxc image alias list images:
717+</command>
718+</screen>
719+
720+ <para>
721+ Any alias or image fingerprint can be used to specify how to create the new
722+ container. For instance, to create an amd64 Ubuntu 14.04 container, some
723+ options are:
724+ </para>
725+
726+<screen>
727+<command>
728+lxc launch ubuntu:14.04 trusty1
729+lxc launch ubuntu:trusty trusty1
730+lxc launch ubuntu:trusty/amd64 trusty1
731+lxc launch ubuntu:lts trusty1
732+</command>
733+</screen>
734+
735+ <para>
736+ The 'lts' alias always refers to the latest released LTS image.
737+ </para>
738+
739+ <sect3> <title> Snapshots </title>
740+
741+ <para>
742+ Containers can be renamed and live-migrated using the 'lxc move' command:
743+ </para>
744+
745+<screen>
746+<command>
747+lxc move c1 final-beta
748+</command>
749+</screen>
750+
751+ <para>
752+ They can also be snapshotted:
753+ </para>
754+
755+<screen>
756+<command>
757+lxc snapshot c1 YYYY-MM-DD
758+</command>
759+</screen>
760+
761+ <para>
762+ Later changes to c1 can then be reverted by restoring the snapshot:
763+ </para>
764+
765+<screen>
766+<command>
767+lxc restore c1 YYYY-MM-DD
768+</command>
769+</screen>
770+
771+ <para>
772+ New containers can also be created by copying a container or snapshot:
773+ </para>
774+
775+<screen>
776+<command>
777+lxc copy c1/YYYY-MM-DD testcontainer
778+</command>
779+</screen>
780+
781+ </sect3>
782+
783+ <sect3> <title> Publishing images </title>
784+
785+ <para>
786+ When a container or container snapshot is ready for consumption by others,
787+ it can be published as a new image using:
788+ </para>
789+
790+<screen>
791+<command>
792+lxc publish u1/YYYY-MM-DD --alias foo-2.0
793+</command>
794+</screen>
795+
796+ <para>
797+ The published image will be private by default, meaning that LXD will not
798+ allow clients without a trusted certificate to see it. If the image
799+ is safe for public viewing (i.e. contains no private information), then
800+ the 'public' flag can be set, either at publish time using:
801+ </para>
802+
803+<screen>
804+<command>
805+lxc publish u1/YYYY-MM-DD --alias foo-2.0 public=true
806+</command>
807+</screen>
808+
809+ <para>
810+ or after the fact using:
811+ </para>
812+
813+<screen>
814+<command>
815+lxc image edit foo-2.0
816+</command>
817+</screen>
818+
819+ <para>
820+ and changing the value of the public field.
821+ </para>
822+
823+ </sect3>
824+
825+ <sect3> <title> Image export and import </title>
826+
827+ <para>
828+ Images can be exported as, and imported from, tarballs:
829+ </para>
830+
831+<screen>
832+<command>
833+lxc image export foo-2.0 foo-2.0.tar.gz
834+lxc image import foo-2.0.tar.gz --alias foo-2.0 --public
835+</command>
836+</screen>
837+
838+ </sect3>
839+ </sect2>
840+
841+ <sect2 id="lxd-troubleshooting"> <title> Troubleshooting </title>
842+
843+ <para>
844+ To view debug information about LXD itself, on a systemd-based host use:
845+ </para>
846+
847+<screen>
848+<command>
849+journalctl -u lxd
850+</command>
851+</screen>
852+
853+ <para>
854+ On an Upstart-based system, you can find the log in
855+ <filename>/var/log/upstart/lxd.log</filename>. To make LXD provide
856+ much more information about requests it is serving, add '--debug' to
857+ LXD's arguments. In systemd, append '--debug' to the 'ExecStart=' line
858+ in <filename>/lib/systemd/system/lxd.service</filename>. In Upstart,
859+ append it to the <command>exec /usr/bin/lxd</command> line in
860+ <filename>/etc/init/lxd.conf</filename>.
861+ </para>
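+
+ <para>
+ For instance, the modified ExecStart line might read as follows (a sketch;
+ the stock arguments in your <filename>lxd.service</filename> may differ):
+ </para>
+
+<screen>
+<command>
+ExecStart=/usr/bin/lxd --group lxd --debug
+</command>
+</screen>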
862+
863+ <para>
864+ Container logfiles for container c1 may be seen using:
865+ </para>
866+
867+<screen>
868+<command>
869+lxc info c1 --show-log
870+</command>
871+</screen>
872+
873+ <para>
874+ The configuration file which was used may be found under <filename>/var/log/lxd/c1/lxc.conf</filename>
875+ while apparmor profiles can be found in <filename>/var/lib/lxd/security/apparmor/profiles/c1</filename>
876+ and seccomp profiles in <filename>/var/lib/lxd/security/seccomp/c1</filename>.
877+ </para>
878+ </sect2>
879+
880+ </sect1>
881+
882 <sect1 id="lxc" status="review">
883 <title>LXC</title>
884
