Merge lp:~tsimonq2/serverguide/lxd into lp:serverguide/trunk

Proposed by Simon Quigley
Status: Merged
Approved by: Doug Smythies
Approved revision: 279
Merged at revision: 279
Proposed branch: lp:~tsimonq2/serverguide/lxd
Merge into: lp:serverguide/trunk
Diff against target: 883 lines (+874/-0)
1 file modified
serverguide/C/virtualization.xml (+874/-0)
To merge this branch: bzr merge lp:~tsimonq2/serverguide/lxd
Reviewer Review Type Date Requested Status
Doug Smythies Approve
Serge Hallyn Pending
Review via email: mp+290540@code.launchpad.net

Description of the change

This includes the LXD addition to the server guide; all credit to Serge Hallyn.

Revision history for this message
Doug Smythies (dsmythies) wrote :

Oh, thanks very much Simon. I was just going to set up to do it. I'll review it shortly (I know you asked for Serge).

Revision history for this message
Simon Quigley (tsimonq2) wrote :

That's fine, I just want to make sure Serge saw it. Merge if you want; I
just want his approval, as he took this on. :)

Revision history for this message
Doug Smythies (dsmythies) wrote :

It fails validation in a great many places, but I suspect it's all one thing.
The text within a listitem needs to be within <para> bla bla </para>, I think, but am not sure.
I'm busy with something else at the moment, but I can fix this a little later (it is mindless drone-type work).

review: Needs Fixing
Revision history for this message
Simon Quigley (tsimonq2) wrote :

Alright, sounds good. Thanks. :)

Revision history for this message
Doug Smythies (dsmythies) wrote :

+ well justified]</ulink> based on the original academic paper. It also

should be:

+ well justified</ulink> based on the original academic paper. It also

+ The LXC API deals with a 'container'. The LXD API deals with 'remotes,'

should be (I think):

+ The LXC API deals with a 'container'. The LXD API deals with 'remotes',

We will want to do entity substitution where we can. I think we'll come back and do that later, and probably not even during this cycle.

Note to self: Some of our spacing in the PDF is ridiculously large.

Revision history for this message
Doug Smythies (dsmythies) wrote :

I'm going to approve with changes on my copy that will be pushed.

review: Approve
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks!

Preview Diff

=== modified file 'serverguide/C/virtualization.xml'
--- serverguide/C/virtualization.xml 2016-03-20 21:38:40 +0000
+++ serverguide/C/virtualization.xml 2016-03-30 23:15:41 +0000
@@ -786,6 +786,880 @@
786
787 </sect1>
788
789 <sect1 id="lxd" status="review">
790 <title>LXD</title>
791
792 <para>
793 LXD (pronounced lex-dee) is the lightervisor, or lightweight container
794 hypervisor. While this claim has been controversial, it has been <ulink
795 url="http://blog.dustinkirkland.com/2015/09/container-summit-presentation-and-live.html">quite
796 well justified</ulink> based on the original academic paper. It also
797 nicely distinguishes LXD from <ulink
798 url="https://help.ubuntu.com/lts/serverguide/lxc.html">LXC</ulink>.
799 </para>
800
801 <para>
802 LXC (lex-see) is a program which creates and administers "containers" on a
803 local system. It also provides an API to allow higher level managers, such
804 as LXD, to administer containers. In a sense, one could compare LXC to
805 QEMU, while comparing LXD to libvirt.
806 </para>
807
808 <para>
809 The LXC API deals with a 'container'. The LXD API deals with 'remotes',
810 which serve images and containers. This extends the LXC functionality over
811 the network, and allows concise management of tasks like container
812 migration and container image publishing.
813 </para>
814
815 <para>
816 LXD uses LXC under the covers for some container management tasks.
817 However, it keeps its own container configuration information and has its
818 own conventions, so that it is best not to use classic LXC commands by hand
819 with LXD containers. This document will focus on how to configure and
820 administer LXD on Ubuntu systems.
821 </para>
822
823 <sect2 id="lxd-resources"> <title>Online Resources</title>
824
825 <para>
826 There is excellent documentation for <ulink url="http://github.com/lxc/lxd">getting started with LXD</ulink> in the online LXD README. There is also an online server allowing you to <ulink url="http://linuxcontainers.org/lxd/try-it">try out LXD remotely</ulink>. Stéphane Graber also has an <ulink url="https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/">excellent blog series</ulink> on LXD 2.0. Finally, there is great documentation on how to <ulink url="https://jujucharms.com/docs/devel/config-LXD">drive lxd using juju</ulink>.
827 </para>
828
829 <para>
830 This document will offer an Ubuntu Server-specific view of LXD, focusing
831 on administration.
832 </para>
833 </sect2>
834
835 <sect2 id="lxd-installation"> <title>Installation</title>
836
837 <para>
838 LXD is pre-installed on Ubuntu Server cloud images. On other systems, the lxd
839 package can be installed using:
840 </para>
841
842<screen>
843<command>
844sudo apt install lxd
845</command>
846</screen>
847
848 <para>
849 This will install LXD as well as the recommended dependencies, including the LXC
850 library and lxcfs.
851 </para>
852 </sect2>
853
854 <sect2 id="lxd-kernel-prep"> <title> Kernel preparation </title>
855
856 <para>
857 In general, Ubuntu 16.04 should have all the desired features enabled by
858 default. One exception is that, in order to enable swap
859 accounting, the boot argument <command>swapaccount=1</command> must be set. This can be
860 done by appending it to the <command>GRUB_CMDLINE_LINUX_DEFAULT=</command> variable in
861 <filename>/etc/default/grub</filename>, then running 'update-grub' as root and rebooting.
862 </para>
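 <para>
 For example, assuming no other default boot arguments are set on the system,
 the resulting line in <filename>/etc/default/grub</filename> would be:
 </para>

<screen>
GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"
</screen>

 <para>
 followed by, as root:
 </para>

<screen>
<command>
update-grub
reboot
</command>
</screen>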
863
864 </sect2>
865
866 <sect2 id="lxd-configuration"> <title> Configuration </title>
867
868 <para>
869 By default, LXD is installed listening on a local UNIX socket, which
870 members of group 'lxd' can talk to. It has no trust password set up, and
871 it uses the filesystem at <filename>/var/lib/lxd</filename> to store
872 containers. To configure LXD with different settings, use <command>lxd
873 init</command>. This will allow you to choose:
874 </para>
875
876 <itemizedlist>
877 <listitem> <para>
878 Directory or <ulink url="http://open-zfs.org">ZFS</ulink> container
879 backend. If you choose ZFS, you can choose which block devices to use,
880 or the size of a file to use as backing store.
881 </para> </listitem>
882 <listitem> <para> Availability over the network </para>
883 </listitem>
884 <listitem> <para> A 'trust password' used by remote clients to vouch for their client certificate </para>
885 </listitem>
886 </itemizedlist>
887
888 <para>
889 You must run 'lxd init' as root. 'lxc' commands can be run as any
890 user who is a member of group 'lxd'. If user joe is not a member of that group,
891 you may run:
892 </para>
893
894<screen>
895<command>
896adduser joe lxd
897</command>
898</screen>
899
900 <para>
901 as root to change it. The new membership will take effect on the next login, or after
902 running 'newgrp lxd' from an existing login.
903 </para>
904
905 <para>
906 For more information on server, container, profile, and device configuration,
907 please refer to the definitive configuration provided with the source code,
908 which can be found <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">online</ulink>.
909 </para>
910
911 </sect2>
912
913 <sect2 id="lxd-first-container"> <title> Creating your first container </title>
914
915 <para>
916 This section will describe the simplest container tasks.
917 </para>
918
919 <sect3> <title> Creating a container </title>
920
921 <para>
922 Every new container is created based on either an image, an existing container,
923 or a container snapshot. At install time, LXD is configured with the following
924 image servers:
925 </para>
926
927 <itemizedlist>
928 <listitem> <para>
929 <filename>ubuntu</filename>: this serves official Ubuntu server cloud image releases.
930 </para> </listitem>
931 <listitem> <para>
932 <filename>ubuntu-daily</filename>: this serves official Ubuntu server cloud images of the daily
933 development releases.
934 </para> </listitem>
935 <listitem> <para>
936 <filename>images</filename>: this is a default-installed alias for images.linuxcontainers.org.
937 This serves the classical LXC images, the same images which the
938 LXC 'download' template uses. This includes various distributions and
939 minimal custom-made Ubuntu images. This is not the recommended
940 server for Ubuntu images.
941 </para> </listitem>
942 </itemizedlist>
943
944 <para>
945 The command to create and start a container is
946 </para>
947
948<screen>
949<command>
950lxc launch remote:image containername
951</command>
952</screen>
953
954 <para>
955 Images are identified by their hash, but are also aliased. The 'ubuntu'
956 server knows many aliases such as '16.04' and 'xenial'. A list of all
957 images available from the Ubuntu Server can be seen using:
958 </para>
959
960<screen>
961<command>
962lxc image list ubuntu:
963</command>
964</screen>
965
966 <para>
967 To see more information about a particular image, including all the aliases it
968 is known by, you can use:
969 </para>
970
971<screen>
972<command>
973lxc image info ubuntu:xenial
974</command>
975</screen>
976
977 <para>
978 You can generally refer to an Ubuntu image using the release name ('xenial') or
979 the release number (16.04). In addition, 'lts' is an alias for the latest
980 supported LTS release. To choose a different architecture, you can specify the
981 desired architecture:
982 </para>
983
984<screen>
985<command>
986lxc image info ubuntu:lts/arm64
987</command>
988</screen>
989
990 <para>
991 Now, let's start our first container:
992 </para>
993
994<screen>
995<command>
996lxc launch ubuntu:xenial x1
997</command>
998</screen>
999
1000 <para>
1001 This will download the official current Xenial cloud image for your current
1002 architecture, then create a container using that image, and finally start it.
1003 Once the command returns, you can see it using:
1004 </para>
1005
1006<screen>
1007<command>
1008lxc list
1009lxc info x1
1010</command>
1011</screen>
1012
1013 <para>
1014 and open a shell in it using:
1015 </para>
1016
1017<screen>
1018<command>
1019lxc exec x1 bash
1020</command>
1021</screen>
1022
1023 <para>
1024 The try-it page gives a full synopsis of the commands you can use to administer
1025 containers.
1026 </para>
1027
1028 <para>
1029 Now that the 'xenial' image has been downloaded, it will be kept in sync until
1030 no new containers have been created based on it for (by default) 10 days. After
1031 that, it will be deleted.
1032 </para>
1033 </sect3>
1034 </sect2>
1035
1036 <sect2 id="lxd-server-config"> <title> LXD Server Configuration </title>
1037
1038 <para>
1039 By default, LXD is socket activated and configured to listen only on a
1040 local UNIX socket. While LXD may not be running when you first look at the
1041 process listing, any LXC command will start it up. For instance:
1042 </para>
1043
1044<screen>
1045<command>
1046lxc list
1047</command>
1048</screen>
1049
1050 <para>
1051 This will create your client certificate and contact the LXD server for a
1053 list of containers. To make the server accessible over the network, you can
1054 set the HTTPS port using:
1054 </para>
1055
1056<screen>
1057<command>
1058lxc config set core.https_address :8443
1059</command>
1060</screen>
1061
1062 <para>
1063 This will tell LXD to listen on port 8443 on all addresses.
1064 </para>
1065
1066 <sect3> <title> Authentication</title>
1067
1068 <para>
1069 By default, LXD will allow all members of group 'lxd' (which by default includes
1070 all members of group admin) to talk to it over the UNIX socket. Communication
1071 over the network is authorized using server and client certificates.
1072 </para>
1073
1074 <para>
1075 Before client c1 can use remote r1, r1 must be registered using:
1076 </para>
1077
1078<screen>
1079<command>
1080lxc remote add r1 r1.example.com:8443
1081</command>
1082</screen>
1083
1084 <para>
1085 The fingerprint of r1's certificate will be shown, to allow the user at
1086 c1 to reject a false certificate. The server in turn will verify that
1087 c1 may be trusted in one of two ways. The first is to register it in advance
1088 from any already-registered client, using:
1089 </para>
1090
1091<screen>
1092<command>
1093lxc config trust add r1 certfile.crt
1094</command>
1095</screen>
1096
1097 <para>
1098 Now when the client adds r1 as a known remote, it will not need to provide
1099 a password as it is already trusted by the server.
1100 </para>
1101
1102 <para>
1103 The other is to configure a 'trust password' with r1, either at initial
1104 configuration using 'lxd init', or after the fact using
1105 </para>
1106
1107<screen>
1108<command>
1109lxc config set core.trust_password PASSWORD
1110</command>
1111</screen>
1112
1113 <para>
1114 The password can then be provided when the client registers
1115 r1 as a known remote.
1116 </para>
1117
1118 </sect3>
1119
1120 <sect3> <title> Backing store </title>
1121
1122 <para>
1123 LXD supports several backing stores. The recommended backing store is ZFS;
1124 however, this is not available on all platforms. Supported backing stores
1125 include:
1126 </para>
1127
1128 <itemizedlist>
1129 <listitem>
1130 <para>
1131 ext4: this is the default, and easiest to use. With an ext4 backing store,
1132 containers and images are simply stored as directories on the host filesystem.
1133 Launching new containers requires copying a whole filesystem, and 10 containers
1134 will take up 10 times as much space as one container.
1135 </para>
1136 </listitem>
1137
1138 <listitem>
1139 <para>
1140 ZFS: if ZFS is supported on your architecture (amd64, arm64, or ppc64le), you
1141 can set LXD up to use it using 'lxd init'. If you already have a ZFS pool
1142 configured, you can tell LXD to use it by setting the zfs_pool_name configuration
1143 key:
1144 </para>
1145
1146<screen>
1147<command>
1148lxc config set storage.zfs_pool_name lxd
1149</command>
1150</screen>
1151
1152 <para>
1153 With ZFS, launching a new container
1154 is fast because the filesystem starts as a copy-on-write clone of the image's
1155 filesystem. Note that unless the container is privileged (see below), LXD will
1156 need to change ownership of all files before the container can start; however,
1157 this is fast and changes very little of the actual filesystem data.
1158 </para>
1159 </listitem>
1160
1161 <listitem>
1162 <para>
1163 Btrfs: Btrfs can be used with many of the same advantages as
1164 ZFS. To use Btrfs as an LXD backing store, simply mount a Btrfs
1165 filesystem under <filename>/var/lib/lxd</filename>. LXD will detect
1166 this and exploit the Btrfs subvolume feature whenever launching a new
1167 container or snapshotting a container.
1168 </para>
1169 </listitem>
1170
1171 <listitem>
1172 <para>
1173 LVM: To use an LVM volume group called 'lxd', you may tell LXD to use it
1174 for containers and images using the command:
1175 </para>
1176
1177<screen>
1178<command>
1179 lxc config set storage.lvm_vg_name lxd
1180</command>
1181</screen>
1182
1183 <para>
1184 When launching a new container, its rootfs will start as a lv clone. It is
1185 immediately mounted so that the file uids can be shifted, then unmounted.
1186 Container snapshots also are created as lv snapshots.
1187 </para>
1188 </listitem>
1189 </itemizedlist>
1190 </sect3>
1191 </sect2>
1192
1193 <sect2 id="lxd-container-config"> <title> Container configuration </title>
1194
1195 <para>
1196 Containers are configured according to a set of profiles, described in the
1197 next section, and a set of container-specific configuration. Profiles are
1198 applied first, so that container specific configuration can override profile
1199 configuration.
1200 </para>
1201
1202 <para>
1203 Container configuration includes properties like the architecture, limits
1204 on resources such as CPU and RAM, security details including apparmor
1205 restriction overrides, and devices to apply to the container.
1206 </para>
1207
1208 <para>
1209 Devices can be of several types, including UNIX character, UNIX block,
1210 network interface, or 'disk'. In order to insert a host mount into a
1211 container, a 'disk' device type would be used. For instance, to mount
1212 /opt in container c1 at /opt, you could use:
1213 </para>
1214
1215<screen>
1216<command>
1217lxc config device add c1 opt disk source=/opt path=opt
1218</command>
1219</screen>
1220
1221 <para>
1222 See:
1223 </para>
1224
1225<screen>
1226<command>
1227lxc help config
1228</command>
1229</screen>
1230
1231 <para>
1232 for more information about editing container configurations. You may
1233 also use:
1234 </para>
1235
1236<screen>
1237<command>
1238lxc config edit c1
1239</command>
1240</screen>
1241
1242 <para>
1243 to edit the whole of c1's configuration in your specified $EDITOR.
1244 Comments at the top of the configuration will show examples of
1245 correct syntax to help administrators hit the ground running. If
1246 the edited configuration is not valid when the $EDITOR is exited,
1247 then $EDITOR will be restarted.
1248 </para>
1249
1250 </sect2>
1251
1252 <sect2 id="lxd-profiles"> <title> Profiles </title>
1253
1254 <para>
1255 Profiles are named collections of configurations which may be applied
1256 to more than one container. For instance, all containers created with
1257 'lxc launch', by default, include the 'default' profile, which provides a
1258 network interface 'eth0'.
1259 </para>
1260
1261 <para>
1262 To mask a device which would be inherited from a profile but which should
1263 not be in the final container, define a device by the same name but of
1264 type 'none':
1265 </para>
1266
1267<screen>
1268<command>
1269lxc config device add c1 eth1 none
1270</command>
1271</screen>
1272
1273 </sect2>
1274 <sect2 id="lxd-nesting"> <title> Nesting </title>
1275
1276 <para>
1277 Containers all share the same host kernel. This means that there is always
1278 an inherent trade-off between features exposed to the container and host
1279 security from malicious containers. Containers by default are therefore
1280 restricted from features needed to nest child containers. In order to
1281 run lxc or lxd containers under a lxd container, the
1282 'security.nesting' feature must be set to true:
1283 </para>
1284
1285<screen>
1286<command>
1287lxc config set container1 security.nesting true
1288</command>
1289</screen>
1290
1291 <para>
1292 Once this is done, container1 will be able to start sub-containers.
1293 </para>
1294
1295 <para>
1296 In order to run unprivileged (the default in LXD) containers nested under an
1297 unprivileged container, you will need to ensure a wide enough UID mapping.
1298 Please see the 'UID mapping' section below.
1299 </para>
1300
1301 <sect3> <title> Docker </title>
1302
1303 <para>
1304 In order to facilitate running docker containers inside a LXD container,
1305 a 'docker' profile is provided. To launch a new container with the
1306 docker profile, you can run:
1307 </para>
1308
1309<screen>
1310<command>
1311lxc launch xenial container1 -p default -p docker
1312</command>
1313</screen>
1314
1315 <para>
1316 Note that currently the docker package in Ubuntu 16.04 is patched to
1317 facilitate running in a container. This support is expected to land
1318 upstream soon.
1319 </para>
1320
1321 <para>
1322 Note that 'cgroup namespace' support is also required. This is
1323 available in the 16.04 kernel as well as in the 4.6 upstream
1324 source.
1325 </para>
1326
1327 </sect3>
1328 </sect2>
1329
1330 <sect2 id="lxd-limits"> <title> Limits </title>
1331
1332 <para>
1333 LXD supports flexible constraints on the resources which containers
1334 can consume. The limits come in the following categories:
1335 </para>
1336
1337 <itemizedlist>
1338 <listitem> <para>
1339 CPU: limit CPU available to the container in several ways.
1340 </para> </listitem>
1341 <listitem> <para>
1342 Disk: configure the priority of I/O requests under load.
1343 </para> </listitem>
1344 <listitem> <para>
1345 RAM: configure memory and swap availability.
1346 </para> </listitem>
1347 <listitem> <para>
1348 Network: configure the network priority under load.
1349 </para> </listitem>
1350 <listitem> <para>
1351 Processes: limit the number of concurrent processes in the container.
1352 </para> </listitem>
1353 </itemizedlist>
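 <para>
 These limits are set through configuration keys. For example (the values
 shown are purely illustrative; see the configuration documentation for the
 full set of limits.* keys), to cap a container's memory and CPU:
 </para>

<screen>
<command>
lxc config set c1 limits.memory 512MB
lxc config set c1 limits.cpu 2
</command>
</screen>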
1354
1355 <para>
1356 For a full list of limits known to LXD, see
1357 <ulink url="https://github.com/lxc/lxd/blob/master/doc/configuration.md">
1358 the configuration documentation</ulink>.
1359 </para>
1360
1361 </sect2>
1362
1363 <sect2 id="lxd-uid"> <title> UID mappings and Privileged containers </title>
1364
1365 <para>
1366 By default, LXD creates unprivileged containers. This means that root
1367 in the container is a non-root UID on the host. It is privileged against
1368 the resources owned by the container, but unprivileged with respect to
1369 the host, making root in a container roughly equivalent to an unprivileged
1370 user on the host. (The main exception is the increased attack surface
1371 exposed through the system call interface.)
1372 </para>
1373
1374 <para>
1375 Briefly, in an unprivileged container, 65536 UIDs are 'shifted' into the
1376 container. For instance, UID 0 in the container may be 100000 on the host,
1377 UID 1 in the container is 100001, etc., up to 165535. The starting value
1378 for UIDs and GIDs, respectively, is determined by the 'root' entry in the
1379 <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename> files. (See the
1380 <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/subuid.5.html">
1381 subuid(5) manual page</ulink>.)
1382 </para>
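 <para>
 For example, with the default range described above, the 'root' entries in
 both files would look like this (the exact values on a given host may differ):
 </para>

<screen>
root:100000:65536
</screen>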
1383
1384 <para>
1385 It is possible to request a container to run without a UID mapping by
1386 setting the security.privileged flag to true:
1387 </para>
1388
1389<screen>
1390<command>
1391lxc config set c1 security.privileged true
1392</command>
1393</screen>
1394
1395 <para>
1396 Note however that in this case the root user in the container is the
1397 root user on the host.
1398 </para>
1399
1400 </sect2>
1401
1402 <sect2 id="lxd-aa"> <title> Apparmor </title>
1403
1404 <para>
1405 LXD confines containers by default with an apparmor profile which protects
1406 containers from each other and the host from containers. For instance
1407 this will prevent root in one container from signaling root in another
1408 container, even though they have the same uid mapping. It also prevents
1409 writing to dangerous, un-namespaced files such as many sysctls and
1410 <filename> /proc/sysrq-trigger</filename>.
1411 </para>
1412
1413 <para>
1414 If the apparmor policy needs to be modified for a container
1415 c1, specific apparmor policy lines can be added via the 'raw.apparmor'
1416 configuration key.
1417 </para>
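 <para>
 As a hypothetical illustration (the rule shown is an example only; any value
 supplied must be valid apparmor policy syntax), an extra rule could be added
 with:
 </para>

<screen>
<command>
lxc config set c1 raw.apparmor 'deny /sys/kernel/** rwklx,'
</command>
</screen>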
1418
1419 </sect2>
1420
1421 <sect2 id="lxd-seccomp"> <title> Seccomp </title>
1422
1423 <para>
1424 All containers are confined by a default seccomp policy. This policy
1425 prevents some dangerous actions such as forced umounts, kernel module
1426 loading and unloading, kexec, and the open_by_handle_at system call.
1427 The seccomp configuration cannot be modified, however a completely
1428 different seccomp policy - or none - can be requested using raw.lxc
1429 (see below).
1430 </para>
1431
1432 </sect2>
1433 <sect2> <title> Raw LXC configuration </title>
1434
1435 <para>
1436 LXD configures containers for the best balance of host safety and
1437 container usability. Whenever possible it is highly recommended to
1438 use the defaults, and use the LXD configuration keys to request LXD
1439 to modify as needed. Sometimes, however, it may be necessary to talk
1440 to the underlying lxc driver itself. This can be done by specifying
1441 LXC configuration items in the 'raw.lxc' LXD configuration key. These
1442 must be valid items as documented in
1443 <ulink url="http://manpages.ubuntu.com/manpages/xenial/en/man5/lxc.container.conf.5.html">
1444 the lxc.container.conf(5) manual page</ulink>.
1445 </para>
1446
1447 </sect2>
1448<!-- TODO
1449[//]: # (## Networking)
1450
1451[//]: # (Todo Once the ipv6 changes are implemented.)
1452-->
1453
1454 <sect2> <title> Images and containers </title>
1455
1456 <para>
1457LXD is image based. When you create your first container, you will
1458generally do so using an existing image. LXD comes pre-configured
1459with three default image remotes:
1460 </para>
1461
1462 <itemizedlist>
1463 <listitem> <para>
1464 ubuntu: This is a <ulink url="https://launchpad.net/simplestreams">simplestreams-based</ulink>
1465 remote serving released ubuntu cloud images.
1466 </para> </listitem>
1467
1468 <listitem> <para>
1469 ubuntu-daily: This is another simplestreams-based remote which serves
1470 'daily' ubuntu cloud images. These provide quicker but potentially less
1471 stable images.
1472 </para> </listitem>
1473
1474 <listitem> <para>
1475 images: This is a remote publishing best-effort container images for
1476 many distributions, created using community-provided build scripts.
1477 </para> </listitem>
1478 </itemizedlist>
1479
1480 <para>
1481 To view the images available on one of these servers, you can use:
1482 </para>
1483
1484<screen>
1485<command>
1486lxc image list ubuntu:
1487</command>
1488</screen>
1489
1490 <para>
1491 Most of the images are known by several aliases for easier reference. To
1492 see the full list of aliases, you can use
1493 </para>
1494
1495<screen>
1496<command>
1497lxc image alias list images:
1498</command>
1499</screen>
1500
1501 <para>
1502 Any alias or image fingerprint can be used to specify how to create the new
1503 container. For instance, to create an amd64 Ubuntu 14.04 container, some
1504 options are:
1505 </para>
1506
1507<screen>
1508<command>
1509lxc launch ubuntu:14.04 trusty1
1510lxc launch ubuntu:trusty trusty1
1511lxc launch ubuntu:trusty/amd64 trusty1
1512lxc launch ubuntu:lts trusty1
1513</command>
1514</screen>
1515
1516 <para>
1517 The 'lts' alias always refers to the latest released LTS image.
1518 </para>
1519
1520 <sect3> <title> Snapshots </title>
1521
1522 <para>
1523 Containers can be renamed and live-migrated using the 'lxc move' command:
1524 </para>
1525
1526<screen>
1527<command>
1528lxc move c1 final-beta
1529</command>
1530</screen>
1531
1532 <para>
1533 They can also be snapshotted:
1534 </para>
1535
1536<screen>
1537<command>
1538lxc snapshot c1 YYYY-MM-DD
1539</command>
1540</screen>
1541
1542 <para>
1543 Later changes to c1 can then be reverted by restoring the snapshot:
1544 </para>
1545
1546<screen>
1547<command>
1548lxc restore u1 YYYY-MM-DD
1549</command>
1550</screen>
1551
1552 <para>
1553 New containers can also be created by copying a container or snapshot:
1554 </para>
1555
1556<screen>
1557<command>
1558lxc copy u1/YYYY-MM-DD testcontainer
1559</command>
1560</screen>
1561
1562 </sect3>
1563
1564 <sect3> <title> Publishing images </title>
1565
1566 <para>
1567 When a container or container snapshot is ready for consumption by others,
1568 it can be published as a new image using:
1569 </para>
1570
1571<screen>
1572<command>
1573lxc publish u1/YYYY-MM-DD --alias foo-2.0
1574</command>
1575</screen>
1576
1577 <para>
1578 The published image will be private by default, meaning that LXD will not
1579 allow clients without a trusted certificate to see them. If the image
1580 is safe for public viewing (i.e. contains no private information), then
1581 the 'public' flag can be set, either at publish time using
1582 </para>
1583
1584<screen>
1585<command>
1586lxc publish u1/YYYY-MM-DD --alias foo-2.0 public=true
1587</command>
1588</screen>
1589
1590 <para>
1591 or after the fact using
1592 </para>
1593
1594<screen>
1595<command>
1596lxc image edit foo-2.0
1597</command>
1598</screen>
1599
1600 <para>
1601 and changing the value of the public field.
1602 </para>
1603
1604 </sect3>
1605
1606 <sect3> <title> Image export and import </title>
1607
1608 <para>
1609 Images can be exported as, and imported from, tarballs:
1610 </para>
1611
1612<screen>
1613<command>
1614lxc image export foo-2.0 foo-2.0.tar.gz
1615lxc image import foo-2.0.tar.gz --alias foo-2.0 --public
1616</command>
1617</screen>
1618
1619 </sect3>
1620 </sect2>
1621
1622 <sect2 id="lxd-troubleshooting"> <title> Troubleshooting </title>
1623
1624 <para>
1625 To view debug information about LXD itself, on a systemd-based host use:
1626 </para>
1627
1628<screen>
1629<command>
journalctl -u lxd
1631</command>
1632</screen>
1633
1634 <para>
1635 On an Upstart-based system, you can find the log in
1636 <filename>/var/log/upstart/lxd.log</filename>. To make LXD provide
1637 much more information about requests it is serving, add '--debug' to
1638 LXD's arguments. In systemd, append '--debug' to the 'ExecStart=' line
1639 in <filename>/lib/systemd/system/lxd.service</filename>. In Upstart,
1640 append it to the <command>exec /usr/bin/lxd</command> line in
1641 <filename>/etc/init/lxd.conf</filename>.
1642 </para>
1643
1644 <para>
1645 Container logfiles for container c1 may be seen using:
1646 </para>
1647
1648<screen>
1649<command>
1650lxc info c1 --show-log
1651</command>
1652</screen>
1653
1654 <para>
1655 The configuration file which was used may be found under <filename> /var/log/lxd/c1/lxc.conf</filename>
1656 while apparmor profiles can be found in <filename> /var/lib/lxd/security/apparmor/profiles/c1</filename>
1657 and seccomp profiles in <filename> /var/lib/lxd/security/seccomp/c1</filename>.
1658 </para>
1659 </sect2>
1660
1661 </sect1>
1662
1663 <sect1 id="lxc" status="review">
1664 <title>LXC</title>
1665
