Merge lp:~razique/openstack-manuals/working into lp:~annegentle/openstack-manuals/trunk

Proposed by Razique Mahroua
Status: Merged
Merged at revision: 176
Proposed branch: lp:~razique/openstack-manuals/working
Merge into: lp:~annegentle/openstack-manuals/trunk
Diff against target: 1894 lines (+1062/-501)
1 file modified
doc/source/docbkx/openstack-compute-admin/computeadmin.xml (+1062/-501)
To merge this branch: bzr merge lp:~razique/openstack-manuals/working
Reviewer Review Type Date Requested Status
Anne Gentle Approve
Review via email: mp+74369@code.launchpad.net

Description of the change

Split the section "1-8 Managing volumes" into four parts:
- Installing nova-volumes
- Configuring nova-volumes
- Troubleshooting the nova-volume setup
- Advanced tips

The section now covers the whole nova-volumes component in greater depth.
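
As a rough sketch of the workflow the reworked section walks through (the device name is
illustrative; adjust it to your layout), the "nova-volumes" volume group is created on a
dedicated disk before the nova-volume service is started:

    pvcreate /dev/sdb
    vgcreate nova-volumes /dev/sdb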

Revision history for this message
Anne Gentle (annegentle) wrote :

Thanks for doing this, it was needed! I'm bringing in just your section as there were a lot of white space changes in other areas of the document. We can talk online or via email to figure out why there were white space changes in other sections. I just fixed a few misspellings - euca-dettach-volume to euca-detach-volume, reffer to refer, attachement to attachment.

There is some confusion about nova-volume the "service" and nova-volumes the "volume group" but I think you have handled it well. I tried to spell iscsi as "iSCSI" when referring to the standard (but not for the commands, o' course).

Please let me know if you see anything incorrect in my corrections and feel free to continue to maintain the sections. We'll find out what is causing the white space differences.

review: Approve

Preview Diff

1=== modified file 'doc/source/docbkx/openstack-compute-admin/computeadmin.xml'
2--- doc/source/docbkx/openstack-compute-admin/computeadmin.xml 2011-09-01 14:09:41 +0000
3+++ doc/source/docbkx/openstack-compute-admin/computeadmin.xml 2011-09-07 09:19:27 +0000
4@@ -1,33 +1,33 @@
5 <?xml version="1.0" encoding="UTF-8"?>
6-<!DOCTYPE chapter [
7+<!DOCTYPE chapter[
8 <!-- Some useful entities borrowed from HTML -->
9-<!ENTITY ndash "&#x2013;">
10-<!ENTITY mdash "&#x2014;">
11+<!ENTITY ndash "&#x2013;">
12+<!ENTITY mdash "&#x2014;">
13 <!ENTITY hellip "&#x2026;">
14 <!ENTITY nbsp "&#160;">
15-<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
16+<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
17 <imageobject>
18 <imagedata fileref="img/Check_mark_23x20_02.svg"
19 format="SVG" scale="60"/>
20 </imageobject>
21 </inlinemediaobject>'>
22
23-<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
24+<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
25 <imageobject>
26 <imagedata fileref="img/Arrow_east.svg"
27 format="SVG" scale="60"/>
28 </imageobject>
29 </inlinemediaobject>'>
30 ]>
31-<chapter xmlns="http://docbook.org/ns/docbook"
32- xmlns:xi="http://www.w3.org/2001/XInclude"
33+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
34 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
35 <?dbhtml filename="ch_system-administration-for-openstack-compute.html" ?>
36 <title>System Administration</title>
37 <para>By understanding how the different installed nodes interact with each other you can
38 administer the OpenStack Compute installation. OpenStack Compute offers many ways to install
39- using multiple servers but the general idea is that you can have multiple compute nodes that
40- control the virtual servers and a cloud controller node that contains the remaining Nova services. </para>
41+ using multiple servers but the general idea is that you can have multiple compute nodes that
42+ control the virtual servers and a cloud controller node that contains the remaining Nova
43+ services. </para>
44 <para>The OpenStack Compute cloud works via the interaction of a series of daemon processes
45 named nova-* that reside persistently on the host machine or machines. These binaries can
46 all run on the same machine or be spread out on multiple boxes in a large deployment. The
47@@ -77,113 +77,179 @@
48 <para><literallayout class="monospaced">nova-network --network_manager=nova.network.manager.FlatManager</literallayout></para>
49 </listitem>
50 </itemizedlist>
51- <section><?dbhtml filename="starting-images.html" ?>
52- <title>Starting Images</title><para>Once you have an installation, you want to get images that you can use in your Compute cloud. We've created a basic Ubuntu image for testing your installation. First you'll download the image, then use uec-publish-tarball to publish it:</para>
53-
54- <para><literallayout class="monospaced">image="ubuntu1010-UEC-localuser-image.tar.gz"
55+ <section>
56+ <?dbhtml filename="starting-images.html" ?>
57+ <title>Starting Images</title>
58+ <para>Once you have an installation, you want to get images that you can use in your Compute
59+ cloud. We've created a basic Ubuntu image for testing your installation. First you'll
60+ download the image, then use uec-publish-tarball to publish it:</para>
61+
62+ <para><literallayout class="monospaced">image="ubuntu1010-UEC-localuser-image.tar.gz"
63 wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz
64 uec-publish-tarball $image [bucket-name] [hardware-arch]</literallayout></para>
65-
66- <para>Here's an example of what this command looks like with data:</para>
67-
68- <para><literallayout class="monospaced"> uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz dub-bucket x86_64</literallayout></para>
69-
70- <para>The command in return should output three references: emi, eri and eki. You need to use the emi value (for example, “ami-zqkyh9th″) for the euca-run-instances command.</para>
71-
72-
73- <para>Now you can schedule, launch and connect to the instance, which you do with tools from the Euca2ools on the command line. Create the emi value from the uec-publish-tarball command, and then you can use the euca-run-instances command.</para>
74- <para>One thing to note here, once you publish the tarball, it has to untar before you can launch an image from it. Using the 'euca-describe-images' command, wait until the state turns to "available" from "untarring.":</para>
75-
76- <para><literallayout class="monospaced">euca-describe-images</literallayout></para>
77-
78- <para>Depending on the image that you're using, you need a public key to connect to it. Some images have built-in accounts already created. Images can be shared by many users, so it is dangerous to put passwords into the images. Nova therefore supports injecting ssh keys into instances before they are
79- booted. This allows a user to login to the instances that he or she creates securely.
80- Generally the first thing that a user does when using the system is create a keypair.
81- Keypairs provide secure authentication to your instances. As part of the first boot of a
82- virtual image, the private key of your keypair is added to root’s authorized_keys file.
83- Nova generates a public and private key pair, and sends the private key to the user. The
84- public key is stored so that it can be injected into instances. </para>
85+
86+ <para>Here's an example of what this command looks like with data:</para>
87+
88+ <para><literallayout class="monospaced"> uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz dub-bucket x86_64</literallayout></para>
89+
90+      <para>The command should return three references: emi, eri and eki. You need to
91+ use the emi value (for example, “ami-zqkyh9th″) for the euca-run-instances
92+ command.</para>
93+
94+
95+ <para>Now you can schedule, launch and connect to the instance, which you do with tools from
96+        the Euca2ools on the command line. Take the emi value produced by the uec-publish-tarball
97+        command, and then you can use the euca-run-instances command.</para>
98+      <para>One thing to note here: once you publish the tarball, it has to untar before you can
99+        launch an image from it. Using the 'euca-describe-images' command, wait until the state
100+        turns from "untarring" to "available":</para>
101+
102+ <para><literallayout class="monospaced">euca-describe-images</literallayout></para>
103+
104+ <para>Depending on the image that you're using, you need a public key to connect to it. Some
105+ images have built-in accounts already created. Images can be shared by many users, so it
106+ is dangerous to put passwords into the images. Nova therefore supports injecting ssh
107+        keys into instances before they are booted. This allows a user to securely log in to the
108+        instances that he or she creates. Generally the first thing that a user does when using
109+ the system is create a keypair. Keypairs provide secure authentication to your
110+ instances. As part of the first boot of a virtual image, the private key of your keypair
111+ is added to root’s authorized_keys file. Nova generates a public and private key pair,
112+ and sends the private key to the user. The public key is stored so that it can be
113+ injected into instances. </para>
114 <para>Keypairs are created through the api and you use them as a parameter when launching an
115 instance. They can be created on the command line using the euca2ools script
116 euca-add-keypair. Refer to the man page for the available options. Example usage:</para>
117-
118- <literallayout class="monospaced">euca-add-keypair test > test.pem
119+
120+ <literallayout class="monospaced">euca-add-keypair test > test.pem
121 chmod 600 test.pem</literallayout>
122-
123+
124 <para>Now, you can run the instances:</para>
125- <literallayout class="monospaced">euca-run-instances -k test -t m1.tiny ami-zqkyh9th</literallayout>
126+ <literallayout class="monospaced">euca-run-instances -k test -t m1.tiny ami-zqkyh9th</literallayout>
127 <para>Here's a description of the parameters used above:</para>
128 <para>-t what type of image to create</para>
129 <para>-k name of the key to inject in to the image at launch </para>
130 <para>Optionally, you can use the -n parameter to indicate how many images of this type to
131 launch. </para>
132-
133-
134- <para>The instance will go from “launching” to “running” in a short time, and you should be able to connect via SSH using the 'ubuntu' account, with the password 'ubuntu': (replace $ipaddress with the one you got from euca-describe-instances):</para>
135-
136- <para><literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
137- <para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root'
138- via the following command:</para>
139-
140- <para><literallayout class="monospaced">sudo -i</literallayout></para>
141- </section>
142- <section>
143- <?dbhtml filename="deleting-instances.html" ?>
144- <title>Deleting Instances</title>
145-
146- <para>When you are done playing with an instance, you can tear the instance down
147- using the following command (replace $instanceid with the instance IDs from above or
148- look it up with euca-describe-instances):</para>
149-
150- <para><literallayout class="monospaced">euca-terminate-instances $instanceid</literallayout></para></section>
151+
152+
153+ <para>The instance will go from “launching” to “running” in a short time, and you should be
154+        able to connect via SSH using the 'ubuntu' account with the password 'ubuntu' (replace
155+        $ipaddress with the one you got from euca-describe-instances):</para>
156+
157+ <para><literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
158+ <para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root' via the
159+ following command:</para>
160+
161+ <para><literallayout class="monospaced">sudo -i</literallayout></para>
162+ </section>
163+ <section>
164+ <?dbhtml filename="deleting-instances.html" ?>
165+ <title>Deleting Instances</title>
166+
167+ <para>When you are done playing with an instance, you can tear the instance down using the
168+ following command (replace $instanceid with the instance IDs from above or look it up
169+ with euca-describe-instances):</para>
170+
171+ <para><literallayout class="monospaced">euca-terminate-instances $instanceid</literallayout></para>
172+ </section>
173 <section>
174 <?dbhtml filename="creating-custom-images.html" ?>
175- <info><author>
176- <orgname>CSS Corp- Open Source Services</orgname>
177- </author><title>Image management</title></info>
178- <para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link> </para>
179- <para>There are several pre-built images for OpenStack available from various sources. You can download such images and use them to get familiar with OpenStack. You can refer to <link xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html">http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link> for details on using such images.</para>
180- <para>For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or configuration. This chapter will guide you through the process of creating Linux images of Debian and Redhat based distributions from scratch. We have also covered an approach to bundling Windows images.</para>
181- <para>There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting hostname etc. The instance acquires the instance specific configuration from Nova-compute by connecting to a meta data interface running on 169.254.169.254.</para>
182- <para>While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing the keys etc. by running a set of commands at boot time from rc.local.</para>
183- <para>The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below.</para>
184-
185- <para>In both cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We are using the machine called &#8216;client1&#8242; as explained in the chapter on &#8220;Installation and Configuration&#8221; for this purpose.</para>
186- <para>The approach explained below will give you disk images that represent a disk without any partitions. Nova-compute can resize such disks ( including resizing the file system) based on the instance type chosen at the time of launching the instance. These images cannot have &#8216;bootable&#8217; flag and hence it is mandatory to have associated kernel and ramdisk images. These kernel and ramdisk images need to be used by nova-compute at the time of launching the instance.</para>
187- <para>However, we have also added a small section towards the end of the chapter about creating bootable images with multiple partitions that can be be used by nova to launch an instance without the need for kernel and ramdisk images. The caveat is that while nova-compute can re-size such disks at the time of launching the instance, the file system size is not altered and hence, for all practical purposes, such disks are not re-sizable.</para>
188- <section><?dbhtml filename="creating-a-linux-image.html" ?><title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
189-
190- <para>The first step would be to create a raw image on Client1. This will represent the main HDD of the virtual machine, so make sure to give it as much space as you will need.</para>
191+ <info>
192+ <author>
193+ <orgname>CSS Corp- Open Source Services</orgname>
194+ </author>
195+ <title>Image management</title>
196+ </info>
197+ <para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link>
198+ </para>
199+ <para>There are several pre-built images for OpenStack available from various sources. You
200+ can download such images and use them to get familiar with OpenStack. You can refer to
201+ <link
202+ xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html"
203+ >http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link>
204+ for details on using such images.</para>
205+      <para>For any production deployment, you may want the ability to bundle custom
206+ images, with a custom set of applications or configuration. This chapter will guide you
207+      through the process of creating Linux images of Debian and Red Hat-based distributions
208+ from scratch. We have also covered an approach to bundling Windows images.</para>
209+ <para>There are some minor differences in the way you would bundle a Linux image, based on
210+        the distribution. Ubuntu makes it very easy by providing the cloud-init package, which can
211+        be used to take care of the instance configuration at the time of launch. cloud-init
212+        handles importing ssh keys for password-less login, setting the hostname, etc. The instance
213+        acquires the instance-specific configuration from Nova-compute by connecting to a
214+        metadata interface running on 169.254.169.254.</para>
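
As an illustration of this interface, a running instance can fetch its configuration from
that address over the EC2-style metadata paths, for example:

    curl http://169.254.169.254/latest/meta-data/hostname
    curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
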
215+ <para>While creating the image of a distro that does not have cloud-init or an equivalent
216+ package, you may need to take care of importing the keys etc. by running a set of
217+ commands at boot time from rc.local.</para>
218+ <para>The process used for Ubuntu and Fedora is largely the same with a few minor
219+ differences, which are explained below.</para>
220+
221+ <para>In both cases, the documentation below assumes that you have a working KVM
222+ installation to use for creating the images. We are using the machine called
223+      &#8216;client1&#8217; as explained in the chapter on &#8220;Installation and
224+ Configuration&#8221; for this purpose.</para>
225+ <para>The approach explained below will give you disk images that represent a disk without
226+      any partitions. Nova-compute can resize such disks (including resizing the file system)
227+      based on the instance type chosen at the time of launching the instance. These images
228+      cannot have the &#8216;bootable&#8217; flag and hence it is mandatory to have associated
229+ kernel and ramdisk images. These kernel and ramdisk images need to be used by
230+ nova-compute at the time of launching the instance.</para>
231+ <para>However, we have also added a small section towards the end of the chapter about
232+      creating bootable images with multiple partitions that can be used by nova to launch
233+ an instance without the need for kernel and ramdisk images. The caveat is that while
234+ nova-compute can re-size such disks at the time of launching the instance, the file
235+ system size is not altered and hence, for all practical purposes, such disks are not
236+ re-sizable.</para>
237+ <section>
238+ <?dbhtml filename="creating-a-linux-image.html" ?>
239+ <title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
240+
241+ <para>The first step would be to create a raw image on Client1. This will represent the
242+ main HDD of the virtual machine, so make sure to give it as much space as you will
243+ need.</para>
244 <literallayout class="monospaced">
245
246 kvm-img create -f raw server.img 5G
247 </literallayout>
248-
249- <simplesect><title>OS Installation</title>
250- <para>Download the iso file of the Linux distribution you want installed in the image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The points of difference between Ubuntu and Fedora are mentioned wherever required.</para>
251+
252+ <simplesect>
253+ <title>OS Installation</title>
254+ <para>Download the iso file of the Linux distribution you want installed in the
255+ image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit
256+ server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The
257+ points of difference between Ubuntu and Fedora are mentioned wherever
258+ required.</para>
259 <literallayout class="monospaced">
260
261 wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
262 </literallayout>
263- <para>Boot a KVM Instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display at port 0</para>
264- <literallayout class="monospaced">
265-
266-sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
267-</literallayout>
268- <para>Connect to the VM through VNC (use display number :0) and finish the installation.</para>
269- <para>For Example, where 10.10.10.4 is the IP address of client1:</para>
270- <literallayout class="monospaced">
271-
272- vncviewer 10.10.10.4 :0
273-</literallayout>
274- <para>During the installation of Ubuntu, create a single ext4 partition mounted on &#8216;/&#8217;. Do not create a swap partition.</para>
275- <para>In the case of Fedora 14, the installation will not progress unless you create a swap partition. Please go ahead and create a swap partition.</para>
276-
277- <para>After finishing the installation, relaunch the VM by executing the following command.</para>
278+ <para>Boot a KVM Instance with the OS installer ISO in the virtual CD-ROM. This will
279+ start the installation process. The command below also sets up a VNC display at
280+            port 0.</para>
281+ <literallayout class="monospaced">
282+
283+sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
284+</literallayout>
285+ <para>Connect to the VM through VNC (use display number :0) and finish the
286+ installation.</para>
287+          <para>For example, where 10.10.10.4 is the IP address of client1:</para>
288+ <literallayout class="monospaced">
289+
290+vncviewer 10.10.10.4 :0
291+</literallayout>
292+ <para>During the installation of Ubuntu, create a single ext4 partition mounted on
293+ &#8216;/&#8217;. Do not create a swap partition.</para>
294+ <para>In the case of Fedora 14, the installation will not progress unless you create
295+ a swap partition. Please go ahead and create a swap partition.</para>
296+
297+ <para>After finishing the installation, relaunch the VM by executing the following
298+ command.</para>
299 <literallayout class="monospaced">
300 sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :0
301 </literallayout>
302- <para>At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.</para>
303+ <para>At this point, you can add all the packages you want to have installed, update
304+ the installation, add users and make any configuration changes you want in your
305+ image.</para>
306 <para>At the minimum, for Ubuntu you may run the following commands</para>
307 <literallayout class="monospaced">
308
309@@ -202,18 +268,23 @@
310
311 chkconfig sshd on
312 </literallayout>
313- <para>Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance coming up as an interface other than eth0.</para>
314+ <para>Also remove the network persistence rules from /etc/udev/rules.d as their
315+ presence will result in the network interface in the instance coming up as an
316+ interface other than eth0.</para>
317 <literallayout class="monospaced">
318
319 sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
320 </literallayout>
321 <para>Shutdown the Virtual machine and proceed with the next steps.</para>
322 </simplesect>
323- <simplesect><title>Extracting the EXT4 partition</title>
324- <para>The image that needs to be uploaded to OpenStack needs to be an ext4 filesystem image. Here are the steps to create a ext4 filesystem image from the raw image i.e server.img</para>
325+ <simplesect>
326+ <title>Extracting the EXT4 partition</title>
327+ <para>The image that needs to be uploaded to OpenStack needs to be an ext4
328+            filesystem image. Here are the steps to create an ext4 filesystem image from the
329+            raw image, i.e. server.img:</para>
330 <literallayout class="monospaced">
331
332-sudo losetup -f server.img
333+sudo losetup -f server.img
334
335 sudo losetup -a
336
337@@ -223,14 +294,15 @@
338
339 /dev/loop0: [0801]:16908388 ($filepath)
340 </literallayout>
341- <para>Observe the name of the loop device ( /dev/loop0 in our setup) when $filepath is the path to the mounted .raw file.</para>
342+          <para>Observe the name of the loop device (/dev/loop0 in our setup), where $filepath
343+            is the path to the mounted .raw file.</para>
344 <para>Now we need to find out the starting sector of the partition. Run:</para>
345 <literallayout class="monospaced">
346
347 sudo fdisk -cul /dev/loop0
348 </literallayout>
349 <para>You should see an output like this:</para>
350-
351+
352 <literallayout class="monospaced">
353
354 Disk /dev/loop0: 5368 MB, 5368709120 bytes
355@@ -245,17 +317,21 @@
356
357 Disk identifier: 0x00072bd4
358
359-Device Boot Start End Blocks Id System
360+Device Boot Start End Blocks Id System
361
362-/dev/loop0p1 * 2048 10483711 5240832 83 Linux
363+/dev/loop0p1 * 2048 10483711 5240832 83 Linux
364 </literallayout>
365- <para>Make a note of the starting sector of the /dev/loop0p1 partition i.e the partition whose ID is 83. This number should be multiplied by 512 to obtain the correct value. In this case: 2048 x 512 = 1048576</para>
366+          <para>Make a note of the starting sector of the /dev/loop0p1 partition, i.e. the
367+ partition whose ID is 83. This number should be multiplied by 512 to obtain the
368+ correct value. In this case: 2048 x 512 = 1048576</para>
369 <para>Unmount the loop0 device:</para>
370 <literallayout class="monospaced">
371
372 sudo losetup -d /dev/loop0
373 </literallayout>
374- <para>Now mount only the partition(/dev/loop0p1) of server.img which we had previously noted down, by adding the -o parameter with value previously calculated value</para>
375+          <para>Now mount only the partition (/dev/loop0p1) of server.img which we had
376+            previously noted down, by adding the -o parameter with the previously
377+            calculated value:</para>
378 <literallayout class="monospaced">
379
380 sudo losetup -f -o 1048576 server.img
381@@ -268,42 +344,53 @@
382
383 /dev/loop0: [0801]:16908388 ($filepath) offset 1048576
384 </literallayout>
385- <para>Make a note of the mount point of our device(/dev/loop0 in our setup) when $filepath is the path to the mounted .raw file.</para>
386+          <para>Make a note of the mount point of our device (/dev/loop0 in our setup), where
387+            $filepath is the path to the mounted .raw file.</para>
388 <para>Copy the entire partition to a new .raw file</para>
389 <literallayout class="monospaced">
390
391 sudo dd if=/dev/loop0 of=serverfinal.img
392 </literallayout>
393 <para>Now we have our ext4 filesystem image i.e serverfinal.img</para>
394-
395+
396 <para>Unmount the loop0 device</para>
397 <literallayout class="monospaced">
398
399 sudo losetup -d /dev/loop0
400 </literallayout>
401 </simplesect>
402- <simplesect><title>Tweaking /etc/fstab</title>
403- <para>You will need to tweak /etc/fstab to make it suitable for a cloud instance. Nova-compute may resize the disk at the time of launch of instances based on the instance type chosen. This can make the UUID of the disk invalid. Hence we have to use File system label as the identifier for the partition instead of the UUID.</para>
404+ <simplesect>
405+ <title>Tweaking /etc/fstab</title>
406+ <para>You will need to tweak /etc/fstab to make it suitable for a cloud instance.
407+ Nova-compute may resize the disk at the time of launch of instances based on the
408+ instance type chosen. This can make the UUID of the disk invalid. Hence we have
409+            to use the file system label as the identifier for the partition instead of the
410+ UUID.</para>
411 <para>Loop mount the serverfinal.img, by running</para>
412 <literallayout class="monospaced">
413
414 sudo mount -o loop serverfinal.img /mnt
415 </literallayout>
416- <para>Edit /mnt/etc/fstab and modify the line for mounting root partition(which may look like the following)</para>
417-
418+            <para>Edit /mnt/etc/fstab and modify the line for mounting the root partition (which may
419+              look like the following)</para>
420+
421 <literallayout class="monospaced">
422
423-UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
424+UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
425 </literallayout>
426 <para>to</para>
427 <literallayout class="monospaced">
428
429-LABEL=uec-rootfs / ext4 defaults 0 0
430+LABEL=uec-rootfs / ext4 defaults 0 0
431 </literallayout>
432 </simplesect>
433- <simplesect><title>Fetching Metadata in Fedora</title>
434- <para>Since, Fedora does not ship with cloud-init or an equivalent, you will need to take a few steps to have the instance fetch the meta data like ssh keys etc.</para>
435- <para>Edit the /etc/rc.local file and add the following lines before the line “touch /var/lock/subsys/local”</para>
436+ <simplesect>
437+ <title>Fetching Metadata in Fedora</title>
438+          <para>Since Fedora does not ship with cloud-init or an equivalent, you will need to
439+            take a few steps to have the instance fetch metadata such as ssh keys at boot
440+            time.</para>
441+ <para>Edit the /etc/rc.local file and add the following lines before the line “touch
442+ /var/lock/subsys/local”</para>
443 <literallayout class="monospaced">
444
445 depmod -a
446@@ -318,10 +405,14 @@
447 cat /root/.ssh/authorized_keys
448 echo &quot;************************&quot;
449 </literallayout>
450- </simplesect></section>
451- <simplesect><title>Kernel and Initrd for OpenStack</title>
452-
453- <para>Copy the kernel and the initrd image from /mnt/boot to user home directory. These will be used later for creating and uploading a complete virtual image to OpenStack.</para>
454+ </simplesect>
455+ </section>
456+ <simplesect>
457+ <title>Kernel and Initrd for OpenStack</title>
458+
459+      <para>Copy the kernel and the initrd image from /mnt/boot to the user's home directory. These
460+ will be used later for creating and uploading a complete virtual image to
461+ OpenStack.</para>
462 <literallayout class="monospaced">
463
464 sudo cp /mnt/boot/vmlinuz-2.6.38-7-server /home/localadmin
465@@ -331,348 +422,789 @@
466 <para>Unmount the Loop partition</para>
467 <literallayout class="monospaced">
468
469-sudo umount /mnt
470+sudo umount /mnt
471 </literallayout>
472 <para>Change the filesystem label of serverfinal.img to &#8216;uec-rootfs&#8217;</para>
473 <literallayout class="monospaced">
474
475 sudo tune2fs -L uec-rootfs serverfinal.img
476 </literallayout>
477- <para>Now, we have all the components of the image ready to be uploaded to OpenStack imaging server.</para>
478+        <para>Now we have all the components of the image ready to be uploaded to the OpenStack
479+          imaging server.</para>
480 </simplesect>
481- <simplesect><title>Registering with OpenStack</title>
482- <para>The last step would be to upload the images to Openstack Imaging Server glance. The files that need to be uploaded for the above sample setup of Ubuntu are: vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, serverfinal.img</para>
483+ <simplesect>
484+ <title>Registering with OpenStack</title>
485+        <para>The last step would be to upload the images to the OpenStack imaging server, Glance.
486+ The files that need to be uploaded for the above sample setup of Ubuntu are:
487+ vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, serverfinal.img</para>
488 <para>Run the following command</para>
489 <literallayout class="monospaced">
490
491 uec-publish-image -t image --kernel-file vmlinuz-2.6.38-7-server --ramdisk-file initrd.img-2.6.38-7-server amd64 serverfinal.img bucket1
492 </literallayout>
493- <para>For Fedora, the process will be similar. Make sure that you use the right kernel and initrd files extracted above.</para>
494- <para>uec-publish-image, like several other commands from euca2ools, returns the prompt back immediately. However, the upload process takes some time and the images will be usable only after the process is complete. You can keep checking the status using the command &#8216;euca-describe-images&#8217; as mentioned below.</para>
495+ <para>For Fedora, the process will be similar. Make sure that you use the right kernel
496+ and initrd files extracted above.</para>
497+ <para>uec-publish-image, like several other commands from euca2ools, returns the prompt
498+ back immediately. However, the upload process takes some time and the images will be
499+ usable only after the process is complete. You can keep checking the status using
500+ the command &#8216;euca-describe-images&#8217; as mentioned below.</para>
501 </simplesect>
502- <simplesect><title>Bootable Images</title>
503- <para>You can register bootable disk images without associating kernel and ramdisk images. When you do not want the flexibility of using the same disk image with different kernel/ramdisk images, you can go for bootable disk images. This greatly simplifies the process of bundling and registering the images. However, the caveats mentioned in the introduction to this chapter apply. Please note that the instructions below use server.img and you can skip all the cumbersome steps related to extracting the single ext4 partition.</para>
504+ <simplesect>
505+ <title>Bootable Images</title>
506+ <para>You can register bootable disk images without associating kernel and ramdisk
507+ images. When you do not want the flexibility of using the same disk image with
508+ different kernel/ramdisk images, you can go for bootable disk images. This greatly
509+ simplifies the process of bundling and registering the images. However, the caveats
510+ mentioned in the introduction to this chapter apply. Please note that the
511+ instructions below use server.img and you can skip all the cumbersome steps related
512+ to extracting the single ext4 partition.</para>
513 <literallayout class="monospaced">
514 euca-bundle-image -i server.img
515 euca-upload-bundle -b mybucket -m /tmp/server.img.manifest.xml
516 euca-register mybucket/server.img.manifest.xml
517 </literallayout>
518 </simplesect>
519- <simplesect><title>Image Listing</title>
520- <para>The status of the images that have been uploaded can be viewed by using euca-describe-images command. The output should like this:</para>
521- <literallayout class="monospaced">
522+ <simplesect>
523+ <title>Image Listing</title>
524+        <para>The status of the images that have been uploaded can be viewed by using the
525+          euca-describe-images command. The output should look like this:</para>
526+ <literallayout class="monospaced">
527
528 localadmin@client1:~$ euca-describe-images
529
530-IMAGE ari-7bfac859 bucket1/initrd.img-2.6.38-7-server.manifest.xml css available private x86_64 ramdisk
531-
532-IMAGE ami-5e17eb9d bucket1/serverfinal.img.manifest.xml css available private x86_64 machine aki-3d0aeb08 ari-7bfac859
533-
534-IMAGE aki-3d0aeb08 bucket1/vmlinuz-2.6.38-7-server.manifest.xml css available private x86_64 kernel
535+IMAGE ari-7bfac859 bucket1/initrd.img-2.6.38-7-server.manifest.xml css available private x86_64 ramdisk
536+
537+IMAGE ami-5e17eb9d bucket1/serverfinal.img.manifest.xml css available private x86_64 machine aki-3d0aeb08 ari-7bfac859
538+
539+IMAGE aki-3d0aeb08 bucket1/vmlinuz-2.6.38-7-server.manifest.xml css available private x86_64 kernel
540
541 localadmin@client1:~$
542 </literallayout>
543- </simplesect></section>
544- <section><?dbhtml filename="creating-a-windows-image.html" ?><title>Creating a Windows Image</title>
545- <para>The first step would be to create a raw image on Client1, this will represent the main HDD of the virtual machine, so make sure to give it as much space as you will need.</para>
546- <literallayout class="monospaced">
547+ </simplesect>
548+ </section>
549+ <section>
550+ <?dbhtml filename="creating-a-windows-image.html" ?>
551+ <title>Creating a Windows Image</title>
552+    <para>The first step would be to create a raw image on Client1; this will represent the main
553+ HDD of the virtual machine, so make sure to give it as much space as you will
554+ need.</para>
555+ <literallayout class="monospaced">
556 kvm-img create -f raw windowsserver.img 20G
557 </literallayout>
558- <para>OpenStack presents the disk using aVIRTIO interface while launching the instance. Hence the OS needs to have drivers for VIRTIO. By default, the Windows Server 2008 ISO does not have the drivers for VIRTIO. Sso download a virtual floppy drive containing VIRTIO drivers from the following location</para>
559- <para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/">http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
560- <para>and attach it during the installation</para>
561- <para>Start the installation by running</para>
562- <literallayout class="monospaced">
563+    <para>OpenStack presents the disk using a VIRTIO interface while launching the instance.
564+      Hence the OS needs to have drivers for VIRTIO. By default, the Windows Server 2008 ISO
565+      does not have the drivers for VIRTIO. So download a virtual floppy drive containing
566+      VIRTIO drivers from the following location:</para>
567+ <para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/"
568+ >http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
569+    <para>and attach it during the installation.</para>
570+    <para>Start the installation by running:</para>
571+ <literallayout class="monospaced">
572 sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :0
573
574 </literallayout>
575- <para>When the installation prompts you to choose a hard disk device you won’t see any devices available. Click on “Load drivers” at the bottom left and load the drivers from A:\i386\Win2008</para>
576- <para>After the Installation is over, boot into it once and install any additional applications you need to install and make any configuration changes you need to make. Also ensure that RDP is enabled as that would be the only way you can connect to a running instance of Windows. Windows firewall needs to be configured to allow incoming ICMP and RDP connections.</para>
577- <para>For OpenStack to allow incoming RDP Connections, use euca-authorize command to open up port 3389 as described in the chapter on &#8220;Security&#8221;.</para>
578- <para>Shut-down the VM and upload the image to OpenStack</para>
579- <literallayout class="monospaced">
580+ <para>When the installation prompts you to choose a hard disk device you won’t see any
581+ devices available. Click on “Load drivers” at the bottom left and load the drivers from
582+ A:\i386\Win2008</para>
583+    <para>After the installation is over, boot into it once and install any additional
584+ applications you need to install and make any configuration changes you need to make.
585+ Also ensure that RDP is enabled as that would be the only way you can connect to a
586+ running instance of Windows. Windows firewall needs to be configured to allow incoming
587+ ICMP and RDP connections.</para>
588+    <para>For OpenStack to allow incoming RDP connections, use the euca-authorize command to open up
589+      port 3389 as described in the chapter on &#8220;Security&#8221;.</para>
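
A sketch of the euca-authorize calls this refers to, assuming the default security group
(the second rule opens ICMP so the instance answers ping):

    euca-authorize -P tcp -p 3389 -s 0.0.0.0/0 default
    euca-authorize -P icmp -t -1:-1 default
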
590+    <para>Shut down the VM and upload the image to OpenStack:</para>
591+ <literallayout class="monospaced">
592 euca-bundle-image -i windowsserver.img
593 euca-upload-bundle -b mybucket -m /tmp/windowsserver.img.manifest.xml
594 euca-register mybucket/windowsserver.img.manifest.xml
595 </literallayout>
596- </section>
597+ </section>
598 <section>
599 <?dbhtml filename="understanding-the-compute-service-architecture.html" ?>
600 <title>Understanding the Compute Service Architecture</title>
601- <para>These basic categories describe the service architecture and what's going on within the cloud controller.</para>
602- <simplesect><title>API Server</title>
603-
604- <para>At the heart of the cloud framework is an API Server. This API Server makes command and control of the hypervisor, storage, and networking programmatically available to users in realization of the definition of cloud computing.
605- </para>
606- <para>The API endpoints are basic http web services which handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in.
607- </para> </simplesect>
608- <simplesect><title>Message Queue</title>
609- <para>
610- A messaging queue brokers the interaction between compute nodes (processing), volumes (block storage), the networking controllers (software which controls network infrastructure), API endpoints, the scheduler (determines which physical hardware to allocate to a virtual resource), and similar components. Communication to and from the cloud controller is by HTTP requests through multiple API endpoints.</para>
611-
612-<para> A typical message passing event begins with the API server receiving a request from a user. The API server authenticates the user and ensures that the user is permitted to issue the subject command. Availability of objects implicated in the request is evaluated and, if available, the request is routed to the queuing engine for the relevant workers. Workers continually listen to the queue based on their role, and occasionally their type hostname. When such listening produces a work request, the worker takes assignment of the task and begins its execution. Upon completion, a response is dispatched to the queue which is received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary throughout the process.
613-</para>
614-</simplesect>
615- <simplesect><title>Compute Worker</title>
616-
617- <para>Compute workers manage computing instances on host machines. Through the API, commands are dispatched to compute workers to:</para>
618-
619- <itemizedlist>
620- <listitem><para>Run instances</para></listitem>
621- <listitem><para>Terminate instances</para></listitem>
622- <listitem><para>Reboot instances</para></listitem>
623- <listitem><para>Attach volumes</para></listitem>
624- <listitem><para>Detach volumes</para></listitem>
625- <listitem><para>Get console output</para></listitem></itemizedlist>
626- </simplesect>
627- <simplesect><title>Network Controller</title>
628-
629- <para>The Network Controller manages the networking resources on host machines. The API server dispatches commands through the message queue, which are subsequently processed by Network Controllers. Specific operations include:</para>
630-
631- <itemizedlist><listitem><para>Allocate fixed IP addresses</para></listitem>
632- <listitem><para>Configuring VLANs for projects</para></listitem>
633- <listitem><para>Configuring networks for compute nodes</para></listitem></itemizedlist>
634- </simplesect>
635-<simplesect><title>Volume Workers</title>
636-
637- <para>Volume Workers interact with iSCSI storage to manage LVM-based instance volumes. Specific functions include:
638- </para>
639- <itemizedlist>
640- <listitem><para>Create volumes</para></listitem>
641- <listitem><para>Delete volumes</para></listitem>
642- <listitem><para>Establish Compute volumes</para></listitem></itemizedlist>
643-
644- <para>Volumes may easily be transferred between instances, but may be attached to only a single instance at a time.</para></simplesect></section>
645+ <para>These basic categories describe the service architecture and what's going on within
646+ the cloud controller.</para>
647+ <simplesect>
648+ <title>API Server</title>
649+
650+ <para>At the heart of the cloud framework is an API Server. This API Server makes
651+ command and control of the hypervisor, storage, and networking programmatically
652+ available to users in realization of the definition of cloud computing. </para>
653+ <para>The API endpoints are basic http web services which handle authentication,
654+ authorization, and basic command and control functions using various API interfaces
655+ under the Amazon, Rackspace, and related models. This enables API compatibility with
656+ multiple existing tool sets created for interaction with offerings from other
657+ vendors. This broad compatibility prevents vendor lock-in. </para>
658+ </simplesect>
659+ <simplesect>
660+ <title>Message Queue</title>
661+ <para> A messaging queue brokers the interaction between compute nodes (processing),
662+ volumes (block storage), the networking controllers (software which controls network
663+ infrastructure), API endpoints, the scheduler (determines which physical hardware to
664+ allocate to a virtual resource), and similar components. Communication to and from
665+ the cloud controller is by HTTP requests through multiple API endpoints.</para>
666+
667+ <para> A typical message passing event begins with the API server receiving a request
668+ from a user. The API server authenticates the user and ensures that the user is
669+ permitted to issue the subject command. Availability of objects implicated in the
670+ request is evaluated and, if available, the request is routed to the queuing engine
671+ for the relevant workers. Workers continually listen to the queue based on their
672+                role, and occasionally their type and hostname. When such listening produces a work
673+ request, the worker takes assignment of the task and begins its execution. Upon
674+ completion, a response is dispatched to the queue which is received by the API
675+ server and relayed to the originating user. Database entries are queried, added, or
676+ removed as necessary throughout the process. </para>
677+ </simplesect>
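
Since the components talk through a RabbitMQ broker by default, a quick way to observe this
message flow is to list the queues the workers subscribe to on the cloud controller:

    sudo rabbitmqctl list_queues
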
678+ <simplesect>
679+ <title>Compute Worker</title>
680+
681+ <para>Compute workers manage computing instances on host machines. Through the API,
682+ commands are dispatched to compute workers to:</para>
683+
684+ <itemizedlist>
685+ <listitem>
686+ <para>Run instances</para>
687+ </listitem>
688+ <listitem>
689+ <para>Terminate instances</para>
690+ </listitem>
691+ <listitem>
692+ <para>Reboot instances</para>
693+ </listitem>
694+ <listitem>
695+ <para>Attach volumes</para>
696+ </listitem>
697+ <listitem>
698+ <para>Detach volumes</para>
699+ </listitem>
700+ <listitem>
701+ <para>Get console output</para>
702+ </listitem>
703+ </itemizedlist>
704+ </simplesect>
705+ <simplesect>
706+ <title>Network Controller</title>
707+
708+ <para>The Network Controller manages the networking resources on host machines. The API
709+ server dispatches commands through the message queue, which are subsequently
710+ processed by Network Controllers. Specific operations include:</para>
711+
712+ <itemizedlist>
713+ <listitem>
714+ <para>Allocate fixed IP addresses</para>
715+ </listitem>
716+ <listitem>
717+ <para>Configuring VLANs for projects</para>
718+ </listitem>
719+ <listitem>
720+ <para>Configuring networks for compute nodes</para>
721+ </listitem>
722+ </itemizedlist>
723+ </simplesect>
724+ <simplesect>
725+ <title>Volume Workers</title>
726+
727+ <para>Volume Workers interact with iSCSI storage to manage LVM-based instance volumes.
728+ Specific functions include: </para>
729+ <itemizedlist>
730+ <listitem>
731+ <para>Create volumes</para>
732+ </listitem>
733+ <listitem>
734+ <para>Delete volumes</para>
735+ </listitem>
736+ <listitem>
737+ <para>Establish Compute volumes</para>
738+ </listitem>
739+ </itemizedlist>
740+
741+ <para>Volumes may easily be transferred between instances, but may be attached to only a
742+ single instance at a time.</para>
743+ </simplesect>
744+ </section>
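
The volume functions listed above surface through the usual euca2ools commands, for example
(the size, zone, IDs and device name are illustrative):

    euca-create-volume -s 10 -z nova
    euca-attach-volume -i i-00000001 -d /dev/vdb vol-00000001
    euca-detach-volume vol-00000001
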
745 <section>
746 <?dbhtml filename="managing-the-cloud.html" ?>
747- <title>Managing the Cloud</title><para>There are two main tools that a system administrator will find useful to manage their cloud;
748- the nova-manage command or the Euca2ools command line commands. </para>
749- <para>With the Diablo release, the nova-manage command has been deprecated and you must
750- specify if you want to use it by using the --use_deprecated_auth flag in nova.conf. You
751- must also use the modified middleware stack that is commented out in the default
752- paste.ini file.</para>
753- <para>The nova-manage command may only be run by users with admin privileges. Commands for
754+ <title>Managing the Cloud</title>
755+    <para>There are two main tools that a system administrator will find useful to manage their
756+      cloud: the nova-manage command and the euca2ools command-line tools.</para>
757+ <para> The nova-manage command may only be run by users with admin privileges. Commands for
758 euca2ools can be used by all users, though specific commands may be restricted by Role
759- Based Access Control in the deprecated nova auth system. </para>
760- <simplesect><title>Using the nova-manage command</title>
761- <para>The nova-manage command may be used to perform many essential functions for
762+ Based Access Control. </para>
763+ <simplesect>
764+ <title>Using the nova-manage command</title>
765+ <para>The nova-manage command is used to perform many essential functions for
766 administration and ongoing maintenance of nova, such as user creation, vpn
767 management, and much more.</para>
768-
769- <para>The standard pattern for executing a nova-manage command is: </para>
770+
771+ <para>The standard pattern for executing a nova-manage command is: </para>
772 <literallayout class="monospaced">nova-manage category command [args]</literallayout>
773-
774+
775 <para>For example, to obtain a list of all projects: nova-manage project list</para>
776-
777- <para>Run without arguments to see a list of available command categories: nova-manage</para>
778-
779- <para>Command categories are: account, agent, config, db, fixed, flavor, floating, host,
780- instance_type, image, network, project, role, service, shell, user, version, vm,
781- volume, and vpn. </para>
782- <para>You can also run with a category argument such as user to see a list of all commands in that category: nova-manage user</para>
783- </simplesect></section>
784+
785+ <para>Run without arguments to see a list of available command categories:
786+ nova-manage</para>
787+
788+ <para>Command categories are: user, project, role, shell, vpn, and floating. </para>
789+ <para>You can also run with a category argument such as user to see a list of all
790+ commands in that category: nova-manage user</para>
791+ </simplesect>
792+ </section>
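
For example, the user and project categories each provide a list command:

    nova-manage user list
    nova-manage project list
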
793 <section>
794 <?dbhtml filename="managing-compute-users.html" ?>
795 <title>Managing Compute Users</title>
796-
797- <para>Access to the Euca2ools (ec2) API is controlled by an access and secret key. The
798- user’s access key needs to be included in the request, and the request must be
799- signed with the secret key. Upon receipt of API requests, Compute will verify the
800- signature and execute commands on behalf of the user. </para>
801- <para>In order to begin using nova, you will need to create a user. This can be easily
802+
803+ <para>Access to the Euca2ools (ec2) API is controlled by an access and secret key. The
804+ user’s access key needs to be included in the request, and the request must be signed
805+ with the secret key. Upon receipt of API requests, Compute will verify the signature and
806+ execute commands on behalf of the user. </para>
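
In practice, euca2ools read these keys from environment variables, typically exported by the
novarc file described below (the values here are illustrative):

    export EC2_ACCESS_KEY=my-access-key
    export EC2_SECRET_KEY=a-super-secret-key
    export EC2_URL=http://localhost:8773/services/Cloud
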
807+ <para>In order to begin using nova, you will need to create a user. This can be easily
808 accomplished using the user create or user admin commands in nova-manage. user create
809 will create a regular user, whereas user admin will create an admin user. The syntax of
810- the command is nova-manage user create username [access] [secretword]. For example: </para>
811- <literallayout class="monospaced">nova-manage user create john my-access-key a-super-secret-key</literallayout>
812- <para>If you do not specify an access or secret key, a random uuid will be created
813- automatically.</para>
814-
815- <simplesect><title>Credentials</title>
816-
817- <para>Nova can generate a handy set of credentials for a user. These credentials include a CA for bundling images and a file for setting environment variables to be used by euca2ools. If you don’t need to bundle images, just the environment script is required. You can export one with the project environment command. The syntax of the command is nova-manage project environment project_id user_id [filename]. If you don’t specify a filename, it will be exported as novarc. After generating the file, you can simply source it in bash to add the variables to your environment:</para>
818-
819- <literallayout class="monospaced">
820- nova-manage project environment john_project john
821- . novarc</literallayout>
822-
823- <para>If you do need to bundle images, you will need to get all of the credentials using project zipfile. Note that zipfile will give you an error message if networks haven’t been created yet. Otherwise zipfile has the same syntax as environment, only the default file name is nova.zip. Example usage:
824- </para>
825- <literallayout class="monospaced">
826- nova-manage project zipfile john_project john
827- unzip nova.zip
828- . novarc
829- </literallayout></simplesect>
830- <simplesect><title>Role Based Access Control</title>
831-
832- <para>Roles control the API actions that a user is allowed to perform. For example, a user
833- cannot allocate a public ip without the netadmin role. It is important to remember
834- that a users de facto permissions in a project is the intersection of user (global)
835- roles and project (local) roles. So for john to have netadmin permissions in his
836- project, he needs to separate roles specified. You can add roles with role add. The
837- syntax is nova-manage role add user_id role [project_id]. Let’s give john the
838- netadmin role for his project:</para>
839-
840- <literallayout class="monospaced"> nova-manage role add john netadmin
841- nova-manage role add john netadmin john_project</literallayout>
842-
843- <para>Role-based access control (RBAC) is an approach to restricting system access to authorized users based on an individual's role within an organization. Various employee functions require certain levels of system access in order to be successful. These functions are mapped to defined roles and individuals are categorized accordingly. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of assigning appropriate roles to the user. This simplifies common operations, such as adding a user, or changing a user’s department.
844- </para>
845- <para>Nova’s rights management system employs the RBAC model and currently supports the following five roles:</para>
846-
847- <itemizedlist>
848- <listitem><para>Cloud Administrator. (cloudadmin) Users of this class enjoy complete system access.</para></listitem>
849- <listitem><para>IT Security. (itsec) This role is limited to IT security personnel. It permits role holders to quarantine instances.</para></listitem>
850- <listitem><para>System Administrator. (sysadmin)The default for project owners, this role affords users the ability to add other users to a project, interact with project images, and launch and terminate instances.</para></listitem>
851- <listitem><para>Network Administrator. (netadmin) Users with this role are permitted to allocate and assign publicly accessible IP addresses as well as create and modify firewall rules.</para></listitem>
852- <listitem><para>Developer. This is a general purpose role that is assigned to users by default.</para></listitem>
853- <listitem><para>Project Manager. (projectmanager) This is a role that is assigned upon project creation and can't be added or removed, but this role can do anything a sysadmin can do.</para></listitem></itemizedlist>
854-
855- <para>RBAC management is exposed through the dashboard for simplified user management.</para></simplesect></section>
856+ the command is nova-manage user create username [access] [secret]. For example: </para>
857+ <literallayout class="monospaced">nova-manage user create john my-access-key a-super-secret-key</literallayout>
858+ <para>If you do not specify an access or secret key, a random uuid will be created
859+ automatically.</para>
860+
861+ <simplesect>
862+ <title>Credentials</title>
863+
864+ <para>Nova can generate a handy set of credentials for a user. These credentials include
865+ a CA for bundling images and a file for setting environment variables to be used by
866+ euca2ools. If you don’t need to bundle images, just the environment script is
867+ required. You can export one with the project environment command. The syntax of the
868+ command is nova-manage project environment project_id user_id [filename]. If you
869+ don’t specify a filename, it will be exported as novarc. After generating the file,
870+ you can simply source it in bash to add the variables to your environment:</para>
871+
872+ <literallayout class="monospaced">
873+nova-manage project environment john_project john
874+. novarc</literallayout>
875+
876+ <para>If you do need to bundle images, you will need to get all of the credentials using
877+ project zipfile. Note that zipfile will give you an error message if networks
878+ haven’t been created yet. Otherwise zipfile has the same syntax as environment, only
879+ the default file name is nova.zip. Example usage: </para>
880+ <literallayout class="monospaced">
881+nova-manage project zipfile john_project john
882+unzip nova.zip
883+. novarc
884+</literallayout>
885+ </simplesect>
886+ <simplesect>
887+ <title>Role Based Access Control</title>
888+
889+                 <para>Roles control the API actions that a user is allowed to perform. For example, a
890+                     user cannot allocate a public IP without the netadmin role. It is important to
891+                     remember that a user's de facto permissions in a project are the intersection of user
892+                     (global) roles and project (local) roles. So for john to have netadmin permissions
893+                     in his project, he needs both roles specified separately. You can add roles with role
894+                     add. The syntax is nova-manage role add user_id role [project_id]. Let’s give john
895+                     the netadmin role for his project:</para>
896+
897+ <literallayout class="monospaced"> nova-manage role add john netadmin
898+nova-manage role add john netadmin john_project</literallayout>
899+
900+ <para>Role-based access control (RBAC) is an approach to restricting system access to
901+ authorized users based on an individual's role within an organization. Various
902+ employee functions require certain levels of system access in order to be
903+ successful. These functions are mapped to defined roles and individuals are
904+ categorized accordingly. Since users are not assigned permissions directly, but only
905+ acquire them through their role (or roles), management of individual user rights
906+ becomes a matter of assigning appropriate roles to the user. This simplifies common
907+ operations, such as adding a user, or changing a user’s department. </para>
908+                 <para>Nova’s rights management system employs the RBAC model and currently supports the
909+                     following roles:</para>
910+ <itemizedlist>
911+ <listitem>
912+ <para>Cloud Administrator. (cloudadmin) Users of this class enjoy complete
913+ system access.</para>
914+ </listitem>
915+ <listitem>
916+ <para>IT Security. (itsec) This role is limited to IT security personnel. It
917+ permits role holders to quarantine instances.</para>
918+ </listitem>
919+ <listitem>
920+                         <para>System Administrator. (sysadmin) The default for project owners, this role
921+ affords users the ability to add other users to a project, interact with
922+ project images, and launch and terminate instances.</para>
923+ </listitem>
924+ <listitem>
925+ <para>Network Administrator. (netadmin) Users with this role are permitted to
926+ allocate and assign publicly accessible IP addresses as well as create and
927+ modify firewall rules.</para>
928+ </listitem>
929+ <listitem>
930+ <para>Developer. This is a general purpose role that is assigned to users by
931+ default.</para>
932+ </listitem>
933+ <listitem>
934+ <para>Project Manager. (projectmanager) This is a role that is assigned upon
935+ project creation and can't be added or removed, but this role can do
936+ anything a sysadmin can do.</para>
937+ </listitem>
938+ </itemizedlist>
939+
940+ <para>RBAC management is exposed through the dashboard for simplified user
941+ management.</para>
942+ </simplesect>
943+ </section>
944 <section>
945 <?dbhtml filename="managing-volumes.html" ?>
946- <title>Managing Volumes</title><para>Nova-volume is the service that allows you to give extra block level storage to your OpenStack
947- Compute instances. You may recognize this as a similar offering that Amazon EC2 offers,
948- Elastic Block Storage (EBS). However, nova-volume is not the same implementation that
949- EC2 uses today. Nova-volume is an iSCSI solution that employs the use of Logical Volume
950- Manager (LVM) for Linux. Note that a volume may only be attached to one instance at a
951- time. This is not a ‘shared storage’ solution like a SAN which multiple servers can
952- attach to.</para>
953+ <title>Managing Volumes</title>
954+         <para>Nova-volume is the service that allows you to give extra block-level storage to your
955+             OpenStack Compute instances. You may recognize this as similar to an offering from
956+             Amazon EC2, Elastic Block Storage (EBS). However, nova-volume is not the same
957+             implementation that EC2 uses today. Nova-volume is an iSCSI solution that employs
958+             Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached
959+             to one instance at a time. This is not a ‘shared storage’ solution like a SAN or NFS,
960+             to which multiple servers can attach.</para>
961+         <para>Before going any further, let's look at how nova-volume is implemented in
962+             OpenStack.</para>
963+         <para>The nova-volume service exposes LVM volumes via iSCSI to the compute nodes that run
964+             instances. Thus, there are two components involved: </para>
965+         <para>- lvm2, which works with a VG called "nova-volumes" (refer to
966+             http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) for further details) </para>
967+         <para>- open-iscsi, the iSCSI implementation which manages iSCSI sessions on the compute
968+             nodes. </para>
969+         <para>Here is what happens from the volume creation to its attachment (we use the
970+             euca2ools here, but the same explanation applies to the API): </para>
971+ <orderedlist>
972+ <listitem>
973+                 <para>The volume is created via euca-create-volume, which creates an LV in the VG
974+                     "nova-volumes". </para>
975+             </listitem>
976+             <listitem>
977+                 <para>The volume is attached to an instance via euca-attach-volume, which creates a
978+                     unique iSCSI IQN that will be exposed to the compute node. </para>
979+             </listitem>
980+             <listitem>
981+                 <para>The compute node which runs the concerned instance now has an active iSCSI
982+                     session and new local storage (usually a /dev/sdX disk). </para>
983+             </listitem>
984+             <listitem>
985+                 <para>libvirt uses that local storage as storage for the instance; the instance
986+                     gets a new disk (usually a /dev/vdX disk). </para>
987+ </listitem>
988+ </orderedlist>
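+        <para>As an illustration of step 1, once a volume has been created you can see the
+            corresponding LV in the "nova-volumes" VG on the server running nova-volume (the
+            volume name and size below are illustrative):</para>
+        <programlisting>
+lvs nova-volumes
+  LV              VG           Attr   LSize
+  volume-0000000b nova-volumes -wi-ao 7.00g
+</programlisting>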
989 <para>For this particular walkthrough, there is one cloud controller running nova-api,
990- nova-compute, nova-scheduler, nova-objectstore, and nova-network. There are two
991- additional compute nodes running both nova-compute and nova-volume. The walkthrough uses
992- a custom partitioning scheme that carves out 60GB of space and labels it as LVM. The
993- network is a /28 .80-.95, and FlatManger is the NetworkManager setting for OpenStack
994- Compute (Nova). </para>
995+             nova-compute, nova-scheduler, nova-objectstore, nova-network and nova-volume. There are
996+ two additional compute nodes running nova-compute. The walkthrough uses a custom
997+ partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a
998+ /28 .80-.95, and FlatManger is the NetworkManager setting for OpenStack Compute (Nova). </para>
999+         <para>Please note that the network mode doesn't interfere with the way nova-volume works,
1000+            but the network mode you are currently using must be set up for nova-volume to work.
1001+            Please refer to Section 7, "Networking", for more details.</para>
1002 <para>To set up Compute to use volumes, ensure that nova-volume is installed along with
1003- lvm2. </para>
1004+            lvm2. The guide will be split into four parts: </para>
1005 <para>
1006- <literallayout class="monospaced">apt-get install lvm2 nova-volume</literallayout>
1007+ <itemizedlist>
1008+ <listitem>
1009+ <para>A- Installing nova-volumes on the cloud controller.</para>
1010+ </listitem>
1011+ <listitem>
1012+ <para>B- Configuring nova-volumes on the compute nodes.</para>
1013+
1014+ </listitem>
1015+ <listitem>
1016+ <para>C- Troubleshoot your nova-volumes installation.</para>
1017+ </listitem>
1018+ <listitem>
1019+                    <para>D- Advanced tips: Disaster Recovery Process, back up your nova-volumes,
1020+                        browse your nova-volumes from the cloud-controller </para>
1021+ </listitem>
1022+ </itemizedlist>
1023 </para>
1024- <simplesect><title>Configure Volumes for use with nova-volume</title>
1025- <para>If you do not already have LVM volumes on hand, but have free drive space, you
1026- will need to create a LVM volume before proceeding.</para>
1027- <para>Here is a short run down of how you would create a LVM from free drive space on your system.</para>
1028- <para>Start off by issuing an fdisk command to your drive with the free space:</para>
1029- <para>
1030- <literallayout class="monospaced">fdisk /dev/sda</literallayout></para>
1031- <para>Once in fdisk, perform the following commands:</para>
1032- <orderedlist>
1033- <listitem><para>Press ‘<code>n'</code> to create a new disk partition,</para></listitem>
1034- <listitem><para>Press <code>'p'</code> to create a primary disk partition,</para></listitem>
1035- <listitem><para>Press <code>'1'</code> to denote it as 1st disk partition,</para></listitem>
1036-
1037- <listitem><para>Either press ENTER twice to accept the default of 1st and last cylinder – to convert the remainder of hard disk to a single disk partition
1038- -OR-
1039- press ENTER once to accept the default of the 1st, and then choose how big you want the partition to be by specifying +size{K,M,G} e.g. +5G or +6700M.</para></listitem>
1040- <listitem><para>Press <code>'t', then</code> select the new partition you made.</para></listitem>
1041-
1042- <listitem><para>Press <code>'8e'</code> change your new partition to 8e, i.e. Linux LVM partition type.</para></listitem>
1043- <listitem><para>Press ‘<code>p'</code> to display the hard disk partition setup. Please take note that the first partition is denoted as /dev/sda1 in Linux.</para></listitem>
1044- <listitem><para>Press <code>'w'</code> to write the partition table and exit fdisk upon completion.</para></listitem>
1045- </orderedlist>
1046- <para>Refresh your partition table to ensure your new partition shows up, and verify
1047- with fdisk.</para>
1048-
1049- <para><literallayout class="monospaced">partprobe
1050-fdisk -l (you should see your new partition in this listing)</literallayout></para>
1051- <para>Here is how you can set up partitioning during the OS install to prepare for this
1052- nova-volume configuration:</para>
1053- <para>root@osdemo03:~# fdisk -l</para>
1054- <para><literallayout class="monospaced">
1055- Device Boot&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Start&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; End&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Blocks&nbsp;&nbsp; Id&nbsp; System
1056-
1057- /dev/sda1&nbsp;&nbsp;&nbsp; * &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12158&nbsp;&nbsp;&nbsp; 97280&nbsp;&nbsp; 83&nbsp; Linux
1058- /dev/sda2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12158&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24316&nbsp;&nbsp;&nbsp; 97655808&nbsp;&nbsp; 83&nbsp; Linux
1059-
1060- /dev/sda3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24316&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp; 97654784&nbsp;&nbsp;&nbsp;&nbsp; 83&nbsp; Linux
1061- /dev/sda4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 42443&nbsp;&nbsp; 145507329&nbsp;&nbsp;&nbsp; 5&nbsp; Extended
1062-
1063- /dev/sda5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 32352&nbsp;&nbsp;&nbsp; 64452608&nbsp;&nbsp; 8e&nbsp; Linux LVM
1064- /dev/sda6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 32352&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 40497&nbsp;&nbsp;&nbsp; 65428480&nbsp;&nbsp; 8e&nbsp; Linux LVM
1065-
1066- /dev/sda7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 40498&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 42443&nbsp;&nbsp;&nbsp; 15624192&nbsp;&nbsp; 82&nbsp; Linux swap / Solaris
1067-</literallayout></para>
1068- <para>Now that you have identified a partition has been labeled for LVM use, perform the
1069- following steps to configure LVM and prepare it as nova-volume. You must name your
1070- volume group ‘nova-volumes’ or things will not work as expected:</para>
1071- <literallayout class="monospaced">
1072- pvcreate /dev/sda5
1073- vgcreate nova-volumes /dev/sda5 </literallayout></simplesect><simplesect><title>Configure iscsitarget</title> <para>If you have a multinode installation of Compute, you may want nova-volume on the same node as nova-compute, although it is not required.</para><para>By default, when the ‘iscsitarget’ package is installed, it is not started, nor enabled by
1074- default. You need to perform the following two steps to configure the iscsitarget
1075- service in order for nova-volume to work.</para>
1076- <para>
1077- <literallayout class="monospaced">
1078- sed -i ‘s/false/true/g’ /etc/default/iscsitarget
1079- service iscsitarget start</literallayout></para></simplesect><simplesect><title>Configure nova.conf Flag File</title>
1080- <para>Edit your nova.conf to include a new flag, –iscsi_ip_prefix=192.168. The value of this flag needs to be set to something that will differentiate the IP addresses, to ensure it uses IP addresses that are route-able, such as a prefix on the private network. </para></simplesect>
1081- <simplesect><title>Start nova-volume and Create Volumes</title>
1082-
1083- <para>You are now ready to fire up nova-volume, and start creating volumes!</para>
1084-
1085- <para><literallayout class="monospaced">service nova-volume start</literallayout></para>
1086-
1087- <para>Once the service is started, login to your controller and ensure you’ve properly sourced your ‘novarc’ file. You will use the following commands to interface with nova-volume:</para>
1088-
1089-<para><literallayout class="monospaced"> euca-create-volume
1090- euca-attach-volume
1091- euca-detach-volume
1092- euca-delete-volume</literallayout></para>
1093-
1094- <para>One of the first things you should do is make sure that nova-volume is checking in as expected.&nbsp; You can do so using nova-manage:</para>
1095- <para><literallayout class="monospaced">nova-manage service list</literallayout></para>
1096- <para>If you see a ‘nova-volume’ in there, you are looking good.&nbsp; Now create a new volume:</para>
1097- <para><literallayout class="monospaced">euca-create-volume -s 7 -z nova&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (-s refers to the size of the volume in GB, and -z is the default zone (usually nova))</literallayout></para>
1098-
1099- <para>You should get some output similar to this:</para>
1100- <para><literallayout class="monospaced">VOLUME&nbsp; vol-0000000b&nbsp;&nbsp;&nbsp; 7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; creating (wayne, None, None, None)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2011-02-11 06:58:46.941818</literallayout></para>
1101- <para>You can view that status of the volumes creation using ‘euca-describe-volumes’.&nbsp; Once that status is ‘available,’ it is ready to be attached to an instance:</para>
1102- <para><literallayout class="monospaced">euca-attach-volume vol-00000009 -i i-00000008 -d /dev/vdb</literallayout></para>
1103-
1104- <para>If you do not get any errors, it is time to login to instance ‘i-00000008′ and see if the new space is there.&nbsp; Here is the output from ‘fdisk -l’ from i-00000008:</para>
1105- <para><literallayout class="monospaced">Disk /dev/vda: 10.7 GB, 10737418240 bytes
1106+
1107+ <simplesect>
1108+ <title>A- Install nova-volumes on the cloud controller.</title>
1109+            <para> This is simply done by installing the two components on the cloud controller: <literallayout class="monospaced"><code>apt-get install lvm2 nova-volume</code></literallayout><itemizedlist>
1110+ <listitem>
1111+ <para>
1112+ <emphasis role="bold">Configure Volumes for use with
1113+ nova-volumes</emphasis></para>
1114+                    <para> If you do not already have LVM volumes on hand, but have free drive
1115+                        space, you will need to create an LVM volume before proceeding. Here is a
1116+                        short rundown of how you would create an LVM volume from free drive space
1117+                        on your system. Start off by issuing an fdisk command to your drive with
1118+                        the free space:
1119+ <literallayout class="monospaced"><code>fdisk /dev/sda</code></literallayout>
1120+ Once in fdisk, perform the following commands: <orderedlist>
1121+ <listitem>
1122+ <para>Press ‘<code>n'</code> to create a new disk
1123+ partition,</para>
1124+ </listitem>
1125+ <listitem>
1126+ <para>Press <code>'p'</code> to create a primary disk
1127+ partition,</para>
1128+ </listitem>
1129+ <listitem>
1130+ <para>Press <code>'1'</code> to denote it as 1st disk
1131+ partition,</para>
1132+ </listitem>
1133+ <listitem>
1134+ <para>Either press ENTER twice to accept the default of 1st and
1135+ last cylinder – to convert the remainder of hard disk to a
1136+ single disk partition -OR- press ENTER once to accept the
1137+ default of the 1st, and then choose how big you want the
1138+ partition to be by specifying +size{K,M,G} e.g. +5G or
1139+ +6700M.</para>
1140+ </listitem>
1141+ <listitem>
1142+ <para>Press <code>'t', then</code> select the new partition you
1143+ made.</para>
1144+ </listitem>
1145+ <listitem>
1146+                            <para>Press <code>'8e'</code> to change your new partition to 8e,
1147+                                i.e. the Linux LVM partition type.</para>
1148+ </listitem>
1149+ <listitem>
1150+ <para>Press ‘<code>p'</code> to display the hard disk partition
1151+ setup. Please take note that the first partition is denoted
1152+ as /dev/sda1 in Linux.</para>
1153+ </listitem>
1154+ <listitem>
1155+ <para>Press <code>'w'</code> to write the partition table and
1156+ exit fdisk upon completion.</para>
1157+ <para>Refresh your partition table to ensure your new partition
1158+ shows up, and verify with fdisk. We then inform the OS about
1159+                                the partition table update: </para>
1160+ <para>
1161+ <literallayout class="monospaced"><code>partprobe</code>
1162+
1163+Again :
1164+<code>fdisk -l (you should see your new partition in this listing)</code></literallayout>
1165+ </para>
1166+ <para>Here is how you can set up partitioning during the OS
1167+ install to prepare for this nova-volume
1168+ configuration:</para>
1169+ <para>root@osdemo03:~# fdisk -l </para>
1170+ <para>
1171+ <programlisting>
1172+Device Boot Start End Blocks Id System
1173+
1174+/dev/sda1 * 1 12158 97280 83 Linux
1175+/dev/sda2 12158 24316 97655808 83 Linux
1176+
1177+/dev/sda3 24316 24328 97654784 83 Linux
1178+/dev/sda4 24328 42443 145507329 5 Extended
1179+
1180+<emphasis role="bold">/dev/sda5 24328 32352 64452608 8e Linux LVM</emphasis>
1181+<emphasis role="bold">/dev/sda6 32352 40497 65428480 8e Linux LVM</emphasis>
1182+
1183+/dev/sda7 40498 42443 15624192 82 Linux swap / Solaris
1184+</programlisting>
1185+ </para>
1186+                            <para>Now that you have identified a partition labeled
1187+                                for LVM use, perform the following steps to configure LVM
1188+                                and prepare it for nova-volume. <emphasis role="bold">You
1189+ must name your volume group ‘nova-volumes’ or things
1190+ will not work as expected</emphasis>:</para>
1191+ <literallayout class="monospaced"><code>pvcreate /dev/sda5
1192+vgcreate nova-volumes /dev/sda5</code> </literallayout>
1193+ </listitem>
1194+ </orderedlist></para>
1195+ </listitem>
1196+ </itemizedlist></para>
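+            <para>To double-check the volume group before moving on, you can list it with the
+                LVM tools (the sizes shown will of course depend on your partitioning):</para>
+            <literallayout class="monospaced"><code>vgs nova-volumes</code></literallayout>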
1197+ </simplesect>
1198+ <simplesect>
1199+            <title>B- Configuring nova-volumes on the compute nodes</title>
1200+            <para> Now that you have created the VG, you will be able to use the following tools for
1201+                managing your volumes: </para>
1202+ <simpara><code>euca-create-volume</code></simpara>
1203+ <simpara><code>euca-attach-volume</code></simpara>
1204+ <simpara><code>euca-detach-volume</code></simpara>
1205+ <simpara><code>euca-delete-volume</code></simpara>
1206+ <itemizedlist>
1207+ <listitem>
1208+ <para>
1209+                        <emphasis role="bold">Installing and Configuring the iSCSI
1210+                        initiator</emphasis></para>
1211+                    <para> Remember that every compute node will act as the iSCSI initiator while the server
1212+                        running nova-volume will act as the iSCSI target. Before going
1213+                        further, make sure that your nodes can communicate with your nova-volume server.
1214+                        If you have a firewall running on it, make sure that port 3260 (TCP)
1215+                        accepts incoming connections. </para>
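+                    <para>If iptables runs on the nova-volume server, a rule such as the following
+                        (an example only; adapt it to your existing rule set) opens the iSCSI port
+                        to the compute nodes:</para>
+                    <literallayout class="monospaced"><code>iptables -A INPUT -p tcp --dport 3260 -j ACCEPT</code></literallayout>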
1216+ <para>First install the open-iscsi package <emphasis role="bold">on your
1217+ compute-nodes only :</emphasis>
1218+ <literallayout class="monospaced"><code>apt-get install open-iscsi</code> </literallayout></para>
1219+                    <para>On the nova-volume server side, make sure the iscsitarget service is
1220+                        enabled and started, so that its startup script (/etc/init.d/iscsitarget) will work:
1221+                        <literallayout class="monospaced"><code>sed -i 's/false/true/g' /etc/default/iscsitarget</code></literallayout>
1222+                        Then run:
1223+                        <literallayout class="monospaced"><code>service iscsitarget start</code></literallayout></para>
1224+ </listitem>
1225+ <listitem>
1226+ <para><emphasis role="bold">Configure nova.conf Flag File</emphasis></para>
1227+                    <para>Edit your nova.conf to include a new flag, "--iscsi_ip_prefix=192.168". The
1228+                        flag is used by the compute node when the iSCSI discovery is
1229+                        performed and the session created. A prefix based on the first two bytes
1230+                        allows the iSCSI discovery to use all the available routes (also known
1231+                        as multipathing) to the iSCSI server (e.g. the nova-volume host) on your network.
1232+                        We will see in the "Troubleshooting" section how to deal with iSCSI
1233+                        sessions.</para>
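+                    <para>As a minimal sketch, using the prefix from this walkthrough, the line to
+                        add to nova.conf is:</para>
+                    <literallayout class="monospaced"><code>--iscsi_ip_prefix=192.168</code></literallayout>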
1234+ </listitem>
1235+ <listitem>
1236+ <para>
1237+ <emphasis role="bold">Start nova-volume and Create Volumes</emphasis></para>
1238+ <para>You are now ready to fire up nova-volume, and start creating
1239+ volumes!</para>
1240+
1241+ <para><literallayout class="monospaced"><code>service nova-volume start</code></literallayout></para>
1242+
1243+                    <para>Once the service is started, log in to your controller and ensure you’ve
1244+                        properly sourced your ‘novarc’ file. You will be able to use the euca2ools
1245+                        volume commands listed above.</para>
1246+ <para/>
1247+
1248+ <para>One of the first things you should do is make sure that nova-volume is
1249+ checking in as expected. You can do so using nova-manage:</para>
1250+ <para><literallayout class="monospaced"><code>nova-manage service list</code></literallayout></para>
1251+                    <para>If you see ‘nova-volume’ with a smiling ‘:-)’ in there, you are looking good. Now
1252+ create a new volume:</para>
1253+ <para><literallayout class="monospaced"><code>euca-create-volume -s 7 -z nova </code> (-s refers to the size of the volume in GB, and -z is the default zone (usually nova))</literallayout></para>
1254+
1255+ <para>You should get some output similar to this:</para>
1256+ <para>
1257+ <programlisting>VOLUME vol-0000000b 7 creating (wayne, None, None, None) 2011-02-11 06:58:46.941818</programlisting>
1258+ </para>
1259+                    <para>You can view the status of the volume's creation using
1260+                        ‘euca-describe-volumes’. Once the status is ‘available,’ it is ready to be
1261+ attached to an instance:</para>
1262+                    <para><literallayout class="monospaced"><code>euca-attach-volume -i i-00000008 -d /dev/vdb vol-00000009</code> (-i refers to the instance you will attach the volume to, -d is the <emphasis role="bold">device name exposed to the instance</emphasis>, and then comes the volume name.)</literallayout></para>
1263+                    <para>By doing that, the compute node which runs the instance basically performs
1264+                        an iSCSI connection and creates a session. You can ensure that the session
1265+                        has been created by running: </para>
1266+                    <para><code>iscsiadm -m session </code></para>
1267+                    <para>which should output: </para>
1268+ <para>
1269+ <programlisting>root@nova-cn1:~# iscsiadm -m session
1270+tcp: [1] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000b</programlisting>
1271+ </para>
1272+
1273+                    <para>If you do not get any errors, it is time to log in to instance ‘i-00000008′
1274+                        and see if the new space is there. You can check the volume attachment by
1275+                        running: </para>
1276+                    <para><code>dmesg | tail </code></para>
1277+                    <para>You should see a new disk there. Here is the output from ‘fdisk -l’
1278+                        from i-00000008:</para>
1279+ <programlisting>Disk /dev/vda: 10.7 GB, 10737418240 bytes
1280 16 heads, 63 sectors/track, 20805 cylinders
1281 Units = cylinders of 1008 * 512 = 516096 bytes
1282 Sector size (logical/physical): 512 bytes / 512 bytes
1283 I/O size (minimum/optimal): 512 bytes / 512 bytes
1284-Disk identifier: 0×00000000</literallayout></para>
1285- <literallayout>Disk /dev/vda doesn’t contain a valid partition table</literallayout>
1286-
1287- <para>
1288- <literallayout>Disk /dev/vdb: 21.5 GB, 21474836480 bytes&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;—–Here is our new volume!&nbsp;
1289-16 heads, 63 sectors/track, 41610 cylinders
1290-Units = cylinders of 1008 * 512 = 516096 bytes
1291-Sector size (logical/physical): 512 bytes / 512 bytes
1292-I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0×00000000</literallayout>
1293- </para>
1294- <para>Disk /dev/vdb doesn’t contain a valid partition table</para>
1295-
1296- <para>Now with the space presented, let’s configure it for use:</para>
1297- <para><literallayout class="monospaced">fdisk /dev/vdb</literallayout></para>
1298- <orderedlist>
1299- <listitem><para>Press ‘<code>n'</code> to create a new disk partition.</para></listitem>
1300- <listitem><para>Press <code>'p'</code> to create a primary disk partition.</para></listitem>
1301- <listitem><para>Press <code>'1'</code> to denote it as 1st disk partition.</para></listitem>
1302-
1303- <listitem><para>Press ENTER twice to accept the default of 1st and last cylinder – to convert the remainder of
1304- hard disk to a single disk partition.</para></listitem>
1305- <listitem><para>Press <code>'t', then</code> select the new partition you made.</para></listitem>
1306- <listitem><para>Press <code>'83'</code> change your new partition to 83, i.e. Linux partition type.</para></listitem>
1307- <listitem><para>Press ‘<code>p'</code> to display the hard disk partition setup. Please take note that the
1308- first partition is denoted as /dev/vda1 in your instance.</para></listitem>
1309-
1310- <listitem>
1311- <para>Press <code>'w'</code> to write the partition table and exit fdisk upon
1312- completion.</para>
1313- </listitem>
1314- <listitem>
1315- <para>Lastly, make a file system on the partition and mount it.</para><literallayout class="monospaced">mkfs.ext3 /dev/vdb1
1316+Disk identifier: 0×00000000
1317+Disk /dev/vda doesn’t contain a valid partition table
1318+<emphasis role="bold">Disk /dev/vdb: 21.5 GB, 21474836480 bytes &lt;—–Here is our new volume!</emphasis>
1319+16 heads, 63 sectors/track, 41610 cylinders
1320+Units = cylinders of 1008 * 512 = 516096 bytes
1321+Sector size (logical/physical): 512 bytes / 512 bytes
1322+I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0×00000000 </programlisting>
1323+
1324+ <para>Now with the space presented, let’s configure it for use:</para>
1325+ <para><literallayout class="monospaced"><code>fdisk /dev/vdb</code></literallayout></para>
1326+ <orderedlist>
1327+ <listitem>
1328+ <para>Press ‘<code>n'</code> to create a new disk partition.</para>
1329+ </listitem>
1330+ <listitem>
1331+ <para>Press <code>'p'</code> to create a primary disk partition.</para>
1332+ </listitem>
1333+ <listitem>
1334+ <para>Press <code>'1'</code> to denote it as 1st disk partition.</para>
1335+ </listitem>
1336+
1337+ <listitem>
1338+ <para>Press ENTER twice to accept the default of 1st and last cylinder –
1339+ to convert the remainder of hard disk to a single disk
1340+ partition.</para>
1341+ </listitem>
1342+ <listitem>
1343+ <para>Press <code>'t', then</code> select the new partition you
1344+ made.</para>
1345+ </listitem>
1346+ <listitem>
1347+                            <para>Press <code>'83'</code> to change your new partition to 83, i.e.
1348+                                the Linux partition type.</para>
1349+ </listitem>
1350+ <listitem>
1351+ <para>Press ‘<code>p'</code> to display the hard disk partition setup.
1352+ Please take note that the first partition is denoted as /dev/vda1 in
1353+ your instance.</para>
1354+ </listitem>
1355+
1356+ <listitem>
1357+ <para>Press <code>'w'</code> to write the partition table and exit fdisk
1358+ upon completion.</para>
1359+ </listitem>
1360+ <listitem>
1361+ <para>Lastly, make a file system on the partition and mount it.
1362+ <programlisting>mkfs.ext3 /dev/vdb1
1363 mkdir /extraspace
1364-mount /dev/vdb1 /extraspace</literallayout>
1365- </listitem></orderedlist>
1366- <para>Your new volume has now been successfully mounted, and is ready for use! The ‘euca’
1367- commands are pretty self-explanatory, so play around with them and create new
1368- volumes, tear them down, attach and reattach, and so on. </para>
1369- </simplesect></section>
1370+mount /dev/vdb1 /extraspace </programlisting></para>
1371+
1372+ </listitem>
1373+ </orderedlist>
1374+ <para>Your new volume has now been successfully mounted, and is ready for use!
1375+ The ‘euca’ commands are pretty self-explanatory, so play around with them
1376+ and create new volumes, tear them down, attach and reattach, and so on.
1377+ </para>
1378+ </listitem>
1379+ </itemizedlist>
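+            <para>For example, a quick round-trip to tear a test volume down again (unmount it
+                inside the instance first) might look like this:</para>
+            <literallayout class="monospaced"><code>euca-detach-volume vol-00000009
+euca-delete-volume vol-00000009</code></literallayout>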
1380+ </simplesect>
1381+ <simplesect>
1382+ <title>C- Troubleshoot your nova-volumes installation</title>
1383+            <para>If the volume attachment doesn't work, you should be able to perform different
1384+                checks in order to see where the issue is. The nova-volume.log and nova-compute.log
1385+                files will help you to diagnose the errors you could encounter: </para>
1386+            <para><emphasis role="bold">nova-compute.log / nova-volume.log</emphasis></para>
1387+ <para>
1388+ <itemizedlist>
1389+ <listitem>
1390+ <para><emphasis role="italic">ERROR "15- already exists"</emphasis>
1391+ <programlisting>"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000001 -p
1392+10.192.12.34:3260 --login\nExit code: 255\nStdout: 'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-00000001, portal:
1393+10.192.12.34,3260]\\n'\nStderr: 'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-00000001,
1394+portal:10.192.12.34,3260]: openiscsiadm: initiator reported error (15 - already exists)\\n'\n"] </programlisting></para>
1395+                        <para> This error sometimes happens when you run euca-detach-volume and
1396+                            euca-attach-volume, and/or try to attach another volume to an instance.
1397+                            It happens when the compute node has a running session while you try to
1398+                            attach a volume using the same IQN. You can check that by running: </para>
1399+ <para><literallayout class="monospaced"><code>iscsiadm -m session</code></literallayout>
1400+                            You should see a session with the same name that the compute node is trying
1401+                            to open. This seems to be related to the several routes
1402+                            available to the exposed iSCSI target; those routes can be seen by
1403+                            running, on the compute node:
1404+                            <literallayout class="monospaced"><code>iscsiadm -m discovery -t st -p $ip_of_nova-volumes</code></literallayout>
1405+                            You should see multiple addresses for reaching a given volume. The only
1406+                            known workaround is to change the "--iscsi_ip_prefix" flag to
1407+                            use all 4 bytes (the full IP) of the nova-volume server, e.g.: </para>
1408+                        <para><literallayout class="monospaced"><code>--iscsi_ip_prefix=192.168.2.1</code></literallayout>
1409+                            You'll then have to restart both the nova-compute and nova-volume services. </para>
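+                        <para>On Ubuntu, for instance (assuming the services run under their usual
+                            init script names), that would be:</para>
+                        <literallayout class="monospaced"><code>service nova-compute restart
+service nova-volume restart</code></literallayout>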
1410+ <para/>
1411+ </listitem>
1412+ <listitem>
1413+ <para><emphasis role="italic">ERROR "Cannot resolve host"</emphasis>
1414+ <programlisting>(nova.root): TRACE: ProcessExecutionError: Unexpected error while running command.
1415+(nova.root): TRACE: Command: sudo iscsiadm -m discovery -t sendtargets -p ubuntu03c
1416+(nova.root): TRACE: Exit code: 255
1417+(nova.root): TRACE: Stdout: ''
1418+(nova.root): TRACE: Stderr: 'iscsiadm: Cannot resolve host ubuntu03c. getaddrinfo error: [Name or service not known]\n\niscsiadm:
1419+cannot resolve host name ubuntu03c\niscsiadm: Could not perform SendTargets discovery.\n'
1420+(nova.root): TRACE:</programlisting>
1421+                            This error happens when the compute node is unable to resolve the
1422+                            nova-volume server name. You can either add a record for the server if
1423+                            you have a DNS server, or add it to the "/etc/hosts" file of the
1424+                            nova-compute node. </para>
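+                        <para>A hypothetical /etc/hosts entry for the trace above would look like
+                            this (the IP address is only an example; use your nova-volume server's
+                            actual address):</para>
+                        <literallayout class="monospaced"><code>172.16.40.244  ubuntu03c</code></literallayout>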
1425+ <para/>
1426+ </listitem>
1427+ <listitem>
1428+ <para><emphasis role="italic">ERROR "No route to host"</emphasis>
1429+ <programlisting>iscsiadm: cannot make connection to 172.29.200.37: No route to host\niscsiadm: cannot make connection to 172.29.200.37</programlisting>
1430+                            This error could be caused by several things, but<emphasis role="bold">
1431+                                it means only one thing: open-iscsi is unable to establish
1432+                                communication with your nova-volume server</emphasis>.</para>
1433+                        <para>The first thing you can do is run a telnet session in order to
1434+                            see if you are able to reach the nova-volume server. From the
1435+                            compute node, run:</para>
1436+ <literallayout class="monospaced"><code>telnet $ip_of_nova_volumes 3260</code></literallayout>
1437+                        <para> If the session times out, check the server firewall, or try to ping
1438+                            it. You can also run a tcpdump session, which will likely give you
1439+                            extra information: </para>
1440+                        <literallayout class="monospaced"><code>tcpdump -nvv -i $iscsi_interface dst $ip_of_nova_volumes and port 3260</code></literallayout>
1441+                        <para> Again, try to manually run an iSCSI discovery via: </para>
1442+ <literallayout class="monospaced"><code>iscsiadm -m discovery -t st -p $ip_of_nova-volumes</code></literallayout>
1443+ <para/>
1444+ </listitem>
1445+ <listitem>
1446+                        <para><emphasis role="italic">"I lost connectivity between nova-volume and
1447+                                the compute node; how do I restore a clean state?"</emphasis>
1448+                        </para>
1449+                        <para>Network disconnections can happen; from an iSCSI point of view, losing
1450+                            connectivity is like physically removing a server's disk. If
1451+                            an instance has a volume attached while you lose the network between them, you
1452+                            won't be able to detach the volume, and you will encounter several errors.
1453+                            Here is how you can clean this up: </para>
1454+                        <para>First, from the compute node, close the active (but stalled) iSCSI
1455+                            session. Refer to the attached volume to identify the session, and perform
1456+                            the following command: </para>
1457+ <literallayout class="monospaced"><code>iscsiadm -m session -r $session_id -u</code></literallayout>
1458+                        <para>Here is an <code>iscsiadm -m session</code> output: </para>
1459+ <programlisting>
1460+tcp: [1] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000e
1461+tcp: [2] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000010
1462+tcp: [3] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000011
1463+tcp: [4] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000a
1464+tcp: [5] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000012
1465+tcp: [6] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000007
1466+tcp: [7] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000009
1467+tcp: [9] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000014 </programlisting>
1468+                        <para>I would close session number 9 if I wanted to free volume
1469+                            00000014. </para>
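+                        <para>Concretely, for that example, the logout command would be:</para>
+                        <literallayout class="monospaced"><code>iscsiadm -m session -r 9 -u</code></literallayout>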
1470+                        <para>The cloud controller is actually unaware of the iSCSI session
1471+                            closing, and will keep the volume state as "in-use":
1472+                            <programlisting>VOLUME	vol-00000014	 30	 	nova	in-use (nuage-and-co, nova-cc1, i-0000009a[nova-cn1], \/dev\/sdb)	2011-07-18T12:45:39Z</programlisting>
1473+                            You now have to inform it that the disk can be used again. Nova stores the
1474+                            volume info in the "volumes" table. You will have to update four
1475+                            fields in the database nova uses (e.g. MySQL). First, connect to the
1476+                            database: </para>
1477+                        <literallayout class="monospaced"><code>mysql -uroot -p$password nova</code></literallayout>
1478+                        <para>Then, get some info from the "volumes" table: </para>
1479+ <programlisting>
1480+ mysql> select id,created_at, size, instance_id, status, attach_status, display_name from volumes;
1481++----+---------------------+------+-------------+----------------+---------------+--------------+
1482+| id | created_at | size | instance_id | status | attach_status | display_name |
1483++----+---------------------+------+-------------+----------------+---------------+--------------+
1484+| 1 | 2011-06-08 09:02:49 | 5 | 0 | available | detached | volume1 |
1485+| 2 | 2011-06-08 14:04:36 | 5 | 0 | available | detached | NULL |
1486+| 3 | 2011-06-08 14:44:55 | 5 | 0 | available | detached | NULL |
1487+| 4 | 2011-06-09 09:09:15 | 5 | 0 | error_deleting | detached | NULL |
1488+| 5 | 2011-06-10 08:46:33 | 6 | 0 | available | detached | NULL |
1489+| 6 | 2011-06-10 09:16:18 | 6 | 0 | available | detached | NULL |
1490+| 7 | 2011-06-16 07:45:57 | 10 | 157 | in-use | attached | NULL |
1491+| 8 | 2011-06-20 07:51:19 | 10 | 0 | available | detached | NULL |
1492+| 9 | 2011-06-21 08:21:38 | 10 | 152 | in-use | attached | NULL |
1493+| 10 | 2011-06-22 09:47:42 | 50 | 136 | in-use | attached | NULL |
1494+| 11 | 2011-06-30 07:30:48 | 50 | 0 | available | detached | NULL |
1495+| 12 | 2011-06-30 11:56:32 | 50 | 0 | available | detached | NULL |
1496+| 13 | 2011-06-30 12:12:08 | 50 | 0 | error_deleting | detached | NULL |
1497+| 14 | 2011-07-04 12:33:50 | 30 | 155 | in-use | attached | NULL |
1498+| 15 | 2011-07-06 15:15:11 | 5 | 0 | error_deleting | detached | NULL |
1499+| 16 | 2011-07-07 08:05:44 | 20 | 149 | in-use | attached | NULL |
1500+| 20 | 2011-08-30 13:28:24 | 20 | 158 | in-use | attached | NULL |
1501+| 17 | 2011-07-13 19:41:13 | 20 | 149 | in-use | attached | NULL |
1502+| 18 | 2011-07-18 12:45:39 | 30 | 154 | in-use | attached | NULL |
1503+| 19 | 2011-08-22 13:11:06 | 50 | 0 | available | detached | NULL |
1504+| 21 | 2011-08-30 15:39:16 | 5 | NULL | error_deleting | detached | NULL |
1505++----+---------------------+------+-------------+----------------+---------------+--------------+
1506+21 rows in set (0.00 sec)</programlisting>
1507+                        <para> Once you get the volume id, you will have to run the following SQL
1508+                            queries (let's say my volume 14 has the id number 21): </para>
1509+ <programlisting>
1510+ mysql> update volumes set mountpoint=NULL where id=21;
1511+ mysql> update volumes set status="available" where id=21;
1512+ mysql> update volumes set attach_status="detached" where id=21;
1513+ mysql> update volumes set instance_id=0 where id=21;
1514+ </programlisting>
1515+                        <para>Now if you run <code>euca-describe-volumes</code> again from the cloud
1516+                            controller, you should see an available volume: </para>
1517+ <programlisting>VOLUME vol-00000014 30 nova available (nuage-and-co, nova-cc1, None, None) 2011-07-18T12:45:39Z</programlisting>
1518+                        <para>You can now proceed to attach the volume again!</para>
1519+ </listitem>
1520+ </itemizedlist>
1521+ </para>
1522+ </simplesect>
1523+ <simplesect>
1524+            <title>D- Advanced tips: Disaster Recovery Process, back up your nova-volumes, browse
1525+                your nova-volumes from the cloud-controller </title>
1526+ <para>
1527+ <emphasis role="italic">
1528+ WORK IN PROGRESS
1529+ </emphasis>
1530+ </para>
1531+ <para/>
1532+ </simplesect>
1533+ </section>
1534 <section>
1535 <?dbhtml filename="live-migration-usage.html" ?>
1536 <title>Using Live Migration</title>
1537@@ -680,74 +1212,80 @@
1538 <para>Live migration provides a scheme to migrate running instances from one OpenStack
1539 Compute server to another OpenStack Compute server. No visible downtime and no
1540 transaction loss is the ideal goal. This feature can be used as depicted below. </para>
1541-
1542+
1543 <itemizedlist>
1544 <listitem>
1545                 <para>First, check which instances are running on a specific server.</para>
1546 <programlisting><![CDATA[
1547 # euca-describe-instances
1548 Reservation:r-2raqmabo
1549-RESERVATION r-2raqmabo admin default
1550-INSTANCE i-00000003 ami-ubuntu-lucid a.b.c.d e.f.g.h running testkey (admin, HostB) 0 m1.small 2011-02-15 07:28:32 nova
1551- ]]></programlisting>
1552- <para> In this example, i-00000003 is running on HostB.</para>
1553+RESERVATION r-2raqmabo admin default
1554+INSTANCE i-00000003 ami-ubuntu-lucid a.b.c.d e.f.g.h running testkey (admin, HostB) 0 m1.small 2011-02-15 07:28:32 nova
1555+]]></programlisting>
1556+ <para> In this example, i-00000003 is running on HostB.</para>
1557 </listitem>
1558                 <para>Second, pick another server to which the instances will be migrated.</para>
1559 <para>Second, pick up other server where instances are migrated to.</para>
1560 <programlisting><![CDATA[
1561 # nova-manage service list
1562-HostA nova-scheduler enabled :-) None
1563-HostA nova-volume enabled :-) None
1564-HostA nova-network enabled :-) None
1565-HostB nova-compute enabled :-) None
1566-HostC nova-compute enabled :-) None
1567- ]]></programlisting>
1568- <para> In this example, HostC can be picked up because nova-compute is running onto it.</para>
1569+HostA nova-scheduler enabled :-) None
1570+HostA nova-volume enabled :-) None
1571+HostA nova-network enabled :-) None
1572+HostB nova-compute enabled :-) None
1573+HostC nova-compute enabled :-) None
1574+]]></programlisting>
1575+                <para> In this example, HostC can be picked because nova-compute is running on
1576+                    it.</para>
1577 </listitem>
1578 <listitem>
1579                 <para>Third, check that HostC has enough resources for live migration.</para>
1580 <programlisting><![CDATA[
1581 # nova-manage service update_resource HostC
1582 # nova-manage service describe_resource HostC
1583-HOST PROJECT cpu mem(mb) disk(gb)
1584-HostC(total) 16 32232 878
1585-HostC(used) 13 21284 442
1586-HostC p1 5 10240 150
1587-HostC p2 5 10240 150
1588+HOST PROJECT cpu mem(mb) disk(gb)
1589+HostC(total) 16 32232 878
1590+HostC(used) 13 21284 442
1591+HostC p1 5 10240 150
1592+HostC p2 5 10240 150
1593 .....
1594- ]]></programlisting>
1595- <para>Remember to use update_resource first, then describe_resource. Otherwise,
1596+]]></programlisting>
1597+ <para>Remember to use update_resource first, then describe_resource. Otherwise,
1598 Host(used) is not updated.</para>
1599- <itemizedlist>
1600- <listitem>
1601- <para><emphasis role="bold">cpu:</emphasis>the nuber of cpu</para>
1602- </listitem>
1603- <listitem>
1604- <para><emphasis role="bold">mem(mb):</emphasis>total amount of memory (MB)</para>
1605- </listitem>
1606- <listitem>
1607- <para><emphasis role="bold">disk(gb)</emphasis>total amount of NOVA-INST-DIR/instances(GB)</para>
1608- </listitem>
1609- <listitem>
1610- <para><emphasis role="bold">1st line shows </emphasis>total amount of resource physical server has.</para>
1611- </listitem>
1612- <listitem>
1613- <para><emphasis role="bold">2nd line shows </emphasis>current used resource.</para>
1614- </listitem>
1615- <listitem>
1616- <para><emphasis role="bold">3rd line and under</emphasis> is used resource per project.</para>
1617- </listitem>
1618- </itemizedlist>
1619+ <itemizedlist>
1620+ <listitem>
1621+                    <listitem>
1622+                        <para><emphasis role="bold">cpu:</emphasis> the number of CPUs</para>
1623+                    </listitem>
1624+                    <listitem>
1625+                        <para><emphasis role="bold">mem(mb):</emphasis> total amount of memory
1626+                            (MB)</para>
1627+                    </listitem>
1628+                    <listitem>
1629+                        <para><emphasis role="bold">disk(gb):</emphasis> total amount of
1630+                            NOVA-INST-DIR/instances (GB)</para>
1631+                    </listitem>
1632+                    <listitem>
1633+                        <para><emphasis role="bold">1st line shows </emphasis>the total amount of
1634+                            resources the physical server has.</para>
1635+                    </listitem>
1636+                    <listitem>
1637+                        <para><emphasis role="bold">2nd line shows </emphasis>the currently used
1638+                            resources.</para>
1639+                    </listitem>
1640+                    <listitem>
1641+                        <para><emphasis role="bold">3rd line and below</emphasis> show used resources
1642+                            per project.</para>
1642+ </listitem>
1643+ </itemizedlist>
1644 </listitem>
1645 <listitem>
1646 <para>Finally, live migration</para>
1647 <programlisting><![CDATA[
1648 # nova-manage vm live_migration i-00000003 HostC
1649 Migration of i-00000001 initiated. Check its progress using euca-describe-instances.
1650- ]]></programlisting>
1651- <para>Make sure instances are migrated successfully with euca-describe-instances.
1652- If instances are still running on HostB, check logfiles( src/dest nova-compute
1653- and nova-scheduler)</para>
1654+]]></programlisting>
1655+                <para>Make sure instances are migrated successfully with euca-describe-instances. If
1656+                    instances are still running on HostB, check the log files (src/dest nova-compute and
1657+                    nova-scheduler).</para>
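+                <para>For example, assuming the default Ubuntu log location (/var/log/nova), you
+                    can watch both log files for errors with:</para>
+                <literallayout class="monospaced"><code>tail -f /var/log/nova/nova-compute.log /var/log/nova/nova-scheduler.log</code></literallayout>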
1658 </listitem>
1659 </itemizedlist>
1660
1661@@ -756,12 +1294,12 @@
1662 <section>
1663 <?dbhtml filename="reference-for-flags-in-nova-conf.html" ?>
1664 <title>Reference for Flags in nova.conf</title>
1665- <para>For a complete list of all available flags for each OpenStack Compute service,
1666- run bin/nova-&lt;servicename> --help. </para>
1667-
1668- <table rules="all">
1669+ <para>For a complete list of all available flags for each OpenStack Compute service, run
1670+ bin/nova-&lt;servicename> --help. </para>
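+        <para>For example, for the compute service:</para>
+        <literallayout class="monospaced"><code>bin/nova-compute --help</code></literallayout>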
1671+
1672+ <table rules="all">
1673 <caption>Description of common nova.conf flags (nova-api, nova-compute)</caption>
1674-
1675+
1676 <thead>
1677 <tr>
1678 <td>Flag</td>
1679@@ -901,17 +1439,22 @@
1680 <tr>
1681 <td>--flat_injected</td>
1682 <td>default: 'false'</td>
1684+                    <td>Indicates whether Compute (Nova) should attempt to inject IPv6 network
1685+                        configuration information into the guest. It attempts to modify
1685+ configuration information into the guest. It attempts to modify
1686+ /etc/network/interfaces and currently only works on Debian-based systems.
1687+ </td>
1688 </tr>
1689 <tr>
1690 <td>--fixed_ip_disassociate_timeout</td>
1691 <td>default: '600'</td>
1692- <td>Integer: Number of seconds after which a deallocated ip is disassociated. </td>
1693+ <td>Integer: Number of seconds after which a deallocated ip is disassociated.
1694+ </td>
1695 </tr>
1696 <tr>
1697 <td>--fixed_range</td>
1698 <td>default: '10.0.0.0/8'</td>
1699- <td>Fixed IP address block of addresses from which a set of iptables rules is created</td>
1700+ <td>Fixed IP address block of addresses from which a set of iptables rules is
1701+ created</td>
1702 </tr>
1703 <tr>
1704 <td>--fixed_range_v6</td>
1705@@ -921,7 +1464,8 @@
1706 <tr>
1707 <td>--[no]flat_injected</td>
1708 <td>default: 'true'</td>
1709- <td>Indicates whether to attempt to inject network setup into guest; network injection only works for Debian systems</td>
1710+ <td>Indicates whether to attempt to inject network setup into guest; network
1711+ injection only works for Debian systems</td>
1712 </tr>
1713 <tr>
1714 <td>--flat_interface</td>
1715@@ -936,7 +1480,8 @@
1716 <tr>
1717 <td>--flat_network_dhcp_start</td>
1718 <td>default: '10.0.0.2'</td>
1719- <td>Starting IP address for the DHCP server to start handing out IP addresses when using FlatDhcp </td>
1720+ <td>Starting IP address for the DHCP server to start handing out IP addresses
1721+ when using FlatDhcp </td>
1722 </tr>
1723 <tr>
1724 <td>--flat_network_dns</td>
1725@@ -948,7 +1493,7 @@
1726 <td>default: '4.4.4.0/24'</td>
1727 <td>Floating IP address block </td>
1728 </tr>
1729-
1730+
1731 <tr>
1732 <td>--[no]fake_network</td>
1733 <td>default: 'false'</td>
1734@@ -996,15 +1541,17 @@
1735 <tr>
1736 <td>--image_service</td>
1737 <td>default: 'nova.image.s3.S3ImageService'</td>
1738- <td><para>The service to use for retrieving and searching for images. Images must be registered using
1739- euca2ools. Options: </para><itemizedlist>
1740+ <td><para>The service to use for retrieving and searching for images. Images
1741+ must be registered using euca2ools. Options: </para><itemizedlist>
1742 <listitem>
1743 <para>nova.image.s3.S3ImageService</para>
1744 <para>S3 backend for the Image Service.</para>
1745 </listitem>
1746 <listitem>
1747 <para>nova.image.local.LocalImageService</para>
1748- <para>Image service storing images to local disk. It assumes that image_ids are integers. This is the default setting if no image manager is defined here.</para>
1749+ <para>Image service storing images to local disk. It assumes that
1750+ image_ids are integers. This is the default setting if no image
1751+ manager is defined here.</para>
1752 </listitem>
1753 <listitem>
1754 <para>nova.image.glance.GlanceImageService</para>
1755@@ -1022,7 +1569,8 @@
1756 <tr>
1757 <td>--libvirt_type</td>
1758 <td>default: kvm</td>
1759- <td>String: Name of connection to a hypervisor through libvirt. Supported options are kvm, qemu, uml, and xen.</td>
1760+ <td>String: Name of connection to a hypervisor through libvirt. Supported
1761+ options are kvm, qemu, uml, and xen.</td>
1762 </tr>
1763 <tr>
1764 <td>--lock_path</td>
1765@@ -1195,7 +1743,8 @@
1766 <tr>
1767 <td>--routing_source_ip</td>
1768 <td>default: '10'</td>
1769- <td>IP address; Public IP of network host. When instances without a floating IP hit the Internet, traffic is snatted to this IP address.</td>
1770+ <td>IP address; Public IP of network host. When instances without a floating IP
1771+ hit the Internet, traffic is snatted to this IP address.</td>
1772 </tr>
1773 <tr>
1774 <td>--s3_dmz</td>
1775@@ -1250,15 +1799,18 @@
1776 <td>default: '/usr/lib/pymodules/python2.6/nova/../'</td>
1777 <td>Top-level directory for maintaining Nova's state</td>
1778 </tr>
1779- <tr><td>--use_deprecated_auth</td>
1780- <td>default: 'false'</td>
1781- <td>Set to 1 or true to turn on; Determines whether to use the deprecated nova auth system or Keystone as the auth system </td></tr>
1782- <tr><td>--use_ipv6</td>
1783- <td>default: 'false'</td>
1784- <td>Set to 1 or true to turn on; Determines whether to use IPv6 network addresses </td></tr>
1785- <tr><td>--use_s3</td>
1786+ <tr>
1787+ <td>--use_ipv6</td>
1788+ <td>default: 'false'</td>
1789+ <td>Set to 1 or true to turn on; Determines whether to use IPv6 network
1790+ addresses </td>
1791+ </tr>
1792+ <tr>
1793+ <td>--use_s3</td>
1794 <td>default: 'true'</td>
1795- <td>Set to 1 or true to turn on; Determines whether to get images from s3 or use a local copy </td></tr>
1796+ <td>Set to 1 or true to turn on; Determines whether to get images from s3 or use
1797+ a local copy </td>
1798+ </tr>
1799 <tr>
1800 <td>--verbose</td>
1801 <td>default: 'false'</td>
1802@@ -1267,7 +1819,8 @@
1803 <tr>
1804 <td>--vlan_interface</td>
1805 <td>default: 'eth0'</td>
1806- <td>This is the interface that VlanManager uses to bind bridges and vlans to. </td>
1807+ <td>This is the interface that VlanManager uses to bind bridges and vlans to.
1808+ </td>
1809 </tr>
1810 <tr>
1811 <td>--vlan_start</td>
1812@@ -1282,39 +1835,47 @@
1813 <tr>
1814 <td>--vpn_key_suffix</td>
1815 <td>default: '-vpn'</td>
1816- <td>This is the interface that VlanManager uses to bind bridges and VLANs to.</td>
1817- </tr>
1818- </tbody>
1819- </table>
1820- <table rules="all">
1821- <caption>Description of nova.conf flags specific to nova-volume</caption>
1822-
1823- <thead>
1824- <tr>
1825- <td>Flag</td>
1826- <td>Default</td>
1827- <td>Description</td>
1828- </tr>
1829- </thead>
1830- <tbody>
1831- <tr><td>--iscsi_ip_prefix</td>
1832- <td>default: ''</td>
1833-
1834- <td>IP address or partial IP address; Value that differentiates the IP
1835- addresses using simple string matching, so if all of your hosts are on the 192.168.1.0/24 network you could use --iscsi_ip_prefix=192.168.1</td></tr>
1836-
1837- <tr>
1838- <td>--volume_manager</td>
1839- <td>default: 'nova.volume.manager.VolumeManager'</td>
1840- <td>String value; Manager to use for nova-volume</td>
1841- </tr>
1842- <tr>
1843- <td>--volume_name_template</td>
1844- <td>default: 'volume-%08x'</td>
1845- <td>String value; Template string to be used to generate volume names</td>
1846- </tr><tr>
1847- <td>--volume_topic</td>
1848- <td>default: 'volume'</td>
1849- <td>String value; Name of the topic that volume nodes listen on</td>
1850- </tr></tbody></table></section>
1851+ <td>This is the interface that VlanManager uses to bind bridges and VLANs
1852+ to.</td>
1853+ </tr>
1854+ </tbody>
1855+ </table>
1856+ <table rules="all">
1857+ <caption>Description of nova.conf flags specific to nova-volume</caption>
1858+
1859+ <thead>
1860+ <tr>
1861+ <td>Flag</td>
1862+ <td>Default</td>
1863+ <td>Description</td>
1864+ </tr>
1865+ </thead>
1866+ <tbody>
1867+ <tr>
1868+ <td>--iscsi_ip_prefix</td>
1869+ <td>default: ''</td>
1870+
1871+ <td>IP address or partial IP address; Value that differentiates the IP addresses
1872+ using simple string matching, so if all of your hosts are on the
1873+ 192.168.1.0/24 network you could use --iscsi_ip_prefix=192.168.1</td>
1874+ </tr>
1875+
1876+ <tr>
1877+ <td>--volume_manager</td>
1878+ <td>default: 'nova.volume.manager.VolumeManager'</td>
1879+ <td>String value; Manager to use for nova-volume</td>
1880+ </tr>
1881+ <tr>
1882+ <td>--volume_name_template</td>
1883+ <td>default: 'volume-%08x'</td>
1884+ <td>String value; Template string to be used to generate volume names</td>
1885+ </tr>
1886+ <tr>
1887+ <td>--volume_topic</td>
1888+ <td>default: 'volume'</td>
1889+ <td>String value; Name of the topic that volume nodes listen on</td>
1890+ </tr>
1891+ </tbody>
1892+ </table>
1893+ </section>
1894 </chapter>
