Merge lp:~razique/openstack-manuals/working into lp:~annegentle/openstack-manuals/trunk

Proposed by Razique Mahroua
Status: Merged
Merged at revision: 176
Proposed branch: lp:~razique/openstack-manuals/working
Merge into: lp:~annegentle/openstack-manuals/trunk
Diff against target: 1894 lines (+1062/-501)
1 file modified
doc/source/docbkx/openstack-compute-admin/computeadmin.xml (+1062/-501)
To merge this branch: bzr merge lp:~razique/openstack-manuals/working
Reviewer: Anne Gentle
Status: Approve
Review via email: mp+74369@code.launchpad.net

Description of the change

Split the section "1-8 Managing volumes" into four parts:
- Installing nova-volumes
- Configuring nova-volumes
- Troubleshoot the nova-volume setup
- Advanced tips

The section now covers the whole nova-volumes component in much greater depth.
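For readers skimming the proposal, the thread running through all four new parts is the LVM volume group that the nova-volume service manages (named nova-volumes by default) and exports over iSCSI. A minimal sketch of preparing that group, assuming a spare disk at /dev/sdb (a placeholder; substitute your own device):

    sudo pvcreate /dev/sdb                # mark the disk as an LVM physical volume
    sudo vgcreate nova-volumes /dev/sdb   # create the volume group the service expects
    sudo vgdisplay nova-volumes           # confirm the group before starting nova-volume

Once the group exists, volume-creation requests (for example via euca-create-volume) are satisfied by allocating logical volumes out of it.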

Anne Gentle (annegentle) wrote:

Thanks for doing this, it was needed! I'm bringing in just your section as there were a lot of white space changes in other areas of the document. We can talk online or via email to figure out why there were white space changes in other sections. I just fixed a few misspellings - euca-dettach-volume to euca-detach-volume, reffer to refer, attachement to attachment.

There is some confusion about nova-volume the "service" and nova-volumes the "volume group" but I think you have handled it well. I tried to spell iscsi as "iSCSI" when referring to the standard (but not for the commands, o' course).

Please let me know if you see anything incorrect in my corrections and feel free to continue to maintain the sections. We'll find out what is causing the white space differences.

review: Approve

Preview Diff

=== modified file 'doc/source/docbkx/openstack-compute-admin/computeadmin.xml'
--- doc/source/docbkx/openstack-compute-admin/computeadmin.xml 2011-09-01 14:09:41 +0000
+++ doc/source/docbkx/openstack-compute-admin/computeadmin.xml 2011-09-07 09:19:27 +0000
@@ -1,33 +1,33 @@
 <?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE chapter [
+<!DOCTYPE chapter[
 <!-- Some useful entities borrowed from HTML -->
 <!ENTITY ndash "&#x2013;">
 <!ENTITY mdash "&#x2014;">
 <!ENTITY hellip "&#x2026;">
 <!ENTITY nbsp "&#160;">
 <!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
 <imageobject>
 <imagedata fileref="img/Check_mark_23x20_02.svg"
 format="SVG" scale="60"/>
 </imageobject>
 </inlinemediaobject>'>

 <!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
 <imageobject>
 <imagedata fileref="img/Arrow_east.svg"
 format="SVG" scale="60"/>
 </imageobject>
 </inlinemediaobject>'>
 ]>
-<chapter xmlns="http://docbook.org/ns/docbook"
-xmlns:xi="http://www.w3.org/2001/XInclude"
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
 <?dbhtml filename="ch_system-administration-for-openstack-compute.html" ?>
 <title>System Administration</title>
 <para>By understanding how the different installed nodes interact with each other you can
 administer the OpenStack Compute installation. OpenStack Compute offers many ways to install
 using multiple servers but the general idea is that you can have multiple compute nodes that
-control the virtual servers and a cloud controller node that contains the remaining Nova services. </para>
+control the virtual servers and a cloud controller node that contains the remaining Nova
+services. </para>
 <para>The OpenStack Compute cloud works via the interaction of a series of daemon processes
 named nova-* that reside persistently on the host machine or machines. These binaries can
 all run on the same machine or be spread out on multiple boxes in a large deployment. The
@@ -77,113 +77,179 @@
 <para><literallayout class="monospaced">nova-network --network_manager=nova.network.manager.FlatManager</literallayout></para>
 </listitem>
 </itemizedlist>
-<section><?dbhtml filename="starting-images.html" ?>
-<title>Starting Images</title><para>Once you have an installation, you want to get images that you can use in your Compute cloud. We've created a basic Ubuntu image for testing your installation. First you'll download the image, then use uec-publish-tarball to publish it:</para>
-
-<para><literallayout class="monospaced">image="ubuntu1010-UEC-localuser-image.tar.gz"
+<section>
+<?dbhtml filename="starting-images.html" ?>
+<title>Starting Images</title>
+<para>Once you have an installation, you want to get images that you can use in your Compute
+cloud. We've created a basic Ubuntu image for testing your installation. First you'll
+download the image, then use uec-publish-tarball to publish it:</para>
+
+<para><literallayout class="monospaced">image="ubuntu1010-UEC-localuser-image.tar.gz"
 wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz
 uec-publish-tarball $image [bucket-name] [hardware-arch]</literallayout></para>

 <para>Here's an example of what this command looks like with data:</para>

 <para><literallayout class="monospaced"> uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz dub-bucket x86_64</literallayout></para>

-<para>The command in return should output three references: emi, eri and eki. You need to use the emi value (for example, “ami-zqkyh9th″) for the euca-run-instances command.</para>
-
-
-<para>Now you can schedule, launch and connect to the instance, which you do with tools from the Euca2ools on the command line. Create the emi value from the uec-publish-tarball command, and then you can use the euca-run-instances command.</para>
-<para>One thing to note here, once you publish the tarball, it has to untar before you can launch an image from it. Using the 'euca-describe-images' command, wait until the state turns to "available" from "untarring.":</para>
-
-<para><literallayout class="monospaced">euca-describe-images</literallayout></para>
-
-<para>Depending on the image that you're using, you need a public key to connect to it. Some images have built-in accounts already created. Images can be shared by many users, so it is dangerous to put passwords into the images. Nova therefore supports injecting ssh keys into instances before they are
-booted. This allows a user to login to the instances that he or she creates securely.
-Generally the first thing that a user does when using the system is create a keypair.
-Keypairs provide secure authentication to your instances. As part of the first boot of a
-virtual image, the private key of your keypair is added to root’s authorized_keys file.
-Nova generates a public and private key pair, and sends the private key to the user. The
-public key is stored so that it can be injected into instances. </para>
+<para>The command in return should output three references: emi, eri and eki. You need to
+use the emi value (for example, “ami-zqkyh9th″) for the euca-run-instances
+command.</para>
+
+
+<para>Now you can schedule, launch and connect to the instance, which you do with tools from
+the Euca2ools on the command line. Create the emi value from the uec-publish-tarball
+command, and then you can use the euca-run-instances command.</para>
+<para>One thing to note here, once you publish the tarball, it has to untar before you can
+launch an image from it. Using the 'euca-describe-images' command, wait until the state
+turns to "available" from "untarring.":</para>
+
+<para><literallayout class="monospaced">euca-describe-images</literallayout></para>
+
+<para>Depending on the image that you're using, you need a public key to connect to it. Some
+images have built-in accounts already created. Images can be shared by many users, so it
+is dangerous to put passwords into the images. Nova therefore supports injecting ssh
+keys into instances before they are booted. This allows a user to login to the instances
+that he or she creates securely. Generally the first thing that a user does when using
+the system is create a keypair. Keypairs provide secure authentication to your
+instances. As part of the first boot of a virtual image, the private key of your keypair
+is added to root’s authorized_keys file. Nova generates a public and private key pair,
+and sends the private key to the user. The public key is stored so that it can be
+injected into instances. </para>
 <para>Keypairs are created through the api and you use them as a parameter when launching an
 instance. They can be created on the command line using the euca2ools script
 euca-add-keypair. Refer to the man page for the available options. Example usage:</para>

 <literallayout class="monospaced">euca-add-keypair test > test.pem
 chmod 600 test.pem</literallayout>

 <para>Now, you can run the instances:</para>
 <literallayout class="monospaced">euca-run-instances -k test -t m1.tiny ami-zqkyh9th</literallayout>
 <para>Here's a description of the parameters used above:</para>
 <para>-t what type of image to create</para>
 <para>-k name of the key to inject in to the image at launch </para>
 <para>Optionally, you can use the -n parameter to indicate how many images of this type to
 launch. </para>


-<para>The instance will go from “launching” to “running” in a short time, and you should be able to connect via SSH using the 'ubuntu' account, with the password 'ubuntu': (replace $ipaddress with the one you got from euca-describe-instances):</para>
-
-<para><literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
-<para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root'
-via the following command:</para>
-
-<para><literallayout class="monospaced">sudo -i</literallayout></para>
-</section>
-<section>
-<?dbhtml filename="deleting-instances.html" ?>
-<title>Deleting Instances</title>
-
-<para>When you are done playing with an instance, you can tear the instance down
-using the following command (replace $instanceid with the instance IDs from above or
-look it up with euca-describe-instances):</para>
-
-<para><literallayout class="monospaced">euca-terminate-instances $instanceid</literallayout></para></section>
+<para>The instance will go from “launching” to “running” in a short time, and you should be
+able to connect via SSH using the 'ubuntu' account, with the password 'ubuntu': (replace
+$ipaddress with the one you got from euca-describe-instances):</para>
+
+<para><literallayout class="monospaced">ssh ubuntu@$ipaddress</literallayout></para>
+<para>The 'ubuntu' user is part of the sudoers group, so you can escalate to 'root' via the
+following command:</para>
+
+<para><literallayout class="monospaced">sudo -i</literallayout></para>
+</section>
+<section>
+<?dbhtml filename="deleting-instances.html" ?>
+<title>Deleting Instances</title>
+
+<para>When you are done playing with an instance, you can tear the instance down using the
+following command (replace $instanceid with the instance IDs from above or look it up
+with euca-describe-instances):</para>
+
+<para><literallayout class="monospaced">euca-terminate-instances $instanceid</literallayout></para>
+</section>
 <section>
 <?dbhtml filename="creating-custom-images.html" ?>
-<info><author>
-<orgname>CSS Corp- Open Source Services</orgname>
-</author><title>Image management</title></info>
-<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link> </para>
-<para>There are several pre-built images for OpenStack available from various sources. You can download such images and use them to get familiar with OpenStack. You can refer to <link xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html">http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link> for details on using such images.</para>
-<para>For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or configuration. This chapter will guide you through the process of creating Linux images of Debian and Redhat based distributions from scratch. We have also covered an approach to bundling Windows images.</para>
-<para>There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting hostname etc. The instance acquires the instance specific configuration from Nova-compute by connecting to a meta data interface running on 169.254.169.254.</para>
-<para>While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing the keys etc. by running a set of commands at boot time from rc.local.</para>
-<para>The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below.</para>
-
-<para>In both cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We are using the machine called &#8216;client1&#8242; as explained in the chapter on &#8220;Installation and Configuration&#8221; for this purpose.</para>
-<para>The approach explained below will give you disk images that represent a disk without any partitions. Nova-compute can resize such disks ( including resizing the file system) based on the instance type chosen at the time of launching the instance. These images cannot have &#8216;bootable&#8217; flag and hence it is mandatory to have associated kernel and ramdisk images. These kernel and ramdisk images need to be used by nova-compute at the time of launching the instance.</para>
-<para>However, we have also added a small section towards the end of the chapter about creating bootable images with multiple partitions that can be be used by nova to launch an instance without the need for kernel and ramdisk images. The caveat is that while nova-compute can re-size such disks at the time of launching the instance, the file system size is not altered and hence, for all practical purposes, such disks are not re-sizable.</para>
-<section><?dbhtml filename="creating-a-linux-image.html" ?><title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
-
-<para>The first step would be to create a raw image on Client1. This will represent the main HDD of the virtual machine, so make sure to give it as much space as you will need.</para>
+<info>
+<author>
+<orgname>CSS Corp- Open Source Services</orgname>
+</author>
+<title>Image management</title>
+</info>
+<para>by <link xlink:href="http://www.csscorp.com/">CSS Corp Open Source Services</link>
+</para>
+<para>There are several pre-built images for OpenStack available from various sources. You
+can download such images and use them to get familiar with OpenStack. You can refer to
+<link
+xlink:href="http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html"
+>http://docs.openstack.org/cactus/openstack-compute/admin/content/starting-images.html</link>
+for details on using such images.</para>
+<para>For any production deployment, you may like to have the ability to bundle custom
+images, with a custom set of applications or configuration. This chapter will guide you
+through the process of creating Linux images of Debian and Redhat based distributions
+from scratch. We have also covered an approach to bundling Windows images.</para>
+<para>There are some minor differences in the way you would bundle a Linux image, based on
+the distribution. Ubuntu makes it very easy by providing cloud-init package, which can
+be used to take care of the instance configuration at the time of launch. cloud-init
+handles importing ssh keys for password-less login, setting hostname etc. The instance
+acquires the instance specific configuration from Nova-compute by connecting to a meta
+data interface running on 169.254.169.254.</para>
+<para>While creating the image of a distro that does not have cloud-init or an equivalent
+package, you may need to take care of importing the keys etc. by running a set of
+commands at boot time from rc.local.</para>
+<para>The process used for Ubuntu and Fedora is largely the same with a few minor
+differences, which are explained below.</para>
+
+<para>In both cases, the documentation below assumes that you have a working KVM
+installation to use for creating the images. We are using the machine called
+&#8216;client1&#8242; as explained in the chapter on &#8220;Installation and
+Configuration&#8221; for this purpose.</para>
+<para>The approach explained below will give you disk images that represent a disk without
+any partitions. Nova-compute can resize such disks ( including resizing the file system)
+based on the instance type chosen at the time of launching the instance. These images
+cannot have &#8216;bootable&#8217; flag and hence it is mandatory to have associated
+kernel and ramdisk images. These kernel and ramdisk images need to be used by
+nova-compute at the time of launching the instance.</para>
+<para>However, we have also added a small section towards the end of the chapter about
+creating bootable images with multiple partitions that can be be used by nova to launch
+an instance without the need for kernel and ramdisk images. The caveat is that while
+nova-compute can re-size such disks at the time of launching the instance, the file
+system size is not altered and hence, for all practical purposes, such disks are not
+re-sizable.</para>
+<section>
+<?dbhtml filename="creating-a-linux-image.html" ?>
+<title>Creating a Linux Image &#8211; Ubuntu &amp; Fedora</title>
+
+<para>The first step would be to create a raw image on Client1. This will represent the
+main HDD of the virtual machine, so make sure to give it as much space as you will
+need.</para>
 <literallayout class="monospaced">

 kvm-img create -f raw server.img 5G
 </literallayout>

-<simplesect><title>OS Installation</title>
-<para>Download the iso file of the Linux distribution you want installed in the image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The points of difference between Ubuntu and Fedora are mentioned wherever required.</para>
+<simplesect>
+<title>OS Installation</title>
+<para>Download the iso file of the Linux distribution you want installed in the
+image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit
+server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The
+points of difference between Ubuntu and Fedora are mentioned wherever
+required.</para>
 <literallayout class="monospaced">

 wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
 </literallayout>
-<para>Boot a KVM Instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display at port 0</para>
-<literallayout class="monospaced">
-
-sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
-</literallayout>
-<para>Connect to the VM through VNC (use display number :0) and finish the installation.</para>
-<para>For Example, where 10.10.10.4 is the IP address of client1:</para>
-<literallayout class="monospaced">
-
- vncviewer 10.10.10.4 :0
-</literallayout>
-<para>During the installation of Ubuntu, create a single ext4 partition mounted on &#8216;/&#8217;. Do not create a swap partition.</para>
-<para>In the case of Fedora 14, the installation will not progress unless you create a swap partition. Please go ahead and create a swap partition.</para>
-
-<para>After finishing the installation, relaunch the VM by executing the following command.</para>
+<para>Boot a KVM Instance with the OS installer ISO in the virtual CD-ROM. This will
+start the installation process. The command below also sets up a VNC display at
+port 0</para>
+<literallayout class="monospaced">
+
+sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
+</literallayout>
+<para>Connect to the VM through VNC (use display number :0) and finish the
+installation.</para>
+<para>For Example, where 10.10.10.4 is the IP address of client1:</para>
+<literallayout class="monospaced">
+
+vncviewer 10.10.10.4 :0
+</literallayout>
+<para>During the installation of Ubuntu, create a single ext4 partition mounted on
+&#8216;/&#8217;. Do not create a swap partition.</para>
+<para>In the case of Fedora 14, the installation will not progress unless you create
+a swap partition. Please go ahead and create a swap partition.</para>
+
+<para>After finishing the installation, relaunch the VM by executing the following
+command.</para>
 <literallayout class="monospaced">
 sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :0
 </literallayout>
-<para>At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.</para>
+<para>At this point, you can add all the packages you want to have installed, update
+the installation, add users and make any configuration changes you want in your
+image.</para>
 <para>At the minimum, for Ubuntu you may run the following commands</para>
 <literallayout class="monospaced">

@@ -202,18 +268,23 @@

 chkconfig sshd on
 </literallayout>
-<para>Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance coming up as an interface other than eth0.</para>
+<para>Also remove the network persistence rules from /etc/udev/rules.d as their
+presence will result in the network interface in the instance coming up as an
+interface other than eth0.</para>
 <literallayout class="monospaced">

 sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
 </literallayout>
 <para>Shutdown the Virtual machine and proceed with the next steps.</para>
 </simplesect>
-<simplesect><title>Extracting the EXT4 partition</title>
-<para>The image that needs to be uploaded to OpenStack needs to be an ext4 filesystem image. Here are the steps to create a ext4 filesystem image from the raw image i.e server.img</para>
+<simplesect>
+<title>Extracting the EXT4 partition</title>
+<para>The image that needs to be uploaded to OpenStack needs to be an ext4
+filesystem image. Here are the steps to create a ext4 filesystem image from the
+raw image i.e server.img</para>
 <literallayout class="monospaced">

 sudo losetup -f server.img

 sudo losetup -a

@@ -223,14 +294,15 @@

 /dev/loop0: [0801]:16908388 ($filepath)
 </literallayout>
-<para>Observe the name of the loop device ( /dev/loop0 in our setup) when $filepath is the path to the mounted .raw file.</para>
+<para>Observe the name of the loop device ( /dev/loop0 in our setup) when $filepath
+is the path to the mounted .raw file.</para>
 <para>Now we need to find out the starting sector of the partition. Run:</para>
 <literallayout class="monospaced">

 sudo fdisk -cul /dev/loop0
 </literallayout>
 <para>You should see an output like this:</para>

 <literallayout class="monospaced">

 Disk /dev/loop0: 5368 MB, 5368709120 bytes
@@ -245,17 +317,21 @@

 Disk identifier: 0x00072bd4

 Device Boot Start End Blocks Id System

 /dev/loop0p1 * 2048 10483711 5240832 83 Linux
 </literallayout>
-<para>Make a note of the starting sector of the /dev/loop0p1 partition i.e the partition whose ID is 83. This number should be multiplied by 512 to obtain the correct value. In this case: 2048 x 512 = 1048576</para>
+<para>Make a note of the starting sector of the /dev/loop0p1 partition i.e the
+partition whose ID is 83. This number should be multiplied by 512 to obtain the
+correct value. In this case: 2048 x 512 = 1048576</para>
 <para>Unmount the loop0 device:</para>
 <literallayout class="monospaced">

 sudo losetup -d /dev/loop0
 </literallayout>
-<para>Now mount only the partition(/dev/loop0p1) of server.img which we had previously noted down, by adding the -o parameter with value previously calculated value
+<para>Now mount only the partition(/dev/loop0p1) of server.img which we had
+previously noted down, by adding the -o parameter with value previously
+calculated value
 <literallayout class="monospaced">

 sudo losetup -f -o 1048576 server.img
@@ -268,42 +344,53 @@

 /dev/loop0: [0801]:16908388 ($filepath) offset 1048576
 </literallayout>
-<para>Make a note of the mount point of our device(/dev/loop0 in our setup) when $filepath is the path to the mounted .raw file.</para>
+<para>Make a note of the mount point of our device(/dev/loop0 in our setup) when
+$filepath is the path to the mounted .raw file.</para>
 <para>Copy the entire partition to a new .raw file</para>
 <literallayout class="monospaced">

 sudo dd if=/dev/loop0 of=serverfinal.img
 </literallayout>
 <para>Now we have our ext4 filesystem image i.e serverfinal.img</para>

 <para>Unmount the loop0 device</para>
 <literallayout class="monospaced">

 sudo losetup -d /dev/loop0
 </literallayout>
 </simplesect>
-<simplesect><title>Tweaking /etc/fstab</title>
-<para>You will need to tweak /etc/fstab to make it suitable for a cloud instance. Nova-compute may resize the disk at the time of launch of instances based on the instance type chosen. This can make the UUID of the disk invalid. Hence we have to use File system label as the identifier for the partition instead of the UUID.</para>
+<simplesect>
+<title>Tweaking /etc/fstab</title>
+<para>You will need to tweak /etc/fstab to make it suitable for a cloud instance.
+Nova-compute may resize the disk at the time of launch of instances based on the
+instance type chosen. This can make the UUID of the disk invalid. Hence we have
+to use File system label as the identifier for the partition instead of the
+UUID.</para>
 <para>Loop mount the serverfinal.img, by running</para>
 <literallayout class="monospaced">

 sudo mount -o loop serverfinal.img /mnt
 </literallayout>
-<para>Edit /mnt/etc/fstab and modify the line for mounting root partition(which may look like the following)</para>
-
+<para>Edit /mnt/etc/fstab and modify the line for mounting root partition(which may
+look like the following)</para>
+
 <literallayout class="monospaced">

 UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
 </literallayout>
 <para>to</para>
 <literallayout class="monospaced">

 LABEL=uec-rootfs / ext4 defaults 0 0
 </literallayout>
 </simplesect>
-<simplesect><title>Fetching Metadata in Fedora</title>
-<para>Since, Fedora does not ship with cloud-init or an equivalent, you will need to take a few steps to have the instance fetch the meta data like ssh keys etc.</para>
-<para>Edit the /etc/rc.local file and add the following lines before the line “touch /var/lock/subsys/local”</para>
+<simplesect>
+<title>Fetching Metadata in Fedora</title>
+<para>Since, Fedora does not ship with cloud-init or an equivalent, you will need to
+take a few steps to have the instance fetch the meta data like ssh keys
+etc.</para>
+<para>Edit the /etc/rc.local file and add the following lines before the line “touch
+/var/lock/subsys/local”</para>
 <literallayout class="monospaced">

 depmod -a
@@ -318,10 +405,14 @@
 cat /root/.ssh/authorized_keys
 echo &quot;************************&quot;
 </literallayout>
-</simplesect></section>
-<simplesect><title>Kernel and Initrd for OpenStack</title>
-
-<para>Copy the kernel and the initrd image from /mnt/boot to user home directory. These will be used later for creating and uploading a complete virtual image to OpenStack.</para>
+</simplesect>
+</section>
+<simplesect>
+<title>Kernel and Initrd for OpenStack</title>
+
+<para>Copy the kernel and the initrd image from /mnt/boot to user home directory. These
+will be used later for creating and uploading a complete virtual image to
+OpenStack.</para>
 <literallayout class="monospaced">

 sudo cp /mnt/boot/vmlinuz-2.6.38-7-server /home/localadmin
@@ -331,348 +422,789 @@
 <para>Unmount the Loop partition</para>
 <literallayout class="monospaced">

 sudo umount /mnt
 </literallayout>
 <para>Change the filesystem label of serverfinal.img to &#8216;uec-rootfs&#8217;</para>
 <literallayout class="monospaced">

 sudo tune2fs -L uec-rootfs serverfinal.img
 </literallayout>
-<para>Now, we have all the components of the image ready to be uploaded to OpenStack imaging server.</para>
+<para>Now, we have all the components of the image ready to be uploaded to OpenStack
+imaging server.</para>
 </simplesect>
-<simplesect><title>Registering with OpenStack</title>
-<para>The last step would be to upload the images to Openstack Imaging Server glance. The files that need to be uploaded for the above sample setup of Ubuntu are: vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, serverfinal.img</para>
+<simplesect>
+<title>Registering with OpenStack</title>
+<para>The last step would be to upload the images to Openstack Imaging Server glance.
+The files that need to be uploaded for the above sample setup of Ubuntu are:
+vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, serverfinal.img</para>
 <para>Run the following command</para>
 <literallayout class="monospaced">

 uec-publish-image -t image --kernel-file vmlinuz-2.6.38-7-server --ramdisk-file initrd.img-2.6.38-7-server amd64 serverfinal.img bucket1
 </literallayout>
-<para>For Fedora, the process will be similar. Make sure that you use the right kernel and initrd files extracted above.</para>
-<para>uec-publish-image, like several other commands from euca2ools, returns the prompt back immediately. However, the upload process takes some time and the images will be usable only after the process is complete. You can keep checking the status using the command &#8216;euca-describe-images&#8217; as mentioned below.</para>
+<para>For Fedora, the process will be similar. Make sure that you use the right kernel
+and initrd files extracted above.</para>
+<para>uec-publish-image, like several other commands from euca2ools, returns the prompt
+back immediately. However, the upload process takes some time and the images will be
+usable only after the process is complete. You can keep checking the status using
+the command &#8216;euca-describe-images&#8217; as mentioned below.</para>
 </simplesect>
-<simplesect><title>Bootable Images</title>
-<para>You can register bootable disk images without associating kernel and ramdisk images. When you do not want the flexibility of using the same disk image with different kernel/ramdisk images, you can go for bootable disk images. This greatly simplifies the process of bundling and registering the images. However, the caveats mentioned in the introduction to this chapter apply. Please note that the instructions below use server.img and you can skip all the cumbersome steps related to extracting the single ext4 partition.</para>
+<simplesect>
+<title>Bootable Images</title>
+<para>You can register bootable disk images without associating kernel and ramdisk
+images. When you do not want the flexibility of using the same disk image with
+different kernel/ramdisk images, you can go for bootable disk images. This greatly
+simplifies the process of bundling and registering the images. However, the caveats
+mentioned in the introduction to this chapter apply. Please note that the
+instructions below use server.img and you can skip all the cumbersome steps related
+to extracting the single ext4 partition.</para>
 <literallayout class="monospaced">
 euca-bundle-image -i server.img
 euca-upload-bundle -b mybucket -m /tmp/server.img.manifest.xml
 euca-register mybucket/server.img.manifest.xml
 </literallayout>
 </simplesect>
-<simplesect><title>Image Listing</title>
-<para>The status of the images that have been uploaded can be viewed by using euca-describe-images command. The output should like this:</para>
-<literallayout class="monospaced">
+<simplesect>
+<title>Image Listing</title>
+<para>The status of the images that have been uploaded can be viewed by using
+euca-describe-images command. The output should like this:</para>
+<literallayout class="monospaced">

 localadmin@client1:~$ euca-describe-images

 IMAGE ari-7bfac859 bucket1/initrd.img-2.6.38-7-server.manifest.xml css available private x86_64 ramdisk

 IMAGE ami-5e17eb9d bucket1/serverfinal.img.manifest.xml css available private x86_64 machine aki-3d0aeb08 ari-7bfac859

 IMAGE aki-3d0aeb08 bucket1/vmlinuz-2.6.38-7-server.manifest.xml css available private x86_64 kernel

 localadmin@client1:~$
 </literallayout>
-</simplesect></section>
-<section><?dbhtml filename="creating-a-windows-image.html" ?><title>Creating a Windows Image</title>
-<para>The first step would be to create a raw image on Client1, this will represent the main HDD of the virtual machine, so make sure to give it as much space as you will need.</para>
-<literallayout class="monospaced">
+</simplesect>
+</section>
+<section>
+<?dbhtml filename="creating-a-windows-image.html" ?>
+<title>Creating a Windows Image</title>
+<para>The first step would be to create a raw image on Client1, this will represent the main
+HDD of the virtual machine, so make sure to give it as much space as you will
+need.</para>
+<literallayout class="monospaced">
 kvm-img create -f raw windowsserver.img 20G
 </literallayout>
-<para>OpenStack presents the disk using aVIRTIO interface while launching the instance. Hence the OS needs to have drivers for VIRTIO. By default, the Windows Server 2008 ISO does not have the drivers for VIRTIO. Sso download a virtual floppy drive containing VIRTIO drivers from the following location</para>
-<para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/">http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
-<para>and attach it during the installation</para>
-<para>Start the installation by running
-<literallayout class="monospaced">
+<para>OpenStack presents the disk using aVIRTIO interface while launching the instance.
+Hence the OS needs to have drivers for VIRTIO. By default, the Windows Server 2008 ISO
+does not have the drivers for VIRTIO. Sso download a virtual floppy drive containing
+VIRTIO drivers from the following location</para>
+<para><link xlink:href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/"
+>http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/</link></para>
+<para>and attach it during the installation</para>
+<para>Start the installation by running</para>
+<literallayout class="monospaced">
 sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :0

 </literallayout>
-<para>When the installation prompts you to choose a hard disk device you won’t see any devices available. Click on “Load drivers” at the bottom left and load the drivers from A:\i386\Win2008</para>
-<para>After the Installation is over, boot into it once and install any additional applications you need to install and make any configuration changes you need to make. Also ensure that RDP is enabled as that would be the only way you can connect to a running instance of Windows. Windows firewall needs to be configured to allow incoming ICMP and RDP connections.</para>
-<para>For OpenStack to allow incoming RDP Connections, use euca-authorize command to open up port 3389 as described in the chapter on &#8220;Security&#8221;.</para>
-<para>Shut-down the VM and upload the image to OpenStack</para>
-<literallayout class="monospaced">
+<para>When the installation prompts you to choose a hard disk device you won’t see any
+devices available. Click on “Load drivers” at the bottom left and load the drivers from
+A:\i386\Win2008</para>
+<para>After the Installation is over, boot into it once and install any additional
+applications you need to install and make any configuration changes you need to make.
+Also ensure that RDP is enabled as that would be the only way you can connect to a
+running instance of Windows. Windows firewall needs to be configured to allow incoming
+ICMP and RDP connections.</para>
+<para>For OpenStack to allow incoming RDP Connections, use euca-authorize command to open up
+port 3389 as described in the chapter on &#8220;Security&#8221;.</para>
+<para>Shut-down the VM and upload the image to OpenStack</para>
+<literallayout class="monospaced">
 euca-bundle-image -i windowsserver.img
 euca-upload-bundle -b mybucket -m /tmp/windowsserver.img.manifest.xml
 euca-register mybucket/windowsserver.img.manifest.xml
 </literallayout>
 </section>
 <section>
 <?dbhtml filename="understanding-the-compute-service-architecture.html" ?>
 <title>Understanding the Compute Service Architecture</title>
-<para>These basic categories describe the service architecture and what's going on within the cloud controller.</para>
-<simplesect><title>API Server</title>
-
-<para>At the heart of the cloud framework is an API Server. This API Server makes command and control of the hypervisor, storage, and networking programmatically available to users in realization of the definition of cloud computing.
-</para>
-<para>The API endpoints are basic http web services which handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in.
-</para> </simplesect>
-<simplesect><title>Message Queue</title>
-<para>
-A messaging queue brokers the interaction between compute nodes (processing), volumes (block storage), the networking controllers (software which controls network infrastructure), API endpoints, the scheduler (determines which physical hardware to allocate to a virtual resource), and similar components. Communication to and from the cloud controller is by HTTP requests through multiple API endpoints.</para>
-
-<para> A typical message passing event begins with the API server receiving a request from a user. The API server authenticates the user and ensures that the user is permitted to issue the subject command. Availability of objects implicated in the request is evaluated and, if available, the request is routed to the queuing engine for the relevant workers. Workers continually listen to the queue based on their role, and occasionally their type hostname. When such listening produces a work request, the worker takes assignment of the task and begins its execution. Upon completion, a response is dispatched to the queue which is received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary throughout the process.
-</para>
+<para>These basic categories describe the service architecture and what's going on within
+the cloud controller.</para>
+<simplesect>
+<title>API Server</title>
+
+<para>At the heart of the cloud framework is an API Server. This API Server makes
+command and control of the hypervisor, storage, and networking programmatically
+available to users in realization of the definition of cloud computing. </para>
+<para>The API endpoints are basic http web services which handle authentication,
+authorization, and basic command and control functions using various API interfaces
+under the Amazon, Rackspace, and related models. This enables API compatibility with
+multiple existing tool sets created for interaction with offerings from other
+vendors. This broad compatibility prevents vendor lock-in. </para>
 </simplesect>
-<simplesect><title>Compute Worker</title>
-
-<para>Compute workers manage computing instances on host machines. Through the API, commands are dispatched to compute workers to:</para>
-
-<itemizedlist>
-<listitem><para>Run instances</para></listitem>
-<listitem><para>Terminate instances</para></listitem>
-<listitem><para>Reboot instances</para></listitem>
-<listitem><para>Attach volumes</para></listitem>
-<listitem><para>Detach volumes</para></listitem>
-<listitem><para>Get console output</para></listitem></itemizedlist>
-</simplesect>
-<simplesect><title>Network Controller</title>
-
-<para>The Network Controller manages the networking resources on host machines. The API server dispatches commands through the message queue, which are subsequently processed by Network Controllers. Specific operations include:</para>
-
-<itemizedlist><listitem><para>Allocate fixed IP addresses</para></listitem>
-<listitem><para>Configuring VLANs for projects</para></listitem>
-<listitem><para>Configuring networks for compute nodes</para></listitem></itemizedlist>
-</simplesect>
-<simplesect><title>Volume Workers</title>
-
-<para>Volume Workers interact with iSCSI storage to manage LVM-based instance volumes. Specific functions include:
-</para>
-<itemizedlist>
-<listitem><para>Create volumes</para></listitem>
-<listitem><para>Delete volumes</para></listitem>
-<listitem><para>Establish Compute volumes</para></listitem></itemizedlist>
-
-<para>Volumes may easily be transferred between instances, but may be attached to only a single instance at a time.</para></simplesect></section>
+<simplesect>
+<title>Message Queue</title>
+<para> A messaging queue brokers the interaction between compute nodes (processing),
+volumes (block storage), the networking controllers (software which controls network
+infrastructure), API endpoints, the scheduler (determines which physical hardware to
+allocate to a virtual resource), and similar components. Communication to and from
+the cloud controller is by HTTP requests through multiple API endpoints.</para>
+
+<para> A typical message passing event begins with the API server receiving a request
+from a user. The API server authenticates the user and ensures that the user is
+permitted to issue the subject command. Availability of objects implicated in the
+request is evaluated and, if available, the request is routed to the queuing engine
+for the relevant workers. Workers continually listen to the queue based on their
+role, and occasionally their type hostname. When such listening produces a work
+request, the worker takes assignment of the task and begins its execution. Upon
+completion, a response is dispatched to the queue which is received by the API
+server and relayed to the originating user. Database entries are queried, added, or
+removed as necessary throughout the process. </para>
+</simplesect>
+<simplesect>
+<title>Compute Worker</title>
+
+<para>Compute workers manage computing instances on host machines. Through the API,
+commands are dispatched to compute workers to:</para>
+
+<itemizedlist>
+<listitem>
+<para>Run instances</para>
+</listitem>
+<listitem>
+<para>Terminate instances</para>
+</listitem>
+<listitem>
+<para>Reboot instances</para>
+</listitem>
+<listitem>
+<para>Attach volumes</para>
+</listitem>
+<listitem>
+<para>Detach volumes</para>
+</listitem>
+<listitem>
+<para>Get console output</para>
+</listitem>
+</itemizedlist>
+</simplesect>
+<simplesect>
+<title>Network Controller</title>
+
+<para>The Network Controller manages the networking resources on host machines. The API
+server dispatches commands through the message queue, which are subsequently
+processed by Network Controllers. Specific operations include:</para>
+
+<itemizedlist>
+<listitem>
+<para>Allocate fixed IP addresses</para>
+</listitem>
+<listitem>
+<para>Configuring VLANs for projects</para>
+</listitem>
+<listitem>
+<para>Configuring networks for compute nodes</para>
+</listitem>
+</itemizedlist>
+</simplesect>
+<simplesect>
+<title>Volume Workers</title>
+
+<para>Volume Workers interact with iSCSI storage to manage LVM-based instance volumes.
+Specific functions include: </para>
+<itemizedlist>
+<listitem>
+<para>Create volumes</para>
+</listitem>
+<listitem>
+<para>Delete volumes</para>
+</listitem>
+<listitem>
+<para>Establish Compute volumes</para>
+</listitem>
+</itemizedlist>
+
+<para>Volumes may easily be transferred between instances, but may be attached to only a
+single instance at a time.</para>
+</simplesect>
+</section>
 <section>
 <?dbhtml filename="managing-the-cloud.html" ?>
-<title>Managing the Cloud</title><para>There are two main tools that a system administrator will find useful to manage their cloud;
-the nova-manage command or the Euca2ools command line commands. </para>
-<para>With the Diablo release, the nova-manage command has been deprecated and you must
-specify if you want to use it by using the --use_deprecated_auth flag in nova.conf. You
-must also use the modified middleware stack that is commented out in the default
-paste.ini file.</para>
-<para>The nova-manage command may only be run by users with admin privileges. Commands for
+<title>Managing the Cloud</title>
+<para>There are two main tools that a system administrator will find useful to manage their
+cloud; the nova-manage command or the Euca2ools command line commands.</para>
+<para> The nova-manage command may only be run by users with admin privileges. Commands for
 euca2ools can be used by all users, though specific commands may be restricted by Role
-Based Access Control in the deprecated nova auth system. </para>
-<simplesect><title>Using the nova-manage command</title>
-<para>The nova-manage command may be used to perform many essential functions for
+Based Access Control. </para>
+<simplesect>
+<title>Using the nova-manage command</title>
+<para>The nova-manage command is used to perform many essential functions for
 administration and ongoing maintenance of nova, such as user creation, vpn
 management, and much more.</para>

 <para>The standard pattern for executing a nova-manage command is: </para>
 <literallayout class="monospaced">nova-manage category command [args]</literallayout>

 <para>For example, to obtain a list of all projects: nova-manage project list</para>

-<para>Run without arguments to see a list of available command categories: nova-manage</para>
-
-<para>Command categories are: account, agent, config, db, fixed, flavor, floating, host,
-instance_type, image, network, project, role, service, shell, user, version, vm,
-volume, and vpn. </para>
-<para>You can also run with a category argument such as user to see a list of all commands in that category: nova-manage user</para>
-</simplesect></section>
+<para>Run without arguments to see a list of available command categories:
+nova-manage</para>
+
+<para>Command categories are: user, project, role, shell, vpn, and floating. </para>
+<para>You can also run with a category argument such as user to see a list of all
+commands in that category: nova-manage user</para>
+</simplesect>
+</section>
 <section>
 <?dbhtml filename="managing-compute-users.html" ?>
 <title>Managing Compute Users</title>

 <para>Access to the Euca2ools (ec2) API is controlled by an access and secret key. The
-user’s access key needs to be included in the request, and the request must be
-signed with the secret key. Upon receipt of API requests, Compute will verify the
-signature and execute commands on behalf of the user. </para>
+user’s access key needs to be included in the request, and the request must be signed
+with the secret key. Upon receipt of API requests, Compute will verify the signature and
+execute commands on behalf of the user. </para>
 <para>In order to begin using nova, you will need to create a user. This can be easily
 accomplished using the user create or user admin commands in nova-manage. user create
 will create a regular user, whereas user admin will create an admin user. The syntax of
-the command is nova-manage user create username [access] [secretword]. For example: </para>
+the command is nova-manage user create username [access] [secret]. For example: </para>
 <literallayout class="monospaced">nova-manage user create john my-access-key a-super-secret-key</literallayout>
 <para>If you do not specify an access or secret key, a random uuid will be created
 automatically.</para>

-<simplesect><title>Credentials</title>
-
-<para>Nova can generate a handy set of credentials for a user. These credentials include a CA for bundling images and a file for setting environment variables to be used by euca2ools. If you don’t need to bundle images, just the environment script is required. You can export one with the project environment command. The syntax of the command is nova-manage project environment project_id user_id [filename]. If you don’t specify a filename, it will be exported as novarc. After generating the file, you can simply source it in bash to add the variables to your environment:</para>
-
-<literallayout class="monospaced">
-nova-manage project environment john_project john
-. novarc</literallayout>
-
-<para>If you do need to bundle images, you will need to get all of the credentials using project zipfile. Note that zipfile will give you an error message if networks haven’t been created yet. Otherwise zipfile has the same syntax as environment, only the default file name is nova.zip. Example usage:
+<simplesect>
+<title>Credentials</title>
+
+<para>Nova can generate a handy set of credentials for a user. These credentials include
+a CA for bundling images and a file for setting environment variables to be used by
+euca2ools. If you don’t need to bundle images, just the environment script is
+required. You can export one with the project environment command. The syntax of the
+command is nova-manage project environment project_id user_id [filename]. If you
+don’t specify a filename, it will be exported as novarc. After generating the file,
499 </para>678 you can simply source it in bash to add the variables to your environment:</para>
500 <literallayout class="monospaced">679
501 nova-manage project zipfile john_project john680 <literallayout class="monospaced">
502 unzip nova.zip681nova-manage project environment john_project john
503 . novarc682. novarc</literallayout>
504 </literallayout></simplesect>683
505 <simplesect><title>Role Based Access Control</title>684 <para>If you do need to bundle images, you will need to get all of the credentials using
506 685 project zipfile. Note that zipfile will give you an error message if networks
507 <para>Roles control the API actions that a user is allowed to perform. For example, a user686 haven’t been created yet. Otherwise zipfile has the same syntax as environment, only
508 cannot allocate a public ip without the netadmin role. It is important to remember687 the default file name is nova.zip. Example usage: </para>
509 that a users de facto permissions in a project is the intersection of user (global)688 <literallayout class="monospaced">
510 roles and project (local) roles. So for john to have netadmin permissions in his689nova-manage project zipfile john_project john
511 project, he needs to separate roles specified. You can add roles with role add. The690unzip nova.zip
512 syntax is nova-manage role add user_id role [project_id]. Let’s give john the691. novarc
513 netadmin role for his project:</para>692</literallayout>
514 693 </simplesect>
515 <literallayout class="monospaced"> nova-manage role add john netadmin694 <simplesect>
516 nova-manage role add john netadmin john_project</literallayout>695 <title>Role Based Access Control</title>
517 696
518 <para>Role-based access control (RBAC) is an approach to restricting system access to authorized users based on an individual's role within an organization. Various employee functions require certain levels of system access in order to be successful. These functions are mapped to defined roles and individuals are categorized accordingly. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of assigning appropriate roles to the user. This simplifies common operations, such as adding a user, or changing a user’s department.697 <para>Roles control the API actions that a user is allowed to perform. For example, a
519 </para>698 user cannot allocate a public ip without the netadmin role. It is important to
520 <para>Nova’s rights management system employs the RBAC model and currently supports the following roles:</para>699 remember that a user's de facto permissions in a project are the intersection of user
521 700 (global) roles and project (local) roles. So for john to have netadmin permissions
522 <itemizedlist>701 in his project, he needs both roles specified. You can add roles with role
523 <listitem><para>Cloud Administrator. (cloudadmin) Users of this class enjoy complete system access.</para></listitem>702 add. The syntax is nova-manage role add user_id role [project_id]. Let’s give john
524 <listitem><para>IT Security. (itsec) This role is limited to IT security personnel. It permits role holders to quarantine instances.</para></listitem>703 the netadmin role for his project:</para>
525 <listitem><para>System Administrator. (sysadmin) The default for project owners, this role affords users the ability to add other users to a project, interact with project images, and launch and terminate instances.</para></listitem>704 the netadmin role for his project:</para>
526 <listitem><para>Network Administrator. (netadmin) Users with this role are permitted to allocate and assign publicly accessible IP addresses as well as create and modify firewall rules.</para></listitem>705 <literallayout class="monospaced"> nova-manage role add john netadmin
527 <listitem><para>Developer. This is a general purpose role that is assigned to users by default.</para></listitem>706nova-manage role add john netadmin john_project</literallayout>
528 <listitem><para>Project Manager. (projectmanager) This is a role that is assigned upon project creation and can't be added or removed, but this role can do anything a sysadmin can do.</para></listitem></itemizedlist>707
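                <para>A short sketch of checking and revoking an assignment; this assumes the role
                    category's has and remove sub-commands follow the same syntax as role
                    add:</para>
                <literallayout class="monospaced"># does john hold netadmin within john_project?
nova-manage role has john netadmin john_project
# revoke the project-level role again
nova-manage role remove john netadmin john_project</literallayout>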
529 708 <para>Role-based access control (RBAC) is an approach to restricting system access to
530 <para>RBAC management is exposed through the dashboard for simplified user management.</para></simplesect></section>709 authorized users based on an individual's role within an organization. Various
710 employee functions require certain levels of system access in order to be
711 successful. These functions are mapped to defined roles and individuals are
712 categorized accordingly. Since users are not assigned permissions directly, but only
713 acquire them through their role (or roles), management of individual user rights
714 becomes a matter of assigning appropriate roles to the user. This simplifies common
715 operations, such as adding a user, or changing a user’s department. </para>
716 <para>Nova’s rights management system employs the RBAC model and currently supports the
717 following roles:</para>
718 <itemizedlist>
719 <listitem>
720 <para>Cloud Administrator. (cloudadmin) Users of this class enjoy complete
721 system access.</para>
722 </listitem>
723 <listitem>
724 <para>IT Security. (itsec) This role is limited to IT security personnel. It
725 permits role holders to quarantine instances.</para>
726 </listitem>
727 <listitem>
728 <para>System Administrator. (sysadmin) The default for project owners, this role
729 affords users the ability to add other users to a project, interact with
730 project images, and launch and terminate instances.</para>
731 </listitem>
732 <listitem>
733 <para>Network Administrator. (netadmin) Users with this role are permitted to
734 allocate and assign publicly accessible IP addresses as well as create and
735 modify firewall rules.</para>
736 </listitem>
737 <listitem>
738 <para>Developer. This is a general purpose role that is assigned to users by
739 default.</para>
740 </listitem>
741 <listitem>
742 <para>Project Manager. (projectmanager) This is a role that is assigned upon
743 project creation and can't be added or removed, but this role can do
744 anything a sysadmin can do.</para>
745 </listitem>
746 </itemizedlist>
747
748 <para>RBAC management is exposed through the dashboard for simplified user
749 management.</para>
750 </simplesect>
751 </section>
531 <section>752 <section>
532 <?dbhtml filename="managing-volumes.html" ?>753 <?dbhtml filename="managing-volumes.html" ?>
533 <title>Managing Volumes</title><para>Nova-volume is the service that allows you to give extra block level storage to your OpenStack754 <title>Managing Volumes</title>
534 Compute instances. You may recognize this as a similar offering that Amazon EC2 offers,755 <para>Nova-volume is the service that allows you to give extra block level storage to your
535 Elastic Block Storage (EBS). However, nova-volume is not the same implementation that756 OpenStack Compute instances. You may recognize this as a similar offering that Amazon
536 EC2 uses today. Nova-volume is an iSCSI solution that employs the use of Logical Volume757 EC2 offers, Elastic Block Storage (EBS). However, nova-volume is not the same
537 Manager (LVM) for Linux. Note that a volume may only be attached to one instance at a758 implementation that EC2 uses today. Nova-volume is an iSCSI solution that employs the
538 time. This is not a ‘shared storage’ solution like a SAN which multiple servers can759 use of Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached
539 attach to.</para>760 to one instance at a time. This is not a ‘shared storage’ solution like a SAN of NFS on
761 which multiple servers can attach to.</para>
762 <para> Before going any further, let's present the nova-volume implementation in
763 OpenStack: </para>
764 <para>The nova-volume service exposes LVM volumes to the compute nodes which run
765 instances over iSCSI. Thus, there are two components involved: </para>
766 <para>- lvm2, which works with a VG called "nova-volumes" (Refer to
767 http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) for further details) </para>
768 <para>- open-iscsi, the iSCSI implementation which manages iSCSI sessions on the compute
769 nodes. </para>
770 <para>Here is what happens from the volume creation to its attachment (we use the
771 euca2ools here, but the same explanation goes for the API; a command-level sketch follows the list): </para>
772 <orderedlist>
773 <listitem>
774 <para>The volume is created via euca-create-volume, which creates an LV in the VG
775 "nova-volumes" </para>
776 </listitem>
777 <listitem>
778 <para>The volume is attached to an instance via euca-attach-volume, which creates a
779 unique iSCSI IQN that will be exposed to the compute node. </para>
780 </listitem>
781 <listitem>
782 <para>The compute node which runs the concerned instance now has an active iSCSI
783 session and a new local storage device (usually a /dev/sdX disk) </para>
784 </listitem>
785 <listitem>
786 <para>libvirt uses that local storage as storage for the instance; the instance
787 gets a new disk (usually a /dev/vdX disk) </para>
788 </listitem>
789 </orderedlist>
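        <para>Expressed as commands, that flow looks roughly like the following sketch (the
            size, zone, instance and volume IDs are the ones reused later in this
            walkthrough):</para>
        <literallayout class="monospaced"># create a 7 GB volume in the nova zone; an LV appears in the nova-volumes VG
euca-create-volume -s 7 -z nova
# wait until euca-describe-volumes reports the volume as available
euca-describe-volumes
# attach it; the compute node opens an iSCSI session and the guest sees /dev/vdb
euca-attach-volume -i i-00000008 -d /dev/vdb vol-00000009</literallayout>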
540 <para>For this particular walkthrough, there is one cloud controller running nova-api,790 <para>For this particular walkthrough, there is one cloud controller running nova-api,
541 nova-compute, nova-scheduler, nova-objectstore, and nova-network. There are two791 nova-compute, nova-scheduler, nova-objectstore, nova-network and nova-volume. There are
542 additional compute nodes running both nova-compute and nova-volume. The walkthrough uses792 two additional compute nodes running nova-compute. The walkthrough uses a custom
543 a custom partitioning scheme that carves out 60GB of space and labels it as LVM. The793 partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a
544 network is a /28 .80-.95, and FlatManager is the NetworkManager setting for OpenStack794 /28 .80-.95, and FlatManager is the NetworkManager setting for OpenStack Compute (Nova). </para>
545 Compute (Nova). </para>795 <para>Please note that the network mode doesn't interfere with the way nova-volume works,
796 but it is essential for nova-volume to work that the network mode you are currently using is
797 set up. Please refer to Section 7, "Networking," for more details.</para>
546 <para>To set up Compute to use volumes, ensure that nova-volume is installed along with798 <para>To set up Compute to use volumes, ensure that nova-volume is installed along with
547 lvm2. </para>799 lvm2. The guide is split into four parts: </para>
548 <para>800 <para>
549 <literallayout class="monospaced">apt-get install lvm2 nova-volume</literallayout>801 <itemizedlist>
802 <listitem>
803 <para>A- Installing nova-volumes on the cloud controller.</para>
804 </listitem>
805 <listitem>
806 <para>B- Configuring nova-volumes on the compute nodes.</para>
807
808 </listitem>
809 <listitem>
810 <para>C- Troubleshoot your nova-volumes installation.</para>
811 </listitem>
812 <listitem>
813 <para>D- Advanced tips: Disaster Recovery Process, Backing up your nova-volumes,
814 Browsing your nova-volumes from the cloud controller </para>
815 </listitem>
816 </itemizedlist>
550 </para>817 </para>
551 <simplesect><title>Configure Volumes for use with nova-volume</title>818
552 <para>If you do not already have LVM volumes on hand, but have free drive space, you819 <simplesect>
553 will need to create an LVM volume before proceeding.</para>820 <title>A- Install nova-volumes on the cloud controller.</title>
554 <para>Here is a short rundown of how you would create an LVM volume from free drive space on your system.</para>821 <para> This is simply done by installing the two components on the cloud controller: <literallayout class="monospaced"><code>apt-get install lvm2 nova-volume</code></literallayout><itemizedlist>
555 <para>Start off by issuing an fdisk command to your drive with the free space:</para>822 <listitem>
556 <para>823 <para>
557 <literallayout class="monospaced">fdisk /dev/sda</literallayout></para>824 <emphasis role="bold">Configure Volumes for use with
558 <para>Once in fdisk, perform the following commands:</para>825 nova-volumes</emphasis></para>
559 <orderedlist>826 <para> If you do not already have LVM volumes on hand, but have free drive
560 <listitem><para>Press ‘<code>n'</code> to create a new disk partition,</para></listitem>827 space, you will need to create an LVM volume before proceeding. Here is a
561 <listitem><para>Press <code>'p'</code> to create a primary disk partition,</para></listitem>828 short rundown of how you would create an LVM volume from free drive space on
562 <listitem><para>Press <code>'1'</code> to denote it as 1st disk partition,</para></listitem>829 your system. Start off by issuing an fdisk command to your drive with
563 830 the free space:
564 <listitem><para>Either press ENTER twice to accept the default of 1st and last cylinder – to convert the remainder of hard disk to a single disk partition831 <literallayout class="monospaced"><code>fdisk /dev/sda</code></literallayout>
565 -OR-832 Once in fdisk, perform the following commands: <orderedlist>
566 press ENTER once to accept the default of the 1st, and then choose how big you want the partition to be by specifying +size{K,M,G} e.g. +5G or +6700M.</para></listitem>833 <listitem>
567 <listitem><para>Press <code>'t', then</code> select the new partition you made.</para></listitem>834 <para>Press ‘<code>n'</code> to create a new disk
568 835 partition,</para>
569 <listitem><para>Press <code>'8e'</code> to change your new partition type to 8e, i.e. Linux LVM partition type.</para></listitem>836 </listitem>
570 <listitem><para>Press ‘<code>p'</code> to display the hard disk partition setup. Please take note that the first partition is denoted as /dev/sda1 in Linux.</para></listitem>837 <listitem>
571 <listitem><para>Press <code>'w'</code> to write the partition table and exit fdisk upon completion.</para></listitem>838 <para>Press <code>'p'</code> to create a primary disk
572 </orderedlist>839 partition,</para>
573 <para>Refresh your partition table to ensure your new partition shows up, and verify840 </listitem>
574 with fdisk.</para>841 <listitem>
575 842 <para>Press <code>'1'</code> to denote it as 1st disk
576 <para><literallayout class="monospaced">partprobe843 partition,</para>
577fdisk -l (you should see your new partition in this listing)</literallayout></para>844 </listitem>
578 <para>Here is how you can set up partitioning during the OS install to prepare for this845 <listitem>
579 nova-volume configuration:</para>846 <para>Either press ENTER twice to accept the default of 1st and
580 <para>root@osdemo03:~# fdisk -l</para>847 last cylinder – to convert the remainder of hard disk to a
581 <para><literallayout class="monospaced">848 single disk partition -OR- press ENTER once to accept the
582 Device Boot&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Start&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; End&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Blocks&nbsp;&nbsp; Id&nbsp; System849 default of the 1st, and then choose how big you want the
583 850 partition to be by specifying +size{K,M,G} e.g. +5G or
584 /dev/sda1&nbsp;&nbsp;&nbsp; * &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12158&nbsp;&nbsp;&nbsp; 97280&nbsp;&nbsp; 83&nbsp; Linux851 +6700M.</para>
585 /dev/sda2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12158&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24316&nbsp;&nbsp;&nbsp; 97655808&nbsp;&nbsp; 83&nbsp; Linux852 </listitem>
586 853 <listitem>
587 /dev/sda3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24316&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp; 97654784&nbsp;&nbsp;&nbsp;&nbsp; 83&nbsp; Linux854 <para>Press <code>'t', then</code> select the new partition you
588 /dev/sda4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 42443&nbsp;&nbsp; 145507329&nbsp;&nbsp;&nbsp; 5&nbsp; Extended855 made.</para>
589 856 </listitem>
590 /dev/sda5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 24328&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 32352&nbsp;&nbsp;&nbsp; 64452608&nbsp;&nbsp; 8e&nbsp; Linux LVM857 <listitem>
591 /dev/sda6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 32352&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 40497&nbsp;&nbsp;&nbsp; 65428480&nbsp;&nbsp; 8e&nbsp; Linux LVM858 <para>Press <code>'8e'</code> to change your new partition type to 8e,
592 859 i.e. Linux LVM partition type.</para>
593 /dev/sda7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 40498&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 42443&nbsp;&nbsp;&nbsp; 15624192&nbsp;&nbsp; 82&nbsp; Linux swap / Solaris860 </listitem>
594</literallayout></para>861 <listitem>
595 <para>Now that you have identified a partition has been labeled for LVM use, perform the862 <para>Press ‘<code>p'</code> to display the hard disk partition
596 following steps to configure LVM and prepare it as nova-volume. You must name your863 setup. Please take note that the first partition is denoted
597 volume group ‘nova-volumes’ or things will not work as expected:</para>864 as /dev/sda1 in Linux.</para>
598 <literallayout class="monospaced">865 </listitem>
599 pvcreate /dev/sda5 866 <listitem>
600 vgcreate nova-volumes /dev/sda5 </literallayout></simplesect><simplesect><title>Configure iscsitarget</title> <para>If you have a multinode installation of Compute, you may want nova-volume on the same node as nova-compute, although it is not required.</para><para>By default, when the ‘iscsitarget’ package is installed, it is not started, nor enabled by867 <para>Press <code>'w'</code> to write the partition table and
601 default. You need to perform the following two steps to configure the iscsitarget868 exit fdisk upon completion.</para>
602 service in order for nova-volume to work.</para>869 <para>Refresh your partition table to ensure your new partition
603 <para>870 shows up, and verify with fdisk. We then inform the OS about
604 <literallayout class="monospaced">871 the partition table update: </para>
605 sed -i ‘s/false/true/g’ /etc/default/iscsitarget872 <para>
606 service iscsitarget start</literallayout></para></simplesect><simplesect><title>Configure nova.conf Flag File</title>873 <literallayout class="monospaced"><code>partprobe</code>
607 <para>Edit your nova.conf to include a new flag, --iscsi_ip_prefix=192.168. The value of this flag needs to be set to something that will differentiate the IP addresses, to ensure it uses IP addresses that are route-able, such as a prefix on the private network. </para></simplesect>874
608 <simplesect><title>Start nova-volume and Create Volumes</title>875Again:
609 876<code>fdisk -l (you should see your new partition in this listing)</code></literallayout>
610 <para>You are now ready to fire up nova-volume, and start creating volumes!</para>877 </para>
611 878 <para>Here is how you can set up partitioning during the OS
612 <para><literallayout class="monospaced">service nova-volume start</literallayout></para>879 install to prepare for this nova-volume
613 880 configuration:</para>
614 <para>Once the service is started, log in to your controller and ensure you’ve properly sourced your ‘novarc’ file. You will use the following commands to interface with nova-volume:</para>881 <para>root@osdemo03:~# fdisk -l </para>
615 882 <para>
616<para><literallayout class="monospaced"> euca-create-volume883 <programlisting>
617 euca-attach-volume884Device Boot Start End Blocks Id System
618 euca-detach-volume885
619 euca-delete-volume</literallayout></para>886/dev/sda1 * 1 12158 97280 83 Linux
620 887/dev/sda2 12158 24316 97655808 83 Linux
621 <para>One of the first things you should do is make sure that nova-volume is checking in as expected.&nbsp; You can do so using nova-manage:</para>888
622 <para><literallayout class="monospaced">nova-manage service list</literallayout></para>889/dev/sda3 24316 24328 97654784 83 Linux
623 <para>If you see a ‘nova-volume’ in there, you are looking good.&nbsp; Now create a new volume:</para>890/dev/sda4 24328 42443 145507329 5 Extended
624 <para><literallayout class="monospaced">euca-create-volume -s 7 -z nova&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (-s refers to the size of the volume in GB, and -z is the default zone (usually nova))</literallayout></para>891
625 892<emphasis role="bold">/dev/sda5 24328 32352 64452608 8e Linux LVM</emphasis>
626 <para>You should get some output similar to this:</para>893<emphasis role="bold">/dev/sda6 32352 40497 65428480 8e Linux LVM</emphasis>
627 <para><literallayout class="monospaced">VOLUME&nbsp; vol-0000000b&nbsp;&nbsp;&nbsp; 7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; creating (wayne, None, None, None)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2011-02-11 06:58:46.941818</literallayout></para>894
628 <para>You can view that status of the volumes creation using ‘euca-describe-volumes’.&nbsp; Once that status is ‘available,’ it is ready to be attached to an instance:</para>895/dev/sda7 40498 42443 15624192 82 Linux swap / Solaris
629 <para><literallayout class="monospaced">euca-attach-volume vol-00000009 -i i-00000008 -d /dev/vdb</literallayout></para>896</programlisting>
630 897 </para>
631 <para>If you do not get any errors, it is time to log in to instance ‘i-00000008’ and see if the new space is there.&nbsp; Here is the output from ‘fdisk -l’ from i-00000008:</para>898 <para>Now that you have identified a partition that has been labeled
632 <para><literallayout class="monospaced">Disk /dev/vda: 10.7 GB, 10737418240 bytes899 for LVM use, perform the following steps to configure LVM
900 and prepare it as nova-volume. <emphasis role="bold">You
901 must name your volume group ‘nova-volumes’ or things
902 will not work as expected</emphasis>:</para>
903 <literallayout class="monospaced"><code>pvcreate /dev/sda5
904vgcreate nova-volumes /dev/sda5</code> </literallayout>
905 </listitem>
906 </orderedlist></para>
907 </listitem>
908 </itemizedlist></para>
909 </simplesect>
910 <simplesect>
911 <title> B- Configuring nova-volumes on the compute nodes</title>
912 <para> Since you have created the VG, you will be able to use the following tools for
913 managing your volumes: </para>
914 <simpara><code>euca-create-volume</code></simpara>
915 <simpara><code>euca-attach-volume</code></simpara>
916 <simpara><code>euca-detach-volume</code></simpara>
917 <simpara><code>euca-delete-volume</code></simpara>
918 <itemizedlist>
919 <listitem>
920 <para>
921 <emphasis role="bold">Installing and Configure the iscsi
922 initiator</emphasis></para>
923 <para> Remember that every node will act as the iSCSI initiator while the server
924 running nova-volumes will act as the iSCSI target. So make sure, before
925 going further, that your nodes can communicate with your nova-volumes server.
926 If you have a firewall running on it, make sure that port 3260 (tcp)
927 accepts incoming connections. </para>
928 <para>First install the open-iscsi package <emphasis role="bold">on your
929 compute nodes only:</emphasis>
930 <literallayout class="monospaced"><code>apt-get install open-iscsi</code> </literallayout></para>
931 <para>You have to enable it so the startup script (/etc/init.d/open-iscsi) will
932 work. Note that the sed command against /etc/default/iscsitarget applies to the
933 iSCSI target (the nova-volumes server), not to the initiator on the compute nodes.
934 On the compute nodes, simply run:
935 <literallayout class="monospaced"><code>service open-iscsi start</code></literallayout></para>
936 </listitem>
937 <listitem>
938 <para><emphasis role="bold">Configure nova.conf Flag File</emphasis></para>
939 <para>Edit your nova.conf to include a new flag, "--iscsi_ip_prefix=192.168." The
940 flag will be used by the compute node when the iSCSI discovery is
941 performed and the session created. A prefix based on the first two bytes
942 allows the iSCSI discovery to use all the available routes (also known
943 as multipathing) to the iSCSI server (e.g. nova-volumes) on your network.
944 We will see in the "Troubleshooting" section how to deal with iSCSI
945 sessions.</para>
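                    <para>As a sketch, the relevant line in nova.conf would simply read as
                        follows, alongside whatever flags you already have:</para>
                    <literallayout class="monospaced">--iscsi_ip_prefix=192.168.</literallayout>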
946 </listitem>
947 <listitem>
948 <para>
949 <emphasis role="bold">Start nova-volume and Create Volumes</emphasis></para>
950 <para>You are now ready to fire up nova-volume, and start creating
951 volumes!</para>
952
953 <para><literallayout class="monospaced"><code>service nova-volume start</code></literallayout></para>
954
955 <para>Once the service is started, log in to your controller and ensure you’ve
956 properly sourced your ‘novarc’ file. You will be able to use the
957 volume-related euca2ools commands (see above).</para>
958 <para/>
959
960 <para>One of the first things you should do is make sure that nova-volume is
961 checking in as expected. You can do so using nova-manage:</para>
962 <para><literallayout class="monospaced"><code>nova-manage service list</code></literallayout></para>
963 <para>If you see a smiling ‘nova-volume’ in there, you are looking good. Now
964 create a new volume:</para>
965 <para><literallayout class="monospaced"><code>euca-create-volume -s 7 -z nova </code> (-s refers to the size of the volume in GB, and -z is the default zone (usually nova))</literallayout></para>
966
967 <para>You should get some output similar to this:</para>
968 <para>
969 <programlisting>VOLUME vol-0000000b 7 creating (wayne, None, None, None) 2011-02-11 06:58:46.941818</programlisting>
970 </para>
971 <para>You can view the status of the volume's creation using
972 ‘euca-describe-volumes’. Once that status is ‘available,’ it is ready to be
973 attached to an instance:</para>
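                    <para>Its output should eventually look something like this (same column
                        layout as the volume listings later in this section; the project and
                        host names are illustrative):</para>
                    <programlisting>VOLUME vol-00000009 7 nova available (myproject, nova-cc1, None, None) 2011-02-11T06:58:46Z</programlisting>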
974 <para><literallayout class="monospaced"><code>euca-attach-volume -i i-00000008 -d /dev/vdb vol-00000009</code> (-i refers to the instance you will attach the volume to, -d is the device name<emphasis role="bold"> (on the compute node!)</emphasis> and then the volume name.)</literallayout></para>
975 <para>By doing that, the compute node which runs the instance basically performs
976 an iSCSI connection and creates a session. You can ensure that the session
977 has been created by running: </para>
978 <para><code>iscsiadm -m session </code></para>
979 <para>which should output: </para>
980 <para>
981 <programlisting>root@nova-cn1:~# iscsiadm -m session
982tcp: [1] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000b</programlisting>
983 </para>
984
985 <para>If you do not get any errors, it is time to log in to instance ‘i-00000008’
986 and see if the new space is there. You can check the volume attachment by
987 running: </para>
988 <para><code>dmesg | tail </code></para>
989 <para>From there you should see a new disk. Here is the output from ‘fdisk -l’
990 from i-00000008:</para>
991 <programlisting>Disk /dev/vda: 10.7 GB, 10737418240 bytes
63316 heads, 63 sectors/track, 20805 cylinders99216 heads, 63 sectors/track, 20805 cylinders
634Units = cylinders of 1008 * 512 = 516096 bytes993Units = cylinders of 1008 * 512 = 516096 bytes
635Sector size (logical/physical): 512 bytes / 512 bytes994Sector size (logical/physical): 512 bytes / 512 bytes
636I/O size (minimum/optimal): 512 bytes / 512 bytes995I/O size (minimum/optimal): 512 bytes / 512 bytes
637Disk identifier: 0x00000000</literallayout></para>996Disk identifier: 0x00000000
638 <literallayout>Disk /dev/vda doesn’t contain a valid partition table</literallayout>997Disk /dev/vda doesn’t contain a valid partition table
639 998<emphasis role="bold">Disk /dev/vdb: 21.5 GB, 21474836480 bytes &lt;—–Here is our new volume!</emphasis>
640 <para>99916 heads, 63 sectors/track, 41610 cylinders
641 <literallayout>Disk /dev/vdb: 21.5 GB, 21474836480 bytes&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;—–Here is our new volume!&nbsp;1000Units = cylinders of 1008 * 512 = 516096 bytes
64216 heads, 63 sectors/track, 41610 cylinders 1001Sector size (logical/physical): 512 bytes / 512 bytes
644Units = cylinders of 1008 * 512 = 516096 bytes 1002I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 </programlisting>
644Sector size (logical/physical): 512 bytes / 512 bytes 1003
646I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000</literallayout>1004 <para>Now with the space presented, let’s configure it for use:</para>
646 </para>1005 <para><literallayout class="monospaced"><code>fdisk /dev/vdb</code></literallayout></para>
647 <para>Disk /dev/vdb doesn’t contain a valid partition table</para>1006 <orderedlist>
648 1007 <listitem>
649 <para>Now with the space presented, let’s configure it for use:</para>1008 <para>Press ‘<code>n'</code> to create a new disk partition.</para>
650 <para><literallayout class="monospaced">fdisk /dev/vdb</literallayout></para>1009 </listitem>
651 <orderedlist>1010 <listitem>
652 <listitem><para>Press ‘<code>n'</code> to create a new disk partition.</para></listitem>1011 <para>Press <code>'p'</code> to create a primary disk partition.</para>
653 <listitem><para>Press <code>'p'</code> to create a primary disk partition.</para></listitem>1012 </listitem>
654 <listitem><para>Press <code>'1'</code> to denote it as 1st disk partition.</para></listitem>1013 <listitem>
655 1014 <para>Press <code>'1'</code> to denote it as 1st disk partition.</para>
656 <listitem><para>Press ENTER twice to accept the default of 1st and last cylinder – to convert the remainder of1015 </listitem>
657 hard disk to a single disk partition.</para></listitem>1016
658 <listitem><para>Press <code>'t', then</code> select the new partition you made.</para></listitem>1017 <listitem>
659 <listitem><para>Press <code>'83'</code> to change your new partition type to 83, i.e. Linux partition type.</para></listitem>1018 <para>Press ENTER twice to accept the default of 1st and last cylinder –
660 <listitem><para>Press ‘<code>p'</code> to display the hard disk partition setup. Please take note that the1019 to convert the remainder of hard disk to a single disk
661 first partition is denoted as /dev/vda1 in your instance.</para></listitem>1020 partition.</para>
662 1021 </listitem>
663 <listitem>1022 <listitem>
664 <para>Press <code>'w'</code> to write the partition table and exit fdisk upon1023 <para>Press <code>'t', then</code> select the new partition you
665 completion.</para>1024 made.</para>
666 </listitem>1025 </listitem>
667 <listitem>1026 <listitem>
668 <para>Lastly, make a file system on the partition and mount it.</para><literallayout class="monospaced">mkfs.ext3 /dev/vdb11027 <para>Press <code>'83'</code> to change your new partition type to 83, i.e.
1028 Linux partition type.</para>
1029 </listitem>
1030 <listitem>
1031 <para>Press ‘<code>p'</code> to display the hard disk partition setup.
1032 Please take note that the first partition is denoted as /dev/vda1 in
1033 your instance.</para>
1034 </listitem>
1035
1036 <listitem>
1037 <para>Press <code>'w'</code> to write the partition table and exit fdisk
1038 upon completion.</para>
1039 </listitem>
1040 <listitem>
1041 <para>Lastly, make a file system on the partition and mount it.
1042 <programlisting>mkfs.ext3 /dev/vdb1
669mkdir /extraspace1043mkdir /extraspace
670mount /dev/vdb1 /extraspace</literallayout>1044mount /dev/vdb1 /extraspace </programlisting></para>
671 </listitem></orderedlist>1045
672 <para>Your new volume has now been successfully mounted, and is ready for use! The ‘euca’1046 </listitem>
673 commands are pretty self-explanatory, so play around with them and create new1047 </orderedlist>
674 volumes, tear them down, attach and reattach, and so on. </para>1048 <para>Your new volume has now been successfully mounted, and is ready for use!
675 </simplesect></section> 1049 The ‘euca’ commands are pretty self-explanatory, so play around with them
1050 and create new volumes, tear them down, attach and reattach, and so on; a sample teardown follows this list.
1051 </para>
1052 </listitem>
1053 </itemizedlist>
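            <para>A typical teardown, reusing the names from this walkthrough, might look like
                the following sketch (run the umount inside the instance, and the euca
                commands from wherever you sourced novarc):</para>
            <literallayout class="monospaced"># inside the instance: stop using the disk
umount /extraspace
# then release the volume, and delete it if you no longer need it
euca-detach-volume vol-00000009
euca-delete-volume vol-00000009</literallayout>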
1054 </simplesect>
1055 <simplesect>
1056 <title>C- Troubleshoot your nova-volumes installation</title>
1057 <para>If the volume attachment doesn't work, you should be able to perform different
1058 checks in order to see where the issue is. The nova-volumes.log and nova-compute.log
1059 will help you to diagnose the errors you could encounter: </para>
1060 <para><emphasis role="bold">nova-compute.log / nova-volumes.log</emphasis></para>
1061 <para>
1062 <itemizedlist>
1063 <listitem>
1064 <para><emphasis role="italic">ERROR "15- already exists"</emphasis>
1065 <programlisting>"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000001 -p
106610.192.12.34:3260 --login\nExit code: 255\nStdout: 'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-00000001, portal:
106710.192.12.34,3260]\\n'\nStderr: 'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-00000001,
1068portal:10.192.12.34,3260]: openiscsiadm: initiator reported error (15 - already exists)\\n'\n"] </programlisting></para>
1069 <para> This error happens sometimes when you run an euca-detach-volume and
1070 euca-attach-volume and/or try to attach another volume to an instance.
1071 It happens when the compute node has a running session while you try to
1072 attach a volume by using the same IQN. You could check that by running: </para>
1073 <para><literallayout class="monospaced"><code>iscsiadm -m session</code></literallayout>
1074 You should have a session with the same name as the one the compute node is trying
1075 to open. Actually, it seems to be related to the several routes
1076 available to the exposed iSCSI target; those routes could be seen by
1077 running, on the compute node:
1078 <literallayout class="monospaced"><code>iscsiadm -m discovery -t st -p $ip_of_nova-volumes</code></literallayout>
1079 You should see multiple addresses for reaching a given volume. The only
1080 known workaround to that is to change the "--iscsi_ip_prefix" flag and
1081 use all 4 bytes (the full IP) of the nova-volumes server, e.g.: </para>
1082 <para><literallayout class="monospaced"><code>--iscsi_ip_prefix=192.168.2.1</code></literallayout>
1083 You'll then have to restart both the nova-compute and nova-volume services. </para>
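                    <para>For instance, with the stock init scripts (a sketch; run each command
                        on the host where the corresponding service lives):</para>
                    <literallayout class="monospaced"># on the compute node
service nova-compute restart
# on the nova-volumes server
service nova-volume restart</literallayout>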
1084 <para/>
1085 </listitem>
1086 <listitem>
1087 <para><emphasis role="italic">ERROR "Cannot resolve host"</emphasis>
1088 <programlisting>(nova.root): TRACE: ProcessExecutionError: Unexpected error while running command.
1089(nova.root): TRACE: Command: sudo iscsiadm -m discovery -t sendtargets -p ubuntu03c
1090(nova.root): TRACE: Exit code: 255
1091(nova.root): TRACE: Stdout: ''
1092(nova.root): TRACE: Stderr: 'iscsiadm: Cannot resolve host ubuntu03c. getaddrinfo error: [Name or service not known]\n\niscsiadm:
1093cannot resolve host name ubuntu03c\niscsiadm: Could not perform SendTargets discovery.\n'
1094(nova.root): TRACE:</programlisting>
1095 This error happens when the compute node is unable to resolve the
1096 nova-volume server name. You could either add a record for the server if
1097 you have a DNS server, or add it to the "/etc/hosts" file of the
1098 nova-compute node. </para>
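                    <para>A sample /etc/hosts entry; the IP is the one used in the session
                        examples above, and the hostname is whatever name your compute node
                        uses to reach the nova-volumes server:</para>
                    <literallayout class="monospaced">172.16.40.244    nova-volumes-server</literallayout>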
1099 <para/>
1100 </listitem>
1101 <listitem>
1102 <para><emphasis role="italic">ERROR "No route to host"</emphasis>
1103 <programlisting>iscsiadm: cannot make connection to 172.29.200.37: No route to host\niscsiadm: cannot make connection to 172.29.200.37</programlisting>
1104 This error could be caused by several things, but<emphasis role="bold">
1105 it means only one thing: open-iscsi is unable to establish
1106 communication with your nova-volumes server</emphasis>.</para>
1107 <para>The first thing you could do is run a telnet session in order to
1108 see if you are able to reach the nova-volumes server. From the
1109 compute node, run:</para>
1110 <literallayout class="monospaced"><code>telnet $ip_of_nova_volumes 3260</code></literallayout>
1111 <para> If the session times out, check the server's firewall, or try to ping
1112 it. You could also run a tcpdump session which will likely give you
1113 extra information: </para>
1114 <literallayout class="monospaced"><code>tcpdump -nvv -i $iscsi_interface port dest $ip_of_nova_volumes</code></literallayout>
1115 <para> Again, try to manually run an iSCSI discovery via: </para>
1116 <literallayout class="monospaced"><code>iscsiadm -m discovery -t st -p $ip_of_nova-volumes</code></literallayout>
1117 <para/>
1118 </listitem>
1119 <listitem>
1120 <para><emphasis role="italic">"I lost connectivity between nova-volumes and
1121 nova-compute; how do I restore a clean state?"</emphasis>
1122 </para>
1123 <para>Network disconnections can happen; from an "iSCSI view", losing
1124 connectivity could be seen as a physical removal of a server's disk. If
1125 the instance runs a volume while you lose the network between them, you
1126 won't be able to detach the volume, and you would encounter several errors.
1127 Here is how you could clean this up: </para>
1128 <para>First, from the nova-compute node, close the active (but stalled) iSCSI
1129 session; refer to the attached volume to get the session id, and perform
1130 the following command: </para>
1131 <literallayout class="monospaced"><code>iscsiadm -m session -r $session_id -u</code></literallayout>
1132 <para>Here is an <code>iscsiadm -m session</code> output: </para>
1133 <programlisting>
1134tcp: [1] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000e
1135tcp: [2] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000010
1136tcp: [3] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000011
1137tcp: [4] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-0000000a
1138tcp: [5] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000012
1139tcp: [6] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000007
1140tcp: [7] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000009
1141tcp: [9] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-00000014 </programlisting>
1142 <para>I would close session number 9 if I wanted to free the volume
1143 00000014. </para>
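                    <para>With the session id from that listing, the command given above
                        becomes:</para>
                    <literallayout class="monospaced"><code>iscsiadm -m session -r 9 -u</code></literallayout>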
1144 <para>The cloud controller is actually unaware of the iSCSI session
1145 closing, and will keep the volume state as "in-use":
1146 <programlisting>VOLUME vol-00000014 30 nova in-use (nuage-and-co, nova-cc1, i-0000009a[nova-cn1], \/dev\/sdb) 2011-07-18T12:45:39Z</programlisting>
1147 You now have to inform it that the disk can be used. Nova stores the
1148 volume info in the "volumes" table. You will have to update four
1149 fields in the database nova uses (e.g. MySQL). First, connect to the
1150 database: </para>
1151 <literallayout class="monospaced"><code>mysql -uroot -p$password nova</code></literallayout>
1152 <para>Then, we get some info from the "volumes" table: </para>
1153 <programlisting>
1154 mysql> select id,created_at, size, instance_id, status, attach_status, display_name from volumes;
1155+----+---------------------+------+-------------+----------------+---------------+--------------+
1156| id | created_at | size | instance_id | status | attach_status | display_name |
1157+----+---------------------+------+-------------+----------------+---------------+--------------+
1158| 1 | 2011-06-08 09:02:49 | 5 | 0 | available | detached | volume1 |
1159| 2 | 2011-06-08 14:04:36 | 5 | 0 | available | detached | NULL |
1160| 3 | 2011-06-08 14:44:55 | 5 | 0 | available | detached | NULL |
1161| 4 | 2011-06-09 09:09:15 | 5 | 0 | error_deleting | detached | NULL |
1162| 5 | 2011-06-10 08:46:33 | 6 | 0 | available | detached | NULL |
1163| 6 | 2011-06-10 09:16:18 | 6 | 0 | available | detached | NULL |
1164| 7 | 2011-06-16 07:45:57 | 10 | 157 | in-use | attached | NULL |
1165| 8 | 2011-06-20 07:51:19 | 10 | 0 | available | detached | NULL |
1166| 9 | 2011-06-21 08:21:38 | 10 | 152 | in-use | attached | NULL |
1167| 10 | 2011-06-22 09:47:42 | 50 | 136 | in-use | attached | NULL |
1168| 11 | 2011-06-30 07:30:48 | 50 | 0 | available | detached | NULL |
1169| 12 | 2011-06-30 11:56:32 | 50 | 0 | available | detached | NULL |
1170| 13 | 2011-06-30 12:12:08 | 50 | 0 | error_deleting | detached | NULL |
1171| 14 | 2011-07-04 12:33:50 | 30 | 155 | in-use | attached | NULL |
1172| 15 | 2011-07-06 15:15:11 | 5 | 0 | error_deleting | detached | NULL |
1173| 16 | 2011-07-07 08:05:44 | 20 | 149 | in-use | attached | NULL |
1174| 20 | 2011-08-30 13:28:24 | 20 | 158 | in-use | attached | NULL |
1175| 17 | 2011-07-13 19:41:13 | 20 | 149 | in-use | attached | NULL |
1176| 18 | 2011-07-18 12:45:39 | 30 | 154 | in-use | attached | NULL |
1177| 19 | 2011-08-22 13:11:06 | 50 | 0 | available | detached | NULL |
1178| 21 | 2011-08-30 15:39:16 | 5 | NULL | error_deleting | detached | NULL |
1179+----+---------------------+------+-------------+----------------+---------------+--------------+
118021 rows in set (0.00 sec)</programlisting>
1181 <para> Once you get the volume id, you will have to run the following SQL
1182 queries (let's say my volume 14 has the id number 21): </para>
1183 <programlisting>
1184 mysql> update volumes set mountpoint=NULL where id=21;
1185 mysql> update volumes set status="available" where id=21;
1186 mysql> update volumes set attach_status="detached" where id=21;
1187 mysql> update volumes set instance_id=0 where id=21;
1188 </programlisting>
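                    <para>Equivalently, the four updates can be collapsed into a single
                        statement touching the same fields on the same row:</para>
                    <programlisting>mysql> update volumes set mountpoint=NULL, status="available", attach_status="detached", instance_id=0 where id=21;</programlisting>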
1189 <para>Now if you run <code>euca-describe-volumes</code> again from the cloud
1190 controller, you should see an available volume: </para>
1191 <programlisting>VOLUME vol-00000014 30 nova available (nuage-and-co, nova-cc1, None, None) 2011-07-18T12:45:39Z</programlisting>
1192 <para>You can now proceed to the volume attachment again!</para>
1193 </listitem>
1194 </itemizedlist>
1195 </para>
1196 </simplesect>
1197 <simplesect>
1198 <title> D- Advanced tips: Disaster Recovery Process, Backing up your nova-volumes, Browsing
1199 your nova-volumes from the cloud controller </title>
1200 <para>
1201 <emphasis role="italic">
1202 WORK IN PROGRESS
1203 </emphasis>
1204 </para>
1205 <para/>
1206 </simplesect>
1207 </section>
676 <section>1208 <section>
677 <?dbhtml filename="live-migration-usage.html" ?>1209 <?dbhtml filename="live-migration-usage.html" ?>
678 <title>Using Live Migration</title>1210 <title>Using Live Migration</title>
@@ -680,74 +1212,80 @@
680 <para>Live migration provides a scheme to migrate running instances from one OpenStack1212 <para>Live migration provides a scheme to migrate running instances from one OpenStack
681 Compute server to another OpenStack Compute server. No visible downtime and no1213 Compute server to another OpenStack Compute server. No visible downtime and no
682 transaction loss is the ideal goal. This feature can be used as depicted below. </para>1214 transaction loss is the ideal goal. This feature can be used as depicted below. </para>
683 1215
684 <itemizedlist>1216 <itemizedlist>
685 <listitem>1217 <listitem>
686 <para>First, check which instances are running on a specific server.</para>1218 <para>First, check which instances are running on a specific server.</para>
687 <programlisting><![CDATA[1219 <programlisting><![CDATA[
688# euca-describe-instances1220# euca-describe-instances
689Reservation:r-2raqmabo1221Reservation:r-2raqmabo
690RESERVATION r-2raqmabo admin default1222RESERVATION r-2raqmabo admin default
691INSTANCE i-00000003 ami-ubuntu-lucid a.b.c.d e.f.g.h running testkey (admin, HostB) 0 m1.small 2011-02-15 07:28:32 nova1223INSTANCE i-00000003 ami-ubuntu-lucid a.b.c.d e.f.g.h running testkey (admin, HostB) 0 m1.small 2011-02-15 07:28:32 nova
692 ]]></programlisting>1224]]></programlisting>
693 <para> In this example, i-00000003 is running on HostB.</para>1225 <para> In this example, i-00000003 is running on HostB.</para>
694 </listitem>1226 </listitem>
695 <listitem>1227 <listitem>
696 <para>Second, pick another server to which the instances will be migrated.</para>1228 <para>Second, pick another server to which the instances will be migrated.</para>
697 <programlisting><![CDATA[1229 <programlisting><![CDATA[
698# nova-manage service list1230# nova-manage service list
699HostA nova-scheduler enabled :-) None1231HostA nova-scheduler enabled :-) None
700HostA nova-volume enabled :-) None1232HostA nova-volume enabled :-) None
701HostA nova-network enabled :-) None1233HostA nova-network enabled :-) None
702HostB nova-compute enabled :-) None1234HostB nova-compute enabled :-) None
703HostC nova-compute enabled :-) None1235HostC nova-compute enabled :-) None
704 ]]></programlisting>1236]]></programlisting>
705 <para> In this example, HostC can be picked up because nova-compute is running on it.</para>1237 <para> In this example, HostC can be picked up because nova-compute is running on
1238 it.</para>
706 </listitem>1239 </listitem>
707 <listitem>1240 <listitem>
708 <para>Third, check that HostC has enough resources for live migration.</para>1241 <para>Third, check that HostC has enough resources for live migration.</para>
709 <programlisting><![CDATA[1242 <programlisting><![CDATA[
710# nova-manage service update_resource HostC1243# nova-manage service update_resource HostC
711# nova-manage service describe_resource HostC1244# nova-manage service describe_resource HostC
712HOST PROJECT cpu mem(mb) disk(gb)1245HOST PROJECT cpu mem(mb) disk(gb)
713HostC(total) 16 32232 8781246HostC(total) 16 32232 878
714HostC(used) 13 21284 4421247HostC(used) 13 21284 442
715HostC p1 5 10240 1501248HostC p1 5 10240 150
716HostC p2 5 10240 1501249HostC p2 5 10240 150
717.....1250.....
718 ]]></programlisting>1251]]></programlisting>
719 <para>Remember to use update_resource first, then describe_resource. Otherwise,1252 <para>Remember to use update_resource first, then describe_resource. Otherwise,
720 Host(used) is not updated.</para>1253 Host(used) is not updated.</para>
721 <itemizedlist>1254 <itemizedlist>
722 <listitem>1255 <listitem>
723 <para><emphasis role="bold">cpu:</emphasis>the nuber of cpu</para>1256 <para><emphasis role="bold">cpu:</emphasis>the nuber of cpu</para>
724 </listitem>1257 </listitem>
725 <listitem>1258 <listitem>
726 <para><emphasis role="bold">mem(mb):</emphasis>total amount of memory (MB)</para>1259 <para><emphasis role="bold">mem(mb):</emphasis>total amount of memory
727 </listitem>1260 (MB)</para>
728 <listitem>1261 </listitem>
729 <para><emphasis role="bold">disk(gb)</emphasis>total amount of NOVA-INST-DIR/instances(GB)</para>1262 <listitem>
730 </listitem>1263 <para><emphasis role="bold">disk(gb)</emphasis>total amount of
731 <listitem>1264 NOVA-INST-DIR/instances(GB)</para>
732 <para><emphasis role="bold">1st line shows </emphasis>total amount of resource physical server has.</para>1265 </listitem>
733 </listitem>1266 <listitem>
734 <listitem>1267 <para><emphasis role="bold">1st line shows </emphasis>total amount of
735 <para><emphasis role="bold">2nd line shows </emphasis>current used resource.</para>1268 resource physical server has.</para>
736 </listitem>1269 </listitem>
737 <listitem>1270 <listitem>
738 <para><emphasis role="bold">3rd line and under</emphasis> is used resource per project.</para>1271 <para><emphasis role="bold">2nd line shows </emphasis>current used
739 </listitem>1272 resource.</para>
740 </itemizedlist>1273 </listitem>
1274 <para><emphasis role="bold">3rd line and under</emphasis> is used resources
1275 per project.</para>
1276 per project.</para>
1277 </listitem>
1278 </itemizedlist>
741 </listitem>1279 </listitem>
742 <listitem>1280 <listitem>
743 <para>Finally, live migration</para>1281 <para>Finally, live migration</para>
744 <programlisting><![CDATA[1282 <programlisting><![CDATA[
745# nova-manage vm live_migration i-00000003 HostC1283# nova-manage vm live_migration i-00000003 HostC
746Migration of i-00000001 initiated. Check its progress using euca-describe-instances.1284Migration of i-00000001 initiated. Check its progress using euca-describe-instances.
747 ]]></programlisting>1285]]></programlisting>
748 <para>Make sure instances are migrated successfully with euca-describe-instances.1286 <para>Make sure instances are migrated successfully with euca-describe-instances. If
749 If instances are still running on HostB, check the log files (src/dest nova-compute1287 instances are still running on HostB, check the log files (src/dest nova-compute and
750 and nova-scheduler)</para>1288 nova-scheduler)</para>
751 </listitem>1289 </listitem>
752 </itemizedlist>1290 </itemizedlist>
7531291
@@ -756,12 +1294,12 @@
756 <section>1294 <section>
757 <?dbhtml filename="reference-for-flags-in-nova-conf.html" ?>1295 <?dbhtml filename="reference-for-flags-in-nova-conf.html" ?>
758 <title>Reference for Flags in nova.conf</title>1296 <title>Reference for Flags in nova.conf</title>
759 <para>For a complete list of all available flags for each OpenStack Compute service,1297 <para>For a complete list of all available flags for each OpenStack Compute service, run
760 run bin/nova-&lt;servicename> --help. </para>1298 bin/nova-&lt;servicename> --help. </para>
761 1299
762 <table rules="all">1300 <table rules="all">
763 <caption>Description of common nova.conf flags (nova-api, nova-compute)</caption>1301 <caption>Description of common nova.conf flags (nova-api, nova-compute)</caption>
764 1302
765 <thead>1303 <thead>
766 <tr>1304 <tr>
767 <td>Flag</td>1305 <td>Flag</td>
@@ -901,17 +1439,22 @@
901 <tr>1439 <tr>
902 <td>--flat_injected</td>1440 <td>--flat_injected</td>
903 <td>default: 'false'</td>1441 <td>default: 'false'</td>
904 <td>Indicates whether Compute (Nova) should attempt to inject IPv6 network1442 <td>Indicates whether Compute (Nova) should attempt to inject IPv6 network
1443 configuration information into the guest. It attempts to modify
1444 /etc/network/interfaces and currently only works on Debian-based systems.
1445 </td>
905 </tr>1446 </tr>
906 <tr>1447 <tr>
907 <td>--fixed_ip_disassociate_timeout</td>1448 <td>--fixed_ip_disassociate_timeout</td>
908 <td>default: '600'</td>1449 <td>default: '600'</td>
909 <td>Integer: Number of seconds after which a deallocated ip is disassociated. </td>1450 <td>Integer: Number of seconds after which a deallocated ip is disassociated.
1451 </td>
910 </tr>1452 </tr>
911 <tr>1453 <tr>
912 <td>--fixed_range</td>1454 <td>--fixed_range</td>
913 <td>default: '10.0.0.0/8'</td>1455 <td>default: '10.0.0.0/8'</td>
914 <td>Fixed IP address block of addresses from which a set of iptables rules is created</td>1456 <td>Fixed IP address block of addresses from which a set of iptables rules is
1457 created</td>
915 </tr>1458 </tr>
916 <tr>1459 <tr>
917 <td>--fixed_range_v6</td>1460 <td>--fixed_range_v6</td>
@@ -921,7 +1464,8 @@
921 <tr>1464 <tr>
922 <td>--[no]flat_injected</td>1465 <td>--[no]flat_injected</td>
923 <td>default: 'true'</td>1466 <td>default: 'true'</td>
924 <td>Indicates whether to attempt to inject network setup into guest; network injection only works for Debian systems</td>1467 <td>Indicates whether to attempt to inject network setup into guest; network
1468 injection only works for Debian systems</td>
925 </tr>1469 </tr>
926 <tr>1470 <tr>
927 <td>--flat_interface</td>1471 <td>--flat_interface</td>
@@ -936,7 +1480,8 @@
936 <tr>1480 <tr>
937 <td>--flat_network_dhcp_start</td>1481 <td>--flat_network_dhcp_start</td>
938 <td>default: '10.0.0.2'</td>1482 <td>default: '10.0.0.2'</td>
939 <td>Starting IP address for the DHCP server to start handing out IP addresses when using FlatDhcp </td>1483 <td>Starting IP address for the DHCP server to start handing out IP addresses
1484 when using FlatDhcp </td>
940 </tr>1485 </tr>
941 <tr>1486 <tr>
942 <td>--flat_network_dns</td>1487 <td>--flat_network_dns</td>
@@ -948,7 +1493,7 @@
948 <td>default: '4.4.4.0/24'</td>1493 <td>default: '4.4.4.0/24'</td>
949 <td>Floating IP address block </td>1494 <td>Floating IP address block </td>
950 </tr>1495 </tr>
951 1496
952 <tr>1497 <tr>
953 <td>--[no]fake_network</td>1498 <td>--[no]fake_network</td>
954 <td>default: 'false'</td>1499 <td>default: 'false'</td>
@@ -996,15 +1541,17 @@
996 <tr>1541 <tr>
997 <td>--image_service</td>1542 <td>--image_service</td>
998 <td>default: 'nova.image.s3.S3ImageService'</td>1543 <td>default: 'nova.image.s3.S3ImageService'</td>
999 <td><para>The service to use for retrieving and searching for images. Images must be registered using1544 <td><para>The service to use for retrieving and searching for images. Images
1000 euca2ools. Options: </para><itemizedlist>1545 must be registered using euca2ools. Options: </para><itemizedlist>
1001 <listitem>1546 <listitem>
1002 <para>nova.image.s3.S3ImageService</para>1547 <para>nova.image.s3.S3ImageService</para>
1003 <para>S3 backend for the Image Service.</para>1548 <para>S3 backend for the Image Service.</para>
1004 </listitem>1549 </listitem>
1005 <listitem>1550 <listitem>
1006 <para>nova.image.local.LocalImageService</para>1551 <para>nova.image.local.LocalImageService</para>
1007 <para>Image service storing images to local disk. It assumes that image_ids are integers. This is the default setting if no image manager is defined here.</para>1552 <para>Image service storing images to local disk. It assumes that
1553 image_ids are integers. This is the default setting if no image
1554 manager is defined here.</para>
1008 </listitem>1555 </listitem>
1009 <listitem>1556 <listitem>
1010 <para>nova.image.glance.GlanceImageService</para>1557 <para>nova.image.glance.GlanceImageService</para>
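Selecting one of the image services listed above means passing its full class path; for example, to use Glance instead of the S3 default:

--image_service=nova.image.glance.GlanceImageService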
@@ -1022,7 +1569,8 @@
1022 <tr>1569 <tr>
1023 <td>--libvirt_type</td>1570 <td>--libvirt_type</td>
1024 <td>default: kvm</td>1571 <td>default: kvm</td>
1025 <td>String: Name of connection to a hypervisor through libvirt. Supported options are kvm, qemu, uml, and xen.</td>1572 <td>String: Name of connection to a hypervisor through libvirt. Supported
1573 options are kvm, qemu, uml, and xen.</td>
1026 </tr>1574 </tr>
1027 <tr>1575 <tr>
1028 <td>--lock_path</td>1576 <td>--lock_path</td>
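As a concrete case for the libvirt_type flag above, a host without hardware virtualization support would switch from the kvm default to software emulation:

--libvirt_type=qemu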
@@ -1195,7 +1743,8 @@
1195 <tr>1743 <tr>
1196 <td>--routing_source_ip</td>1744 <td>--routing_source_ip</td>
1197 <td>default: '10'</td>1745 <td>default: '10'</td>
1198 <td>IP address; Public IP of network host. When instances without a floating IP hit the Internet, traffic is SNATed to this IP address.</td>1746 <td>IP address; Public IP of network host. When instances without a floating IP
1747 hit the Internet, traffic is SNATed to this IP address.</td>
1199 </tr>1748 </tr>
1200 <tr>1749 <tr>
1201 <td>--s3_dmz</td>1750 <td>--s3_dmz</td>
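A sketch of the routing_source_ip flag above; the address is illustrative and should be replaced by the actual public IP of the network host:

--routing_source_ip=192.168.1.254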
@@ -1250,15 +1799,18 @@
1250 <td>default: '/usr/lib/pymodules/python2.6/nova/../'</td>1799 <td>default: '/usr/lib/pymodules/python2.6/nova/../'</td>
1251 <td>Top-level directory for maintaining Nova's state</td>1800 <td>Top-level directory for maintaining Nova's state</td>
1252 </tr>1801 </tr>
1253 <tr><td>--use_deprecated_auth</td>1802 <tr>
1254 <td>default: 'false'</td>1803 <td>--use_ipv6</td>
1255 <td>Set to 1 or true to turn on; Determines whether to use the deprecated nova auth system or Keystone as the auth system </td></tr>1804 <td>default: 'false'</td>
1256 <tr><td>--use_ipv6</td>1805 <td>Set to 1 or true to turn on; Determines whether to use IPv6 network
1257 <td>default: 'false'</td>1806 addresses </td>
1258 <td>Set to 1 or true to turn on; Determines whether to use IPv6 network addresses </td></tr>1807 </tr>
1259 <tr><td>--use_s3</td>1808 <tr>
1809 <td>--use_s3</td>
1260 <td>default: 'true'</td>1810 <td>default: 'true'</td>
1261 <td>Set to 1 or true to turn on; Determines whether to get images from S3 or use a local copy </td></tr>1811 <td>Set to 1 or true to turn on; Determines whether to get images from S3 or use
1812 a local copy </td>
1813 </tr>
1262 <tr>1814 <tr>
1263 <td>--verbose</td>1815 <td>--verbose</td>
1264 <td>default: 'false'</td>1816 <td>default: 'false'</td>
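The boolean flags above all follow the same 1/true convention; for example, to enable IPv6 addressing and serve images from a local copy rather than S3:

--use_ipv6=true
--use_s3=false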
@@ -1267,7 +1819,8 @@
1267 <tr>1819 <tr>
1268 <td>--vlan_interface</td>1820 <td>--vlan_interface</td>
1269 <td>default: 'eth0'</td>1821 <td>default: 'eth0'</td>
1270 <td>This is the interface that VlanManager uses to bind bridges and vlans to. </td>1822 <td>This is the interface that VlanManager uses to bind bridges and vlans to.
1823 </td>
1271 </tr>1824 </tr>
1272 <tr>1825 <tr>
1273 <td>--vlan_start</td>1826 <td>--vlan_start</td>
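A VlanManager sketch combining the two VLAN flags above; the interface and starting VLAN number are illustrative:

--vlan_interface=eth1
--vlan_start=100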
@@ -1282,39 +1835,47 @@
1282 <tr>1835 <tr>
1283 <td>--vpn_key_suffix</td>1836 <td>--vpn_key_suffix</td>
1284 <td>default: '-vpn'</td>1837 <td>default: '-vpn'</td>
1285 <td>Suffix to add to the project name for VPN keys and security groups.</td>1838 <td>Suffix to add to the project name for VPN keys and security
1286 </tr>1839 groups.</td>
1287 </tbody>1840 </tr>
1288 </table>1841 </tbody>
1289 <table rules="all">1842 </table>
1290 <caption>Description of nova.conf flags specific to nova-volume</caption>1843 <table rules="all">
1291 1844 <caption>Description of nova.conf flags specific to nova-volume</caption>
1292 <thead>1845
1293 <tr>1846 <thead>
1294 <td>Flag</td>1847 <tr>
1295 <td>Default</td>1848 <td>Flag</td>
1296 <td>Description</td>1849 <td>Default</td>
1297 </tr>1850 <td>Description</td>
1298 </thead>1851 </tr>
1299 <tbody>1852 </thead>
1300 <tr><td>--iscsi_ip_prefix</td>1853 <tbody>
1301 <td>default: ''</td>1854 <tr>
1302 1855 <td>--iscsi_ip_prefix</td>
1303 <td>IP address or partial IP address; Value that differentiates the IP1856 <td>default: ''</td>
1304 addresses using simple string matching, so if all of your hosts are on the 192.168.1.0/24 network you could use --iscsi_ip_prefix=192.168.1</td></tr>1857
1305 1858 <td>IP address or partial IP address; Value that differentiates the IP addresses
1306 <tr>1859 using simple string matching, so if all of your hosts are on the
1307 <td>--volume_manager</td>1860 192.168.1.0/24 network you could use --iscsi_ip_prefix=192.168.1</td>
1308 <td>default: 'nova.volume.manager.VolumeManager'</td>1861 </tr>
1309 <td>String value; Manager to use for nova-volume</td>1862
1310 </tr>1863 <tr>
1311 <tr>1864 <td>--volume_manager</td>
1312 <td>--volume_name_template</td>1865 <td>default: 'nova.volume.manager.VolumeManager'</td>
1313 <td>default: 'volume-%08x'</td>1866 <td>String value; Manager to use for nova-volume</td>
1314 <td>String value; Template string to be used to generate volume names</td>1867 </tr>
1315 </tr><tr>1868 <tr>
1316 <td>--volume_topic</td>1869 <td>--volume_name_template</td>
1317 <td>default: 'volume'</td>1870 <td>default: 'volume-%08x'</td>
1318 <td>String value; Name of the topic that volume nodes listen on</td>1871 <td>String value; Template string to be used to generate volume names</td>
1319 </tr></tbody></table></section>1872 </tr>
1873 <tr>
1874 <td>--volume_topic</td>
1875 <td>default: 'volume'</td>
1876 <td>String value; Name of the topic that volume nodes listen on</td>
1877 </tr>
1878 </tbody>
1879 </table>
1880 </section>
1320</chapter>1881</chapter>
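Pulling the nova-volume flags together, a minimal nova.conf sketch (the IP prefix reuses the 192.168.1.0/24 example from the table; the other two values are the defaults):

--iscsi_ip_prefix=192.168.1
--volume_name_template=volume-%08x
--volume_topic=volume

With the default template, %08x formats the volume ID as zero-padded hexadecimal, so volume ID 13 is exported as volume-0000000d.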
