Merge lp:~evilnick/clouddocs/reorg into lp:~jujudocs/clouddocs/trunk
- reorg
- Merge into trunk
Proposed by
Nick Veitch
Status: | Merged |
---|---|
Approved by: | Frank Mueller |
Approved revision: | 16 |
Merged at revision: | 16 |
Proposed branch: | lp:~evilnick/clouddocs/reorg |
Merge into: | lp:~jujudocs/clouddocs/trunk |
Diff against target: |
6235 lines (+3009/-3062) 31 files modified
Admin/Appendix-Ceph-and-OpenStack.md (+229/-0) Admin/Backup-and-Recovery-Ceph.md (+107/-0) Admin/Backup-and-Recovery-Juju.md (+59/-0) Admin/Backup-and-Recovery-OpenStack.md (+131/-0) Admin/Logging-Juju.md (+24/-0) Admin/Logging-OpenStack.md (+92/-0) Admin/Logging.md (+15/-0) Admin/Scaling-Ceph.md (+36/-0) Admin/Upgrading-and-Patching-Juju.md (+45/-0) Admin/Upgrading-and-Patching-OpenStack.md (+83/-0) Appendix-Ceph-and-OpenStack.md (+0/-229) Backup-and-Recovery-Ceph.md (+0/-107) Backup-and-Recovery-Juju.md (+0/-59) Backup-and-Recovery-OpenStack.md (+0/-131) Install/Installing-Ceph.md (+56/-0) Install/Installing-MAAS.md (+467/-0) Install/Intro.md (+28/-0) Install/installing-openstack-outline.md (+395/-0) Install/landcsape.md (+909/-0) Installing-Ceph.md (+0/-56) Installing-MAAS.md (+0/-467) Intro.md (+0/-26) Logging-Juju.md (+0/-24) Logging-OpenStack.md (+0/-92) Logging.md (+0/-15) Scaling-Ceph.md (+0/-36) Upgrading-and-Patching-Juju.md (+0/-45) Upgrading-and-Patching-OpenStack.md (+0/-83) installing-openstack-outline.md (+0/-395) landcsape.md (+0/-1297) resources/templates/Template (+333/-0) |
To merge this branch: | bzr merge lp:~evilnick/clouddocs/reorg |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Frank Mueller | Pending | ||
Review via email: mp+215915@code.launchpad.net |
Commit message
Description of the change
I have reorganised the Install and Admin sections into their own directories - this is necessary for converting them into separate HTML docs for the web.
Preview Diff
1 | === added directory 'Admin' |
2 | === added file 'Admin/Appendix-Ceph-and-OpenStack.md' |
3 | --- Admin/Appendix-Ceph-and-OpenStack.md 1970-01-01 00:00:00 +0000 |
4 | +++ Admin/Appendix-Ceph-and-OpenStack.md 2014-04-15 16:06:33 +0000 |
5 | @@ -0,0 +1,229 @@ |
6 | +Title: Appendix - Ceph and OpenStack |
7 | +Status: Done |
8 | + |
9 | +# Appendix: Ceph and OpenStack |
10 | + |
11 | +Ceph stripes block device images as objects across a cluster. This way it provides |
12 | +better performance than a standalone server. OpenStack is able to use Ceph Block Devices |
13 | +through `libvirt`, which configures the QEMU interface to `librbd`. |
14 | + |
15 | +To use Ceph Block Devices with OpenStack, you must install QEMU, `libvirt`, and OpenStack |
16 | +first. It's recommended to use a separate physical node for your OpenStack installation. |
17 | +OpenStack recommends a minimum of 8GB of RAM and a quad-core processor. |
18 | + |
19 | +Three parts of OpenStack integrate with Ceph’s block devices: |
20 | + |
21 | +- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack |
22 | + treats images as binary blobs and downloads them accordingly. |
23 | +- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to |
24 | + attach volumes to running VMs. OpenStack manages volumes using Cinder services. |
25 | +- Guest Disks: Guest disks are guest operating system disks. By default, when you |
26 | + boot a virtual machine, its disk appears as a file on the filesystem of the |
27 | + hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior to OpenStack Havana, |
28 | + the only way to boot a VM in Ceph was to use the boot-from-volume functionality |
29 | + from Cinder. However, now it is possible to directly boot every virtual machine |
30 | + inside Ceph without using Cinder. This is really handy because it allows us to |
31 | + easily perform maintenance operations with the live-migration process. On the other |
32 | + hand, if your hypervisor dies it is also really convenient to trigger Nova evacuate |
33 | + and almost seamlessly run the virtual machine somewhere else. |
34 | + |
35 | +You can use OpenStack Glance to store images in a Ceph Block Device, and you can |
36 | +use Cinder to boot a VM using a copy-on-write clone of an image. |
37 | + |
38 | +## Create a pool |
39 | + |
40 | +By default, Ceph block devices use the `rbd` pool. You may use any available pool. |
41 | +We recommend creating a pool for Cinder and a pool for Glance. Ensure your Ceph |
42 | +cluster is running, then create the pools. |
43 | + |
44 | +```` |
45 | +ceph osd pool create volumes 128 |
46 | +ceph osd pool create images 128 |
47 | +ceph osd pool create backups 128 |
48 | +```` |
49 | + |
50 | +## Configure OpenStack Ceph Clients |
51 | + |
52 | +The nodes running `glance-api`, `cinder-volume`, `nova-compute` and `cinder-backup` act |
53 | +as Ceph clients. Each requires the `ceph.conf` file |
54 | + |
55 | +```` |
56 | +ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf |
57 | +```` |
58 | + |
59 | +On the `glance-api` node, you’ll need the Python bindings for `librbd` |
60 | + |
61 | +```` |
62 | +sudo apt-get install python-ceph |
63 | +sudo yum install python-ceph |
64 | +```` |
65 | + |
66 | +On the `nova-compute`, `cinder-backup`, and `cinder-volume` nodes, use both the |
67 | +Python bindings and the client command line tools |
68 | + |
69 | +```` |
70 | +sudo apt-get install ceph-common |
71 | +sudo yum install ceph |
72 | +```` |
73 | + |
74 | +If you have cephx authentication enabled, create a new user for Nova/Cinder and |
75 | +Glance. Execute the following |
76 | + |
77 | +```` |
78 | +ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' |
79 | +ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' |
80 | +ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' |
81 | +```` |
82 | + |
83 | +Add the keyrings for `client.cinder`, `client.glance`, and `client.cinder-backup` |
84 | +to the appropriate nodes and change their ownership |
85 | + |
86 | +```` |
87 | +ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring |
88 | +ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring |
89 | +ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring |
90 | +ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring |
91 | +ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring |
92 | +ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring |
93 | +```` |
94 | + |
95 | +Nodes running `nova-compute` need the keyring file for the `nova-compute` process. |
96 | +They also need to store the secret key of the `client.cinder` user in `libvirt`. The |
97 | +`libvirt` process needs it to access the cluster while attaching a block device |
98 | +from Cinder. |
99 | + |
100 | +Create a temporary copy of the secret key on the nodes running `nova-compute` |
101 | + |
102 | +```` |
103 | +ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key |
104 | +```` |
105 | + |
106 | +Then, on the compute nodes, add the secret key to `libvirt` and remove the |
107 | +temporary copy of the key |
108 | + |
109 | +```` |
110 | +uuidgen |
111 | +457eb676-33da-42ec-9a8c-9293d545c337 |
112 | + |
113 | +cat > secret.xml <<EOF |
114 | +<secret ephemeral='no' private='no'> |
115 | + <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid> |
116 | + <usage type='ceph'> |
117 | + <name>client.cinder secret</name> |
118 | + </usage> |
119 | +</secret> |
120 | +EOF |
121 | +sudo virsh secret-define --file secret.xml |
122 | +Secret 457eb676-33da-42ec-9a8c-9293d545c337 created |
123 | +sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml |
124 | +```` |
125 | + |
126 | +Save the uuid of the secret for configuring `nova-compute` later. |
127 | + |
128 | +**Important** You don’t necessarily need the UUID on all the compute nodes. |
129 | +However, from a platform consistency perspective it’s better to keep the |
130 | +same UUID. |
131 | + |
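 | +For example, a sketch (the second host name is a placeholder) of reusing the same |
 | +UUID on another compute node by copying `secret.xml` before it is removed: |
 | + |
 | +```` |
 | +scp secret.xml {another-compute-node}: |
 | +ssh {another-compute-node} sudo virsh secret-define --file secret.xml |
 | +```` |
 | + |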
132 | +## Configure OpenStack to use Ceph |
133 | + |
134 | +### Glance |
135 | + |
136 | +Glance can use multiple back ends to store images. To use Ceph block devices |
137 | +by default, edit `/etc/glance/glance-api.conf` and add |
138 | + |
139 | +```` |
140 | +default_store=rbd |
141 | +rbd_store_user=glance |
142 | +rbd_store_pool=images |
143 | +```` |
144 | + |
145 | +If you want to enable copy-on-write cloning of images into volumes, also add: |
146 | + |
147 | +```` |
148 | +show_image_direct_url=True |
149 | +```` |
150 | + |
151 | +Note that this exposes the back end location via Glance’s API, so |
152 | +the endpoint with this option enabled should not be publicly |
153 | +accessible. |
154 | + |
155 | +### Cinder |
156 | + |
157 | +OpenStack requires a driver to interact with Ceph block devices. You |
158 | +must also specify the pool name for the block device. On your |
159 | +OpenStack node, edit `/etc/cinder/cinder.conf` by adding |
160 | + |
161 | +```` |
162 | +volume_driver=cinder.volume.drivers.rbd.RBDDriver |
163 | +rbd_pool=volumes |
164 | +rbd_ceph_conf=/etc/ceph/ceph.conf |
165 | +rbd_flatten_volume_from_snapshot=false |
166 | +rbd_max_clone_depth=5 |
167 | +glance_api_version=2 |
168 | +```` |
169 | + |
170 | +If you’re using cephx authentication, also configure the user and |
171 | +uuid of the secret you added to `libvirt` as documented earlier |
172 | + |
173 | +```` |
174 | +rbd_user=cinder |
175 | +rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337 |
176 | +```` |
177 | + |
178 | +### Cinder Backup |
179 | + |
180 | +OpenStack Cinder Backup requires a specific daemon so don’t |
181 | +forget to install it. On your Cinder Backup node, |
182 | +edit `/etc/cinder/cinder.conf` and add: |
183 | + |
184 | +```` |
185 | +backup_driver=cinder.backup.drivers.ceph |
186 | +backup_ceph_conf=/etc/ceph/ceph.conf |
187 | +backup_ceph_user=cinder-backup |
188 | +backup_ceph_chunk_size=134217728 |
189 | +backup_ceph_pool=backups |
190 | +backup_ceph_stripe_unit=0 |
191 | +backup_ceph_stripe_count=0 |
192 | +restore_discard_excess_bytes=true |
193 | +```` |
194 | + |
195 | +### Nova |
196 | + |
197 | +In order to boot all the virtual machines directly into Ceph, Nova must be |
198 | +configured. On every compute node, edit `/etc/nova/nova.conf` and add |
199 | + |
200 | +```` |
201 | +libvirt_images_type=rbd |
202 | +libvirt_images_rbd_pool=volumes |
203 | +libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf |
204 | +rbd_user=cinder |
205 | +rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337 |
206 | +```` |
207 | + |
208 | +It is also a good practice to disable any file injection. Usually, while |
209 | +booting an instance Nova attempts to open the rootfs of the virtual machine. |
210 | +Then, it injects things like passwords and ssh keys directly into the |
211 | +filesystem. At this point, it is better to rely on the metadata service |
212 | +and cloud-init. On every compute node, edit `/etc/nova/nova.conf` and add |
213 | + |
214 | +```` |
215 | +libvirt_inject_password=false |
216 | +libvirt_inject_key=false |
217 | +libvirt_inject_partition=-2 |
218 | +```` |
219 | + |
220 | +## Restart OpenStack |
221 | + |
222 | +To activate the Ceph block device driver and load the block device pool name |
223 | +into the configuration, you must restart OpenStack. |
224 | + |
225 | +```` |
226 | +sudo glance-control api restart |
227 | +sudo service nova-compute restart |
228 | +sudo service cinder-volume restart |
229 | +sudo service cinder-backup restart |
230 | +```` |
231 | + |
232 | +Once OpenStack is up and running, you should be able to create a volume |
233 | +and boot from it. |
234 | + |
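 | +As a quick smoke test, a sketch using the Havana-era clients (the image UUID, |
 | +volume UUID, flavor, and instance name are placeholders): |
 | + |
 | +```` |
 | +cinder create --image-id {image-uuid} --display-name boot-vol 10 |
 | +nova boot --flavor m1.small --block-device-mapping vda={volume-uuid}:::0 boot-test |
 | +```` |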
235 | |
236 | === added file 'Admin/Backup-and-Recovery-Ceph.md' |
237 | --- Admin/Backup-and-Recovery-Ceph.md 1970-01-01 00:00:00 +0000 |
238 | +++ Admin/Backup-and-Recovery-Ceph.md 2014-04-15 16:06:33 +0000 |
239 | @@ -0,0 +1,107 @@ |
240 | +Title: Backup and Recovery - Ceph |
241 | +Status: In Progress |
242 | + |
243 | +# Backup and Recovery - Ceph |
244 | + |
245 | +## Introduction |
246 | + |
247 | +A snapshot is a read-only copy of the state of an image at a particular point in time. One |
248 | +of the advanced features of Ceph block devices is that you can create snapshots of the images |
249 | +to retain a history of an image’s state. Ceph also supports snapshot layering, which allows |
250 | +you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots |
251 | +using the `rbd` command and many higher level interfaces including OpenStack. |
252 | + |
253 | +## Scope |
254 | + |
255 | +**TODO** |
256 | + |
257 | +## Backup |
258 | + |
259 | +To create a snapshot with `rbd`, specify the `snap create` option, the pool name and the |
260 | +image name. |
261 | + |
262 | +```` |
263 | +rbd --pool {pool-name} snap create --snap {snap-name} {image-name} |
264 | +rbd snap create {pool-name}/{image-name}@{snap-name} |
265 | +```` |
266 | + |
267 | +For example: |
268 | + |
269 | +```` |
270 | +rbd --pool rbd snap create --snap snapname foo |
271 | +rbd snap create rbd/foo@snapname |
272 | +```` |
273 | + |
274 | +## Restore |
275 | + |
276 | +To rollback to a snapshot with `rbd`, specify the `snap rollback` option, the pool name, the |
277 | +image name and the snap name. |
278 | + |
279 | +```` |
280 | +rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name} |
281 | +rbd snap rollback {pool-name}/{image-name}@{snap-name} |
282 | +```` |
283 | + |
284 | +For example: |
285 | + |
286 | +```` |
287 | +rbd --pool rbd snap rollback --snap snapname foo |
288 | +rbd snap rollback rbd/foo@snapname |
289 | +```` |
290 | + |
291 | +**Note:** Rolling back an image to a snapshot means overwriting the current version of the image |
292 | +with data from a snapshot. The time it takes to execute a rollback increases with the size of the |
293 | +image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is |
294 | +the preferred method of returning to a pre-existing state. |
295 | + |
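 | +A sketch of that clone-based approach (using the example pool, image, and snapshot |
 | +names from above; the snapshot must be protected before cloning): |
 | + |
 | +```` |
 | +rbd snap protect rbd/foo@snapname |
 | +rbd clone rbd/foo@snapname rbd/foo-restored |
 | +```` |
 | + |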
296 | +## Maintenance |
297 | + |
298 | +Taking snapshots increases your level of security but also costs disk space. To delete older ones |
299 | +you can list them, delete individual ones or purge all snapshots. |
300 | + |
301 | +To list snapshots of an image, specify the pool name and the image name. |
302 | + |
303 | +```` |
304 | +rbd --pool {pool-name} snap ls {image-name} |
305 | +rbd snap ls {pool-name}/{image-name} |
306 | +```` |
307 | + |
308 | +For example: |
309 | + |
310 | +```` |
311 | +rbd --pool rbd snap ls foo |
312 | +rbd snap ls rbd/foo |
313 | +```` |
314 | + |
315 | +To delete a snapshot with `rbd`, specify the `snap rm` option, the pool name, the image name |
316 | +and the snap name. |
317 | + |
318 | +```` |
319 | +rbd --pool {pool-name} snap rm --snap {snap-name} {image-name} |
320 | +rbd snap rm {pool-name}/{image-name}@{snap-name} |
321 | +```` |
322 | + |
323 | +For example: |
324 | + |
325 | +```` |
326 | +rbd --pool rbd snap rm --snap snapname foo |
327 | +rbd snap rm rbd/foo@snapname |
328 | +```` |
329 | + |
330 | +**Note:** Ceph OSDs delete data asynchronously, so deleting a snapshot doesn’t free up the |
331 | +disk space immediately. |
332 | + |
333 | +To delete all snapshots for an image with `rbd`, specify the `snap purge` option and the |
334 | +image name. |
335 | + |
336 | +```` |
337 | +rbd --pool {pool-name} snap purge {image-name} |
338 | +rbd snap purge {pool-name}/{image-name} |
339 | +```` |
340 | + |
341 | +For example: |
342 | + |
343 | +```` |
344 | +rbd --pool rbd snap purge foo |
345 | +rbd snap purge rbd/foo |
346 | +```` |
347 | |
348 | === added file 'Admin/Backup-and-Recovery-Juju.md' |
349 | --- Admin/Backup-and-Recovery-Juju.md 1970-01-01 00:00:00 +0000 |
350 | +++ Admin/Backup-and-Recovery-Juju.md 2014-04-15 16:06:33 +0000 |
351 | @@ -0,0 +1,59 @@ |
352 | +Title: Backup and Recovery - Juju |
353 | +Status: In Progress |
354 | + |
355 | +# Backup and Recovery - Juju |
356 | + |
357 | +## Introduction |
358 | + |
359 | +**TODO** |
360 | + |
361 | +## Scope |
362 | + |
363 | +**TODO** |
364 | + |
365 | +## Backup |
366 | + |
367 | +Juju's working principle is based on storing the state of the cloud in a |
368 | +database containing information about the environment, machines, services, |
369 | +and units. Changes to an environment are made to the state first and are |
370 | +then detected by the corresponding agents, which carry out the |
371 | +necessary steps. |
372 | + |
373 | +This principle allows Juju to easily make a *backup* of this information, |
374 | +plus some configuration data and other useful information. The |
375 | +command to do so is `juju-backup`, which saves the currently selected |
376 | +environment, so make sure to switch to the environment you want to |
377 | +back up. |
378 | + |
379 | +```` |
380 | +$ juju switch my-env |
381 | +$ juju backup |
382 | +```` |
383 | + |
384 | +The command creates two generations of backups on the bootstrap node, also |
385 | +known as `machine-0`. Besides the state and configuration data about this |
386 | +machine and the other machines of its environment, the aggregated log for |
387 | +all machines and the machine's own log are saved. The aggregated log |
388 | +is the same one you access when calling |
389 | + |
390 | +```` |
391 | +$ juju debug-log |
392 | +```` |
393 | + |
394 | +and enables you to retrieve helpful information in case of a problem. After |
395 | +the backup is created on the bootstrap node it is transferred to the current |
396 | +directory on your working machine as `juju-backup-YYYYMMDD-HHMM.tgz`, |
397 | +where *YYYYMMDD-HHMM* is the date and time of the backup. If you want to open |
398 | +the backup manually to access the mentioned logging data, you'll find it in the |
399 | +contained archive `root.tar`. Don't be surprised by this nesting: it preserves |
400 | +ownership, access rights, and other file metadata. |
401 | + |
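 | +For example, to inspect the logging data inside a backup (the timestamp in the |
 | +file name is just an example, and the nested archive is assumed to land in the |
 | +current directory): |
 | + |
 | +```` |
 | +$ tar xzf juju-backup-20140415-1200.tgz |
 | +$ tar tf root.tar | grep log |
 | +```` |
 | + |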
402 | +## Restore |
403 | + |
404 | +To *restore* an environment the corresponding command is |
405 | + |
406 | +```` |
407 | +$ juju restore <BACKUPFILE> |
408 | +```` |
409 | + |
410 | +This way you're able to choose the specific backup to restore. |
411 | |
412 | === added file 'Admin/Backup-and-Recovery-OpenStack.md' |
413 | --- Admin/Backup-and-Recovery-OpenStack.md 1970-01-01 00:00:00 +0000 |
414 | +++ Admin/Backup-and-Recovery-OpenStack.md 2014-04-15 16:06:33 +0000 |
415 | @@ -0,0 +1,131 @@ |
416 | +Title: Backup and Recovery - OpenStack |
417 | +Status: In Progress |
418 | + |
419 | +# Backup and Recovery - OpenStack |
420 | + |
421 | +## Introduction |
422 | + |
423 | +OpenStack's flexibility makes backup and restore a very individual process |
424 | +depending on the components used. This section describes how to save the critical |
425 | +parts, like the configuration files and databases, that OpenStack needs to run. As |
426 | +before for Juju, it doesn't describe how to back up the objects inside the Object |
427 | +Storage or the data inside the Block Storage. |
428 | + |
429 | +## Scope |
430 | + |
431 | +**TODO** |
432 | + |
433 | +## Backup Cloud Controller Database |
434 | + |
435 | +Like Juju, the OpenStack cloud controller uses a database server which stores the |
436 | +central databases for Nova, Glance, Keystone, Cinder, and Swift. You can back up |
437 | +the five databases into one common dump: |
438 | + |
439 | +```` |
440 | +$ mysqldump --opt --all-databases > openstack.sql |
441 | +```` |
442 | + |
443 | +Alternatively you can back up the database for each component individually: |
444 | + |
445 | +```` |
446 | +$ mysqldump --opt nova > nova.sql |
447 | +$ mysqldump --opt glance > glance.sql |
448 | +$ mysqldump --opt keystone > keystone.sql |
449 | +$ mysqldump --opt cinder > cinder.sql |
450 | +$ mysqldump --opt swift > swift.sql |
451 | +```` |
452 | + |
453 | +## Backup File Systems |
454 | + |
455 | +Besides the databases, OpenStack uses different directories for its configuration, |
456 | +runtime files, and logging. Like the databases, they are grouped individually per |
457 | +component, so the backup can also be done per component. |
458 | + |
459 | +### Nova |
460 | + |
461 | +You'll find the configuration directory `/etc/nova` on the cloud controller and |
462 | +each compute node. It should be regularly backed up. |
463 | + |
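 | +A minimal sketch of such a regular backup (file name and destination are examples): |
 | + |
 | +```` |
 | +$ tar czf /backup/etc-nova-$(date +%Y%m%d).tgz /etc/nova |
 | +```` |
 | + |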
464 | +Another directory to back up is `/var/lib/nova`, but here you have to be careful |
465 | +with the `instances` subdirectory on the compute nodes. It contains the KVM images |
466 | +of the running instances. If you want to maintain backup copies of those instances, |
467 | +you can back them up here too. In that case, make sure not to save a live KVM instance, |
468 | +because it may not boot properly after the backup is restored. |
469 | + |
470 | +The third directory for the compute component is `/var/log/nova`. If you use a central |
471 | +logging server, this directory does not need to be backed up, so we suggest |
472 | +running your environment with this kind of logging. |
473 | + |
474 | +### Glance |
475 | + |
476 | +As for Nova, you'll find the directories `/etc/glance` and `/var/log/glance`; the |
477 | +handling should be the same here too. |
478 | + |
479 | +Glance also uses the directory `/var/lib/glance`, which should likewise be backed |
480 | +up. |
481 | + |
482 | +### Keystone |
483 | + |
484 | +Keystone uses the directories `/etc/keystone`, `/var/lib/keystone`, and |
485 | +`/var/log/keystone`. They follow the same rules as Nova and Glance. Even though |
486 | +the `lib` directory should not contain any data in use, it can also be backed |
487 | +up just in case. |
488 | + |
489 | +### Cinder |
490 | + |
491 | +As before, you'll find the directories `/etc/cinder`, `/var/log/cinder`, |
492 | +and `/var/lib/cinder`, and here too the handling should be the same. Unlike |
493 | +Nova and Glance, there's no special handling of `/var/lib/cinder` needed. |
494 | + |
495 | +### Swift |
496 | + |
497 | +Besides the Swift configuration, the directory `/etc/swift` contains the ring files |
498 | +and the ring builder files. If those get lost, the data on your cluster becomes |
499 | +inaccessible, so it is very important to back up this directory. Best practice |
500 | +is to copy the builder files to the storage nodes along with the ring files, so |
501 | +multiple copies are spread throughout the cluster. |
502 | + |
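 | +A sketch of spreading those copies (the storage node name is a placeholder): |
 | + |
 | +```` |
 | +$ scp /etc/swift/*.builder {storage-node}:/etc/swift/ |
 | +```` |
 | + |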
503 | +**TODO(mue)** Really needed when we use Ceph for storage? |
504 | + |
505 | +## Restore |
506 | + |
507 | +Restoring from the backups is a step-by-step process covering each component's |
508 | +database and all its directories. It's important that the component being restored is |
509 | +not currently running, so always begin the restore by stopping all its processes. |
510 | + |
511 | +Let's take Nova as an example. First execute |
512 | + |
513 | +```` |
514 | +$ stop nova-api |
515 | +$ stop nova-cert |
516 | +$ stop nova-consoleauth |
517 | +$ stop nova-novncproxy |
518 | +$ stop nova-objectstore |
519 | +$ stop nova-scheduler |
520 | +```` |
521 | + |
522 | +on the cloud controller to safely stop the processes of the component. The next step is the |
523 | +restore of the database. By using the `--opt` option during backup we ensured that all |
524 | +tables are initially dropped and there's no conflict with currently existing data in |
525 | +the databases. |
526 | + |
527 | +```` |
528 | +$ mysql nova < nova.sql |
529 | +```` |
530 | + |
531 | +Before restoring the directories you should move at least the configuration directory, |
532 | +here `/etc/nova`, to a secure location in case you need to roll it back. |
533 | + |
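 | +For example, a sketch of setting the configuration directory aside first: |
 | + |
 | +```` |
 | +$ sudo mv /etc/nova /etc/nova.bak-$(date +%Y%m%d) |
 | +```` |
 | + |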
534 | +After the database and the files are restored you can start MySQL and Nova again. |
535 | + |
536 | +```` |
537 | +$ start mysql |
538 | +$ start nova-api |
539 | +$ start nova-cert |
540 | +$ start nova-consoleauth |
541 | +$ start nova-novncproxy |
542 | +$ start nova-objectstore |
543 | +$ start nova-scheduler |
544 | +```` |
545 | + |
546 | +The process for the other components looks similar. |
547 | |
548 | === added file 'Admin/Logging-Juju.md' |
549 | --- Admin/Logging-Juju.md 1970-01-01 00:00:00 +0000 |
550 | +++ Admin/Logging-Juju.md 2014-04-15 16:06:33 +0000 |
551 | @@ -0,0 +1,24 @@ |
552 | +Title: Logging - Juju |
553 | +Status: In Progress |
554 | + |
555 | +# Logging - Juju |
556 | + |
557 | +## Introduction |
558 | + |
559 | +**TODO** |
560 | + |
561 | +## Scope |
562 | + |
563 | +**TODO** |
564 | + |
565 | +## Connecting to rsyslogd |
566 | + |
567 | +Juju already uses `rsyslogd` to aggregate all logs into one centralized log. The |
568 | +target of this logging is the file `/var/log/juju/all-machines.log`. You can directly |
569 | +access it using the command |
570 | + |
571 | +```` |
572 | +$ juju debug-log |
573 | +```` |
574 | + |
575 | +**TODO** Describe a way to redirect this log to a central rsyslogd server. |
576 | |
577 | === added file 'Admin/Logging-OpenStack.md' |
578 | --- Admin/Logging-OpenStack.md 1970-01-01 00:00:00 +0000 |
579 | +++ Admin/Logging-OpenStack.md 2014-04-15 16:06:33 +0000 |
580 | @@ -0,0 +1,92 @@ |
581 | +Title: Logging - OpenStack |
582 | +Status: In Progress |
583 | + |
584 | +# Logging - OpenStack |
585 | + |
586 | +## Introduction |
587 | + |
588 | +**TODO** |
589 | + |
590 | +## Scope |
591 | + |
592 | +**TODO** |
593 | + |
594 | +## Connecting to rsyslogd |
595 | + |
596 | +By default OpenStack writes its logging output to files in directories for each |
597 | +component, like `/var/log/nova` or `/var/log/glance`. To use `rsyslogd` the components |
598 | +have to be configured to also log to `syslog`. When doing this, also configure each component |
599 | +to log to a different syslog facility. This will help you split the logs by individual |
600 | +component on the central logging server. So ensure the following settings: |
601 | + |
602 | +**/etc/nova/nova.conf:** |
603 | + |
604 | +```` |
605 | +use_syslog=True |
606 | +syslog_log_facility=LOG_LOCAL0 |
607 | +```` |
608 | + |
609 | +**/etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:** |
610 | + |
611 | +```` |
612 | +use_syslog=True |
613 | +syslog_log_facility=LOG_LOCAL1 |
614 | +```` |
615 | + |
616 | +**/etc/cinder/cinder.conf:** |
617 | + |
618 | +```` |
619 | +use_syslog=True |
620 | +syslog_log_facility=LOG_LOCAL2 |
621 | +```` |
622 | + |
623 | +**/etc/keystone/keystone.conf:** |
624 | + |
625 | +```` |
626 | +use_syslog=True |
627 | +syslog_log_facility=LOG_LOCAL3 |
628 | +```` |
629 | + |
630 | +The object storage Swift by default already logs to syslog. So you can now tell the local |
631 | +rsyslogd clients to pass the logged information to the logging server. You do this |
632 | +by creating a `/etc/rsyslog.d/client.conf` containing a line like |
633 | + |
634 | +```` |
635 | +*.* @192.168.1.10 |
636 | +```` |
637 | + |
638 | +where the IP address points to your rsyslogd server. It is best to choose a server that is |
639 | +dedicated to this task only. There you've got to create the file `/etc/rsyslog.d/server.conf` |
640 | +containing the settings |
641 | + |
642 | +```` |
643 | +# Enable UDP |
644 | +$ModLoad imudp |
645 | +# Listen on 192.168.1.10 only |
646 | +$UDPServerAddress 192.168.1.10 |
647 | +# Port 514 |
648 | +$UDPServerRun 514 |
649 | +# Create logging templates for nova |
650 | +$template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log" |
651 | +$template NovaAll,"/var/log/rsyslog/nova.log" |
652 | +# Log everything else to syslog.log |
653 | +$template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log" |
654 | +*.* ?DynFile |
655 | +# Log various openstack components to their own individual file |
656 | +local0.* ?NovaFile |
657 | +local0.* ?NovaAll |
658 | +& ~ |
659 | +```` |
660 | + |
661 | +This example contains the settings for Nova only; the other OpenStack components |
662 | +have to be added the same way (see the sketch below). Using two templates per |
663 | +component, one containing the `%HOSTNAME%` variable and one without it, enables a |
664 | +better splitting of the logged data. Consider the two example nodes `alpha.example.com` and `bravo.example.com`. |
665 | +They will write their logging into the files |
666 | + |
667 | +- `/var/log/rsyslog/alpha.example.com/nova.log` - only the data of alpha, |
668 | +- `/var/log/rsyslog/bravo.example.com/nova.log` - only the data of bravo, |
669 | +- `/var/log/rsyslog/nova.log` - the combined data of both. |
670 | + |
671 | +This allows a quick overview of all nodes as well as a focused analysis of an |
672 | +individual node. |
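 | + |
 | +Following the same pattern, a sketch of the corresponding Glance entries for |
 | +`server.conf` (facility `local1`, as configured earlier): |
 | + |
 | +```` |
 | +$template GlanceFile,"/var/log/rsyslog/%HOSTNAME%/glance.log" |
 | +$template GlanceAll,"/var/log/rsyslog/glance.log" |
 | +local1.* ?GlanceFile |
 | +local1.* ?GlanceAll |
 | +& ~ |
 | +```` |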
673 | |
674 | === added file 'Admin/Logging.md' |
675 | --- Admin/Logging.md 1970-01-01 00:00:00 +0000 |
676 | +++ Admin/Logging.md 2014-04-15 16:06:33 +0000 |
677 | @@ -0,0 +1,15 @@ |
678 | +Title: Logging |
679 | +Status: In Progress |
680 | + |
681 | +# Logging |
682 | + |
683 | +Keeping track of individual logs is a cumbersome job, even in an environment with only |
684 | +a few computer systems, and it's even worse in typical clouds with a large number of |
685 | +nodes. Here the centralized approach using `rsyslogd` helps. It allows you to aggregate |
686 | +the logging output of all systems in one place, where monitoring and analysis become |
687 | +much simpler. |
688 | + |
689 | +Ubuntu uses `rsyslogd` as the default logging service. Since it is natively able to send |
690 | +logs to a remote location, you don't have to install anything extra to enable this feature, |
691 | +just modify the configuration file. In doing this, consider running your logging over |
692 | +a management network or using an encrypted VPN to avoid interception. |
693 | |
694 | === added file 'Admin/Scaling-Ceph.md' |
695 | --- Admin/Scaling-Ceph.md 1970-01-01 00:00:00 +0000 |
696 | +++ Admin/Scaling-Ceph.md 2014-04-15 16:06:33 +0000 |
697 | @@ -0,0 +1,36 @@ |
698 | +Title: Scaling - Ceph |
699 | +Status: In Progress |
700 | + |
701 | +# Scaling - Ceph |
702 | + |
703 | +## Introduction |
704 | + |
705 | +Besides the redundancy for more safety and the higher performance gained by using |
706 | +Ceph as the storage backend for OpenStack, the user also benefits from a simpler way |
707 | +of scaling the storage as needs grow. |
708 | + |
709 | +## Scope |
710 | + |
711 | +**TODO** |
712 | + |
713 | +## Scaling |
714 | + |
715 | +The addition of Ceph nodes is done using the Juju `add-unit` command. By default |
716 | +it adds only one node, but it is possible to pass the number of wanted nodes as an |
717 | +argument. To add one more Ceph OSD Daemon node you simply call |
718 | + |
719 | +``` |
720 | +juju add-unit ceph-osd |
721 | +``` |
722 | + |
723 | +Larger numbers of nodes can be added using the `-n` argument, e.g. 5 nodes |
724 | +with |
725 | + |
726 | +``` |
727 | +juju add-unit -n 5 ceph-osd |
728 | +``` |
729 | + |
730 | +**Attention:** Adding more nodes to Ceph leads to a redistribution of data |
731 | +between the nodes. This can cause inefficiencies while it takes place, so |
732 | +it should be done in smaller steps. |
733 | + |
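 | +While the cluster rebalances, its progress can be watched with the standard Ceph |
 | +status commands (a sketch; run on any node with client access): |
 | + |
 | +``` |
 | +ceph status |
 | +ceph -w |
 | +``` |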
734 | |
735 | === added file 'Admin/Upgrading-and-Patching-Juju.md' |
736 | --- Admin/Upgrading-and-Patching-Juju.md 1970-01-01 00:00:00 +0000 |
737 | +++ Admin/Upgrading-and-Patching-Juju.md 2014-04-15 16:06:33 +0000 |
738 | @@ -0,0 +1,45 @@ |
739 | +Title: Upgrading and Patching - Juju |
740 | +Status: In Progress |
741 | + |
742 | +# Upgrading and Patching - Juju |
743 | + |
744 | +## Introduction |
745 | + |
746 | +**TODO** |
747 | + |
748 | +## Scope |
749 | + |
750 | +**TODO** |
751 | + |
752 | +## Upgrading |
753 | + |
754 | +The upgrade of a Juju environment is done using the Juju client and its command |
755 | + |
756 | +```` |
757 | +$ juju upgrade-juju |
758 | +```` |
759 | + |
760 | +This command sets the version number for all Juju agents to run. By default this |
761 | +is the most recent supported version compatible with the command-line tools version. |
762 | +So ensure that you've upgraded the Juju client first. |
763 | + |
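 | +A sketch of doing that on Ubuntu (assuming the client is installed from the |
 | +`juju-core` package): |
 | + |
 | +```` |
 | +$ sudo apt-get update |
 | +$ sudo apt-get install juju-core |
 | +$ juju version |
 | +```` |
 | + |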
764 | +When run without arguments, `upgrade-juju` will try to upgrade to the following |
765 | +versions, in order of preference and depending on the current value of the |
766 | +environment's `agent-version` setting: |
767 | + |
768 | +- The highest patch.build version of the *next* stable major.minor version. |
769 | +- The highest patch.build version of the *current* major.minor version. |
770 | + |
771 | +Both of these depend on the availability of the corresponding tools. On MAAS you've |
772 | +got to manage this yourself using the command |
773 | + |
774 | +```` |
775 | +$ juju sync-tools |
776 | +```` |
777 | + |
778 | +This copies the Juju tools tarball from the official tools store (located |
779 | +at https://streams.canonical.com/juju) into your environment. |
780 | + |
781 | +## Patching |
782 | + |
783 | +**TODO** |
784 | |
785 | === added file 'Admin/Upgrading-and-Patching-OpenStack.md' |
786 | --- Admin/Upgrading-and-Patching-OpenStack.md 1970-01-01 00:00:00 +0000 |
787 | +++ Admin/Upgrading-and-Patching-OpenStack.md 2014-04-15 16:06:33 +0000 |
788 | @@ -0,0 +1,83 @@ |
789 | +Title: Upgrading and Patching - OpenStack |
790 | +Status: In Progress |
791 | + |
792 | +# Upgrading and Patching - OpenStack |
793 | + |
794 | +## Introduction |
795 | + |
796 | +**TODO** |
797 | + |
798 | +## Scope |
799 | + |
800 | +**TODO** |
801 | + |
802 | +## Upgrading |
803 | + |
804 | +Upgrading an OpenStack cluster in one big step is an approach that requires additional |
805 | +hardware to set up an upgraded cloud beside the production one, and it leads to a longer |
806 | +outage while your cloud is in read-only mode, the state is transferred to the new |
807 | +cloud, and the environments are switched. So the preferred way of upgrading an OpenStack |
808 | +cloud is a rolling upgrade of each component of the system, piece by piece. |
809 | + |
810 | +Here you can choose between in-place and side-by-side upgrades. The first one needs |
811 | +to shut down the affected component while you're performing its upgrade. Additionally you |
812 | +may have trouble in case of a rollback. So to avoid this the side-by-side upgrade is |
813 | +the preferred way here. |
814 | + |
815 | +Before starting the upgrade itself you should |
816 | + |
817 | +- Perform some "cleaning" of the environment process to ensure a consistent state; for |
818 | + example, instances not fully purged from the system after deletion may cause |
819 | + indeterminate behavior |
820 | +- Read the release notes and documentation |
821 | +- Find incompatibilities between your versions |
822 | + |
823 | +The upgrade tasks here follow the same procedure for each component: |
824 | + |
825 | +1. Configure the new worker |
826 | +1. Turn off the current worker; during this time hide the downtime using a message |
827 | + queue or a load balancer |
828 | +1. Take a backup as described earlier of the old worker for a rollback |
829 | +1. Copy the state of the current to the new worker |
830 | +1. Start up the new worker |
831 | + |
832 | +Now repeat these steps for each worker in an appropriate order. In case of a problem it |
833 | +should be easy to roll back as long as the former worker stays untouched. This is, |
834 | +besides the shorter downtime, the most important advantage of the side-by-side upgrade. |
835 | + |
836 | +The following order for service upgrades seems the most successful: |
837 | + |
838 | +1. Upgrade the OpenStack Identity Service (Keystone). |
839 | +1. Upgrade the OpenStack Image Service (Glance). |
840 | +1. Upgrade OpenStack Compute (Nova), including networking components. |
841 | +1. Upgrade OpenStack Block Storage (Cinder). |
842 | +1. Upgrade the OpenStack dashboard. |
843 | + |
844 | +These steps look very easy, but they still form a complex procedure depending on your cloud |
845 | +configuration. So we recommend having a testing environment with a near-identical |
846 | +architecture to your production system. This doesn't mean that you should use the same |
847 | +sizes and hardware, which would be best but expensive. There are some ways to reduce |
848 | +the cost. |
849 | + |
850 | +- Use your own cloud. The simplest place to start testing the next version of OpenStack |
851 | + is by setting up a new environment inside your own cloud. This may seem odd—especially |
852 | + the double virtualisation used in running compute nodes—but it's a sure way to very |
853 | + quickly test your configuration. |
854 | +- Use a public cloud. Especially because your own cloud is unlikely to have sufficient |
855 | + space to scale test to the level of the entire cloud, consider using a public cloud |
856 | + to test the scalability limits of your cloud controller configuration. Most public |
857 | + clouds bill by the hour, which means it can be inexpensive to perform even a test |
858 | + with many nodes. |
859 | +- Make another storage endpoint on the same system. If you use an external storage plug-in |
860 | + or shared file system with your cloud, in many cases it's possible to test that it |
861 | + works by creating a second share or endpoint. This will enable you to test the system |
862 | + before entrusting the new version onto your storage. |
863 | +- Watch the network. Even at smaller-scale testing, it should be possible to determine |
864 | + whether something is going horribly wrong in intercomponent communication if you |
865 | + look at the network packets and see too many. |
866 | + |
867 | +**TODO** Add more concrete description here. |
868 | + |
869 | +## Patching |
870 | + |
871 | +**TODO** |
872 | |
873 | === removed file 'Appendix-Ceph-and-OpenStack.md' |
874 | --- Appendix-Ceph-and-OpenStack.md 2014-04-02 16:18:10 +0000 |
875 | +++ Appendix-Ceph-and-OpenStack.md 1970-01-01 00:00:00 +0000 |
876 | @@ -1,229 +0,0 @@ |
877 | -Title: Appendix - Ceph and OpenStack |
878 | -Status: Done |
879 | - |
880 | -# Appendix: Ceph and OpenStack |
881 | - |
882 | -Ceph stripes block device images as objects across a cluster. This way it provides |
883 | -a better performance than standalone server. OpenStack is able to use Ceph Block Devices |
884 | -through `libvirt`, which configures the QEMU interface to `librbd`. |
885 | - |
886 | -To use Ceph Block Devices with OpenStack, you must install QEMU, `libvirt`, and OpenStack |
887 | -first. It's recommended to use a separate physical node for your OpenStack installation. |
888 | -OpenStack recommends a minimum of 8GB of RAM and a quad-core processor. |
889 | - |
890 | -Three parts of OpenStack integrate with Ceph’s block devices: |
891 | - |
892 | -- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack |
893 | - treats images as binary blobs and downloads them accordingly. |
894 | -- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to |
895 | - attach volumes to running VMs. OpenStack manages volumes using Cinder services. |
896 | -- Guest Disks: Guest disks are guest operating system disks. By default, when you |
897 | - boot a virtual machine, its disk appears as a file on the filesystem of the |
898 | - hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior OpenStack Havana, |
899 | - the only way to boot a VM in Ceph was to use the boot from volume functionality |
900 | - from Cinder. However, now it is possible to directly boot every virtual machine |
901 | - inside Ceph without using Cinder. This is really handy because it allows us to |
902 | - easily perform maintenance operation with the live-migration process. On the other |
903 | - hand, if your hypervisor dies it is also really convenient to trigger Nova evacuate |
904 | - and almost seamlessly run the virtual machine somewhere else. |
905 | - |
906 | -You can use OpenStack Glance to store images in a Ceph Block Device, and you can |
907 | -use Cinder to boot a VM using a copy-on-write clone of an image. |
908 | - |
909 | -## Create a pool |
910 | - |
911 | -By default, Ceph block devices use the `rbd` pool. You may use any available pool. |
912 | -We recommend creating a pool for Cinder and a pool for Glance. Ensure your Ceph |
913 | -cluster is running, then create the pools. |
914 | - |
915 | -```` |
916 | -ceph osd pool create volumes 128 |
917 | -ceph osd pool create images 128 |
918 | -ceph osd pool create backups 128 |
919 | -```` |
920 | - |
921 | -## Configure OpenStack Ceph Clients |
922 | - |
923 | -The nodes running `glance-api`, `cinder-volume`, `nova-compute` and `cinder-backup` act |
924 | -as Ceph clients. Each requires the `ceph.conf` file |
925 | - |
926 | -```` |
927 | -ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf |
928 | -```` |
929 | - |
930 | -On the `glance-api` node, you’ll need the Python bindings for `librbd` |
931 | - |
932 | -```` |
933 | -sudo apt-get install python-ceph |
934 | -sudo yum install python-ceph |
935 | -```` |
936 | - |
937 | -On the `nova-compute`, `cinder-backup` and on the `cinder-volume` node, use both the |
938 | -Python bindings and the client command line tools |
939 | - |
940 | -```` |
941 | -sudo apt-get install ceph-common |
942 | -sudo yum install ceph |
943 | -```` |
944 | - |
945 | -If you have cephx authentication enabled, create a new user for Nova/Cinder and |
946 | -Glance. Execute the following |
947 | - |
948 | -```` |
949 | -ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' |
950 | -ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' |
951 | -ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' |
952 | -```` |
953 | - |
954 | -Add the keyrings for `client.cinder`, `client.glance`, and `client.cinder-backup` |
955 | -to the appropriate nodes and change their ownership |
956 | - |
957 | -```` |
958 | -ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring |
959 | -ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring |
960 | -ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring |
961 | -ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring |
962 | -ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring |
963 | -ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring |
964 | -```` |
965 | - |
966 | -Nodes running `nova-compute` need the keyring file for the `nova-compute` process. |
967 | -They also need to store the secret key of the `client.cinder` user in `libvirt`. The |
968 | -`libvirt` process needs it to access the cluster while attaching a block device |
969 | -from Cinder. |
970 | - |
971 | -Create a temporary copy of the secret key on the nodes running `nova-compute` |
972 | - |
973 | -```` |
974 | -ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key |
975 | -```` |
976 | - |
977 | -Then, on the compute nodes, add the secret key to `libvirt` and remove the |
978 | -temporary copy of the key |
979 | - |
980 | -```` |
981 | -uuidgen |
982 | -457eb676-33da-42ec-9a8c-9293d545c337 |
983 | - |
984 | -cat > secret.xml <<EOF |
985 | -<secret ephemeral='no' private='no'> |
986 | - <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid> |
987 | - <usage type='ceph'> |
988 | - <name>client.cinder secret</name> |
989 | - </usage> |
990 | -</secret> |
991 | -EOF |
992 | -sudo virsh secret-define --file secret.xml |
993 | -Secret 457eb676-33da-42ec-9a8c-9293d545c337 created |
994 | -sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml |
995 | -```` |
996 | - |
997 | -Save the uuid of the secret for configuring `nova-compute` later. |
998 | - |
999 | -**Important** You don’t necessarily need the UUID on all the compute nodes. |
1000 | -However from a platform consistency perspective it’s better to keep the |
1001 | -same UUID. |
1002 | - |
1003 | -## Configure OpenStack to use Ceph |
1004 | - |
1005 | -### Glance |
1006 | - |
1007 | -Glance can use multiple back ends to store images. To use Ceph block devices |
1008 | -by default, edit `/etc/glance/glance-api.conf` and add |
1009 | - |
1010 | -```` |
1011 | -default_store=rbd |
1012 | -rbd_store_user=glance |
1013 | -rbd_store_pool=images |
1014 | -```` |
1015 | - |
1016 | -If want to enable copy-on-write cloning of images into volumes, also add: |
1017 | - |
1018 | -```` |
1019 | -show_image_direct_url=True |
1020 | -```` |
1021 | - |
1022 | -Note that this exposes the back end location via Glance’s API, so |
1023 | -the endpoint with this option enabled should not be publicly |
1024 | -accessible. |
1025 | - |
1026 | -### Cinder |
1027 | - |
1028 | -OpenStack requires a driver to interact with Ceph block devices. You |
1029 | -must also specify the pool name for the block device. On your |
1030 | -OpenStack node, edit `/etc/cinder/cinder.conf` by adding |
1031 | - |
1032 | -```` |
1033 | -volume_driver=cinder.volume.drivers.rbd.RBDDriver |
1034 | -rbd_pool=volumes |
1035 | -rbd_ceph_conf=/etc/ceph/ceph.conf |
1036 | -rbd_flatten_volume_from_snapshot=false |
1037 | -rbd_max_clone_depth=5 |
1038 | -glance_api_version=2 |
1039 | -```` |
1040 | - |
1041 | -If you’re using cephx authentication, also configure the user and |
1042 | -uuid of the secret you added to `libvirt` as documented earlier |
1043 | - |
1044 | -```` |
1045 | -rbd_user=cinder |
1046 | -rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337 |
1047 | -```` |
1048 | - |
1049 | -## Cinder Backup |
1050 | - |
1051 | -OpenStack Cinder Backup requires a specific daemon so don’t |
1052 | -forget to install it. On your Cinder Backup node, |
1053 | -edit `/etc/cinder/cinder.conf` and add: |
1054 | - |
1055 | -```` |
1056 | -backup_driver=cinder.backup.drivers.ceph |
1057 | -backup_ceph_conf=/etc/ceph/ceph.conf |
1058 | -backup_ceph_user=cinder-backup |
1059 | -backup_ceph_chunk_size=134217728 |
1060 | -backup_ceph_pool=backups |
1061 | -backup_ceph_stripe_unit=0 |
1062 | -backup_ceph_stripe_count=0 |
1063 | -restore_discard_excess_bytes=true |
1064 | -```` |
1065 | - |
1066 | -### Nova |
1067 | - |
1068 | -In order to boot all the virtual machines directly into Ceph Nova must be |
1069 | -configured. On every Compute nodes, edit `/etc/nova/nova.conf` and add |
1070 | - |
1071 | -```` |
1072 | -libvirt_images_type=rbd |
1073 | -libvirt_images_rbd_pool=volumes |
1074 | -libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf |
1075 | -rbd_user=cinder |
1076 | -rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337 |
1077 | -```` |
1078 | - |
1079 | -It is also a good practice to disable any file injection. Usually, while |
1080 | -booting an instance Nova attempts to open the rootfs of the virtual machine. |
1081 | -Then, it injects directly into the filesystem things like: password, ssh |
1082 | -keys etc... At this point, it is better to rely on the metadata service |
1083 | -and cloud-init. On every Compute nodes, edit `/etc/nova/nova.conf` and add |
1084 | - |
1085 | -```` |
1086 | -libvirt_inject_password=false |
1087 | -libvirt_inject_key=false |
1088 | -libvirt_inject_partition=-2 |
1089 | -```` |
1090 | - |
1091 | -## Restart OpenStack |
1092 | - |
1093 | -To activate the Ceph block device driver and load the block device pool name |
1094 | -into the configuration, you must restart OpenStack. |
1095 | - |
1096 | -```` |
1097 | -sudo glance-control api restart |
1098 | -sudo service nova-compute restart |
1099 | -sudo service cinder-volume restart |
1100 | -sudo service cinder-backup restart |
1101 | -```` |
1102 | - |
1103 | -Once OpenStack is up and running, you should be able to create a volume |
1104 | -and boot from it. |
1105 | - |
1106 | |
1107 | === removed file 'Backup-and-Recovery-Ceph.md' |
1108 | --- Backup-and-Recovery-Ceph.md 2014-04-02 16:18:10 +0000 |
1109 | +++ Backup-and-Recovery-Ceph.md 1970-01-01 00:00:00 +0000 |
1110 | @@ -1,107 +0,0 @@ |
1111 | -Title: Backup and Recovery - Ceph |
1112 | -Status: In Progress |
1113 | - |
1114 | -# Backup and Recovery - Ceph |
1115 | - |
1116 | -## Introduction |
1117 | - |
1118 | -A snapshot is a read-only copy of the state of an image at a particular point in time. One |
1119 | -of the advanced features of Ceph block devices is that you can create snapshots of the images |
1120 | -to retain a history of an image’s state. Ceph also supports snapshot layering, which allows |
1121 | -you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots |
1122 | -using the `rbd` command and many higher level interfaces including OpenStack. |
1123 | - |
1124 | -## Scope |
1125 | - |
1126 | -**TODO** |
1127 | - |
1128 | -## Backup |
1129 | - |
1130 | -To create a snapshot with `rbd`, specify the `snap create` option, the pool name and the |
1131 | -image name. |
1132 | - |
1133 | -```` |
1134 | -rbd --pool {pool-name} snap create --snap {snap-name} {image-name} |
1135 | -rbd snap create {pool-name}/{image-name}@{snap-name} |
1136 | -```` |
1137 | - |
1138 | -For example: |
1139 | - |
1140 | -```` |
1141 | -rbd --pool rbd snap create --snap snapname foo |
1142 | -rbd snap create rbd/foo@snapname |
1143 | -```` |
1144 | - |
1145 | -## Restore |
1146 | - |
1147 | -To rollback to a snapshot with `rbd`, specify the `snap rollback` option, the pool name, the |
1148 | -image name and the snap name. |
1149 | - |
1150 | -```` |
1151 | -rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name} |
1152 | -rbd snap rollback {pool-name}/{image-name}@{snap-name} |
1153 | -```` |
1154 | - |
1155 | -For example: |
1156 | - |
1157 | -```` |
1158 | -rbd --pool rbd snap rollback --snap snapname foo |
1159 | -rbd snap rollback rbd/foo@snapname |
1160 | -```` |
1161 | - |
1162 | -**Note:** Rolling back an image to a snapshot means overwriting the current version of the image |
1163 | -with data from a snapshot. The time it takes to execute a rollback increases with the size of the |
1164 | -image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is |
1165 | -the preferred method of returning to a pre-existing state. |
1166 | - |
1167 | -## Maintenance |
1168 | - |
1169 | -Taking snapshots increases your level of security but also costs disk space. To delete older ones |
1170 | -you can list them, delete individual ones or purge all snapshots. |
1171 | - |
1172 | -To list snapshots of an image, specify the pool name and the image name. |
1173 | - |
1174 | -```` |
1175 | -rbd --pool {pool-name} snap ls {image-name} |
1176 | -rbd snap ls {pool-name}/{image-name} |
1177 | -```` |
1178 | - |
1179 | -For example: |
1180 | - |
1181 | -```` |
1182 | -rbd --pool rbd snap ls foo |
1183 | -rbd snap ls rbd/foo |
1184 | -```` |
1185 | - |
1186 | -To delete a snapshot with `rbd`, specify the `snap rm` option, the pool name, the image name |
1187 | -and the username. |
1188 | - |
1189 | -```` |
1190 | -rbd --pool {pool-name} snap rm --snap {snap-name} {image-name} |
1191 | -rbd snap rm {pool-name}/{image-name}@{snap-name} |
1192 | -```` |
1193 | - |
1194 | -For example: |
1195 | - |
1196 | -```` |
1197 | -rbd --pool rbd snap rm --snap snapname foo |
1198 | -rbd snap rm rbd/foo@snapname |
1199 | -```` |
1200 | - |
1201 | -**Note:** Ceph OSDs delete data asynchronously, so deleting a snapshot doesn’t free up the |
1202 | -disk space immediately. |
1203 | - |
1204 | -To delete all snapshots for an image with `rbd`, specify the snap purge option and the |
1205 | -image name. |
1206 | - |
1207 | -```` |
1208 | -rbd --pool {pool-name} snap purge {image-name} |
1209 | -rbd snap purge {pool-name}/{image-name} |
1210 | -```` |
1211 | - |
1212 | -For example: |
1213 | - |
1214 | -```` |
1215 | -rbd --pool rbd snap purge foo |
1216 | -rbd snap purge rbd/foo |
1217 | -```` |
1218 | |
1219 | === removed file 'Backup-and-Recovery-Juju.md' |
1220 | --- Backup-and-Recovery-Juju.md 2014-04-02 16:18:10 +0000 |
1221 | +++ Backup-and-Recovery-Juju.md 1970-01-01 00:00:00 +0000 |
1222 | @@ -1,59 +0,0 @@ |
1223 | -Title: Backup and Recovery - Juju |
1224 | -Status: In Progress |
1225 | - |
1226 | -# Backup and Recovery - Juju |
1227 | - |
1228 | -## Introduction |
1229 | - |
1230 | -**TODO** |
1231 | - |
1232 | -## Scope |
1233 | - |
1234 | -**TODO** |
1235 | - |
1236 | -## Backup |
1237 | - |
1238 | -Jujus working principle is based on storing the state of the cloud in |
1239 | -database containing information about the environment, machines, services, |
1240 | -and units. Changes to an environment are made to the state first, which are |
1241 | -then detected by their according agents. Those are responsible to do the |
1242 | -needed steps then. |
1243 | - |
1244 | -This principle allows Juju to easily do a *backup* of this information, plus |
1245 | -some needed configuration data and some more useful information more. The |
1246 | -command to do so is `juju-backup`, which saves the currently selected |
1247 | -environment. So please make sure to switch to the environment you want to |
1248 | -backup. |
1249 | - |
1250 | -```` |
1251 | -$ juju switch my-env |
1252 | -$ juju backup |
1253 | -```` |
1254 | - |
1255 | -The command creates two generations of backups on the bootstrap node, also |
1256 | -know as `machine-0`. Beside the state and configuration data about this machine |
1257 | -itself and the other ones of its environment the aggregated log for all |
1258 | -machines and the one of this machine itself are saved. The aggregated log |
1259 | -is the same you're accessing when calling |
1260 | - |
1261 | -```` |
1262 | -$ juju debug-log |
1263 | -```` |
1264 | - |
1265 | -and enables you to retrieve helpful information in case of a problem. After |
1266 | -the backup is created on the bootstrap node it is transferred to your |
1267 | -working machine into the current directory as `juju-backup-YYYYMMDD-HHMM.tgz`, |
1268 | -where *YYYYMMDD-HHMM* is date and time of the backup. In case you want to open |
1269 | -the backup manually to access the mentioned logging data you'll find it in the |
1270 | -contained archive `root.tar`. Here please don't wonder, this way all owner, |
1271 | -access rights and other information are preserved. |
1272 | - |
1273 | -## Restore |
1274 | - |
1275 | -To *restore* an environment the according command is |
1276 | - |
1277 | -```` |
1278 | -$ juju restore <BACKUPFILE> |
1279 | -```` |
1280 | - |
1281 | -This way you're able to choose the concrete environment to restore. |
1282 | |
1283 | === removed file 'Backup-and-Recovery-OpenStack.md' |
1284 | --- Backup-and-Recovery-OpenStack.md 2014-04-02 16:18:10 +0000 |
1285 | +++ Backup-and-Recovery-OpenStack.md 1970-01-01 00:00:00 +0000 |
1286 | @@ -1,131 +0,0 @@ |
1287 | -Title: Backup and Recovery - OpenStack |
1288 | -Status: In Progress |
1289 | - |
1290 | -# Backup and Recovery - OpenStack |
1291 | - |
1292 | -## Introduction |
1293 | - |
1294 | -The OpenStack flexibility makes backup and restore to a very individual process |
1295 | -depending on the used components. This section describes how the critical parts |
1296 | -like the configuration files and databases OpenStack needs to run are saved. As |
1297 | -before for Juju it doesn't describe ho to back up the objects inside the Object |
1298 | -Storage or the data inside the Block Storage. |
1299 | - |
1300 | -## Scope |
1301 | - |
1302 | -**TODO** |
1303 | - |
1304 | -## Backup Cloud Controller Database |
1305 | - |
1306 | -Like Juju the OpenStack cloud controller uses a database server which stores the |
1307 | -central databases for Nova, Glance, Keystone, Cinder, and Switft. You can backup |
1308 | -the five databases into one common dump: |
1309 | - |
1310 | -```` |
1311 | -$ mysqldump --opt --all-databases > openstack.sql |
1312 | -```` |
1313 | - |
1314 | -Alternatively you can backup the database for each component individually: |
1315 | - |
1316 | -```` |
1317 | -$ mysqldump --opt nova > nova.sql |
1318 | -$ mysqldump --opt glance > glance.sql |
1319 | -$ mysqldump --opt keystone > keystone.sql |
1320 | -$ mysqldump --opt cinder > cinder.sql |
1321 | -$ mysqldump --opt swift > swift.sql |
1322 | -```` |
1323 | - |
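     | -If you want these dumps made on a regular basis, a minimal sketch is a cron
     | -entry (the schedule and the target directory are just examples):
     | -
     | -````
     | -# /etc/cron.d/openstack-db-backup (example path)
     | -# Dump all OpenStack databases nightly at 02:00 into a dated file.
     | -0 2 * * * root mysqldump --opt --all-databases > /var/backups/openstack-$(date +\%F).sql
     | -````
     | -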
1324 | -## Backup File Systems |
1325 | - |
1326 | -Besides the databases, OpenStack uses several directories for its configuration,
1327 | -runtime files, and logging. Like the databases, they are grouped per
1328 | -component, so the backup can also be done per component.
1329 | - |
1330 | -### Nova |
1331 | - |
1332 | -You'll find the configuration directory `/etc/nova` on the cloud controller and |
1333 | -each compute node. It should be regularly backed up. |
1334 | - |
1335 | -Another directory to back up is `/var/lib/nova`. But here you have to be careful
1336 | -with the `instances` subdirectory on the compute nodes, which contains the KVM
1337 | -images of the running instances. If you want to maintain backup copies of those
1338 | -instances you can back them up here too. In this case make sure not to save a
1339 | -live KVM instance, because it may not boot properly after the backup is restored.
1340 | - |
1341 | -The third directory for the compute component is `/var/log/nova`. If you use a
1342 | -central logging server, this directory does not need to be backed up, so we
1343 | -suggest running your environment with this kind of logging.
1344 | - |
1345 | -### Glance |
1346 | - |
1347 | -As with Nova, you'll find the directories `/etc/glance` and `/var/log/glance`;
1348 | -the handling should be the same here too.
1349 | -
1350 | -Glance also uses the directory `/var/lib/glance`, which should be backed up as
1351 | -well.
1352 | - |
1353 | -### Keystone |
1354 | - |
1355 | -Keystone uses the directories `/etc/keystone`, `/var/lib/keystone`, and
1356 | -`/var/log/keystone`. They follow the same rules as Nova and Glance. Even
1357 | -though the `lib` directory should not contain any data in use, it can also be
1358 | -backed up just in case.
1359 | - |
1360 | -### Cinder |
1361 | - |
1362 | -As before, you'll find the directories `/etc/cinder`, `/var/log/cinder`,
1363 | -and `/var/lib/cinder`, and here too the handling should be the same. Unlike
1364 | -Nova and Glance, there's no special handling of `/var/lib/cinder` needed.
1365 | - |
1366 | -### Swift |
1367 | - |
1368 | -Besides the Swift configuration, the directory `/etc/swift` contains the ring files
1369 | -and the ring builder files. If those are lost, the data in your cluster becomes
1370 | -inaccessible, so you can easily imagine how important it is to back up this
1371 | -directory. Best practice is to copy the builder files to the storage nodes along
1372 | -with the ring files, so that multiple copies are spread throughout the cluster.
1373 | - |
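     | -A simple way to do this is to archive the whole directory, for example (the
     | -destination path is just an example):
     | -
     | -````
     | -$ tar -czf /var/backups/swift-rings-$(date +%F).tgz /etc/swift
     | -````
     | -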
1374 | -**TODO(mue)** Really needed when we use Ceph for storage? |
1375 | - |
1376 | -## Restore |
1377 | - |
1378 | -Restoring from the backups is a step-by-step process covering each component's
1379 | -database and all of its directories. It's important that the component being
1380 | -restored is not running, so always start a restore by stopping all components.
1381 | - |
1382 | -Let's take Nova as an example. First execute |
1383 | - |
1384 | -```` |
1385 | -$ stop nova-api |
1386 | -$ stop nova-cert |
1387 | -$ stop nova-consoleauth |
1388 | -$ stop nova-novncproxy |
1389 | -$ stop nova-objectstore |
1390 | -$ stop nova-scheduler |
1391 | -```` |
1392 | - |
1393 | -on the cloud controller to safely stop the component's processes. The next step is
1394 | -restoring the database. By using the `--opt` option during backup we ensured that all
1395 | -tables are initially dropped, so there's no conflict with currently existing data in
1396 | -the databases.
1397 | - |
1398 | -```` |
1399 | -$ mysql nova < nova.sql |
1400 | -```` |
1401 | - |
1402 | -Before restoring the directories you should move at least the configuration directory,
1403 | -here `/etc/nova`, to a secure location in case you need to roll it back.
1404 | - |
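     | -For example (the backup location used here is hypothetical):
     | -
     | -````
     | -$ mv /etc/nova /etc/nova.bak
     | -$ cp -a /var/backups/nova/etc/nova /etc/nova
     | -````
     | -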
1405 | -After the database and the files are restored you can start MySQL and Nova again. |
1406 | - |
1407 | -```` |
1408 | -$ start mysql |
1409 | -$ start nova-api |
1410 | -$ start nova-cert |
1411 | -$ start nova-consoleauth |
1412 | -$ start nova-novncproxy |
1413 | -$ start nova-objectstore |
1414 | -$ start nova-scheduler |
1415 | -```` |
1416 | - |
1417 | -The process for the other components looks similar.
1418 | |
1419 | === added directory 'Install' |
1420 | === added file 'Install/Installing-Ceph.md' |
1421 | --- Install/Installing-Ceph.md 1970-01-01 00:00:00 +0000 |
1422 | +++ Install/Installing-Ceph.md 2014-04-15 16:06:33 +0000 |
1423 | @@ -0,0 +1,56 @@ |
1424 | +Title: Installing - Ceph |
1425 | +Status: Review |
1426 | + |
1427 | +# Installing - Ceph |
1428 | + |
1429 | +## Introduction |
1430 | + |
1431 | +Typically OpenStack uses the local storage of its nodes for the configuration data
1432 | +as well as for the object storage provided by Swift and the block storage provided by
1433 | +Cinder and Glance. But it can also use Ceph as a storage backend. Ceph stripes block
1434 | +device images across a cluster, which provides better performance than a typical
1435 | +standalone server. It allows scalability and redundancy needs to be satisfied, and
1436 | +Cinder's RBD driver can be used to create, export and connect volumes to instances.
1437 | + |
1438 | +## Scope |
1439 | + |
1440 | +This document covers the deployment of Ceph via Juju. Other related documents are:
1441 | + |
1442 | +- [Scaling Ceph](Scaling-Ceph.md) |
1443 | +- [Troubleshooting Ceph](Troubleshooting-Ceph.md) |
1444 | +- [Appendix Ceph and OpenStack](Appendix-Ceph-and-OpenStack.md) |
1445 | + |
1446 | +## Deployment |
1447 | + |
1448 | +During the installation of OpenStack we've already seen the deployment of Ceph via |
1449 | + |
1450 | +``` |
1451 | +juju deploy --config openstack-config.yaml -n 3 ceph |
1452 | +juju deploy --config openstack-config.yaml -n 10 ceph-osd |
1453 | +``` |
1454 | + |
1455 | +This will install three Ceph nodes configured with the information contained in the
1456 | +file `openstack-config.yaml`. This file contains the configuration `block-device: None`
1457 | +for Cinder, so that this component does not use the local disk but Ceph instead.
1458 | +Additionally, 10 Ceph OSD nodes providing the object storage are deployed and related
1459 | +to the Ceph nodes by
1460 | + |
1461 | +``` |
1462 | +juju add-relation ceph-osd ceph |
1463 | +``` |
1464 | + |
1465 | +Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm,
1466 | +which will scan for the configured storage devices and add them to the pool of available
1467 | +storage. Now the relations to Cinder and Glance can be established with
1468 | + |
1469 | +``` |
1470 | +juju add-relation cinder ceph |
1471 | +juju add-relation glance ceph |
1472 | +``` |
1473 | + |
1474 | +so that both are using the storage provided by Ceph. |
1475 | + |
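     | +To verify that the cluster is up and healthy once the relations have settled,
     | +a quick check might look like this (a sketch; the unit name depends on your
     | +deployment):
     | +
     | +```
     | +juju ssh ceph/0 "sudo ceph -s"
     | +```
     | +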
1476 | +## See also |
1477 | + |
1478 | +- https://manage.jujucharms.com/charms/precise/ceph |
1479 | +- https://manage.jujucharms.com/charms/precise/ceph-osd |
1480 | |
1481 | === added file 'Install/Installing-MAAS.md' |
1482 | --- Install/Installing-MAAS.md 1970-01-01 00:00:00 +0000 |
1483 | +++ Install/Installing-MAAS.md 2014-04-15 16:06:33 +0000 |
1484 | @@ -0,0 +1,467 @@ |
1485 | +Title: Installing MAAS |
1486 | +Status: In progress |
1487 | +Notes: |
1488 | + |
1489 | + |
1490 | + |
1491 | + |
1492 | + |
1493 | +#Installing the MAAS software |
1494 | + |
1495 | +##Scope of this documentation |
1496 | + |
1497 | +This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For the purposes of this documentation, the following assumptions have been made: |
1498 | +* You have sufficient, appropriate node hardware |
1499 | +* You will be using Juju to assign workloads to MAAS |
1500 | +* You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP) |
1501 | +* If you have a compatible power-management system, any additional hardware required is also installed (e.g. an IPMI network).
1502 | + |
1503 | +## Introducing MAAS |
1504 | + |
1505 | +Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource. |
1506 | + |
1507 | +What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud. |
1508 | + |
1509 | +When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova. |
1510 | + |
1511 | +MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal. |
1512 | + |
1513 | +## Installing MAAS from the Cloud Archive |
1514 | + |
1515 | +The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of MAAS, Juju and other tools. It is highly recommended to configure this repository and use it to keep your software up to date: |
1516 | + |
1517 | +``` |
1518 | +sudo add-apt-repository cloud-archive:tools |
1519 | +sudo apt-get update |
1520 | +``` |
1521 | + |
1522 | +There are several packages that comprise a MAAS install. These are: |
1523 | + |
1524 | +maas-region-controller: |
1525 | + Which comprises the 'control' part of the software, including the web-based user interface, the API server and the main database. |
1526 | +maas-cluster-controller: |
1527 | + This includes the software required to manage a cluster of nodes, including managing DHCP and boot images. |
1528 | +maas-dns: |
1529 | + This is a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes. |
1530 | +maas-dhcp:
1531 | + As for DNS, there is a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes. |
1532 | + |
1533 | +As a convenience, there is also a `maas` metapackage, which will install all of these components.
1534 | + |
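     | +For a typical single-node installation you can simply install the metapackage:
     | +
     | +```
     | +sudo apt-get install maas
     | +```
     | +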
1535 | + |
1536 | +If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually (see [_the description of a typical setup_](orientation.html#setup) for more background on how a typical hardware setup might be arranged).
1537 | + |
1538 | + |
1539 | + |
1540 | + |
1541 | +### Installing the packages |
1542 | + |
1543 | +The configuration for the MAAS controller will automatically run and pop up this config screen: |
1544 | + |
1545 | +![](install_cluster-config.png)
1546 | + |
1547 | +Here you will need to enter the hostname for where the region controller can be contacted. In many scenarios, you may be running the region controller (i.e. the web and API interface) from a different network address, for example where a server has several network interfaces. |
1548 | + |
1549 | +Once the configuration scripts have run you should see this message telling you that the system is ready to use: |
1550 | + |
1551 | +![](install_controller-config.png)
1552 | + |
1553 | +The web server is started last, so you have to accept this message before the service is run and you can access the web interface. Then there are just a few more setup steps; see [_Post-Install tasks_](#post-install) below.
1554 | + |
1555 | +The maas-dhcp and maas-dns packages should be installed by default. You can check whether they are installed with: |
1556 | + |
1557 | +``` |
1558 | +dpkg -l maas-dhcp maas-dns |
1559 | +``` |
1560 | + |
1561 | +If they are missing, then: |
1562 | + |
1563 | +``` |
1564 | +sudo apt-get install maas-dhcp maas-dns |
1565 | +``` |
1566 | + |
1567 | +And then proceed to the post-install setup below. |
1568 | + |
1569 | +If you now use a web browser to connect to the region controller, you should see that MAAS is running, but there will also be some errors on the screen: |
1570 | + |
1571 | +![](install_web-init.png)
1572 | + |
1573 | +The on-screen messages will tell you that there are no boot images present, and that you can't log in because there is no admin user.
1574 | + |
1575 | +## Create a superuser account |
1576 | + |
1577 | +Once MAAS is installed, you'll need to create an administrator account: |
1578 | + |
1579 | +``` |
1580 | +sudo maas createadmin --username=root --email=MYEMAIL@EXAMPLE.COM |
1581 | +``` |
1582 | + |
1583 | +Substitute your own email address in the command above. You may also use a different username for your administrator account, but "root" is a common convention and easy to remember. The command will prompt for a password to assign to the new user. |
1584 | + |
1585 | +You can run this command again for any further administrator accounts you may wish to create, but you need at least one. |
1586 | + |
1587 | +## Import the boot images |
1588 | + |
1589 | +MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you will need to connect to the MAAS API using the maas-cli tool (see Appendix II for details). Then you need to run the command:
1590 | + |
1591 | +``` |
1592 | +maas-cli maas node-groups import-boot-images |
1593 | +``` |
1594 | + |
1595 | +(Substitute a different profile name for 'maas' if you have called yours something else.) This will initiate the download of the required image files. Note that this may take some time depending on your network connection.
1596 | + |
1597 | + |
1598 | +## Login to the server |
1599 | + |
1600 | +To check that everything is working properly, you should try to log in to the server now. Both error messages should be gone (it can take a few minutes for the boot image files to register) and you can see that there are currently 0 nodes attached to this controller.
1601 | + |
1602 | +![](install-login.png)
1603 | +## Configure switches on the network |
1604 | + |
1605 | +Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, it can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use. |
1606 | + |
1607 | +To alleviate this problem, you should enable [Portfast](https://www.symantec.com/business/support/index?page=content&id=HOWTO6019) for Cisco switches or its equivalent on other vendor equipment, which enables the ports to come up almost immediately. |
1608 | + |
1609 | +##Add an additional cluster |
1610 | + |
1611 | +Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
1612 | + |
1613 | +Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the cluster controller software:
1614 | + |
1615 | +``` |
1616 | +sudo add-apt-repository cloud-archive:tools |
1617 | +sudo apt-get update |
1618 | +sudo apt-get install maas-cluster-controller |
1619 | +sudo apt-get install maas-dhcp |
1620 | +``` |
1621 | + |
1622 | +During the install process, a configuration window will appear. You merely need to type in the address of the MAAS controller API, like this: |
1623 | + |
1624 | +![](config-image.png)
1625 | + |
1626 | +## Configure Cluster Controller(s) |
1627 | + |
1628 | +### Cluster acceptance |
1629 | +When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as “pending,” until you manually accept them into the MAAS. |
1630 | + |
1631 | +To accept a cluster controller, click on the settings “cog” icon at the top right to visit the settings page: |
1632 | +![](settings.png)
1633 | +You can either click on “Accept all” or click on the edit icon to edit the cluster. After clicking on the edit icon, you will see this page: |
1634 | + |
1635 | +![](cluster-edit.png)
1636 | +Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.” |
1637 | + |
1638 | +Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made. |
1639 | + |
1640 | +### Configuration |
1641 | +MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface. |
1642 | + |
1643 | +As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0, which takes us to this page: |
1644 | + |
1645 | +![](cluster-interface-edit.png)
1646 | +Here you can select to what extent you want the cluster controller to manage the network: |
1647 | + |
1648 | +- DHCP only - this will run a DHCP server on your cluster
1649 | +- DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region controller so that it can be used to look up hosts on this network by name.
1650 | +
1651 | +!!! note: You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster.
1652 | +If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.
1653 | + |
1654 | +!!! note:There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any nodes. Or, if you do want to manage nodes but don’t want the cluster controller to serve DHCP, you may be able to get by without it. This is explained in Manual DHCP configuration. |
1655 | + |
1656 | +!!! note: A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture. |
1657 | + |
1658 | +## Enlisting nodes |
1659 | + |
1660 | +Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice-versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.
1661 | + |
1662 | +### Automatic Discovery
1663 | +With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down. |
1664 | + |
1665 | +During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
1666 | + |
1667 | +To save time, you can also accept and commission all nodes from the command line. This requires that you first log in with the API key (see Appendix II), which you can retrieve from the web interface:
1668 | + |
1669 | +``` |
1670 | +maas-cli maas nodes accept-all |
1671 | +``` |
1672 | + |
1673 | +### Manually adding nodes |
1674 | + |
1675 | +If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the Nodes screen: |
1676 | +![](add-node.png)
1677 | + |
1678 | +Select 'Add node' and manually enter details about the node, including its MAC address. This is used to identify the node when it contacts the DHCP server. |
1679 | + |
1680 | + |
1681 | + |
1682 | +## Preparing MAAS for Juju using Simplestreams |
1683 | + |
1684 | +When Juju bootstraps a cloud, it needs two critical pieces of information: |
1685 | + |
1686 | +1. The uuid of the image to use when starting new compute instances. |
1687 | +2. The URL from which to download the correct version of a tools tarball. |
1688 | + |
1689 | +This necessary information is stored in a json metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud, Azure, etc, no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (eg use a different Ubuntu image), can create their own metadata, after understanding a bit about how it works. |
1690 | + |
1691 | +The simplestreams format is used to describe related items in a structured fashion ([see the Launchpad project lp:simplestreams for more details on implementation](https://launchpad.net/simplestreams)). Below we will discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
1692 | + |
1693 | +### Basic Workflow |
1694 | + |
1695 | +Whether images or tools, Juju uses a search path to try and find suitable metadata. The path components (in order of lookup) are: |
1696 | + |
1697 | +1. User supplied location (specified by tools-metadata-url or image-metadata-url config settings). |
1698 | +2. The environment's cloud storage. |
1699 | +3. Provider specific locations (eg keystone endpoint if on Openstack). |
1700 | +4. A web location with metadata for supported public clouds (https://streams.canonical.com). |
1701 | + |
1702 | +Metadata may be inline signed, or unsigned. We indicate a metadata file is signed by using the '.sjson' extension. Each location in the path is first searched for signed metadata, and if none is found, unsigned metadata is attempted before moving onto the next path location. |
1703 | + |
1704 | +Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (eg Openstack) cloud requires metadata to be generated using tools which ship with Juju. |
1705 | + |
1706 | +### Image Metadata Contents |
1707 | + |
1708 | +Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows: |
1709 | + |
1710 | +com.ubuntu.cloud:server:<series_version>:<arch>
     | +
     | +For example:
     | +
1711 | +com.ubuntu.cloud:server:14.04:amd64
     | +
     | +Non-released images (eg beta, daily etc) have product ids like:
     | +
1712 | +com.ubuntu.cloud.daily:server:13.10:amd64
1713 | + |
1714 | +The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component): |
1715 | + |
1716 | +<path_url>
     | +  |-streams
     | +      |-v1
     | +         |-index.(s)json
     | +         |-product-foo.(s)json
     | +         |-product-bar.(s)json
1717 | + |
1718 | +The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path values contained in the index file. |
1719 | + |
1720 | +### Tools Metadata Contents
     | +
     | +Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
1721 | + |
1722 | +"com.ubuntu.juju:<series_version>:<arch>" |
1723 | + |
1724 | +For example: |
1725 | + |
1726 | +"com.ubuntu.juju:12.04:amd64" |
1727 | + |
1728 | +The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component). In addition, tools tarballs which Juju needs to download are also expected. |
1729 | + |
1730 | +|-streams
     | +|   |-v1
     | +|      |-index.(s)json
     | +|      |-product-foo.(s)json
     | +|      |-product-bar.(s)json
     | +|-releases
     | +   |-tools-abc.tar.gz
     | +   |-tools-def.tar.gz
     | +   |-tools-xyz.tar.gz
1731 | + |
1732 | +The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever is in the index/product files. |
1733 | + |
1734 | +### Configuration |
1735 | + |
1736 | +For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run. Even for supported public clouds where all required metadata is available, the user can put their own metadata in the search path to override what is provided by the cloud. |
1737 | + |
1738 | +#### User specified URLs |
1739 | + |
1740 | +These are initially specified in the environments.yaml file (and then subsequently copied to the jenv file when the environment is bootstrapped). For images, use "image-metadata-url"; for tools, use "tools-metadata-url". The URLs can point to a world readable container/bucket in the cloud, an address served by a http server, or even a shared directory which is accessible by all node instances running in the cloud. |
1741 | + |
1742 | +Assume an Apache http server with base URL `https://juju-metadata`, providing access to information at `<base>/images` and `<base>/tools`. The Juju environment yaml file could have the following entries (one or both):
1743 | +
1744 | +tools-metadata-url: https://juju-metadata/tools
     | +image-metadata-url: https://juju-metadata/images
1745 | + |
1746 | +The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the form "file:///sharedpath".
1747 | + |
1748 | +#### Cloud storage |
1749 | + |
1750 | +If no matching metadata is found at the user specified URL, the environment's cloud storage is searched. No user configuration is required here - all Juju environments are set up with cloud storage which is used to store state information, charms etc. Cloud storage setup is provider dependent; for Amazon and Openstack clouds, the storage is defined by the "control-bucket" value, for Azure, the "storage-account-name" value is relevant.
1751 | + |
1752 | +The (optional) directory structure inside the cloud storage is as follows: |
1753 | + |
1754 | +|-tools
     | +|   |-streams
     | +|      |-v1
     | +|   |-releases
     | +|-images
     | +    |-streams
     | +        |-v1
1755 | + |
1756 | +Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa. |
1757 | + |
1758 | +Note that if juju bootstrap is run with the `--upload-tools` option, the tools and metadata are placed according to the above structure. That's why the tools are then available for Juju to use. |
1759 | + |
1760 | +#### Provider specific storage |
1761 | + |
1762 | +Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be created by the cloud administrator. These are defined as follows: |
1763 | + |
1764 | +juju-tools:
     | +    the <path_url> value as described above in Tools Metadata Contents
     | +product-streams:
     | +    the <path_url> value as described above in Image Metadata Contents
1765 | + |
1766 | +Other providers may similarly be able to specify locations, though the implementation will vary. |
1767 | + |
1768 | +The web location https://streams.canonical.com is the default location used to search for image and tools metadata and is used if no matches are found earlier in any of the above locations. No user configuration is required.
1769 | + |
1770 | +There are two main issues when deploying a private cloud: |
1771 | + |
1772 | +1. Image ids will be specific to the cloud. |
1773 | +2. Often, outside internet access is blocked |
1774 | + |
1775 | +Issue 1 means that image id metadata needs to be generated and made available. |
1776 | + |
1777 | +Issue 2 means that tools need to be mirrored locally to make them accessible. |
1778 | + |
1779 | +Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just mirror `https://streams.canonical.com/tools` . However image metadata cannot be simply mirrored because the image ids are taken from the cloud storage provider, so this needs to be generated and validated using the commands described below. |
1780 | + |
1781 | +The available Juju metadata tools can be seen by using the help command:
1782 | +
1783 | +```
     | +juju help metadata
     | +```
1784 | + |
1785 | +The overall workflow is: |
1786 | + |
1787 | +- Generate image metadata |
1788 | +- Copy image metadata to somewhere in the metadata search path |
1789 | +- Optionally, mirror tools to somewhere in the metadata search path |
1790 | +- Optionally, configure tools-metadata-url and/or image-metadata-url |
1791 | + |
1792 | +#### Image metadata |
1793 | + |
1794 | +Generate image metadata using
1795 | +
1796 | +```
     | +juju metadata generate-image -d <metadata_dir>
     | +```
1797 | + |
1798 | +As a minimum, the above command needs to know the image id to use and a directory in which to write the files. |
1799 | + |
1800 | +Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an environment specified with the -e option). These parameters can also be overridden on the command line. |
1801 | + |
1802 | +The image metadata command can be run multiple times with different regions, series, architecture, and it will keep adding to the metadata files. Once all required image ids have been added, the index and product json files can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the `image-metadata-url` setting or the cloud's storage etc. |
1803 | + |
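     | +As a sketch, generating metadata for a specific image id might look like the
     | +following; the directory and image id are examples, and the exact flag names
     | +are assumptions here - check them with `juju help metadata`:
     | +
     | +```
     | +juju metadata generate-image -d ~/simplestreams -i <image_id> -s trusty
     | +```
     | +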
1804 | +Examples: |
1805 | + |
1806 | +1. image-metadata-url
1807 | +
1808 | +- upload contents of <metadata_dir> to `http://somelocation`
1809 | +- set image-metadata-url to `http://somelocation/images`
1810 | +
1811 | +2. Cloud storage
1812 | +
     | +- upload contents of <metadata_dir> directly to the environment's cloud storage
     | +
1813 | +The `juju metadata validate-images` command can be used to check the result. If run without parameters, the validation command will take all required details from the current Juju environment (or as specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc. can be specified on the command line to override the values in the environment config.
1814 | +#### Tools metadata |
1815 | + |
1816 | +Generally, tools and related metadata are mirrored from `https://streams.canonical.com/tools` . However, it is possible to manually generate metadata for a custom built tools tarball. |
1817 | + |
1818 | +First, create a tarball of the relevant tools and place it in a directory structured like this:
1819 | +
1820 | +<tools_dir>/tools/releases/
1821 | +
1822 | +Now generate the relevant metadata for the tools by running the command:
1823 | +
1824 | +```
     | +juju metadata generate-tools -d <tools_dir>
     | +```
1825 | + |
1826 | +Finally, the contents of <tools_dir> can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
1827 | + |
1828 | +Examples: |
1829 | + |
1830 | +1. tools-metadata-url |
1831 | + |
1832 | +- upload contents of the tools dir to `http://somelocation` |
1833 | +- set tools-metadata-url to `http://somelocation/tools` |
1834 | + |
1835 | +2. Cloud storage |
1836 | + |
1837 | +upload contents of <tools_dir> directly to the environment's cloud storage
1838 | + |
1839 | +As with image metadata, the validation command is used to ensure tools are available for Juju to use:
1840 | +
1841 | +```
     | +juju metadata validate-tools
     | +```
1842 | + |
1843 | +The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or override values as required on the command line. See `juju help metadata validate-tools` for more details. |
1844 | + |
1845 | +##Appendix I - Using Tags |
1846 | +##Appendix II - Using the MAAS CLI |
1847 | +As well as the web interface, many tasks can be performed by accessing the MAAS API directly through the maas-cli command. This section details how to login with this tool and perform some common operations. |
1848 | + |
1849 | +###Logging in |
1850 | +Before the API will accept any commands from maas-cli, you must first log in. To do this, you need the API key which can be found in the user interface.
1851 | + |
1852 | +Login to the web interface on your MAAS. Click on the username in the top right corner and select ‘Preferences’ from the menu which appears. |
1853 | + |
1854 | +![](maascli-prefs.png)
1855 | +A new page will load... |
1856 | + |
1857 | +![](maascli-key.png)
1858 | +The very first item is a list of MAAS keys. One will have already been generated when the system was installed. It’s easiest to just select all the text, copy the key (it’s quite long!) and then paste it into the commandline. The format of the login command is: |
1859 | + |
1860 | +``` |
1861 | + maas-cli login <profile-name> <hostname> <key> |
1862 | +``` |
1863 | + |
1864 | +The profile created is an easy way of associating your credentials with any subsequent call to the API. So an example login might look like this: |
1865 | + |
1866 | +``` |
1867 | +maas-cli login maas http://10.98.0.13/MAAS/api/1.0 |
1868 | +AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2 |
1869 | +``` |
1870 | +which creates the profile ‘maas’ and registers it with the given key at the specified API endpoint. If you omit the credentials, they will be prompted for in the console. It is also possible to use a hyphen, ‘-‘, in place of the credentials. In this case a single line will be read from stdin, stripped of any whitespace and used as the credentials, which can be useful if you are developing scripts for specific tasks. If an empty string is passed instead of the credentials, the profile will be logged in anonymously (and consequently some of the API calls will not be available).
1871 | + |
1872 | +### maas-cli commands |
1873 | +The maas-cli command exposes the whole API, so you can do anything you actually can do with MAAS using this command. This leaves us with a vast number of options, which are more fully covered in the complete [MAAS docs].
1874 | + |
1875 | +list: |
1876 | + lists the details [name url auth-key] of all the currently logged-in profiles. |
1877 | + |
1878 | +login <profile> <url> <key>: |
1879 | + Logs in to the MAAS controller API at the given URL, using the key provided and |
1880 | + associates this connection with the given profile name. |
1881 | + |
1882 | +logout <profile>: |
1883 | + Logs out from the given profile, flushing the stored credentials. |
1884 | + |
1885 | +refresh: |
1886 | + Refreshes the API descriptions of all the currently logged-in profiles. This may become necessary, for example, when upgrading the maas packages, to ensure the command-line options match the API.
1887 | + |
1888 | +### Useful examples |
1889 | + |
1890 | +Display the current status of nodes in the commissioning phase:
1891 | +```
1892 | +maas-cli maas nodes check-commissioning
1893 | +``` |
1894 | + |
1895 | +Accept and commission all discovered nodes: |
1896 | +``` |
1897 | +maas-cli maas nodes accept-all |
1898 | +``` |
1899 | + |
1900 | +List all known nodes: |
1901 | +``` |
1902 | +maas-cli maas nodes list |
1903 | +``` |
1904 | + |
1905 | +Filter the list using specific key/value pairs: |
1906 | +``` |
1907 | +maas-cli maas nodes list architecture="i386/generic" |
1908 | +``` |
1909 | + |
1910 | +Set the power parameters for an ipmi enabled node: |
1911 | +``` |
1912 | +maas-cli maas node update <system_id> \ |
1913 | + power_type="ipmi" \ |
1914 | + power_parameters_power_address=192.168.22.33 \ |
1915 | + power_parameters_power_user=root \ |
1916 | + power_parameters_power_pass=ubuntu; |
1917 | +``` |
1918 | +## Appendix III - Physical Zones |
1919 | + |
1920 | +To help you maximise fault-tolerance and performance of the services you deploy, MAAS administrators can define _physical zones_ (or just _zones_ for short), and assign nodes to them. When a user requests a node, they can ask for one that is in a specific zone, or one that is not in a specific zone. |
1921 | + |
1922 | +It's up to you as an administrator to decide what a physical zone should represent: it could be a server rack, a room, a data centre, machines attached to the same UPS, or a portion of your network. Zones are most useful when they represent portions of your infrastructure. But you could also use them simply to keep track of where your systems are located. |
1923 | + |
1924 | +Each node is in one and only one physical zone. Each MAAS instance ships with a default zone to which nodes are attached by default. If you do not need this feature, you can simply pretend it does not exist. |
1925 | + |
1926 | +### Applications |
1927 | + |
1928 | +Since you run your own MAAS, its physical zones give you more flexibility than those of a third-party hosted cloud service. That means that you get to design your zones and define what they mean. Below are some examples of how physical zones can help you get the most out of your MAAS. |
1929 | + |
1930 | +### Creating a Zone |
1931 | + |
1932 | +Only administrators can create and manage zones. To create a physical zone in the web user interface, log in as an administrator and browse to the "Zones" section in the top bar. This will take you to the zones listing page. At the bottom of the page is a button for creating a new zone:
1933 | + |
1934 | +![](add-zone.png)
1935 | + |
1936 | +Or to do it in the [_region-controller API_][#region-controller-api], POST your zone definition to the _"zones"_ endpoint. |
1937 | + |
1938 | +### Assigning Nodes to a Zone |
1939 | + |
1940 | +Once you have created one or more physical zones, you can set nodes' zones from the nodes listing page in the UI. Select the nodes for which you wish to set a zone, and choose "Set physical zone" from the "Bulk action" dropdown list near the top. A second dropdown list will appear, to let you select which zone you wish to set. Leave it blank to clear nodes' physical zones. Clicking "Go" will apply the change to the selected nodes. |
1941 | + |
1942 | +You can also set an individual node's zone on its "Edit node" page. Both ways are available in the API as well: edit an individual node through a request to the node's URI, or set the zone on multiple nodes at once by calling the operation on the endpoint. |
1943 | + |
1944 | +### Allocating a Node in a Zone |
1945 | + |
1946 | +To deploy in a particular zone, call the `acquire` method in the [_region-controller API_][#region-controller-api] as before, but pass the `zone` parameter with the name of the zone. The method will allocate a node in that zone, or fail with an HTTP 409 ("conflict") error if the zone has no nodes available that match your request.
1947 | + |
1948 | +Alternatively, you may want to request a node that is _not_ in a particular zone, or one that is not in any of several zones. To do that, specify the `not_in_zone` parameter to `acquire`. This parameter takes a list of zone names; the allocated node will not be in any of them. Again, if that leaves no nodes available that match your request, the call will return a "conflict" error.
1949 | + |
1950 | +It is possible, though not usually useful, to combine the `zone` and `not_in_zone` parameters. If your choice for `zone` is also present in `not_in_zone`, no node will ever match your request. Or if it's not, then the `not_in_zone` values will not affect the result of the call at all.
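     | +
     | +For illustration, acquiring a node in (or not in) a given zone from the CLI
     | +might look like this (the profile and zone names are examples):
     | +
     | +```
     | +maas-cli maas nodes acquire zone=rack1
     | +maas-cli maas nodes acquire not_in_zone=rack1
     | +```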
1951 | + |
1952 | |
1953 | === added file 'Install/Intro.md' |
1954 | --- Install/Intro.md 1970-01-01 00:00:00 +0000 |
1955 | +++ Install/Intro.md 2014-04-15 16:06:33 +0000 |
1956 | @@ -0,0 +1,28 @@ |
1957 | +Title: Introduction |
1958 | + |
1959 | +#Ubuntu Cloud Documentation |
1960 | + |
1961 | +## Deploying Production Grade OpenStack with MAAS, Juju and Landscape |
1962 | + |
1963 | +This documentation has been created to describe best practice in deploying |
1964 | +a Production Grade installation of OpenStack using current Canonical |
1965 | +technologies, including bare metal provisioning using MAAS, service |
1966 | +orchestration with Juju and system management with Landscape. |
1967 | + |
1968 | +This documentation is divided into four main topics: |
1969 | + |
1970 | + 1. [Installing the MAAS Metal As A Service software](../installing-maas.html) |
1971 | + 2. [Installing Juju and configuring it to work with MAAS](../installing-juju.html) |
1972 | + 3. [Using Juju to deploy OpenStack](../installing-openstack.html) |
1973 | + 4. [Deploying Landscape to manage your OpenStack cloud](../installing-landscape) |
1974 | + |
1975 | +Once you have an up and running OpenStack deployment, you should also read |
1976 | +our [Administration Guide](../admin-intro.html) which details common tasks |
1977 | +for maintenance and scaling of your service. |
1978 | + |
1979 | + |
1980 | +## Legal notices |
1981 | + |
1982 | + |
1983 | + |
1984 | +![Canonical logo](./media/logo-canonical_no™-aubergine-hex.jpg) |
1985 | |
1986 | === added file 'Install/installing-openstack-outline.md' |
1987 | --- Install/installing-openstack-outline.md 1970-01-01 00:00:00 +0000 |
1988 | +++ Install/installing-openstack-outline.md 2014-04-15 16:06:33 +0000 |
1989 | @@ -0,0 +1,395 @@ |
1990 | +Title: Installing OpenStack
1991 | + |
1992 | +# Installing OpenStack |
1993 | + |
1994 | +![Openstack](../media/openstack.png) |
1995 | + |
1996 | +##Introduction |
1997 | + |
1998 | +OpenStack is a versatile, open source cloud environment equally suited to serving up public, private or hybrid clouds. Canonical is a Platinum Member of the OpenStack foundation and has been involved with the OpenStack project since its inception; the software covered in this document has been developed with the intention of providing a streamlined way to deploy and manage OpenStack installations. |
1999 | + |
2000 | +### Scope of this documentation |
2001 | + |
2002 | +The OpenStack platform is powerful and its uses diverse. This section of documentation |
2003 | +is primarily concerned with deploying a 'standard' running OpenStack system using, but not limited to, Canonical components such as MAAS, Juju and Ubuntu. Where appropriate other methods and software will be mentioned. |
2004 | + |
2005 | +### Assumptions |
2006 | + |
2007 | +1. Use of MAAS |
2008 | + This document is written to provide instructions on how to deploy OpenStack using MAAS for hardware provisioning. If you are not deploying directly on hardware, this method will still work, with a few alterations, assuming you have a properly configured Juju environment. The main difference will be that you will have to provide different configuration options depending on the network configuration. |
2009 | + |
2010 | +2. Use of Juju |
2011 | + This document assumes an up to date, stable release version of Juju. |
2012 | + |
2013 | +3. Local network configuration |
2014 | + This document assumes that you have an adequate local network configuration, including separate interfaces for access to the OpenStack cloud. Ideal networks are laid out in the [MAAS][MAAS documentation for OpenStack] |
2015 | + |
2016 | +## Planning an installation |
2017 | + |
2018 | +Before deploying any services, it is very useful to take stock of the resources available and how they are to be used. OpenStack comprises a number of interrelated services (Nova, Swift, etc) which each have differing demands in terms of hosts. For example, the Swift service, which provides object storage, has different requirements than the Nova service, which provides compute resources.
2019 | + |
2020 | +The minimum requirements for each service and recommendations are laid out in the official [oog][OpenStack Operations Guide] which is available (free) in HTML or various downloadable formats. For guidance, the following minimums are recommended for Ubuntu Cloud: |
2021 | + |
2022 | +[insert minimum hardware spec] |
2023 | + |
2024 | + |
2025 | + |
2026 | +The recommended composition of nodes for deploying OpenStack with MAAS and Juju is that all nodes in the system should be capable of running *ANY* of the services. This is best practice for the robustness of the system: should any physical node fail, another can be repurposed to take its place. This obviously extends to any hardware requirements such as extra network interfaces.
2027 | + |
2028 | +If for reasons of economy or otherwise you choose to use different configurations of hardware, you should note that your ability to overcome hardware failure will be reduced. It will also be necessary to target deployments to specific nodes - see the section in the MAAS documentation on tags [MAAS tags].
2029 | + |
2030 | + |
2031 | +###Create the OpenStack configuration file |
2032 | + |
2033 | +We will be using Juju charms to deploy the component parts of OpenStack. Each charm encapsulates everything required to set up a particular service. However, the individual services have many configuration options, some of which we will want to change. |
2034 | + |
2035 | +To make this task easier and more reproducible, we will create a separate configuration file with the relevant options for all the services. This is written in a standard YAML format.
2036 | + |
2037 | +You can download the [openstack-config.yaml] file we will be using from here. It is also reproduced below: |
2038 | + |
2039 | +``` |
2040 | +keystone: |
2041 | + admin-password: openstack |
2042 | + debug: 'true' |
2043 | + log-level: DEBUG |
2044 | +nova-cloud-controller: |
2045 | + network-manager: 'Neutron' |
2046 | + quantum-security-groups: 'yes' |
2047 | + neutron-external-network: Public_Network |
2048 | +nova-compute: |
2049 | + enable-live-migration: 'True' |
2050 | + migration-auth-type: "none" |
2051 | + virt-type: kvm |
2052 | + #virt-type: lxc |
2053 | + enable-resize: 'True' |
2054 | +quantum-gateway: |
2055 | + ext-port: 'eth1' |
2056 | + plugin: ovs |
2057 | +glance: |
2058 | + ceph-osd-replication-count: 3 |
2059 | +cinder: |
2060 | + block-device: None |
2061 | + ceph-osd-replication-count: 3 |
2062 | + overwrite: "true" |
2063 | + glance-api-version: 2 |
2064 | +ceph: |
2065 | + fsid: a51ce9ea-35cd-4639-9b5e-668625d3c1d8 |
2066 | + monitor-secret: AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA== |
2067 | + osd-devices: /dev/sdb |
2068 | + osd-reformat: 'True' |
2069 | +``` |
2070 | + |
2071 | +For all services, we can configure the `openstack-origin` to point to an install source. In this case, we will rely on the default, which will point to the relevant sources for the Ubuntu 14.04 LTS Trusty release. Further configuration for each service is explained below: |
2072 | + |
2073 | +####keystone |
2074 | +admin-password:
2075 | + You should set a memorable password here to be able to access OpenStack when it is deployed.
2076 | +
2077 | +debug:
2078 | + It is useful to set this to 'true' initially, to monitor the setup. This will produce more verbose messaging.
2079 | + |
2080 | +log-level: |
2081 | + Similarly, setting the log-level to DEBUG means that more verbose logs can be generated. These options can be changed once the system is set up and running normally. |
2082 | + |
2083 | +####nova-cloud-controller |
2084 | + |
2085 | +network-manager:
2086 | + 'Neutron' - Other options are now deprecated.
2087 | + |
2088 | +quantum-security-groups: |
2089 | + 'yes' |
2090 | + |
2091 | +neutron-external-network: |
2092 | + Public_Network - This is an interface we will use for allowing access to the cloud, and will be defined later |
2093 | + |
2094 | +####nova-compute |
2095 | +enable-live-migration: |
2096 | + We have set this to 'True' |
2097 | + |
2098 | +migration-auth-type: |
2099 | + "none" |
2100 | + |
2101 | +virt-type: |
2102 | + kvm |
2103 | + |
2104 | +enable-resize: |
2105 | + 'True' |
2106 | + |
2107 | +####quantum-gateway |
2108 | +ext-port:
2109 | + This is where we specify the interface for the public network. Use 'eth1' or the relevant interface on your hardware.
     | +
2110 | +plugin:
     | + ovs
2111 | + |
2112 | + |
2113 | +####glance |
2114 | + |
2115 | + ceph-osd-replication-count: 3 |
2116 | + |
2117 | +####cinder |
2118 | + openstack-origin: cloud:trusty-icehouse/updates |
2119 | + block-device: None |
2120 | + ceph-osd-replication-count: 3 |
2121 | + overwrite: "true" |
2122 | + glance-api-version: 2 |
2123 | + |
2124 | +####ceph |
2125 | + |
2126 | +fsid: |
2127 | + The fsid is simply a unique identifier. You can generate a suitable value by running `uuidgen` which should return a value which looks like: a51ce9ea-35cd-4639-9b5e-668625d3c1d8 |
2128 | + |
2129 | +monitor-secret: |
2130 | + The monitor secret is a secret string used to authenticate access. There is advice on how to generate a suitable secure secret at [ceph][the ceph website]. A typical value would be `AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==` |
2131 | + |
2132 | +osd-devices: |
2133 | + This should point (in order of preference) to a device, partition or filename. In this case we will assume secondary device-level storage located at `/dev/sdb`
2134 | + |
2135 | +osd-reformat: |
2136 | + We will set this to 'True', allowing ceph to reformat the drive on provisioning. |
2137 | + |
2138 | + |
2139 | +##Deploying OpenStack with Juju |
2140 | +Now that the configuration is defined, we can use Juju to deploy and relate the services. |
2141 | + |
2142 | +###Initialising Juju |
2143 | +Juju requires a minimal amount of setup. Here we assume it has already been configured to work with your MAAS cluster (see the [juju_install][Juju Install Guide] for more information on this).
2144 | + |
2145 | +Firstly, we need to fetch images and tools that Juju will use: |
2146 | +``` |
2147 | +juju sync-tools --debug |
2148 | +``` |
2149 | +Then we can create the bootstrap instance: |
2150 | + |
2151 | +``` |
2152 | +juju bootstrap --upload-tools --debug |
2153 | +``` |
2154 | +We use the upload-tools switch to use the local versions of the tools which we just fetched. The debug switch will give verbose output which can be useful. This process may take a few minutes, as Juju is creating an instance and installing the tools. When it has finished, you can check the status of the system with the command: |
2155 | +``` |
2156 | +juju status |
2157 | +``` |
2158 | +This should return something like: |
2159 | +``` |
2160 | +---------- example |
2161 | +``` |
2162 | +### Deploy the OpenStack Charms |
2163 | + |
2164 | +Now that the Juju bootstrap node is up and running we can deploy the services required to make our OpenStack installation. To configure these services properly as they are deployed, we will make use of the configuration file we defined earlier, by passing it along with the `--config` switch with each deploy command. Substitute in the name and path of your config file if different. |
2165 | + |
2166 | +It is useful but not essential to deploy the services in the order below. It is also highly recommended to open an additional terminal window and run the command `juju debug-log`. This will output the logs of all the services as they run, and can be useful for troubleshooting.
2167 | + |
2168 | +It is also recommended to run a `juju status` command periodically, to check that each service has been installed and is running properly. If you see any errors, please consult the [troubleshooting][troubleshooting section below]. |
2169 | + |
2170 | +``` |
2171 | +juju deploy --to=0 juju-gui |
2172 | +juju deploy rabbitmq-server |
2173 | +juju deploy mysql |
2174 | +juju deploy --config openstack-config.yaml openstack-dashboard |
2175 | +juju deploy --config openstack-config.yaml keystone |
2176 | +juju deploy --config openstack-config.yaml ceph -n 3 |
2177 | +juju deploy --config openstack-config.yaml nova-compute -n 3 |
2178 | +juju deploy --config openstack-config.yaml quantum-gateway |
2179 | +juju deploy --config openstack-config.yaml cinder |
2180 | +juju deploy --config openstack-config.yaml nova-cloud-controller |
2181 | +juju deploy --config openstack-config.yaml glance |
2182 | +juju deploy --config openstack-config.yaml ceph-radosgw |
2183 | +``` |
2184 | + |
2185 | + |
2186 | +### Add relations between the OpenStack services |
2187 | + |
2188 | +Although the services are now deployed, they are not yet connected together. Each service currently exists in isolation. We use the `juju add-relation`command to make them aware of each other and set up any relevant connections and protocols. This extra configuration is taken care of by the individual charms themselves. |
2189 | + |
2190 | + |
2191 | +We should start adding relations between charms by setting up the Keystone authorization service and its database, as this will be needed by many of the other connections:
2192 | +
2193 | +```
     | +juju add-relation keystone mysql
     | +```
2194 | +
2195 | +We wait until the relation is set. After it finishes, check it with juju status:
2196 | + |
2197 | +``` |
2198 | +juju status mysql |
2199 | +juju status keystone |
2200 | +``` |
2201 | + |
2202 | +It can take a few moments for this service to settle. Although it is certainly possible to continue adding relations (Juju manages a queue for pending actions) it can be counterproductive in terms of the overall time taken, as many of the relations refer to the same services. |
2203 | +The following relations also need to be made: |
2204 | +``` |
2205 | +juju add-relation nova-cloud-controller mysql |
2206 | +juju add-relation nova-cloud-controller rabbitmq-server |
2207 | +juju add-relation nova-cloud-controller glance |
2208 | +juju add-relation nova-cloud-controller keystone |
2209 | +juju add-relation nova-compute mysql |
2210 | +juju add-relation nova-compute rabbitmq-server |
2211 | +juju add-relation nova-compute glance |
2212 | +juju add-relation nova-compute nova-cloud-controller |
2213 | +juju add-relation glance mysql |
2214 | +juju add-relation glance keystone |
2215 | +juju add-relation cinder keystone |
2216 | +juju add-relation cinder mysql |
2217 | +juju add-relation cinder rabbitmq-server |
2218 | +juju add-relation cinder nova-cloud-controller |
2219 | +juju add-relation openstack-dashboard keystone |
2220 | +juju add-relation swift-proxy swift-storage |
2221 | +juju add-relation swift-proxy keystone |
2222 | +``` |
2223 | +Finally, the output of juju status should show all the relations as complete. The OpenStack cloud is now running, but it needs to be populated with some additional components before it is ready for use.
2224 | + |
2225 | + |
2226 | + |
2227 | + |
2228 | +##Preparing OpenStack for use |
2229 | + |
2230 | +###Configuring access to Openstack |
2231 | + |
2232 | + |
2233 | + |
2234 | +The configuration data for OpenStack can be fetched by reading the configuration file generated by the Keystone service. You can also copy this information by logging in to the Horizon (OpenStack Dashboard) service and examining the configuration there. However, we actually need only a few bits of information. The following bash script can be run to extract the relevant information: |
2235 | + |
2236 | +``` |
2237 | +#!/bin/bash |
2238 | + |
2239 | +set -e |
2240 | + |
2241 | +KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | xargs host | grep -v alias | awk '{ print $4 }'` |
2242 | +KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 "sudo cat /etc/keystone/keystone.conf | grep admin_token" | sed -e '/^M/d' -e 's/.$//' | awk '{ print $3 }'` |
2243 | + |
2244 | +echo "Keystone IP: [${KEYSTONE_IP}]" |
2245 | +echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]" |
2246 | + |
2247 | +cat << EOF > ./nova.rc |
2248 | +export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/ |
2249 | +export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN} |
2250 | +export OS_AUTH_URL=http://${KEYSTONE_IP}:35357/v2.0/ |
2251 | +export OS_USERNAME=admin |
2252 | +export OS_PASSWORD=openstack |
2253 | +export OS_TENANT_NAME=admin |
2254 | +EOF |
2255 | + |
2256 | +juju scp ./nova.rc nova-cloud-controller/0:~ |
2257 | +``` |
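 | +
 | +Saved as, say, `nova-rc-setup.sh` (a name used here purely for illustration), the script can be made executable and run from the machine where the Juju client is configured:
 | +
 | +```
 | +chmod +x nova-rc-setup.sh
 | +./nova-rc-setup.sh
 | +```
 | +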
2258 | +This script extracts the required information and then copies the resulting nova.rc file to the instance running nova-cloud-controller.
2259 | +Before running any nova or glance commands, we load the file we just created:
2260 | + |
2261 | +``` |
2262 | +$ source ./nova.rc |
2263 | +$ nova endpoints |
2264 | +``` |
2265 | + |
2266 | +At this point the output of `nova endpoints` should list all the available OpenStack endpoints.
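 | +
 | +As a further check, the other clients should now be able to authenticate using the same environment. A quick sanity test, assuming python-keystoneclient and python-novaclient are installed:
 | +
 | +```
 | +# Both commands read the OS_* variables exported by nova.rc.
 | +keystone tenant-list   # should list at least the admin tenant
 | +nova flavor-list       # should list the default flavors
 | +```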
2267 | + |
2268 | +### Install the Ubuntu Cloud Image |
2269 | + |
2270 | +In order for OpenStack to create instances in its cloud, it needs access to the relevant images:
 | +
 | +```
2271 | +$ mkdir ~/iso
2272 | +$ cd ~/iso
2273 | +$ wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
 | +```
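 | +
 | +It is worth verifying the download before importing it. A minimal sketch, assuming the SHA256SUMS file published alongside the image:
 | +
 | +```
 | +$ wget http://cloud-images.ubuntu.com/trusty/current/SHA256SUMS
 | +$ grep trusty-server-cloudimg-amd64-disk1.img SHA256SUMS | sha256sum -c -
 | +```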
2274 | + |
2275 | +### Import the Ubuntu Cloud Image into Glance
2276 | +!!!Note: The glance client is provided by the package glance-client, which may need to be installed on the machine from which you plan to run the command.
2277 | + |
2278 | +``` |
2279 | +apt-get install glance-client |
2280 | +glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img |
2281 | +``` |
2282 | +### Create OpenStack private network
2283 | +Note: nova-manage can be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the node we run the following command: |
2284 | + |
2285 | +``` |
2286 | +juju ssh nova-cloud-controller/0 |
2287 | + |
2288 | +sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=T --bridge_interface=eth0 --bridge=br100 |
2289 | +``` |
2290 | + |
2291 | +To make sure that we have created the network we can now run the following command: |
2292 | + |
2293 | +``` |
2294 | +sudo nova-manage network list |
2295 | +``` |
2296 | + |
2297 | +### Create OpenStack public network |
2298 | +``` |
2299 | +sudo nova-manage floating create --ip_range=1.1.21.64/26 |
2300 | +sudo nova-manage floating list |
2301 | +``` |
2302 | +Allow ping and SSH access by adding the corresponding rules to the default security group.
2303 | +Note: The following commands are run from a machine where the package python-novaclient is installed, in a session where the nova.rc file created above has been loaded.
2304 | + |
2305 | +``` |
2306 | +nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 |
2307 | +nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 |
2308 | +``` |
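 | +
 | +To confirm the rules are in place, list the default group's rules (same python-novaclient session as above):
 | +
 | +```
 | +# The ICMP and TCP/22 rules added above should appear in the output.
 | +nova secgroup-list-rules default
 | +```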
2309 | + |
2310 | +### Create and register the SSH keys in OpenStack
2311 | +Generate a default keypair:
2312 | +``` |
2313 | +ssh-keygen -t rsa -f ~/.ssh/admin-key |
2314 | +``` |
2315 | +### Copy the public key into Nova
2316 | +We will name it admin-key: |
2317 | +Note: In the Precise (12.04) version of python-novaclient the option is `--pub_key` rather than `--pub-key`.
2318 | + |
2319 | +``` |
2320 | +nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key |
2321 | +``` |
2322 | +And make sure it’s been successfully created: |
2323 | +``` |
2324 | +nova keypair-list |
2325 | +``` |
2326 | + |
2327 | +### Create a test instance
2328 | +We created an image with glance before. Now we need the image ID to start our first instance. The ID can be found with this command: |
2329 | +``` |
2330 | +nova image-list |
2331 | +``` |
2332 | + |
2333 | +Note: we can also use the command `glance image-list`.
 | +
2334 | +### Boot the instance
2335 | + |
2336 | +``` |
2337 | +nova boot --flavor=m1.small --image=<image_id_from_glance_index> --key-name admin-key test-server1
2338 | +``` |
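 | +
 | +The instance takes a short while to build. One way to watch its progress, assuming the name used above:
 | +
 | +```
 | +# The instance should move from BUILD to ACTIVE.
 | +nova list
 | +nova show test-server1 | grep status
 | +```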
2339 | + |
2340 | +### Add a floating IP to the new instance
2341 | +First we allocate a floating IP from the ones we created above: |
2342 | + |
2343 | +``` |
2344 | +nova floating-ip-create |
2345 | +``` |
2346 | + |
2347 | +Then we associate the floating IP obtained above with the new instance (identified here by its ID):
2348 | + |
2349 | +``` |
2350 | +nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65 |
2351 | +``` |
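 | +
 | +Once associated, the floating IP should answer pings from a machine with access to the public range (1.1.21.65 is the example address allocated above; substitute your own):
 | +
 | +```
 | +ping -c 3 1.1.21.65
 | +```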
2352 | + |
2353 | + |
2354 | +### Create and attach a Cinder volume to the instance |
2355 | +Note: All these steps can also be done through the Horizon Web UI.
2356 | + |
2357 | +We make sure that cinder works by creating a 1GB volume and attaching it to the VM: |
2358 | + |
2359 | +``` |
2360 | +cinder create --display_name test-cinder1 1 |
2361 | +``` |
2362 | + |
2363 | +Get the ID of the volume with `cinder list`:
2364 | + |
2365 | +``` |
2366 | +cinder list |
2367 | +``` |
2368 | + |
2369 | +Attach it to the VM as `vdb`:
2370 | + |
2371 | +``` |
2372 | +nova volume-attach test-server1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb |
2373 | +``` |
2374 | + |
2375 | +Now we should be able to SSH into the VM test-server1 from a machine holding the private key we created above, and see that `vdb` appears in `/proc/partitions`.
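 | +
 | +For example (a sketch: Ubuntu cloud images create a default user named ubuntu, and the IP is the floating address associated above):
 | +
 | +```
 | +# List the guest's block devices over SSH; vdb should be present.
 | +ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65 "cat /proc/partitions"
 | +```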
2376 | + |
2377 | + |
2378 | + |
2379 | + |
2380 | +[troubleshooting] |
2381 | +[OpenStack Operations Guide](http://docs.openstack.org/ops/)
2382 | +[MAAS tags] |
2383 | +[openstack-config.yaml] |
2384 | +[ceph](http://ceph.com/docs/master/dev/mon-bootstrap/) |
2385 | |
2386 | === added file 'Install/landcsape.md' |
2387 | --- Install/landcsape.md 1970-01-01 00:00:00 +0000 |
2388 | +++ Install/landcsape.md 2014-04-15 16:06:33 +0000 |
2389 | @@ -0,0 +1,909 @@ |
2390 | +Title: Landscape |
2391 | +# Managing OpenStack with Landscape
2392 | +
2393 | +## About Landscape
2394 | +Landscape is a system management tool designed to let you easily manage multiple Ubuntu systems - up to 40,000 with a single Landscape instance. From a single dashboard you can apply package updates and perform other administrative tasks on many machines. You can categorize machines by group, and manage each group separately. You can make changes to targeted machines even when they are offline; the changes will be applied next time they start. Landscape lets you create scripts to automate routine work such as starting and stopping services and performing backups. It lets you use both common Ubuntu repositories and any custom repositories you may create for your own computers. Landscape is particularly adept at security updates; it can highlight newly available packages that involve security fixes so they can be applied quickly. You can use Landscape as a hosted service as part of Ubuntu Advantage, or run it on premises via Landscape Dedicated Server. |
2395 | + |
2396 | +## Ubuntu Advantage
2397 | +Ubuntu Advantage comprises systems management tools, technical support, access to online resources and support engineers, training, and legal assurance to keep organizations on top of their Ubuntu server, desktop, and cloud deployments. Advantage provides subscriptions at various support levels to help organizations maintain the level of support they need. |
2398 | + |
2399 | + |
2400 | + |
2401 | + |
2402 | + |
2403 | +## Access groups
2404 | + |
2405 | + |
2406 | + |
2407 | + |
2408 | +Landscape lets administrators limit administrative rights on computers |
2409 | +by assigning them to logical groupings called access groups. Each |
2410 | +computer can be in only one access group, but you can organize access |
2411 | +groups hierarchically to mirror the organization of your business. In |
2412 | +addition to computers, access groups can contain package profiles, |
2413 | +scripts, and custom graphs. |
2414 | + |
2415 | +Creating access groups |
2416 | +---------------------- |
2417 | + |
2418 | +A new Landscape installation comes with a single access group, called |
2419 | +global, which gives any administrators who are associated with roles |
2420 | +that include that access group control over every computer managed by |
2421 | +Landscape. Most organizations will want to subdivide administration |
2422 | +responsibilities by creating logical groupings of computers. You can |
2423 | +create new access groups from the ACCESS GROUPS menu under your account |
2424 | +menu. |
2425 | + |
2426 | +**Figure 5.1.** |
2427 | + |
2428 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups1.png) |
2429 | + |
2431 | + |
2432 | +To create a new access group, you must provide two pieces of |
2433 | +information: a title for the access group and a parent. |
2434 | + |
2435 | +To start with, the parent must be the global access group. If you want a |
2436 | +flat management hierarchy, you can make every access group a child of |
2437 | +global. Alternatively, you can use parent/child relationships to create |
2438 | +a hierarchy of access groups. For instance, you could specify different |
2439 | +sites at a high level, and under them individual buildings, and finally |
2440 | +individual departments. Such a hierarchy allows you to specify groups of |
2441 | +computers to be managed together by one administrator. Administrators |
2442 | +whose roles are associated with higher-level access groups can manage |
2443 | +all subgroups of which their access group is a parent. |
2444 | + |
2445 | +When a new access group is first created, its administrators are those |
2446 | +who have roles linked to its parent access group, but you can edit the |
2447 | +roles associated with an access group. To change the roles associated |
2448 | +with an access group, see |
2449 | +[below](https://landscape.canonical.com/static/doc/user-guide/ch05.html#associatingadmins "Associating roles with access groups"). |
2450 | + |
2451 | +Adding computers to access groups |
2452 | +--------------------------------- |
2453 | + |
2454 | +To see all the computers currently in an access group, click on the name |
2455 | +of the group in the ACCESS GROUPS screen. The screen that then appears |
2456 | +displays information about that group. On the right side of the screen, |
2457 | +click the word "computers" to show the list of computers that are |
2458 | +currently members of this access group. |
2459 | + |
2460 | +**Figure 5.2.** |
2461 | + |
2462 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups2.png) |
2463 | + |
2465 | +Alternatively, you can click on the COMPUTERS menu item at the top of |
2466 | +the Landscape screen, and in the selection box at the top of the left |
2467 | +column, enter `access-group:` followed by the name of your
2468 | +access group: for instance, `access-group:stagingservers`.
2469 | + |
2470 | +To add computers to an access group, click on the COMPUTERS menu item at |
2471 | +the top of the Landscape screen. The resulting INFO screen shows the |
2472 | +total number of available computers being managed by Landscape, and the |
2473 | +number of computers currently selected: |
2474 | + |
2475 | +**Figure 5.3.** |
2476 | + |
2477 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups3.png) |
2478 | + |
2480 | +Find computers you wish to include (see the documentation on [selecting |
2481 | +computers](https://landscape.canonical.com/static/doc/user-guide/ch06.html#selectingcomputers "Selecting computers")), |
2482 | +then tick the checkbox next to each computer you wish to select. Once |
2483 | +you've made your selection, click on the INFO menu entry at the top of |
2484 | +the page Scroll down to the bottom section, choose the access group you |
2485 | +want from the drop-down list, then click Update access group. |
2486 | + |
2487 | +**Figure 5.4.** |
2488 | + |
2489 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups4.png) |
2490 | + |
2492 | + |
2493 | +Associating roles with access groups |
2494 | +------------------------------------ |
2495 | + |
2496 | +An administrator may manage an access group if they are associated with a
2497 | +role that has permission to do so. To associate a role with one or more |
2498 | +access groups, click on the ROLES menu item under your account to |
2499 | +display a screen that shows a role membership matrix. |
2500 | + |
2501 | +**Figure 5.5.** |
2502 | + |
2503 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups5.png) |
2504 | + |
2506 | +The top of that screen shows a list of role names. Click on a role name |
2507 | +to edit the permissions and access groups associated with that role. |
2508 | +Note that you cannot modify the GlobalAdmin role, so there is no link |
2509 | +associated with that label at the top of the matrix. |
2510 | + |
2511 | +Editing access groups |
2512 | +--------------------- |
2513 | + |
2514 | +To change the name or title of an existing access group, click on the |
2515 | +name of the group in the ACCESS GROUPS screen, then click on the Edit |
2516 | +access group link at the top of the next screen. Make changes, then click
2517 | +Save. |
2518 | + |
2519 | +**Figure 5.6.** |
2520 | + |
2521 | +![image](./Chapter%A05.%A0Access%20groups_files/accessgroups6.png) |
2522 | + |
2524 | + |
2525 | +Deleting access groups |
2526 | +---------------------- |
2527 | + |
2528 | +To delete an existing access group, click on the name of the group in |
2529 | +the ACCESS GROUPS screen, then click on the Edit access group link at |
2530 | +the top of the next screen. On the resulting screen, click the Delete
2531 | +button. You may Confirm the group's deletion, or you can click Cancel to |
2532 | +abort the operation. When you delete an access group, its resources move |
2533 | +to its parent access group. |
2534 | + |
2535 | +**Figure 5.7.** |
2536 | + |
2537 | + |
2538 | + |
2539 | +## Managing computers
2540 | + |
2541 | + |
2542 | +Provisioning new computers |
2543 | +-------------------------- |
2544 | + |
2545 | +Landscape can provision computers in two ways: manually, or via Metal as
2546 | +a Service (MAAS). [The Ubuntu wiki explains how to set up
2547 | +MAAS](https://wiki.ubuntu.com/ServerTeam/MAAS/). |
2548 | + |
2549 | +To manually provision computers, click on PROVISIONING under your |
2550 | +ACCOUNT menu. Landscape displays a provisioning dashboard that shows the |
2551 | +number of provisioning servers you have set up, managed systems, and |
2552 | +pending systems. |
2553 | + |
2554 | +**Figure 6.1.** |
2555 | + |
2556 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers1.png) |
2557 | + |
2559 | + |
2560 | +To provision new systems, click the Provision new systems link. On the |
2561 | +Provisioning New Systems screen, the top three fields apply to all the |
2562 | +computers you wish to provision at one time. |
2563 | + |
2564 | +**Figure 6.2.** |
2565 | + |
2566 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers2.png) |
2567 | + |
2569 | +Choose the Ubuntu release/architecture from a drop-down list; the
2570 | +available choices are the two hardware architectures (i386 and amd64)
2571 | +for each Ubuntu release beginning with 12.04. Choose the access group
2572 | +to which the new systems should belong from a drop-down list of the |
2573 | +access groups set up for your account. You can optionally enter user |
2574 | +data, which Landscape can use for special processing. For instance, you |
2575 | +could use this field with Ubuntu's |
2576 | +[cloud-init](https://help.ubuntu.com/community/CloudInit) utility, which |
2577 | +handles early initialization functions for a cloud instance. |
2578 | + |
2579 | +For each computer you wish to provision, enter its MAC address, |
2580 | +hostname, an optional title that will be displayed on the computer |
2581 | +listing screen after the computer is set up, and optional tags separated |
2582 | +by commas that can later help you search for this computer. Click the |
2583 | +Add more systems link to get a new line of empty boxes into which you |
2584 | +can add data. |
2585 | + |
2586 | +When you click the Next button, Landscape displays a screen that lets |
2587 | +you review the information you entered. |
2588 | + |
2589 | +**Figure 6.3.** |
2590 | + |
2591 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers3.png) |
2592 | + |
2594 | +You can click on Back to make changes, or Provision to perform the |
2595 | +operation. Landscape then displays a status screen that at first shows |
2596 | +the specified computers waiting to boot on the MAAS server. |
2597 | + |
2598 | +**Figure 6.4.** |
2599 | + |
2600 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers4.png) |
2601 | + |
2603 | + |
2604 | +Registering computers |
2605 | +--------------------- |
2606 | + |
2607 | +If a computer is provisioned by Landscape, it is automatically
2608 | +registered with Landscape, but when you first install Landscape, your
2609 | +computers are not known to the Landscape server. To manage them, you |
2610 | +must register them with the server. Complete instructions for |
2611 | +registering client computers with a Landscape server are available at |
2612 | +https://yourserver/standalone/how-to-register. You can get to this page |
2613 | +by first clicking on the menu item for your account page on the top |
2614 | +menu, then on the link in the box on the left side of the page. |
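 | +
 | +In outline, registration is done with the landscape-client package on each machine. A minimal sketch, assuming a Landscape Dedicated Server reachable as yourserver and the default standalone account (adjust the title, account name, and URLs to your installation):
 | +
 | +~~~~ {.programlisting}
 | +sudo apt-get install landscape-client
 | +# 'web01' is a placeholder title for this machine.
 | +sudo landscape-config --computer-title "web01" \
 | +    --account-name standalone \
 | +    --url https://yourserver/message-system \
 | +    --ping-url http://yourserver/ping
 | +~~~~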
2615 | + |
2616 | +Selecting computers |
2617 | +------------------- |
2618 | + |
2619 | +You can select one or more computers individually, or by using searches |
2620 | +or tags. For each of those approaches, the starting place is the |
2621 | +COMPUTERS menu entry at the top of the screen. Clicking on it displays a |
2622 | +list of all computers Landscape knows about. |
2623 | + |
2624 | +- To select computers individually, tick the boxes beside their names |
2625 | + in the Select computers list. |
2626 | + |
2627 | +- Using searches - The upper left corner of the Select computers |
2628 | + screen displays the instructions "Refine your selection by searching |
2629 | + or selecting from the tags below," followed by a search box. You can |
2630 | + enter any string in that box and press Enter, or click the arrow |
2631 | + next to the box. Landscape will search both the name and hostname |
2632 | + associated with all computers for a match with the search term. |
2633 | + Searches are not case-sensitive. A list of matching computers is |
2634 | + displayed on the right side of the screen. |
2635 | + |
2636 | + Once you've selected a group of computers, you can apply a tag to |
2637 | + them to make it easier to find them again. To do so, with your |
2638 | + computers selected, click on INFO under COMPUTERS. In the box under |
2639 | + Tags:, enter the tag you want to use and click Add. |
2640 | + |
2641 | +- Using tags - Any tags you have already created appear in a list |
2642 | + under the search box on the left of the Computers screens. You can |
2643 | + click on any tag to display the list of computers associated with |
2644 | + it. To select any of the displayed computers, tick the box next to |
2645 | +    its name, or click the Select: All link at the top of the list.
2646 | + |
2647 | +Information about computers |
2648 | +--------------------------- |
2649 | + |
2650 | +By clicking on several submenus of the COMPUTERS menu, you can get |
2651 | +information about selected computers. |
2652 | + |
2653 | +- Clicking on ACTIVITIES displays information about actions that may |
2654 | + be applied to computers. You can filter the activity log to show |
2655 | + All, Pending, Unapproved, or Failed activities. You can click on |
2656 | + each activity in the list to display a screen showing details about |
2657 | + the activity. On that screen you can Approve, Cancel, Undo, or Redo |
2658 | + the activity by clicking on the relevant button. |
2659 | + |
2660 | +- Clicking on HARDWARE displays information about the selected |
2661 | + computer's processor, memory, network, storage, audio, video, PCI, |
2662 | + and USB hardware, as well as BIOS information and CPU flags. |
2663 | + |
2664 | + **Figure 6.5.** |
2665 | + |
2666 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers5.png) |
2667 | + |
2669 | + |
2670 | +- Clicking on PROCESSES displays information about all processes |
2671 | + running on a computer at the last time it checked in with the |
2672 | + Landscape server, and lets you end or kill processes by selecting |
2673 | + them and clicking on the relevant buttons. |
2674 | + |
2675 | + **Figure 6.6.** |
2676 | + |
2677 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers6.png) |
2678 | + |
2680 | + |
2681 | +- Clicking on REPORTS displays seven pie charts that show what |
2682 | + percentage of computers: |
2683 | + |
2684 | + - are securely patched |
2685 | + |
2686 | + - are covered by upgrade profiles |
2687 | + |
2688 | + - have contacted the server within the last five minutes |
2689 | + |
2690 | + - have applied security updates - four charts show computers that |
2691 | + have applied Ubuntu Security Notices within the last two, 14, |
2692 | + 30, and 60+ days |
2693 | + |
2694 | +- Clicking on MONITORING displays graphs of key performance |
2695 | + statistics, such as CPU load, memory use, disk use, and network |
2696 | + traffic. |
2697 | + |
2698 | + **Figure 6.7.** |
2699 | + |
2700 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers7.png) |
2701 | + |
2703 | + You can also create custom graphs to display at the top of the page |
2704 | + by clicking on the Create some now! link. A drop-down box at the top |
2705 | + of the page lets you specify the timeframe the graph data covers: |
2706 | + one day, three days, one week, or four weeks. You can download the |
2707 | + data behind each graph by clicking the relevant button under the |
2708 | + graph. |
2709 | + |
2710 | +The activity log |
2711 | +---------------- |
2712 | + |
2713 | +The right side of the dashboard that displays when you click on your |
2714 | +account menu, and when you click on the ACTIVITIES submenu, shows the |
2715 | +status of Landscape activities, displayed in reverse chronological |
2716 | +order. |
2717 | + |
2718 | +**Figure 6.8.** |
2719 | + |
2720 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers8.png) |
2721 | + |
2723 | +You can view details on an individual activity by clicking on its |
2724 | +description. Each activity is labeled with a status; possible values |
2725 | +are: |
2726 | + |
2727 | +- Succeeded |
2728 | + |
2729 | +- In progress |
2730 | + |
2731 | +- Scheduled |
2732 | + |
2733 | +- Queued |
2734 | + |
2735 | +- Unapproved |
2736 | + |
2737 | +- Canceled |
2738 | + |
2739 | +- Failed |
2740 | + |
2741 | +You can select a subset to view by clicking on the links above the table |
2742 | +for All, Pending, Unapproved, or Failed activities. |
2743 | + |
2744 | +In addition to the status and description of each activity, the table |
2745 | +shows what computers the activity applied to, who created it, and when. |
2746 | + |
2747 | +Managing users |
2748 | +-------------- |
2749 | + |
2750 | +Clicking on USERS displays a list of users on each of the selected |
2751 | +computers. |
2752 | + |
2753 | +**Figure 6.9.** |
2754 | + |
2755 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers9.png) |
2756 | + |
2758 | +You can select one or more users, then click one of the buttons at the |
2759 | +top of the screen: |
2760 | + |
2761 | +- The ADD button lets you add a new user to the selected computers. |
2762 | + |
2763 | + **Figure 6.10.** |
2764 | + |
2765 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers10.png) |
2766 | + |
2768 | + You must specify the person's name, a username, and a passphrase. |
2769 | + You may also specify a location and telephone numbers. Click the ADD |
2770 | + button at the bottom of the screen to complete the operation. |
2771 | + |
2772 | +- The DELETE button displays a screen that lets you delete the |
2773 | + selected users. |
2774 | + |
2775 | + **Figure 6.11.** |
2776 | + |
2777 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers11.png) |
2778 | + |
2780 | + You may also tick a checkbox to delete the user's home folders as |
2781 | + well. Press the Delete button at the bottom of the screen to |
2782 | + complete the operation. |
2783 | + |
2784 | +- The EDIT button displays a User details screen that lets you change |
2785 | + details such as the person's name, primary group, passphrase, |
2786 | + location, and telephone numbers, and add or remove the user from |
2787 | + groups on the selected computers. |
2788 | + |
2789 | + **Figure 6.12.** |
2790 | + |
2791 | + ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers12.png) |
2792 | + |
2794 | + |
2795 | +- The LOCK button prevents the selected users from logging into their |
2796 | + accounts. |
2797 | + |
2798 | +- The UNLOCK button lets users into their cars when they've |
2799 | + accidentally locked their keys inside. Actually, no, it simply |
2800 | + unlocks previously locked accounts. |
2801 | + |
2802 | +Managing alerts |
2803 | +--------------- |
2804 | + |
2805 | +Landscape uses alerts to notify administrators of conditions that |
2806 | +require attention. The following types of alerts are available: |
2807 | + |
2808 | +- when a pending computer needs to be accepted or rejected |
2809 | + |
2810 | +- when you are exceeding your license entitlements for Landscape |
2811 | + Dedicated Server (This alert does not apply to the hosted version of |
2812 | + Landscape.) |
2813 | + |
2814 | +- when new package updates are available for computers |
2815 | + |
2816 | +- when new security updates are available for computers |
2817 | + |
2818 | +- when a package profile is not applied |
2819 | + |
2820 | +- when package reporting fails (Each client runs the command **apt-get |
2821 | + update** every 60 minutes. Anything that prevents that command from |
2822 | + succeeding is considered a package reporting failure.) |
2823 | + |
2824 | +- when an activity requires explicit administrator acceptance or |
2825 | + rejection |
2826 | + |
2827 | +- when a computer has not contacted the Landscape server for more than |
2828 | + five minutes |
2829 | + |
2830 | +- when computers need to be rebooted in order for a package update |
2831 | + (such as a kernel update) to take effect |
2832 | + |
2833 | +To configure alerts, click on the Configure alerts link in the |
2834 | +dashboard, or click on your account's ALERTS menu item. Tick the check |
2835 | +box next to each type of alert you want to subscribe to, or click the |
2836 | +All or None buttons at the top of the table, then click on the Subscribe |
2837 | +or Unsubscribe button below the table. |
2838 | + |
2839 | +**Figure 6.13.** |
2840 | + |
2841 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers13.png) |
2842 | + |
2844 | + |
2845 | +The Alerts screen shows the status of each alert. If an alert has not |
2846 | +been tripped, the status is OK; if it has, the status is Alerted. The |
2847 | +last column notes whether the alert applies to your account (pending |
2848 | +computers, for instance, are not yet Landscape clients, but they are |
2849 | +part of your account), to all computers, or to a specified set of tagged |
2850 | +computers. |
2851 | + |
2852 | +If an alert is tripped, chances are an administrator should investigate |
2853 | +it. You can see alerts on the account dashboard that displays when you |
2854 | +click on your account name on the top menu. The description for each |
2855 | +alert is a link; click on it to see a table of alerts. When you click on |
2856 | +an alert, the resulting screen shows relevant information about the |
2857 | +problem. For instance, if you click on an alert about computers having |
2858 | +issues reporting packages, the table shows the computer affected, the |
2859 | +error code, and error output text. |
2860 | + |
2861 | +**Figure 6.14.** |
2862 | + |
2863 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers14.png) |
2864 | + |
2866 | +On some alert screens you can download the list of affected computers as |
2867 | +a CSV file or save the criteria that generated the alert as a saved |
2868 | +search by clicking the relevant button at the bottom of the screen. |
2869 | + |
2870 | +Managing scripts |
2871 | +---------------- |
2872 | + |
2873 | +Landscape lets you run scripts on the computers you manage in your |
2874 | +account. The scripts may be in any language, as long as an interpreter |
2875 | +for that language is present on the computers on which they are to run. |
2876 | +You can maintain a library of scripts for common tasks. You can manage |
2877 | +scripts from the STORED SCRIPTS menu under your account, and run them |
2878 | +against computers from the SCRIPTS menu under COMPUTERS. |
2879 | + |
2880 | +The Stored scripts screen displays a list of existing scripts, along |
2881 | +with the access groups each belongs to and its creator. |
2882 | + |
2883 | +**Figure 6.15.** |
2884 | + |
2885 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers15.png) |
2886 | + |
2888 | +You can edit a script by clicking on its name. To delete a stored |
2889 | +script, tick the check box next to its name, then click Remove. If you |
2890 | +have the proper permissions, Landscape erases the script immediately |
2891 | +without asking for confirmation. |
2892 | + |
2893 | +From the Stored scripts screen you can add a new script by clicking on |
2894 | +Add stored script. |
2895 | + |
2896 | +**Figure 6.16.** |
2897 | + |
2898 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers16.png) |
2899 | + |
2901 | +On the Create script screen you must enter a title, interpreter, the |
2902 | +script code, the time within which the script must complete, and the |
2903 | +access group to which the script belongs. You may enter a default user |
2904 | +to run the script as; if you don't, you will have to specify the user |
2905 | +when you choose to run the script. You may also attach as many as five |
2906 | +files with a maximum of 1MB in total size. On each computer on which a |
2907 | +script runs, attachments are placed in the directory specified by the |
2908 | +environment variable LANDSCAPE\_ATTACHMENTS, and are deleted once the |
2909 | +script has been run. After specifying all the information for a stored |
2910 | +script, click on Save to save it. |
2911 | + |
2912 | +To run a stored script, go to the SCRIPTS menu under COMPUTERS. Here you |
2913 | +can choose to run a stored script, or run a new script. |
2914 | + |
2915 | +**Figure 6.17.** |
2916 | + |
2917 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers17.png) |
2918 | + |
2920 | +When you choose to run an existing script, Landscape displays the script |
2921 | +details, which allows you to modify any information. You must specify |
2922 | +the user on the target computers to run the script as, and schedule the |
2923 | +script to run either as soon as possible, or at a specified time. When |
2924 | +you're ready to run the script, click on Run. |
2925 | + |
2926 | +To run a new script, you must enter most of the same information you |
2927 | +would if you were creating a stored script, with three differences. |
2928 | + |
2929 | +**Figure 6.18.** |
2930 | + |
2931 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers18.png) |
2932 | + |
2934 | +On this screen you must specify the user on the target computers to run |
2935 | +the script as, and you may optionally tick a check box to store the |
2936 | +script in your script library. You must also schedule the script to run |
2937 | +either as soon as possible, or at a specified time. When you're ready to |
2938 | +run the script, click on Run. |
2939 | + |
2940 | +Managing upgrade profiles |
2941 | +------------------------- |
2942 | + |
2943 | +An upgrade profile defines a schedule for the times when upgrades are to |
2944 | +be automatically installed on the machines associated with a specific |
2945 | +access group. You can associate zero or more computers with each upgrade |
2946 | +profile via tags to install packages on those computers. You can also |
2947 | +associate an upgrade profile with an access group, which limits its use |
2948 | +to only computers within the specified access group. You can manage |
2949 | +upgrade profiles from the UPGRADE PROFILES link in the PROFILES choice |
2950 | +under your account. |
2951 | + |
2952 | +When you do so, Landscape displays a list of the names and descriptions |
2953 | +of existing upgrade profiles. |
2954 | + |
2955 | +**Figure 6.19.** |
2956 | + |
2957 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers19.png) |
2958 | + |
2960 | +To see the details of an existing profile, click on its name to display |
2961 | +a screen that shows the name, schedule, and tags of computers associated |
2962 | +with the upgrade profile. If you want to change the upgrade profile's |
2963 | +name or schedule, click on the Edit upgrade profile link. If you want to |
2964 | +change the computers associated with the upgrade profile, tick or untick |
2965 | +the check boxes next to the tags on the lower part of the screen, then |
2966 | +click on the Change button. Though you can see the access group |
2967 | +associated with the upgrade profile, you cannot change the access groups |
2968 | +anywhere but from their association with a computer. |
2969 | + |
2970 | +To add an upgrade profile, click on the Add upgrade profile link. |
2971 | + |
2972 | +**Figure 6.20.** |
2973 | + |
2974 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers20.png) |
2975 | + |
2977 | +On the resulting Create an upgrade profile screen you must enter a name |
2978 | +for the upgrade profile. Names can contain only letters, numbers, and |
2979 | +hyphens. You may check a box to make the upgrade profile apply only to |
2980 | +security upgrades; if you leave it unchecked, it will target all |
2981 | +upgrades. Specify the access group to which the upgrade profile belongs |
2982 | +from a drop-down list. Finally, specify the schedule on which the |
2983 | +upgrade profile can run. You can specify a number of hours to let the |
2984 | +upgrade profile run; if it does not complete successfully in that time, |
2985 | +Landscape will trigger an alert. Click on the Save button to save the |
2986 | +new upgrade profile. |
2987 | + |
2988 | +To delete one or more upgrade profiles, tick a check box next to the |
2989 | +upgrade profiles' names, then click on the Remove button. |
2990 | + |
2991 | +Managing removal profiles |
2992 | +------------------------- |
2993 | + |
2994 | +A removal profile defines a maximum number of days that a computer can |
2995 | +go without exchanging data with the Landscape server before it is |
2996 | +automatically removed. If more days pass than the profile's "Days |
2997 | +without exchange", that computer will automatically be removed and the |
2998 | +license seat it held will be released. This helps Landscape keep license |
2999 | +seats open and ensure Landscape is not tracking stale or retired |
3000 | +computer data for long periods of time. You can associate zero or more |
3001 | +computers with each removal profile via tags to ensure those computers |
3002 | +are governed by this removal profile. You can also associate a removal |
3003 | +profile with an access group, which limits its use to only computers |
3004 | +within the specified access group. You can manage removal profiles from |
3005 | +the REMOVAL PROFILES link in the PROFILES choice under your account. |
3006 | + |
3007 | +When you do so, Landscape displays a list of the names and descriptions |
3008 | +of existing removal profiles. |
3009 | + |
3010 | +**Figure 6.21.** |
3011 | + |
3012 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers21.png) |
3013 | + |
3015 | +To see the details of an existing profile, click on its name to display |
3016 | +a screen that shows the title, name and number of days without exchange |
3017 | +before the computer is automatically removed, and tags of computers |
3018 | +associated with the removal profile. If you want to change the removal |
3019 | +profile's title or number of days before removal, click on the Edit |
3020 | +removal profile link. If you want to change the computers associated |
3021 | +with the removal profile, tick or untick the check boxes next to the |
3022 | +tags on the lower part of the screen, then click on the Change button. |
3023 | +Though you can see the access group associated with the removal profile, |
3024 | +you cannot change the access groups anywhere but from their association |
3025 | +with a computer. |
3026 | + |
3027 | +To add a removal profile, click on the Add removal profile link. |
3028 | + |
3029 | +**Figure 6.22.** |
3030 | + |
3031 | +![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers22.png) |
3032 | + |
3034 | +On the resulting Create a removal profile screen you must enter a title |
3035 | +for the removal profile. Specify the access group to which the removal |
3036 | +profile belongs from a drop-down list. Finally, specify the number of
3037 | +days a computer may go without exchanging data with Landscape
3038 | +before it is automatically removed and its license seat is
3039 | +released. Click on the Save button to save
3041 | +the new removal profile. |
3042 | + |
3043 | +To delete one or more removal profiles, tick a check box next to the |
3044 | +removal profiles' names, then click on the Remove button. |
3045 | + |
3046 | +## Managing packages
3047 | + |
3048 | + |
3049 | +A package is a group of related files that comprise everything you need |
3050 | +to install an application. Packages are stored in repositories, and each |
3051 | +package is managed via a package profile, which is a record of the |
3052 | +package's dependencies and conflicts. |
3053 | + |
3054 | +Package information |
3055 | +------------------- |
3056 | + |
3057 | +Clicking on PACKAGES under the COMPUTERS menu displays a screen where |
3058 | +you can search for information about all the packages Landscape knows |
3059 | +about. You may first specify a package name or other search string, then |
3060 | +press Enter or click on the arrow next to the box. Landscape then |
3061 | +displays a list of packages that meet the search criteria. |
3062 | + |
3063 | +**Figure 7.1.** |
3064 | + |
3065 | +![image](./Chapter%A07.%A0Managing%20packages_files/managepackages1.png) |
3066 | + |
3068 | +The top of the screen displays summary information about the packages: |
3069 | +clickable links to which computers have security updates and other |
3070 | +upgrades to be installed, and the number of computers that are |
3071 | +up-to-date and those that have not reported package information. |
3072 | + |
3073 | +The next section provides a list of security issues on computers that |
3074 | +need security updates. You can click on the name or USN number of a |
3075 | +security issue to see a full Ubuntu Security Notice. |
3076 | + |
3077 | +**Figure 7.2.** |
3078 | + |
3079 | +![image](./Chapter%A07.%A0Managing%20packages_files/managepackages2.png) |
3080 | + |
3082 | +The third section displays package information in the form of four |
3083 | +numbers for each selected computer: the number of packages available and |
3084 | +installed, pending upgrades, and held upgrades. You can click on the |
3085 | +number of pending or held upgrades to see a screen that lets you modify |
3086 | +the relevant package list and set a time for the upgrades to take place: |
3087 | + |
3088 | +**Figure 7.3.** |
3089 | + |
3090 | +![image](./Chapter%A07.%A0Managing%20packages_files/managepackages3.png) |
3091 | + |
3092 | + |
3093 | +Finally, a Request upgrades button at the bottom of the screen lets you |
3094 | +quickly request that all possible upgrades be applied to the selected |
3095 | +computers. Any resulting activities require explicit administrator |
3096 | +approval. |
3097 | + |
3098 | +Adding a package profile |
3099 | +------------------------ |
3100 | + |
3101 | +Landscape uses package profiles (also called meta packages) to make sure |
3102 | +the proper software is installed when you request packages. You can |
3103 | +think of a package profile as a package with no file contents, just |
3104 | +dependencies and conflicts. With that information, the package profile |
3105 | +can trigger the installation of other packages necessary for the |
3106 | +requested package to run, or trigger the removal of software that |
3107 | +conflicts with the requested package. These dependencies and conflicts |
3108 | +fall under the general category of constraints. |
3109 | + |
3110 | +To manage package profiles, click the PROFILES menu entry under your |
3111 | +account and the Package profiles link. The Package profiles screen |
3112 | +displays a list of existing package profiles and a link that you can |
3113 | +click to add a new package profile. |
3114 | + |
3115 | +**Figure 7.4.** |
3116 | + |
3117 | +![image](./Chapter%A07.%A0Managing%20packages_files/managepackages4.png) |
3118 | + |
3120 | +Click on that link to display the Create package profile screen: |
3121 | + |
3122 | +**Figure 7.5.** |
3123 | + |
3124 | +![image](./Chapter%A07.%A0Managing%20packages_files/managepackages5.png) |
3125 | + |
3127 | +Here you enter a name for the package profile, a description (which |
3128 | +appears at the top of the package profile's information screen), the |
3129 | +access group to which the package profile should belong, and, |
3130 | +optionally, any package constraints - packages that this profile depends |
3131 | +on or conflicts with. The constraints drop-down list lets you add
3132 | +constraints in three ways: based on a computer's installed packages, |
3133 | +imported from a previously exported CSV file or the output of the **dpkg |
3134 | +--get-selections** command, or manually added. Use the first option if |
3135 | +you want to replicate one computer to another, as it makes all currently |
3136 | +installed packages that are on the selected computer dependencies of the |
3137 | +package profile you're creating. The second approach imports the |
3138 | +dependencies of a previously exported package profile. The manual |
3139 | +approach is suitable when you have few dependencies to add, all of which |
3140 | +you know. |
3141 | + |
3142 | +When you save a package profile, behind the scenes Landscape creates a |
3143 | +Debian package with the specified dependencies and conflicts and gives |
3144 | +it a name and a version. Every time you change the package profile, |
3145 | +Landscape increments the version by one. |
3146 | + |
3147 | +If Landscape finds computers on which the package profile should be |
3148 | +installed, it creates an activity to do so. That activity will run |
3149 | +unattended, except that you must provide explicit administrator approval |
3150 | +to remove any packages that the package profile wants to delete. |
3151 | + |
3152 | +Exporting a package profile |
3153 | +--------------------------- |
3154 | + |
3155 | +You can export a package profile in order to reuse its constraints
3156 | +in a new package profile. To export a package profile,
3157 | +click the PROFILES menu entry under your account and the Package |
3158 | +profiles link. Tick the check box next to the package profiles you want to
3159 | +export, then click Download as CSV. |
3160 | + |
3161 | +Modifying a package profile |
3162 | +--------------------------- |
3163 | + |
3164 | +To modify a package profile, click the PROFILES menu entry under your |
3165 | +account and the Package profiles link, then click on the name of a |
3166 | +package profile in the list. |
3167 | + |
3168 | +Deleting a package profile |
3169 | +-------------------------- |
3170 | + |
3171 | +To delete a package profile, click the PROFILES menu entry under your |
3172 | +account and then the Package profiles link. Tick the check box next to |
3173 | +the package profiles you want to delete, then click Remove. The package profile
3174 | +is deleted immediately, with no prompt to confirm the action. |
3175 | + |
3176 | +Repositories |
3177 | +------------ |
3178 | + |
3179 | +Packages are stored in repositories. A repository is simply a designated |
3180 | +location that stores packages. You can manage Landscape repositories |
3181 | +only via [the Landscape |
3182 | +API](https://landscape.canonical.com/static/doc/user-guide/ch09.html "Chapter 9. The Landscape API"). |
3183 | + |
3184 | + |
3185 | + |
3186 | + |
3192 | + |
3193 | +## Use cases
3195 | + |
3196 | + |
3197 | +You can use Landscape to perform many common system administration tasks |
3198 | +easily and automatically. Here are a few examples. |
3199 | + |
3200 | +How do I upgrade all packages on a certain group of machines? |
3201 | +------------------------------------------------------------- |
3202 | + |
3203 | +First, tag the machines you want to upgrade with a common tag, so you |
3204 | +can use the tag anytime you need to manage those computers as a group. |
3205 | +If, for instance, you want to upgrade all your desktop computers, you |
3206 | +might want to use "desktop" as a tag. Select your computers, then click |
3207 | +on COMPUTERS on the top menu, and under that INFO. In the box under |
3208 | +Tags:, enter the tag you want to use and click the Add button. |
3209 | + |
3210 | +If you've already tagged the computers, click on COMPUTERS, then click |
3211 | +on the tag in the left column. |
3212 | + |
3213 | +With your desktop computers selected, click on COMPUTERS, then PACKAGES. |
3214 | +Scroll to the bottom of the screen, where you'll see a Request upgrades |
3215 | +button. Click it to queue the upgrade tasks. |
3216 | + |
3217 | +![image](./Chapter%A08.%A0Use%20cases_files/usecases1.png) |
3218 | + |
3219 | +While the upgrade tasks are now in the queue, they will not be executed |
3220 | +until you approve them. To do so, next to Select:, click All, then click |
3221 | +on the Approve button at the bottom of the page. |
3222 | + |
3223 | +How do I keep all of my file servers automatically up to date? |
3224 | +-------------------------------------------------------------- |
3225 | + |
3226 | +The best way is to use [upgrade |
3227 | +profiles](https://landscape.canonical.com/static/doc/user-guide/ch02.html#defineupgradeprofiles), |
3228 | +which rely on access groups. |
3229 | + |
3230 | +If an access group for your file servers already exists, simply click on |
3231 | +its name. If not, you must create an access group for them. To do so, |
3232 | +click on your account, then on ACCESS GROUPS. Specify a name for your |
3233 | +new access group and click the Save button. You must then add computers |
3234 | +to the access group. To do that, click on COMPUTERS, then select all |
3235 | +your file servers by using a tag, if one exists, or a search, or by |
3236 | +ticking them individually. Once all the computers you want to add to the |
3237 | +access group are tagged, click on the INFO menu choice, scroll down to |
3238 | +the bottom section, choose the access group you want from the drop-down |
3239 | +list, then click the Update access group button. |
3240 | + |
3241 | +![image](./Chapter%A08.%A0Use%20cases_files/accessgroups4.png) |
3242 | + |
3243 | +Once you have all your file servers in an access group you can create an |
3244 | +upgrade profile for them. Click on your account, then PROFILES menu |
3245 | +following the Upgrade profiles link, and then on the Add upgrade profile |
3246 | +link. Enter a name for the new upgrade profile, choose the access group |
3247 | +you wish to associate with it, and specify the schedule on which the |
3248 | +upgrades should run, then click the Save button. |
3249 | + |
3250 | +How do I keep Landscape from upgrading a certain package on one of my servers? |
3251 | +------------------------------------------------------------------------------ |
3252 | + |
3253 | +First find the package by clicking on COMPUTERS, then PACKAGES. Use the |
3254 | +search box at the top of the screen to find the package you want. Click |
3255 | +the triangle on the left of the listing line of the package you want to |
3256 | +hold, which expands the information for that package. Now click on the |
3257 | +icon to the left of the package name. A new icon with a lock replaces |
3258 | +the old one, indicating that this package is to be held during upgrades. |
3259 | +Scroll to the bottom of the page and click on the Apply Changes button. |
3260 | + |
3261 | +![image](./Chapter%A08.%A0Use%20cases_files/usecases2.png) |
3262 | + |
3263 | +How do I set up a custom graph? |
3264 | +------------------------------- |
3265 | + |
3266 | +First select the computers whose information you want to see. One good |
3267 | +way to do so is to create a tag for that group of computers.
3268 | +Suppose you want to monitor the size of the PostgreSQL
3269 | +database on your database servers. Select the servers, then click on |
3270 | +COMPUTERS on the top menu, and INFO under that. In the box under Tags:, |
3271 | +enter a tag name, such as "db-server," and click the Add button. Next, |
3272 | +under your account, click on CUSTOM GRAPHS, then on the link to Add |
3273 | +custom graph. Enter a title, and in the \#! field, enter **/bin/sh** to |
3274 | +indicate a shell script. In the Code section, enter the commands |
3275 | +necessary to create the data for the graph. For this example, the |
3276 | +command might be: |
3277 | + |
3278 | +~~~~ {.programlisting} |
3279 | +psql -tAc "select pg_database_size('postgres')" |
3280 | +~~~~ |
3281 | + |
3282 | +For Run as user, enter **postgres**. |
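 | +
 | +Before saving, it can help to test the command manually on one of the target machines as that user (a sketch, assuming local peer authentication for the postgres user):
 | +
 | +~~~~ {.programlisting}
 | +# Should print a single number: the database size in bytes.
 | +sudo -u postgres psql -tAc "select pg_database_size('postgres')"
 | +~~~~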
3283 | + |
3284 | +Fill in the Y-axis title, then click the Save button at the bottom of
3285 | +the page. |
3286 | + |
3287 | +![image](./Chapter%A08.%A0Use%20cases_files/usecases3.png) |
3288 | + |
3289 | +To view the graph, click on COMPUTERS, then MONITORING. You can select |
3290 | +the monitoring period from the drop-down box at the top of the window. |
3291 | + |
3292 | +How do I ensure all computers with a given tag have a common list of packages installed? |
3293 | +---------------------------------------------------------------------------------------- |
3294 | + |
3295 | +Manage them via a [package |
3296 | +profile](https://landscape.canonical.com/static/doc/user-guide/ch07.html#definepp "Adding a package profile"). |
3297 | + |
3298 | + |
3299 | |
3300 | === removed file 'Installing-Ceph.md' |
3301 | --- Installing-Ceph.md 2014-04-07 13:23:30 +0000 |
3302 | +++ Installing-Ceph.md 1970-01-01 00:00:00 +0000 |
3303 | @@ -1,56 +0,0 @@ |
3304 | -Title: Installing - Ceph |
3305 | -Status: Review |
3306 | - |
3307 | -# Installing - Ceph |
3308 | - |
3309 | -## Introduction |
3310 | - |
3311 | -Typically OpenStack uses the local storage of their nodes for the configuration data |
3312 | -as well as for the object storage provided by Swift and the block storage provided by |
3313 | -Cinder and Glance. But it also can use Ceph as storage backend. Ceph stripes block |
3314 | -device images across a cluster. This way it provides a better performance than typical |
3315 | -standalone server. It allows scalabillity and redundancy needs to be satisfied and |
3316 | -Cinder's RDB driver used to create, export and connect volumes to instances. |
3317 | - |
3318 | -## Scope |
3319 | - |
3320 | -This document covers the deployment of Ceph via Juju. Other related documents are |
3321 | - |
3322 | -- [Scaling Ceph](Scaling-Ceph.md) |
3323 | -- [Troubleshooting Ceph](Troubleshooting-Ceph.md) |
3324 | -- [Appendix Ceph and OpenStack](Appendix-Ceph-and-OpenStack.md) |
3325 | - |
3326 | -## Deployment |
3327 | - |
3328 | -During the installation of OpenStack we've already seen the deployment of Ceph via |
3329 | - |
3330 | -``` |
3331 | -juju deploy --config openstack-config.yaml -n 3 ceph |
3332 | -juju deploy --config openstack-config.yaml -n 10 ceph-osd |
3333 | -``` |
3334 | - |
3335 | -This will install three Ceph nodes configured with the information contained in the |
3336 | -file `openstack-config.yaml`. This file contains the configuration `block-device: None` |
3337 | -for Cinder, so that this component does not use the local disk. Instead we're calling |
3338 | -Additionally 10 Ceph OSD nodes providing the object storage are deployed and related |
3339 | -to the Ceph nodes by |
3340 | - |
3341 | -``` |
3342 | -juju add-relation ceph-osd ceph |
3343 | -``` |
3344 | - |
3345 | -Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which |
3346 | -will scan for the configured storage devices and add them to the pool of available storage. |
3347 | -Now the relation to Cinder and Glance can be established with |
3348 | - |
3349 | -``` |
3350 | -juju add-relation cinder ceph |
3351 | -juju add-relation glance ceph |
3352 | -``` |
3353 | - |
3354 | -so that both are using the storage provided by Ceph. |
3355 | - |
3356 | -## See also |
3357 | - |
3358 | -- https://manage.jujucharms.com/charms/precise/ceph |
3359 | -- https://manage.jujucharms.com/charms/precise/ceph-osd |
3360 | |
3361 | === removed file 'Installing-MAAS.md' |
3362 | --- Installing-MAAS.md 2014-04-02 23:18:00 +0000 |
3363 | +++ Installing-MAAS.md 1970-01-01 00:00:00 +0000 |
3364 | @@ -1,467 +0,0 @@ |
3365 | -Title: Installing MAAS |
3366 | -Status: In progress |
3367 | -Notes: |
3368 | - |
3369 | - |
3370 | - |
3371 | - |
3372 | - |
3373 | -#Installing the MAAS software |
3374 | - |
3375 | -##Scope of this documentation |
3376 | - |
3377 | -This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For the purposes of this documentation, the following assumptions have been made: |
3378 | -* You have sufficient, appropriate node hardware |
3379 | -* You will be using Juju to assign workloads to MAAS |
3380 | -* You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP) |
3381 | -* If you have a compatible power-management system, any additional hardware required is also installed(e.g. IPMI network). |
3382 | - |
3383 | -## Introducing MAAS |
3384 | - |
3385 | -Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource. |
3386 | - |
3387 | -What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud. |
3388 | - |
3389 | -When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova. |
3390 | - |
3391 | -MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal. |
3392 | - |
3393 | -## Installing MAAS from the Cloud Archive |
3394 | - |
3395 | -The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of MAAS, Juju and other tools. It is highly recommended to configure this repository and use it to keep your software up to date: |
3396 | - |
3397 | -``` |
3398 | -sudo add-apt-repository cloud-archive:tools |
3399 | -sudo apt-get update |
3400 | -``` |
3401 | - |
3402 | -There are several packages that comprise a MAAS install. These are: |
3403 | - |
3404 | -maas-region-controller: |
  This comprises the 'control' part of the software, including the web-based user interface, the API server and the main database.
3406 | -maas-cluster-controller: |
3407 | - This includes the software required to manage a cluster of nodes, including managing DHCP and boot images. |
3408 | -maas-dns: |
3409 | - This is a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes. |
maas-dhcp:
3411 | - As for DNS, there is a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes. |
3412 | - |
As a convenience, there is also a `maas` metapackage, which will install all these components.
3414 | - |
3415 | - |
3416 | -If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually (see [_the description of a typical setup_](https://www.filepicker.io/api/file/orientation.html#setup) for more background on how a typical hardware setup might be arranged). |
3417 | - |
3418 | - |
3419 | - |
3420 | - |
3421 | -### Installing the packages |
3422 | - |
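If you want all of the components on one machine, the `maas` metapackage mentioned above is the simplest route (a sketch; install the individual packages listed earlier instead if you are splitting services across machines):

```
sudo apt-get install maas
```
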
3423 | -The configuration for the MAAS controller will automatically run and pop up this config screen: |
3424 | - |
![](install_cluster-config.png)
3426 | - |
3427 | -Here you will need to enter the hostname for where the region controller can be contacted. In many scenarios, you may be running the region controller (i.e. the web and API interface) from a different network address, for example where a server has several network interfaces. |
3428 | - |
3429 | -Once the configuration scripts have run you should see this message telling you that the system is ready to use: |
3430 | - |
![](install_controller-config.png)
3432 | - |
3433 | -The web server is started last, so you have to accept this message before the service is run and you can access the Web interface. Then there are just a few more setup steps [_Post-Install tasks_](https://www.filepicker.io/api/file/WMGTttJT6aaLnQrEkAPv?signature=a86d0c3b4e25dba2d34633bbdc6873d9d8e6ae3cecc4672f2219fa81ee478502&policy=eyJoYW5kbGUiOiJXTUdUdHRKVDZhYUxuUXJFa0FQdiIsImV4cGlyeSI6MTM5NTE3NDE2MSwiY2FsbCI6WyJyZWFkIl19#post-install) |
3434 | - |
3435 | -The maas-dhcp and maas-dns packages should be installed by default. You can check whether they are installed with: |
3436 | - |
3437 | -``` |
3438 | -dpkg -l maas-dhcp maas-dns |
3439 | -``` |
3440 | - |
3441 | -If they are missing, then: |
3442 | - |
3443 | -``` |
3444 | -sudo apt-get install maas-dhcp maas-dns |
3445 | -``` |
3446 | - |
3447 | -And then proceed to the post-install setup below. |
3448 | - |
3449 | -If you now use a web browser to connect to the region controller, you should see that MAAS is running, but there will also be some errors on the screen: |
3450 | - |
![](install_web-init.png)
3452 | - |
The on-screen messages will tell you that there are no boot images present, and that you can't log in because there is no admin user.
3454 | - |
3455 | -## Create a superuser account |
3456 | - |
3457 | -Once MAAS is installed, you'll need to create an administrator account: |
3458 | - |
3459 | -``` |
3460 | -sudo maas createadmin --username=root --email=MYEMAIL@EXAMPLE.COM |
3461 | -``` |
3462 | - |
3463 | -Substitute your own email address in the command above. You may also use a different username for your administrator account, but "root" is a common convention and easy to remember. The command will prompt for a password to assign to the new user. |
3464 | - |
3465 | -You can run this command again for any further administrator accounts you may wish to create, but you need at least one. |
3466 | - |
3467 | -## Import the boot images |
3468 | - |
MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you will need to connect to the MAAS API using the maas-cli tool (see Appendix II below for details). Then you need to run the command:
3470 | - |
3471 | -``` |
3472 | -maas-cli maas node-groups import-boot-images |
3473 | -``` |
3474 | - |
(substitute in a different profile name for 'maas' if you have called yours something else). This will initiate downloading the required image files. Note that this may take some time depending on your network connection.
3476 | - |
3477 | - |
3478 | -## Login to the server |
3479 | - |
To check that everything is working properly, you should try to log in to the server now. Both the error messages should have gone (it can take a few minutes for the boot image files to register) and you can see that there are currently 0 nodes attached to this controller.
3481 | - |
![](install-login.png)

## Configure switches on the network
3484 | - |
3485 | -Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, it can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use. |
3486 | - |
3487 | -To alleviate this problem, you should enable [Portfast](https://www.symantec.com/business/support/index?page=content&id=HOWTO6019) for Cisco switches or its equivalent on other vendor equipment, which enables the ports to come up almost immediately. |
3488 | - |
3489 | -##Add an additional cluster |
3490 | - |
Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
3492 | - |
Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the cluster controller software:
3494 | - |
3495 | -``` |
3496 | -sudo add-apt-repository cloud-archive:tools |
3497 | -sudo apt-get update |
3498 | -sudo apt-get install maas-cluster-controller |
3499 | -sudo apt-get install maas-dhcp |
3500 | -``` |
3501 | - |
3502 | -During the install process, a configuration window will appear. You merely need to type in the address of the MAAS controller API, like this: |
3503 | - |
![](config-image.png)
3505 | - |
3506 | -## Configure Cluster Controller(s) |
3507 | - |
3508 | -### Cluster acceptance |
3509 | -When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as “pending,” until you manually accept them into the MAAS. |
3510 | - |
3511 | -To accept a cluster controller, click on the settings “cog” icon at the top right to visit the settings page: |
![](settings.png)
3513 | -You can either click on “Accept all” or click on the edit icon to edit the cluster. After clicking on the edit icon, you will see this page: |
3514 | - |
![](cluster-edit.png)
3516 | -Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.” |
3517 | - |
3518 | -Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made. |
3519 | - |
3520 | -### Configuration |
3521 | -MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface. |
3522 | - |
3523 | -As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0, which takes us to this page: |
3524 | - |
![](cluster-interface-edit.png)
3526 | -Here you can select to what extent you want the cluster controller to manage the network: |
3527 | - |
- DHCP only - this will run a DHCP server on your cluster
- DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region controller so that it can be used to look up hosts on this network by name.

!!! note: You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster.

If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.
3533 | - |
!!! note: There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any nodes. Or, if you do want to manage nodes but don’t want the cluster controller to serve DHCP, you may be able to get by without it. This is explained in Manual DHCP configuration.
3535 | - |
3536 | -!!! note: A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture. |
3537 | - |
3538 | -## Enlisting nodes |
3539 | - |
Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice-versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.
3541 | - |
### Automatic Discovery
3543 | -With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down. |
3544 | - |
During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
3546 | - |
To save time, you can also accept and commission all nodes from the commandline. This requires that you first log in with the API key [1], which you can retrieve from the web interface:
3548 | - |
3549 | -``` |
3550 | -maas-cli maas nodes accept-all |
3551 | -``` |
3552 | - |
3553 | -### Manually adding nodes |
3554 | - |
3555 | -If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the Nodes screen: |
![](add-node.png)
3557 | - |
3558 | -Select 'Add node' and manually enter details about the node, including its MAC address. This is used to identify the node when it contacts the DHCP server. |
3559 | - |
3560 | - |
3561 | - |
3562 | -## Preparing MAAS for Juju using Simplestreams |
3563 | - |
3564 | -When Juju bootstraps a cloud, it needs two critical pieces of information: |
3565 | - |
3566 | -1. The uuid of the image to use when starting new compute instances. |
3567 | -2. The URL from which to download the correct version of a tools tarball. |
3568 | - |
3569 | -This necessary information is stored in a json metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud, Azure, etc, no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (eg use a different Ubuntu image), can create their own metadata, after understanding a bit about how it works. |
3570 | - |
The simplestreams format is used to describe related items in a structural fashion ([see the Launchpad project lp:simplestreams for more details on implementation](https://launchpad.net/simplestreams)). Below we will discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
3572 | - |
3573 | -### Basic Workflow |
3574 | - |
3575 | -Whether images or tools, Juju uses a search path to try and find suitable metadata. The path components (in order of lookup) are: |
3576 | - |
3577 | -1. User supplied location (specified by tools-metadata-url or image-metadata-url config settings). |
3578 | -2. The environment's cloud storage. |
3579 | -3. Provider specific locations (eg keystone endpoint if on Openstack). |
3580 | -4. A web location with metadata for supported public clouds (https://streams.canonical.com). |
3581 | - |
3582 | -Metadata may be inline signed, or unsigned. We indicate a metadata file is signed by using the '.sjson' extension. Each location in the path is first searched for signed metadata, and if none is found, unsigned metadata is attempted before moving onto the next path location. |
3583 | - |
3584 | -Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (eg Openstack) cloud requires metadata to be generated using tools which ship with Juju. |
3585 | - |
3586 | -### Image Metadata Contents |
3587 | - |
3588 | -Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows: |
3589 | - |
    com.ubuntu.cloud:server:<series_version>:<arch>

For example:

    com.ubuntu.cloud:server:14.04:amd64

Non-released images (eg beta, daily etc) have product ids like:

    com.ubuntu.cloud.daily:server:13.10:amd64
3593 | - |
3594 | -The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component): |
3595 | - |
    <path_url>
      |-streams
          |-v1
             |-index.(s)json
             |-product-foo.(s)json
             |-product-bar.(s)json
3597 | - |
3598 | -The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path values contained in the index file. |
3599 | - |
### Tools Metadata Contents

Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
3601 | - |
3602 | -"com.ubuntu.juju:<series_version>:<arch>" |
3603 | - |
3604 | -For example: |
3605 | - |
3606 | -"com.ubuntu.juju:12.04:amd64" |
3607 | - |
3608 | -The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component). In addition, tools tarballs which Juju needs to download are also expected. |
3609 | - |
    |-streams
    |  |-v1
    |     |-index.(s)json
    |     |-product-foo.(s)json
    |     |-product-bar.(s)json
    |-releases
       |-tools-abc.tar.gz
       |-tools-def.tar.gz
       |-tools-xyz.tar.gz
3611 | - |
3612 | -The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever is in the index/product files. |
3613 | - |
3614 | -### Configuration |
3615 | - |
3616 | -For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run. Even for supported public clouds where all required metadata is available, the user can put their own metadata in the search path to override what is provided by the cloud. |
3617 | - |
3618 | -#### User specified URLs |
3619 | - |
3620 | -These are initially specified in the environments.yaml file (and then subsequently copied to the jenv file when the environment is bootstrapped). For images, use "image-metadata-url"; for tools, use "tools-metadata-url". The URLs can point to a world readable container/bucket in the cloud, an address served by a http server, or even a shared directory which is accessible by all node instances running in the cloud. |
3621 | - |
Assume an Apache http server with base URL `https://juju-metadata`, providing access to information at `<base>/images` and `<base>/tools`. The Juju environment yaml file could have the following entries (one or both):
3623 | - |
    tools-metadata-url: https://juju-metadata/tools
    image-metadata-url: https://juju-metadata/images
3625 | - |
The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the form "file:///sharedpath".
3627 | - |
3628 | -#### Cloud storage |
3629 | - |
If no matching metadata is found in the user specified URL, the environment's cloud storage is searched. No user configuration is required here - all Juju environments are set up with cloud storage which is used to store state information, charms etc. Cloud storage setup is provider dependent; for Amazon and Openstack clouds, the storage is defined by the "control-bucket" value; for Azure, the "storage-account-name" value is relevant.
3631 | - |
3632 | -The (optional) directory structure inside the cloud storage is as follows: |
3633 | - |
    |-tools
    |  |-streams
    |     |-v1
    |  |-releases
    |-images
       |-streams
          |-v1
3635 | - |
3636 | -Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa. |
3637 | - |
3638 | -Note that if juju bootstrap is run with the `--upload-tools` option, the tools and metadata are placed according to the above structure. That's why the tools are then available for Juju to use. |
3639 | - |
3640 | -#### Provider specific storage |
3641 | - |
3642 | -Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be created by the cloud administrator. These are defined as follows: |
3643 | - |
juju-tools:
    the <path_url> value as described above in Tools Metadata Contents
product-streams:
    the <path_url> value as described above in Image Metadata Contents
3645 | - |
3646 | -Other providers may similarly be able to specify locations, though the implementation will vary. |
3647 | - |
#### Central web location

The web location https://streams.canonical.com is the default location used to search for image and tools metadata if no matches are found earlier in any of the above locations. No user configuration is required.
3649 | - |
3650 | -There are two main issues when deploying a private cloud: |
3651 | - |
3652 | -1. Image ids will be specific to the cloud. |
2. Often, outside internet access is blocked.
3654 | - |
3655 | -Issue 1 means that image id metadata needs to be generated and made available. |
3656 | - |
3657 | -Issue 2 means that tools need to be mirrored locally to make them accessible. |
3658 | - |
Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just mirror `https://streams.canonical.com/tools`. However, image metadata cannot simply be mirrored because the image ids are taken from the cloud storage provider, so this needs to be generated and validated using the commands described below.
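
One possible way to mirror the official tools (a sketch; the local target directory `/var/www/juju-metadata` is an assumption, and any web-readable location in the search path would do) is with wget:

```
wget --recursive --no-parent --no-host-directories \
     --directory-prefix=/var/www/juju-metadata \
     https://streams.canonical.com/tools/
```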
3660 | - |
3661 | -The available Juju metadata tools can be seen by using the help command: |
3662 | - |
    juju help metadata
3664 | - |
3665 | -The overall workflow is: |
3666 | - |
3667 | -- Generate image metadata |
3668 | -- Copy image metadata to somewhere in the metadata search path |
3669 | -- Optionally, mirror tools to somewhere in the metadata search path |
3670 | -- Optionally, configure tools-metadata-url and/or image-metadata-url |
3671 | - |
3672 | -#### Image metadata |
3673 | - |
3674 | -Generate image metadata using |
3675 | - |
    juju metadata generate-image -d <metadata_dir>
3677 | - |
3678 | -As a minimum, the above command needs to know the image id to use and a directory in which to write the files. |
3679 | - |
3680 | -Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an environment specified with the -e option). These parameters can also be overridden on the command line. |
3681 | - |
3682 | -The image metadata command can be run multiple times with different regions, series, architecture, and it will keep adding to the metadata files. Once all required image ids have been added, the index and product json files can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the `image-metadata-url` setting or the cloud's storage etc. |
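
For instance (a sketch; the `-i` and `-r` flags supply the image id and region, and the placeholder values must come from your cloud), two runs against the same directory accumulate both entries:

```
juju metadata generate-image -d ~/simplestreams -i <image_id_one> -r <region_one>
juju metadata generate-image -d ~/simplestreams -i <image_id_two> -r <region_two>
```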
3683 | - |
3684 | -Examples: |
3685 | - |
3686 | -1. image-metadata-url |
3687 | - |
- upload the contents of the metadata directory to `http://somelocation`
3689 | -- set image-metadata-url to `http://somelocation/images` |
3690 | - |
2. Cloud storage

- upload the contents of the metadata directory directly to the environment's cloud storage

If run without parameters, the validation command (`juju metadata validate-images`) will take all required details from the current Juju environment (or as specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc. can be specified on the command line to override the values in the environment config.
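
A minimal check (a sketch; it assumes the current environment supplies the region, series and architecture) is then:

```
juju metadata validate-images
```
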
3694 | -#### Tools metadata |
3695 | - |
Generally, tools and related metadata are mirrored from `https://streams.canonical.com/tools`. However, it is possible to manually generate metadata for a custom built tools tarball.
3697 | - |
First, create a tarball of the relevant tools and place it in a directory structured like this:
3699 | - |
3700 | -<tools_dir>/tools/releases/ |
3701 | - |
3702 | -Now generate relevant metadata for the tools by running the command: |
3703 | - |
    juju metadata generate-tools -d <tools_dir>
3705 | - |
Finally, the contents of `<tools_dir>` can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
3707 | - |
3708 | -Examples: |
3709 | - |
3710 | -1. tools-metadata-url |
3711 | - |
3712 | -- upload contents of the tools dir to `http://somelocation` |
3713 | -- set tools-metadata-url to `http://somelocation/tools` |
3714 | - |
3715 | -2. Cloud storage |
3716 | - |
- upload the contents of `<tools_dir>` directly to the environment's cloud storage
3718 | - |
3719 | -As with image metadata, the validation command is used to ensure tools are available for Juju to use: |
3720 | - |
    juju metadata validate-tools
3722 | - |
3723 | -The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or override values as required on the command line. See `juju help metadata validate-tools` for more details. |
3724 | - |
3725 | -##Appendix I - Using Tags |
3726 | -##Appendix II - Using the MAAS CLI |
3727 | -As well as the web interface, many tasks can be performed by accessing the MAAS API directly through the maas-cli command. This section details how to login with this tool and perform some common operations. |
3728 | - |
3729 | -###Logging in |
Before the API will accept any commands from maas-cli, you must first log in. To do this, you need the API key which can be found in the user interface.
3731 | - |
Log in to the web interface on your MAAS. Click on the username in the top right corner and select ‘Preferences’ from the menu which appears.
3733 | - |
![](maascli-prefs.png)
3735 | -A new page will load... |
3736 | - |
![](maascli-key.png)
3738 | -The very first item is a list of MAAS keys. One will have already been generated when the system was installed. It’s easiest to just select all the text, copy the key (it’s quite long!) and then paste it into the commandline. The format of the login command is: |
3739 | - |
3740 | -``` |
3741 | - maas-cli login <profile-name> <hostname> <key> |
3742 | -``` |
3743 | - |
3744 | -The profile created is an easy way of associating your credentials with any subsequent call to the API. So an example login might look like this: |
3745 | - |
3746 | -``` |
3747 | -maas-cli login maas http://10.98.0.13/MAAS/api/1.0 |
3748 | -AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2 |
3749 | -``` |
which creates the profile ‘maas’ and registers it with the given key at the specified API endpoint. If you omit the credentials, they will be prompted for in the console. It is also possible to use a hyphen, ‘-’, in place of the credentials. In this case a single line will be read from stdin, stripped of any whitespace and used as the credentials, which can be useful if you are developing scripts for specific tasks. If an empty string is passed instead of the credentials, the profile will be logged in anonymously (and consequently some of the API calls will not be available).
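
For example (a sketch, assuming the key has been saved to a file beforehand), the hyphen form reads the key from stdin:

```
cat ~/maas-api-key | maas-cli login maas http://10.98.0.13/MAAS/api/1.0 -
```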
3751 | - |
3752 | -### maas-cli commands |
The maas-cli command exposes the whole API, so you can do anything you can actually do with MAAS using this command. This leaves us with a vast number of options, which are more fully expressed in the complete [2][MAAS Documentation].
3754 | - |
3755 | -list: |
3756 | - lists the details [name url auth-key] of all the currently logged-in profiles. |
3757 | - |
3758 | -login <profile> <url> <key>: |
3759 | - Logs in to the MAAS controller API at the given URL, using the key provided and |
3760 | - associates this connection with the given profile name. |
3761 | - |
3762 | -logout <profile>: |
3763 | - Logs out from the given profile, flushing the stored credentials. |
3764 | - |
3765 | -refresh: |
  Refreshes the API descriptions of all the currently logged-in profiles. This may become necessary, for example, when upgrading the maas packages, to ensure the command-line options match the API.
3767 | - |
3768 | -### Useful examples |
3769 | - |
3770 | -Displays current status of nodes in the commissioning phase: |
3771 | -``` |
maas-cli maas nodes check-commissioning
3773 | -``` |
3774 | - |
3775 | -Accept and commission all discovered nodes: |
3776 | -``` |
3777 | -maas-cli maas nodes accept-all |
3778 | -``` |
3779 | - |
3780 | -List all known nodes: |
3781 | -``` |
3782 | -maas-cli maas nodes list |
3783 | -``` |
3784 | - |
3785 | -Filter the list using specific key/value pairs: |
3786 | -``` |
3787 | -maas-cli maas nodes list architecture="i386/generic" |
3788 | -``` |
3789 | - |
3790 | -Set the power parameters for an ipmi enabled node: |
3791 | -``` |
3792 | -maas-cli maas node update <system_id> \ |
3793 | - power_type="ipmi" \ |
3794 | - power_parameters_power_address=192.168.22.33 \ |
3795 | - power_parameters_power_user=root \ |
3796 | - power_parameters_power_pass=ubuntu; |
3797 | -``` |
3798 | -## Appendix III - Physical Zones |
3799 | - |
3800 | -To help you maximise fault-tolerance and performance of the services you deploy, MAAS administrators can define _physical zones_ (or just _zones_ for short), and assign nodes to them. When a user requests a node, they can ask for one that is in a specific zone, or one that is not in a specific zone. |
3801 | - |
3802 | -It's up to you as an administrator to decide what a physical zone should represent: it could be a server rack, a room, a data centre, machines attached to the same UPS, or a portion of your network. Zones are most useful when they represent portions of your infrastructure. But you could also use them simply to keep track of where your systems are located. |
3803 | - |
3804 | -Each node is in one and only one physical zone. Each MAAS instance ships with a default zone to which nodes are attached by default. If you do not need this feature, you can simply pretend it does not exist. |
3805 | - |
3806 | -### Applications |
3807 | - |
3808 | -Since you run your own MAAS, its physical zones give you more flexibility than those of a third-party hosted cloud service. That means that you get to design your zones and define what they mean. Below are some examples of how physical zones can help you get the most out of your MAAS. |
3809 | - |
3810 | -### Creating a Zone |
3811 | - |
Only administrators can create and manage zones. To create a physical zone in the web user interface, log in as an administrator and browse to the "Zones" section in the top bar. This will take you to the zones listing page. At the bottom of the page is a button for creating a new zone:
3813 | - |
![](add-zone.png)
3815 | - |
3816 | -Or to do it in the [_region-controller API_][#region-controller-api], POST your zone definition to the _"zones"_ endpoint. |
3817 | - |
3818 | -### Assigning Nodes to a Zone |
3819 | - |
3820 | -Once you have created one or more physical zones, you can set nodes' zones from the nodes listing page in the UI. Select the nodes for which you wish to set a zone, and choose "Set physical zone" from the "Bulk action" dropdown list near the top. A second dropdown list will appear, to let you select which zone you wish to set. Leave it blank to clear nodes' physical zones. Clicking "Go" will apply the change to the selected nodes. |
3821 | - |
3822 | -You can also set an individual node's zone on its "Edit node" page. Both ways are available in the API as well: edit an individual node through a request to the node's URI, or set the zone on multiple nodes at once by calling the operation on the endpoint. |
3823 | - |
3824 | -### Allocating a Node in a Zone |
3825 | - |
To allocate a node in a particular zone, call the `acquire` method in the [_region-controller API_][#region-controller-api] as before, but pass the `zone` parameter with the name of the zone. The method will allocate a node in that zone, or fail with an HTTP 409 ("conflict") error if the zone has no nodes available that match your request.
3827 | - |
Alternatively, you may want to request a node that is _not_ in a particular zone, or one that is not in any of several zones. To do that, pass the `not_in_zone` parameter to `acquire`. This parameter takes a list of zone names; the allocated node will not be in any of them. Again, if that leaves no nodes available that match your request, the call will return a "conflict" error.
3829 | - |
It is possible, though not usually useful, to combine the `zone` and `not_in_zone` parameters. If your choice for `zone` is also present in `not_in_zone`, no node will ever match your request. Or if it's not, then the `not_in_zone` values will not affect the result of the call at all.
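
As an illustration (a sketch using the ‘maas’ profile from Appendix II; the zone name is hypothetical):

```
# Allocate a node from zone1 only
maas-cli maas nodes acquire zone=zone1
# Allocate a node from any zone except zone1
maas-cli maas nodes acquire not_in_zone=zone1
```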
3831 | - |
3832 | |
3833 | === removed file 'Intro.md' |
3834 | --- Intro.md 2014-04-11 14:51:27 +0000 |
3835 | +++ Intro.md 1970-01-01 00:00:00 +0000 |
3836 | @@ -1,26 +0,0 @@ |
3837 | -#Ubuntu Cloud Documentation |
3838 | - |
3839 | -## Deploying Production Grade OpenStack with MAAS, Juju and Landscape |
3840 | - |
3841 | -This documentation has been created to describe best practice in deploying |
3842 | -a Production Grade installation of OpenStack using current Canonical |
3843 | -technologies, including bare metal provisioning using MAAS, service |
3844 | -orchestration with Juju and system management with Landscape. |
3845 | - |
3846 | -This documentation is divided into four main topics: |
3847 | - |
3848 | - 1. [Installing the MAAS Metal As A Service software](../installing-maas.html) |
3849 | - 2. [Installing Juju and configuring it to work with MAAS](../installing-juju.html) |
3850 | - 3. [Using Juju to deploy OpenStack](../installing-openstack.html) |
3851 | - 4. [Deploying Landscape to manage your OpenStack cloud](../installing-landscape) |
3852 | - |
3853 | -Once you have an up and running OpenStack deployment, you should also read |
3854 | -our [Administration Guide](../admin-intro.html) which details common tasks |
3855 | -for maintenance and scaling of your service. |
3856 | - |
3857 | - |
3858 | -## Legal notices |
3859 | - |
3860 | - |
3861 | - |
3862 | -![Canonical logo](./media/logo-canonical_no™-aubergine-hex.jpg) |
3863 | |
3864 | === removed file 'Logging-Juju.md' |
3865 | --- Logging-Juju.md 2014-04-02 16:18:10 +0000 |
3866 | +++ Logging-Juju.md 1970-01-01 00:00:00 +0000 |
3867 | @@ -1,24 +0,0 @@ |
3868 | -Title: Logging - Juju |
3869 | -Status: In Progress |
3870 | - |
3871 | -# Logging - Juju |
3872 | - |
3873 | -## Introduction |
3874 | - |
3875 | -**TODO** |
3876 | - |
3877 | -## Scope |
3878 | - |
3879 | -**TODO** |
3880 | - |
3881 | -## Connecting to rsyslogd |
3882 | - |
Juju already uses `rsyslogd` for the aggregation of all logs into one centralized log. The
3884 | -target of this logging is the file `/var/log/juju/all-machines.log`. You can directly |
3885 | -access it using the command |
3886 | - |
3887 | -```` |
3888 | -$ juju debug-log |
3889 | -```` |
3890 | - |
3891 | -**TODO** Describe a way to redirect this log to a central rsyslogd server. |
3892 | |
3893 | === removed file 'Logging-OpenStack.md' |
3894 | --- Logging-OpenStack.md 2014-04-02 16:18:10 +0000 |
3895 | +++ Logging-OpenStack.md 1970-01-01 00:00:00 +0000 |
3896 | @@ -1,92 +0,0 @@ |
3897 | -Title: Logging - OpenStack |
3898 | -Status: In Progress |
3899 | - |
3900 | -# Logging - OpenStack |
3901 | - |
3902 | -## Introduction |
3903 | - |
3904 | -**TODO** |
3905 | - |
3906 | -## Scope |
3907 | - |
3908 | -**TODO** |
3909 | - |
3910 | -## Connecting to rsyslogd |
3911 | - |
By default OpenStack writes its logging output to files in directories for each
component, like `/var/log/nova` or `/var/log/glance`. To use `rsyslogd`, the components
have to be configured to also log to `syslog`. When doing this, also configure each component
to log to a different syslog facility. This will help you to split the logs by
component on the central logging server. So ensure the following settings:
3917 | - |
3918 | -**/etc/nova/nova.conf:** |
3919 | - |
3920 | -```` |
3921 | -use_syslog=True |
3922 | -syslog_log_facility=LOG_LOCAL0 |
3923 | -```` |
3924 | - |
3925 | -**/etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:** |
3926 | - |
3927 | -```` |
3928 | -use_syslog=True |
3929 | -syslog_log_facility=LOG_LOCAL1 |
3930 | -```` |
3931 | - |
3932 | -**/etc/cinder/cinder.conf:** |
3933 | - |
3934 | -```` |
3935 | -use_syslog=True |
3936 | -syslog_log_facility=LOG_LOCAL2 |
3937 | -```` |
3938 | - |
3939 | -**/etc/keystone/keystone.conf:** |
3940 | - |
3941 | -```` |
3942 | -use_syslog=True |
3943 | -syslog_log_facility=LOG_LOCAL3 |
3944 | -```` |
3945 | - |
The Swift object storage already logs to syslog by default. So you can now tell the local
rsyslogd clients to pass the logged information to the logging server. You'll do this
by creating a `/etc/rsyslog.d/client.conf` containing a line like
3949 | - |
3950 | -```` |
*.* @192.168.1.10
3952 | -```` |
3953 | - |
where the IP address points to your rsyslogd server. It is best to choose a server that is
dedicated to this task only. On the server you've got to create the file `/etc/rsyslog.d/server.conf`
containing the settings
3957 | - |
3958 | -```` |
3959 | -# Enable UDP |
3960 | -$ModLoad imudp |
3961 | -# Listen on 192.168.1.10 only |
3962 | -$UDPServerAddress 192.168.1.10 |
3963 | -# Port 514 |
3964 | -$UDPServerRun 514 |
3965 | -# Create logging templates for nova |
3966 | -$template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log" |
3967 | -$template NovaAll,"/var/log/rsyslog/nova.log" |
3968 | -# Log everything else to syslog.log |
3969 | -$template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log" |
3970 | -*.* ?DynFile |
3971 | -# Log various openstack components to their own individual file |
3972 | -local0.* ?NovaFile |
3973 | -local0.* ?NovaAll |
3974 | -& ~ |
3975 | -```` |
3976 | - |
This example contains the settings for Nova only; the other OpenStack components
have to be added the same way. Using two templates per component, one containing the
`%HOSTNAME%` variable and one without it, enables a better splitting of the logged
data. Consider the two example nodes `alpha.example.com` and `bravo.example.com`.
They will write their logging into the files
3982 | - |
3983 | -- `/var/log/rsyslog/alpha.example.com/nova.log` - only the data of alpha, |
3984 | -- `/var/log/rsyslog/bravo.example.com/nova.log` - only the data of bravo, |
3985 | -- `/var/log/rsyslog/nova.log` - the combined data of both. |
3986 | - |
This allows a quick overview of all nodes as well as the focussed analysis of an
individual node.
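
The templates for the other components follow the same pattern. As an illustration (a sketch, using the LOG_LOCAL1 facility configured for Glance above), the corresponding server-side lines might look like:

````
# Create logging templates for glance (facility local1)
$template GlanceFile,"/var/log/rsyslog/%HOSTNAME%/glance.log"
$template GlanceAll,"/var/log/rsyslog/glance.log"
local1.* ?GlanceFile
local1.* ?GlanceAll
& ~
````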
3989 | |
3990 | === removed file 'Logging.md' |
3991 | --- Logging.md 2014-04-02 16:18:10 +0000 |
3992 | +++ Logging.md 1970-01-01 00:00:00 +0000 |
3993 | @@ -1,15 +0,0 @@ |
3994 | -Title: Logging |
3995 | -Status: In Progress |
3996 | - |
3997 | -# Logging |
3998 | - |
Keeping track of individual logs is a cumbersome job, even in an environment with only
a few computer systems. It is even worse in typical clouds with a large number of
nodes. Here the centralized approach using `rsyslogd` helps. It allows you to aggregate
the logging output of all systems in one place, where monitoring and analysis become
much simpler.
4004 | - |
4005 | -Ubuntu uses `rsyslogd` as the default logging service. Since it is natively able to send |
4006 | -logs to a remote location, you don't have to install anything extra to enable this feature, |
4007 | -just modify the configuration file. In doing this, consider running your logging over |
4008 | -a management network or using an encrypted VPN to avoid interception. |
4009 | |
4010 | === removed file 'Scaling-Ceph.md' |
4011 | --- Scaling-Ceph.md 2014-04-07 13:23:30 +0000 |
4012 | +++ Scaling-Ceph.md 1970-01-01 00:00:00 +0000 |
4013 | @@ -1,36 +0,0 @@ |
4014 | -Title: Scaling - Ceph |
4015 | -Status: In Progress |
4016 | - |
4017 | -# Scaling - Ceph |
4018 | - |
4019 | -## Introduction |
4020 | - |
Besides the redundancy for more safety and the higher performance gained by using
Ceph as the storage backend for OpenStack, the user also benefits from a simpler way
of scaling the storage as needs grow.
4024 | - |
4025 | -## Scope |
4026 | - |
4027 | -**TODO** |
4028 | - |
4029 | -## Scaling |
4030 | - |
The addition of Ceph nodes is done using the Juju `add-unit` command. By default
it adds only one node, but it is possible to pass the number of wanted nodes as an
argument. To add one more Ceph OSD Daemon node you simply call
4034 | - |
4035 | -``` |
juju add-unit ceph-osd
4037 | -``` |
4038 | - |
4039 | -Larger numbers of nodes can be added using the `-n` argument, e.g. 5 nodes |
4040 | -with |
4041 | - |
4042 | -``` |
juju add-unit -n 5 ceph-osd
4044 | -``` |
4045 | - |
**Attention:** Adding more nodes to Ceph leads to a redistribution of data
between the nodes. This can cause inefficiencies while it is in progress, so
it should be done in smaller steps.
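
To keep an eye on the redistribution while it happens, one option (a sketch, assuming the first Ceph monitor unit is named `ceph/0`) is to check the cluster status from that unit:

```
juju ssh ceph/0 "sudo ceph -s"
```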
4049 | - |
4050 | |
4051 | === removed file 'Upgrading-and-Patching-Juju.md' |
4052 | --- Upgrading-and-Patching-Juju.md 2014-04-02 16:18:10 +0000 |
4053 | +++ Upgrading-and-Patching-Juju.md 1970-01-01 00:00:00 +0000 |
4054 | @@ -1,45 +0,0 @@ |
4055 | -Title: Upgrading and Patching - Juju |
4056 | -Status: In Progress |
4057 | - |
4058 | -# Upgrading and Patching - Juju |
4059 | - |
4060 | -## Introduction |
4061 | - |
4062 | -**TODO** |
4063 | - |
4064 | -## Scope |
4065 | - |
4066 | -**TODO** |
4067 | - |
4068 | -## Upgrading |
4069 | - |
4070 | -The upgrade of a Juju environment is done using the Juju client and its command |
4071 | - |
4072 | -```` |
4073 | -$ juju upgrade-juju |
4074 | -```` |
4075 | - |
This command sets the version number for all Juju agents to run. By default this
is the most recent supported version compatible with the command-line tools version,
so ensure that you've upgraded the Juju client first.
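
Assuming the client was installed from the Ubuntu archives (the package is named `juju-core` on 14.04-era systems), upgrading the client might look like:

````
sudo apt-get update
sudo apt-get install juju-core
````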
4079 | - |
4080 | -When run without arguments, `upgrade-juju` will try to upgrade to the following |
4081 | -versions, in order of preference and depending on the current value of the |
4082 | -environment's `agent-version` setting: |
4083 | - |
4084 | -- The highest patch.build version of the *next* stable major.minor version. |
4085 | -- The highest patch.build version of the *current* major.minor version. |
4086 | - |
Both of these depend on the availability of the corresponding tools. On MAAS you've
4088 | -got to manage this yourself using the command |
4089 | - |
4090 | -```` |
4091 | -$ juju sync-tools |
4092 | -```` |
4093 | - |
4094 | -This copies the Juju tools tarball from the official tools store (located |
4095 | -at https://streams.canonical.com/juju) into your environment. |
4096 | - |
4097 | -## Patching |
4098 | - |
4099 | -**TODO** |
4100 | |
4101 | === removed file 'Upgrading-and-Patching-OpenStack.md' |
4102 | --- Upgrading-and-Patching-OpenStack.md 2014-04-02 16:18:10 +0000 |
4103 | +++ Upgrading-and-Patching-OpenStack.md 1970-01-01 00:00:00 +0000 |
4104 | @@ -1,83 +0,0 @@ |
4105 | -Title: Upgrading and Patching - OpenStack |
4106 | -Status: In Progress |
4107 | - |
4108 | -# Upgrading and Patching - OpenStack |
4109 | - |
4110 | -## Introduction |
4111 | - |
4112 | -**TODO** |
4113 | - |
4114 | -## Scope |
4115 | - |
4116 | -**TODO** |
4117 | - |
4118 | -## Upgrading |
4119 | - |
Upgrading an OpenStack cluster in one big step is an approach that requires additional
hardware to set up an upgraded cloud beside the production one, and leads to a longer
outage while your cloud is in read-only mode, the state is transferred to the new
one and the environments are switched. So the preferred way of upgrading an OpenStack
cloud is the rolling upgrade of each component of the system, piece by piece.
4125 | - |
Here you can choose between in-place and side-by-side upgrades. But the first one needs
to shut down the respective component while you're performing its upgrade. Additionally you
may have trouble in case of a rollback. To avoid this, the side-by-side upgrade is
the preferred way here.
4130 | - |
4131 | -Before starting the upgrade itself you should |
4132 | - |
- Perform some "cleaning" of the environment to ensure a consistent state; for
  example, instances not fully purged from the system after deletion may cause
  indeterminate behavior
4136 | -- Read the release notes and documentation |
4137 | -- Find incompatibilities between your versions |
4138 | - |
4139 | -The upgrade tasks here follow the same procedure for each component: |
4140 | - |
4141 | -1. Configure the new worker |
4142 | -1. Turn off the current worker; during this time hide the downtime using a message |
4143 | - queue or a load balancer |
1. Take a backup of the old worker, as described earlier, for a rollback
4145 | -1. Copy the state of the current to the new worker |
4146 | -1. Start up the new worker |
4147 | - |
Now repeat these steps for each worker in an appropriate order. In case of a problem it
should be easy to roll back as long as the former worker stays untouched. This is,
besides the shorter downtime, the most important advantage of the side-by-side upgrade.
4151 | - |
4152 | -The following order for service upgrades seems the most successful: |
4153 | - |
4154 | -1. Upgrade the OpenStack Identity Service (Keystone). |
4155 | -1. Upgrade the OpenStack Image Service (Glance). |
4156 | -1. Upgrade OpenStack Compute (Nova), including networking components. |
4157 | -1. Upgrade OpenStack Block Storage (Cinder). |
4158 | -1. Upgrade the OpenStack dashboard. |
4159 | - |
These steps look very easy, but they still form a complex procedure depending on your cloud
configuration. So we recommend having a testing environment with a near-identical
architecture to your production system. This doesn't mean that you should use the same
sizes and hardware, which would be best but expensive. But there are some ways to reduce
the cost.
4165 | - |
4166 | -- Use your own cloud. The simplest place to start testing the next version of OpenStack |
4167 | - is by setting up a new environment inside your own cloud. This may seem odd—especially |
4168 | - the double virtualisation used in running compute nodes—but it's a sure way to very |
4169 | - quickly test your configuration. |
4170 | -- Use a public cloud. Especially because your own cloud is unlikely to have sufficient |
4171 | - space to scale test to the level of the entire cloud, consider using a public cloud |
4172 | - to test the scalability limits of your cloud controller configuration. Most public |
4173 | - clouds bill by the hour, which means it can be inexpensive to perform even a test |
4174 | - with many nodes. |
4175 | -- Make another storage endpoint on the same system. If you use an external storage plug-in |
4176 | - or shared file system with your cloud, in many cases it's possible to test that it |
4177 | - works by creating a second share or endpoint. This will enable you to test the system |
4178 | - before entrusting the new version onto your storage. |
4179 | -- Watch the network. Even at smaller-scale testing, it should be possible to determine |
4180 | - whether something is going horribly wrong in intercomponent communication if you |
4181 | - look at the network packets and see too many. |
4182 | - |
4183 | -**TODO** Add more concrete description here. |
4184 | - |
4185 | -## Patching |
4186 | - |
4187 | -**TODO** |
4188 | |
4189 | === removed directory 'build' |
4190 | === removed directory 'build/epub' |
4191 | === removed directory 'build/html' |
4192 | === removed directory 'build/pdf' |
4193 | === removed file 'installing-openstack-outline.md' |
4194 | --- installing-openstack-outline.md 2014-04-11 14:51:27 +0000 |
4195 | +++ installing-openstack-outline.md 1970-01-01 00:00:00 +0000 |
4196 | @@ -1,395 +0,0 @@ |
Title: Installing OpenStack
4198 | - |
4199 | -# Installing OpenStack |
4200 | - |
4201 | -![Openstack](../media/openstack.png) |
4202 | - |
4203 | -##Introduction |
4204 | - |
4205 | -OpenStack is a versatile, open source cloud environment equally suited to serving up public, private or hybrid clouds. Canonical is a Platinum Member of the OpenStack foundation and has been involved with the OpenStack project since its inception; the software covered in this document has been developed with the intention of providing a streamlined way to deploy and manage OpenStack installations. |
4206 | - |
4207 | -### Scope of this documentation |
4208 | - |
4209 | -The OpenStack platform is powerful and its uses diverse. This section of documentation |
4210 | -is primarily concerned with deploying a 'standard' running OpenStack system using, but not limited to, Canonical components such as MAAS, Juju and Ubuntu. Where appropriate other methods and software will be mentioned. |
4211 | - |
4212 | -### Assumptions |
4213 | - |
4214 | -1. Use of MAAS |
4215 | - This document is written to provide instructions on how to deploy OpenStack using MAAS for hardware provisioning. If you are not deploying directly on hardware, this method will still work, with a few alterations, assuming you have a properly configured Juju environment. The main difference will be that you will have to provide different configuration options depending on the network configuration. |
4216 | - |
4217 | -2. Use of Juju |
4218 | - This document assumes an up to date, stable release version of Juju. |
4219 | - |
4220 | -3. Local network configuration |
4221 | - This document assumes that you have an adequate local network configuration, including separate interfaces for access to the OpenStack cloud. Ideal networks are laid out in the [MAAS][MAAS documentation for OpenStack] |
4222 | - |
4223 | -## Planning an installation |
4224 | - |
Before deploying any services, it is very useful to take stock of the resources available and how they are to be used. OpenStack comprises a number of interrelated services (Nova, Swift, etc) which each have differing demands in terms of hosts. For example, the Swift service, which provides object storage, has different requirements than the Nova service, which provides compute resources.
4226 | - |
4227 | -The minimum requirements for each service and recommendations are laid out in the official [oog][OpenStack Operations Guide] which is available (free) in HTML or various downloadable formats. For guidance, the following minimums are recommended for Ubuntu Cloud: |
4228 | - |
4229 | -[insert minimum hardware spec] |
4230 | - |
4231 | - |
4232 | - |
The recommended composition of nodes for deploying OpenStack with MAAS and Juju is that all nodes in the system should be capable of running *ANY* of the services. This is best practice for the robustness of the system: should any physical node fail, another can be repurposed to take its place. This obviously extends to any hardware requirements such as extra network interfaces.
4234 | - |
If for reasons of economy or otherwise you choose to use different configurations of hardware, you should note that your ability to overcome hardware failure will be reduced. It will also be necessary to target deployments to specific nodes - see the section in the MAAS documentation on tags [MAAS tags].
4236 | - |
4237 | - |
4238 | -###Create the OpenStack configuration file |
4239 | - |
4240 | -We will be using Juju charms to deploy the component parts of OpenStack. Each charm encapsulates everything required to set up a particular service. However, the individual services have many configuration options, some of which we will want to change. |
4241 | - |
To make this task easier and more reproducible, we will create a separate configuration file with the relevant options for all the services. This is written in a standard YAML format.
4243 | - |
4244 | -You can download the [openstack-config.yaml] file we will be using from here. It is also reproduced below: |
4245 | - |
4246 | -``` |
4247 | -keystone: |
4248 | - admin-password: openstack |
4249 | - debug: 'true' |
4250 | - log-level: DEBUG |
4251 | -nova-cloud-controller: |
4252 | - network-manager: 'Neutron' |
4253 | - quantum-security-groups: 'yes' |
4254 | - neutron-external-network: Public_Network |
4255 | -nova-compute: |
4256 | - enable-live-migration: 'True' |
4257 | - migration-auth-type: "none" |
4258 | - virt-type: kvm |
4259 | - #virt-type: lxc |
4260 | - enable-resize: 'True' |
4261 | -quantum-gateway: |
4262 | - ext-port: 'eth1' |
4263 | - plugin: ovs |
4264 | -glance: |
4265 | - ceph-osd-replication-count: 3 |
4266 | -cinder: |
4267 | - block-device: None |
4268 | - ceph-osd-replication-count: 3 |
4269 | - overwrite: "true" |
4270 | - glance-api-version: 2 |
4271 | -ceph: |
4272 | - fsid: a51ce9ea-35cd-4639-9b5e-668625d3c1d8 |
4273 | - monitor-secret: AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA== |
4274 | - osd-devices: /dev/sdb |
4275 | - osd-reformat: 'True' |
4276 | -``` |
4277 | - |
4278 | -For all services, we can configure the `openstack-origin` to point to an install source. In this case, we will rely on the default, which will point to the relevant sources for the Ubuntu 14.04 LTS Trusty release. Further configuration for each service is explained below: |
4279 | - |
4280 | -####keystone |
admin-password:
4282 | - You should set a memorable password here to be able to access OpenStack when it is deployed |
4283 | - |
4284 | -debug: |
  It is useful to set this to 'true' initially, to monitor the setup. This will produce more verbose messaging.
4286 | - |
4287 | -log-level: |
4288 | - Similarly, setting the log-level to DEBUG means that more verbose logs can be generated. These options can be changed once the system is set up and running normally. |
4289 | - |
4290 | -####nova-cloud-controller |
4291 | - |
network-manager:
  'Neutron' - Other options are now deprecated.
4294 | - |
4295 | -quantum-security-groups: |
4296 | - 'yes' |
4297 | - |
4298 | -neutron-external-network: |
  Public_Network - This is an interface we will use for allowing access to the cloud, and will be defined later.
4300 | - |
4301 | -####nova-compute |
4302 | -enable-live-migration: |
4303 | - We have set this to 'True' |
4304 | - |
4305 | -migration-auth-type: |
4306 | - "none" |
4307 | - |
4308 | -virt-type: |
4309 | - kvm |
4310 | - |
4311 | -enable-resize: |
4312 | - 'True' |
4313 | - |
4314 | -####quantum-gateway |
ext-port:
  This is where we specify the hardware for the public network. Use 'eth1' or the relevant interface for your hardware.

plugin:
  ovs
4318 | - |
4319 | - |
4320 | -####glance |
4321 | - |
4322 | - ceph-osd-replication-count: 3 |
4323 | - |
4324 | -####cinder |
4325 | - openstack-origin: cloud:trusty-icehouse/updates |
4326 | - block-device: None |
4327 | - ceph-osd-replication-count: 3 |
4328 | - overwrite: "true" |
4329 | - glance-api-version: 2 |
4330 | - |
4331 | -####ceph |
4332 | - |
4333 | -fsid: |
4334 | - The fsid is simply a unique identifier. You can generate a suitable value by running `uuidgen` which should return a value which looks like: a51ce9ea-35cd-4639-9b5e-668625d3c1d8 |
4335 | - |
4336 | -monitor-secret: |
  The monitor secret is a secret string used to authenticate access. There is advice on how to generate a suitable secure secret at [ceph][the ceph website]. A typical value would be `AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==` (one way to generate such a value is sketched after these options).
4338 | - |
osd-devices:
  This should point (in order of preference) to a device, partition or filename. In this case we will assume secondary device level storage located at `/dev/sdb`.
4341 | - |
4342 | -osd-reformat: |
4343 | - We will set this to 'True', allowing ceph to reformat the drive on provisioning. |
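
Returning to the `monitor-secret` mentioned above: one way to generate a suitable value (a sketch, assuming the `ceph-common` package is installed to provide the `ceph-authtool` utility) is:

```
# Print a freshly generated monitor key to stdout
ceph-authtool /dev/stdout --name=mon. --gen-key
```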
4344 | - |
4345 | - |
4346 | -##Deploying OpenStack with Juju |
4347 | -Now that the configuration is defined, we can use Juju to deploy and relate the services. |
4348 | - |
4349 | -###Initialising Juju |
Juju requires a minimal amount of setup. Here we assume it has already been configured to work with your MAAS cluster (see the [juju_install][Juju Install Guide] for more information on this).
4351 | - |
4352 | -Firstly, we need to fetch images and tools that Juju will use: |
4353 | -``` |
4354 | -juju sync-tools --debug |
4355 | -``` |
4356 | -Then we can create the bootstrap instance: |
4357 | - |
4358 | -``` |
4359 | -juju bootstrap --upload-tools --debug |
4360 | -``` |
4361 | -We use the upload-tools switch to use the local versions of the tools which we just fetched. The debug switch will give verbose output which can be useful. This process may take a few minutes, as Juju is creating an instance and installing the tools. When it has finished, you can check the status of the system with the command: |
4362 | -``` |
4363 | -juju status |
4364 | -``` |
4365 | -This should return something like: |
4366 | -``` |
4367 | ----------- example |
4368 | -``` |
4369 | -### Deploy the OpenStack Charms |
4370 | - |
4371 | -Now that the Juju bootstrap node is up and running we can deploy the services required to make our OpenStack installation. To configure these services properly as they are deployed, we will make use of the configuration file we defined earlier, by passing it along with the `--config` switch with each deploy command. Substitute in the name and path of your config file if different. |
4372 | - |
It is useful but not essential to deploy the services in the order below. It is also highly recommended to open an additional terminal window and run the command `juju debug-log`. This will output the logs of all the services as they run, and can be useful for troubleshooting.
4374 | - |
4375 | -It is also recommended to run a `juju status` command periodically, to check that each service has been installed and is running properly. If you see any errors, please consult the [troubleshooting][troubleshooting section below]. |
4376 | - |
4377 | -``` |
4378 | -juju deploy --to=0 juju-gui |
4379 | -juju deploy rabbitmq-server |
4380 | -juju deploy mysql |
4381 | -juju deploy --config openstack-config.yaml openstack-dashboard |
4382 | -juju deploy --config openstack-config.yaml keystone |
4383 | -juju deploy --config openstack-config.yaml ceph -n 3 |
4384 | -juju deploy --config openstack-config.yaml nova-compute -n 3 |
4385 | -juju deploy --config openstack-config.yaml quantum-gateway |
4386 | -juju deploy --config openstack-config.yaml cinder |
4387 | -juju deploy --config openstack-config.yaml nova-cloud-controller |
4388 | -juju deploy --config openstack-config.yaml glance |
4389 | -juju deploy --config openstack-config.yaml ceph-radosgw |
4390 | -``` |
4391 | - |
4392 | - |
4393 | -### Add relations between the OpenStack services |
4394 | - |
4395 | -Although the services are now deployed, they are not yet connected together. Each service currently exists in isolation. We use the `juju add-relation` command to make them aware of each other and set up any relevant connections and protocols. This extra configuration is taken care of by the individual charms themselves.
4396 | - |
4397 | - |
4398 | -We should start adding relations between charms by setting up the Keystone authorization service and its database, as this will be needed by many of the other connections: |
4399 | - |
4400 | -```
     | -juju add-relation keystone mysql
     | -```
4401 | - |
4402 | -We wait until the relation is established. After it finishes, check it with `juju status`:
4403 | - |
4404 | -``` |
4405 | -juju status mysql |
4406 | -juju status keystone |
4407 | -``` |
4408 | - |
4409 | -It can take a few moments for this service to settle. Although it is certainly possible to continue adding relations (Juju manages a queue for pending actions), doing so can be counterproductive in terms of the overall time taken, as many of the relations involve the same services.
4410 | -The following relations also need to be made: |
4411 | -``` |
4412 | -juju add-relation nova-cloud-controller mysql |
4413 | -juju add-relation nova-cloud-controller rabbitmq-server |
4414 | -juju add-relation nova-cloud-controller glance |
4415 | -juju add-relation nova-cloud-controller keystone |
4416 | -juju add-relation nova-compute mysql |
4417 | -juju add-relation nova-compute rabbitmq-server |
4418 | -juju add-relation nova-compute glance |
4419 | -juju add-relation nova-compute nova-cloud-controller |
4420 | -juju add-relation glance mysql |
4421 | -juju add-relation glance keystone |
4422 | -juju add-relation cinder keystone |
4423 | -juju add-relation cinder mysql |
4424 | -juju add-relation cinder rabbitmq-server |
4425 | -juju add-relation cinder nova-cloud-controller |
4426 | -juju add-relation openstack-dashboard keystone |
4427 | -juju add-relation glance ceph
4428 | -juju add-relation cinder ceph
     | -juju add-relation ceph-radosgw ceph
     | -juju add-relation ceph-radosgw keystone
     | -juju add-relation quantum-gateway mysql
     | -juju add-relation quantum-gateway rabbitmq-server
     | -juju add-relation quantum-gateway nova-cloud-controller
4429 | -``` |
4430 | -Finally, the output of `juju status` should show all the relations as complete. The OpenStack cloud is now running, but it needs to be populated with some additional components before it is ready for use.
4431 | - |
4432 | - |
4433 | - |
4434 | - |
4435 | -##Preparing OpenStack for use |
4436 | - |
4437 | -###Configuring access to OpenStack
4438 | - |
4439 | - |
4440 | - |
4441 | -The configuration data for OpenStack can be fetched by reading the configuration file generated by the Keystone service. You can also copy this information by logging in to the Horizon (OpenStack Dashboard) service and examining the configuration there. However, we actually need only a few pieces of information, which the following bash script extracts:
4442 | - |
4443 | -``` |
4444 | -#!/bin/bash |
4445 | - |
4446 | -set -e |
4447 | - |
     | -# Resolve the keystone unit's public address to an IP
4448 | -KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | xargs host | grep -v alias | awk '{ print $4 }'`
     | -# Read the admin token from the keystone service's configuration
4449 | -KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 "sudo cat /etc/keystone/keystone.conf | grep admin_token" | sed -e '/^M/d' -e 's/.$//' | awk '{ print $3 }'`
4450 | - |
4451 | -echo "Keystone IP: [${KEYSTONE_IP}]" |
4452 | -echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]" |
4453 | - |
     | -# Write a credentials file for the nova and glance clients
4454 | -cat << EOF > ./nova.rc
4455 | -export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/ |
4456 | -export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN} |
4457 | -export OS_AUTH_URL=http://${KEYSTONE_IP}:35357/v2.0/ |
4458 | -export OS_USERNAME=admin |
4459 | -export OS_PASSWORD=openstack |
4460 | -export OS_TENANT_NAME=admin |
4461 | -EOF |
4462 | - |
     | -# Copy the credentials file to the nova-cloud-controller instance
4463 | -juju scp ./nova.rc nova-cloud-controller/0:~
4464 | -``` |
4465 | -This script extracts the required information and then copies the resulting `nova.rc` file to the instance running the nova-cloud-controller.
4466 | -Before running any nova or glance commands, we load the file we just created:
4467 | - |
4468 | -``` |
4469 | -$ source ./nova.rc |
4470 | -$ nova endpoints |
4471 | -``` |
4472 | - |
4473 | -At this point the output of `nova endpoints` should list all of the available OpenStack endpoints.
4474 | - |
4475 | -### Install the Ubuntu Cloud Image |
4476 | - |
4477 | -In order for OpenStack to create instances in its cloud, it needs to have access to relevant images:
     | -```
4478 | -$ mkdir ~/iso
4479 | -$ cd ~/iso
4480 | -$ wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
     | -```
4481 | - |
4482 | -###Import the Ubuntu Cloud Image into Glance |
4483 | -!!!Note: the `glance` command is provided by the package glance-client, which may need to be installed on the machine you plan to run the command from
4484 | - |
4485 | -``` |
4486 | -apt-get install glance-client |
4487 | -glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img |
4488 | -``` |
4489 | -###Create OpenStack private network |
4490 | -Note: nova-manage can be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the node we run the following command: |
4491 | - |
4492 | -``` |
4493 | -juju ssh nova-cloud-controller/0 |
4494 | - |
4495 | -sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=T --bridge_interface=eth0 --bridge=br100 |
4496 | -``` |
4497 | - |
4498 | -To make sure that we have created the network we can now run the following command: |
4499 | - |
4500 | -``` |
4501 | -sudo nova-manage network list |
4502 | -``` |
4503 | - |
4504 | -### Create OpenStack public network |
4505 | -``` |
4506 | -sudo nova-manage floating create --ip_range=1.1.21.64/26 |
4507 | -sudo nova-manage floating list |
4508 | -``` |
4509 | -Allow ping and ssh access by adding rules for them to the default security group.
4510 | -Note: The following commands are run from a machine which has the python-novaclient package installed, within a session where the nova.rc file created above has been loaded.
4511 | - |
4512 | -``` |
4513 | -nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 |
4514 | -nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 |
4515 | -``` |
4516 | - |
4517 | -###Create and register the ssh keys in OpenStack |
4518 | -Generate a default keypair:
4519 | -``` |
4520 | -ssh-keygen -t rsa -f ~/.ssh/admin-key |
4521 | -``` |
4522 | -###Copy the public key into Nova |
4523 | -We will name it admin-key: |
4524 | -Note: In the Precise version of python-novaclient the option is --pub_key rather than --pub-key
4525 | - |
4526 | -``` |
4527 | -nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key |
4528 | -``` |
4529 | -And make sure it’s been successfully created: |
4530 | -``` |
4531 | -nova keypair-list |
4532 | -``` |
4533 | - |
4534 | -###Create a test instance |
4535 | -We created an image with glance before. Now we need the image ID to start our first instance. The ID can be found with this command: |
4536 | -``` |
4537 | -nova image-list |
4538 | -``` |
4539 | - |
4540 | -Note: we can also use the command `glance image-list`
4541 | -###Boot the instance
4542 | - |
4543 | -``` |
4544 | -nova boot --flavor=m1.small --image=<image_id_from_glance_index> --key-name admin-key test-server1
4545 | -``` |
4546 | - |
4547 | -###Add a floating IP to the new instance |
4548 | -First we allocate a floating IP from the ones we created above: |
4549 | - |
4550 | -``` |
4551 | -nova floating-ip-create |
4552 | -``` |
4553 | - |
4554 | -Then we associate the floating IP obtained above with the new instance:
4555 | - |
4556 | -``` |
4557 | -nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65 |
4558 | -``` |
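     | -
     | -To confirm the address has been attached, the instance listing should now show both a fixed and a floating IP:
     | -
     | -```
     | -nova list
     | -```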
4559 | - |
4560 | - |
4561 | -### Create and attach a Cinder volume to the instance |
4562 | -Note: All these steps can also be done through the Horizon web UI
4563 | - |
4564 | -We make sure that cinder works by creating a 1GB volume and attaching it to the VM: |
4565 | - |
4566 | -``` |
4567 | -cinder create --display_name test-cinder1 1 |
4568 | -``` |
4569 | - |
4570 | -Get the ID of the volume with cinder list: |
4571 | - |
4572 | -``` |
4573 | -cinder list |
4574 | -``` |
4575 | - |
4576 | -Attach it to the VM as `vdb`:
4577 | - |
4578 | -``` |
4579 | -nova volume-attach test-server1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb |
4580 | -``` |
4581 | - |
4582 | -Now we should be able to ssh into the VM test-server1 from a machine holding the private key we created above, and see that `vdb` appears in `/proc/partitions`.
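     | -
     | -As a quick check (assuming the floating IP associated above and the default 'ubuntu' user of the Ubuntu cloud image):
     | -
     | -```
     | -# log in with the keypair registered earlier; substitute your instance's floating IP
     | -ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65
     | -# once logged in, the attached volume should be visible
     | -cat /proc/partitions
     | -```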
4583 | - |
4584 | - |
4585 | - |
4586 | - |
4587 | -[troubleshooting]
4588 | -[oog]: http://docs.openstack.org/ops/
4589 | -[MAAS tags]
4590 | -[openstack-config.yaml]
4591 | -[ceph]: http://ceph.com/docs/master/dev/mon-bootstrap/
4592 | |
4593 | === removed file 'landcsape.md' |
4594 | --- landcsape.md 2014-03-24 13:49:58 +0000 |
4595 | +++ landcsape.md 1970-01-01 00:00:00 +0000 |
4596 | @@ -1,1297 +0,0 @@ |
4597 | -#Managing OpenStack with Landscape |
4598 | - |
4599 | -##About Landscape |
4600 | -Landscape is a system management tool designed to let you easily manage multiple Ubuntu systems - up to 40,000 with a single Landscape instance. From a single dashboard you can apply package updates and perform other administrative tasks on many machines. You can categorize machines by group, and manage each group separately. You can make changes to targeted machines even when they are offline; the changes will be applied next time they start. Landscape lets you create scripts to automate routine work such as starting and stopping services and performing backups. It lets you use both common Ubuntu repositories and any custom repositories you may create for your own computers. Landscape is particularly adept at security updates; it can highlight newly available packages that involve security fixes so they can be applied quickly. You can use Landscape as a hosted service as part of Ubuntu Advantage, or run it on premises via Landscape Dedicated Server. |
4601 | - |
4602 | -##Ubuntu Advantage |
4603 | -Ubuntu Advantage comprises systems management tools, technical support, access to online resources and support engineers, training, and legal assurance to keep organizations on top of their Ubuntu server, desktop, and cloud deployments. Advantage provides subscriptions at various support levels to help organizations maintain the level of support they need. |
4604 | - |
4605 | -##Concepts |
4606 | - |
4607 | -###Tags |
4608 | - |
4609 | - |
4610 | -Landscape lets you group multiple computers by applying tags to them. |
4611 | -You can group computers using any set of characteristics; architecture |
4612 | -and location might be two logical tagging schemes. Tag names may use any |
4613 | -combination of letters, numbers, and dashes. Each computer can be |
4614 | -associated with multiple tags. There is no menu choice for tags; rather, |
4615 | -you can select multiple computers under the COMPUTERS menu and apply or |
4616 | -remove one or more tags to all the ones you select on the INFO screen. |
4617 | -If you want to specify more than one tag at a time for your selected |
4618 | -computers, separate the tags by spaces. |
4619 | - |
4620 | -###Packages |
4621 | - |
4622 | -In Linux, a package is a group of related files for an application that |
4623 | -make it easy to install, upgrade, and remove the application. You can |
4624 | -manage packages from the PACKAGES menu under COMPUTERS. |
4625 | - |
4626 | -###Repositories |
4627 | - |
4628 | -Linux distributions like Ubuntu use repositories to hold packages you |
4629 | -can install on managed computers. While Ubuntu has [several |
4630 | -repositories](https://help.ubuntu.com/community/Repositories/Ubuntu/) |
4631 | -that anyone can access, you can also maintain your own repositories on |
4632 | -your network. This can be useful when you want to maintain packages with |
4633 | -different versions from those in the community repositories, or if |
4634 | -you've packaged in-house software for installation. Landscape's [12.09
4635 | -release |
4636 | -notes](https://help.landscape.canonical.com/LDS/ReleaseNotes12.09#Repository_Management) |
4637 | -contain a quick tutorial about repository management. |
4638 | - |
4639 | -###Upgrade profiles |
4640 | - |
4641 | -An upgrade profile defines a schedule for the times when upgrades are to |
4642 | -be automatically installed on the machines associated with a specific |
4643 | -access group. You can associate zero or more computers with each upgrade |
4644 | -profile via tags to install packages on those computers. You can also |
4645 | -associate an upgrade profile with an access group, which limits its use |
4646 | -to only computers within the specified access group. You can manage |
4647 | -upgrade profiles from the UPGRADE PROFILES link in the PROFILES choice |
4648 | -under your account. |
4649 | - |
4650 | -###Package profiles |
4651 | - |
4652 | -A package profile, or meta-package, comprises a set of one or more |
4653 | -packages, including their dependencies and conflicts (generally called |
4654 | -constraints), that you can manage as a group. Package profiles specify |
4655 | -sets of packages that associated systems should always get, or never |
4656 | -get. You can associate zero or more computers with each package profile |
4657 | -via tags to install packages on those computers. You can also associate |
4658 | -a package profile with an access group, which limits its use to only |
4659 | -computers within the specified access group. You can manage package |
4660 | -profiles from the Package Profiles link in the PROFILES menu under your |
4661 | -account. |
4662 | - |
4663 | -###Removal profiles |
4664 | - |
4665 | -A removal profile defines a maximum number of days that a computer can |
4666 | -go without exchanging data with the Landscape server before it is |
4667 | -automatically removed. If more days pass than the profile's "Days |
4668 | -without exchange", that computer will automatically be removed and the |
4669 | -license seat it held will be released. This helps Landscape keep license |
4670 | -seats open and ensure Landscape is not tracking stale or retired |
4671 | -computer data for long periods of time. You can associate zero or more |
4672 | -computers with each removal profile via tags to ensure those computers |
4673 | -are governed by this removal profile. You can also associate a removal |
4674 | -profile with an access group, which limits its use to only computers |
4675 | -within the specified access group. You can manage removal profiles from |
4676 | -the REMOVAL PROFILES link in the PROFILES choice under your account. |
4677 | - |
4678 | -Scripts |
4679 | -------- |
4680 | - |
4681 | -Landscape lets you run scripts on the computers you manage in your |
4682 | -account. The scripts may be in any language, as long as an interpreter |
4683 | -for that language is present on the computers on which they are to run. |
4684 | -You can maintain a library of scripts for common tasks. You can manage |
4685 | -scripts from the STORED SCRIPTS menu under your account, and run them |
4686 | -against computers from the SCRIPTS menu under COMPUTERS. |
4687 | - |
4688 | -Administrators |
4689 | --------------- |
4690 | - |
4691 | -Administrators are people who are authorized to manage computers using |
4692 | -Landscape. You can manage administrators from the ADMINISTRATORS menu |
4693 | -under your account. |
4694 | - |
4695 | -Access Groups |
4696 | -------------- |
4697 | - |
4698 | -Landscape lets administrators limit administrative rights on computers |
4699 | -by assigning them to logical groupings called access groups. Each |
4700 | -computer can be in only one access group. Typical access groups might be |
4701 | -constructed around organizational units or departments, locations, or
4702 | -hardware architecture. You can manage access groups from the ACCESS |
4703 | -GROUPS menu under your account; read about [how to create access |
4704 | -groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#creatingaccessgroups "Creating access groups"), |
4705 | -[add computers to access |
4706 | -groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#addingtoaccessgroups "Adding computers to access groups"), |
4707 | -and [associate administrators with access |
4708 | -groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#associatingadmins "Associating roles with access groups"). |
4709 | -It is good policy to come up with and document a naming convention for |
4710 | -access groups before you deploy Landscape, so that all administrators |
4711 | -understand what constitutes an acceptable logical grouping for your |
4712 | -organization. |
4713 | - |
4714 | -Roles |
4715 | ------ |
4716 | - |
4717 | -For each access group, you can assign management privileges to |
4718 | -administrators via the use of roles. Administrators may be associated |
4719 | -with multiple roles, and roles may be associated with many access |
4720 | -groups. You can manage roles from the ROLES menu under your account. |
4721 | - |
4722 | -Alerts |
4723 | ------- |
4724 | - |
4725 | -Landscape uses alerts to notify administrators of conditions that |
4726 | -require attention. You can manage alerts from the ALERTS menu under your |
4727 | -account. |
4728 | - |
4729 | -Provisioning |
4730 | ------------- |
4731 | - |
4732 | -Landscape lets you provision new computers starting with bare hardware - |
4733 | -what Canonical calls metal as a service. With MAAS, you provision new |
4734 | -hardware only as you need it, just as you would bring new cloud |
4735 | -instances online. [The Ubuntu wiki explains how to set up |
4736 | -MAAS](https://wiki.ubuntu.com/ServerTeam/MAAS/). |
4737 | - |
4738 | -You can provision one or more new computers from the PROVISIONING menu |
4739 | -under your account. |
4740 | - |
4741 | - |
4742 | -##Managing Landscape |
4743 | ------------------- |
4744 | - |
4745 | - |
4746 | -Prerequisites |
4747 | -------------- |
4748 | - |
4749 | -You can install Landscape Dedicated Server (LDS) on any server with a |
4750 | -dual-core processor running at 2.0GHz or higher, at least 4GB of RAM, |
4751 | -and 5GB of disk space. The operating system must be Ubuntu Server 12.04 |
4752 | -LTS x86\_64 or higher. You must also have PostgreSQL installed and |
4753 | -network ports 80/tcp (http) and 443/tcp (https) open. You can optionally |
4754 | -open port 22/tcp (ssh) as well for general server maintenance. |
4755 | - |
4756 | -Installing |
4757 | ----------- |
4758 | - |
4759 | -Refer to the [Recommended |
4760 | -Deployment](https://help.landscape.canonical.com/LDS/RecommendedDeployment) |
4761 | -guide in the Landscape wiki for all the information you need to install, |
4762 | -configure, and start Landscape and the dependent services it relies on. |
4763 | - |
4764 | -Upgrading Landscape |
4765 | -------------------- |
4766 | - |
4767 | -The process of upgrading an installed version of Landscape is |
4768 | -[documented in the Landscape |
4769 | -wiki](https://help.landscape.canonical.com/LDS/ReleaseNotes#Upgrading). |
4770 | - |
4771 | -Backing up and restoring |
4772 | ------------------------- |
4773 | - |
4774 | -Landscape uses several PostgreSQL databases and needs to keep them |
4775 | -consistent. For example, if you remove a computer from Landscape |
4776 | -management, more than one database needs to be updated. Running a |
4777 | -utility like `pg_dumpall`{.code} won't guarantee the consistency of the |
4778 | -backup, because while the dump process does lock all tables in the |
4779 | -database being backed up, it doesn't care about other databases. The |
4780 | -result will likely be an inconsistent backup. |
4781 | - |
4782 | -Instead, you should perform hot backups by using write-ahead log files |
4783 | -from PostgreSQL and/or filesystem snapshots in order to take a |
4784 | -consistent image of all the databases at a given time, or, if you can |
4785 | -afford some down time, run offline backups. To run offline backups, |
4786 | -disable the Landscape service and run a normal backup with |
4787 | -`pg_dump`{.code} or `pg_dumpall`{.code}. Offline backup can take just a |
4788 | -few minutes for databases at smaller sites, or about half an hour for a |
4789 | -database with several thousand computers. Bear in mind that Landscape |
4790 | -can be deployed using several servers, so when you are taking the |
4791 | -offline backup route, remember to disable all the Landscape services on |
4792 | -all server machines. See the [PostgreSQL documentation on backup and |
4793 | -restore](http://www.postgresql.org/docs/9.1/interactive/backup.html) for |
4794 | -detailed instructions. |
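     | -
     | -As a minimal sketch of the offline route (after stopping all Landscape services on every server machine):
     | -
     | -```
     | -# take a consistent dump of all databases as the postgres user
     | -sudo -u postgres pg_dumpall > landscape-backup.sql
     | -
     | -# later, restore into a fresh cluster
     | -sudo -u postgres psql -f landscape-backup.sql postgres
     | -```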
4795 | - |
4796 | -In addition to the Landscape databases, make sure you back up certain |
4797 | -additional important files: |
4798 | - |
4799 | -- `/etc/landscape`{.filename}: configuration files and the LDS license |
4800 | - |
4801 | -- `/etc/default/landscape-server`{.filename}: file to configure which |
4802 | - services will start on this machine |
4803 | - |
4804 | -- `/var/lib/landscape/hash-id-databases`{.filename}: these files are |
4805 | - recreated by a weekly cron job, which can take several minutes to |
4806 | - run, so backing them up can save time |
4807 | - |
4808 | -- `/etc/apache2/sites-available/`{.filename}: the Landscape Apache |
4809 | - vhost configuration file, usually named after the fully qualified |
4810 | - domain name of the server |
4811 | - |
4812 | -- `/etc/ssl/certs/`{.filename}: the Landscape server X509 certificate |
4813 | - |
4814 | -- `/etc/ssl/private/`{.filename}: the Landscape server X509 key file |
4815 | - |
4816 | -- `/etc/ssl/certs/landscape_server_ca.crt`{.filename}: if in use, this |
4817 | - is the CA file for the internal CA used to issue the Landscape |
4818 | - server certificates |
4819 | - |
4820 | -- `/etc/postgresql/8.4/main/`{.filename}: PostgreSQL configuration |
4821 | - files - in particular, postgresql.conf for tuning and pg\_hba.conf |
4822 | - for access rules. These files may be in a separate host, dedicated |
4823 | - to the database. Use subdirectory 9.1 for PostgreSQL version 9.1, |
4824 | - etc. |
4825 | - |
4826 | -- `/var/log/landscape`{.filename}: all LDS log files |
4827 | - |
4828 | -Log files |
4829 | ---------- |
4830 | - |
4831 | -Landscape generates several log files in |
4832 | -`/var/log/landscape`{.filename}: |
4833 | - |
4834 | -- `update-alerts`{.filename}: output of that cron job. Used to |
4835 | - determine which computers are offline |
4836 | - |
4837 | -- `process-alerts`{.filename}: output of that cron job. Used to |
4838 | - trigger alerts and send out alert email messages |
4839 | - |
4840 | -- `process-profiles`{.filename}: output of that cron job. Used to |
4841 | - process upgrade profiles |
4842 | - |
4843 | -- `sync_lds_releases`{.filename}: output of that cron job. Used to |
4844 | - check for new LDS releases |
4845 | - |
4846 | -- `maintenance`{.filename}: output of that cron job. Removes old |
4847 | - monitoring data and performs other maintenance tasks |
4848 | - |
4849 | -- `update_security_db`{.filename}: output of that cron job. Checks for |
4850 | - new Ubuntu Security Notices |
4851 | - |
4852 | -- `maas-poller`{.filename}: output of that cron job. Used to check the |
4853 | - status of MAAS tasks |
4854 | - |
4855 | -- `package-retirement`{.filename}: output of that (optional) cron job. |
4856 | - Moves unreferenced packages to another table in the database to |
4857 | - speed up package queries |
4858 | - |
4859 | -- `appserver-N`{.filename}: output of the application server N, where |
4860 | - N (here and below) is a number that distinguishes between multiple |
4861 | - instances that may be running |
4862 | - |
4863 | -- `appserver_access-N`{.filename}: access log for application server |
4864 | - N; the application server handles the web-based user interface |
4865 | - |
4866 | -- `message_server-N`{.filename}: output of message server N; the |
4867 | - message server handles communication between the clients and the |
4868 | - server |
4869 | - |
4870 | -- `message_server_access-N`{.filename}: access log for message server |
4871 | - N |
4872 | - |
4873 | -- `pingserver-N`{.filename}: output of pingserver N; the pingserver |
4874 | - tracks client heartbeats to watch for unresponsive clients |
4875 | - |
4876 | -- `pingtracker-N`{.filename}: complementary log for pingserver N |
4877 | - detailing how the algorithm is working |
4878 | - |
4879 | -- `async-frontend-N`{.filename}: log for async-frontend server N; the |
4880 | - async front end delivers AJAX-style content to the web user |
4881 | - interface |
4882 | - |
4883 | -- `api-N`{.filename}: log for API server N; the API services handles |
4884 | - requests from landscape-api clients |
4885 | - |
4886 | -- `combo-loader-N`{.filename}: log for combo-loader server N, which is |
4887 | - responsible for delivering CSS and JavaScript |
4888 | - |
4889 | -- `job-handler-N`{.filename}: log for job-handler server N; the job |
4890 | - handler service controls individual back-end tasks on the server |
4891 | - |
4892 | -- `package-upload-N`{.filename}: output of package-upload server N, |
4893 | - which is used in repository management for upload pockets, which are |
4894 | - repositories that hold packages that are uploaded to them by |
4895 | - authorized users |
4896 | - |
4897 | - |
4898 | - |
4899 | -##Managing administrators |
4900 | - |
4901 | - |
4902 | -Administrators are people who are authorized to manage computers using |
4903 | -Landscape. You can manage administrators from the ADMINISTRATORS menu |
4904 | -under your account. |
4905 | - |
4906 | -**Figure 4.1.** |
4907 | - |
4908 | -![image](./Chapter%A04.%A0Managing%20administrators_files/manageadmin1.png) |
4909 | - |
4911 | -On this page, the upper part of the screen shows a list of existing |
4912 | -administrators and their email addresses. You may create as many as |
4913 | -1,000 administrators, or as few as one. If you're running Landscape |
4914 | -Dedicated Server, the first user you create automatically becomes an
4915 | -administrator of your account. If you're using the hosted version of |
4916 | -Landscape, Canonical sends you an administrator invitation when your |
4917 | -account is created. After that, you must create additional |
4918 | -administrators yourself. |
4919 | - |
4920 | -Inviting administrators |
4921 | ------------------------ |
4922 | - |
4923 | -You make someone an administrator by sending that person an invitation |
4924 | -via email. On the administrator management page, specify the person's |
4925 | -name and email address, and the administration role you wish the person |
4926 | -to have. The choices that appear in the drop-down list are the roles |
4927 | -defined under the ROLES menu. See the discussion of roles below. |
4928 | - |
4929 | -When you have specified contact and role information, click on the |
4930 | -Invite button to send an invitation. The message will go out from the |
4931 | -email address you specified during Landscape setup. |
4932 | - |
4933 | -Users who receive an invitation will see an HTML link in the email |
4934 | -message. Clicking on the link takes them to a page where they are asked |
4935 | -to log in to Landscape or create an Ubuntu Single Sign-on account. Once |
4936 | -they do so, they gain the administrator privileges associated with the |
4937 | -role to which they've been assigned. |
4938 | - |
4939 | -It's worth noting that an administrator invitation is like a blank check |
4940 | -- the first person who clicks on the link and submits information can |
4941 | -become an administrator, even if it's not the person with the name and |
4942 | -email address to which you sent the invitation. Therefore, take care to |
4943 | -keep track of the status of administrator invitations. |
4944 | - |
4945 | -Disabling administrators |
4946 | ------------------------- |
4947 | - |
4948 | -To disable one or more administrators, tick the check boxes next to |
4949 | -their names, then click on the Disable button. The adminstrator is |
4950 | -permanently disabled and will no longer show up in Landscape. Though |
4951 | -this operation cannot be reversed, you can send another invitation to |
4952 | -the same email address. |
4953 | - |
4954 | -Roles |
4955 | ------ |
4956 | - |
4957 | -A role is a set of permissions that determine what operations an |
4958 | -administrator can perform. When you define a role, you also specify a |
4959 | -set of one or more access groups to which the role applies. |
4960 | - |
4961 | -Available permissions: |
4962 | - |
4963 | -- View computers |
4964 | - |
4965 | -- Manage computers
4966 | - |
4967 | -- Add computers to an access group |
4968 | - |
4969 | -- Remove computers from an access group |
4970 | - |
4971 | -- Manage pending computers (In the hosted version of Landscape, |
4972 | - pending computers are clients that have been set up with the |
4973 | - landscape-config tool but have not yet been accepted or rejected by |
4974 | - an administrator. Landscape Dedicated Server never needs to have |
4975 | - pending computers once it is set up and has an account password |
4976 | - assigned.) |
4977 | - |
4978 | -- View scripts |
4979 | - |
4980 | -- Manage scripts |
4981 | - |
4982 | -- View upgrade profiles |
4983 | - |
4984 | -- Manage upgrade profiles |
4985 | - |
4986 | -- View package profiles |
4987 | - |
4988 | -- Manage package profiles |
4989 | - |
4990 | -By specifying different permission levels and different access groups to |
4991 | -which they apply, you can create roles and associate them with |
4992 | -administrators to get a very granular level of control over sets of |
4993 | -computers. |
4994 | - |
4995 | - |
4996 | - |
4997 | - |
4998 | -##Access groups |
4999 | - |
5000 | - |
The diff has been truncated for viewing.