Merge lp:~evilnick/clouddocs/reorg into lp:~jujudocs/clouddocs/trunk

Proposed by Nick Veitch
Status: Merged
Approved by: Frank Mueller
Approved revision: 16
Merged at revision: 16
Proposed branch: lp:~evilnick/clouddocs/reorg
Merge into: lp:~jujudocs/clouddocs/trunk
Diff against target: 6235 lines (+3009/-3062)
31 files modified
Admin/Appendix-Ceph-and-OpenStack.md (+229/-0)
Admin/Backup-and-Recovery-Ceph.md (+107/-0)
Admin/Backup-and-Recovery-Juju.md (+59/-0)
Admin/Backup-and-Recovery-OpenStack.md (+131/-0)
Admin/Logging-Juju.md (+24/-0)
Admin/Logging-OpenStack.md (+92/-0)
Admin/Logging.md (+15/-0)
Admin/Scaling-Ceph.md (+36/-0)
Admin/Upgrading-and-Patching-Juju.md (+45/-0)
Admin/Upgrading-and-Patching-OpenStack.md (+83/-0)
Appendix-Ceph-and-OpenStack.md (+0/-229)
Backup-and-Recovery-Ceph.md (+0/-107)
Backup-and-Recovery-Juju.md (+0/-59)
Backup-and-Recovery-OpenStack.md (+0/-131)
Install/Installing-Ceph.md (+56/-0)
Install/Installing-MAAS.md (+467/-0)
Install/Intro.md (+28/-0)
Install/installing-openstack-outline.md (+395/-0)
Install/landcsape.md (+909/-0)
Installing-Ceph.md (+0/-56)
Installing-MAAS.md (+0/-467)
Intro.md (+0/-26)
Logging-Juju.md (+0/-24)
Logging-OpenStack.md (+0/-92)
Logging.md (+0/-15)
Scaling-Ceph.md (+0/-36)
Upgrading-and-Patching-Juju.md (+0/-45)
Upgrading-and-Patching-OpenStack.md (+0/-83)
installing-openstack-outline.md (+0/-395)
landcsape.md (+0/-1297)
resources/templates/Template (+333/-0)
To merge this branch: bzr merge lp:~evilnick/clouddocs/reorg
Reviewer: Frank Mueller
Review status: Pending
Review via email: mp+215915@code.launchpad.net

Description of the change

I have reorganised the Install and Admin sections into their own directories - this is necessary groundwork for converting them into separate HTML docs for the web.


Preview Diff

=== added directory 'Admin'
=== added file 'Admin/Appendix-Ceph-and-OpenStack.md'
--- Admin/Appendix-Ceph-and-OpenStack.md 1970-01-01 00:00:00 +0000
+++ Admin/Appendix-Ceph-and-OpenStack.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,229 @@
1Title: Appendix - Ceph and OpenStack
2Status: Done
3
4# Appendix: Ceph and OpenStack
5
6Ceph stripes block device images as objects across a cluster. This way it provides
7a better performance than a standalone server. OpenStack is able to use Ceph Block Devices
8through `libvirt`, which configures the QEMU interface to `librbd`.
9
10To use Ceph Block Devices with OpenStack, you must install QEMU, `libvirt`, and OpenStack
11first. It's recommended to use a separate physical node for your OpenStack installation.
12OpenStack recommends a minimum of 8GB of RAM and a quad-core processor.
13
14Three parts of OpenStack integrate with Ceph’s block devices:
15
16- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack
17 treats images as binary blobs and downloads them accordingly.
18- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to
19 attach volumes to running VMs. OpenStack manages volumes using Cinder services.
20- Guest Disks: Guest disks are guest operating system disks. By default, when you
21 boot a virtual machine, its disk appears as a file on the filesystem of the
22 hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior to OpenStack Havana,
23 the only way to boot a VM in Ceph was to use the boot from volume functionality
24 from Cinder. However, now it is possible to directly boot every virtual machine
25 inside Ceph without using Cinder. This is really handy because it allows us to
26 easily perform maintenance operations with the live-migration process. On the other
27 hand, if your hypervisor dies it is also really convenient to trigger Nova evacuate
28 and almost seamlessly run the virtual machine somewhere else.
29
30You can use OpenStack Glance to store images in a Ceph Block Device, and you can
31use Cinder to boot a VM using a copy-on-write clone of an image.
32
33## Create a pool
34
35By default, Ceph block devices use the `rbd` pool. You may use any available pool.
36We recommend creating a pool for Cinder and a pool for Glance. Ensure your Ceph
37cluster is running, then create the pools.
38
39````
40ceph osd pool create volumes 128
41ceph osd pool create images 128
42ceph osd pool create backups 128
43````
44
45## Configure OpenStack Ceph Clients
46
47The nodes running `glance-api`, `cinder-volume`, `nova-compute` and `cinder-backup` act
48as Ceph clients. Each requires the `ceph.conf` file
49
50````
51ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
52````
53
54On the `glance-api` node, you’ll need the Python bindings for `librbd`
55
56````
57sudo apt-get install python-ceph
58sudo yum install python-ceph
59````
60
61On the `nova-compute`, `cinder-backup` and `cinder-volume` nodes, install both the
62Python bindings and the client command line tools
63
64````
65sudo apt-get install ceph-common
66sudo yum install ceph
67````
68
69If you have cephx authentication enabled, create a new user for Nova/Cinder and
70Glance. Execute the following
71
72````
73ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
74ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
75ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
76````
77
78Add the keyrings for `client.cinder`, `client.glance`, and `client.cinder-backup`
79to the appropriate nodes and change their ownership
80
81````
82ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
83ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
84ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
85ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
86ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
87ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
88````
89
90Nodes running `nova-compute` need the keyring file for the `nova-compute` process.
91They also need to store the secret key of the `client.cinder` user in `libvirt`. The
92`libvirt` process needs it to access the cluster while attaching a block device
93from Cinder.
94
95Create a temporary copy of the secret key on the nodes running `nova-compute`
96
97````
98ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
99````
100
101Then, on the compute nodes, add the secret key to `libvirt` and remove the
102temporary copy of the key
103
104````
105uuidgen
106457eb676-33da-42ec-9a8c-9293d545c337
107
108cat > secret.xml <<EOF
109<secret ephemeral='no' private='no'>
110 <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
111 <usage type='ceph'>
112 <name>client.cinder secret</name>
113 </usage>
114</secret>
115EOF
116sudo virsh secret-define --file secret.xml
117Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
118sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
119````
120
121Save the uuid of the secret for configuring `nova-compute` later.
122
123**Important** You don’t necessarily need the UUID on all the compute nodes.
124However from a platform consistency perspective it’s better to keep the
125same UUID.
126
127## Configure OpenStack to use Ceph
128
129### Glance
130
131Glance can use multiple back ends to store images. To use Ceph block devices
132by default, edit `/etc/glance/glance-api.conf` and add
133
134````
135default_store=rbd
136rbd_store_user=glance
137rbd_store_pool=images
138````
139
140If you want to enable copy-on-write cloning of images into volumes, also add:
141
142````
143show_image_direct_url=True
144````
145
146Note that this exposes the back end location via Glance’s API, so
147the endpoint with this option enabled should not be publicly
148accessible.
149
150### Cinder
151
152OpenStack requires a driver to interact with Ceph block devices. You
153must also specify the pool name for the block device. On your
154OpenStack node, edit `/etc/cinder/cinder.conf` by adding
155
156````
157volume_driver=cinder.volume.drivers.rbd.RBDDriver
158rbd_pool=volumes
159rbd_ceph_conf=/etc/ceph/ceph.conf
160rbd_flatten_volume_from_snapshot=false
161rbd_max_clone_depth=5
162glance_api_version=2
163````
164
165If you’re using cephx authentication, also configure the user and
166uuid of the secret you added to `libvirt` as documented earlier
167
168````
169rbd_user=cinder
170rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
171````
172
173## Cinder Backup
174
175OpenStack Cinder Backup requires a specific daemon so don’t
176forget to install it. On your Cinder Backup node,
177edit `/etc/cinder/cinder.conf` and add:
178
179````
180backup_driver=cinder.backup.drivers.ceph
181backup_ceph_conf=/etc/ceph/ceph.conf
182backup_ceph_user=cinder-backup
183backup_ceph_chunk_size=134217728
184backup_ceph_pool=backups
185backup_ceph_stripe_unit=0
186backup_ceph_stripe_count=0
187restore_discard_excess_bytes=true
188````
189
190### Nova
191
192In order to boot all the virtual machines directly into Ceph, Nova must be
193configured. On every compute node, edit `/etc/nova/nova.conf` and add
194
195````
196libvirt_images_type=rbd
197libvirt_images_rbd_pool=volumes
198libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
199rbd_user=cinder
200rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
201````
202
203It is also good practice to disable any file injection. Usually, while
204booting an instance, Nova attempts to open the rootfs of the virtual machine
205and inject things like passwords and SSH keys directly into the filesystem.
206It is better to rely on the metadata service
207and cloud-init instead. On every compute node, edit `/etc/nova/nova.conf` and add
208
209````
210libvirt_inject_password=false
211libvirt_inject_key=false
212libvirt_inject_partition=-2
213````
214
215## Restart OpenStack
216
217To activate the Ceph block device driver and load the block device pool name
218into the configuration, you must restart OpenStack.
219
220````
221sudo glance-control api restart
222sudo service nova-compute restart
223sudo service cinder-volume restart
224sudo service cinder-backup restart
225````
226
227Once OpenStack is up and running, you should be able to create a volume
228and boot from it.
229
0230
=== added file 'Admin/Backup-and-Recovery-Ceph.md'
--- Admin/Backup-and-Recovery-Ceph.md 1970-01-01 00:00:00 +0000
+++ Admin/Backup-and-Recovery-Ceph.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,107 @@
1Title: Backup and Recovery - Ceph
2Status: In Progress
3
4# Backup and Recovery - Ceph
5
6## Introduction
7
8A snapshot is a read-only copy of the state of an image at a particular point in time. One
9of the advanced features of Ceph block devices is that you can create snapshots of the images
10to retain a history of an image’s state. Ceph also supports snapshot layering, which allows
11you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots
12using the `rbd` command and many higher level interfaces including OpenStack.
13
14## Scope
15
16**TODO**
17
18## Backup
19
20To create a snapshot with `rbd`, specify the `snap create` option, the pool name and the
21image name.
22
23````
24rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
25rbd snap create {pool-name}/{image-name}@{snap-name}
26````
27
28For example:
29
30````
31rbd --pool rbd snap create --snap snapname foo
32rbd snap create rbd/foo@snapname
33````
34
35## Restore
36
37To rollback to a snapshot with `rbd`, specify the `snap rollback` option, the pool name, the
38image name and the snap name.
39
40````
41rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
42rbd snap rollback {pool-name}/{image-name}@{snap-name}
43````
44
45For example:
46
47````
48rbd --pool rbd snap rollback --snap snapname foo
49rbd snap rollback rbd/foo@snapname
50````
51
52**Note:** Rolling back an image to a snapshot means overwriting the current version of the image
53with data from a snapshot. The time it takes to execute a rollback increases with the size of the
54image. It is faster to clone from a snapshot than to roll back an image to a snapshot, and cloning is
55the preferred method of returning to a pre-existing state.
56
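A minimal sketch of such a clone (cloning requires a format 2 image, and the snapshot has to be protected before it can be cloned):

````
rbd snap protect {pool-name}/{image-name}@{snap-name}
rbd clone {pool-name}/{image-name}@{snap-name} {pool-name}/{clone-name}
````
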
57## Maintenance
58
59Taking snapshots increases your level of security but also costs disk space. To delete older ones
60you can list them, delete individual ones or purge all snapshots.
61
62To list snapshots of an image, specify the pool name and the image name.
63
64````
65rbd --pool {pool-name} snap ls {image-name}
66rbd snap ls {pool-name}/{image-name}
67````
68
69For example:
70
71````
72rbd --pool rbd snap ls foo
73rbd snap ls rbd/foo
74````
75
76To delete a snapshot with `rbd`, specify the `snap rm` option, the pool name, the image name
77and the snapshot name.
78
79````
80rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
81rbd snap rm {pool-name}/{image-name}@{snap-name}
82````
83
84For example:
85
86````
87rbd --pool rbd snap rm --snap snapname foo
88rbd snap rm rbd/foo@snapname
89````
90
91**Note:** Ceph OSDs delete data asynchronously, so deleting a snapshot doesn’t free up the
92disk space immediately.
93
94To delete all snapshots for an image with `rbd`, specify the snap purge option and the
95image name.
96
97````
98rbd --pool {pool-name} snap purge {image-name}
99rbd snap purge {pool-name}/{image-name}
100````
101
102For example:
103
104````
105rbd --pool rbd snap purge foo
106rbd snap purge rbd/foo
107````
0108
=== added file 'Admin/Backup-and-Recovery-Juju.md'
--- Admin/Backup-and-Recovery-Juju.md 1970-01-01 00:00:00 +0000
+++ Admin/Backup-and-Recovery-Juju.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,59 @@
1Title: Backup and Recovery - Juju
2Status: In Progress
3
4# Backup and Recovery - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Backup
15
16Juju's working principle is based on storing the state of the cloud in a
17database containing information about the environment, machines, services,
18and units. Changes to an environment are made to this state first and are
19then detected by the corresponding agents, which carry out the
20required steps.
21
22This principle allows Juju to easily create a *backup* of this information, plus
23the needed configuration data and some other useful information. The
24command to do so is `juju-backup`, which saves the currently selected
25environment, so make sure to switch to the environment you want to
26back up.
27
28````
29$ juju switch my-env
30$ juju backup
31````
32
33The command keeps two generations of backups on the bootstrap node, also
34known as `machine-0`. Besides the state and configuration data of this machine
35and the other machines in its environment, the aggregated log for all
36machines and the local log of this machine are saved. The aggregated log
37is the same one you access when calling
38
39````
40$ juju debug-log
41````
42
43and enables you to retrieve helpful information in case of a problem. After
44the backup is created on the bootstrap node it is transferred to the current
45directory on your working machine as `juju-backup-YYYYMMDD-HHMM.tgz`,
46where *YYYYMMDD-HHMM* is the date and time of the backup. If you want to open
47the backup manually to access the logging data mentioned above, you'll find it in the
48contained archive `root.tar`; this nested archive preserves the owner,
49access rights and other metadata of the backed-up files.
50
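A short sketch of inspecting such a backup by hand (the file name is an example):

````
$ tar tzf juju-backup-20140415-1200.tgz    # list the archive and locate root.tar
$ tar xzf juju-backup-20140415-1200.tgz
$ tar tf root.tar                          # the preserved files, including the logs
````
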
51## Restore
52
53To *restore* an environment the according command is
54
55````
56$ juju restore <BACKUPFILE>
57````
58
59This way you can choose exactly which backup of an environment to restore.
060
=== added file 'Admin/Backup-and-Recovery-OpenStack.md'
--- Admin/Backup-and-Recovery-OpenStack.md 1970-01-01 00:00:00 +0000
+++ Admin/Backup-and-Recovery-OpenStack.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,131 @@
1Title: Backup and Recovery - OpenStack
2Status: In Progress
3
4# Backup and Recovery - OpenStack
5
6## Introduction
7
8OpenStack's flexibility makes backup and restore a very individual process
9that depends on the components in use. This section describes how the critical parts
10OpenStack needs to run, like the configuration files and databases, are saved. As
11before for Juju, it doesn't describe how to back up the objects inside the Object
12Storage or the data inside the Block Storage.
13
14## Scope
15
16**TODO**
17
18## Backup Cloud Controller Database
19
20Like Juju, the OpenStack cloud controller uses a database server which stores the
21central databases for Nova, Glance, Keystone, Cinder, and Swift. You can back up
22the five databases into one common dump:
23
24````
25$ mysqldump --opt --all-databases > openstack.sql
26````
27
28Alternatively you can backup the database for each component individually:
29
30````
31$ mysqldump --opt nova > nova.sql
32$ mysqldump --opt glance > glance.sql
33$ mysqldump --opt keystone > keystone.sql
34$ mysqldump --opt cinder > cinder.sql
35$ mysqldump --opt swift > swift.sql
36````
37
38## Backup File Systems
39
40Besides the databases, OpenStack uses different directories for its configuration,
41runtime files, and logging. Like the databases, they are grouped individually per
42component, so the backup can also be done per component.
43
44### Nova
45
46You'll find the configuration directory `/etc/nova` on the cloud controller and
47each compute node. It should be regularly backed up.
48
49Another directory to back up is `/var/lib/nova`. But here you have to be careful
50with the `instances` subdirectory on the compute nodes. It contains the KVM images
51of the running instances. If you want to maintain backup copies of those instances
52you can back them up here too. In this case make sure not to save a live KVM instance,
53because it may not boot properly after the backup is restored.
54
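A minimal sketch of such a backup that leaves the live instances out (the target path `/backup` is only an example):

````
$ tar -C /var/lib -czf /backup/nova-lib-$(date +%Y%m%d).tgz --exclude='nova/instances' nova
````
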
55The third directory for the compute component is `/var/log/nova`. If you use a central
56logging server this directory does not need to be backed up, so we suggest you
57run your environment with this kind of logging.
58
59### Glance
60
61As for Nova, you'll find the directories `/etc/glance` and `/var/log/glance`; the
62handling should be the same here too.
63
64Glance also uses the directory `/var/lib/glance`, which should be backed
65up as well.
66
67### Keystone
68
69Keystone uses the directories `/etc/keystone`, `/var/lib/keystone`, and
70`/var/log/keystone`. They follow the same rules as Nova and Glance. Even though
71the `lib` directory should not contain any data in use, it can also be backed
72up just in case.
73
74### Cinder
75
76As before, you'll find the directories `/etc/cinder`, `/var/log/cinder`,
77and `/var/lib/cinder`, and the handling should be the same. Unlike
78Nova and Glance, there's no special handling of `/var/lib/cinder` needed.
79
80### Swift
81
82Besides the Swift configuration, the directory `/etc/swift` contains the ring files
83and the ring builder files. If those get lost, the data on your cluster becomes inaccessible.
84So you can easily imagine how important it is to back up this directory. Best practice
85is to copy the builder files to the storage nodes along with the ring files, so
86multiple copies are spread throughout the cluster.
87
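A minimal sketch of such a copy, assuming three storage nodes reachable as `storage-01` to `storage-03`:

````
$ for node in storage-01 storage-02 storage-03; do
    scp /etc/swift/*.builder /etc/swift/*.ring.gz $node:/etc/swift/
  done
````
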
88**TODO(mue)** Really needed when we use Ceph for storage?
89
90## Restore
91
92Restoring from the backups is a step-by-step process covering each component's
93databases and all its directories. It's important that the component to restore is
94not running at the time, so always start the restore by stopping all components.
95
96Let's take Nova as an example. First execute
97
98````
99$ stop nova-api
100$ stop nova-cert
101$ stop nova-consoleauth
102$ stop nova-novncproxy
103$ stop nova-objectstore
104$ stop nova-scheduler
105````
106
107on the cloud controller to safely stop the processes of the component. The next step is the
108restore of the database. By using the `--opt` option during backup we ensured that all
109tables are dropped before being recreated, so there's no conflict with existing data in
110the databases.
111
112````
113$ mysql nova < nova.sql
114````
115
116Before restoring the directories you should move at least the configuration directory,
117here `/etc/nova`, into a secure location in case you need to roll it back.
118
119After the database and the files are restored you can start MySQL and Nova again.
120
121````
122$ start mysql
123$ start nova-api
124$ start nova-cert
125$ start nova-consoleauth
126$ start nova-novncproxy
127$ start nova-objectstore
128$ start nova-scheduler
129````
130
131The process for the other components looks similar.
0132
=== added file 'Admin/Logging-Juju.md'
--- Admin/Logging-Juju.md 1970-01-01 00:00:00 +0000
+++ Admin/Logging-Juju.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,24 @@
1Title: Logging - Juju
2Status: In Progress
3
4# Logging - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Connecting to rsyslogd
15
16Juju already uses `rsyslogd` for the aggregation of all logs into one centralized log. The
17target of this logging is the file `/var/log/juju/all-machines.log`. You can directly
18access it using the command
19
20````
21$ juju debug-log
22````
23
24**TODO** Describe a way to redirect this log to a central rsyslogd server.
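
One possible approach (a sketch only; the IP address stands for the central rsyslogd server set up in the OpenStack logging section, and the facility `local4` is an arbitrary choice) is to let the local rsyslogd follow the aggregated log file with its `imfile` module and forward it:

````
# /etc/rsyslog.d/juju-forward.conf (sketch)
$ModLoad imfile
$InputFileName /var/log/juju/all-machines.log
$InputFileTag juju-all-machines:
$InputFileStateFile stat-juju-all-machines
$InputFileFacility local4
$InputRunFileMonitor
# Forward the followed log to the central rsyslogd server
local4.* @192.168.1.10
````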
025
=== added file 'Admin/Logging-OpenStack.md'
--- Admin/Logging-OpenStack.md 1970-01-01 00:00:00 +0000
+++ Admin/Logging-OpenStack.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,92 @@
1Title: Logging - OpenStack
2Status: In Progress
3
4# Logging - OpenStack
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Connecting to rsyslogd
15
16By default OpenStack writes its logging output into files in per-component directories,
17like `/var/log/nova` or `/var/log/glance`. To use `rsyslogd` the components
18have to be configured to also log to `syslog`. When doing this, also configure each component
19to log to a different syslog facility. This will help you split the logs into individual
20components on the central logging server. So ensure the following settings:
21
22**/etc/nova/nova.conf:**
23
24````
25use_syslog=True
26syslog_log_facility=LOG_LOCAL0
27````
28
29**/etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:**
30
31````
32use_syslog=True
33syslog_log_facility=LOG_LOCAL1
34````
35
36**/etc/cinder/cinder.conf:**
37
38````
39use_syslog=True
40syslog_log_facility=LOG_LOCAL2
41````
42
43**/etc/keystone/keystone.conf:**
44
45````
46use_syslog=True
47syslog_log_facility=LOG_LOCAL3
48````
49
50The object storage Swift by default already logs to syslog. So you can now tell the local
51rsyslogd clients to pass the logged information to the logging server. You do this
52by creating a `/etc/rsyslog.d/client.conf` containing a line like
53
54````
55*.* @192.168.1.10
56````
57
58where the IP address points to your rsyslogd server. It is best to choose a server that is
59dedicated to this task only. On it you've got to create the file `/etc/rsyslog.d/server.conf`
60containing the settings
61
62````
63# Enable UDP
64$ModLoad imudp
65# Listen on 192.168.1.10 only
66$UDPServerAddress 192.168.1.10
67# Port 514
68$UDPServerRun 514
69# Create logging templates for nova
70$template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log"
71$template NovaAll,"/var/log/rsyslog/nova.log"
72# Log everything else to syslog.log
73$template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log"
74*.* ?DynFile
75# Log various openstack components to their own individual file
76local0.* ?NovaFile
77local0.* ?NovaAll
78& ~
79````
80
81This example only contains the settings for Nova; the other OpenStack components
82have to be added the same way (see the example below). Using two templates per component, one containing the
83`%HOSTNAME%` variable and one without it, enables a better splitting of the logged
84data. Consider the two example nodes `alpha.example.com` and `bravo.example.com`.
85They will write their logging into the files
86
87- `/var/log/rsyslog/alpha.example.com/nova.log` - only the data of alpha,
88- `/var/log/rsyslog/bravo.example.com/nova.log` - only the data of bravo,
89- `/var/log/rsyslog/nova.log` - the combined data of both.
90
91This allows a quick overview of all nodes as well as a focused analysis of an
92individual node.
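
For example, extending `/etc/rsyslog.d/server.conf` for Glance, which logs to `LOG_LOCAL1` in the settings above, follows the same pattern:

````
# Create logging templates for glance
$template GlanceFile,"/var/log/rsyslog/%HOSTNAME%/glance.log"
$template GlanceAll,"/var/log/rsyslog/glance.log"
# Log glance (local1) to its own files
local1.* ?GlanceFile
local1.* ?GlanceAll
& ~
````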
093
=== added file 'Admin/Logging.md'
--- Admin/Logging.md 1970-01-01 00:00:00 +0000
+++ Admin/Logging.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,15 @@
1Title: Logging
2Status: In Progress
3
4# Logging
5
6Keeping track of individual logs is a cumbersome job, even in an environment with only a
7few computer systems. It's even worse in typical clouds with a large number of
8nodes. Here the centralized approach using `rsyslogd` helps. It allows you to aggregate
9the logging output of all systems in one place, where monitoring and analysis become
10much simpler.
11
12Ubuntu uses `rsyslogd` as the default logging service. Since it is natively able to send
13logs to a remote location, you don't have to install anything extra to enable this feature,
14just modify the configuration file. In doing this, consider running your logging over
15a management network or using an encrypted VPN to avoid interception.
016
=== added file 'Admin/Scaling-Ceph.md'
--- Admin/Scaling-Ceph.md 1970-01-01 00:00:00 +0000
+++ Admin/Scaling-Ceph.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,36 @@
1Title: Scaling - Ceph
2Status: In Progress
3
4# Scaling - Ceph
5
6## Introduction
7
8Besides the redundancy for more safety and the higher performance gained by using
9Ceph as the storage backend for OpenStack, the user also benefits from a simpler way
10of scaling the storage as the needs grow.
11
12## Scope
13
14**TODO**
15
16## Scaling
17
18The addition of Ceph nodes is done using the Juju `add-unit` command. By default
19it adds only one node, but it is possible to pass the number of wanted nodes as an
20argument. To add one more Ceph OSD Daemon node you simply call
21
22```
23juju add-unit ceph-osd
24```
25
26Larger numbers of nodes can be added using the `-n` argument, e.g. 5 nodes
27with
28
29```
30juju add-unit -n 5 ceph-osd
31```
32
33**Attention:** Adding more nodes to Ceph leads to a redistribution of data
34between the nodes. This can cause inefficiencies while the rebalancing takes place, so
35it should be done in smaller steps.
36
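To follow the rebalancing you can watch the cluster state from one of the Ceph units (a sketch; the unit name `ceph/0` assumes the deployment described in the installation section):

```
juju ssh ceph/0 sudo ceph status
juju ssh ceph/0 sudo ceph -w    # stream status updates until the cluster is healthy again
```
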
037
=== added file 'Admin/Upgrading-and-Patching-Juju.md'
--- Admin/Upgrading-and-Patching-Juju.md 1970-01-01 00:00:00 +0000
+++ Admin/Upgrading-and-Patching-Juju.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,45 @@
1Title: Upgrading and Patching - Juju
2Status: In Progress
3
4# Upgrading and Patching - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Upgrading
15
16The upgrade of a Juju environment is done using the Juju client and its command
17
18````
19$ juju upgrade-juju
20````
21
22This command sets the version number for all Juju agents to run. By default this
23is the most recent supported version compatible with the command-line tools version,
24so ensure that you've upgraded the Juju client first.
25
26When run without arguments, `upgrade-juju` will try to upgrade to the following
27versions, in order of preference and depending on the current value of the
28environment's `agent-version` setting:
29
30- The highest patch.build version of the *next* stable major.minor version.
31- The highest patch.build version of the *current* major.minor version.
32
33Both of these depend on the availability of the corresponding tools. On MAAS you've
34got to manage this yourself using the command
35
36````
37$ juju sync-tools
38````
39
40This copies the Juju tools tarball from the official tools store (located
41at https://streams.canonical.com/juju) into your environment.
42
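If you need to move to a specific version rather than the newest available one, `upgrade-juju` also accepts an explicit version (a sketch; the version number here is only an example):

````
$ juju upgrade-juju --version 1.18.1
````
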
43## Patching
44
45**TODO**
046
=== added file 'Admin/Upgrading-and-Patching-OpenStack.md'
--- Admin/Upgrading-and-Patching-OpenStack.md 1970-01-01 00:00:00 +0000
+++ Admin/Upgrading-and-Patching-OpenStack.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,83 @@
1Title: Upgrading and Patching - OpenStack
2Status: In Progress
3
4# Upgrading and Patching - OpenStack
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Upgrading
15
16Upgrading an OpenStack cluster in one big step is an approach requiring additional
17hardware to set up an upgraded cloud beside the production one, and it leads to a longer
18outage while your cloud is in read-only mode, the state is transferred to the new
19cloud and the environments are switched. So the preferred way of upgrading an OpenStack
20cloud is a rolling upgrade of each component of the system, piece by piece.
21
22Here you can choose between in-place and side-by-side upgrades. But the first one needs
23to shut down the affected component while you're performing its upgrade. Additionally you
24may have trouble in case of a rollback. To avoid this, the side-by-side upgrade is
25the preferred way here.
26
27Before starting the upgrade itself you should
28
29- Perform some "cleaning" of the environment process to ensure a consistent state; for
30 example, instances not fully purged from the system after deletion may cause
31 indeterminate behavior
32- Read the release notes and documentation
33- Find incompatibilities between your versions
34
35The upgrade tasks here follow the same procedure for each component:
36
371. Configure the new worker
381. Turn off the current worker; during this time hide the downtime using a message
39 queue or a load balancer
401. Take a backup as described earlier of the old worker for a rollback
411. Copy the state of the current to the new worker
421. Start up the new worker
43
44Now repeat these steps for each worker in an appropriate order. In case of a problem it
45should be easy to roll back as long as the former worker stays untouched. This is,
46besides the shorter downtime, the most important advantage of the side-by-side upgrade.
47
48The following order for service upgrades seems the most successful:
49
501. Upgrade the OpenStack Identity Service (Keystone).
511. Upgrade the OpenStack Image Service (Glance).
521. Upgrade OpenStack Compute (Nova), including networking components.
531. Upgrade OpenStack Block Storage (Cinder).
541. Upgrade the OpenStack dashboard.
55
56These steps look very easy, but they still form a complex procedure depending on your cloud
57configuration. So we recommend having a testing environment with a near-identical
58architecture to your production system. This doesn't mean that you need the same
59sizes and hardware, which would be best but expensive. But there are some ways to reduce
60the cost.
61
62- Use your own cloud. The simplest place to start testing the next version of OpenStack
63 is by setting up a new environment inside your own cloud. This may seem odd—especially
64 the double virtualisation used in running compute nodes—but it's a sure way to very
65 quickly test your configuration.
66- Use a public cloud. Especially because your own cloud is unlikely to have sufficient
67 space to scale test to the level of the entire cloud, consider using a public cloud
68 to test the scalability limits of your cloud controller configuration. Most public
69 clouds bill by the hour, which means it can be inexpensive to perform even a test
70 with many nodes.
71- Make another storage endpoint on the same system. If you use an external storage plug-in
72 or shared file system with your cloud, in many cases it's possible to test that it
73 works by creating a second share or endpoint. This will enable you to test the system
74 before entrusting the new version onto your storage.
75- Watch the network. Even at smaller-scale testing, it should be possible to determine
76 whether something is going horribly wrong in intercomponent communication if you
77 look at the network packets and see too many.
78
79**TODO** Add more concrete description here.
80
81## Patching
82
83**TODO**
084
=== removed file 'Appendix-Ceph-and-OpenStack.md'
--- Appendix-Ceph-and-OpenStack.md 2014-04-02 16:18:10 +0000
+++ Appendix-Ceph-and-OpenStack.md 1970-01-01 00:00:00 +0000
@@ -1,229 +0,0 @@
1Title: Appendix - Ceph and OpenStack
2Status: Done
3
4# Appendix: Ceph and OpenStack
5
6Ceph stripes block device images as objects across a cluster. This way it provides
7a better performance than standalone server. OpenStack is able to use Ceph Block Devices
8through `libvirt`, which configures the QEMU interface to `librbd`.
9
10To use Ceph Block Devices with OpenStack, you must install QEMU, `libvirt`, and OpenStack
11first. It's recommended to use a separate physical node for your OpenStack installation.
12OpenStack recommends a minimum of 8GB of RAM and a quad-core processor.
13
14Three parts of OpenStack integrate with Ceph’s block devices:
15
16- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack
17 treats images as binary blobs and downloads them accordingly.
18- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to
19 attach volumes to running VMs. OpenStack manages volumes using Cinder services.
20- Guest Disks: Guest disks are guest operating system disks. By default, when you
21 boot a virtual machine, its disk appears as a file on the filesystem of the
22 hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior OpenStack Havana,
23 the only way to boot a VM in Ceph was to use the boot from volume functionality
24 from Cinder. However, now it is possible to directly boot every virtual machine
25 inside Ceph without using Cinder. This is really handy because it allows us to
26 easily perform maintenance operation with the live-migration process. On the other
27 hand, if your hypervisor dies it is also really convenient to trigger Nova evacuate
28 and almost seamlessly run the virtual machine somewhere else.
29
30You can use OpenStack Glance to store images in a Ceph Block Device, and you can
31use Cinder to boot a VM using a copy-on-write clone of an image.
32
33## Create a pool
34
35By default, Ceph block devices use the `rbd` pool. You may use any available pool.
36We recommend creating a pool for Cinder and a pool for Glance. Ensure your Ceph
37cluster is running, then create the pools.
38
39````
40ceph osd pool create volumes 128
41ceph osd pool create images 128
42ceph osd pool create backups 128
43````
44
45## Configure OpenStack Ceph Clients
46
47The nodes running `glance-api`, `cinder-volume`, `nova-compute` and `cinder-backup` act
48as Ceph clients. Each requires the `ceph.conf` file
49
50````
51ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
52````
53
54On the `glance-api` node, you’ll need the Python bindings for `librbd`
55
56````
57sudo apt-get install python-ceph
58sudo yum install python-ceph
59````
60
61On the `nova-compute`, `cinder-backup` and on the `cinder-volume` node, use both the
62Python bindings and the client command line tools
63
64````
65sudo apt-get install ceph-common
66sudo yum install ceph
67````
68
69If you have cephx authentication enabled, create a new user for Nova/Cinder and
70Glance. Execute the following
71
72````
73ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
74ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
75ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
76````
77
78Add the keyrings for `client.cinder`, `client.glance`, and `client.cinder-backup`
79to the appropriate nodes and change their ownership
80
81````
82ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
83ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
84ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
85ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
86ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
87ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
88````
89
90Nodes running `nova-compute` need the keyring file for the `nova-compute` process.
91They also need to store the secret key of the `client.cinder` user in `libvirt`. The
92`libvirt` process needs it to access the cluster while attaching a block device
93from Cinder.
94
95Create a temporary copy of the secret key on the nodes running `nova-compute`
96
97````
98ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
99````
100
101Then, on the compute nodes, add the secret key to `libvirt` and remove the
102temporary copy of the key
103
104````
105uuidgen
106457eb676-33da-42ec-9a8c-9293d545c337
107
108cat > secret.xml <<EOF
109<secret ephemeral='no' private='no'>
110 <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
111 <usage type='ceph'>
112 <name>client.cinder secret</name>
113 </usage>
114</secret>
115EOF
116sudo virsh secret-define --file secret.xml
117Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
118sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
119````
120
121Save the uuid of the secret for configuring `nova-compute` later.
122
123**Important** You don’t necessarily need the UUID on all the compute nodes.
124However from a platform consistency perspective it’s better to keep the
125same UUID.
126
127## Configure OpenStack to use Ceph
128
129### Glance
130
131Glance can use multiple back ends to store images. To use Ceph block devices
132by default, edit `/etc/glance/glance-api.conf` and add
133
134````
135default_store=rbd
136rbd_store_user=glance
137rbd_store_pool=images
138````
139
140If want to enable copy-on-write cloning of images into volumes, also add:
141
142````
143show_image_direct_url=True
144````
145
146Note that this exposes the back end location via Glance’s API, so
147the endpoint with this option enabled should not be publicly
148accessible.
149
150### Cinder
151
152OpenStack requires a driver to interact with Ceph block devices. You
153must also specify the pool name for the block device. On your
154OpenStack node, edit `/etc/cinder/cinder.conf` by adding
155
156````
157volume_driver=cinder.volume.drivers.rbd.RBDDriver
158rbd_pool=volumes
159rbd_ceph_conf=/etc/ceph/ceph.conf
160rbd_flatten_volume_from_snapshot=false
161rbd_max_clone_depth=5
162glance_api_version=2
163````
164
165If you’re using cephx authentication, also configure the user and
166uuid of the secret you added to `libvirt` as documented earlier
167
168````
169rbd_user=cinder
170rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
171````
172
173## Cinder Backup
174
175OpenStack Cinder Backup requires a specific daemon so don’t
176forget to install it. On your Cinder Backup node,
177edit `/etc/cinder/cinder.conf` and add:
178
179````
180backup_driver=cinder.backup.drivers.ceph
181backup_ceph_conf=/etc/ceph/ceph.conf
182backup_ceph_user=cinder-backup
183backup_ceph_chunk_size=134217728
184backup_ceph_pool=backups
185backup_ceph_stripe_unit=0
186backup_ceph_stripe_count=0
187restore_discard_excess_bytes=true
188````
189
190### Nova
191
192In order to boot all the virtual machines directly into Ceph Nova must be
193configured. On every Compute nodes, edit `/etc/nova/nova.conf` and add
194
195````
196libvirt_images_type=rbd
197libvirt_images_rbd_pool=volumes
198libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
199rbd_user=cinder
200rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
201````
202
203It is also a good practice to disable any file injection. Usually, while
204booting an instance Nova attempts to open the rootfs of the virtual machine.
205Then, it injects directly into the filesystem things like: password, ssh
206keys etc... At this point, it is better to rely on the metadata service
207and cloud-init. On every Compute nodes, edit `/etc/nova/nova.conf` and add
208
209````
210libvirt_inject_password=false
211libvirt_inject_key=false
212libvirt_inject_partition=-2
213````
214
215## Restart OpenStack
216
217To activate the Ceph block device driver and load the block device pool name
218into the configuration, you must restart OpenStack.
219
220````
221sudo glance-control api restart
222sudo service nova-compute restart
223sudo service cinder-volume restart
224sudo service cinder-backup restart
225````
226
227Once OpenStack is up and running, you should be able to create a volume
228and boot from it.
229
2300
=== removed file 'Backup-and-Recovery-Ceph.md'
--- Backup-and-Recovery-Ceph.md 2014-04-02 16:18:10 +0000
+++ Backup-and-Recovery-Ceph.md 1970-01-01 00:00:00 +0000
@@ -1,107 +0,0 @@
1Title: Backup and Recovery - Ceph
2Status: In Progress
3
4# Backup and Recovery - Ceph
5
6## Introduction
7
8A snapshot is a read-only copy of the state of an image at a particular point in time. One
9of the advanced features of Ceph block devices is that you can create snapshots of the images
10to retain a history of an image’s state. Ceph also supports snapshot layering, which allows
11you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots
12using the `rbd` command and many higher level interfaces including OpenStack.
13
14## Scope
15
16**TODO**
17
18## Backup
19
20To create a snapshot with `rbd`, specify the `snap create` option, the pool name and the
21image name.
22
23````
24rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
25rbd snap create {pool-name}/{image-name}@{snap-name}
26````
27
28For example:
29
30````
31rbd --pool rbd snap create --snap snapname foo
32rbd snap create rbd/foo@snapname
33````
34
35## Restore
36
37To rollback to a snapshot with `rbd`, specify the `snap rollback` option, the pool name, the
38image name and the snap name.
39
40````
41rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
42rbd snap rollback {pool-name}/{image-name}@{snap-name}
43````
44
45For example:
46
47````
48rbd --pool rbd snap rollback --snap snapname foo
49rbd snap rollback rbd/foo@snapname
50````
51
52**Note:** Rolling back an image to a snapshot means overwriting the current version of the image
53with data from a snapshot. The time it takes to execute a rollback increases with the size of the
54image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is
55the preferred method of returning to a pre-existing state.
56
57## Maintenance
58
59Taking snapshots increases your level of security but also costs disk space. To delete older ones
60you can list them, delete individual ones or purge all snapshots.
61
62To list snapshots of an image, specify the pool name and the image name.
63
64````
65rbd --pool {pool-name} snap ls {image-name}
66rbd snap ls {pool-name}/{image-name}
67````
68
69For example:
70
71````
72rbd --pool rbd snap ls foo
73rbd snap ls rbd/foo
74````
75
76To delete a snapshot with `rbd`, specify the `snap rm` option, the pool name, the image name
77and the username.
78
79````
80rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
81rbd snap rm {pool-name}/{image-name}@{snap-name}
82````
83
84For example:
85
86````
87rbd --pool rbd snap rm --snap snapname foo
88rbd snap rm rbd/foo@snapname
89````
90
91**Note:** Ceph OSDs delete data asynchronously, so deleting a snapshot doesn’t free up the
92disk space immediately.
93
94To delete all snapshots for an image with `rbd`, specify the snap purge option and the
95image name.
96
97````
98rbd --pool {pool-name} snap purge {image-name}
99rbd snap purge {pool-name}/{image-name}
100````
101
102For example:
103
104````
105rbd --pool rbd snap purge foo
106rbd snap purge rbd/foo
107````
1080
=== removed file 'Backup-and-Recovery-Juju.md'
--- Backup-and-Recovery-Juju.md 2014-04-02 16:18:10 +0000
+++ Backup-and-Recovery-Juju.md 1970-01-01 00:00:00 +0000
@@ -1,59 +0,0 @@
1Title: Backup and Recovery - Juju
2Status: In Progress
3
4# Backup and Recovery - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Backup
15
16Jujus working principle is based on storing the state of the cloud in
17database containing information about the environment, machines, services,
18and units. Changes to an environment are made to the state first, which are
19then detected by their according agents. Those are responsible to do the
20needed steps then.
21
22This principle allows Juju to easily do a *backup* of this information, plus
23some needed configuration data and some more useful information more. The
24command to do so is `juju-backup`, which saves the currently selected
25environment. So please make sure to switch to the environment you want to
26backup.
27
28````
29$ juju switch my-env
30$ juju backup
31````
32
33The command creates two generations of backups on the bootstrap node, also
34know as `machine-0`. Beside the state and configuration data about this machine
35itself and the other ones of its environment the aggregated log for all
36machines and the one of this machine itself are saved. The aggregated log
37is the same you're accessing when calling
38
39````
40$ juju debug-log
41````
42
43and enables you to retrieve helpful information in case of a problem. After
44the backup is created on the bootstrap node it is transferred to your
45working machine into the current directory as `juju-backup-YYYYMMDD-HHMM.tgz`,
46where *YYYYMMDD-HHMM* is date and time of the backup. In case you want to open
47the backup manually to access the mentioned logging data you'll find it in the
48contained archive `root.tar`. Here please don't wonder, this way all owner,
49access rights and other information are preserved.
50
51## Restore
52
53To *restore* an environment the according command is
54
55````
56$ juju restore <BACKUPFILE>
57````
58
59This way you're able to choose the concrete environment to restore.
600
=== removed file 'Backup-and-Recovery-OpenStack.md'
--- Backup-and-Recovery-OpenStack.md 2014-04-02 16:18:10 +0000
+++ Backup-and-Recovery-OpenStack.md 1970-01-01 00:00:00 +0000
@@ -1,131 +0,0 @@
1Title: Backup and Recovery - OpenStack
2Status: In Progress
3
4# Backup and Recovery - OpenStack
5
6## Introduction
7
8The OpenStack flexibility makes backup and restore to a very individual process
9depending on the used components. This section describes how the critical parts
10like the configuration files and databases OpenStack needs to run are saved. As
11before for Juju it doesn't describe ho to back up the objects inside the Object
12Storage or the data inside the Block Storage.
13
14## Scope
15
16**TODO**
17
18## Backup Cloud Controller Database
19
20Like Juju the OpenStack cloud controller uses a database server which stores the
21central databases for Nova, Glance, Keystone, Cinder, and Switft. You can backup
22the five databases into one common dump:
23
24````
25$ mysqldump --opt --all-databases > openstack.sql
26````
27
28Alternatively you can backup the database for each component individually:
29
30````
31$ mysqldump --opt nova > nova.sql
32$ mysqldump --opt glance > glance.sql
33$ mysqldump --opt keystone > keystone.sql
34$ mysqldump --opt cinder > cinder.sql
35$ mysqldump --opt swift > swift.sql
36````
37
38## Backup File Systems
39
40Beside the databases OpenStack uses different directories for its configuration,
41runtime files, and logging. Like the databases they are grouped individually per
42component. This way also the backup can be done per component.
43
44### Nova
45
46You'll find the configuration directory `/etc/nova` on the cloud controller and
47each compute node. It should be regularly backed up.
48
49Another directory to backup is `/var/lib/nova`. But here you have to be careful
50with the `instances` subdirectory on the compute nodes. It contains the KVM images
51of the running instances. If you want to maintain backup copies of those instances
52you can do a backup here too. In this case make sure to not save a live KVM instance
53because it may not boot properly after restoring the backup.
54
55Third directory for the compute component is `/var/log/nova`. In case of a central
56logging server this directory does not need to be backed up. So we suggest you to
57run your environment with this kind of logging.
58
59### Glance
60
61Like for Nova you'll find the directories `/etc/glance` and `/var/log/glance`, the
62handling should be the same here too.
63
64Glance also uses the directory named `/var/lib/glance` which also should be backed
65up.
66
67### Keystone
68
69Keystone is using the directories `/etc/keystone`, `/var/lib/keystone`, and
70`/var/log/keystone`. They follow the same rules as Nova and Glance. Even if
71the `lib` directory should not contain any data being used, can also be backed
72up just in case.
73
74### Cinder
75
76Like before you'll find the directories `/etc/cinder`, `/var/log/cinder`,
77and `/var/lib/cinder`. And also here the handling should be the same. Opposite
78to Nova abd Glance there's no special handling of `/var/lib/cinder` needed.
79
80### Swift
81
82Beside the Swift configuration the directory `/etc/swift` contains the ring files
83and the ring builder files. If those get lest the data on your data gets inaccessable.
84So you can easily imagine how important it is to backup this directory. Best practise
85is to copy the builder files to the storage nodes along with the ring files. So
86multiple copies are spread throughout the cluster.
87
88**TODO(mue)** Really needed when we use Ceph for storage?
89
90## Restore
91
92The restore based on the backups is a step-by-step process restoring the components
93databases and all their directories. It's important that the component to restore is
94currently not running. So always start the restoring with stopping all components.
95
96Let's take Nova as an example. First execute
97
98````
99$ stop nova-api
100$ stop nova-cert
101$ stop nova-consoleauth
102$ stop nova-novncproxy
103$ stop nova-objectstore
104$ stop nova-scheduler
105````
106
107on the cloud controller to savely stop the processes of the component. Next step is the
108restore of the database. By using the `--opt` option during backup we ensured that all
109tables are initially dropped and there's no conflict with currently existing data in
110the databases.
111
112````
113$ mysql nova < nova.sql
114````
115
116Before restoring the directories you should move at least the configuration directoy,
117here `/etc/nova`, into a secure location in case you need to roll it back.
118
119After the database and the files are restored you can start MySQL and Nova again.
120
121````
122$ start mysql
123$ start nova-api
124$ start nova-cert
125$ start nova-consoleauth
126$ start nova-novncproxy
127$ start nova-objectstore
128$ start nova-scheduler
129````
130
131The process for the other components look similar.
1320
=== added directory 'Install'
=== added file 'Install/Installing-Ceph.md'
--- Install/Installing-Ceph.md 1970-01-01 00:00:00 +0000
+++ Install/Installing-Ceph.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,56 @@
1Title: Installing - Ceph
2Status: Review
3
4# Installing - Ceph
5
6## Introduction
7
8Typically OpenStack uses the local storage of its nodes for the configuration data
9as well as for the object storage provided by Swift and the block storage provided by
10Cinder and Glance. But it can also use Ceph as a storage backend. Ceph stripes block
11device images across a cluster. This way it provides better performance than a typical
12standalone server. It allows scalability and redundancy needs to be satisfied, and
13Cinder's RBD driver is used to create, export and connect volumes to instances.
14
15## Scope
16
17This document covers the deployment of Ceph via Juju. Other related documents are
18
19- [Scaling Ceph](Scaling-Ceph.md)
20- [Troubleshooting Ceph](Troubleshooting-Ceph.md)
21- [Appendix Ceph and OpenStack](Appendix-Ceph-and-OpenStack.md)
22
23## Deployment
24
25During the installation of OpenStack we've already seen the deployment of Ceph via
26
27```
28juju deploy --config openstack-config.yaml -n 3 ceph
29juju deploy --config openstack-config.yaml -n 10 ceph-osd
30```
31
32This will install three Ceph nodes configured with the information contained in the
33file `openstack-config.yaml`. This file contains the configuration `block-device: None`
34for Cinder, so that this component does not use the local disk but Ceph instead (see the illustrative fragment at the end of this section).
35Additionally, 10 Ceph OSD nodes providing the object storage are deployed and related
36to the Ceph nodes by
37
38```
39juju add-relation ceph-osd ceph
40```
41
42Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which
43will scan for the configured storage devices and add them to the pool of available storage.
44Now the relation to Cinder and Glance can be established with
45
46```
47juju add-relation cinder ceph
48juju add-relation glance ceph
49```
50
51so that both are using the storage provided by Ceph.
52
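For reference, the Ceph-related fragment of `openstack-config.yaml` might look like the following (an illustrative sketch; the option names come from the `ceph`, `ceph-osd` and `cinder` charms, and the values and device paths are placeholders):

```
ceph:
  fsid: <uuid generated with uuidgen>
  monitor-secret: <generated cephx key>
  osd-devices: /dev/sdb
ceph-osd:
  osd-devices: /dev/sdb /dev/sdc
cinder:
  block-device: None
```
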
53## See also
54
55- https://manage.jujucharms.com/charms/precise/ceph
56- https://manage.jujucharms.com/charms/precise/ceph-osd
057
=== added file 'Install/Installing-MAAS.md'
--- Install/Installing-MAAS.md 1970-01-01 00:00:00 +0000
+++ Install/Installing-MAAS.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,467 @@
1Title: Installing MAAS
2Status: In progress
3Notes:
4
5
6
7
8
9# Installing the MAAS software
10
11## Scope of this documentation
12
13This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For the purposes of this documentation, the following assumptions have been made:
14* You have sufficient, appropriate node hardware
15* You will be using Juju to assign workloads to MAAS
16* You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP)
17* If you have a compatible power-management system, any additional hardware required is also installed (e.g. an IPMI network).
18
19## Introducing MAAS
20
21Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.
22
23What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud.
24
25When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova.
26
27MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal.
28
29## Installing MAAS from the Cloud Archive
30
31The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of MAAS, Juju and other tools. It is highly recommended to configure this repository and use it to keep your software up to date:
32
33```
34sudo add-apt-repository cloud-archive:tools
35sudo apt-get update
36```
37
38There are several packages that comprise a MAAS install. These are:
39
40maas-region-controller:
41 Which comprises the 'control' part of the software, including the web-based user interface, the API server and the main database.
42maas-cluster-controller:
43 This includes the software required to manage a cluster of nodes, including managing DHCP and boot images.
44maas-dns:
45 This is a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes.
46maas-dhcp:
47 As for DNS, there is a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes.
48
49As a convenience, there is also a `maas` metapackage, which will install all of these components in one step.
50
51
52If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually (see [_the description of a typical setup_](https://www.filepicker.io/api/file/orientation.html#setup) for more background on how a typical hardware setup might be arranged).
53
54
55
56
57### Installing the packages
58
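The install command itself is not shown at this point in the guide; a minimal sketch, assuming you use the `maas` metapackage (or the individual packages listed above), would be:

```
# install everything on one machine via the metapackage
sudo apt-get install maas

# or pick the components individually, e.g. for a split deployment
sudo apt-get install maas-region-controller maas-cluster-controller maas-dns maas-dhcp
```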
59Once the packages are installed, the configuration for the MAAS controller will run automatically and pop up this config screen:
60
61![]( install_cluster-config.png)
62
63Here you will need to enter the hostname for where the region controller can be contacted. In many scenarios, you may be running the region controller (i.e. the web and API interface) from a different network address, for example where a server has several network interfaces.
64
65Once the configuration scripts have run you should see this message telling you that the system is ready to use:
66
67![]( install_controller-config.png)
68
69The web server is started last, so you have to accept this message before the service is run and you can access the Web interface. Then there are just a few more setup steps [_Post-Install tasks_](https://www.filepicker.io/api/file/WMGTttJT6aaLnQrEkAPv?signature=a86d0c3b4e25dba2d34633bbdc6873d9d8e6ae3cecc4672f2219fa81ee478502&policy=eyJoYW5kbGUiOiJXTUdUdHRKVDZhYUxuUXJFa0FQdiIsImV4cGlyeSI6MTM5NTE3NDE2MSwiY2FsbCI6WyJyZWFkIl19#post-install)
70
71The maas-dhcp and maas-dns packages should be installed by default. You can check whether they are installed with:
72
73```
74dpkg -l maas-dhcp maas-dns
75```
76
77If they are missing, then:
78
79```
80sudo apt-get install maas-dhcp maas-dns
81```
82
83And then proceed to the post-install setup below.
84
85If you now use a web browser to connect to the region controller, you should see that MAAS is running, but there will also be some errors on the screen:
86
87![]( install_web-init.png)
88
89The on-screen messages will tell you that there are no boot images present, and that you can't log in because there is no admin user.
90
91## Create a superuser account
92
93Once MAAS is installed, you'll need to create an administrator account:
94
95```
96sudo maas createadmin --username=root --email=MYEMAIL@EXAMPLE.COM
97```
98
99Substitute your own email address in the command above. You may also use a different username for your administrator account, but "root" is a common convention and easy to remember. The command will prompt for a password to assign to the new user.
100
101You can run this command again for any further administrator accounts you may wish to create, but you need at least one.
102
103## Import the boot images
104
105MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you will need to connect to the MAAS API using the maas-cli tool (see Appendix II below for details). Then you need to run the command:
106
107```
108maas-cli maas node-groups import-boot-images
109```
110
111(Substitute in a different profile name for 'maas' if you have called yours something else.) This will initiate the download of the required image files. Note that this may take some time depending on your network connection.
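If you have not yet created a maas-cli profile, a minimal login sketch using the command format from Appendix II (the hostname and key here are placeholders for your own values) is:

```
maas-cli login maas http://<maas-server>/MAAS/api/1.0 <api-key>
```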
112
113
114## Login to the server
115
116To check that everything is working properly, you should try to log in to the server now. Both of the error messages should be gone (it can take a few minutes for the boot image files to register) and you can see that there are currently 0 nodes attached to this controller.
117
118![]( install-login.png)
119## Configure switches on the network
120
121Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, it can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use.
122
123To alleviate this problem, you should enable [Portfast](https://www.symantec.com/business/support/index?page=content&id=HOWTO6019) for Cisco switches or its equivalent on other vendor equipment, which enables the ports to come up almost immediately.
124
125##Add an additional cluster
126
127Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
128
129Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the cluster controller software:
130
131```
132sudo add-apt-repository cloud-archive:tools
133sudo apt-get update
134sudo apt-get install maas-cluster-controller
135sudo apt-get install maas-dhcp
136```
137
138During the install process, a configuration window will appear. You merely need to type in the address of the MAAS controller API, like this:
139
140![](config-image.png)
141
142## Configure Cluster Controller(s)
143
144### Cluster acceptance
145When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as “pending,” until you manually accept them into the MAAS.
146
147To accept a cluster controller, click on the settings “cog” icon at the top right to visit the settings page:
148![](settings.png)
149You can either click on “Accept all” or click on the edit icon to edit the cluster. After clicking on the edit icon, you will see this page:
150
151![](cluster-edit.png)
152Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.”
153
154Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made.
155
156### Configuration
157MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface.
158
159As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0, which takes us to this page:
160
161![](cluster-interface-edit.png)
162Here you can select to what extent you want the cluster controller to manage the network:
163
164- DHCP only - this will run a DHCP server on your cluster.
165- DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region controller so that it can be used to look up hosts on this network by name.
166
167!!! note: You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster.
168If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.
169
170!!! note: There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any nodes. Or, if you do want to manage nodes but don’t want the cluster controller to serve DHCP, you may be able to get by without it. This is explained in Manual DHCP configuration.
171
172!!! note: A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture.
173
174## Enlisting nodes
175
176Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice-versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.
177
178### Automatic discovery
179With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down.
180
181During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
182
183To save time, you can also accept and commission all nodes from the command line. This requires that you first log in with the API key (see Appendix II below), which you can retrieve from the web interface:
184
185```
186maas-cli maas nodes accept-all
187```
188
189### Manually adding nodes
190
191If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the Nodes screen:
192![](add-node.png)
193
194Select 'Add node' and manually enter details about the node, including its MAC address. This is used to identify the node when it contacts the DHCP server.
195
196
197
198## Preparing MAAS for Juju using Simplestreams
199
200When Juju bootstraps a cloud, it needs two critical pieces of information:
201
2021. The uuid of the image to use when starting new compute instances.
2032. The URL from which to download the correct version of a tools tarball.
204
205This necessary information is stored in a json metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud, Azure, etc, no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (eg use a different Ubuntu image), can create their own metadata, after understanding a bit about how it works.
206
207The simplestreams format is used to describe related items in a structural fashion. ([See the Launchpad project lp:simplestreams for more details on implementation](https://launchpad.net/simplestreams).) Below we will discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
208
209### Basic Workflow
210
211Whether images or tools, Juju uses a search path to try and find suitable metadata. The path components (in order of lookup) are:
212
2131. User supplied location (specified by tools-metadata-url or image-metadata-url config settings).
2142. The environment's cloud storage.
2153. Provider specific locations (eg keystone endpoint if on Openstack).
2164. A web location with metadata for supported public clouds (https://streams.canonical.com).
217
218Metadata may be inline signed, or unsigned. We indicate a metadata file is signed by using the '.sjson' extension. Each location in the path is first searched for signed metadata, and if none is found, unsigned metadata is attempted before moving onto the next path location.
219
220Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (eg Openstack) cloud requires metadata to be generated using tools which ship with Juju.
221
222### Image Metadata Contents
223
224Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows:
225
226    com.ubuntu.cloud:server:<series_version>:<arch>
227For example: `com.ubuntu.cloud:server:14.04:amd64`. Non-released images (eg beta, daily etc) have product ids like:
228    com.ubuntu.cloud.daily:server:13.10:amd64
229
230The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component):
231
232    <path_url>
     |-streams
         |-v1
            |-index.(s)json
            |-product-foo.(s)json
            |-product-bar.(s)json
233
234The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path values contained in the index file.
235
236### Tools Metadata Contents

Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
237
238"com.ubuntu.juju:<series_version>:<arch>"
239
240For example:
241
242"com.ubuntu.juju:12.04:amd64"
243
244The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component). In addition, tools tarballs which Juju needs to download are also expected.
245
246    |-streams
     |   |-v1
     |       |-index.(s)json
     |       |-product-foo.(s)json
     |       |-product-bar.(s)json
     |-releases
         |-tools-abc.tar.gz
         |-tools-def.tar.gz
         |-tools-xyz.tar.gz
247
248The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever is in the index/product files.
249
250### Configuration
251
252For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run. Even for supported public clouds where all required metadata is available, the user can put their own metadata in the search path to override what is provided by the cloud.
253
254#### User specified URLs
255
256These are initially specified in the environments.yaml file (and then subsequently copied to the jenv file when the environment is bootstrapped). For images, use "image-metadata-url"; for tools, use "tools-metadata-url". The URLs can point to a world readable container/bucket in the cloud, an address served by a http server, or even a shared directory which is accessible by all node instances running in the cloud.
257
258Assume an Apache http server with base URL `https://juju-metadata`, providing access to information at `<base>/images` and `<base>/tools`. The Juju environment yaml file could have the following entries (one or both):
259
260    tools-metadata-url: https://juju-metadata/tools
    image-metadata-url: https://juju-metadata/images
261
262The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the form "file:///sharedpath".
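As an illustration, the two settings from the Apache example above would sit in environments.yaml roughly like this. This is a sketch only: the environment name and the other provider settings are placeholders, not values from this guide.

```
environments:
  my-private-cloud:
    type: openstack            # provider-specific settings elided
    # ...
    tools-metadata-url: https://juju-metadata/tools
    image-metadata-url: https://juju-metadata/images
```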
263
264#### Cloud storage
265
266If no matching metadata is found in the user specified URL, the environment's cloud storage is searched. No user configuration is required here - all Juju environments are set up with cloud storage which is used to store state information, charms etc. Cloud storage setup is provider dependent; for Amazon and Openstack clouds, the storage is defined by the "control-bucket" value; for Azure, the "storage-account-name" value is relevant.
267
268The (optional) directory structure inside the cloud storage is as follows:
269
270    |-tools
    |   |-streams
    |   |   |-v1
    |   |-releases
    |
    |-images
        |-streams
            |-v1
271
272Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa.
273
274Note that if juju bootstrap is run with the `--upload-tools` option, the tools and metadata are placed according to the above structure. That's why the tools are then available for Juju to use.
275
276#### Provider specific storage
277
278Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be created by the cloud administrator. These are defined as follows:
279
280juju-tools:
    the <path_url> value as described above in Tools Metadata Contents
product-streams:
    the <path_url> value as described above in Image Metadata Contents
281
282Other providers may similarly be able to specify locations, though the implementation will vary.
283
284#### Central web location (https://streams.canonical.com)

This is the default location used to search for image and tools metadata and is used if no matches are found earlier in any of the above locations. No user configuration is required.
285
286### Deploying private clouds

There are two main issues when deploying a private cloud:
287
2881. Image ids will be specific to the cloud.
2892. Often, outside internet access is blocked.
290
291Issue 1 means that image id metadata needs to be generated and made available.
292
293Issue 2 means that tools need to be mirrored locally to make them accessible.
294
295Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just mirror `https://streams.canonical.com/tools`. However, image metadata cannot simply be mirrored because the image ids are taken from the cloud storage provider, so this needs to be generated and validated using the commands described below.
296
297The available Juju metadata tools can be seen by using the help command:
298
299juju help metadata
300
301The overall workflow is:
302
303- Generate image metadata
304- Copy image metadata to somewhere in the metadata search path
305- Optionally, mirror tools to somewhere in the metadata search path
306- Optionally, configure tools-metadata-url and/or image-metadata-url
307
308#### Image metadata
309
310Generate image metadata using
311
312juju metadata generate-image -d <metadata_dir>
313
314As a minimum, the above command needs to know the image id to use and a directory in which to write the files.
315
316Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an environment specified with the -e option). These parameters can also be overridden on the command line.
317
318The image metadata command can be run multiple times with different regions, series, architecture, and it will keep adding to the metadata files. Once all required image ids have been added, the index and product json files can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the `image-metadata-url` setting or the cloud's storage etc.
319
320Examples:
321
3221. image-metadata-url
323
324- upload the contents of `<metadata_dir>` to `http://somelocation`
325- set image-metadata-url to `http://somelocation/images`
326
3272. Cloud storage
328
- upload the contents of `<metadata_dir>` directly to the environment's cloud storage
329To check that the image metadata is in place, use the validation command shown below. If run without parameters, it will take all required details from the current Juju environment (or as specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc. can be specified on the command line to override the values in the environment config.
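The image validation subcommand is not named above; assuming a contemporary Juju release with the metadata plugin, it is `juju metadata validate-images`:

```
# Uses the current environment's settings and prints the image id it would use
juju metadata validate-images

# Or validate against a specific environment (the -e option mentioned above);
# 'my-openstack' is a placeholder environment name
juju metadata validate-images -e my-openstack
```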
330#### Tools metadata
331
332Generally, tools and related metadata are mirrored from `https://streams.canonical.com/tools`. However, it is possible to manually generate metadata for a custom built tools tarball.
333
334First, create a tarball of the relevant tools and place it in a directory structured like this:
335
336<tools_dir>/tools/releases/
337
338Now generate relevant metadata for the tools by running the command:
339
340juju metadata generate-tools -d <tools_dir>
341
342Finally, the contents of `<tools_dir>` can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
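Putting these steps together, a minimal sketch of the tools workflow. The directory and tarball names are hypothetical, and the `juju metadata generate-tools` subcommand name is assumed from the metadata plugin commands used elsewhere in this section.

```
# Hypothetical paths and tarball name - adjust to your own build
TOOLS_DIR=$HOME/juju-tools
mkdir -p $TOOLS_DIR/tools/releases
cp juju-1.18.1-trusty-amd64.tgz $TOOLS_DIR/tools/releases/

# Generate the simplestreams metadata alongside the tarball
juju metadata generate-tools -d $TOOLS_DIR
```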
343
344Examples:
345
3461. tools-metadata-url
347
348- upload contents of the tools dir to `http://somelocation`
349- set tools-metadata-url to `http://somelocation/tools`
350
3512. Cloud storage
352
353upload the contents of `<tools_dir>` directly to the environment's cloud storage
354
355As with image metadata, the validation command is used to ensure tools are available for Juju to use:
356
357juju metadata validate-tools
358
359The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or override values as required on the command line. See `juju help metadata validate-tools` for more details.
360
361##Appendix I - Using Tags
362##Appendix II - Using the MAAS CLI
363As well as the web interface, many tasks can be performed by accessing the MAAS API directly through the maas-cli command. This section details how to login with this tool and perform some common operations.
364
365###Logging in
366Before the API will accept any commands from maas-cli, you must first log in. To do this, you need the API key which can be found in the user interface.
367
368Login to the web interface on your MAAS. Click on the username in the top right corner and select ‘Preferences’ from the menu which appears.
369
370![](maascli-prefs.png)
371A new page will load...
372
373![](maascli-key.png)
374The very first item is a list of MAAS keys. One will have already been generated when the system was installed. It’s easiest to just select all the text, copy the key (it’s quite long!) and then paste it into the commandline. The format of the login command is:
375
376```
377 maas-cli login <profile-name> <hostname> <key>
378```
379
380The profile created is an easy way of associating your credentials with any subsequent call to the API. So an example login might look like this:
381
382```
383maas-cli login maas http://10.98.0.13/MAAS/api/1.0 \
384    AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2
385```
386This creates the profile ‘maas’ and registers it with the given key at the specified API endpoint. If you omit the credentials, they will be prompted for in the console. It is also possible to use a hyphen, ‘-‘, in place of the credentials. In this case a single line will be read from stdin, stripped of any whitespace and used as the credentials, which can be useful if you are developing scripts for specific tasks. If an empty string is passed instead of the credentials, the profile will be logged in anonymously (and consequently some of the API calls will not be available).
387
388### maas-cli commands
389The maas-cli command exposes the whole API, so you can do anything you actually can do with MAAS using this command. This leaves us with a vast number of options, which are covered more fully in the complete [MAAS docs].
390
391list:
392 lists the details [name url auth-key] of all the currently logged-in profiles.
393
394login <profile> <url> <key>:
395 Logs in to the MAAS controller API at the given URL, using the key provided and
396 associates this connection with the given profile name.
397
398logout <profile>:
399 Logs out from the given profile, flushing the stored credentials.
400
401refresh:
402 Refreshes the API descriptions of all the current logged in profiles. This may become necessary for example when upgrading the maas packages to ensure the command-line options match with the API.
403
404### Useful examples
405
406Displays current status of nodes in the commissioning phase:
407```
408maas-cli maas nodes check-commissioning
409```
410
411Accept and commission all discovered nodes:
412```
413maas-cli maas nodes accept-all
414```
415
416List all known nodes:
417```
418maas-cli maas nodes list
419```
420
421Filter the list using specific key/value pairs:
422```
423maas-cli maas nodes list architecture="i386/generic"
424```
425
426Set the power parameters for an ipmi enabled node:
427```
428maas-cli maas node update <system_id> \
429 power_type="ipmi" \
430 power_parameters_power_address=192.168.22.33 \
431 power_parameters_power_user=root \
432 power_parameters_power_pass=ubuntu;
433```
434## Appendix III - Physical Zones
435
436To help you maximise fault-tolerance and performance of the services you deploy, MAAS administrators can define _physical zones_ (or just _zones_ for short), and assign nodes to them. When a user requests a node, they can ask for one that is in a specific zone, or one that is not in a specific zone.
437
438It's up to you as an administrator to decide what a physical zone should represent: it could be a server rack, a room, a data centre, machines attached to the same UPS, or a portion of your network. Zones are most useful when they represent portions of your infrastructure. But you could also use them simply to keep track of where your systems are located.
439
440Each node is in one and only one physical zone. Each MAAS instance ships with a default zone to which nodes are attached by default. If you do not need this feature, you can simply pretend it does not exist.
441
442### Applications
443
444Since you run your own MAAS, its physical zones give you more flexibility than those of a third-party hosted cloud service. That means that you get to design your zones and define what they mean. Below are some examples of how physical zones can help you get the most out of your MAAS.
445
446### Creating a Zone
447
448Only administrators can create and manage zones. To create a physical zone in the web user interface, log in as an administrator and browse to the "Zones" section in the top bar. This will take you to the zones listing page. At the bottom of the page is a button for creating a new zone:
449
450![](add-zone.png)
451
452Or to do it in the [_region-controller API_][#region-controller-api], POST your zone definition to the _"zones"_ endpoint.
453
454### Assigning Nodes to a Zone
455
456Once you have created one or more physical zones, you can set nodes' zones from the nodes listing page in the UI. Select the nodes for which you wish to set a zone, and choose "Set physical zone" from the "Bulk action" dropdown list near the top. A second dropdown list will appear, to let you select which zone you wish to set. Leave it blank to clear nodes' physical zones. Clicking "Go" will apply the change to the selected nodes.
457
458You can also set an individual node's zone on its "Edit node" page. Both ways are available in the API as well: edit an individual node through a request to the node's URI, or set the zone on multiple nodes at once by calling the operation on the endpoint.
459
460### Allocating a Node in a Zone
461
462To deploy in a particular zone, call the method in the [_region-controller API_][#region-controller-api] as before, but pass the parameter with the name of the zone. The method will allocate a node in that zone, or fail with an HTTP 409 ("conflict") error if the zone has no nodes available that match your request.
463
464Alternatively, you may want to request a node that is _not_ in a particular zone, or one that is not in any of several zones. To do that, specify the parameter to . This parameter takes a list of zone names; the allocated node will not be in any of them. Again, if that leaves no nodes available that match your request, the call will return a "conflict" error.
465
466It is possible, though not usually useful, to combine the and parameters. If your choice for is also present in , no node will ever match your request. Or if it's not, then the values will not affect the result of the call at all.
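For completeness, a hedged command-line sketch of zone-aware allocation. The `acquire` operation and the `zone`/`not_in_zone` constraint names are assumptions based on contemporary MAAS releases rather than something stated in this guide, and the zone name is illustrative.

```
# Allocate a node from a specific zone
maas-cli maas nodes acquire zone=rack1

# Allocate a node that is NOT in the given zone
maas-cli maas nodes acquire not_in_zone=rack1
```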
467
0468
=== added file 'Install/Intro.md'
--- Install/Intro.md 1970-01-01 00:00:00 +0000
+++ Install/Intro.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,28 @@
1Title: Introduction
2
3#Ubuntu Cloud Documentation
4
5## Deploying Production Grade OpenStack with MAAS, Juju and Landscape
6
7This documentation has been created to describe best practice in deploying
8a Production Grade installation of OpenStack using current Canonical
9technologies, including bare metal provisioning using MAAS, service
10orchestration with Juju and system management with Landscape.
11
12This documentation is divided into four main topics:
13
14 1. [Installing the MAAS Metal As A Service software](../installing-maas.html)
15 2. [Installing Juju and configuring it to work with MAAS](../installing-juju.html)
16 3. [Using Juju to deploy OpenStack](../installing-openstack.html)
17 4. [Deploying Landscape to manage your OpenStack cloud](../installing-landscape)
18
19Once you have an up and running OpenStack deployment, you should also read
20our [Administration Guide](../admin-intro.html) which details common tasks
21for maintenance and scaling of your service.
22
23
24## Legal notices
25
26
27
28![Canonical logo](./media/logo-canonical_no™-aubergine-hex.jpg)
029
=== added file 'Install/installing-openstack-outline.md'
--- Install/installing-openstack-outline.md 1970-01-01 00:00:00 +0000
+++ Install/installing-openstack-outline.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,395 @@
1Title: Installing OpenStack
2
3# Installing OpenStack
4
5![Openstack](../media/openstack.png)
6
7##Introduction
8
9OpenStack is a versatile, open source cloud environment equally suited to serving up public, private or hybrid clouds. Canonical is a Platinum Member of the OpenStack foundation and has been involved with the OpenStack project since its inception; the software covered in this document has been developed with the intention of providing a streamlined way to deploy and manage OpenStack installations.
10
11### Scope of this documentation
12
13The OpenStack platform is powerful and its uses diverse. This section of documentation
14is primarily concerned with deploying a 'standard' running OpenStack system using, but not limited to, Canonical components such as MAAS, Juju and Ubuntu. Where appropriate other methods and software will be mentioned.
15
16### Assumptions
17
181. Use of MAAS
19 This document is written to provide instructions on how to deploy OpenStack using MAAS for hardware provisioning. If you are not deploying directly on hardware, this method will still work, with a few alterations, assuming you have a properly configured Juju environment. The main difference will be that you will have to provide different configuration options depending on the network configuration.
20
212. Use of Juju
22 This document assumes an up to date, stable release version of Juju.
23
243. Local network configuration
25 This document assumes that you have an adequate local network configuration, including separate interfaces for access to the OpenStack cloud. Ideal networks are laid out in the [MAAS][MAAS documentation for OpenStack]
26
27## Planning an installation
28
29Before deploying any services, it is very useful to take stock of the resources available and how they are to be used. OpenStack comprises a number of interrelated services (Nova, Swift, etc) which each have differing demands in terms of hosts. For example, the Swift service, which provides object storage, has different requirements from the Nova service, which provides compute resources.
30
31The minimum requirements for each service and recommendations are laid out in the official [oog][OpenStack Operations Guide] which is available (free) in HTML or various downloadable formats. For guidance, the following minimums are recommended for Ubuntu Cloud:
32
33[insert minimum hardware spec]
34
35
36
37The recommended composition of nodes for deploying OpenStack with MAAS and Juju is that all nodes in the system should be capable of running *ANY* of the services. This is best practice for the robustness of the system, as, should any physical node fail, another can be repurposed to take its place. This obviously extends to any hardware requirements such as extra network interfaces.
38
39If for reasons of economy or otherwise you choose to use different configurations of hardware, you should note that your ability to overcome hardware failure will be reduced. It will also be necessary to target deployments to specific nodes - see the section in the MAAS documentation on tags [MAAS tags].
40
41
42###Create the OpenStack configuration file
43
44We will be using Juju charms to deploy the component parts of OpenStack. Each charm encapsulates everything required to set up a particular service. However, the individual services have many configuration options, some of which we will want to change.
45
46To make this task easier and more reproducible, we will create a separate configuration file with the relevant options for all the services. This is written in a standard YAML format.
47
48You can download the [openstack-config.yaml] file we will be using from here. It is also reproduced below:
49
50```
51keystone:
52 admin-password: openstack
53 debug: 'true'
54 log-level: DEBUG
55nova-cloud-controller:
56 network-manager: 'Neutron'
57 quantum-security-groups: 'yes'
58 neutron-external-network: Public_Network
59nova-compute:
60 enable-live-migration: 'True'
61 migration-auth-type: "none"
62 virt-type: kvm
63 #virt-type: lxc
64 enable-resize: 'True'
65quantum-gateway:
66 ext-port: 'eth1'
67 plugin: ovs
68glance:
69 ceph-osd-replication-count: 3
70cinder:
71 block-device: None
72 ceph-osd-replication-count: 3
73 overwrite: "true"
74 glance-api-version: 2
75ceph:
76 fsid: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
77 monitor-secret: AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==
78 osd-devices: /dev/sdb
79 osd-reformat: 'True'
80```
81
82For all services, we can configure the `openstack-origin` to point to an install source. In this case, we will rely on the default, which will point to the relevant sources for the Ubuntu 14.04 LTS Trusty release. Further configuration for each service is explained below:
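Should you need to point a service at a specific archive rather than the default, a hedged example of how `openstack-origin` would appear in the same YAML file (reusing the value mentioned in the cinder notes below; this guide itself relies on the default) is:

```
# Illustrative only - the default origin is used in this guide
cinder:
  openstack-origin: cloud:trusty-icehouse/updates
  block-device: None
```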
83
84####keystone
85admin-password:
86 You should set a memorable password here to be able to access OpenStack when it is deployed
87
88debug:
89 It is useful to set this to 'true' initially, to monitor the setup. This will produce more verbose messaging.
90
91log-level:
92 Similarly, setting the log-level to DEBUG means that more verbose logs can be generated. These options can be changed once the system is set up and running normally.
93
94####nova-cloud-controller
95
96network-manager:
97 'Neutron' - Other options are now deprecated.
98
99quantum-security-groups:
100 'yes'
101
102neutron-external-network:
103 Public_Network - This is an interface we will use for allowing access to the cloud, and will be defined later
104
105####nova-compute
106enable-live-migration:
107 We have set this to 'True'
108
109migration-auth-type:
110 "none"
111
112virt-type:
113 kvm
114
115enable-resize:
116 'True'
117
118####quantum-gateway
119ext-port:
120 This is where we specify the hardware interface used for the public network. Use 'eth1' or the relevant interface for your hardware.
121
plugin:
 ovs
122
123
124####glance
125
126 ceph-osd-replication-count: 3
127
128####cinder
129 openstack-origin: cloud:trusty-icehouse/updates
130 block-device: None
131 ceph-osd-replication-count: 3
132 overwrite: "true"
133 glance-api-version: 2
134
135####ceph
136
137fsid:
138 The fsid is simply a unique identifier. You can generate a suitable value by running `uuidgen` which should return a value which looks like: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
139
140monitor-secret:
141 The monitor secret is a secret string used to authenticate access. There is advice on how to generate a suitable secure secret at [ceph][the ceph website]. A typical value would be `AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==` (one way to generate both the fsid and this secret is shown in the sketch after this list).
142
143osd-devices:
144 This should point (in order of preference) to a device,partition or filename. In this case we will assume secondary device level storage located at `/dev/sdb`
145
146osd-reformat:
147 We will set this to 'True', allowing ceph to reformat the drive on provisioning.
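For reference, a sketch of one way to generate the two values above, assuming the `uuid-runtime` and `ceph-common` packages (which provide `uuidgen` and `ceph-authtool`) are installed on the machine you run this from:

```
# fsid - any unique UUID will do
uuidgen
# e.g. a51ce9ea-35cd-4639-9b5e-668625d3c1d8

# monitor-secret - generate a cephx key; use the 'key = ...' value from the output
ceph-authtool /dev/stdout --name=mon. --gen-key
```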
148
149
150##Deploying OpenStack with Juju
151Now that the configuration is defined, we can use Juju to deploy and relate the services.
152
153###Initialising Juju
154Juju requires a minimal amount of setup. Here we assume it has already been configured to work with your MAAS cluster (see the [juju_install][Juju Install Guide] for more information on this).
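For reference, a minimal MAAS section of `environments.yaml` might look something like the sketch below. The key names follow the Juju 1.x MAAS provider and the values are placeholders; consult the Juju Install Guide referenced above for the authoritative settings.

```
environments:
  maas:
    type: maas
    maas-server: 'http://<maas-ip>/MAAS/'
    maas-oauth: '<MAAS-API-KEY>'
    admin-secret: <a-secret-password>
    default-series: trusty
```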
155
156Firstly, we need to fetch images and tools that Juju will use:
157```
158juju sync-tools --debug
159```
160Then we can create the bootstrap instance:
161
162```
163juju bootstrap --upload-tools --debug
164```
165We use the upload-tools switch to use the local versions of the tools which we just fetched. The debug switch will give verbose output which can be useful. This process may take a few minutes, as Juju is creating an instance and installing the tools. When it has finished, you can check the status of the system with the command:
166```
167juju status
168```
169This should return something like:
170```
171---------- example
172```
173### Deploy the OpenStack Charms
174
175Now that the Juju bootstrap node is up and running we can deploy the services required to make our OpenStack installation. To configure these services properly as they are deployed, we will make use of the configuration file we defined earlier, by passing it along with the `--config` switch with each deploy command. Substitute in the name and path of your config file if different.
176
177It is useful but not essential to deploy the services in the order below. It is also highly recommended to open an additional terminal window and run the command `juju debug-log`. This will output the logs of all the services as they run, and can be useful for troubleshooting.
178
179It is also recommended to run a `juju status` command periodically, to check that each service has been installed and is running properly. If you see any errors, please consult the [troubleshooting][troubleshooting section below].
180
181```
182juju deploy --to=0 juju-gui
183juju deploy rabbitmq-server
184juju deploy mysql
185juju deploy --config openstack-config.yaml openstack-dashboard
186juju deploy --config openstack-config.yaml keystone
187juju deploy --config openstack-config.yaml ceph -n 3
188juju deploy --config openstack-config.yaml nova-compute -n 3
189juju deploy --config openstack-config.yaml quantum-gateway
190juju deploy --config openstack-config.yaml cinder
191juju deploy --config openstack-config.yaml nova-cloud-controller
192juju deploy --config openstack-config.yaml glance
193juju deploy --config openstack-config.yaml ceph-radosgw
194```
195
196
197### Add relations between the OpenStack services
198
199Although the services are now deployed, they are not yet connected together. Each service currently exists in isolation. We use the `juju add-relation` command to make them aware of each other and set up any relevant connections and protocols. This extra configuration is taken care of by the individual charms themselves.
200
201
202We should start adding relations between charms by setting up the Keystone authorization service and its database, as this will be needed by many of the other connections:
203
204juju add-relation keystone mysql
205
206We wait until the relation is set. After it finishes, check it with juju status:
207
208```
209juju status mysql
210juju status keystone
211```
212
213It can take a few moments for this service to settle. Although it is certainly possible to continue adding relations (Juju manages a queue for pending actions) it can be counterproductive in terms of the overall time taken, as many of the relations refer to the same services.
214The following relations also need to be made:
215```
216juju add-relation nova-cloud-controller mysql
217juju add-relation nova-cloud-controller rabbitmq-server
218juju add-relation nova-cloud-controller glance
219juju add-relation nova-cloud-controller keystone
220juju add-relation nova-compute mysql
221juju add-relation nova-compute rabbitmq-server
222juju add-relation nova-compute glance
223juju add-relation nova-compute nova-cloud-controller
224juju add-relation glance mysql
225juju add-relation glance keystone
226juju add-relation cinder keystone
227juju add-relation cinder mysql
228juju add-relation cinder rabbitmq-server
229juju add-relation cinder nova-cloud-controller
230juju add-relation openstack-dashboard keystone
231juju add-relation swift-proxy swift-storage
232juju add-relation swift-proxy keystone
233```
234Finally, the output of juju status should show all the relations as complete. The OpenStack cloud is now running, but it needs to be populated with some additional components before it is ready for use.
235
236
237
238
239##Preparing OpenStack for use
240
241###Configuring access to Openstack
242
243
244
245The configuration data for OpenStack can be fetched by reading the configuration file generated by the Keystone service. You can also copy this information by logging in to the Horizon (OpenStack Dashboard) service and examining the configuration there. However, we actually need only a few bits of information. The following bash script can be run to extract the relevant information:
246
247```
248#!/bin/bash
249
250set -e
251
252KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | xargs host | grep -v alias | awk '{ print $4 }'`
253KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 "sudo cat /etc/keystone/keystone.conf | grep admin_token" | sed -e '/^M/d' -e 's/.$//' | awk '{ print $3 }'`
254
255echo "Keystone IP: [${KEYSTONE_IP}]"
256echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]"
257
258cat << EOF > ./nova.rc
259export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/
260export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN}
261export OS_AUTH_URL=http://${KEYSTONE_IP}:35357/v2.0/
262export OS_USERNAME=admin
263export OS_PASSWORD=openstack
264export OS_TENANT_NAME=admin
265EOF
266
267juju scp ./nova.rc nova-cloud-controller/0:~
268```
269This script extracts the required information and then copies the file to the instance running the nova-cloud-controller.
270Before we run any nova or glance commands we will load the file we just created:
271
272```
273$ source ./nova.rc
274$ nova endpoints
275```
276
277At this point the output of nova endpoints should show the information of all the available OpenStack endpoints.
278
279### Install the Ubuntu Cloud Image
280
281In order for OpenStack to create instances in its cloud, it needs to have access to relevant images:
282$ mkdir ~/iso
283$ cd ~/iso
284$ wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
285
286###Import the Ubuntu Cloud Image into Glance
287!!! note: glance comes with the package glance-client, which may need to be installed on the machine where you plan to run the command from.
288
289```
290apt-get install glance-client
291glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img
292```
293###Create OpenStack private network
294Note: nova-manage can be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the node we run the following command:
295
296```
297juju ssh nova-cloud-controller/0
298
299sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=T --bridge_interface=eth0 --bridge=br100
300```
301
302To make sure that we have created the network we can now run the following command:
303
304```
305sudo nova-manage network list
306```
307
308### Create OpenStack public network
309```
310sudo nova-manage floating create --ip_range=1.1.21.64/26
311sudo nova-manage floating list
312```
313Allow ping and ssh access by adding them to the default security group.
314Note: The following commands are run from a machine where we have the package python-novaclient installed and within a session where we have loaded the above created nova.rc file.
315
316```
317nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
318nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
319```
320
321###Create and register the ssh keys in OpenStack
322Generate a default keypair
323```
324ssh-keygen -t rsa -f ~/.ssh/admin-key
325```
326###Copy the public key into Nova
327We will name it admin-key:
328Note: In the precise version of python-novaclient the command works with --pub_key instead of --pub-key
329
330```
331nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key
332```
333And make sure it’s been successfully created:
334```
335nova keypair-list
336```
337
338###Create a test instance
339We created an image with glance before. Now we need the image ID to start our first instance. The ID can be found with this command:
340```
341nova image-list
342```
343
344Note: we can also use the command glance image-list
345###Boot the instance:
346
347```
348nova boot --flavor=m1.small --image=<image_id_from_glance_index> --key-name admin-key testserver1
349```
350
351###Add a floating IP to the new instance
352First we allocate a floating IP from the ones we created above:
353
354```
355nova floating-ip-create
356```
357
358Then we associate the floating IP obtained above to the new instance:
359
360```
361nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65
362```
363
364
365### Create and attach a Cinder volume to the instance
366Note: All these steps can be also done through the Horizon Web UI
367
368We make sure that cinder works by creating a 1GB volume and attaching it to the VM:
369
370```
371cinder create --display_name test-cinder1 1
372```
373
374Get the ID of the volume with cinder list:
375
376```
377cinder list
378```
379
380Attach it to the VM as vdb
381
382```
383nova volume-attach test-server1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb
384```
385
386Now we should be able to ssh to the VM test-server1 from a server holding the private key we created above, and see that vdb appears in /proc/partitions.
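A quick check, run from the machine that holds the private key. The floating IP is the one associated above, and the default user on Ubuntu cloud images is `ubuntu`:

```
ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65 cat /proc/partitions
```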
387
388
389
390
391[troubleshooting]
392[oog](http://docs.openstack.org/ops/)
393[MAAS tags]
394[openstack-config.yaml]
395[ceph](http://ceph.com/docs/master/dev/mon-bootstrap/)
0396
=== added file 'Install/landcsape.md'
--- Install/landcsape.md 1970-01-01 00:00:00 +0000
+++ Install/landcsape.md 2014-04-15 16:06:33 +0000
@@ -0,0 +1,909 @@
1Title: Landscape
2#Managing OpenStack with Landscape
3
4##About Landscape
5Landscape is a system management tool designed to let you easily manage multiple Ubuntu systems - up to 40,000 with a single Landscape instance. From a single dashboard you can apply package updates and perform other administrative tasks on many machines. You can categorize machines by group, and manage each group separately. You can make changes to targeted machines even when they are offline; the changes will be applied next time they start. Landscape lets you create scripts to automate routine work such as starting and stopping services and performing backups. It lets you use both common Ubuntu repositories and any custom repositories you may create for your own computers. Landscape is particularly adept at security updates; it can highlight newly available packages that involve security fixes so they can be applied quickly. You can use Landscape as a hosted service as part of Ubuntu Advantage, or run it on premises via Landscape Dedicated Server.
6
7##Ubuntu Advantage
8Ubuntu Advantage comprises systems management tools, technical support, access to online resources and support engineers, training, and legal assurance to keep organizations on top of their Ubuntu server, desktop, and cloud deployments. Advantage provides subscriptions at various support levels to help organizations maintain the level of support they need.
9
10
11
12
13
14##Access groups
15
16
17
18
19Landscape lets administrators limit administrative rights on computers
20by assigning them to logical groupings called access groups. Each
21computer can be in only one access group, but you can organize access
22groups hierarchically to mirror the organization of your business. In
23addition to computers, access groups can contain package profiles,
24scripts, and custom graphs.
25
26Creating access groups
27----------------------
28
29A new Landscape installation comes with a single access group, called
30global, which gives any administrators who are associated with roles
31that include that access group control over every computer managed by
32Landscape. Most organizations will want to subdivide administration
33responsibilities by creating logical groupings of computers. You can
34create new access groups from the ACCESS GROUPS menu under your account
35menu.
36
37**Figure 5.1.**
38
39![image](./Chapter%A05.%A0Access%20groups_files/accessgroups1.png)
40
41\
42
43To create a new access group, you must provide two pieces of
44information: a title for the access group and a parent.
45
46To start with, the parent must be the global access group. If you want a
47flat management hierarchy, you can make every access group a child of
48global. Alternatively, you can use parent/child relationships to create
49a hierarchy of access groups. For instance, you could specify different
50sites at a high level, and under them individual buildings, and finally
51individual departments. Such a hierarchy allows you to specify groups of
52computers to be managed together by one administrator. Administrators
53whose roles are associated with higher-level access groups can manage
54all subgroups of which their access group is a parent.
55
56When a new access group is first created, its administrators are those
57who have roles linked to its parent access group, but you can edit the
58roles associated with an access group. To change the roles associated
59with an access group, see
60[below](https://landscape.canonical.com/static/doc/user-guide/ch05.html#associatingadmins "Associating roles with access groups").
61
62Adding computers to access groups
63---------------------------------
64
65To see all the computers currently in an access group, click on the name
66of the group in the ACCESS GROUPS screen. The screen that then appears
67displays information about that group. On the right side of the screen,
68click the word "computers" to show the list of computers that are
69currently members of this access group.
70
71**Figure 5.2.**
72
73![image](./Chapter%A05.%A0Access%20groups_files/accessgroups2.png)
74
75\
76Alternatively, you can click on the COMPUTERS menu item at the top of
77the Landscape screen, and in the selection box at the top of the left
78column, enter `access-group:` followed by the name of your
79access group: for instance, `access-group:stagingservers`.
80
81To add computers to an access group, click on the COMPUTERS menu item at
82the top of the Landscape screen. The resulting INFO screen shows the
83total number of available computers being managed by Landscape, and the
84number of computers currently selected:
85
86**Figure 5.3.**
87
88![image](./Chapter%A05.%A0Access%20groups_files/accessgroups3.png)
89
90\
91Find computers you wish to include (see the documentation on [selecting
92computers](https://landscape.canonical.com/static/doc/user-guide/ch06.html#selectingcomputers "Selecting computers")),
93then tick the checkbox next to each computer you wish to select. Once
94you've made your selection, click on the INFO menu entry at the top of
95the page. Scroll down to the bottom section, choose the access group you
96want from the drop-down list, then click Update access group.
97
98**Figure 5.4.**
99
100![image](./Chapter%A05.%A0Access%20groups_files/accessgroups4.png)
101
102\
103
104Associating roles with access groups
105------------------------------------
106
107An administrator may manage an access group if he is associated with a
108role that has permission to do so. To associate a role with one or more
109access groups, click on the ROLES menu item under your account to
110display a screen that shows a role membership matrix.
111
112**Figure 5.5.**
113
114![image](./Chapter%A05.%A0Access%20groups_files/accessgroups5.png)
115
116\
117The top of that screen shows a list of role names. Click on a role name
118to edit the permissions and access groups associated with that role.
119Note that you cannot modify the GlobalAdmin role, so there is no link
120associated with that label at the top of the matrix.
121
122Editing access groups
123---------------------
124
125To change the name or title of an existing access group, click on the
126name of the group in the ACCESS GROUPS screen, then click on the Edit
127access group link at the top of the next screen. Make changes, then click
128Save.
129
130**Figure 5.6.**
131
132![image](./Chapter%A05.%A0Access%20groups_files/accessgroups6.png)
133
134\
135
136Deleting access groups
137----------------------
138
139To delete an existing access group, click on the name of the group in
140the ACCESS GROUPS screen, then click on the Edit access group link at
141the top of the next screen. On the resulting screen, click the Delete
142button. You may Confirm the group's deletion, or you can click Cancel to
143abort the operation. When you delete an access group, its resources move
144to its parent access group.
145
146**Figure 5.7.**
147
148
149
150##Managing computers
151
152
153Provisioning new computers
154--------------------------
155
156Landscape can provision computers in two ways: manually, or via metal as
157a service (MAAS). [The Ubuntu wiki explains how to set up
158MAAS](https://wiki.ubuntu.com/ServerTeam/MAAS/).
159
160To manually provision computers, click on PROVISIONING under your
161ACCOUNT menu. Landscape displays a provisioning dashboard that shows the
162number of provisioning servers you have set up, managed systems, and
163pending systems.
164
165**Figure 6.1.**
166
167![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers1.png)
168
169\
170
171To provision new systems, click the Provision new systems link. On the
172Provisioning New Systems screen, the top three fields apply to all the
173computers you wish to provision at one time.
174
175**Figure 6.2.**
176
177![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers2.png)
178
179\
180Enter the Ubuntu release/architecture from a drop-down list; the
181available choices are the two hardware architectures (i386 and amd64)
182for each Ubuntu release beginning with 12.04. Enter the access group
183to which the new systems should belong from a drop-down list of the
184access groups set up for your account. You can optionally enter user
185data, which Landscape can use for special processing. For instance, you
186could use this field with Ubuntu's
187[cloud-init](https://help.ubuntu.com/community/CloudInit) utility, which
188handles early initialization functions for a cloud instance.
189
190For each computer you wish to provision, enter its MAC address,
191hostname, an optional title that will be displayed on the computer
192listing screen after the computer is set up, and optional tags separated
193by commas that can later help you search for this computer. Click the
194Add more systems link to get a new line of empty boxes into which you
195can add data.
196
197When you click the Next button, Landscape displays a screen that lets
198you review the information you entered.
199
200**Figure 6.3.**
201
202![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers3.png)
203
204\
205You can click on Back to make changes, or Provision to perform the
206operation. Landscape then displays a status screen that at first shows
207the specified computers waiting to boot on the MAAS server.
208
209**Figure 6.4.**
210
211![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers4.png)
212
213\
214
215Registering computers
216---------------------
217
218If a computer is provisioned by Landscape, it is automatically
219registered with Landscape, but when you first install Landscape, your
220computers are not known to the Landscape server. To manage them, you
221must register them with the server. Complete instructions for
222registering client computers with a Landscape server are available at
223https://yourserver/standalone/how-to-register. You can get to this page
224by first clicking on the menu item for your account page on the top
225menu, then on the link in the box on the left side of the page.
226
227Selecting computers
228-------------------
229
230You can select one or more computers individually, or by using searches
231or tags. For each of those approaches, the starting place is the
232COMPUTERS menu entry at the top of the screen. Clicking on it displays a
233list of all computers Landscape knows about.
234
235- To select computers individually, tick the boxes beside their names
236 in the Select computers list.
237
238- Using searches - The upper left corner of the Select computers
239 screen displays the instructions "Refine your selection by searching
240 or selecting from the tags below," followed by a search box. You can
241 enter any string in that box and press Enter, or click the arrow
242 next to the box. Landscape will search both the name and hostname
243 associated with all computers for a match with the search term.
244 Searches are not case-sensitive. A list of matching computers is
245 displayed on the right side of the screen.
246
247 Once you've selected a group of computers, you can apply a tag to
248 them to make it easier to find them again. To do so, with your
249 computers selected, click on INFO under COMPUTERS. In the box under
250 Tags:, enter the tag you want to use and click Add.
251
252- Using tags - Any tags you have already created appear in a list
253 under the search box on the left of the Computers screens. You can
254 click on any tag to display the list of computers associated with
255 it. To select any of the displayed computers, tick the box next to
256 its name, or click the Select: All link at the top of the list.
257
258Information about computers
259---------------------------
260
261By clicking on several submenus of the COMPUTERS menu, you can get
262information about selected computers.
263
264- Clicking on ACTIVITIES displays information about actions that may
265 be applied to computers. You can filter the activity log to show
266 All, Pending, Unapproved, or Failed activities. You can click on
267 each activity in the list to display a screen showing details about
268 the activity. On that screen you can Approve, Cancel, Undo, or Redo
269 the activity by clicking on the relevant button.
270
271- Clicking on HARDWARE displays information about the selected
272 computer's processor, memory, network, storage, audio, video, PCI,
273 and USB hardware, as well as BIOS information and CPU flags.
274
275 **Figure 6.5.**
276
277 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers5.png)
278
279 \
280
281- Clicking on PROCESSES displays information about all processes
282 running on a computer at the last time it checked in with the
283 Landscape server, and lets you end or kill processes by selecting
284 them and clicking on the relevant buttons.
285
286 **Figure 6.6.**
287
288 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers6.png)
289
290 \
291
292- Clicking on REPORTS displays seven pie charts that show what
293 percentage of computers:
294
295 - are securely patched
296
297 - are covered by upgrade profiles
298
299 - have contacted the server within the last five minutes
300
301 - have applied security updates - four charts show computers that
302 have applied Ubuntu Security Notices within the last two, 14,
303 30, and 60+ days
304
305- Clicking on MONITORING displays graphs of key performance
306 statistics, such as CPU load, memory use, disk use, and network
307 traffic.
308
309 **Figure 6.7.**
310
311 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers7.png)
312
313 \
314 You can also create custom graphs to display at the top of the page
315 by clicking on the Create some now! link. A drop-down box at the top
316 of the page lets you specify the timeframe the graph data covers:
317 one day, three days, one week, or four weeks. You can download the
318 data behind each graph by clicking the relevant button under the
319 graph.
320
321The activity log
322----------------
323
324The right side of the dashboard that displays when you click on your
325account menu, and when you click on the ACTIVITIES submenu, shows the
326status of Landscape activities, displayed in reverse chronological
327order.
328
329**Figure 6.8.**
330
331![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers8.png)
332
333\
334You can view details on an individual activity by clicking on its
335description. Each activity is labeled with a status; possible values
336are:
337
338- Succeeded
339
340- In progress
341
342- Scheduled
343
344- Queued
345
346- Unapproved
347
348- Canceled
349
350- Failed
351
352You can select a subset to view by clicking on the links above the table
353for All, Pending, Unapproved, or Failed activities.
354
355In addition to the status and description of each activity, the table
356shows what computers the activity applied to, who created it, and when.
357
358Managing users
359--------------
360
361Clicking on USERS displays a list of users on each of the selected
362computers.
363
364**Figure 6.9.**
365
366![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers9.png)
367
368\
369You can select one or more users, then click one of the buttons at the
370top of the screen:
371
372- The ADD button lets you add a new user to the selected computers.
373
374 **Figure 6.10.**
375
376 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers10.png)
377
378 \
379 You must specify the person's name, a username, and a passphrase.
380 You may also specify a location and telephone numbers. Click the ADD
381 button at the bottom of the screen to complete the operation.
382
383- The DELETE button displays a screen that lets you delete the
384 selected users.
385
386 **Figure 6.11.**
387
388 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers11.png)
389
390 \
391 You may also tick a checkbox to delete the user's home folders as
392 well. Press the Delete button at the bottom of the screen to
393 complete the operation.
394
395- The EDIT button displays a User details screen that lets you change
396 details such as the person's name, primary group, passphrase,
397 location, and telephone numbers, and add or remove the user from
398 groups on the selected computers.
399
400 **Figure 6.12.**
401
402 ![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers12.png)
403
404 \
405
406- The LOCK button prevents the selected users from logging into their
407 accounts.
408
409- The UNLOCK button unlocks previously locked accounts, allowing
410 those users to log in again.
412
413Managing alerts
414---------------
415
416Landscape uses alerts to notify administrators of conditions that
417require attention. The following types of alerts are available:
418
419- when a pending computer needs to be accepted or rejected
420
421- when you are exceeding your license entitlements for Landscape
422 Dedicated Server (This alert does not apply to the hosted version of
423 Landscape.)
424
425- when new package updates are available for computers
426
427- when new security updates are available for computers
428
429- when a package profile is not applied
430
431- when package reporting fails (Each client runs the command **apt-get
432 update** every 60 minutes. Anything that prevents that command from
433 succeeding is considered a package reporting failure.)
434
435- when an activity requires explicit administrator acceptance or
436 rejection
437
438- when a computer has not contacted the Landscape server for more than
439 five minutes
440
441- when computers need to be rebooted in order for a package update
442 (such as a kernel update) to take effect
443
444To configure alerts, click on the Configure alerts link in the
445dashboard, or click on your account's ALERTS menu item. Tick the check
446box next to each type of alert you want to subscribe to, or click the
447All or None buttons at the top of the table, then click on the Subscribe
448or Unsubscribe button below the table.
449
450**Figure 6.13.**
451
452![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers13.png)
453
454\
455
456The Alerts screen shows the status of each alert. If an alert has not
457been tripped, the status is OK; if it has, the status is Alerted. The
458last column notes whether the alert applies to your account (pending
459computers, for instance, are not yet Landscape clients, but they are
460part of your account), to all computers, or to a specified set of tagged
461computers.
462
463If an alert is tripped, chances are an administrator should investigate
464it. You can see alerts on the account dashboard that displays when you
465click on your account name on the top menu. The description for each
466alert is a link; click on it to see a table of alerts. When you click on
467an alert, the resulting screen shows relevant information about the
468problem. For instance, if you click on an alert about computers having
469issues reporting packages, the table shows the computer affected, the
470error code, and error output text.
471
472**Figure 6.14.**
473
474![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers14.png)
475
476\
477On some alert screens you can download the list of affected computers as
478a CSV file or save the criteria that generated the alert as a saved
479search by clicking the relevant button at the bottom of the screen.
480
481Managing scripts
482----------------
483
484Landscape lets you run scripts on the computers you manage in your
485account. The scripts may be in any language, as long as an interpreter
486for that language is present on the computers on which they are to run.
487You can maintain a library of scripts for common tasks. You can manage
488scripts from the STORED SCRIPTS menu under your account, and run them
489against computers from the SCRIPTS menu under COMPUTERS.
490
491The Stored scripts screen displays a list of existing scripts, along
492with the access groups each belongs to and its creator.
493
494**Figure 6.15.**
495
496![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers15.png)
497
498\
499You can edit a script by clicking on its name. To delete a stored
500script, tick the check box next to its name, then click Remove. If you
501have the proper permissions, Landscape erases the script immediately
502without asking for confirmation.
503
504From the Stored scripts screen you can add a new script by clicking on
505Add stored script.
506
507**Figure 6.16.**
508
509![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers16.png)
510
511\
512On the Create script screen you must enter a title, interpreter, the
513script code, the time within which the script must complete, and the
514access group to which the script belongs. You may enter a default user
515to run the script as; if you don't, you will have to specify the user
516when you choose to run the script. You may also attach as many as five
517files with a maximum of 1MB in total size. On each computer on which a
518script runs, attachments are placed in the directory specified by the
519environment variable LANDSCAPE\_ATTACHMENTS, and are deleted once the
520script has been run. After specifying all the information for a stored
521script, click on Save to save it.
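
As a sketch of how a stored script might use an attachment (interpreter /bin/sh;
the attachment and target file names are hypothetical):

~~~~ {.programlisting}
#!/bin/sh
# Landscape places attachments in the directory named by LANDSCAPE_ATTACHMENTS
ls -l "$LANDSCAPE_ATTACHMENTS"
# Copy a hypothetical attached configuration file into place
cp "$LANDSCAPE_ATTACHMENTS/myapp.conf" /etc/myapp.conf
~~~~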
522
523To run a stored script, go to the SCRIPTS menu under COMPUTERS. Here you
524can choose to run a stored script, or run a new script.
525
526**Figure 6.17.**
527
528![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers17.png)
529
530\
531When you choose to run an existing script, Landscape displays the script
532details, which allows you to modify any information. You must specify
533the user on the target computers to run the script as, and schedule the
534script to run either as soon as possible, or at a specified time. When
535you're ready to run the script, click on Run.
536
537To run a new script, you must enter most of the same information you
538would if you were creating a stored script, with three differences.
539
540**Figure 6.18.**
541
542![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers18.png)
543
544\
545On this screen you must specify the user on the target computers to run
546the script as, and you may optionally tick a check box to store the
547script in your script library. You must also schedule the script to run
548either as soon as possible, or at a specified time. When you're ready to
549run the script, click on Run.
550
551Managing upgrade profiles
552-------------------------
553
554An upgrade profile defines a schedule for the times when upgrades are to
555be automatically installed on the machines associated with a specific
556access group. You can associate zero or more computers with each upgrade
557profile via tags to install packages on those computers. You can also
558associate an upgrade profile with an access group, which limits its use
559to only computers within the specified access group. You can manage
560upgrade profiles from the UPGRADE PROFILES link in the PROFILES choice
561under your account.
562
563When you do so, Landscape displays a list of the names and descriptions
564of existing upgrade profiles.
565
566**Figure 6.19.**
567
568![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers19.png)
569
570\
571To see the details of an existing profile, click on its name to display
572a screen that shows the name, schedule, and tags of computers associated
573with the upgrade profile. If you want to change the upgrade profile's
574name or schedule, click on the Edit upgrade profile link. If you want to
575change the computers associated with the upgrade profile, tick or untick
576the check boxes next to the tags on the lower part of the screen, then
577click on the Change button. Though you can see the access group
578associated with the upgrade profile, you cannot change the access groups
579anywhere but from their association with a computer.
580
581To add an upgrade profile, click on the Add upgrade profile link.
582
583**Figure 6.20.**
584
585![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers20.png)
586
587\
588On the resulting Create an upgrade profile screen you must enter a name
589for the upgrade profile. Names can contain only letters, numbers, and
590hyphens. You may check a box to make the upgrade profile apply only to
591security upgrades; if you leave it unchecked, it will target all
592upgrades. Specify the access group to which the upgrade profile belongs
593from a drop-down list. Finally, specify the schedule on which the
594upgrade profile can run. You can specify a number of hours to let the
595upgrade profile run; if it does not complete successfully in that time,
596Landscape will trigger an alert. Click on the Save button to save the
597new upgrade profile.
598
599To delete one or more upgrade profiles, tick a check box next to the
600upgrade profiles' names, then click on the Remove button.
601
602Managing removal profiles
603-------------------------
604
605A removal profile defines a maximum number of days that a computer can
606go without exchanging data with the Landscape server before it is
607automatically removed. If more days pass than the profile's "Days
608without exchange", that computer will automatically be removed and the
609license seat it held will be released. This helps Landscape keep license
610seats open and ensure Landscape is not tracking stale or retired
611computer data for long periods of time. You can associate zero or more
612computers with each removal profile via tags to ensure those computers
613are governed by this removal profile. You can also associate a removal
614profile with an access group, which limits its use to only computers
615within the specified access group. You can manage removal profiles from
616the REMOVAL PROFILES link in the PROFILES choice under your account.
617
618When you do so, Landscape displays a list of the names and descriptions
619of existing removal profiles.
620
621**Figure 6.21.**
622
623![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers21.png)
624
625\
626To see the details of an existing profile, click on its name to display
627a screen that shows the title, name and number of days without exchange
628before the computer is automatically removed, and tags of computers
629associated with the removal profile. If you want to change the removal
630profile's title or number of days before removal, click on the Edit
631removal profile link. If you want to change the computers associated
632with the removal profile, tick or untick the check boxes next to the
633tags on the lower part of the screen, then click on the Change button.
634Though you can see the access group associated with the removal profile,
635you cannot change the access groups anywhere but from their association
636with a computer.
637
638To add a removal profile, click on the Add removal profile link.
639
640**Figure 6.22.**
641
642![image](./Chapter%A06.%A0Managing%20computers_files/managecomputers22.png)
643
644\
645On the resulting Create a removal profile screen you must enter a title
646for the removal profile. Specify the access group to which the removal
647profile belongs from a drop-down list. Finally, specify the number of
648days without exchange that computers will be allowed without contact
649before they are automatically removed and their license seat is
650released. If a computer does not contact Landscape within that number of
651days, it will subsequently be removed. Click on the Save button to save
652the new removal profile.
653
654To delete one or more removal profiles, tick a check box next to the
655removal profiles' names, then click on the Remove button.
656
657##Managing packages
658
659
660A package is a group of related files that comprise everything you need
661to install an application. Packages are stored in repositories, and each
662package is managed via a package profile, which is a record of the
663package's dependencies and conflicts.
664
665Package information
666-------------------
667
668Clicking on PACKAGES under the COMPUTERS menu displays a screen where
669you can search for information about all the packages Landscape knows
670about. You may first specify a package name or other search string, then
671press Enter or click on the arrow next to the box. Landscape then
672displays a list of packages that meet the search criteria.
673
674**Figure 7.1.**
675
676![image](./Chapter%A07.%A0Managing%20packages_files/managepackages1.png)
677
678\
679The top of the screen displays summary information about the packages:
680clickable links to which computers have security updates and other
681upgrades to be installed, and the number of computers that are
682up-to-date and those that have not reported package information.
683
684The next section provides a list of security issues on computers that
685need security updates. You can click on the name or USN number of a
686security issue to see a full Ubuntu Security Notice.
687
688**Figure 7.2.**
689
690![image](./Chapter%A07.%A0Managing%20packages_files/managepackages2.png)
691
692\
693The third section displays package information in the form of four
694numbers for each selected computer: the number of packages available and
695installed, pending upgrades, and held upgrades. You can click on the
696number of pending or held upgrades to see a screen that lets you modify
697the relevant package list and set a time for the upgrades to take place:
698
699**Figure 7.3.**
700
701![image](./Chapter%A07.%A0Managing%20packages_files/managepackages3.png)
702
703
704Finally, a Request upgrades button at the bottom of the screen lets you
705quickly request that all possible upgrades be applied to the selected
706computers. Any resulting activities require explicit administrator
707approval.
708
709Adding a package profile
710------------------------
711
712Landscape uses package profiles (also called meta packages) to make sure
713the proper software is installed when you request packages. You can
714think of a package profile as a package with no file contents, just
715dependencies and conflicts. With that information, the package profile
716can trigger the installation of other packages necessary for the
717requested package to run, or trigger the removal of software that
718conflicts with the requested package. These dependencies and conflicts
719fall under the general category of constraints.
720
721To manage package profiles, click the PROFILES menu entry under your
722account and the Package profiles link. The Package profiles screen
723displays a list of existing package profiles and a link that you can
724click to add a new package profile.
725
726**Figure 7.4.**
727
728![image](./Chapter%A07.%A0Managing%20packages_files/managepackages4.png)
729
730\
731Click on that link to display the Create package profile screen:
732
733**Figure 7.5.**
734
735![image](./Chapter%A07.%A0Managing%20packages_files/managepackages5.png)
736
737\
738Here you enter a name for the package profile, a description (which
739appears at the top of the package profile's information screen), the
740access group to which the package profile should belong, and,
741optionally, any package constraints - packages that this profile depends
742on or conflicts with. The constraints drop-down list lets you add
743constraints in three ways: based on a computer's installed packages,
744imported from a previously exported CSV file or the output of the **dpkg
745--get-selections** command, or manually added. Use the first option if
746you want to replicate one computer to another, as it makes all currently
747installed packages that are on the selected computer dependencies of the
748package profile you're creating. The second approach imports the
749dependencies of a previously exported package profile. The manual
750approach is suitable when you have few dependencies to add, all of which
751you know.
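
For the import route, a typical workflow (the output filename is arbitrary) is to
capture the selections on a reference machine and then upload the resulting file on
the Create package profile screen:

~~~~ {.programlisting}
# On the reference machine: record which packages are currently selected
dpkg --get-selections > reference-selections.txt
~~~~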
752
753When you save a package profile, behind the scenes Landscape creates a
754Debian package with the specified dependencies and conflicts and gives
755it a name and a version. Every time you change the package profile,
756Landscape increments the version by one.
757
758If Landscape finds computers on which the package profile should be
759installed, it creates an activity to do so. That activity will run
760unattended, except that you must provide explicit administrator approval
761to remove any packages that the package profile wants to delete.
762
763Exporting a package profile
764---------------------------
765
766You can export a package profile in order to reuse its constraints
767in a new package profile. To export a package profile,
768click the PROFILES menu entry under your account and the Package
769profiles link. Tick the check box next to the package profiles you want to
770export, then click Download as CSV.
771
772Modifying a package profile
773---------------------------
774
775To modify a package profile, click the PROFILES menu entry under your
776account and the Package profiles link, then click on the name of a
777package profile in the list.
778
779Deleting a package profile
780--------------------------
781
782To delete a package profile, click the PROFILES menu entry under your
783account and then the Package profiles link. Tick the check box next to
784the package profiles you want to delete, then click Remove. The package profile
785is deleted immediately, with no prompt to confirm the action.
786
787Repositories
788------------
789
790Packages are stored in repositories. A repository is simply a designated
791location that stores packages. You can manage Landscape repositories
792only via [the Landscape
793API](https://landscape.canonical.com/static/doc/user-guide/ch09.html "Chapter 9. The Landscape API").
794
795
796
797
803
804##Use cases
806
807
808You can use Landscape to perform many common system administration tasks
809easily and automatically. Here are a few examples.
810
811How do I upgrade all packages on a certain group of machines?
812-------------------------------------------------------------
813
814First, tag the machines you want to upgrade with a common tag, so you
815can use the tag anytime you need to manage those computers as a group.
816If, for instance, you want to upgrade all your desktop computers, you
817might want to use "desktop" as a tag. Select your computers, then click
818on COMPUTERS on the top menu, and under that INFO. In the box under
819Tags:, enter the tag you want to use and click the Add button.
820
821If you've already tagged the computers, click on COMPUTERS, then click
822on the tag in the left column.
823
824With your desktop computers selected, click on COMPUTERS, then PACKAGES.
825Scroll to the bottom of the screen, where you'll see a Request upgrades
826button. Click it to queue the upgrade tasks.
827
828![image](./Chapter%A08.%A0Use%20cases_files/usecases1.png)
829
830While the upgrade tasks are now in the queue, they will not be executed
831until you approve them. To do so, next to Select:, click All, then click
832on the Approve button at the bottom of the page.
833
834How do I keep all of my file servers automatically up to date?
835--------------------------------------------------------------
836
837The best way is to use [upgrade
838profiles](https://landscape.canonical.com/static/doc/user-guide/ch02.html#defineupgradeprofiles),
839which rely on access groups.
840
841If an access group for your file servers already exists, simply click on
842its name. If not, you must create an access group for them. To do so,
843click on your account, then on ACCESS GROUPS. Specify a name for your
844new access group and click the Save button. You must then add computers
845to the access group. To do that, click on COMPUTERS, then select all
846your file servers by using a tag, if one exists, or a search, or by
847ticking them individually. Once all the computers you want to add to the
848access group are selected, click on the INFO menu choice, scroll down to
849the bottom section, choose the access group you want from the drop-down
850list, then click the Update access group button.
851
852![image](./Chapter%A08.%A0Use%20cases_files/accessgroups4.png)
853
854Once you have all your file servers in an access group you can create an
855upgrade profile for them. Click on your account, then on the PROFILES
856menu, follow the Upgrade profiles link, and then click the Add upgrade profile
857link. Enter a name for the new upgrade profile, choose the access group
858you wish to associate with it, and specify the schedule on which the
859upgrades should run, then click the Save button.
860
861How do I keep Landscape from upgrading a certain package on one of my servers?
862------------------------------------------------------------------------------
863
864First find the package by clicking on COMPUTERS, then PACKAGES. Use the
865search box at the top of the screen to find the package you want. Click
866the triangle on the left of the listing line of the package you want to
867hold, which expands the information for that package. Now click on the
868icon to the left of the package name. A new icon with a lock replaces
869the old one, indicating that this package is to be held during upgrades.
870Scroll to the bottom of the page and click on the Apply Changes button.
871
872![image](./Chapter%A08.%A0Use%20cases_files/usecases2.png)
873
874How do I set up a custom graph?
875-------------------------------
876
877First select the computers whose information you want to see. One good
878way to do so is to create a tag for that group of computers.
879Suppose you want to monitor the size of the PostgreSQL
880database on your database servers. Select the servers, then click on
881COMPUTERS on the top menu, and INFO under that. In the box under Tags:,
882enter a tag name, such as "db-server," and click the Add button. Next,
883under your account, click on CUSTOM GRAPHS, then on the link to Add
884custom graph. Enter a title, and in the \#! field, enter **/bin/sh** to
885indicate a shell script. In the Code section, enter the commands
886necessary to create the data for the graph. For this example, the
887command might be:
888
889~~~~ {.programlisting}
890psql -tAc "select pg_database_size('postgres')"
891~~~~
892
893For Run as user, enter **postgres**.
894
895Fill in the Y-axis title, then click the Save button at the bottom of
896the page.
897
898![image](./Chapter%A08.%A0Use%20cases_files/usecases3.png)
899
900To view the graph, click on COMPUTERS, then MONITORING. You can select
901the monitoring period from the drop-down box at the top of the window.
902
903How do I ensure all computers with a given tag have a common list of packages installed?
904----------------------------------------------------------------------------------------
905
906Manage them via a [package
907profile](https://landscape.canonical.com/static/doc/user-guide/ch07.html#definepp "Adding a package profile").
908
909
0910
=== removed file 'Installing-Ceph.md'
--- Installing-Ceph.md 2014-04-07 13:23:30 +0000
+++ Installing-Ceph.md 1970-01-01 00:00:00 +0000
@@ -1,56 +0,0 @@
1Title: Installing - Ceph
2Status: Review
3
4# Installing - Ceph
5
6## Introduction
7
8Typically OpenStack uses the local storage of its nodes for the configuration data
9as well as for the object storage provided by Swift and the block storage provided by
10Cinder and Glance. However, it can also use Ceph as its storage backend. Ceph stripes block
11device images across a cluster, and in this way provides better performance than a typical
12standalone server. It allows scalability and redundancy needs to be satisfied, and
13Cinder's RBD driver is used to create, export and connect volumes to instances.
14
15## Scope
16
17This document covers the deployment of Ceph via Juju. Other related documents are
18
19- [Scaling Ceph](Scaling-Ceph.md)
20- [Troubleshooting Ceph](Troubleshooting-Ceph.md)
21- [Appendix Ceph and OpenStack](Appendix-Ceph-and-OpenStack.md)
22
23## Deployment
24
25During the installation of OpenStack we've already seen the deployment of Ceph via
26
27```
28juju deploy --config openstack-config.yaml -n 3 ceph
29juju deploy --config openstack-config.yaml -n 10 ceph-osd
30```
31
32This will install three Ceph nodes configured with the information contained in the
33file `openstack-config.yaml`. This file contains the configuration `block-device: None`
34for Cinder, so that this component does not use the local disk. Additionally,
35ten Ceph OSD nodes providing the object storage are deployed and related
36to the Ceph nodes by
37
38```
39juju add-relation ceph-osd ceph
40```
41
42Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which
43will scan for the configured storage devices and add them to the pool of available storage.
44Now the relation to Cinder and Glance can be established with
45
46```
47juju add-relation cinder ceph
48juju add-relation glance ceph
49```
50
51so that both are using the storage provided by Ceph.
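
To sanity-check the deployment afterwards, you can inspect the environment with
`juju status`, which lists the deployed services, their units and the relations
between them:

```
# Show all deployed services, units and relations in the environment
juju status
```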
52
53## See also
54
55- https://manage.jujucharms.com/charms/precise/ceph
56- https://manage.jujucharms.com/charms/precise/ceph-osd
570
=== removed file 'Installing-MAAS.md'
--- Installing-MAAS.md 2014-04-02 23:18:00 +0000
+++ Installing-MAAS.md 1970-01-01 00:00:00 +0000
@@ -1,467 +0,0 @@
1Title: Installing MAAS
2Status: In progress
3Notes:
4
5
6
7
8
9#Installing the MAAS software
10
11##Scope of this documentation
12
13This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For the purposes of this documentation, the following assumptions have been made:
14* You have sufficient, appropriate node hardware
15* You will be using Juju to assign workloads to MAAS
16* You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP)
17* If you have a compatible power-management system, any additional hardware required is also installed (e.g. an IPMI network).
18
19## Introducing MAAS
20
21Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.
22
23What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud.
24
25When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova.
26
27MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal.
28
29## Installing MAAS from the Cloud Archive
30
31The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of MAAS, Juju and other tools. It is highly recommended to configure this repository and use it to keep your software up to date:
32
33```
34sudo add-apt-repository cloud-archive:tools
35sudo apt-get update
36```
37
38There are several packages that comprise a MAAS install. These are:
39
40maas-region-controller:
41 Which comprises the 'control' part of the software, including the web-based user interface, the API server and the main database.
42maas-cluster-controller:
43 This includes the software required to manage a cluster of nodes, including managing DHCP and boot images.
44maas-dns:
45 This is a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes.
46maas-dhcp:
47 As for DNS, there is a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes.
48
49As a convenience, there is also a `maas` metapackage, which will install all these components.
50
51
52If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually (see [_the description of a typical setup_](https://www.filepicker.io/api/file/orientation.html#setup) for more background on how a typical hardware setup might be arranged).
53
54
55
56
57### Installing the packages
58
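If you simply want all of the components on a single machine (a common starting point), installing the metapackage mentioned above pulls everything in:

```
sudo apt-get install maas
```
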
59The configuration for the MAAS controller will automatically run and pop up this config screen:
60
61![]( install_cluster-config.png)
62
63Here you will need to enter the hostname for where the region controller can be contacted. In many scenarios, you may be running the region controller (i.e. the web and API interface) from a different network address, for example where a server has several network interfaces.
64
65Once the configuration scripts have run you should see this message telling you that the system is ready to use:
66
67![]( install_controller-config.png)
68
69The web server is started last, so you have to accept this message before the service is run and you can access the Web interface. Then there are just a few more setup steps [_Post-Install tasks_](https://www.filepicker.io/api/file/WMGTttJT6aaLnQrEkAPv?signature=a86d0c3b4e25dba2d34633bbdc6873d9d8e6ae3cecc4672f2219fa81ee478502&policy=eyJoYW5kbGUiOiJXTUdUdHRKVDZhYUxuUXJFa0FQdiIsImV4cGlyeSI6MTM5NTE3NDE2MSwiY2FsbCI6WyJyZWFkIl19#post-install)
70
71The maas-dhcp and maas-dns packages should be installed by default. You can check whether they are installed with:
72
73```
74dpkg -l maas-dhcp maas-dns
75```
76
77If they are missing, then:
78
79```
80sudo apt-get install maas-dhcp maas-dns
81```
82
83And then proceed to the post-install setup below.
84
85If you now use a web browser to connect to the region controller, you should see that MAAS is running, but there will also be some errors on the screen:
86
87![]( install_web-init.png)
88
89The on-screen messages will tell you that there are no boot images present, and that you can't log in because there is no admin user.
90
91## Create a superuser account
92
93Once MAAS is installed, you'll need to create an administrator account:
94
95```
96sudo maas createadmin --username=root --email=MYEMAIL@EXAMPLE.COM
97```
98
99Substitute your own email address in the command above. You may also use a different username for your administrator account, but "root" is a common convention and easy to remember. The command will prompt for a password to assign to the new user.
100
101You can run this command again for any further administrator accounts you may wish to create, but you need at least one.
102
103## Import the boot images
104
105MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you will need to connect to the MAAS API using the maas-cli tool (see Appendix II - Using the MAAS CLI for details). Then you need to run the command:
106
107```
108maas-cli maas node-groups import-boot-images
109```
110
111(Substitute in a different profile name for 'maas' if you have called yours something else.) This will initiate downloading the required image files. Note that this may take some time depending on your network connection.
112
113
114## Login to the server
115
116To check that everything is working properly, you should try to log in to the server now. Both error messages should be gone (it can take a few minutes for the boot image files to register) and you can see that there are currently 0 nodes attached to this controller.
117
118![]( install-login.png)
119## Configure switches on the network
120
121Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, it can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use.
122
123To alleviate this problem, you should enable [Portfast](https://www.symantec.com/business/support/index?page=content&id=HOWTO6019) for Cisco switches or its equivalent on other vendor equipment, which enables the ports to come up almost immediately.
124
125##Add an additional cluster
126
127Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
128
129Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the cluster controller software:
130
131```
132sudo add-apt-repository cloud-archive:tools
133sudo apt-get update
134sudo apt-get install maas-cluster-controller
135sudo apt-get install maas-dhcp
136```
137
138During the install process, a configuration window will appear. You merely need to type in the address of the MAAS controller API, like this:
139
140![](config-image.png)
141
142## Configure Cluster Controller(s)
143
144### Cluster acceptance
145When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as “pending,” until you manually accept them into the MAAS.
146
147To accept a cluster controller, click on the settings “cog” icon at the top right to visit the settings page:
148![](settings.png)
149You can either click on “Accept all” or click on the edit icon to edit the cluster. After clicking on the edit icon, you will see this page:
150
151![](cluster-edit.png)
152Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.”
153
154Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made.
155
156### Configuration
157MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface.
158
159As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0, which takes us to this page:
160
161![](cluster-interface-edit.png)
162Here you can select to what extent you want the cluster controller to manage the network:
163
164- DHCP only - this will run a DHCP server on your cluster.
165- DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region controller so that it can be used to look up hosts on this network by name.
166
167!!! note: You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster.
168If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.
169
170!!! note: There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any nodes. Or, if you do want to manage nodes but don’t want the cluster controller to serve DHCP, you may be able to get by without it. This is explained in Manual DHCP configuration.
171
172!!! note: A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture.
173
174## Enlisting nodes
175
176Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice-versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.
177
178### Automatic Discovery
179With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down.
180
181During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
182
183To save time, you can also accept and commission all nodes from the commandline. This requires that you first login with the API key [1], which you can retrieve from the web interface:
184
185```
186maas-cli maas nodes accept-all
187```
188
189### Manually adding nodes
190
191If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the Nodes screen:
192![](add-node.png)
193
194Select 'Add node' and manually enter details about the node, including its MAC address. This is used to identify the node when it contacts the DHCP server.
195
196
197
198## Preparing MAAS for Juju using Simplestreams
199
200When Juju bootstraps a cloud, it needs two critical pieces of information:
201
2021. The uuid of the image to use when starting new compute instances.
2032. The URL from which to download the correct version of a tools tarball.
204
205This necessary information is stored in a json metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud, Azure, etc, no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (eg use a different Ubuntu image), can create their own metadata, after understanding a bit about how it works.
206
207The simplestreams format is used to describe related items in a structural fashion.( [See the Launchpad project lp:simplestreams for more details on implementation](https://launchpad.net/simplestreams)). Below we will discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
208
209### Basic Workflow
210
211Whether images or tools, Juju uses a search path to try and find suitable metadata. The path components (in order of lookup) are:
212
2131. User supplied location (specified by tools-metadata-url or image-metadata-url config settings).
2142. The environment's cloud storage.
2153. Provider specific locations (eg keystone endpoint if on Openstack).
2164. A web location with metadata for supported public clouds (https://streams.canonical.com).
217
218Metadata may be inline signed, or unsigned. We indicate a metadata file is signed by using the '.sjson' extension. Each location in the path is first searched for signed metadata, and if none is found, unsigned metadata is attempted before moving onto the next path location.
219
220Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (eg Openstack) cloud requires metadata to be generated using tools which ship with Juju.
221
222### Image Metadata Contents
223
224Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows:
225
226com.ubuntu.cloud:server:<series_version>:<arch>
227For example: com.ubuntu.cloud:server:14.04:amd64
228Non-released images (eg beta, daily etc) have product ids like: com.ubuntu.cloud.daily:server:13.10:amd64
229
230The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component):
231
232<path_url>
      |-streams
         |-v1
            |-index.(s)json
            |-product-foo.(s)json
            |-product-bar.(s)json
233
234The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path values contained in the index file.
235
### Tools Metadata Contents

236Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
237
238"com.ubuntu.juju:<series_version>:<arch>"
239
240For example:
241
242"com.ubuntu.juju:12.04:amd64"
243
244The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component). In addition, tools tarballs which Juju needs to download are also expected.
245
246|-streams
   |  |-v1
   |     |-index.(s)json
   |     |-product-foo.(s)json
   |     |-product-bar.(s)json
   |-releases
      |-tools-abc.tar.gz
      |-tools-def.tar.gz
      |-tools-xyz.tar.gz
247
248The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever is in the index/product files.
249
250### Configuration
251
252For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run. Even for supported public clouds where all required metadata is available, the user can put their own metadata in the search path to override what is provided by the cloud.
253
254#### User specified URLs
255
256These are initially specified in the environments.yaml file (and then subsequently copied to the jenv file when the environment is bootstrapped). For images, use "image-metadata-url"; for tools, use "tools-metadata-url". The URLs can point to a world readable container/bucket in the cloud, an address served by a http server, or even a shared directory which is accessible by all node instances running in the cloud.
257
258Assume an Apache http server with base URL `https://juju-metadata` , providing access to information at `<base>/images` and `<base>/tools` . The Juju environment yaml file could have the following entries (one or both):
259
260tools-metadata-url: https://juju-metadata/tools
    image-metadata-url: https://juju-metadata/images
261
262The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the form "file:///sharedpath".
263
264#### Cloud storage
265
266If no matching metadata is found in the user specified URL, environment's cloud storage is searched. No user configuration is required here - all Juju environments are set up with cloud storage which is used to store state information, charms etc. Cloud storage setup is provider dependent; for Amazon and Openstack clouds, the storage is defined by the "control-bucket" value, for Azure, the "storage-account-name" value is relevant.
267
268The (optional) directory structure inside the cloud storage is as follows:
269
270|-tools
   |  |-streams
   |  |  |-v1
   |  |-releases
   |-images
      |-streams
         |-v1
271
272Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa.
273
274Note that if juju bootstrap is run with the `--upload-tools` option, the tools and metadata are placed according to the above structure. That's why the tools are then available for Juju to use.
275
276#### Provider specific storage
277
278Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be created by the cloud administrator. These are defined as follows:
279
280- juju-tools: the <path_url> value as described above in Tools Metadata Contents
    - product-streams: the <path_url> value as described above in Image Metadata Contents
281
282Other providers may similarly be able to specify locations, though the implementation will vary.
283
284The central web location (https://streams.canonical.com) is the default location used to search for image and tools metadata and is used if no matches are found earlier in any of the above locations. No user configuration is required.
285
286There are two main issues when deploying a private cloud:
287
2881. Image ids will be specific to the cloud.
2892. Often, outside internet access is blocked
290
291Issue 1 means that image id metadata needs to be generated and made available.
292
293Issue 2 means that tools need to be mirrored locally to make them accessible.
294
295Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just mirror `https://streams.canonical.com/tools` . However image metadata cannot be simply mirrored because the image ids are taken from the cloud storage provider, so this needs to be generated and validated using the commands described below.
296
297The available Juju metadata tools can be seen by using the help command:
298
299juju help metadata
300
301The overall workflow is:
302
303- Generate image metadata
304- Copy image metadata to somewhere in the metadata search path
305- Optionally, mirror tools to somewhere in the metadata search path
306- Optionally, configure tools-metadata-url and/or image-metadata-url
307
308#### Image metadata
309
310Generate image metadata using
311
312juju metadata generate-image -d <metadata_dir>
313
314As a minimum, the above command needs to know the image id to use and a directory in which to write the files.
315
316Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an environment specified with the -e option). These parameters can also be overridden on the command line.
317
318The image metadata command can be run multiple times with different regions, series, architecture, and it will keep adding to the metadata files. Once all required image ids have been added, the index and product json files can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the `image-metadata-url` setting or the cloud's storage etc.
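
As a sketch of that workflow (the image ids, series and target directory below are placeholders; the flag names are from Juju 1.x and should be checked against `juju help metadata` and the command's own --help output):

```
# Generate metadata for two series into the same directory; repeated runs keep
# adding entries to the index and product files
juju metadata generate-image -d ~/simplestreams -i <image_id_1> -s precise
juju metadata generate-image -d ~/simplestreams -i <image_id_2> -s trusty
```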
319
320Examples:
321
3221. image-metadata-url
323
324- upload contents of <metadata_dir> to `http://somelocation`
325- set image-metadata-url to `http://somelocation/images`
326
3272. Cloud storage
328- upload contents of <metadata_dir> directly to the environment's cloud storage
328
329To check the image metadata, use the validation command. If run without parameters, it will take all required details from the current Juju environment (or as specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc. can be specified on the command line to override the values in the environment config.
330#### Tools metadata
331
332Generally, tools and related metadata are mirrored from `https://streams.canonical.com/tools` . However, it is possible to manually generate metadata for a custom built tools tarball.
333
334First, create a tarball of the relevant tools and place in a directory structured like this:
335
336<tools_dir>/tools/releases/
337
338Now generate relevant metadata for the tools by running the command:
339
340juju metadata generate-tools -d <tools_dir>
341
342Finally, the contents of <tools_dir> can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
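
Putting those steps together, a minimal sketch (the tarball name and <tools_dir> are placeholders):

```
# Lay out the expected directory structure and drop in the custom tools tarball
mkdir -p <tools_dir>/tools/releases
cp juju-<version>-<series>-<arch>.tgz <tools_dir>/tools/releases/
# Generate the matching simplestreams metadata
juju metadata generate-tools -d <tools_dir>
```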
343
344Examples:
345
3461. tools-metadata-url
347
348- upload contents of the tools dir to `http://somelocation`
349- set tools-metadata-url to `http://somelocation/tools`
350
3512. Cloud storage
352
353- upload contents of the tools dir directly to the environment's cloud storage
354
355As with image metadata, the validation command is used to ensure tools are available for Juju to use:
356
357juju metadata validate-tools
358
359The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or override values as required on the command line. See `juju help metadata validate-tools` for more details.
360
361##Appendix I - Using Tags
362##Appendix II - Using the MAAS CLI
363As well as the web interface, many tasks can be performed by accessing the MAAS API directly through the maas-cli command. This section details how to login with this tool and perform some common operations.
364
365###Logging in
366Before the API will accept any commands from maas-cli, you must first login. To do this, you need the API key which can be found in the user interface.
367
368Login to the web interface on your MAAS. Click on the username in the top right corner and select ‘Preferences’ from the menu which appears.
369
370![](maascli-prefs.png)
371A new page will load...
372
373![](maascli-key.png)
374The very first item is a list of MAAS keys. One will have already been generated when the system was installed. It’s easiest to just select all the text, copy the key (it’s quite long!) and then paste it into the commandline. The format of the login command is:
375
376```
377 maas-cli login <profile-name> <hostname> <key>
378```
379
380The profile created is an easy way of associating your credentials with any subsequent call to the API. So an example login might look like this:
381
382```
383maas-cli login maas http://10.98.0.13/MAAS/api/1.0
384AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2
385```
386which creates the profile ‘maas’ and registers it with the given key at the specified API endpoint. If you omit the credentials, they will be prompted for in the console. It is also possible to use a hyphen, ‘-‘ in place of the credentials. In this case a single line will be read from stdin, stripped of any whitespace and used as the credentials, which can be useful if you are developing scripts for specific tasks. If an empty string is passed instead of the credentials, the profile will be logged in anonymously (and consequently some of the API calls will not be available).
387
388### maas-cli commands
389The maas-cli command exposes the whole API, so you can do anything you actually can do with MAAS using this command. This leaves us with a vast number of options, which are more fully expressed in the complete [2][MAAS Documentation]
390
391list:
392 lists the details [name url auth-key] of all the currently logged-in profiles.
393
394login <profile> <url> <key>:
395 Logs in to the MAAS controller API at the given URL, using the key provided and
396 associates this connection with the given profile name.
397
398logout <profile>:
399 Logs out from the given profile, flushing the stored credentials.
400
401refresh:
402 Refreshes the API descriptions of all the current logged in profiles. This may become necessary for example when upgrading the maas packages to ensure the command-line options match with the API.
403
404### Useful examples
405
406Displays current status of nodes in the commissioning phase:
407```
408maas-cli maas nodes check-commissioning
409```
410
411Accept and commission all discovered nodes:
412```
413maas-cli maas nodes accept-all
414```
415
416List all known nodes:
417```
418maas-cli maas nodes list
419```
420
421Filter the list using specific key/value pairs:
422```
423maas-cli maas nodes list architecture="i386/generic"
424```
425
426Set the power parameters for an ipmi enabled node:
427```
428maas-cli maas node update <system_id> \
429 power_type="ipmi" \
430 power_parameters_power_address=192.168.22.33 \
431 power_parameters_power_user=root \
432 power_parameters_power_pass=ubuntu;
433```
434## Appendix III - Physical Zones
435
436To help you maximise fault-tolerance and performance of the services you deploy, MAAS administrators can define _physical zones_ (or just _zones_ for short), and assign nodes to them. When a user requests a node, they can ask for one that is in a specific zone, or one that is not in a specific zone.
437
438It's up to you as an administrator to decide what a physical zone should represent: it could be a server rack, a room, a data centre, machines attached to the same UPS, or a portion of your network. Zones are most useful when they represent portions of your infrastructure. But you could also use them simply to keep track of where your systems are located.
439
440Each node is in one and only one physical zone. Each MAAS instance ships with a default zone to which nodes are attached by default. If you do not need this feature, you can simply pretend it does not exist.
441
442### Applications
443
444Since you run your own MAAS, its physical zones give you more flexibility than those of a third-party hosted cloud service: you get to design your zones and define what they mean, and use them in whatever way helps you get the most out of your MAAS.
445
446### Creating a Zone
447
448Only administrators can create and manage zones. To create a physical zone in the web user interface, log in as an administrator and browse to the "Zones" section in the top bar. This will take you to the zones listing page. At the bottom of the page is a button for creating a new zone:
449
450![](add-zone.png)
451
452Or to do it in the [_region-controller API_][#region-controller-api], POST your zone definition to the _"zones"_ endpoint.
453
454### Assigning Nodes to a Zone
455
456Once you have created one or more physical zones, you can set nodes' zones from the nodes listing page in the UI. Select the nodes for which you wish to set a zone, and choose "Set physical zone" from the "Bulk action" dropdown list near the top. A second dropdown list will appear, to let you select which zone you wish to set. Leave it blank to clear nodes' physical zones. Clicking "Go" will apply the change to the selected nodes.
457
458You can also set an individual node's zone on its "Edit node" page. Both ways are available in the API as well: edit an individual node through a request to the node's URI, or set the zone on multiple nodes at once by calling the operation on the endpoint.
459
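As a rough illustration of the API route for a single node, something like the following may work with maas-cli. Treating `zone` as an updatable field of the node is an assumption here, so check the options your MAAS version accepts:

```
# Assumption: the node's update call accepts a 'zone' field naming an existing zone
maas-cli maas node update <system_id> zone=my-zone
```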
460### Allocating a Node in a Zone
461
462To deploy in a particular zone, call the allocation method in the [_region-controller API_][#region-controller-api] as before, but pass an extra parameter giving the name of the zone. The method will allocate a node in that zone, or fail with an HTTP 409 ("conflict") error if the zone has no nodes available that match your request.
463
464Alternatively, you may want to request a node that is _not_ in a particular zone, or one that is not in any of several zones. To do that, pass the exclusion parameter instead. This parameter takes a list of zone names; the allocated node will not be in any of them. Again, if that leaves no nodes available that match your request, the call will return a "conflict" error.
465
466It is possible, though not usually useful, to combine the two parameters. If the zone you request is also present in the list of excluded zones, no node will ever match your request. If it is not, then the excluded zones will not affect the result of the call at all.
467
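For reference, a minimal sketch of how this looks with maas-cli follows. The operation and parameter names (`acquire`, `zone` and `not_in_zone`) are assumptions based on the MAAS 1.x API, so verify them against the documentation for your release:

```
# Allocate a node from a specific zone (the zone name is illustrative)
maas-cli maas nodes acquire zone=rack-a

# Allocate a node that is in neither of two zones
maas-cli maas nodes acquire not_in_zone=rack-a not_in_zone=rack-b
```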
4680
=== removed file 'Intro.md'
--- Intro.md 2014-04-11 14:51:27 +0000
+++ Intro.md 1970-01-01 00:00:00 +0000
@@ -1,26 +0,0 @@
1#Ubuntu Cloud Documentation
2
3## Deploying Production Grade OpenStack with MAAS, Juju and Landscape
4
5This documentation has been created to describe best practice in deploying
6a Production Grade installation of OpenStack using current Canonical
7technologies, including bare metal provisioning using MAAS, service
8orchestration with Juju and system management with Landscape.
9
10This documentation is divided into four main topics:
11
12 1. [Installing the MAAS Metal As A Service software](../installing-maas.html)
13 2. [Installing Juju and configuring it to work with MAAS](../installing-juju.html)
14 3. [Using Juju to deploy OpenStack](../installing-openstack.html)
15 4. [Deploying Landscape to manage your OpenStack cloud](../installing-landscape)
16
17Once you have an up and running OpenStack deployment, you should also read
18our [Administration Guide](../admin-intro.html) which details common tasks
19for maintenance and scaling of your service.
20
21
22## Legal notices
23
24
25
26![Canonical logo](./media/logo-canonical_no™-aubergine-hex.jpg)
270
=== removed file 'Logging-Juju.md'
--- Logging-Juju.md 2014-04-02 16:18:10 +0000
+++ Logging-Juju.md 1970-01-01 00:00:00 +0000
@@ -1,24 +0,0 @@
1Title: Logging - Juju
2Status: In Progress
3
4# Logging - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Connecting to rsyslogd
15
16Juju already uses `rsyslogd` for the aggregation of all logs into one centralized log. The
17target of this logging is the file `/var/log/juju/all-machines.log`. You can directly
18access it using the command
19
20````
21$ juju debug-log
22````
23
24**TODO** Describe a way to redirect this log to a central rsyslogd server.
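One possible approach, sketched here on the assumption that the aggregated log lives at `/var/log/juju/all-machines.log` on the bootstrap node and that `192.168.1.10` is the central rsyslogd server (both values are illustrative), is to have rsyslog watch that file with its `imfile` module and forward the entries:

```
# /etc/rsyslog.d/60-juju-forward.conf on the bootstrap node (file name is illustrative)
$ModLoad imfile
$InputFileName /var/log/juju/all-machines.log
$InputFileTag juju:
$InputFileStateFile juju-all-machines
$InputFileFacility local5
$InputRunFileMonitor

# Forward everything tagged with the local5 facility to the central server
local5.* @192.168.1.10
```

Restart rsyslog afterwards (for example with `sudo service rsyslog restart`) so that the entries show up on the central server alongside the other forwarded logs.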
250
=== removed file 'Logging-OpenStack.md'
--- Logging-OpenStack.md 2014-04-02 16:18:10 +0000
+++ Logging-OpenStack.md 1970-01-01 00:00:00 +0000
@@ -1,92 +0,0 @@
1Title: Logging - OpenStack
2Status: In Progress
3
4# Logging - OpenStack
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Connecting to rsyslogd
15
16By default OpenStack writes its logging output to files in per-component directories,
17like `/var/log/nova` or `/var/log/glance`. To use `rsyslogd`, the components have to be
18configured to also log to `syslog`. When doing this, also configure each component to log
19to a different syslog facility. This will help you to split the logs into individual
20components on the central logging server. So ensure the following settings:
21
22**/etc/nova/nova.conf:**
23
24````
25use_syslog=True
26syslog_log_facility=LOG_LOCAL0
27````
28
29**/etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:**
30
31````
32use_syslog=True
33syslog_log_facility=LOG_LOCAL1
34````
35
36**/etc/cinder/cinder.conf:**
37
38````
39use_syslog=True
40syslog_log_facility=LOG_LOCAL2
41````
42
43**/etc/keystone/keystone.conf:**
44
45````
46use_syslog=True
47syslog_log_facility=LOG_LOCAL3
48````
49
50The Swift object storage by default already logs to syslog. So you can now tell the local
51rsyslogd clients to pass the logged information to the logging server. You'll do this
52by creating a `/etc/rsyslog.d/client.conf` containing a line like
53
54````
55*.* @192.168.1.10
56````
57
58where the IP address points to your rsyslogd server. It is best to choose a server that is
59dedicated to this task only. On it, create the file `/etc/rsyslog.d/server.conf`
60containing the settings
61
62````
63# Enable UDP
64$ModLoad imudp
65# Listen on 192.168.1.10 only
66$UDPServerAddress 192.168.1.10
67# Port 514
68$UDPServerRun 514
69# Create logging templates for nova
70$template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log"
71$template NovaAll,"/var/log/rsyslog/nova.log"
72# Log everything else to syslog.log
73$template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log"
74*.* ?DynFile
75# Log various openstack components to their own individual file
76local0.* ?NovaFile
77local0.* ?NovaAll
78& ~
79````
80
81This example contains the settings for Nova only; the other OpenStack components
82have to be added in the same way. Using two templates per component, one containing the
83`%HOSTNAME%` variable and one without it, enables better splitting of the logged
84data. Consider the two example nodes `alpha.example.com` and `bravo.example.com`.
85They will write their logging into the files
86
87- `/var/log/rsyslog/alpha.example.com/nova.log` - only the data of alpha,
88- `/var/log/rsyslog/bravo.example.com/nova.log` - only the data of bravo,
89- `/var/log/rsyslog/nova.log` - the combined data of both.
90
91This allows a quick overview of all nodes as well as focused analysis of an
92individual node.
930
=== removed file 'Logging.md'
--- Logging.md 2014-04-02 16:18:10 +0000
+++ Logging.md 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
1Title: Logging
2Status: In Progress
3
4# Logging
5
6Controlling individual logs is a cumbersome job, even in an environment with only a
7few computer systems, and it is even worse in typical clouds with a large number of
8nodes. Here the centralized approach using `rsyslogd` helps. It allows you to aggregate
9the logging output of all systems in one place, where monitoring and analysis become
10much simpler.
11
12Ubuntu uses `rsyslogd` as the default logging service. Since it is natively able to send
13logs to a remote location, you don't have to install anything extra to enable this feature,
14just modify the configuration file. In doing this, consider running your logging over
15a management network or using an encrypted VPN to avoid interception.
160
=== removed file 'Scaling-Ceph.md'
--- Scaling-Ceph.md 2014-04-07 13:23:30 +0000
+++ Scaling-Ceph.md 1970-01-01 00:00:00 +0000
@@ -1,36 +0,0 @@
1Title: Scaling - Ceph
2Status: In Progress
3
4# Scaling - Ceph
5
6## Introduction
7
8Besides the redundancy for more safety and the higher performance gained by using
9Ceph as the storage backend for OpenStack, the user also benefits from a simpler way
10of scaling the storage as needs grow.
11
12## Scope
13
14**TODO**
15
16## Scaling
17
18The addition of Ceph nodes is done using the Juju `add-unit` command. By default
19it adds only one unit, but it is possible to pass the desired number of units as an
20argument. To add one more Ceph OSD node you simply call
21
22```
23juju add-unit ceph-osd
24```
25
26Larger numbers of nodes can be added using the `-n` argument, e.g. 5 nodes
27with
28
29```
30juju add-unit -n 5 ceph-osd
31```
32
33**Attention:** Adding more nodes to Ceph leads to a redistribution of an image's data
34between the nodes. This can cause inefficiencies while it is in progress, so
35it should be done in smaller steps.
36
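While the cluster rebalances, you can keep an eye on its health from one of the Ceph units. The unit name `ceph/0` below is an assumption based on the charm names used in this guide:

```
# Watch the cluster health and placement group states while data is redistributed
juju ssh ceph/0 "sudo ceph status"
```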
370
=== removed file 'Upgrading-and-Patching-Juju.md'
--- Upgrading-and-Patching-Juju.md 2014-04-02 16:18:10 +0000
+++ Upgrading-and-Patching-Juju.md 1970-01-01 00:00:00 +0000
@@ -1,45 +0,0 @@
1Title: Upgrading and Patching - Juju
2Status: In Progress
3
4# Upgrading and Patching - Juju
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Upgrading
15
16The upgrade of a Juju environment is done using the Juju client and its command
17
18````
19$ juju upgrade-juju
20````
21
22This command sets the version number for all Juju agents to run. By default this
23is the most recent supported version compatible with the command-line tools version.
24So ensure that you've upgraded the Juju client first.
25
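Before upgrading, it is worth comparing the client version with what the agents are currently running. This is a small sketch; the exact layout of the status output may vary between releases:

```
# Version of the locally installed Juju client
juju --version

# Agent versions currently reported by the environment
juju status | grep agent-version
```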
26When run without arguments, `upgrade-juju` will try to upgrade to the following
27versions, in order of preference and depending on the current value of the
28environment's `agent-version` setting:
29
30- The highest patch.build version of the *next* stable major.minor version.
31- The highest patch.build version of the *current* major.minor version.
32
33Both of these depend on the availability of the corresponding tools. On MAAS you've
34got to manage this yourself using the command
35
36````
37$ juju sync-tools
38````
39
40This copies the Juju tools tarball from the official tools store (located
41at https://streams.canonical.com/juju) into your environment.
42
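If you need to pin the agents to a specific release rather than letting Juju pick one, `upgrade-juju` accepts an explicit version. This is a sketch; the version number is only an example:

```
juju sync-tools
juju upgrade-juju --version=1.18.4
```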
43## Patching
44
45**TODO**
460
=== removed file 'Upgrading-and-Patching-OpenStack.md'
--- Upgrading-and-Patching-OpenStack.md 2014-04-02 16:18:10 +0000
+++ Upgrading-and-Patching-OpenStack.md 1970-01-01 00:00:00 +0000
@@ -1,83 +0,0 @@
1Title: Upgrading and Patching - OpenStack
2Status: In Progress
3
4# Upgrading and Patching - OpenStack
5
6## Introduction
7
8**TODO**
9
10## Scope
11
12**TODO**
13
14## Upgrading
15
16Upgrading an OpenStack cluster in one big step is an approach that requires additional
17hardware to set up an upgraded cloud beside the production one, and it leads to a longer
18outage while your cloud is in read-only mode, the state is transferred to the new
19one and the environments are switched. So the preferred way of upgrading an OpenStack
20cloud is a rolling upgrade of each component of the system, piece by piece.
21
22Here you can choose between in-place and side-by-side upgrades. The first one requires
23shutting down the component in question while you're performing its upgrade. Additionally, you
24may have trouble in case of a rollback. To avoid this, the side-by-side upgrade is
25the preferred way here.
26
27Before starting the upgrade itself you should
28
29- Perform some "cleaning" of the environment to ensure a consistent state; for
30 example, instances not fully purged from the system after deletion may cause
31 indeterminate behavior
32- Read the release notes and documentation
33- Find incompatibilities between your versions
34
35The upgrade tasks here follow the same procedure for each component:
36
371. Configure the new worker
381. Turn off the current worker; during this time hide the downtime using a message
39 queue or a load balancer
401. Take a backup as described earlier of the old worker for a rollback
411. Copy the state of the current to the new worker
421. Start up the new worker
43
44Now repeat these steps for each worker in an appropriate order. In case of a problem it
45should be easy to roll back as long as the former worker stays untouched. This is,
46besides the shorter downtime, the most important advantage of the side-by-side upgrade.
47
48The following order for service upgrades seems the most successful:
49
501. Upgrade the OpenStack Identity Service (Keystone).
511. Upgrade the OpenStack Image Service (Glance).
521. Upgrade OpenStack Compute (Nova), including networking components.
531. Upgrade OpenStack Block Storage (Cinder).
541. Upgrade the OpenStack dashboard.
55
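For services deployed with the Juju charms used elsewhere in this guide, an individual component can also be moved to a newer OpenStack release by changing the charm's `openstack-origin` option. This is only a sketch of that route (an in-place style upgrade of a single service, not the full side-by-side procedure described above), and the target cloud archive pocket is an example:

```
# Point Keystone at a newer Ubuntu Cloud Archive pocket and let the charm upgrade the packages.
juju set keystone openstack-origin=cloud:trusty-juno
```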
56These steps look very easy, but they can still be a complex procedure depending on your cloud
57configuration. So we recommend having a testing environment with a near-identical
58architecture to your production system. This doesn't mean that you should use the same
59sizes and hardware, which would be best but expensive. There are some ways to reduce
60the cost.
61
62- Use your own cloud. The simplest place to start testing the next version of OpenStack
63 is by setting up a new environment inside your own cloud. This may seem odd—especially
64 the double virtualisation used in running compute nodes—but it's a sure way to very
65 quickly test your configuration.
66- Use a public cloud. Especially because your own cloud is unlikely to have sufficient
67 space to scale test to the level of the entire cloud, consider using a public cloud
68 to test the scalability limits of your cloud controller configuration. Most public
69 clouds bill by the hour, which means it can be inexpensive to perform even a test
70 with many nodes.
71- Make another storage endpoint on the same system. If you use an external storage plug-in
72 or shared file system with your cloud, in many cases it's possible to test that it
73 works by creating a second share or endpoint. This will enable you to test the system
74 before entrusting the new version onto your storage.
75- Watch the network. Even at smaller-scale testing, it should be possible to determine
76 whether something is going horribly wrong in intercomponent communication if you
77 look at the network packets and see too many.
78
79**TODO** Add more concrete description here.
80
81## Patching
82
83**TODO**
840
=== removed directory 'build'
=== removed directory 'build/epub'
=== removed directory 'build/html'
=== removed directory 'build/pdf'
=== removed file 'installing-openstack-outline.md'
--- installing-openstack-outline.md 2014-04-11 14:51:27 +0000
+++ installing-openstack-outline.md 1970-01-01 00:00:00 +0000
@@ -1,395 +0,0 @@
1Title:Installing OpenStack
2
3# Installing OpenStack
4
5![Openstack](../media/openstack.png)
6
7##Introduction
8
9OpenStack is a versatile, open source cloud environment equally suited to serving up public, private or hybrid clouds. Canonical is a Platinum Member of the OpenStack foundation and has been involved with the OpenStack project since its inception; the software covered in this document has been developed with the intention of providing a streamlined way to deploy and manage OpenStack installations.
10
11### Scope of this documentation
12
13The OpenStack platform is powerful and its uses diverse. This section of documentation
14is primarily concerned with deploying a 'standard' running OpenStack system using, but not limited to, Canonical components such as MAAS, Juju and Ubuntu. Where appropriate other methods and software will be mentioned.
15
16### Assumptions
17
181. Use of MAAS
19 This document is written to provide instructions on how to deploy OpenStack using MAAS for hardware provisioning. If you are not deploying directly on hardware, this method will still work, with a few alterations, assuming you have a properly configured Juju environment. The main difference will be that you will have to provide different configuration options depending on the network configuration.
20
212. Use of Juju
22 This document assumes an up to date, stable release version of Juju.
23
243. Local network configuration
25 This document assumes that you have an adequate local network configuration, including separate interfaces for access to the OpenStack cloud. Ideal networks are laid out in the [MAAS documentation for OpenStack][MAAS].
26
27## Planning an installation
28
29Before deploying any services, it is very useful to take stock of the resources available and how they are to be used. OpenStack comprises a number of interrelated services (Nova, Swift, etc) which each have differing demands in terms of hosts. For example, the Swift service, which provides object storage, has different requirements from the Nova service, which provides compute resources.
30
31The minimum requirements for each service and recommendations are laid out in the official [OpenStack Operations Guide][oog], which is available (free) in HTML or various downloadable formats. For guidance, the following minimums are recommended for Ubuntu Cloud:
32
33[insert minimum hardware spec]
34
35
36
37The recommended composition of nodes for deploying OpenStack with MAAS and Juju is that all nodes in the system should be capable of running *ANY* of the services. This is best practice for the robustness of the system, as, should any physical node fail, another can be repurposed to take its place. This obviously extends to any hardware requirements such as extra network interfaces.
38
39If for reasons of economy or otherwise you choose to use different configurations of hardware, you should note that your ability to overcome hardware failure will be reduced. It will also be necessary to target deployments to specific nodes - see the section on [tags][MAAS tags] in the MAAS documentation.
40
41
42###Create the OpenStack configuration file
43
44We will be using Juju charms to deploy the component parts of OpenStack. Each charm encapsulates everything required to set up a particular service. However, the individual services have many configuration options, some of which we will want to change.
45
46To make this task easier and more reproducible, we will create a separate configuration file with the relevant options for all the services. This is written in a standard YAML format.
47
48You can download the [openstack-config.yaml] file we will be using; it is also reproduced below:
49
50```
51keystone:
52 admin-password: openstack
53 debug: 'true'
54 log-level: DEBUG
55nova-cloud-controller:
56 network-manager: 'Neutron'
57 quantum-security-groups: 'yes'
58 neutron-external-network: Public_Network
59nova-compute:
60 enable-live-migration: 'True'
61 migration-auth-type: "none"
62 virt-type: kvm
63 #virt-type: lxc
64 enable-resize: 'True'
65quantum-gateway:
66 ext-port: 'eth1'
67 plugin: ovs
68glance:
69 ceph-osd-replication-count: 3
70cinder:
71 block-device: None
72 ceph-osd-replication-count: 3
73 overwrite: "true"
74 glance-api-version: 2
75ceph:
76 fsid: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
77 monitor-secret: AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==
78 osd-devices: /dev/sdb
79 osd-reformat: 'True'
80```
81
82For all services, we can configure the `openstack-origin` to point to an install source. In this case, we will rely on the default, which will point to the relevant sources for the Ubuntu 14.04 LTS Trusty release. Further configuration for each service is explained below:
83
84####keystone
85admin-password:
86 You should set a memorable password here to be able to access OpenStack when it is deployed
87
88debug:
89 It is useful to set this to 'true' initially, to monitor the setup. This will produce more verbose messaging.
90
91log-level:
92 Similarly, setting the log-level to DEBUG means that more verbose logs can be generated. These options can be changed once the system is set up and running normally.
93
94####nova-cloud-controller
95
96network-manager:
97 'Neutron' - Other options are now deprecated.
98
99quantum-security-groups:
100 'yes'
101
102neutron-external-network:
103 Public_Network - This is an interface we will use for allowing access to the cloud, and will be defined later
104
105####nova-compute
106enable-live-migration:
107 We have set this to 'True'
108
109migration-auth-type:
110 "none"
111
112virt-type:
113 kvm
114
115enable-resize:
116 'True'
117
118####quantum-gateway
119ext-port:
120 This is where we specify the interface for the public network. Use 'eth1' or the relevant interface name for your hardware.
121plugin: ovs
122
123
124####glance
125
126 ceph-osd-replication-count: 3
127
128####cinder
129 openstack-origin: cloud:trusty-icehouse/updates
130 block-device: None
131 ceph-osd-replication-count: 3
132 overwrite: "true"
133 glance-api-version: 2
134
135####ceph
136
137fsid:
138 The fsid is simply a unique identifier. You can generate a suitable value by running `uuidgen` which should return a value which looks like: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
139
140monitor-secret:
141 The monitor secret is a secret string used to authenticate access. There is advice on how to generate a suitable secure secret at [ceph][the ceph website]. A typical value would be `AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==`
142
143osd-devices:
144 This should point (in order of preference) to a device,partition or filename. In this case we will assume secondary device level storage located at `/dev/sdb`
145
146osd-reformat:
147 We will set this to 'True', allowing ceph to reformat the drive on provisioning.
148
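If you need to generate fresh values for `fsid` and `monitor-secret` rather than reusing the examples above, the following sketch shows one way to do it. Using `ceph-authtool` (from the `ceph-common` package) for the monitor secret is an assumption here; the referenced ceph documentation describes the authoritative procedure:

```
# Generate a unique fsid for the cluster
uuidgen

# Generate a monitor secret (requires the ceph-common package)
ceph-authtool --gen-print-key
```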
149
150##Deploying OpenStack with Juju
151Now that the configuration is defined, we can use Juju to deploy and relate the services.
152
153###Initialising Juju
154Juju requires a minimal amount of setup. Here we assume it has already been configured to work with your MAAS cluster (see the [Juju Install Guide][juju_install] for more information on this).
155
156Firstly, we need to fetch images and tools that Juju will use:
157```
158juju sync-tools --debug
159```
160Then we can create the bootstrap instance:
161
162```
163juju bootstrap --upload-tools --debug
164```
165We use the upload-tools switch to use the local versions of the tools which we just fetched. The debug switch will give verbose output which can be useful. This process may take a few minutes, as Juju is creating an instance and installing the tools. When it has finished, you can check the status of the system with the command:
166```
167juju status
168```
169This should return something like:
170```
171---------- example
172```
173### Deploy the OpenStack Charms
174
175Now that the Juju bootstrap node is up and running we can deploy the services required to make our OpenStack installation. To configure these services properly as they are deployed, we will make use of the configuration file we defined earlier, by passing it along with the `--config` switch with each deploy command. Substitute in the name and path of your config file if different.
176
177It is useful but not essential to deploy the services in the order below. It is also highly recommended to open an additional terminal window and run the command `juju debug-log`. This will output the logs of all the services as they run, and can be useful for troubleshooting.
178
179It is also recommended to run a `juju status` command periodically, to check that each service has been installed and is running properly. If you see any errors, please consult the [troubleshooting section below][troubleshooting].
180
181```
182juju deploy --to=0 juju-gui
183juju deploy rabbitmq-server
184juju deploy mysql
185juju deploy --config openstack-config.yaml openstack-dashboard
186juju deploy --config openstack-config.yaml keystone
187juju deploy --config openstack-config.yaml ceph -n 3
188juju deploy --config openstack-config.yaml nova-compute -n 3
189juju deploy --config openstack-config.yaml quantum-gateway
190juju deploy --config openstack-config.yaml cinder
191juju deploy --config openstack-config.yaml nova-cloud-controller
192juju deploy --config openstack-config.yaml glance
193juju deploy --config openstack-config.yaml ceph-radosgw
194```
195
196
197### Add relations between the OpenStack services
198
199Although the services are now deployed, they are not yet connected together. Each service currently exists in isolation. We use the `juju add-relation` command to make them aware of each other and set up any relevant connections and protocols. This extra configuration is taken care of by the individual charms themselves.
200
201
202We should start adding relations between charms by setting up the Keystone authorization service and its database, as this will be needed by many of the other connections:
203
204juju add-relation keystone mysql
205
206We wait until the relation is set. After it finishes, check it with `juju status`:
207
208```
209juju status mysql
210juju status keystone
211```
212
213It can take a few moments for this service to settle. Although it is certainly possible to continue adding relations (Juju manages a queue for pending actions) it can be counterproductive in terms of the overall time taken, as many of the relations refer to the same services.
214The following relations also need to be made:
215```
216juju add-relation nova-cloud-controller mysql
217juju add-relation nova-cloud-controller rabbitmq-server
218juju add-relation nova-cloud-controller glance
219juju add-relation nova-cloud-controller keystone
220juju add-relation nova-compute mysql
221juju add-relation nova-compute rabbitmq-server
222juju add-relation nova-compute glance
223juju add-relation nova-compute nova-cloud-controller
224juju add-relation glance mysql
225juju add-relation glance keystone
226juju add-relation cinder keystone
227juju add-relation cinder mysql
228juju add-relation cinder rabbitmq-server
229juju add-relation cinder nova-cloud-controller
230juju add-relation openstack-dashboard keystone
231juju add-relation swift-proxy swift-storage
232juju add-relation swift-proxy keystone
233```
234Finally, the output of juju status should show all the relations as complete. The OpenStack cloud is now running, but it needs to be populated with some additional components before it is ready for use.
235
236
237
238
239##Preparing OpenStack for use
240
241###Configuring access to OpenStack
242
243
244
245The configuration data for OpenStack can be fetched by reading the configuration file generated by the Keystone service. You can also copy this information by logging in to the Horizon (OpenStack Dashboard) service and examining the configuration there. However, we actually need only a few bits of information. The following bash script can be run to extract the relevant information:
246
247```
248#!/bin/bash
249
250set -e
251
252KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | xargs host | grep -v alias | awk '{ print $4 }'`
253KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 "sudo cat /etc/keystone/keystone.conf | grep admin_token" | sed -e '/^M/d' -e 's/.$//' | awk '{ print $3 }'`
254
255echo "Keystone IP: [${KEYSTONE_IP}]"
256echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]"
257
258cat << EOF > ./nova.rc
259export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/
260export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN}
261export OS_AUTH_URL=http://${KEYSTONE_IP}:35357/v2.0/
262export OS_USERNAME=admin
263export OS_PASSWORD=openstack
264export OS_TENANT_NAME=admin
265EOF
266
267juju scp ./nova.rc nova-cloud-controller/0:~
268```
269This script extracts the required information and then copies the file to the instance running the nova-cloud-controller.
270Before we run any nova or glance commands, we load the file we just created:
271
272```
273$ source ./nova.rc
274$ nova endpoints
275```
276
277At this point the output of nova endpoints should show the information of all the available OpenStack endpoints.
278
279### Install the Ubuntu Cloud Image
280
281In order for OpenStack to create instances in its cloud, it needs to have access to relevant images:
282$ mkdir ~/iso
283$ cd ~/iso
284$ wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
285
286###Import the Ubuntu Cloud Image into Glance
287!!!Note: glance comes with the package glance-client, which may need to be installed on the machine where you plan to run the command.
288
289```
290apt-get install glance-client
291glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img
292```
293###Create OpenStack private network
294Note: nova-manage can be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the node we run the following command:
295
296```
297juju ssh nova-cloud-controller/0
298
299sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=T --bridge_interface=eth0 --bridge=br100
300```
301
302To make sure that we have created the network we can now run the following command:
303
304```
305sudo nova-manage network list
306```
307
308### Create OpenStack public network
309```
310sudo nova-manage floating create --ip_range=1.1.21.64/26
311sudo nova-manage floating list
312```
313Allow ping and ssh access by adding rules for them to the default security group.
314Note: The following commands are run from a machine where the python-novaclient package is installed, within a session where the nova.rc file created above has been loaded.
315
316```
317nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
318nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
319```
320
321###Create and register the ssh keys in OpenStack
322Generate a default keypair
323```
324ssh-keygen -t rsa -f ~/.ssh/admin-key
325```
326###Copy the public key into Nova
327We will name it admin-key:
328Note: In the precise version of python-novaclient the command works with --pub_key instead of --pub-key
329
330```
331nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key
332```
333And make sure it’s been successfully created:
334```
335nova keypair-list
336```
337
338###Create a test instance
339We created an image with glance before. Now we need the image ID to start our first instance. The ID can be found with this command:
340```
341nova image-list
342```
343
344Note: we can also use the command glance image-list
345###Boot the instance:
346
347```
348nova boot --flavor=m1.small --image=< image_id_from_glance_index > --key-name admin-key test-server1
349```
350
351###Add a floating IP to the new instance
352First we allocate a floating IP from the ones we created above:
353
354```
355nova floating-ip-create
356```
357
358Then we associate the floating IP obtained above to the new instance:
359
360```
361nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65
362```
363
364
365### Create and attach a Cinder volume to the instance
366Note: All these steps can be also done through the Horizon Web UI
367
368We make sure that cinder works by creating a 1GB volume and attaching it to the VM:
369
370```
371cinder create --display_name test-cinder1 1
372```
373
374Get the ID of the volume with cinder list:
375
376```
377cinder list
378```
379
380Attach it to the VM as vdb
381
382```
383nova volume-attach test-server1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb
384```
385
386Now we should be able to ssh into the VM test-server1 from a server with the private key we created above and see that vdb appears in /proc/partitions.
387
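A quick way to verify this, assuming the floating IP associated above (1.1.21.65 in this example) and the default `ubuntu` user of the Ubuntu cloud image:

```
# Log in with the keypair registered earlier and confirm the attached volume is visible
ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65 "cat /proc/partitions"
```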
388
389
390
391[troubleshooting]
392[oog]: http://docs.openstack.org/ops/
393[MAAS tags]
394[openstack-config.yaml]
395[ceph]: http://ceph.com/docs/master/dev/mon-bootstrap/
3960
=== removed file 'landcsape.md'
--- landcsape.md 2014-03-24 13:49:58 +0000
+++ landcsape.md 1970-01-01 00:00:00 +0000
@@ -1,1297 +0,0 @@
1#Managing OpenStack with Landscape
2
3##About Landscape
4Landscape is a system management tool designed to let you easily manage multiple Ubuntu systems - up to 40,000 with a single Landscape instance. From a single dashboard you can apply package updates and perform other administrative tasks on many machines. You can categorize machines by group, and manage each group separately. You can make changes to targeted machines even when they are offline; the changes will be applied next time they start. Landscape lets you create scripts to automate routine work such as starting and stopping services and performing backups. It lets you use both common Ubuntu repositories and any custom repositories you may create for your own computers. Landscape is particularly adept at security updates; it can highlight newly available packages that involve security fixes so they can be applied quickly. You can use Landscape as a hosted service as part of Ubuntu Advantage, or run it on premises via Landscape Dedicated Server.
5
6##Ubuntu Advantage
7Ubuntu Advantage comprises systems management tools, technical support, access to online resources and support engineers, training, and legal assurance to keep organizations on top of their Ubuntu server, desktop, and cloud deployments. Advantage provides subscriptions at various support levels to help organizations maintain the level of support they need.
8
9##Concepts
10
11###Tags
12
13
14Landscape lets you group multiple computers by applying tags to them.
15You can group computers using any set of characteristics; architecture
16and location might be two logical tagging schemes. Tag names may use any
17combination of letters, numbers, and dashes. Each computer can be
18associated with multiple tags. There is no menu choice for tags; rather,
19you can select multiple computers under the COMPUTERS menu and apply or
20remove one or more tags to all the ones you select on the INFO screen.
21If you want to specify more than one tag at a time for your selected
22computers, separate the tags by spaces.
23
24###Packages
25
26In Linux, a package is a group of related files for an application that
27make it easy to install, upgrade, and remove the application. You can
28manage packages from the PACKAGES menu under COMPUTERS.
29
30###Repositories
31
32Linux distributions like Ubuntu use repositories to hold packages you
33can install on managed computers. While Ubuntu has [several
34repositories](https://help.ubuntu.com/community/Repositories/Ubuntu/)
35that anyone can access, you can also maintain your own repositories on
36your network. This can be useful when you want to maintain packages with
37different versions from those in the community repositories, or if
38you've packaged in-house software for installation. Landscape's [12.09
39release
40notes](https://help.landscape.canonical.com/LDS/ReleaseNotes12.09#Repository_Management)
41contain a quick tutorial about repository management.
42
43###Upgrade profiles
44
45An upgrade profile defines a schedule for the times when upgrades are to
46be automatically installed on the machines associated with a specific
47access group. You can associate zero or more computers with each upgrade
48profile via tags to install packages on those computers. You can also
49associate an upgrade profile with an access group, which limits its use
50to only computers within the specified access group. You can manage
51upgrade profiles from the UPGRADE PROFILES link in the PROFILES choice
52under your account.
53
54###Package profiles
55
56A package profile, or meta-package, comprises a set of one or more
57packages, including their dependencies and conflicts (generally called
58constraints), that you can manage as a group. Package profiles specify
59sets of packages that associated systems should always get, or never
60get. You can associate zero or more computers with each package profile
61via tags to install packages on those computers. You can also associate
62a package profile with an access group, which limits its use to only
63computers within the specified access group. You can manage package
64profiles from the Package Profiles link in the PROFILES menu under your
65account.
66
67###Removal profiles
68
69A removal profile defines a maximum number of days that a computer can
70go without exchanging data with the Landscape server before it is
71automatically removed. If more days pass than the profile's "Days
72without exchange", that computer will automatically be removed and the
73license seat it held will be released. This helps Landscape keep license
74seats open and ensure Landscape is not tracking stale or retired
75computer data for long periods of time. You can associate zero or more
76computers with each removal profile via tags to ensure those computers
77are governed by this removal profile. You can also associate a removal
78profile with an access group, which limits its use to only computers
79within the specified access group. You can manage removal profiles from
80the REMOVAL PROFILES link in the PROFILES choice under your account.
81
82Scripts
83-------
84
85Landscape lets you run scripts on the computers you manage in your
86account. The scripts may be in any language, as long as an interpreter
87for that language is present on the computers on which they are to run.
88You can maintain a library of scripts for common tasks. You can manage
89scripts from the STORED SCRIPTS menu under your account, and run them
90against computers from the SCRIPTS menu under COMPUTERS.
91
92Administrators
93--------------
94
95Administrators are people who are authorized to manage computers using
96Landscape. You can manage administrators from the ADMINISTRATORS menu
97under your account.
98
99Access Groups
100-------------
101
102Landscape lets administrators limit administrative rights on computers
103by assigning them to logical groupings called access groups. Each
104computer can be in only one access group. Typical access groups might be
105constructed around organizational units or departments, locations, or
106hardware architecture. You can manage access groups from the ACCESS
107GROUPS menu under your account; read about [how to create access
108groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#creatingaccessgroups "Creating access groups"),
109[add computers to access
110groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#addingtoaccessgroups "Adding computers to access groups"),
111and [associate administrators with access
112groups](https://landscape.canonical.com/static/doc/user-guide/ch05.html#associatingadmins "Associating roles with access groups").
113It is good policy to come up with and document a naming convention for
114access groups before you deploy Landscape, so that all administrators
115understand what constitutes an acceptable logical grouping for your
116organization.
117
118Roles
119-----
120
121For each access group, you can assign management privileges to
122administrators via the use of roles. Administrators may be associated
123with multiple roles, and roles may be associated with many access
124groups. You can manage roles from the ROLES menu under your account.
125
126Alerts
127------
128
129Landscape uses alerts to notify administrators of conditions that
130require attention. You can manage alerts from the ALERTS menu under your
131account.
132
133Provisioning
134------------
135
136Landscape lets you provision new computers starting with bare hardware -
137what Canonical calls metal as a service. With MAAS, you provision new
138hardware only as you need it, just as you would bring new cloud
139instances online. [The Ubuntu wiki explains how to set up
140MAAS](https://wiki.ubuntu.com/ServerTeam/MAAS/).
141
142You can provision one or more new computers from the PROVISIONING menu
143under your account.
144
145
146##Managing Landscape
148
149
150Prerequisites
151-------------
152
153You can install Landscape Dedicated Server (LDS) on any server with a
154dual-core processor running at 2.0GHz or higher, at least 4GB of RAM,
155and 5GB of disk space. The operating system must be Ubuntu Server 12.04
156LTS x86\_64 or higher. You must also have PostgreSQL installed and
157network ports 80/tcp (http) and 443/tcp (https) open. You can optionally
158open port 22/tcp (ssh) as well for general server maintenance.
159
160Installing
161----------
162
163Refer to the [Recommended
164Deployment](https://help.landscape.canonical.com/LDS/RecommendedDeployment)
165guide in the Landscape wiki for all the information you need to install,
166configure, and start Landscape and the dependent services it relies on.
167
168Upgrading Landscape
169-------------------
170
171The process of upgrading an installed version of Landscape is
172[documented in the Landscape
173wiki](https://help.landscape.canonical.com/LDS/ReleaseNotes#Upgrading).
174
175Backing up and restoring
176------------------------
177
178Landscape uses several PostgreSQL databases and needs to keep them
179consistent. For example, if you remove a computer from Landscape
180management, more than one database needs to be updated. Running a
181utility like `pg_dumpall`{.code} won't guarantee the consistency of the
182backup, because while the dump process does lock all tables in the
183database being backed up, it doesn't care about other databases. The
184result will likely be an inconsistent backup.
185
186Instead, you should perform hot backups by using write-ahead log files
187from PostgreSQL and/or filesystem snapshots in order to take a
188consistent image of all the databases at a given time, or, if you can
189afford some down time, run offline backups. To run offline backups,
190disable the Landscape service and run a normal backup with
191`pg_dump`{.code} or `pg_dumpall`{.code}. Offline backup can take just a
192few minutes for databases at smaller sites, or about half an hour for a
193database with several thousand computers. Bear in mind that Landscape
194can be deployed using several servers, so when you are taking the
195offline backup route, remember to disable all the Landscape services on
196all server machines. See the [PostgreSQL documentation on backup and
197restore](http://www.postgresql.org/docs/9.1/interactive/backup.html) for
198detailed instructions.
199
200In addition to the Landscape databases, make sure you back up certain
201additional important files:
202
203- `/etc/landscape`{.filename}: configuration files and the LDS license
204
205- `/etc/default/landscape-server`{.filename}: file to configure which
206 services will start on this machine
207
208- `/var/lib/landscape/hash-id-databases`{.filename}: these files are
209 recreated by a weekly cron job, which can take several minutes to
210 run, so backing them up can save time
211
212- `/etc/apache2/sites-available/`{.filename}: the Landscape Apache
213 vhost configuration file, usually named after the fully qualified
214 domain name of the server
215
216- `/etc/ssl/certs/`{.filename}: the Landscape server X509 certificate
217
218- `/etc/ssl/private/`{.filename}: the Landscape server X509 key file
219
220- `/etc/ssl/certs/landscape_server_ca.crt`{.filename}: if in use, this
221 is the CA file for the internal CA used to issue the Landscape
222 server certificates
223
224- `/etc/postgresql/8.4/main/`{.filename}: PostgreSQL configuration
225 files - in particular, postgresql.conf for tuning and pg\_hba.conf
226 for access rules. These files may be in a separate host, dedicated
227 to the database. Use subdirectory 9.1 for PostgreSQL version 9.1,
228 etc.
229
230- `/var/log/landscape`{.filename}: all LDS log files
231
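As a rough sketch of how an offline backup run ties these pieces together: once all Landscape services have been stopped on every Landscape server machine, the databases and the files listed above can be captured as follows. The local `postgres` user and the output paths are assumptions for illustration:

```
# Dump all PostgreSQL databases while Landscape is stopped
sudo -u postgres pg_dumpall > ~/landscape-db-$(date +%F).sql

# Archive the configuration, certificates and keys listed above
sudo tar czf ~/landscape-config-$(date +%F).tar.gz \
    /etc/landscape /etc/default/landscape-server \
    /etc/ssl/certs/landscape_server_ca.crt
```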
232Log files
233---------
234
235Landscape generates several log files in
236`/var/log/landscape`{.filename}:
237
238- `update-alerts`{.filename}: output of that cron job. Used to
239 determine which computers are offline
240
241- `process-alerts`{.filename}: output of that cron job. Used to
242 trigger alerts and send out alert email messages
243
244- `process-profiles`{.filename}: output of that cron job. Used to
245 process upgrade profiles
246
247- `sync_lds_releases`{.filename}: output of that cron job. Used to
248 check for new LDS releases
249
250- `maintenance`{.filename}: output of that cron job. Removes old
251 monitoring data and performs other maintenance tasks
252
253- `update_security_db`{.filename}: output of that cron job. Checks for
254 new Ubuntu Security Notices
255
256- `maas-poller`{.filename}: output of that cron job. Used to check the
257 status of MAAS tasks
258
259- `package-retirement`{.filename}: output of that (optional) cron job.
260 Moves unreferenced packages to another table in the database to
261 speed up package queries
262
263- `appserver-N`{.filename}: output of the application server N, where
264 N (here and below) is a number that distinguishes between multiple
265 instances that may be running
266
267- `appserver_access-N`{.filename}: access log for application server
268 N; the application server handles the web-based user interface
269
270- `message_server-N`{.filename}: output of message server N; the
271 message server handles communication between the clients and the
272 server
273
274- `message_server_access-N`{.filename}: access log for message server
275 N
276
277- `pingserver-N`{.filename}: output of pingserver N; the pingserver
278 tracks client heartbeats to watch for unresponsive clients
279
280- `pingtracker-N`{.filename}: complementary log for pingserver N
281 detailing how the algorithm is working
282
283- `async-frontend-N`{.filename}: log for async-frontend server N; the
284 async front end delivers AJAX-style content to the web user
285 interface
286
287- `api-N`{.filename}: log for API server N; the API services handles
288 requests from landscape-api clients
289
290- `combo-loader-N`{.filename}: log for combo-loader server N, which is
291 responsible for delivering CSS and JavaScript
292
293- `job-handler-N`{.filename}: log for job-handler server N; the job
294 handler service controls individual back-end tasks on the server
295
296- `package-upload-N`{.filename}: output of package-upload server N,
297 which is used in repository management for upload pockets, which are
298 repositories that hold packages that are uploaded to them by
299 authorized users
300
301
302
303##Managing administrators
304
305
306Administrators are people who are authorized to manage computers using
307Landscape. You can manage administrators from the ADMINISTRATORS menu
308under your account.
309
310**Figure 4.1.**
311
312![image](./Chapter%A04.%A0Managing%20administrators_files/manageadmin1.png)
313
314
315On this page, the upper part of the screen shows a list of existing
316administrators and their email addresses. You may create as many as
3171,000 administrators, or as few as one. If you're running Landscape
318Dedicated Server, the first user you create automatically become an
319administrator of your account. If you're using the hosted version of
320Landscape, Canonical sends you an administrator invitation when your
321account is created. After that, you must create additional
322administrators yourself.
323
324Inviting administrators
325-----------------------
326
327You make someone an administrator by sending that person an invitation
328via email. On the administrator management page, specify the person's
329name and email address, and the administration role you wish the person
330to have. The choices that appear in the drop-down list are the roles
331defined under the ROLES menu. See the discussion of roles below.
332
333When you have specified contact and role information, click on the
334Invite button to send an invitation. The message will go out from the
335email address you specified during Landscape setup.
336
337Users who receive an invitation will see an HTML link in the email
338message. Clicking on the link takes them to a page where they are asked
339to log in to Landscape or create an Ubuntu Single Sign-on account. Once
340they do so, they gain the administrator privileges associated with the
341role to which they've been assigned.
342
343It's worth noting that an administrator invitation is like a blank check
344- the first person who clicks on the link and submits information can
345become an administrator, even if it's not the person with the name and
346email address to which you sent the invitation. Therefore, take care to
347keep track of the status of administrator invitations.
348
349Disabling administrators
350------------------------
351
352To disable one or more administrators, tick the check boxes next to
353their names, then click on the Disable button. The administrator is
354permanently disabled and will no longer show up in Landscape. Though
355this operation cannot be reversed, you can send another invitation to
356the same email address.
357
358Roles
359-----
360
361A role is a set of permissions that determine what operations an
362administrator can perform. When you define a role, you also specify a
363set of one or more access groups to which the role applies.
364
365Available permissions:
366
367- View computers
368
369- Manage computer
370
371- Add computers to an access group
372
373- Remove computers from an access group
374
375- Manage pending computers (In the hosted version of Landscape,
376 pending computers are clients that have been set up with the
377 landscape-config tool but have not yet been accepted or rejected by
378 an administrator. Landscape Dedicated Server never needs to have
379 pending computers once it is set up and has an account password
380 assigned.)
381
382- View scripts
383
384- Manage scripts
385
386- View upgrade profiles
387
388- Manage upgrade profiles
389
390- View package profiles
391
392- Manage package profiles
393
394By specifying different permission levels and different access groups to
395which they apply, you can create roles and associate them with
396administrators to get a very granular level of control over sets of
397computers.
398
399
400
401
402##Access groups
403
404
The diff has been truncated for viewing.
