Merge lp:~openstack-charmers/charms/precise/rabbitmq-server/ha-support into lp:charms/rabbitmq-server
Proposed by: James Page
Status: Merged
Merged at revision: 41
Proposed branch: lp:~openstack-charmers/charms/precise/rabbitmq-server/ha-support
Merge into: lp:charms/rabbitmq-server
Diff against target: 1867 lines (+1486/-255), 17 files modified:
  .project (+17/-0)
  .pydevproject (+8/-0)
  config.yaml (+37/-0)
  hooks/config-changed (+0/-70)
  hooks/lib/ceph_utils.py (+256/-0)
  hooks/lib/cluster_utils.py (+130/-0)
  hooks/lib/openstack_common.py (+230/-0)
  hooks/lib/utils.py (+295/-0)
  hooks/rabbit_utils.py (+176/-0)
  hooks/rabbitmq-common (+0/-56)
  hooks/rabbitmq-relations (+0/-127)
  hooks/rabbitmq_server_relations.py (+302/-0)
  metadata.yaml (+7/-1)
  revision (+1/-1)
  scripts/add_to_cluster (+13/-0)
  scripts/remove_from_cluster (+4/-0)
  templates/rabbitmq.config (+10/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/rabbitmq-server/ha-support
Related bugs: none
Reviewer: charmers (review pending)
Review via email: mp+165060@code.launchpad.net
Description of the change
This branch includes:
1) A rewrite of the hooks from bash to Python, using helpers from the OpenStack charm-helpers project (lp:openstack-charm-helpers).
2) Support for active/passive HA using a shared Ceph block device and the hacluster subordinate charm.
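
The core of the HA change is that shared-state operations are gated on leadership. A condensed sketch of the pattern, drawn from hooks/rabbitmq_server_relations.py and hooks/lib/cluster_utils.py in the diff below (res_rabbitmq_vip is the Pacemaker resource the charm registers over the ha relation):

```python
# Condensed sketch of the leader-gating pattern used by the rewritten
# hooks; the full version is in hooks/rabbitmq_server_relations.py below.
import lib.cluster_utils as cluster
import lib.utils as utils

def amqp_changed(relation_id=None, remote_unit=None):
    # Under hacluster, only the CRM leader (or, before clustering
    # completes, the oldest peer) may touch shared state: users,
    # vhosts, and the RBD-backed /var/lib/rabbitmq.
    if not cluster.eligible_leader('res_rabbitmq_vip'):
        utils.juju_log('INFO', 'Deferring amqp_changed to leader.')
        return
    # ...leader-only work: create the vhost/user, then relation-set
    # the password and, once clustered, the VIP for clients to use.
```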
Andreas Hasenack (ahasenack) wrote:
James Page (james-page) wrote:
Good spot, Andreas.
We will get that fixed up so we don't regress the hook interface as it stands today.
47. By James Page
Fixup hooks to restore previous behaviour for non-clustered services
Preview Diff
1 | === added file '.project' |
2 | --- .project 1970-01-01 00:00:00 +0000 |
3 | +++ .project 2013-06-06 15:35:31 +0000 |
4 | @@ -0,0 +1,17 @@ |
5 | +<?xml version="1.0" encoding="UTF-8"?> |
6 | +<projectDescription> |
7 | + <name>rabbitmq-server</name> |
8 | + <comment></comment> |
9 | + <projects> |
10 | + </projects> |
11 | + <buildSpec> |
12 | + <buildCommand> |
13 | + <name>org.python.pydev.PyDevBuilder</name> |
14 | + <arguments> |
15 | + </arguments> |
16 | + </buildCommand> |
17 | + </buildSpec> |
18 | + <natures> |
19 | + <nature>org.python.pydev.pythonNature</nature> |
20 | + </natures> |
21 | +</projectDescription> |
22 | |
23 | === added file '.pydevproject' |
24 | --- .pydevproject 1970-01-01 00:00:00 +0000 |
25 | +++ .pydevproject 2013-06-06 15:35:31 +0000 |
26 | @@ -0,0 +1,8 @@ |
27 | +<?xml version="1.0" encoding="UTF-8" standalone="no"?> |
28 | +<?eclipse-pydev version="1.0"?><pydev_project> |
29 | +<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property> |
30 | +<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property> |
31 | +<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH"> |
32 | +<path>/rabbitmq-server/hooks</path> |
33 | +</pydev_pathproperty> |
34 | +</pydev_project> |
35 | |
36 | === modified file 'config.yaml' |
37 | --- config.yaml 2013-03-01 17:36:31 +0000 |
38 | +++ config.yaml 2013-06-06 15:35:31 +0000 |
39 | @@ -27,3 +27,40 @@ |
40 | juju-myservice-0 |
41 | If you're running multiple environments with the same services in them |
42 | this allows you to differentiate between them. |
43 | + # HA configuration settings |
44 | + vip: |
45 | + type: string |
46 | + description: "Virtual IP to use to front rabbitmq in ha configuration" |
47 | + vip_iface: |
48 | + type: string |
49 | + default: eth0 |
50 | + description: "Network interface on which to place the Virtual IP" |
51 | + vip_cidr: |
52 | + type: int |
53 | + default: 24 |
54 | + description: "Netmask that will be used for the Virtual IP" |
55 | + ha-bindiface: |
56 | + type: string |
57 | + default: eth0 |
58 | + description: | |
59 | + Default network interface on which the HA cluster will bind for |
60 | + communication with the other members of the HA cluster. |
61 | + ha-mcastport: |
62 | + type: int |
63 | + default: 5406 |
64 | + description: | |
65 | + Default multicast port number that will be used to communicate between |
66 | + HA Cluster nodes. |
67 | + rbd-size: |
68 | + type: string |
69 | + default: 5G |
70 | + description: | |
71 | + Default rbd storage size to create when setting up block storage. |
72 | + This value should be specified in GB (e.g. 100G). |
73 | + rbd-name: |
74 | + type: string |
75 | + default: rabbitmq1 |
76 | + description: | |
77 | + The name that will be used to create the Ceph RBD image. If the |
78 | + image name exists in Ceph, it will be re-used and the data will be |
79 | + overwritten. |
80 | |
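
For orientation, the hooks added later in this diff read these options through the config-get wrapper in hooks/lib/utils.py; roughly:

```python
# Rough sketch of how ha_joined() (later in this diff) consumes the
# new options; utils.config_get() shells out to
# `config-get --format json` and indexes the result.
import lib.utils as utils

vip = utils.config_get('vip')              # no default: None until set
vip_iface = utils.config_get('vip_iface')  # defaults to eth0
vip_cidr = utils.config_get('vip_cidr')    # defaults to 24
rbd_name = utils.config_get('rbd-name')    # defaults to rabbitmq1

if vip is None:
    # ha_joined() treats missing HA settings as a hard error.
    utils.juju_log('ERROR', 'vip is required for an HA configuration.')
```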
81 | === modified symlink 'hooks/amqp-relation-changed' |
82 | === target changed u'rabbitmq-relations' => u'rabbitmq_server_relations.py' |
83 | === removed symlink 'hooks/amqp-relation-joined' |
84 | === target was u'rabbitmq-relations' |
85 | === added symlink 'hooks/ceph-relation-changed' |
86 | === target is u'rabbitmq_server_relations.py' |
87 | === added symlink 'hooks/ceph-relation-joined' |
88 | === target is u'rabbitmq_server_relations.py' |
89 | === modified symlink 'hooks/cluster-relation-changed' |
90 | === target changed u'rabbitmq-relations' => u'rabbitmq_server_relations.py' |
91 | === modified symlink 'hooks/cluster-relation-joined' |
92 | === target changed u'rabbitmq-relations' => u'rabbitmq_server_relations.py' |
93 | === added symlink 'hooks/config-changed' |
94 | === target is u'rabbitmq_server_relations.py' |
95 | === removed file 'hooks/config-changed' |
96 | --- hooks/config-changed 2012-11-21 15:30:39 +0000 |
97 | +++ hooks/config-changed 1970-01-01 00:00:00 +0000 |
98 | @@ -1,70 +0,0 @@ |
99 | -#!/bin/bash |
100 | -set -eu |
101 | - |
102 | -juju-log "rabbitmq-server: Firing config hook" |
103 | - |
104 | -export HOME=/root # (HOME is not set on first run) |
105 | -RABBIT_PLUGINS=/usr/lib/rabbitmq/lib/rabbitmq_server-*/sbin/rabbitmq-plugins |
106 | -if [ "`config-get management_plugin`" == "True" ]; then |
107 | - $RABBIT_PLUGINS enable rabbitmq_management |
108 | - open-port 55672/tcp |
109 | -else |
110 | - $RABBIT_PLUGINS disable rabbitmq_management |
111 | - close-port 55672/tcp |
112 | -fi |
113 | - |
114 | -ssl_enabled=`config-get ssl_enabled` |
115 | - |
116 | -cd /etc/rabbitmq |
117 | - |
118 | -new_config=`mktemp /etc/rabbitmq/.rabbitmq.config.XXXXXX` |
119 | -chgrp rabbitmq "$new_config" |
120 | -chmod g+r "$new_config" |
121 | -exec 3> "$new_config" |
122 | - |
123 | -cat >&3 <<EOF |
124 | -[ |
125 | - {rabbit, [ |
126 | -EOF |
127 | - |
128 | -ssl_key_file=/etc/rabbitmq/rabbit-server-privkey.pem |
129 | -ssl_cert_file=/etc/rabbitmq/rabbit-server-cert.pem |
130 | - |
131 | -if [ "$ssl_enabled" == "True" ]; then |
132 | - umask 027 |
133 | - config-get ssl_key > "$ssl_key_file" |
134 | - config-get ssl_cert > "$ssl_cert_file" |
135 | - chgrp rabbitmq "$ssl_key_file" "$ssl_cert_file" |
136 | - if [ ! -s "$ssl_key_file" ]; then |
137 | - juju-log "ssl_key not set - can't configure SSL" |
138 | - exit 0 |
139 | - fi |
140 | - if [ ! -s "$ssl_cert_file" ]; then |
141 | - juju-log "ssl_cert not set - can't configure SSL" |
142 | - exit 0 |
143 | - fi |
144 | - cat >&3 <<EOF |
145 | - {ssl_listeners, [`config-get ssl_port`]}, |
146 | - {ssl_options, [ |
147 | - {certfile,"$ssl_cert_file"}, |
148 | - {keyfile,"$ssl_key_file"} |
149 | - ]}, |
150 | - open-port `config-get ssl_port`/tcp |
151 | -EOF |
152 | -fi |
153 | - |
154 | -cat >&3 <<EOF |
155 | - {tcp_listeners, [5672]} |
156 | - ]} |
157 | -]. |
158 | -EOF |
159 | - |
160 | -exec 3>&- |
161 | - |
162 | -if [ -f rabbitmq.config ]; then |
163 | - mv rabbitmq.config{,.bak} |
164 | -fi |
165 | - |
166 | -mv "$new_config" rabbitmq.config |
167 | - |
168 | -/etc/init.d/rabbitmq-server restart |
169 | |
170 | === added symlink 'hooks/ha-relation-changed' |
171 | === target is u'rabbitmq_server_relations.py' |
172 | === added symlink 'hooks/ha-relation-joined' |
173 | === target is u'rabbitmq_server_relations.py' |
174 | === modified symlink 'hooks/install' |
175 | === target changed u'rabbitmq-relations' => u'rabbitmq_server_relations.py' |
176 | === added directory 'hooks/lib' |
177 | === added file 'hooks/lib/__init__.py' |
178 | === added file 'hooks/lib/ceph_utils.py' |
179 | --- hooks/lib/ceph_utils.py 1970-01-01 00:00:00 +0000 |
180 | +++ hooks/lib/ceph_utils.py 2013-06-06 15:35:31 +0000 |
181 | @@ -0,0 +1,256 @@ |
182 | +# |
183 | +# Copyright 2012 Canonical Ltd. |
184 | +# |
185 | +# This file is sourced from lp:openstack-charm-helpers |
186 | +# |
187 | +# Authors: |
188 | +# James Page <james.page@ubuntu.com> |
189 | +# Adam Gandelman <adamg@ubuntu.com> |
190 | +# |
191 | + |
192 | +import commands |
193 | +import subprocess |
194 | +import os |
195 | +import shutil |
196 | +import lib.utils as utils |
197 | + |
198 | +KEYRING = '/etc/ceph/ceph.client.%s.keyring' |
199 | +KEYFILE = '/etc/ceph/ceph.client.%s.key' |
200 | + |
201 | +CEPH_CONF = """[global] |
202 | + auth supported = %(auth)s |
203 | + keyring = %(keyring)s |
204 | + mon host = %(mon_hosts)s |
205 | +""" |
206 | + |
207 | + |
208 | +def execute(cmd): |
209 | + subprocess.check_call(cmd) |
210 | + |
211 | + |
212 | +def execute_shell(cmd): |
213 | + subprocess.check_call(cmd, shell=True) |
214 | + |
215 | + |
216 | +def install(): |
217 | + ceph_dir = "/etc/ceph" |
218 | + if not os.path.isdir(ceph_dir): |
219 | + os.mkdir(ceph_dir) |
220 | + utils.install('ceph-common') |
221 | + |
222 | + |
223 | +def rbd_exists(service, pool, rbd_img): |
224 | + (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\ |
225 | + (service, pool)) |
226 | + return rbd_img in out |
227 | + |
228 | + |
229 | +def create_rbd_image(service, pool, image, sizemb): |
230 | + cmd = [ |
231 | + 'rbd', |
232 | + 'create', |
233 | + image, |
234 | + '--size', |
235 | + str(sizemb), |
236 | + '--id', |
237 | + service, |
238 | + '--pool', |
239 | + pool |
240 | + ] |
241 | + execute(cmd) |
242 | + |
243 | + |
244 | +def pool_exists(service, name): |
245 | + (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) |
246 | + return name in out |
247 | + |
248 | + |
249 | +def create_pool(service, name): |
250 | + cmd = [ |
251 | + 'rados', |
252 | + '--id', |
253 | + service, |
254 | + 'mkpool', |
255 | + name |
256 | + ] |
257 | + execute(cmd) |
258 | + |
259 | + |
260 | +def keyfile_path(service): |
261 | + return KEYFILE % service |
262 | + |
263 | + |
264 | +def keyring_path(service): |
265 | + return KEYRING % service |
266 | + |
267 | + |
268 | +def create_keyring(service, key): |
269 | + keyring = keyring_path(service) |
270 | + if os.path.exists(keyring): |
271 | + utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) |
272 | + cmd = [ |
273 | + 'ceph-authtool', |
274 | + keyring, |
275 | + '--create-keyring', |
276 | + '--name=client.%s' % service, |
277 | + '--add-key=%s' % key |
278 | + ] |
279 | + execute(cmd) |
280 | + utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) |
281 | + |
282 | + |
283 | +def create_key_file(service, key): |
284 | + # create a file containing the key |
285 | + keyfile = keyfile_path(service) |
286 | + if os.path.exists(keyfile): |
287 | + utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) |
288 | + fd = open(keyfile, 'w') |
289 | + fd.write(key) |
290 | + fd.close() |
291 | + utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) |
292 | + |
293 | + |
294 | +def get_ceph_nodes(): |
295 | + hosts = [] |
296 | + for r_id in utils.relation_ids('ceph'): |
297 | + for unit in utils.relation_list(r_id): |
298 | + hosts.append(utils.relation_get('private-address', |
299 | + unit=unit, rid=r_id)) |
300 | + return hosts |
301 | + |
302 | + |
303 | +def configure(service, key, auth): |
304 | + create_keyring(service, key) |
305 | + create_key_file(service, key) |
306 | + hosts = get_ceph_nodes() |
307 | + mon_hosts = ",".join(map(str, hosts)) |
308 | + keyring = keyring_path(service) |
309 | + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
310 | + ceph_conf.write(CEPH_CONF % locals()) |
311 | + modprobe_kernel_module('rbd') |
312 | + |
313 | + |
314 | +def image_mapped(image_name): |
315 | + (rc, out) = commands.getstatusoutput('rbd showmapped') |
316 | + return image_name in out |
317 | + |
318 | + |
319 | +def map_block_storage(service, pool, image): |
320 | + cmd = [ |
321 | + 'rbd', |
322 | + 'map', |
323 | + '%s/%s' % (pool, image), |
324 | + '--user', |
325 | + service, |
326 | + '--secret', |
327 | + keyfile_path(service), |
328 | + ] |
329 | + execute(cmd) |
330 | + |
331 | + |
332 | +def filesystem_mounted(fs): |
333 | + return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
334 | + |
335 | + |
336 | +def make_filesystem(blk_device, fstype='ext4'): |
337 | + utils.juju_log('INFO', |
338 | + 'ceph: Formatting block device %s as filesystem %s.' %\ |
339 | + (blk_device, fstype)) |
340 | + cmd = ['mkfs', '-t', fstype, blk_device] |
341 | + execute(cmd) |
342 | + |
343 | + |
344 | +def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): |
345 | + # mount block device into /mnt |
346 | + cmd = ['mount', '-t', fstype, blk_device, '/mnt'] |
347 | + execute(cmd) |
348 | + |
349 | + # copy data to /mnt |
350 | + try: |
351 | + copy_files(data_src_dst, '/mnt') |
352 | + except: |
353 | + pass |
354 | + |
355 | + # umount block device |
356 | + cmd = ['umount', '/mnt'] |
357 | + execute(cmd) |
358 | + |
359 | + _dir = os.stat(data_src_dst) |
360 | + uid = _dir.st_uid |
361 | + gid = _dir.st_gid |
362 | + |
363 | + # re-mount where the data should originally be |
364 | + cmd = ['mount', '-t', fstype, blk_device, data_src_dst] |
365 | + execute(cmd) |
366 | + |
367 | + # ensure original ownership of new mount. |
368 | + cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] |
369 | + execute(cmd) |
370 | + |
371 | + |
372 | +# TODO: re-use |
373 | +def modprobe_kernel_module(module): |
374 | + utils.juju_log('INFO', 'Loading kernel module') |
375 | + cmd = ['modprobe', module] |
376 | + execute(cmd) |
377 | + cmd = 'echo %s >> /etc/modules' % module |
378 | + execute_shell(cmd) |
379 | + |
380 | + |
381 | +def copy_files(src, dst, symlinks=False, ignore=None): |
382 | + for item in os.listdir(src): |
383 | + s = os.path.join(src, item) |
384 | + d = os.path.join(dst, item) |
385 | + if os.path.isdir(s): |
386 | + shutil.copytree(s, d, symlinks, ignore) |
387 | + else: |
388 | + shutil.copy2(s, d) |
389 | + |
390 | + |
391 | +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
392 | + blk_device, fstype, system_services=[]): |
393 | + """ |
394 | + To be called from the current cluster leader. |
395 | + Ensures given pool and RBD image exists, is mapped to a block device, |
396 | + and the device is formatted and mounted at the given mount_point. |
397 | + |
398 | + If formatting a device for the first time, data existing at mount_point |
399 | + will be migrated to the RBD device before being remounted. |
400 | + |
401 | + All services listed in system_services will be stopped prior to data |
402 | + migration and restarted when complete. |
403 | + """ |
404 | + # Ensure pool, RBD image, RBD mappings are in place. |
405 | + if not pool_exists(service, pool): |
406 | + utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) |
407 | + create_pool(service, pool) |
408 | + |
409 | + if not rbd_exists(service, pool, rbd_img): |
410 | + utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) |
411 | + create_rbd_image(service, pool, rbd_img, sizemb) |
412 | + |
413 | + if not image_mapped(rbd_img): |
414 | + utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') |
415 | + map_block_storage(service, pool, rbd_img) |
416 | + |
417 | + # make file system |
418 | + # TODO: What happens if for whatever reason this is run again and |
419 | + # the data is already in the rbd device and/or is mounted?? |
420 | + # When it is mounted already, it will fail to make the fs |
421 | + # XXX: This is really sketchy! Need to at least add an fstab entry |
422 | + # otherwise this hook will blow away existing data if its executed |
423 | + # after a reboot. |
424 | + if not filesystem_mounted(mount_point): |
425 | + make_filesystem(blk_device, fstype) |
426 | + |
427 | + for svc in system_services: |
428 | + if utils.running(svc): |
429 | + utils.juju_log('INFO', |
430 | + 'Stopping services %s prior to migrating '\ |
431 | + 'data' % svc) |
432 | + utils.stop(svc) |
433 | + |
434 | + place_data_on_ceph(service, blk_device, mount_point, fstype) |
435 | + |
436 | + for svc in system_services: |
437 | + utils.start(svc) |
438 | |
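
A hedged usage sketch of ensure_ceph_storage() above: the real call site, ceph_changed(), falls outside this excerpt, so the arguments here are illustrative. The pool, device, and mount values mirror the resource_params set up in ha_joined() later in the diff, and sizemb=5120 assumes the 5G rbd-size default expressed in MB.

```python
# Illustrative only: the actual invocation lives in ceph_changed(),
# beyond this excerpt. Values mirror ha_joined()'s resource_params.
import os
import lib.ceph_utils as ceph

service = os.getenv('JUJU_UNIT_NAME').split('/')[0]   # e.g. rabbitmq-server
pool = service                                        # POOL_NAME = SERVICE_NAME
rbd_img = 'rabbitmq1'                                 # the rbd-name option

ceph.ensure_ceph_storage(
    service=service,
    pool=pool,
    rbd_img=rbd_img,
    sizemb=5120,                                  # assumed: 5G in MB
    mount_point='/var/lib/rabbitmq',              # RABBIT_DIR
    blk_device='/dev/rbd/%s/%s' % (pool, rbd_img),
    fstype='ext4',
    system_services=['rabbitmq-server'])          # stopped during migration
```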
439 | === added file 'hooks/lib/cluster_utils.py' |
440 | --- hooks/lib/cluster_utils.py 1970-01-01 00:00:00 +0000 |
441 | +++ hooks/lib/cluster_utils.py 2013-06-06 15:35:31 +0000 |
442 | @@ -0,0 +1,130 @@ |
443 | +# |
444 | +# Copyright 2012 Canonical Ltd. |
445 | +# |
446 | +# This file is sourced from lp:openstack-charm-helpers |
447 | +# |
448 | +# Authors: |
449 | +# James Page <james.page@ubuntu.com> |
450 | +# Adam Gandelman <adamg@ubuntu.com> |
451 | +# |
452 | + |
453 | +from lib.utils import ( |
454 | + juju_log, |
455 | + relation_ids, |
456 | + relation_list, |
457 | + relation_get, |
458 | + get_unit_hostname, |
459 | + config_get |
460 | + ) |
461 | +import subprocess |
462 | +import os |
463 | + |
464 | + |
465 | +def is_clustered(): |
466 | + for r_id in (relation_ids('ha') or []): |
467 | + for unit in (relation_list(r_id) or []): |
468 | + clustered = relation_get('clustered', |
469 | + rid=r_id, |
470 | + unit=unit) |
471 | + if clustered: |
472 | + return True |
473 | + return False |
474 | + |
475 | + |
476 | +def is_leader(resource): |
477 | + cmd = [ |
478 | + "crm", "resource", |
479 | + "show", resource |
480 | + ] |
481 | + try: |
482 | + status = subprocess.check_output(cmd) |
483 | + except subprocess.CalledProcessError: |
484 | + return False |
485 | + else: |
486 | + if get_unit_hostname() in status: |
487 | + return True |
488 | + else: |
489 | + return False |
490 | + |
491 | + |
492 | +def peer_units(): |
493 | + peers = [] |
494 | + for r_id in (relation_ids('cluster') or []): |
495 | + for unit in (relation_list(r_id) or []): |
496 | + peers.append(unit) |
497 | + return peers |
498 | + |
499 | + |
500 | +def oldest_peer(peers): |
501 | + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
502 | + for peer in peers: |
503 | + remote_unit_no = int(peer.split('/')[1]) |
504 | + if remote_unit_no < local_unit_no: |
505 | + return False |
506 | + return True |
507 | + |
508 | + |
509 | +def eligible_leader(resource): |
510 | + if is_clustered(): |
511 | + if not is_leader(resource): |
512 | + juju_log('INFO', 'Deferring action to CRM leader.') |
513 | + return False |
514 | + else: |
515 | + peers = peer_units() |
516 | + if peers and not oldest_peer(peers): |
517 | + juju_log('INFO', 'Deferring action to oldest service unit.') |
518 | + return False |
519 | + return True |
520 | + |
521 | + |
522 | +def https(): |
523 | + ''' |
524 | + Determines whether enough data has been provided in configuration |
525 | + or relation data to configure HTTPS. |
526 | + |
527 | + returns: boolean |
528 | + ''' |
529 | + if config_get('use-https') == "yes": |
530 | + return True |
531 | + if config_get('ssl_cert') and config_get('ssl_key'): |
532 | + return True |
533 | + for r_id in relation_ids('identity-service'): |
534 | + for unit in relation_list(r_id): |
535 | + if (relation_get('https_keystone', rid=r_id, unit=unit) and |
536 | + relation_get('ssl_cert', rid=r_id, unit=unit) and |
537 | + relation_get('ssl_key', rid=r_id, unit=unit) and |
538 | + relation_get('ca_cert', rid=r_id, unit=unit)): |
539 | + return True |
540 | + return False |
541 | + |
542 | + |
543 | +def determine_api_port(public_port): |
544 | + ''' |
545 | + Determine correct API server listening port based on |
546 | + existence of HTTPS reverse proxy and/or haproxy. |
547 | + |
548 | + public_port: int: standard public port for given service |
549 | + |
550 | + returns: int: the correct listening port for the API service |
551 | + ''' |
552 | + i = 0 |
553 | + if len(peer_units()) > 0 or is_clustered(): |
554 | + i += 1 |
555 | + if https(): |
556 | + i += 1 |
557 | + return public_port - (i * 10) |
558 | + |
559 | + |
560 | +def determine_haproxy_port(public_port): |
561 | + ''' |
562 | + Description: Determine correct proxy listening port based on public IP + |
563 | + existence of HTTPS reverse proxy. |
564 | + |
565 | + public_port: int: standard public port for given service |
566 | + |
567 | + returns: int: the correct listening port for the HAProxy service |
568 | + ''' |
569 | + i = 0 |
570 | + if https(): |
571 | + i += 1 |
572 | + return public_port - (i * 10) |
573 | |
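
The port arithmetic in determine_api_port()/determine_haproxy_port() is easiest to see with concrete numbers. A standalone restatement of the offsets (the real helpers query juju for peer and HTTPS state; the port 8776 is just an example):

```python
# Standalone restatement of the offset rules above: haproxy and an
# HTTPS frontend each push the backend 10 ports below the public one.
def api_port(public_port, clustered, https):
    offset = 0
    if clustered:   # peer units present or the 'clustered' ha flag set
        offset += 1
    if https:
        offset += 1
    return public_port - offset * 10

assert api_port(8776, clustered=False, https=False) == 8776
assert api_port(8776, clustered=True, https=False) == 8766
assert api_port(8776, clustered=True, https=True) == 8756
```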
574 | === added file 'hooks/lib/openstack_common.py' |
575 | --- hooks/lib/openstack_common.py 1970-01-01 00:00:00 +0000 |
576 | +++ hooks/lib/openstack_common.py 2013-06-06 15:35:31 +0000 |
577 | @@ -0,0 +1,230 @@ |
578 | +#!/usr/bin/python |
579 | + |
580 | +# Common python helper functions used for OpenStack charms. |
581 | + |
582 | +import apt_pkg as apt |
583 | +import subprocess |
584 | +import os |
585 | + |
586 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
587 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
588 | + |
589 | +ubuntu_openstack_release = { |
590 | + 'oneiric': 'diablo', |
591 | + 'precise': 'essex', |
592 | + 'quantal': 'folsom', |
593 | + 'raring': 'grizzly', |
594 | +} |
595 | + |
596 | + |
597 | +openstack_codenames = { |
598 | + '2011.2': 'diablo', |
599 | + '2012.1': 'essex', |
600 | + '2012.2': 'folsom', |
601 | + '2013.1': 'grizzly', |
602 | + '2013.2': 'havana', |
603 | +} |
604 | + |
605 | +# The ugly duckling |
606 | +swift_codenames = { |
607 | + '1.4.3': 'diablo', |
608 | + '1.4.8': 'essex', |
609 | + '1.7.4': 'folsom', |
610 | + '1.7.6': 'grizzly', |
611 | + '1.7.7': 'grizzly', |
612 | + '1.8.0': 'grizzly', |
613 | +} |
614 | + |
615 | + |
616 | +def juju_log(msg): |
617 | + subprocess.check_call(['juju-log', msg]) |
618 | + |
619 | + |
620 | +def error_out(msg): |
621 | + juju_log("FATAL ERROR: %s" % msg) |
622 | + exit(1) |
623 | + |
624 | + |
625 | +def lsb_release(): |
626 | + '''Return /etc/lsb-release in a dict''' |
627 | + lsb = open('/etc/lsb-release', 'r') |
628 | + d = {} |
629 | + for l in lsb: |
630 | + k, v = l.split('=') |
631 | + d[k.strip()] = v.strip() |
632 | + return d |
633 | + |
634 | + |
635 | +def get_os_codename_install_source(src): |
636 | + '''Derive OpenStack release codename from a given installation source.''' |
637 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
638 | + |
639 | + rel = '' |
640 | + if src == 'distro': |
641 | + try: |
642 | + rel = ubuntu_openstack_release[ubuntu_rel] |
643 | + except KeyError: |
644 | + e = 'Could not derive OpenStack release for '\ |
645 | + 'this Ubuntu release: %s' % ubuntu_rel |
646 | + error_out(e) |
647 | + return rel |
648 | + |
649 | + if src.startswith('cloud:'): |
650 | + ca_rel = src.split(':')[1] |
651 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
652 | + return ca_rel |
653 | + |
654 | + # Best guess match based on deb string provided |
655 | + if src.startswith('deb') or src.startswith('ppa'): |
656 | + for k, v in openstack_codenames.iteritems(): |
657 | + if v in src: |
658 | + return v |
659 | + |
660 | + |
661 | +def get_os_codename_version(vers): |
662 | + '''Determine OpenStack codename from version number.''' |
663 | + try: |
664 | + return openstack_codenames[vers] |
665 | + except KeyError: |
666 | + e = 'Could not determine OpenStack codename for version %s' % vers |
667 | + error_out(e) |
668 | + |
669 | + |
670 | +def get_os_version_codename(codename): |
671 | + '''Determine OpenStack version number from codename.''' |
672 | + for k, v in openstack_codenames.iteritems(): |
673 | + if v == codename: |
674 | + return k |
675 | + e = 'Could not derive OpenStack version for '\ |
676 | + 'codename: %s' % codename |
677 | + error_out(e) |
678 | + |
679 | + |
680 | +def get_os_codename_package(pkg): |
681 | + '''Derive OpenStack release codename from an installed package.''' |
682 | + apt.init() |
683 | + cache = apt.Cache() |
684 | + try: |
685 | + pkg = cache[pkg] |
686 | + except: |
687 | + e = 'Could not determine version of installed package: %s' % pkg |
688 | + error_out(e) |
689 | + |
690 | + vers = apt.UpstreamVersion(pkg.current_ver.ver_str) |
691 | + |
692 | + try: |
693 | + if 'swift' in pkg.name: |
694 | + vers = vers[:5] |
695 | + return swift_codenames[vers] |
696 | + else: |
697 | + vers = vers[:6] |
698 | + return openstack_codenames[vers] |
699 | + except KeyError: |
700 | + e = 'Could not determine OpenStack codename for version %s' % vers |
701 | + error_out(e) |
702 | + |
703 | + |
704 | +def get_os_version_package(pkg): |
705 | + '''Derive OpenStack version number from an installed package.''' |
706 | + codename = get_os_codename_package(pkg) |
707 | + |
708 | + if 'swift' in pkg: |
709 | + vers_map = swift_codenames |
710 | + else: |
711 | + vers_map = openstack_codenames |
712 | + |
713 | + for version, cname in vers_map.iteritems(): |
714 | + if cname == codename: |
715 | + return version |
716 | + e = "Could not determine OpenStack version for package: %s" % pkg |
717 | + error_out(e) |
718 | + |
719 | + |
720 | +def configure_installation_source(rel): |
721 | + '''Configure apt installation source.''' |
722 | + |
723 | + def _import_key(keyid): |
724 | + cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ |
725 | + "--recv-keys %s" % keyid |
726 | + try: |
727 | + subprocess.check_call(cmd.split(' ')) |
728 | + except subprocess.CalledProcessError: |
729 | + error_out("Error importing repo key %s" % keyid) |
730 | + |
731 | + if rel == 'distro': |
732 | + return |
733 | + elif rel[:4] == "ppa:": |
734 | + src = rel |
735 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
736 | + elif rel[:3] == "deb": |
737 | + l = len(rel.split('|')) |
738 | + if l == 2: |
739 | + src, key = rel.split('|') |
740 | + juju_log("Importing PPA key from keyserver for %s" % src) |
741 | + _import_key(key) |
742 | + elif l == 1: |
743 | + src = rel |
744 | + else: |
745 | + error_out("Invalid openstack-release: %s" % rel) |
746 | + |
747 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
748 | + f.write(src) |
749 | + elif rel[:6] == 'cloud:': |
750 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
751 | + rel = rel.split(':')[1] |
752 | + u_rel = rel.split('-')[0] |
753 | + ca_rel = rel.split('-')[1] |
754 | + |
755 | + if u_rel != ubuntu_rel: |
756 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
757 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
758 | + error_out(e) |
759 | + |
760 | + if 'staging' in ca_rel: |
761 | + # staging is just a regular PPA. |
762 | + os_rel = ca_rel.split('/')[0] |
763 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
764 | + cmd = 'add-apt-repository -y %s' % ppa |
765 | + subprocess.check_call(cmd.split(' ')) |
766 | + return |
767 | + |
768 | + # map charm config options to actual archive pockets. |
769 | + pockets = { |
770 | + 'folsom': 'precise-updates/folsom', |
771 | + 'folsom/updates': 'precise-updates/folsom', |
772 | + 'folsom/proposed': 'precise-proposed/folsom', |
773 | + 'grizzly': 'precise-updates/grizzly', |
774 | + 'grizzly/updates': 'precise-updates/grizzly', |
775 | + 'grizzly/proposed': 'precise-proposed/grizzly' |
776 | + } |
777 | + |
778 | + try: |
779 | + pocket = pockets[ca_rel] |
780 | + except KeyError: |
781 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
782 | + error_out(e) |
783 | + |
784 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
785 | + _import_key(CLOUD_ARCHIVE_KEY_ID) |
786 | + |
787 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
788 | + f.write(src) |
789 | + else: |
790 | + error_out("Invalid openstack-release specified: %s" % rel) |
791 | + |
792 | + |
793 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
794 | + """ |
795 | + Write an rc file in the charm-delivered directory containing |
796 | + exported environment variables provided by env_vars. Any charm scripts run |
797 | + outside the juju hook environment can source this scriptrc to obtain |
798 | + updated config information necessary to perform health checks or |
799 | + service changes. |
800 | + """ |
801 | + unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-') |
802 | + juju_rc_path = "/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path) |
803 | + with open(juju_rc_path, 'wb') as rc_script: |
804 | + rc_script.write( |
805 | + "#!/bin/bash\n") |
806 | + [rc_script.write('export %s=%s\n' % (u, p)) |
807 | + for u, p in env_vars.iteritems() if u != "script_path"] |
808 | |
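
To make the source-string handling above concrete, here is a standalone trace of get_os_codename_install_source() for the common cases, assuming a precise host (the real helper also best-guess matches deb/ppa strings against known codenames):

```python
# Standalone trace of the parsing in get_os_codename_install_source(),
# assuming DISTRIB_CODENAME == 'precise'.
ubuntu_rel = 'precise'
ubuntu_openstack_release = {'precise': 'essex'}

def codename(src):
    if src == 'distro':
        return ubuntu_openstack_release[ubuntu_rel]
    if src.startswith('cloud:'):
        ca_rel = src.split(':')[1]
        # 'precise-grizzly/updates' -> 'grizzly'
        return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

assert codename('distro') == 'essex'
assert codename('cloud:precise-folsom') == 'folsom'
assert codename('cloud:precise-grizzly/updates') == 'grizzly'
```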
809 | === added file 'hooks/lib/utils.py' |
810 | --- hooks/lib/utils.py 1970-01-01 00:00:00 +0000 |
811 | +++ hooks/lib/utils.py 2013-06-06 15:35:31 +0000 |
812 | @@ -0,0 +1,295 @@ |
813 | +# |
814 | +# Copyright 2012 Canonical Ltd. |
815 | +# |
816 | +# This file is sourced from lp:openstack-charm-helpers |
817 | +# |
818 | +# Authors: |
819 | +# James Page <james.page@ubuntu.com> |
820 | +# Paul Collins <paul.collins@canonical.com> |
821 | +# Adam Gandelman <adamg@ubuntu.com> |
822 | +# |
823 | + |
824 | +import json |
825 | +import os |
826 | +import subprocess |
827 | +import socket |
828 | +import sys |
829 | + |
830 | + |
831 | +def do_hooks(hooks): |
832 | + hook = os.path.basename(sys.argv[0]) |
833 | + |
834 | + try: |
835 | + hook_func = hooks[hook] |
836 | + except KeyError: |
837 | + juju_log('INFO', |
838 | + "This charm doesn't know how to handle '{}'.".format(hook)) |
839 | + else: |
840 | + hook_func() |
841 | + |
842 | + |
843 | +def install(*pkgs): |
844 | + cmd = [ |
845 | + 'apt-get', |
846 | + '-y', |
847 | + 'install' |
848 | + ] |
849 | + for pkg in pkgs: |
850 | + cmd.append(pkg) |
851 | + subprocess.check_call(cmd) |
852 | + |
853 | +TEMPLATES_DIR = 'templates' |
854 | + |
855 | +try: |
856 | + import jinja2 |
857 | +except ImportError: |
858 | + install('python-jinja2') |
859 | + import jinja2 |
860 | + |
861 | +try: |
862 | + import dns.resolver |
863 | +except ImportError: |
864 | + install('python-dnspython') |
865 | + import dns.resolver |
866 | + |
867 | + |
868 | +def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
869 | + templates = jinja2.Environment( |
870 | + loader=jinja2.FileSystemLoader(template_dir) |
871 | + ) |
872 | + template = templates.get_template(template_name) |
873 | + return template.render(context) |
874 | + |
875 | +CLOUD_ARCHIVE = \ |
876 | +""" # Ubuntu Cloud Archive |
877 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
878 | +""" |
879 | + |
880 | +CLOUD_ARCHIVE_POCKETS = { |
881 | + 'folsom': 'precise-updates/folsom', |
882 | + 'folsom/updates': 'precise-updates/folsom', |
883 | + 'folsom/proposed': 'precise-proposed/folsom', |
884 | + 'grizzly': 'precise-updates/grizzly', |
885 | + 'grizzly/updates': 'precise-updates/grizzly', |
886 | + 'grizzly/proposed': 'precise-proposed/grizzly' |
887 | + } |
888 | + |
889 | + |
890 | +def configure_source(): |
891 | + source = str(config_get('openstack-origin')) |
892 | + if not source: |
893 | + return |
894 | + if source.startswith('ppa:'): |
895 | + cmd = [ |
896 | + 'add-apt-repository', |
897 | + source |
898 | + ] |
899 | + subprocess.check_call(cmd) |
900 | + if source.startswith('cloud:'): |
901 | + install('ubuntu-cloud-keyring') |
902 | + pocket = source.split(':')[1] |
903 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
904 | + apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])) |
905 | + if source.startswith('deb'): |
906 | + l = len(source.split('|')) |
907 | + if l == 2: |
908 | + (apt_line, key) = source.split('|') |
909 | + cmd = [ |
910 | + 'apt-key', |
911 | + 'adv', '--keyserver', 'keyserver.ubuntu.com', |
912 | + '--recv-keys', key |
913 | + ] |
914 | + subprocess.check_call(cmd) |
915 | + elif l == 1: |
916 | + apt_line = source |
917 | + |
918 | + with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt: |
919 | + apt.write(apt_line + "\n") |
920 | + cmd = [ |
921 | + 'apt-get', |
922 | + 'update' |
923 | + ] |
924 | + subprocess.check_call(cmd) |
925 | + |
926 | +# Protocols |
927 | +TCP = 'TCP' |
928 | +UDP = 'UDP' |
929 | + |
930 | + |
931 | +def expose(port, protocol='TCP'): |
932 | + cmd = [ |
933 | + 'open-port', |
934 | + '{}/{}'.format(port, protocol) |
935 | + ] |
936 | + subprocess.check_call(cmd) |
937 | + |
938 | + |
939 | +def open_port(port, protocol='TCP'): |
940 | + expose(port, protocol) |
941 | + |
942 | + |
943 | +def close_port(port, protocol='TCP'): |
944 | + cmd = [ |
945 | + 'close-port', |
946 | + '{}/{}'.format(port, protocol) |
947 | + ] |
948 | + subprocess.check_call(cmd) |
949 | + |
950 | + |
951 | +def juju_log(severity, message): |
952 | + cmd = [ |
953 | + 'juju-log', |
954 | + '--log-level', severity, |
955 | + message |
956 | + ] |
957 | + subprocess.check_call(cmd) |
958 | + |
959 | + |
960 | +def relation_ids(relation): |
961 | + cmd = [ |
962 | + 'relation-ids', |
963 | + relation |
964 | + ] |
965 | + result = str(subprocess.check_output(cmd)).split() |
966 | + if result == "": |
967 | + return None |
968 | + else: |
969 | + return result |
970 | + |
971 | + |
972 | +def relation_list(rid): |
973 | + cmd = [ |
974 | + 'relation-list', |
975 | + '-r', rid, |
976 | + ] |
977 | + result = str(subprocess.check_output(cmd)).split() |
978 | + if result == "": |
979 | + return None |
980 | + else: |
981 | + return result |
982 | + |
983 | + |
984 | +def relation_get(attribute, unit=None, rid=None): |
985 | + cmd = [ |
986 | + 'relation-get', |
987 | + ] |
988 | + if rid: |
989 | + cmd.append('-r') |
990 | + cmd.append(rid) |
991 | + cmd.append(attribute) |
992 | + if unit: |
993 | + cmd.append(unit) |
994 | + value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
995 | + if value == "": |
996 | + return None |
997 | + else: |
998 | + return value |
999 | + |
1000 | + |
1001 | +def relation_set(**kwargs): |
1002 | + cmd = [ |
1003 | + 'relation-set' |
1004 | + ] |
1005 | + args = [] |
1006 | + for k, v in kwargs.items(): |
1007 | + if k == 'rid': |
1008 | + if v: |
1009 | + cmd.append('-r') |
1010 | + cmd.append(v) |
1011 | + else: |
1012 | + args.append('{}={}'.format(k, v)) |
1013 | + cmd += args |
1014 | + subprocess.check_call(cmd) |
1015 | + |
1016 | + |
1017 | +def unit_get(attribute): |
1018 | + cmd = [ |
1019 | + 'unit-get', |
1020 | + attribute |
1021 | + ] |
1022 | + value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1023 | + if value == "": |
1024 | + return None |
1025 | + else: |
1026 | + return value |
1027 | + |
1028 | + |
1029 | +def config_get(attribute): |
1030 | + cmd = [ |
1031 | + 'config-get', |
1032 | + '--format', |
1033 | + 'json', |
1034 | + ] |
1035 | + out = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1036 | + cfg = json.loads(out) |
1037 | + |
1038 | + try: |
1039 | + return cfg[attribute] |
1040 | + except KeyError: |
1041 | + return None |
1042 | + |
1043 | + |
1044 | +def get_unit_hostname(): |
1045 | + return socket.gethostname() |
1046 | + |
1047 | + |
1048 | +def get_host_ip(hostname=unit_get('private-address')): |
1049 | + try: |
1050 | + # Test to see if already an IPv4 address |
1051 | + socket.inet_aton(hostname) |
1052 | + return hostname |
1053 | + except socket.error: |
1054 | + answers = dns.resolver.query(hostname, 'A') |
1055 | + if answers: |
1056 | + return answers[0].address |
1057 | + return None |
1058 | + |
1059 | + |
1060 | +def _svc_control(service, action): |
1061 | + subprocess.check_call(['service', service, action]) |
1062 | + |
1063 | + |
1064 | +def restart(*services): |
1065 | + for service in services: |
1066 | + _svc_control(service, 'restart') |
1067 | + |
1068 | + |
1069 | +def stop(*services): |
1070 | + for service in services: |
1071 | + _svc_control(service, 'stop') |
1072 | + |
1073 | + |
1074 | +def start(*services): |
1075 | + for service in services: |
1076 | + _svc_control(service, 'start') |
1077 | + |
1078 | + |
1079 | +def reload(*services): |
1080 | + for service in services: |
1081 | + try: |
1082 | + _svc_control(service, 'reload') |
1083 | + except subprocess.CalledProcessError: |
1084 | + # Reload failed - either service does not support reload |
1085 | + # or it was not running - restart will fixup most things |
1086 | + _svc_control(service, 'restart') |
1087 | + |
1088 | + |
1089 | +def running(service): |
1090 | + try: |
1091 | + output = subprocess.check_output(['service', service, 'status']) |
1092 | + except subprocess.CalledProcessError: |
1093 | + return False |
1094 | + else: |
1095 | + if ("start/running" in output or |
1096 | + "is running" in output): |
1097 | + return True |
1098 | + else: |
1099 | + return False |
1100 | + |
1101 | + |
1102 | +def is_relation_made(relation, key='private-address'): |
1103 | + for r_id in (relation_ids(relation) or []): |
1104 | + for unit in (relation_list(r_id) or []): |
1105 | + if relation_get(key, rid=r_id, unit=unit): |
1106 | + return True |
1107 | + return False |
1108 | |
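
The symlink changes earlier in this diff only make sense together with do_hooks() above: every hook is a symlink to one script, which dispatches on the name it was invoked as. A minimal sketch of the wiring (the real dispatch table at the bottom of hooks/rabbitmq_server_relations.py is outside this excerpt, so the entries below are illustrative):

```python
#!/usr/bin/python
# Minimal sketch of the single-script hook dispatch; entries here are
# illustrative, the real table lives in rabbitmq_server_relations.py.
import lib.utils as utils

def install():
    utils.juju_log('INFO', 'install hook fired')

def config_changed():
    utils.juju_log('INFO', 'config-changed hook fired')

hooks = {
    'install': install,
    'config-changed': config_changed,
}

# juju executes e.g. hooks/config-changed, a symlink to this file;
# do_hooks() keys on basename(sys.argv[0]) and just logs unknown hooks.
utils.do_hooks(hooks)
```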
1109 | === added file 'hooks/rabbit_utils.py' |
1110 | --- hooks/rabbit_utils.py 1970-01-01 00:00:00 +0000 |
1111 | +++ hooks/rabbit_utils.py 2013-06-06 15:35:31 +0000 |
1112 | @@ -0,0 +1,176 @@ |
1113 | +import os |
1114 | +import pwd |
1115 | +import grp |
1116 | +import re |
1117 | +import subprocess |
1118 | +import glob |
1119 | +import lib.utils as utils |
1120 | +import apt_pkg as apt |
1121 | + |
1122 | +PACKAGES = ['pwgen', 'rabbitmq-server'] |
1123 | + |
1124 | +RABBITMQ_CTL = '/usr/sbin/rabbitmqctl' |
1125 | +COOKIE_PATH = '/var/lib/rabbitmq/.erlang.cookie' |
1126 | +ENV_CONF = '/etc/rabbitmq/rabbitmq-env.conf' |
1127 | +RABBITMQ_CONF = '/etc/rabbitmq/rabbitmq.config' |
1128 | + |
1129 | +def vhost_exists(vhost): |
1130 | + cmd = [RABBITMQ_CTL, 'list_vhosts'] |
1131 | + out = subprocess.check_output(cmd) |
1132 | + for line in out.split('\n')[1:]: |
1133 | + if line == vhost: |
1134 | + utils.juju_log('INFO', 'vhost (%s) already exists.' % vhost) |
1135 | + return True |
1136 | + return False |
1137 | + |
1138 | + |
1139 | +def create_vhost(vhost): |
1140 | + if vhost_exists(vhost): |
1141 | + return |
1142 | + cmd = [RABBITMQ_CTL, 'add_vhost', vhost] |
1143 | + subprocess.check_call(cmd) |
1144 | + utils.juju_log('INFO', 'Created new vhost (%s).' % vhost) |
1145 | + |
1146 | + |
1147 | +def user_exists(user): |
1148 | + cmd = [RABBITMQ_CTL, 'list_users'] |
1149 | + out = subprocess.check_output(cmd) |
1150 | + for line in out.split('\n')[1:]: |
1151 | + _user = line.split('\t')[0] |
1152 | + if _user == user: |
1153 | + admin = line.split('\t')[1] |
1154 | + return True, (admin == '[administrator]') |
1155 | + return False, False |
1156 | + |
1157 | + |
1158 | +def create_user(user, password, admin=False): |
1159 | + exists, is_admin = user_exists(user) |
1160 | + |
1161 | + if not exists: |
1162 | + cmd = [RABBITMQ_CTL, 'add_user', user, password] |
1163 | + subprocess.check_call(cmd) |
1164 | + utils.juju_log('INFO', 'Created new user (%s).' % user) |
1165 | + |
1166 | + if admin == is_admin: |
1167 | + return |
1168 | + |
1169 | + if admin: |
1170 | + cmd = [RABBITMQ_CTL, 'set_user_tags', user, 'administrator'] |
1171 | + utils.juju_log('INFO', 'Granting user (%s) admin access.' % user) |
1172 | + else: |
1173 | + cmd = [RABBITMQ_CTL, 'set_user_tags', user] |
1174 | + utils.juju_log('INFO', 'Revoking user (%s) admin access.' % user) |
1175 | + |
1176 | + |
1177 | +def grant_permissions(user, vhost): |
1178 | + cmd = [RABBITMQ_CTL, 'set_permissions', '-p', |
1179 | + vhost, user, '.*', '.*', '.*'] |
1180 | + subprocess.check_call(cmd) |
1181 | + |
1182 | + |
1183 | +def service(action): |
1184 | + cmd = ['service', 'rabbitmq-server', action] |
1185 | + subprocess.check_call(cmd) |
1186 | + |
1187 | + |
1188 | +def rabbit_version(): |
1189 | + apt.init() |
1190 | + cache = apt.Cache() |
1191 | + pkg = cache['rabbitmq-server'] |
1192 | + if pkg.current_ver: |
1193 | + return apt.upstream_version(pkg.current_ver.ver_str) |
1194 | + else: |
1195 | + return None |
1196 | + |
1197 | + |
1198 | +def cluster_with(host): |
1199 | + utils.juju_log('INFO', 'Clustering with remote rabbit host (%s).' % host) |
1200 | + vers = rabbit_version() |
1201 | + if vers >= '3.0.1-1': |
1202 | + cluster_cmd = 'join_cluster' |
1203 | + else: |
1204 | + cluster_cmd = 'cluster' |
1205 | + out = subprocess.check_output([RABBITMQ_CTL, 'cluster_status']) |
1206 | + for line in out.split('\n'): |
1207 | + if re.search(host, line): |
1208 | + utils.juju_log('INFO', 'Host already clustered with %s.' % host) |
1209 | + return |
1210 | + cmd = [RABBITMQ_CTL, 'stop_app'] |
1211 | + subprocess.check_call(cmd) |
1212 | + cmd = [RABBITMQ_CTL, cluster_cmd, 'rabbit@%s' % host] |
1213 | + subprocess.check_call(cmd) |
1214 | + cmd = [RABBITMQ_CTL, 'start_app'] |
1215 | + subprocess.check_call(cmd) |
1216 | + |
1217 | + |
1218 | +def set_node_name(name): |
1219 | + # update or append RABBITMQ_NODENAME to environment config. |
1220 | + # rabbitmq.conf.d is not present on all releases, so use or create |
1221 | + # rabbitmq-env.conf instead. |
1222 | + if not os.path.isfile(ENV_CONF): |
1223 | + utils.juju_log('INFO', '%s does not exist, creating.' % ENV_CONF) |
1224 | + with open(ENV_CONF, 'wb') as out: |
1225 | + out.write('RABBITMQ_NODENAME=%s\n' % name) |
1226 | + return |
1227 | + |
1228 | + out = [] |
1229 | + f = False |
1230 | + for line in open(ENV_CONF).readlines(): |
1231 | + if line.strip().startswith('RABBITMQ_NODENAME'): |
1232 | + f = True |
1233 | + line = 'RABBITMQ_NODENAME=%s\n' % name |
1234 | + out.append(line) |
1235 | + if not f: |
1236 | + out.append('RABBITMQ_NODENAME=%s\n' % name) |
1237 | + utils.juju_log('INFO', 'Updating %s, RABBITMQ_NODENAME=%s' %\ |
1238 | + (ENV_CONF, name)) |
1239 | + with open(ENV_CONF, 'wb') as conf: |
1240 | + conf.write(''.join(out)) |
1241 | + |
1242 | + |
1243 | +def get_node_name(): |
1244 | + if not os.path.exists(ENV_CONF): |
1245 | + return None |
1246 | + env_conf = open(ENV_CONF, 'r').readlines() |
1247 | + node_name = None |
1248 | + for l in env_conf: |
1249 | + if l.startswith('RABBITMQ_NODENAME'): |
1250 | + node_name = l.split('=')[1].strip() |
1251 | + return node_name |
1252 | + |
1253 | + |
1254 | +def _manage_plugin(plugin, action): |
1255 | + os.environ['HOME'] = '/root' |
1256 | + _rabbitmq_plugins = \ |
1257 | + glob.glob('/usr/lib/rabbitmq/lib/rabbitmq_server-*/sbin/rabbitmq-plugins')[0] |
1258 | + subprocess.check_call([ _rabbitmq_plugins, action, plugin]) |
1259 | + |
1260 | + |
1261 | +def enable_plugin(plugin): |
1262 | + _manage_plugin(plugin, 'enable') |
1263 | + |
1264 | + |
1265 | +def disable_plugin(plugin): |
1266 | + _manage_plugin(plugin, 'disable') |
1267 | + |
1268 | +ssl_key_file = "/etc/rabbitmq/rabbit-server-privkey.pem" |
1269 | +ssl_cert_file = "/etc/rabbitmq/rabbit-server-cert.pem" |
1270 | + |
1271 | + |
1272 | +def enable_ssl(ssl_key, ssl_cert, ssl_port): |
1273 | + uid = pwd.getpwnam("root").pw_uid |
1274 | + gid = grp.getgrnam("rabbitmq").gr_gid |
1275 | + with open(ssl_key_file, 'w') as key_file: |
1276 | + key_file.write(ssl_key) |
1277 | + os.chmod(ssl_key_file, 0640) |
1278 | + os.chown(ssl_key_file, uid, gid) |
1279 | + with open(ssl_cert_file, 'w') as cert_file: |
1280 | + cert_file.write(ssl_cert) |
1281 | + os.chmod(ssl_cert_file, 0640) |
1282 | + os.chown(ssl_cert_file, uid, gid) |
1283 | + with open(RABBITMQ_CONF, 'w') as rmq_conf: |
1284 | + rmq_conf.write(utils.render_template(os.path.basename(RABBITMQ_CONF), |
1285 | + { "ssl_port": ssl_port, |
1286 | + "ssl_cert_file": ssl_cert_file, |
1287 | + "ssl_key_file": ssl_key_file}) |
1288 | + ) |
1289 | |
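
One subtlety in cluster_with() above: rabbitmqctl renamed its clustering command in 3.x, so the helper picks 'join_cluster' or 'cluster' by package version. The diff compares version strings lexically, which holds for the 2.x/3.x cutoff; a sketch of the Debian-version-aware equivalent using apt_pkg (cluster_cmd is a hypothetical helper name):

```python
# Sketch: selecting the rabbitmqctl clustering command by version with
# a proper Debian version comparison instead of a lexical one.
import apt_pkg as apt

apt.init()

def cluster_cmd(vers):
    if apt.version_compare(vers, '3.0.1-1') >= 0:
        return 'join_cluster'   # rabbitmq-server >= 3.0.1-1
    return 'cluster'            # pre-3.0 releases

assert cluster_cmd('2.7.1-0ubuntu4') == 'cluster'
assert cluster_cmd('3.1.3-1') == 'join_cluster'
```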
1290 | === removed file 'hooks/rabbitmq-common' |
1291 | --- hooks/rabbitmq-common 2011-12-05 19:55:29 +0000 |
1292 | +++ hooks/rabbitmq-common 1970-01-01 00:00:00 +0000 |
1293 | @@ -1,56 +0,0 @@ |
1294 | -#!/bin/bash |
1295 | -# |
1296 | -# rabbitmq-common - common formula shell functions and config variables |
1297 | -# |
1298 | -# Copyright (C) 2011 Canonical Ltd. |
1299 | -# Author: Adam Gandelman <adam.gandelman@canonical.com> |
1300 | -# |
1301 | -# This program is free software: you can redistribute it and/or modify |
1302 | -# it under the terms of the GNU General Public License as published by |
1303 | -# the Free Software Foundation, either version 3 of the License, or |
1304 | -# (at your option) any later version. |
1305 | -# |
1306 | -# This program is distributed in the hope that it will be useful, |
1307 | -# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1308 | -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1309 | -# GNU General Public License for more details. |
1310 | -# |
1311 | -# You should have received a copy of the GNU General Public License |
1312 | -# along with this program. If not, see <http://www.gnu.org/licenses/>. |
1313 | - |
1314 | -RABBIT_CTL='rabbitmqctl' |
1315 | -HOSTNAME=`hostname -f` |
1316 | -ERLANG_COOKIE="/var/lib/rabbitmq/.erlang.cookie" |
1317 | - |
1318 | -function user_exists { |
1319 | - $RABBIT_CTL list_users | grep -wq "^$1" |
1320 | -} |
1321 | - |
1322 | -function user_is_admin { |
1323 | - $RABBIT_CTL list_users | grep -w "^$1" | grep -q "administrator" |
1324 | -} |
1325 | - |
1326 | -function vhost_exists { |
1327 | - $RABBIT_CTL list_vhosts | grep "^$VHOST\$" >/dev/null |
1328 | -} |
1329 | - |
1330 | -function create_vhost { |
1331 | - $RABBIT_CTL add_vhost $VHOST |
1332 | -} |
1333 | - |
1334 | -function user_create { |
1335 | - juju-log "rabbitmq: Creating user $1." |
1336 | - |
1337 | - $RABBIT_CTL add_user $1 $PASSWORD || return 1 |
1338 | - |
1339 | - # grant the user all permissions on the default vhost / |
1340 | - # TODO: investigate sane permissions |
1341 | - juju-log "rabbitmq: Granting permission to $1 on vhost /" |
1342 | - $RABBIT_CTL set_permissions -p $VHOST $1 ".*" ".*" ".*" |
1343 | - |
1344 | - if [[ $2 == 'admin' ]] ; then |
1345 | - user_is_admin $1 && return 0 |
1346 | - juju-log "rabbitmq: Granting user $1 admin access" |
1347 | - $RABBIT_CTL set_user_tags "$1" administrator || return 1 |
1348 | - fi |
1349 | -} |
1350 | |
1351 | === removed file 'hooks/rabbitmq-relations' |
1352 | --- hooks/rabbitmq-relations 2013-03-01 17:36:31 +0000 |
1353 | +++ hooks/rabbitmq-relations 1970-01-01 00:00:00 +0000 |
1354 | @@ -1,127 +0,0 @@ |
1355 | -#!/bin/bash |
1356 | -# |
1357 | -# rabbitmq-relations - relations to be used by formula, referenced |
1358 | -# via symlink |
1359 | -# |
1360 | -# Copyright (C) 2011 Canonical Ltd. |
1361 | -# Author: Adam Gandelman <adam.gandelman@canonical.com> |
1362 | -# |
1363 | -# This program is free software: you can redistribute it and/or modify |
1364 | -# it under the terms of the GNU General Public License as published by |
1365 | -# the Free Software Foundation, either version 3 of the License, or |
1366 | -# (at your option) any later version. |
1367 | -# |
1368 | -# This program is distributed in the hope that it will be useful, |
1369 | -# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1370 | -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1371 | -# GNU General Public License for more details. |
1372 | -# |
1373 | -# You should have received a copy of the GNU General Public License |
1374 | -# along with this program. If not, see <http://www.gnu.org/licenses/>. |
1375 | - |
1376 | -set -u |
1377 | -FORMULA_DIR=$(dirname $0) |
1378 | -ARG0=${0##*/} |
1379 | - |
1380 | -if [[ -e $FORMULA_DIR/rabbitmq-common ]] ; then |
1381 | - . $FORMULA_DIR/rabbitmq-common |
1382 | -else |
1383 | - juju-log "rabbitmq-server: ERROR Could not load $FORMULA_DIR/rabbitmq-common" |
1384 | - exit 1 |
1385 | -fi |
1386 | - |
1387 | -juju-log "rabbitmq-server: Firing hook $ARG0." |
1388 | - |
1389 | -function install_hook() { |
1390 | - [ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f";done ) |
1391 | - |
1392 | - [[ ! `which pwgen` ]] && apt-get -y install pwgen |
1393 | - DEBIAN_FRONTEND=noninteractive apt-get -qqy \ |
1394 | - install --no-install-recommends rabbitmq-server |
1395 | - rc=$? |
1396 | - service rabbitmq-server stop |
1397 | - open-port 5672/tcp |
1398 | -} |
1399 | - |
1400 | -function upgrade_charm { |
1401 | - [ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f";done ) |
1402 | -} |
1403 | - |
1404 | -function amqp_changed() { |
1405 | - # Connecting clients should request a username and vhost. |
1406 | - # In reponse, we generate a password for new users, |
1407 | - # grant the user access on the default vhost "/", |
1408 | - # and tell it where to reach us. |
1409 | - RABBIT_USER=`relation-get username` |
1410 | - VHOST=`relation-get vhost` |
1411 | - if [[ -z $RABBIT_USER ]] || [[ -z $VHOST ]] ; then |
1412 | - juju-log "rabbitmq-server: RABBIT_USER||VHOST not yet received from peer." |
1413 | - exit 0 |
1414 | - fi |
1415 | - PASSWD_FILE="/var/lib/juju/$RABBIT_USER.passwd" |
1416 | - if [[ -e $PASSWD_FILE ]] ; then |
1417 | - PASSWORD=$(cat $PASSWD_FILE) |
1418 | - else |
1419 | - PASSWORD=$(pwgen 10 1) |
1420 | - echo $PASSWORD >$PASSWD_FILE |
1421 | - chmod 0400 $PASSWD_FILE |
1422 | - fi |
1423 | - if ! vhost_exists ; then |
1424 | - juju-log "rabbitmq-server: Creating vhost $VHOST" |
1425 | - create_vhost |
1426 | - fi |
1427 | - if ! user_exists $RABBIT_USER ; then |
1428 | - juju-log "rabbitmq-server: Creating user $RABBIT_USER" |
1429 | - user_create $RABBIT_USER admin || exit 1 |
1430 | - else |
1431 | - juju-log "rabbitmq-server: user $RABBIT_USER already exists." |
1432 | - fi |
1433 | - juju-log "rabbitmq-server: Returning credentials for $RABBIT_USER@$HOSTNAME" |
1434 | - relation-set hostname=`unit-get private-address` |
1435 | - relation-set password=$PASSWORD |
1436 | -} |
1437 | - |
1438 | -function cluster_joined { |
1439 | - REMOTE_UNIT_ID=$(echo $JUJU_REMOTE_UNIT | cut -d/ -f2) |
1440 | - LOCAL_UNIT_ID=$(echo $JUJU_UNIT_NAME | cut -d/ -f2) |
1441 | - [[ $LOCAL_UNIT_ID -gt $REMOTE_UNIT_ID ]] && echo "Relation greater" && exit 0 |
1442 | - if [[ ! -e $ERLANG_COOKIE ]] ; then |
1443 | - juju-log "rabbitmq-server: ERROR Could not find cookie at $ERLANG_COOKIE" |
1444 | - exit 1 |
1445 | - fi |
1446 | - relation-set cookie=$(cat $ERLANG_COOKIE) host=$HOSTNAME |
1447 | -} |
1448 | - |
1449 | -function cluster_changed { |
1450 | - REMOTE_UNIT_ID=$(echo $JUJU_REMOTE_UNIT | cut -d/ -f2) |
1451 | - LOCAL_UNIT_ID=$(echo $JUJU_UNIT_NAME | cut -d/ -f2) |
1452 | - [[ $LOCAL_UNIT_ID -lt $REMOTE_UNIT_ID ]] && echo "Relation lesser" && exit 0 |
1453 | - |
1454 | - REMOTE_HOST=$(relation-get host) |
1455 | - COOKIE_VALUE=$(relation-get cookie) |
1456 | - [[ -z $REMOTE_HOST ]] || [[ -z $COOKIE_VALUE ]] && \ |
1457 | - juju-log "rabbimtq-server: REMOTE_HOST||COOKIE_VALUE not yet set." \ |
1458 | - exit 0 |
1459 | - |
1460 | - service rabbitmq-server stop |
1461 | - echo -n $COOKIE_VALUE > $ERLANG_COOKIE |
1462 | - service rabbitmq-server start |
1463 | - rabbitmqctl reset |
1464 | - rabbitmqctl cluster rabbit@$HOSTNAME rabbit@$REMOTE_HOST |
1465 | - rabbitmqctl start_app |
1466 | -} |
1467 | - |
1468 | -case $ARG0 in |
1469 | - "install") install_hook ;; |
1470 | - "start") service rabbitmq-server status || service rabbitmq-server start ;; |
1471 | - "stop") service rabbitmq-server status && service rabbitmq-server stop ;; |
1472 | - "upgrade-charm") upgrade_charm ;; |
1473 | - "amqp-relation-joined") exit 0 ;; |
1474 | - "amqp-relation-changed") amqp_changed ;; |
1475 | - "cluster-relation-joined") cluster_joined ;; |
1476 | - "cluster-relation-changed") cluster_changed ;; |
1477 | -esac |
1478 | - |
1479 | -rc=$? |
1480 | -juju-log "rabbitmq-server: Hook $ARG0 complete. Exiting $rc" |
1481 | -exit $rc |
1482 | |
1483 | === added file 'hooks/rabbitmq_server_relations.py' |
1484 | --- hooks/rabbitmq_server_relations.py 1970-01-01 00:00:00 +0000 |
1485 | +++ hooks/rabbitmq_server_relations.py 2013-06-06 15:35:31 +0000 |
1486 | @@ -0,0 +1,302 @@ |
1487 | +#!/usr/bin/python |
1488 | + |
1489 | +import os |
1490 | +import shutil |
1491 | +import sys |
1492 | +import subprocess |
1493 | +import glob |
1494 | + |
1495 | + |
1496 | +import rabbit_utils as rabbit |
1497 | +import lib.utils as utils |
1498 | +import lib.cluster_utils as cluster |
1499 | +import lib.ceph_utils as ceph |
1500 | +import lib.openstack_common as openstack |
1501 | + |
1502 | + |
1503 | +SERVICE_NAME = os.getenv('JUJU_UNIT_NAME').split('/')[0] |
1504 | +POOL_NAME = SERVICE_NAME |
1505 | +RABBIT_DIR = '/var/lib/rabbitmq' |
1506 | + |
1507 | + |
1508 | +def install(): |
1509 | + pre_install_hooks() |
1510 | + utils.install(*rabbit.PACKAGES) |
1511 | + utils.expose(5672) |
1512 | + |
1513 | + |
1514 | +def amqp_changed(relation_id=None, remote_unit=None): |
1515 | + if not cluster.eligible_leader('res_rabbitmq_vip'): |
1516 | + msg = 'amqp_changed(): Deferring amqp_changed to eligible_leader.' |
1517 | + utils.juju_log('INFO', msg) |
1518 | + return |
1519 | + |
1520 | + rabbit_user = utils.relation_get('username', rid=relation_id, |
1521 | + unit=remote_unit) |
1522 | + vhost = utils.relation_get('vhost', rid=relation_id, unit=remote_unit) |
1523 | + if None in [rabbit_user, vhost]: |
1524 | + utils.juju_log('INFO', 'amqp_changed(): Relation not ready.') |
1525 | + return |
1526 | + |
1527 | + password_file = os.path.join(RABBIT_DIR, '%s.passwd' % rabbit_user) |
1528 | + if os.path.exists(password_file): |
1529 | + password = open(password_file).read().strip() |
1530 | + else: |
1531 | + cmd = ['pwgen', '64', '1'] |
1532 | + password = subprocess.check_output(cmd).strip() |
1533 | + with open(password_file, 'wb') as out: |
1534 | + out.write(password) |
1535 | + |
1536 | + rabbit.create_vhost(vhost) |
1537 | + rabbit.create_user(rabbit_user, password) |
1538 | + rabbit.grant_permissions(rabbit_user, vhost) |
1539 | + rabbit_hostname = utils.unit_get('private-address') |
1540 | + |
1541 | + relation_settings = { |
1542 | + 'password': password, |
1543 | + 'hostname': rabbit_hostname |
1544 | + } |
1545 | + if cluster.is_clustered(): |
1546 | + relation_settings['clustered'] = 'true' |
1547 | + relation_settings['vip'] = utils.config_get('vip') |
1548 | + if relation_id: |
1549 | + relation_settings['rid'] = relation_id |
1550 | + utils.relation_set(**relation_settings) |
1551 | + |
1552 | + |
1553 | +def cluster_joined(): |
1554 | + if utils.is_relation_made('ha'): |
1555 | + utils.juju_log('INFO', |
1556 | + 'hacluster relation is present, skipping native '\ |
1557 | + 'rabbitmq cluster config.') |
1558 | + return |
1559 | + l_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
1560 | + r_unit_no = int(os.getenv('JUJU_REMOTE_UNIT').split('/')[1]) |
1561 | + if l_unit_no > r_unit_no: |
1562 | + utils.juju_log('INFO', 'cluster_joined: Relation greater.') |
1563 | + return |
1564 | + rabbit.COOKIE_PATH = '/var/lib/rabbitmq/.erlang.cookie' |
1565 | + if not os.path.isfile(rabbit.COOKIE_PATH): |
1566 | + utils.juju_log('ERROR', 'erlang cookie missing from %s' %\ |
1567 | + rabbit.COOKIE_PATH) |
1568 | + cookie = open(rabbit.COOKIE_PATH, 'r').read().strip() |
1569 | + local_hostname = subprocess.check_output(['hostname']).strip() |
1570 | + utils.relation_set(cookie=cookie, host=local_hostname) |
1571 | + |
1572 | + |
1573 | +def cluster_changed(): |
1574 | + if utils.is_relation_made('ha'): |
1575 | + utils.juju_log('INFO', |
1576 | + 'hacluster relation is present, skipping native '\ |
1577 | + 'rabbitmq cluster config.') |
1578 | + return |
1579 | + l_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
1580 | + r_unit_no = int(os.getenv('JUJU_REMOTE_UNIT').split('/')[1]) |
1581 | + if l_unit_no < r_unit_no: |
1582 | + utils.juju_log('INFO', 'cluster_changed: Relation lesser.') |
1583 | + return |
1584 | + |
1585 | + remote_host = utils.relation_get('host') |
1586 | + cookie = utils.relation_get('cookie') |
1587 | + if None in [remote_host, cookie]: |
1588 | + utils.juju_log('INFO', |
1589 | + 'cluster_changed: remote_host|cookie not yet set.') |
1590 | + return |
1591 | + |
1592 | + if open(rabbit.COOKIE_PATH, 'r').read().strip() == cookie: |
1593 | + utils.juju_log('INFO', 'Cookie already synchronized with peer.') |
1594 | + return |
1595 | + |
1596 | + utils.juju_log('INFO', 'Synchronizing erlang cookie from peer.') |
1597 | + rabbit.service('stop') |
1598 | + with open(rabbit.COOKIE_PATH, 'wb') as out: |
1599 | + out.write(cookie) |
1600 | + rabbit.service('start') |
1601 | + rabbit.cluster_with(remote_host) |
1602 | + |
1603 | + |
1604 | +def ha_joined(): |
1605 | + corosync_bindiface = utils.config_get('ha-bindiface') |
1606 | + corosync_mcastport = utils.config_get('ha-mcastport') |
1607 | + vip = utils.config_get('vip') |
1608 | + vip_iface = utils.config_get('vip_iface') |
1609 | + vip_cidr = utils.config_get('vip_cidr') |
1610 | + rbd_name = utils.config_get('rbd-name') |
1611 | + |
1612 | + if None in [corosync_bindiface, corosync_mcastport, vip, vip_iface, |
1613 | + vip_cidr, rbd_name]: |
1614 | + utils.juju_log('ERROR', 'Insufficient configuration data to '\ |
1615 | + 'configure hacluster.') |
1616 | + sys.exit(1) |
1617 | + |
1618 | + if not utils.is_relation_made('ceph', 'auth'): |
1619 | + utils.juju_log('INFO', |
1620 | + 'ha_joined: No ceph relation yet, deferring.') |
1621 | + return |
1622 | + |
1623 | + name = '%s@localhost' % SERVICE_NAME |
1624 | + if rabbit.get_node_name() != name: |
1625 | + utils.juju_log('INFO', 'Stopping rabbitmq-server.') |
1626 | + utils.stop('rabbitmq-server') |
1627 | +        rabbit.set_node_name(name)
1628 | + else: |
1629 | + utils.juju_log('INFO', 'Node name already set to %s.' % name) |
1630 | + |
1631 | + relation_settings = {} |
1632 | + relation_settings['corosync_bindiface'] = corosync_bindiface |
1633 | + relation_settings['corosync_mcastport'] = corosync_mcastport |
1634 | + |
1635 | + relation_settings['resources'] = { |
1636 | + 'res_rabbitmq_rbd': 'ocf:ceph:rbd', |
1637 | + 'res_rabbitmq_fs': 'ocf:heartbeat:Filesystem', |
1638 | + 'res_rabbitmq_vip': 'ocf:heartbeat:IPaddr2', |
1639 | + 'res_rabbitmq-server': 'lsb:rabbitmq-server', |
1640 | + } |
1641 | + |
1642 | + relation_settings['resource_params'] = { |
1643 | + 'res_rabbitmq_rbd': 'params name="%s" pool="%s" user="%s" ' |
1644 | + 'secret="%s"' % \ |
1645 | + (rbd_name, POOL_NAME, |
1646 | + SERVICE_NAME, ceph.keyfile_path(SERVICE_NAME)), |
1647 | + 'res_rabbitmq_fs': 'params device="/dev/rbd/%s/%s" directory="%s" '\ |
1648 | + 'fstype="ext4" op start start-delay="10s"' %\ |
1649 | + (POOL_NAME, rbd_name, RABBIT_DIR), |
1650 | + 'res_rabbitmq_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' %\ |
1651 | + (vip, vip_cidr, vip_iface), |
1652 | + 'res_rabbitmq-server': 'op start start-delay="5s" op monitor interval="5s"', |
1653 | + } |
1654 | + |
1655 | + relation_settings['groups'] = { |
1656 | + 'grp_rabbitmq': 'res_rabbitmq_rbd res_rabbitmq_fs res_rabbitmq_vip '\ |
1657 | + 'res_rabbitmq-server', |
1658 | + } |
1659 | + |
1660 | + for rel_id in utils.relation_ids('ha'): |
1661 | + utils.relation_set(rid=rel_id, **relation_settings) |
1662 | + |
1663 | + env_vars = { |
1664 | + 'OPENSTACK_PORT_EPMD': 4369, |
1665 | + 'OPENSTACK_PORT_MCASTPORT': utils.config_get('ha-mcastport'), |
1666 | + } |
1667 | + openstack.save_script_rc(**env_vars) |
1668 | + |
1669 | + |
1670 | +def ha_changed(): |
1671 | + if not cluster.is_clustered(): |
1672 | + return |
1673 | + vip = utils.config_get('vip') |
1674 | + utils.juju_log('INFO', 'ha_changed(): We are now HA clustered. ' |
1675 | + 'Advertising our VIP (%s) to all AMQP clients.' %\ |
1676 | + vip) |
1677 | + # need to re-authenticate all clients since node-name changed. |
1678 | + for rid in utils.relation_ids('amqp'): |
1679 | + for unit in utils.relation_list(rid): |
1680 | + amqp_changed(relation_id=rid, remote_unit=unit) |
1681 | + |
1682 | + |
1683 | +def ceph_joined(): |
1684 | + utils.juju_log('INFO', 'Start Ceph Relation Joined') |
1685 | + ceph.install() |
1686 | + utils.juju_log('INFO', 'Finish Ceph Relation Joined') |
1687 | + |
1688 | + |
1689 | +def ceph_changed(): |
1690 | + utils.juju_log('INFO', 'Start Ceph Relation Changed') |
1691 | + auth = utils.relation_get('auth') |
1692 | + key = utils.relation_get('key') |
1693 | + if None in [auth, key]: |
1694 | + utils.juju_log('INFO', 'Missing key or auth in relation') |
1695 | + sys.exit(0) |
1696 | + |
1697 | + ceph.configure(service=SERVICE_NAME, key=key, auth=auth) |
1698 | + |
1699 | + if cluster.eligible_leader('res_rabbitmq_vip'): |
1700 | + rbd_img = utils.config_get('rbd-name') |
1701 | + rbd_size = utils.config_get('rbd-size') |
1702 | + sizemb = int(rbd_size.split('G')[0]) * 1024 |
1703 | + blk_device = '/dev/rbd/%s/%s' % (POOL_NAME, rbd_img) |
1704 | + ceph.ensure_ceph_storage(service=SERVICE_NAME, pool=POOL_NAME, |
1705 | + rbd_img=rbd_img, sizemb=sizemb, |
1706 | + fstype='ext4', mount_point=RABBIT_DIR, |
1707 | + blk_device=blk_device, |
1708 | + system_services=['rabbitmq-server']) |
1709 | + else: |
1710 | + utils.juju_log('INFO', |
1711 | + 'This is not the peer leader. Not configuring RBD.') |
1712 | + utils.juju_log('INFO', 'Stopping rabbitmq-server.') |
1713 | + utils.stop('rabbitmq-server') |
1714 | + |
1715 | + # If 'ha' relation has been made before the 'ceph' relation |
1716 | + # it is important to make sure the ha-relation data is being |
1717 | + # sent. |
1718 | + if utils.is_relation_made('ha'): |
1719 | + utils.juju_log('INFO', '*ha* relation exists. Triggering ha_joined()') |
1720 | + ha_joined() |
1721 | + else: |
1722 | + utils.juju_log('INFO', '*ha* relation does not exist.') |
1723 | + utils.juju_log('INFO', 'Finish Ceph Relation Changed') |
1724 | + |
1725 | + |
1726 | +def upgrade_charm(): |
1727 | + pre_install_hooks() |
1728 | + # Ensure older passwd files in /var/lib/juju are moved to |
1729 | + # /var/lib/rabbitmq which will end up replicated if clustered. |
1730 | + for f in [f for f in os.listdir('/var/lib/juju') |
1731 | + if os.path.isfile(os.path.join('/var/lib/juju', f))]: |
1732 | + if f.endswith('.passwd'): |
1733 | + s = os.path.join('/var/lib/juju', f) |
1734 | + d = os.path.join('/var/lib/rabbitmq', f) |
1735 | + utils.juju_log('INFO', |
1736 | + 'upgrade_charm: Migrating stored passwd' |
1737 | + ' from %s to %s.' % (s, d)) |
1738 | + shutil.move(s, d) |
1739 | + |
1740 | +MAN_PLUGIN = 'rabbitmq_management' |
1741 | + |
1742 | +def config_changed(): |
1743 | + if utils.config_get('management_plugin') == True: |
1744 | + rabbit.enable_plugin(MAN_PLUGIN) |
1745 | + utils.open_port(55672) |
1746 | + else: |
1747 | + rabbit.disable_plugin(MAN_PLUGIN) |
1748 | + utils.close_port(55672) |
1749 | + |
1750 | + if utils.config_get('ssl_enabled') == True: |
1751 | + ssl_key = utils.config_get('ssl_key') |
1752 | + ssl_cert = utils.config_get('ssl_cert') |
1753 | + ssl_port = utils.config_get('ssl_port') |
1754 | + if None in [ ssl_key, ssl_cert, ssl_port ]: |
1755 | + utils.juju_log('ERROR', |
1756 | + 'Please provide ssl_key, ssl_cert and ssl_port' |
1757 | + ' config when enabling SSL support') |
1758 | + sys.exit(1) |
1759 | + else: |
1760 | + rabbit.enable_ssl(ssl_key, ssl_cert, ssl_port) |
1761 | + utils.open_port(ssl_port) |
1762 | + else: |
1763 | + if os.path.exists(rabbit.RABBITMQ_CONF): |
1764 | + os.remove(rabbit.RABBITMQ_CONF) |
1765 | + utils.close_port(utils.config_get('ssl_port')) |
1766 | + |
1767 | + utils.restart('rabbitmq-server') |
1768 | + |
1769 | + |
1770 | +def pre_install_hooks(): |
1771 | + for f in glob.glob('exec.d/*/charm-pre-install'): |
1772 | + if os.path.isfile(f) and os.access(f, os.X_OK): |
1773 | + subprocess.check_call(['sh', '-c', f]) |
1774 | + |
1775 | +hooks = { |
1776 | + 'install': install, |
1777 | + 'amqp-relation-changed': amqp_changed, |
1778 | + 'cluster-relation-joined': cluster_joined, |
1779 | + 'cluster-relation-changed': cluster_changed, |
1780 | + 'ha-relation-joined': ha_joined, |
1781 | + 'ha-relation-changed': ha_changed, |
1782 | + 'ceph-relation-joined': ceph_joined, |
1783 | + 'ceph-relation-changed': ceph_changed, |
1784 | + 'upgrade-charm': upgrade_charm, |
1785 | + 'config-changed': config_changed |
1786 | +} |
1787 | + |
1788 | +utils.do_hooks(hooks) |
1789 | |
1790 | === removed symlink 'hooks/start' |
1791 | === target was u'rabbitmq-relations' |
1792 | === removed symlink 'hooks/stop' |
1793 | === target was u'rabbitmq-relations' |
1794 | === modified symlink 'hooks/upgrade-charm' |
1795 | === target changed u'rabbitmq-relations' => u'rabbitmq_server_relations.py' |
1796 | === modified file 'metadata.yaml' |
1797 | --- metadata.yaml 2013-04-22 19:38:52 +0000 |
1798 | +++ metadata.yaml 2013-06-06 15:35:31 +0000 |
1799 | @@ -9,9 +9,15 @@ |
1800 | provides: |
1801 | amqp: |
1802 | interface: rabbitmq |
1803 | +requires: |
1804 | nrpe-external-master: |
1805 | interface: nrpe-external-master |
1806 | scope: container |
1807 | + ha: |
1808 | + interface: hacluster |
1809 | + scope: container |
1810 | + ceph: |
1811 | + interface: ceph-client |
1812 | peers: |
1813 | cluster: |
1814 | - interface: rabbitmq |
1815 | + interface: rabbitmq-ha |
1816 | |
1817 | === modified file 'revision' |
1818 | --- revision 2013-03-13 18:36:42 +0000 |
1819 | +++ revision 2013-06-06 15:35:31 +0000 |
1820 | @@ -1,1 +1,1 @@ |
1821 | -38 |
1822 | +95 |
1823 | |
1824 | === added directory 'scripts' |
1825 | === added file 'scripts/add_to_cluster' |
1826 | --- scripts/add_to_cluster 1970-01-01 00:00:00 +0000 |
1827 | +++ scripts/add_to_cluster 2013-06-06 15:35:31 +0000 |
1828 | @@ -0,0 +1,13 @@ |
1829 | +#!/bin/bash |
1830 | +service corosync start || /bin/true |
1831 | +sleep 2 |
1832 | +while ! service pacemaker start; do |
1833 | + echo "Attempting to start pacemaker" |
1834 | + sleep 1; |
1835 | +done; |
1836 | +crm node online |
1837 | +sleep 2 |
1838 | +while crm status | egrep -q 'Stopped$'; do |
1839 | + echo "Waiting for nodes to come online" |
1840 | + sleep 1 |
1841 | +done |
1842 | |
1843 | === added file 'scripts/remove_from_cluster' |
1844 | --- scripts/remove_from_cluster 1970-01-01 00:00:00 +0000 |
1845 | +++ scripts/remove_from_cluster 2013-06-06 15:35:31 +0000 |
1846 | @@ -0,0 +1,4 @@ |
1847 | +#!/bin/bash |
1848 | +crm node standby |
1849 | +service pacemaker stop |
1850 | +service corosync stop |
1851 | |
1852 | === added directory 'templates' |
1853 | === added file 'templates/rabbitmq.config' |
1854 | --- templates/rabbitmq.config 1970-01-01 00:00:00 +0000 |
1855 | +++ templates/rabbitmq.config 2013-06-06 15:35:31 +0000 |
1856 | @@ -0,0 +1,10 @@ |
1857 | +[ |
1858 | + {rabbit, [ |
1859 | + {ssl_listeners, [{{ ssl_port }}]}, |
1860 | + {ssl_options, [ |
1861 | + {certfile, "{{ ssl_cert_file }}"}, |
1862 | + {keyfile, "{{ ssl_key_file }}"} |
1863 | + ]}, |
1864 | + {tcp_listeners, [5672]} |
1865 | + ]} |
1866 | +]. |
1867 | \ No newline at end of file |
In the shell version, amqp_changed() sets a "hostname" in the end:

    juju-log "rabbitmq-server: Returning credentials for $RABBIT_USER@$HOSTNAME"
    relation-set hostname=`unit-get private-address`
    relation-set password=$PASSW

The python version doesn't, it only sets the password:

    relation_settings = {
        'password': password
    }

And the vip if it's clustered. It should set the hostname too.
That being said, "hostname" is a somewhat bad name for this. Just looking at it you never know which hostname it's talking about. Maybe something like "rabbit_hostname" would be better? Just a thought, as this would break compatibility with consumers of this charm (unless both are set).