Merge lp:~gandelman-a/charm-helpers/ha_os_cleanup into lp:charm-helpers

Proposed by Adam Gandelman
Status: Merged
Merged at revision: 49
Proposed branch: lp:~gandelman-a/charm-helpers/ha_os_cleanup
Merge into: lp:charm-helpers
Diff against target: 2359 lines (+235/-1710)
16 files modified
charmhelpers/contrib/hahelpers/IMPORT (+0/-7)
charmhelpers/contrib/hahelpers/apache.py (+13/-151)
charmhelpers/contrib/hahelpers/ceph.py (+72/-50)
charmhelpers/contrib/hahelpers/cluster.py (+15/-13)
charmhelpers/contrib/hahelpers/haproxy_utils.py (+0/-55)
charmhelpers/contrib/hahelpers/utils.py (+0/-333)
charmhelpers/contrib/openstack/IMPORT (+0/-9)
charmhelpers/contrib/openstack/nova/essex (+0/-43)
charmhelpers/contrib/openstack/nova/folsom (+0/-81)
charmhelpers/contrib/openstack/nova/nova-common (+0/-147)
charmhelpers/contrib/openstack/openstack-common (+0/-781)
charmhelpers/contrib/openstack/utils.py (+2/-5)
tests/contrib/hahelpers/test_apache_utils.py (+44/-0)
tests/contrib/hahelpers/test_ceph_utils.py (+59/-0)
tests/contrib/hahelpers/test_cluster_utils.py (+2/-2)
tests/contrib/openstack/test_openstack_utils.py (+28/-33)
To merge this branch: bzr merge lp:~gandelman-a/charm-helpers/ha_os_cleanup
Reviewer Review Type Date Requested Status
James Page Approve
Adam Gandelman (community) Needs Resubmitting
Review via email: mp+174028@code.launchpad.net

Commit message

Removes use of local helpers from hahelpers (from code that's still in use).
Drops a bunch of unused shell helper code from contrib.openstack.
Avoids installing packages at module load time.

Description of the change

This cleans up the hahelpers and openstack helpers a bit. Anything that still relies on localized helper code is likely unused and will be dropped from the helpers once the OpenStack templating work lands.

52. By Adam Gandelman

hahelpers lint cleanup.

Revision history for this message
James Page (james-page) wrote :

Hi Adam

Nice work!

Feedback:

1) hookenv.log as juju_log

Nice idea; however some calls have the level/message ordered incorrectly:

            juju_log('INFO', 'Deferring action to CRM leader.')

I found quite a few instances of this.
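For reference, the mix-up in (1) comes from the argument-order flip between the old local helper and the new core one. The stubs below are not the real charm-helpers code; they mirror only the two signatures visible in this diff, so the bad and good call sites can be compared without a juju environment:

```python
def juju_log(severity, message):
    # old contrib.hahelpers.utils signature: severity first
    return (severity, message)

def log(message, level=None):
    # new charmhelpers.core.hookenv signature: message first
    return (level, message)

# A mechanical s/juju_log/log/ keeps the old argument order, so the
# level string becomes the message and vice versa:
wrong = log('INFO', 'Deferring action to CRM leader.')

# Corrected call, as in the updated cluster.py:
right = log('Deferring action to CRM leader.', level='INFO')
```

With the stubs above, `wrong` carries 'INFO' as the message body while `right` logs the intended message at INFO level.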

2) utils.install/utils.restart

I found quite a bit of use of these two deprecated helpers; can we switch to using core.host now? Not sure whether that was meant to be in this branch or not.
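To make the swap in (2) concrete, here is a sketch of the two calling conventions. The real helpers shell out to apt-get; these stand-ins only build the command so the difference is visible. The old signature is copied from the removed utils.py below; the new one is inferred from the `apt_install('ceph-common', fatal=True)` call in this diff, so treat the `fatal` handling as an assumption:

```python
def install(*pkgs):
    # removed contrib.hahelpers.utils.install: varargs, no error option
    return ['apt-get', '-y', 'install'] + list(pkgs)

def apt_install(packages, fatal=False):
    # core.host-style helper: accepts a str or a list, explicit fatal
    # flag (whether apt errors raise is the caller's choice, not implied)
    if isinstance(packages, str):
        packages = [packages]
    return ['apt-get', '-y', 'install'] + list(packages)

old_cmd = install('apache2')
new_cmd = apt_install('apache2', fatal=True)
```

Both build the same apt-get invocation; the core helper just makes failure handling explicit instead of always raising.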

3) ceph.py/haproxy.py

Are these still in use? They're using utils exclusively.

review: Needs Fixing
53. By Adam Gandelman

Fix log vs juju_log usage.

54. By Adam Gandelman

Drop unused code from apache.py.

55. By Adam Gandelman

Drop haproxy.py, this will now be handled by template contexts.

56. By Adam Gandelman

ceph.py: Remove execute() in favor of direct subprocess usage. Use host.log instead of local juju_log.

57. By Adam Gandelman

Drop hahelpers.utils. Add some ceph tests for code that's currently used.

58. By Adam Gandelman

apache.py: update log usage.

Revision history for this message
Adam Gandelman (gandelman-a) wrote :

Hey James- Thanks for looking at this.

Updated:

- Updated logging

- I've dropped utils.py entirely and updated the hahelpers to use common helpers. One local helper, utils.running(), has no comparable core helper; I've proposed it for inclusion via the following branch and moved it from utils.py to ceph.py in the meantime:

   https://code.launchpad.net/~gandelman-a/charm-helpers/service_running/+merge/174261

- I dropped old code from apache.py and dropped haproxy.py entirely. Added some tests for code in ceph.py that's currently in use by charms that sync from lp:charm-helpers. There's still some unused code in ceph.py that may be worth keeping if we want to move mysql + rabbitmq (and other RBD users) to use charm-helpers.
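The interim running() helper can be exercised in tests without upstart or a real `service` command by stubbing out check_output. This is a sketch only, assuming the helper's logic as it appears in the ceph.py hunk below (the existing test modules use mock; plain stubs work the same way):

```python
from subprocess import check_output, CalledProcessError

def running(service):
    # same logic as the interim helper added to ceph.py in this branch
    try:
        output = check_output(['service', service, 'status'])
    except CalledProcessError:
        return False
    return 'start/running' in output or 'is running' in output

# Stubs standing in for `service <name> status` so both branches run:
def _up(cmd):
    return 'ceph-osd start/running, process 1234'

def _down(cmd):
    raise CalledProcessError(1, cmd)

check_output = _up           # rebind the module-global the helper uses
was_up = running('ceph-osd')

check_output = _down
was_down = running('ceph-osd')
```

The same two cases are what a mock.patch-based unit test would cover: upstart reporting start/running, and the status command exiting non-zero for a stopped or unknown service.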

review: Needs Resubmitting
Revision history for this message
James Page (james-page) wrote :

I have some plans to rework the ceph helper into a generic storage helper.

review: Approve

Preview Diff

1=== removed file 'charmhelpers/contrib/hahelpers/IMPORT'
2--- charmhelpers/contrib/hahelpers/IMPORT 2013-05-30 23:26:24 +0000
3+++ charmhelpers/contrib/hahelpers/IMPORT 1970-01-01 00:00:00 +0000
4@@ -1,7 +0,0 @@
5-Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers
6-
7-ha-helpers/lib/apache_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/apache_utils.py
8-ha-helpers/lib/cluster_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/cluster_utils.py
9-ha-helpers/lib/ceph_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/ceph_utils.py
10-ha-helpers/lib/haproxy_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/haproxy_utils.py
11-ha-helpers/lib/utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/utils.py
12
13=== renamed file 'charmhelpers/contrib/hahelpers/apache_utils.py' => 'charmhelpers/contrib/hahelpers/apache.py'
14--- charmhelpers/contrib/hahelpers/apache_utils.py 2013-06-12 08:20:24 +0000
15+++ charmhelpers/contrib/hahelpers/apache.py 2013-07-11 19:29:39 +0000
16@@ -8,34 +8,24 @@
17 # Adam Gandelman <adamg@ubuntu.com>
18 #
19
20-from utils import (
21+import subprocess
22+
23+from charmhelpers.core.hookenv import (
24+ config as config_get,
25+ relation_get,
26 relation_ids,
27- relation_list,
28- relation_get,
29- render_template,
30- juju_log,
31- config_get,
32- install,
33- get_host_ip,
34- restart
35- )
36-from cluster_utils import https
37-
38-import os
39-import subprocess
40-from base64 import b64decode
41-
42-APACHE_SITE_DIR = "/etc/apache2/sites-available"
43-SITE_TEMPLATE = "apache2_site.tmpl"
44-RELOAD_CHECK = "To activate the new configuration"
45+ related_units as relation_list,
46+ log,
47+ INFO,
48+)
49
50
51 def get_cert():
52 cert = config_get('ssl_cert')
53 key = config_get('ssl_key')
54 if not (cert and key):
55- juju_log('INFO',
56- "Inspecting identity-service relations for SSL certificate.")
57+ log("Inspecting identity-service relations for SSL certificate.",
58+ level=INFO)
59 cert = key = None
60 for r_id in relation_ids('identity-service'):
61 for unit in relation_list(r_id):
62@@ -50,8 +40,8 @@
63
64 def get_ca_cert():
65 ca_cert = None
66- juju_log('INFO',
67- "Inspecting identity-service relations for CA SSL certificate.")
68+ log("Inspecting identity-service relations for CA SSL certificate.",
69+ level=INFO)
70 for r_id in relation_ids('identity-service'):
71 for unit in relation_list(r_id):
72 if not ca_cert:
73@@ -66,131 +56,3 @@
74 'w') as crt:
75 crt.write(ca_cert)
76 subprocess.check_call(['update-ca-certificates', '--fresh'])
77-
78-
79-def enable_https(port_maps, namespace, cert, key, ca_cert=None):
80- '''
81- For a given number of port mappings, configures apache2
82- HTTPs local reverse proxying using certficates and keys provided in
83- either configuration data (preferred) or relation data. Assumes ports
84- are not in use (calling charm should ensure that).
85-
86- port_maps: dict: external to internal port mappings
87- namespace: str: name of charm
88- '''
89- def _write_if_changed(path, new_content):
90- content = None
91- if os.path.exists(path):
92- with open(path, 'r') as f:
93- content = f.read().strip()
94- if content != new_content:
95- with open(path, 'w') as f:
96- f.write(new_content)
97- return True
98- else:
99- return False
100-
101- juju_log('INFO', "Enabling HTTPS for port mappings: {}".format(port_maps))
102- http_restart = False
103-
104- if cert:
105- cert = b64decode(cert)
106- if key:
107- key = b64decode(key)
108- if ca_cert:
109- ca_cert = b64decode(ca_cert)
110-
111- if not cert and not key:
112- juju_log('ERROR',
113- "Expected but could not find SSL certificate data, not "
114- "configuring HTTPS!")
115- return False
116-
117- install('apache2')
118- if RELOAD_CHECK in subprocess.check_output(['a2enmod', 'ssl',
119- 'proxy', 'proxy_http']):
120- http_restart = True
121-
122- ssl_dir = os.path.join('/etc/apache2/ssl', namespace)
123- if not os.path.exists(ssl_dir):
124- os.makedirs(ssl_dir)
125-
126- if (_write_if_changed(os.path.join(ssl_dir, 'cert'), cert)):
127- http_restart = True
128- if (_write_if_changed(os.path.join(ssl_dir, 'key'), key)):
129- http_restart = True
130- os.chmod(os.path.join(ssl_dir, 'key'), 0600)
131-
132- install_ca_cert(ca_cert)
133-
134- sites_dir = '/etc/apache2/sites-available'
135- for ext_port, int_port in port_maps.items():
136- juju_log('INFO',
137- 'Creating apache2 reverse proxy vhost'
138- ' for {}:{}'.format(ext_port,
139- int_port))
140- site = "{}_{}".format(namespace, ext_port)
141- site_path = os.path.join(sites_dir, site)
142- with open(site_path, 'w') as fsite:
143- context = {
144- "ext": ext_port,
145- "int": int_port,
146- "namespace": namespace,
147- "private_address": get_host_ip()
148- }
149- fsite.write(render_template(SITE_TEMPLATE,
150- context))
151-
152- if RELOAD_CHECK in subprocess.check_output(['a2ensite', site]):
153- http_restart = True
154-
155- if http_restart:
156- restart('apache2')
157-
158- return True
159-
160-
161-def disable_https(port_maps, namespace):
162- '''
163- Ensure HTTPS reverse proxying is disables for given port mappings
164-
165- port_maps: dict: of ext -> int port mappings
166- namespace: str: name of chamr
167- '''
168- juju_log('INFO', 'Ensuring HTTPS disabled for {}'.format(port_maps))
169-
170- if (not os.path.exists('/etc/apache2') or
171- not os.path.exists(os.path.join('/etc/apache2/ssl', namespace))):
172- return
173-
174- http_restart = False
175- for ext_port in port_maps.keys():
176- if os.path.exists(os.path.join(APACHE_SITE_DIR,
177- "{}_{}".format(namespace,
178- ext_port))):
179- juju_log('INFO',
180- "Disabling HTTPS reverse proxy"
181- " for {} {}.".format(namespace,
182- ext_port))
183- if (RELOAD_CHECK in
184- subprocess.check_output(['a2dissite',
185- '{}_{}'.format(namespace,
186- ext_port)])):
187- http_restart = True
188-
189- if http_restart:
190- restart(['apache2'])
191-
192-
193-def setup_https(port_maps, namespace, cert, key, ca_cert=None):
194- '''
195- Ensures HTTPS is either enabled or disabled for given port
196- mapping.
197-
198- port_maps: dict: of ext -> int port mappings
199- namespace: str: name of charm
200- '''
201- if not https:
202- disable_https(port_maps, namespace)
203- else:
204- enable_https(port_maps, namespace, cert, key, ca_cert)
205
206=== renamed file 'charmhelpers/contrib/hahelpers/ceph_utils.py' => 'charmhelpers/contrib/hahelpers/ceph.py'
207--- charmhelpers/contrib/hahelpers/ceph_utils.py 2013-06-12 08:20:24 +0000
208+++ charmhelpers/contrib/hahelpers/ceph.py 2013-07-11 19:29:39 +0000
209@@ -9,10 +9,31 @@
210 #
211
212 import commands
213-import subprocess
214 import os
215 import shutil
216-import utils
217+
218+from subprocess import (
219+ check_call,
220+ check_output,
221+ CalledProcessError
222+)
223+
224+from charmhelpers.core.hookenv import (
225+ relation_get,
226+ relation_ids,
227+ related_units,
228+ log,
229+ INFO,
230+)
231+
232+from charmhelpers.core.host import (
233+ apt_install,
234+ mount,
235+ mounts,
236+ service_start,
237+ service_stop,
238+ umount,
239+)
240
241 KEYRING = '/etc/ceph/ceph.client.%s.keyring'
242 KEYFILE = '/etc/ceph/ceph.client.%s.key'
243@@ -24,23 +45,30 @@
244 """
245
246
247-def execute(cmd):
248- subprocess.check_call(cmd)
249-
250-
251-def execute_shell(cmd):
252- subprocess.check_call(cmd, shell=True)
253+def running(service):
254+ # this local util can be dropped as soon the following branch lands
255+ # in lp:charm-helpers
256+ # https://code.launchpad.net/~gandelman-a/charm-helpers/service_running/
257+ try:
258+ output = check_output(['service', service, 'status'])
259+ except CalledProcessError:
260+ return False
261+ else:
262+ if ("start/running" in output or "is running" in output):
263+ return True
264+ else:
265+ return False
266
267
268 def install():
269 ceph_dir = "/etc/ceph"
270 if not os.path.isdir(ceph_dir):
271 os.mkdir(ceph_dir)
272- utils.install('ceph-common')
273+ apt_install('ceph-common', fatal=True)
274
275
276 def rbd_exists(service, pool, rbd_img):
277- (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\
278+ (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %
279 (service, pool))
280 return rbd_img in out
281
282@@ -56,8 +84,8 @@
283 service,
284 '--pool',
285 pool
286- ]
287- execute(cmd)
288+ ]
289+ check_call(cmd)
290
291
292 def pool_exists(service, name):
293@@ -72,8 +100,8 @@
294 service,
295 'mkpool',
296 name
297- ]
298- execute(cmd)
299+ ]
300+ check_call(cmd)
301
302
303 def keyfile_path(service):
304@@ -87,35 +115,34 @@
305 def create_keyring(service, key):
306 keyring = keyring_path(service)
307 if os.path.exists(keyring):
308- utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
309+ log('ceph: Keyring exists at %s.' % keyring, level=INFO)
310 cmd = [
311 'ceph-authtool',
312 keyring,
313 '--create-keyring',
314 '--name=client.%s' % service,
315 '--add-key=%s' % key
316- ]
317- execute(cmd)
318- utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
319+ ]
320+ check_call(cmd)
321+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
322
323
324 def create_key_file(service, key):
325 # create a file containing the key
326 keyfile = keyfile_path(service)
327 if os.path.exists(keyfile):
328- utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
329+ log('ceph: Keyfile exists at %s.' % keyfile, level=INFO)
330 fd = open(keyfile, 'w')
331 fd.write(key)
332 fd.close()
333- utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
334+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
335
336
337 def get_ceph_nodes():
338 hosts = []
339- for r_id in utils.relation_ids('ceph'):
340- for unit in utils.relation_list(r_id):
341- hosts.append(utils.relation_get('private-address',
342- unit=unit, rid=r_id))
343+ for r_id in relation_ids('ceph'):
344+ for unit in related_units(r_id):
345+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
346 return hosts
347
348
349@@ -144,26 +171,24 @@
350 service,
351 '--secret',
352 keyfile_path(service),
353- ]
354- execute(cmd)
355+ ]
356+ check_call(cmd)
357
358
359 def filesystem_mounted(fs):
360- return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
361+ return fs in [f for m, f in mounts()]
362
363
364 def make_filesystem(blk_device, fstype='ext4'):
365- utils.juju_log('INFO',
366- 'ceph: Formatting block device %s as filesystem %s.' %\
367- (blk_device, fstype))
368+ log('ceph: Formatting block device %s as filesystem %s.' %
369+ (blk_device, fstype), level=INFO)
370 cmd = ['mkfs', '-t', fstype, blk_device]
371- execute(cmd)
372+ check_call(cmd)
373
374
375 def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
376 # mount block device into /mnt
377- cmd = ['mount', '-t', fstype, blk_device, '/mnt']
378- execute(cmd)
379+ mount(blk_device, '/mnt')
380
381 # copy data to /mnt
382 try:
383@@ -172,29 +197,27 @@
384 pass
385
386 # umount block device
387- cmd = ['umount', '/mnt']
388- execute(cmd)
389+ umount('/mnt')
390
391 _dir = os.stat(data_src_dst)
392 uid = _dir.st_uid
393 gid = _dir.st_gid
394
395 # re-mount where the data should originally be
396- cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
397- execute(cmd)
398+ mount(blk_device, data_src_dst, persist=True)
399
400 # ensure original ownership of new mount.
401 cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
402- execute(cmd)
403+ check_call(cmd)
404
405
406 # TODO: re-use
407 def modprobe_kernel_module(module):
408- utils.juju_log('INFO', 'Loading kernel module')
409+ log('ceph: Loading kernel module', level=INFO)
410 cmd = ['modprobe', module]
411- execute(cmd)
412+ check_call(cmd)
413 cmd = 'echo %s >> /etc/modules' % module
414- execute_shell(cmd)
415+ check_call(cmd, shell=True)
416
417
418 def copy_files(src, dst, symlinks=False, ignore=None):
419@@ -222,15 +245,15 @@
420 """
421 # Ensure pool, RBD image, RBD mappings are in place.
422 if not pool_exists(service, pool):
423- utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
424+ log('ceph: Creating new pool %s.' % pool, level=INFO)
425 create_pool(service, pool)
426
427 if not rbd_exists(service, pool, rbd_img):
428- utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
429+ log('ceph: Creating RBD image (%s).' % rbd_img, level=INFO)
430 create_rbd_image(service, pool, rbd_img, sizemb)
431
432 if not image_mapped(rbd_img):
433- utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
434+ log('ceph: Mapping RBD Image as a Block Device.', level=INFO)
435 map_block_storage(service, pool, rbd_img)
436
437 # make file system
438@@ -244,13 +267,12 @@
439 make_filesystem(blk_device, fstype)
440
441 for svc in system_services:
442- if utils.running(svc):
443- utils.juju_log('INFO',
444- 'Stopping services %s prior to migrating '\
445- 'data' % svc)
446- utils.stop(svc)
447+ if running(svc):
448+ log('Stopping services %s prior to migrating data.' % svc,
449+ level=INFO)
450+ service_stop(svc)
451
452 place_data_on_ceph(service, blk_device, mount_point, fstype)
453
454 for svc in system_services:
455- utils.start(svc)
456+ service_start(svc)
457
458=== renamed file 'charmhelpers/contrib/hahelpers/cluster_utils.py' => 'charmhelpers/contrib/hahelpers/cluster.py'
459--- charmhelpers/contrib/hahelpers/cluster_utils.py 2013-07-09 18:05:11 +0000
460+++ charmhelpers/contrib/hahelpers/cluster.py 2013-07-11 19:29:39 +0000
461@@ -1,23 +1,25 @@
462 #
463 # Copyright 2012 Canonical Ltd.
464 #
465-# This file is sourced from lp:openstack-charm-helpers
466-#
467 # Authors:
468 # James Page <james.page@ubuntu.com>
469 # Adam Gandelman <adamg@ubuntu.com>
470 #
471
472-from utils import (
473- juju_log,
474+import subprocess
475+import os
476+
477+from socket import gethostname as get_unit_hostname
478+
479+from charmhelpers.core.hookenv import (
480+ log,
481 relation_ids,
482- relation_list,
483+ related_units as relation_list,
484 relation_get,
485- get_unit_hostname,
486- config_get
487+ config as config_get,
488+ INFO,
489+ ERROR,
490 )
491-import subprocess
492-import os
493
494
495 class HAIncompleteConfig(Exception):
496@@ -39,7 +41,7 @@
497 cmd = [
498 "crm", "resource",
499 "show", resource
500- ]
501+ ]
502 try:
503 status = subprocess.check_output(cmd)
504 except subprocess.CalledProcessError:
505@@ -71,12 +73,12 @@
506 def eligible_leader(resource):
507 if is_clustered():
508 if not is_leader(resource):
509- juju_log('INFO', 'Deferring action to CRM leader.')
510+ log('Deferring action to CRM leader.', level=INFO)
511 return False
512 else:
513 peers = peer_units()
514 if peers and not oldest_peer(peers):
515- juju_log('INFO', 'Deferring action to oldest service unit.')
516+ log('Deferring action to oldest service unit.', level=INFO)
517 return False
518 return True
519
520@@ -153,7 +155,7 @@
521 missing = []
522 [missing.append(s) for s, v in conf.iteritems() if v is None]
523 if missing:
524- juju_log('Insufficient config data to configure hacluster.')
525+ log('Insufficient config data to configure hacluster.', level=ERROR)
526 raise HAIncompleteConfig
527 return conf
528
529
530=== removed file 'charmhelpers/contrib/hahelpers/haproxy_utils.py'
531--- charmhelpers/contrib/hahelpers/haproxy_utils.py 2013-06-12 08:20:24 +0000
532+++ charmhelpers/contrib/hahelpers/haproxy_utils.py 1970-01-01 00:00:00 +0000
533@@ -1,55 +0,0 @@
534-#
535-# Copyright 2012 Canonical Ltd.
536-#
537-# This file is sourced from lp:openstack-charm-helpers
538-#
539-# Authors:
540-# James Page <james.page@ubuntu.com>
541-# Adam Gandelman <adamg@ubuntu.com>
542-#
543-
544-from utils import (
545- relation_ids,
546- relation_list,
547- relation_get,
548- unit_get,
549- reload,
550- render_template
551- )
552-import os
553-
554-HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
555-HAPROXY_DEFAULT = '/etc/default/haproxy'
556-
557-
558-def configure_haproxy(service_ports):
559- '''
560- Configure HAProxy based on the current peers in the service
561- cluster using the provided port map:
562-
563- "swift": [ 8080, 8070 ]
564-
565- HAproxy will also be reloaded/started if required
566-
567- service_ports: dict: dict of lists of [ frontend, backend ]
568- '''
569- cluster_hosts = {}
570- cluster_hosts[os.getenv('JUJU_UNIT_NAME').replace('/', '-')] = \
571- unit_get('private-address')
572- for r_id in relation_ids('cluster'):
573- for unit in relation_list(r_id):
574- cluster_hosts[unit.replace('/', '-')] = \
575- relation_get(attribute='private-address',
576- rid=r_id,
577- unit=unit)
578- context = {
579- 'units': cluster_hosts,
580- 'service_ports': service_ports
581- }
582- with open(HAPROXY_CONF, 'w') as f:
583- f.write(render_template(os.path.basename(HAPROXY_CONF),
584- context))
585- with open(HAPROXY_DEFAULT, 'w') as f:
586- f.write('ENABLED=1')
587-
588- reload('haproxy')
589
590=== removed file 'charmhelpers/contrib/hahelpers/utils.py'
591--- charmhelpers/contrib/hahelpers/utils.py 2013-06-12 08:20:24 +0000
592+++ charmhelpers/contrib/hahelpers/utils.py 1970-01-01 00:00:00 +0000
593@@ -1,333 +0,0 @@
594-#
595-# Copyright 2012 Canonical Ltd.
596-#
597-# This file is sourced from lp:openstack-charm-helpers
598-#
599-# Authors:
600-# James Page <james.page@ubuntu.com>
601-# Paul Collins <paul.collins@canonical.com>
602-# Adam Gandelman <adamg@ubuntu.com>
603-#
604-
605-import json
606-import os
607-import subprocess
608-import socket
609-import sys
610-
611-
612-def do_hooks(hooks):
613- hook = os.path.basename(sys.argv[0])
614-
615- try:
616- hook_func = hooks[hook]
617- except KeyError:
618- juju_log('INFO',
619- "This charm doesn't know how to handle '{}'.".format(hook))
620- else:
621- hook_func()
622-
623-
624-def install(*pkgs):
625- cmd = [
626- 'apt-get',
627- '-y',
628- 'install'
629- ]
630- for pkg in pkgs:
631- cmd.append(pkg)
632- subprocess.check_call(cmd)
633-
634-TEMPLATES_DIR = 'templates'
635-
636-try:
637- import jinja2
638-except ImportError:
639- install('python-jinja2')
640- import jinja2
641-
642-try:
643- import dns.resolver
644-except ImportError:
645- install('python-dnspython')
646- import dns.resolver
647-
648-
649-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
650- templates = jinja2.Environment(
651- loader=jinja2.FileSystemLoader(template_dir)
652- )
653- template = templates.get_template(template_name)
654- return template.render(context)
655-
656-CLOUD_ARCHIVE = \
657-""" # Ubuntu Cloud Archive
658-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
659-"""
660-
661-CLOUD_ARCHIVE_POCKETS = {
662- 'folsom': 'precise-updates/folsom',
663- 'folsom/updates': 'precise-updates/folsom',
664- 'folsom/proposed': 'precise-proposed/folsom',
665- 'grizzly': 'precise-updates/grizzly',
666- 'grizzly/updates': 'precise-updates/grizzly',
667- 'grizzly/proposed': 'precise-proposed/grizzly'
668- }
669-
670-
671-def configure_source():
672- source = str(config_get('openstack-origin'))
673- if not source:
674- return
675- if source.startswith('ppa:'):
676- cmd = [
677- 'add-apt-repository',
678- source
679- ]
680- subprocess.check_call(cmd)
681- if source.startswith('cloud:'):
682- # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
683- # cloud:precise-folsom/updates or cloud:precise-folsom/proposed
684- install('ubuntu-cloud-keyring')
685- pocket = source.split(':')[1]
686- pocket = pocket.split('-')[1]
687- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
688- apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
689- if source.startswith('deb'):
690- l = len(source.split('|'))
691- if l == 2:
692- (apt_line, key) = source.split('|')
693- cmd = [
694- 'apt-key',
695- 'adv', '--keyserver keyserver.ubuntu.com',
696- '--recv-keys', key
697- ]
698- subprocess.check_call(cmd)
699- elif l == 1:
700- apt_line = source
701-
702- with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
703- apt.write(apt_line + "\n")
704- cmd = [
705- 'apt-get',
706- 'update'
707- ]
708- subprocess.check_call(cmd)
709-
710-# Protocols
711-TCP = 'TCP'
712-UDP = 'UDP'
713-
714-
715-def expose(port, protocol='TCP'):
716- cmd = [
717- 'open-port',
718- '{}/{}'.format(port, protocol)
719- ]
720- subprocess.check_call(cmd)
721-
722-
723-def juju_log(severity, message):
724- cmd = [
725- 'juju-log',
726- '--log-level', severity,
727- message
728- ]
729- subprocess.check_call(cmd)
730-
731-
732-cache = {}
733-
734-
735-def cached(func):
736- def wrapper(*args, **kwargs):
737- global cache
738- key = str((func, args, kwargs))
739- try:
740- return cache[key]
741- except KeyError:
742- res = func(*args, **kwargs)
743- cache[key] = res
744- return res
745- return wrapper
746-
747-
748-@cached
749-def relation_ids(relation):
750- cmd = [
751- 'relation-ids',
752- relation
753- ]
754- result = str(subprocess.check_output(cmd)).split()
755- if result == "":
756- return None
757- else:
758- return result
759-
760-
761-@cached
762-def relation_list(rid):
763- cmd = [
764- 'relation-list',
765- '-r', rid,
766- ]
767- result = str(subprocess.check_output(cmd)).split()
768- if result == "":
769- return None
770- else:
771- return result
772-
773-
774-@cached
775-def relation_get(attribute, unit=None, rid=None):
776- cmd = [
777- 'relation-get',
778- ]
779- if rid:
780- cmd.append('-r')
781- cmd.append(rid)
782- cmd.append(attribute)
783- if unit:
784- cmd.append(unit)
785- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
786- if value == "":
787- return None
788- else:
789- return value
790-
791-
792-@cached
793-def relation_get_dict(relation_id=None, remote_unit=None):
794- """Obtain all relation data as dict by way of JSON"""
795- cmd = [
796- 'relation-get', '--format=json'
797- ]
798- if relation_id:
799- cmd.append('-r')
800- cmd.append(relation_id)
801- if remote_unit:
802- remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
803- os.environ['JUJU_REMOTE_UNIT'] = remote_unit
804- j = subprocess.check_output(cmd)
805- if remote_unit and remote_unit_orig:
806- os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
807- d = json.loads(j)
808- settings = {}
809- # convert unicode to strings
810- for k, v in d.iteritems():
811- settings[str(k)] = str(v)
812- return settings
813-
814-
815-def relation_set(**kwargs):
816- cmd = [
817- 'relation-set'
818- ]
819- args = []
820- for k, v in kwargs.items():
821- if k == 'rid':
822- if v:
823- cmd.append('-r')
824- cmd.append(v)
825- else:
826- args.append('{}={}'.format(k, v))
827- cmd += args
828- subprocess.check_call(cmd)
829-
830-
831-@cached
832-def unit_get(attribute):
833- cmd = [
834- 'unit-get',
835- attribute
836- ]
837- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
838- if value == "":
839- return None
840- else:
841- return value
842-
843-
844-@cached
845-def config_get(attribute):
846- cmd = [
847- 'config-get',
848- '--format',
849- 'json',
850- ]
851- out = subprocess.check_output(cmd).strip() # IGNORE:E1103
852- cfg = json.loads(out)
853-
854- try:
855- return cfg[attribute]
856- except KeyError:
857- return None
858-
859-
860-@cached
861-def get_unit_hostname():
862- return socket.gethostname()
863-
864-
865-@cached
866-def get_host_ip(hostname=None):
867- hostname = hostname or unit_get('private-address')
868- try:
869- # Test to see if already an IPv4 address
870- socket.inet_aton(hostname)
871- return hostname
872- except socket.error:
873- answers = dns.resolver.query(hostname, 'A')
874- if answers:
875- return answers[0].address
876- return None
877-
878-
879-def _svc_control(service, action):
880- subprocess.check_call(['service', service, action])
881-
882-
883-def restart(*services):
884- for service in services:
885- _svc_control(service, 'restart')
886-
887-
888-def stop(*services):
889- for service in services:
890- _svc_control(service, 'stop')
891-
892-
893-def start(*services):
894- for service in services:
895- _svc_control(service, 'start')
896-
897-
898-def reload(*services):
899- for service in services:
900- try:
901- _svc_control(service, 'reload')
902- except subprocess.CalledProcessError:
903- # Reload failed - either service does not support reload
904- # or it was not running - restart will fixup most things
905- _svc_control(service, 'restart')
906-
907-
908-def running(service):
909- try:
910- output = subprocess.check_output(['service', service, 'status'])
911- except subprocess.CalledProcessError:
912- return False
913- else:
914- if ("start/running" in output or
915- "is running" in output):
916- return True
917- else:
918- return False
919-
920-
921-def is_relation_made(relation, key='private-address'):
922- for r_id in (relation_ids(relation) or []):
923- for unit in (relation_list(r_id) or []):
924- if relation_get(key, rid=r_id, unit=unit):
925- return True
926- return False
927
928=== removed file 'charmhelpers/contrib/openstack/IMPORT'
929--- charmhelpers/contrib/openstack/IMPORT 2013-05-30 23:26:24 +0000
930+++ charmhelpers/contrib/openstack/IMPORT 1970-01-01 00:00:00 +0000
931@@ -1,9 +0,0 @@
932-Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers
933-
934-ha-helpers/lib/openstack-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack-common
935-ha-helpers/lib/openstack_common.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack_common.py
936-ha-helpers/lib/nova -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova
937-ha-helpers/lib/nova/nova-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/nova-common
938-ha-helpers/lib/nova/grizzly -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/grizzly
939-ha-helpers/lib/nova/essex -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/essex
940-ha-helpers/lib/nova/folsom -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/folsom
941
942=== removed directory 'charmhelpers/contrib/openstack/nova'
943=== removed file 'charmhelpers/contrib/openstack/nova/essex'
944--- charmhelpers/contrib/openstack/nova/essex 2013-05-30 23:26:24 +0000
945+++ charmhelpers/contrib/openstack/nova/essex 1970-01-01 00:00:00 +0000
946@@ -1,43 +0,0 @@
947-#!/bin/bash -e
948-
949-# Essex-specific functions
950-
951-nova_set_or_update() {
952- # Set a config option in nova.conf or api-paste.ini, depending
953- # Defaults to updating nova.conf
954- local key=$1
955- local value=$2
956- local conf_file=$3
957- local pattern=""
958-
959- local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
960- local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
961- local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}
962- [[ -z $key ]] && juju-log "$CHARM set_or_update: value $value missing key" && exit 1
963- [[ -z $value ]] && juju-log "$CHARM set_or_update: key $key missing value" && exit 1
964- [[ -z "$conf_file" ]] && conf_file=$nova_conf
965-
966- case "$conf_file" in
967- "$nova_conf") match="\-\-$key="
968- pattern="--$key="
969- out=$pattern
970- ;;
971- "$api_conf"|"$libvirtd_conf") match="^$key = "
972- pattern="$match"
973- out="$key = "
974- ;;
975- *) error_out "ERROR: set_or_update: Invalid conf_file ($conf_file)"
976- esac
977-
978- cat $conf_file | grep "$match$value" >/dev/null &&
979- juju-log "$CHARM: $key=$value already in set in $conf_file" \
980- && return 0
981- if cat $conf_file | grep "$match" >/dev/null ; then
982- juju-log "$CHARM: Updating $conf_file, $key=$value"
983- sed -i "s|\($pattern\).*|\1$value|" $conf_file
984- else
985- juju-log "$CHARM: Setting new option $key=$value in $conf_file"
986- echo "$out$value" >>$conf_file
987- fi
988- CONFIG_CHANGED=True
989-}
990
991=== removed file 'charmhelpers/contrib/openstack/nova/folsom'
992--- charmhelpers/contrib/openstack/nova/folsom 2013-05-30 23:26:24 +0000
993+++ charmhelpers/contrib/openstack/nova/folsom 1970-01-01 00:00:00 +0000
994@@ -1,81 +0,0 @@
995-#!/bin/bash -e
996-
997-# Folsom-specific functions
998-
999-nova_set_or_update() {
1000- # TODO: This needs to be shared among folsom, grizzly and beyond.
1001- # Set a config option in nova.conf or api-paste.ini, depending
1002- # Defaults to updating nova.conf
1003- local key="$1"
1004- local value="$2"
1005- local conf_file="$3"
1006- local section="${4:-DEFAULT}"
1007-
1008- local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
1009- local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
1010- local quantum_conf=${QUANTUM_CONF:-/etc/quantum/quantum.conf}
1011- local quantum_api_conf=${QUANTUM_API_CONF:-/etc/quantum/api-paste.ini}
1012- local quantum_plugin_conf=${QUANTUM_PLUGIN_CONF:-/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini}
1013- local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}
1014-
1015- [[ -z $key ]] && juju-log "$CHARM: set_or_update: value $value missing key" && exit 1
1016- [[ -z $value ]] && juju-log "$CHARM: set_or_update: key $key missing value" && exit 1
1017-
1018- [[ -z "$conf_file" ]] && conf_file=$nova_conf
1019-
1020- local pattern=""
1021- case "$conf_file" in
1022- "$nova_conf") match="^$key="
1023- pattern="$key="
1024- out=$pattern
1025- ;;
1026- "$api_conf"|"$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf"| \
1027- "$libvirtd_conf")
1028- match="^$key = "
1029- pattern="$match"
1030- out="$key = "
1031- ;;
1032- *) juju-log "$CHARM ERROR: set_or_update: Invalid conf_file ($conf_file)"
1033- esac
1034-
1035- cat $conf_file | grep "$match$value" >/dev/null &&
1036-    juju-log "$CHARM: $key=$value already set in $conf_file" \
1037- && return 0
1038-
1039- case $conf_file in
1040- "$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf")
1041- python -c "
1042-import ConfigParser
1043-config = ConfigParser.RawConfigParser()
1044-config.read('$conf_file')
1045-config.set('$section','$key','$value')
1046-with open('$conf_file', 'wb') as configfile:
1047- config.write(configfile)
1048-"
1049- ;;
1050- *)
1051- if cat $conf_file | grep "$match" >/dev/null ; then
1052- juju-log "$CHARM: Updating $conf_file, $key=$value"
1053- sed -i "s|\($pattern\).*|\1$value|" $conf_file
1054- else
1055- juju-log "$CHARM: Setting new option $key=$value in $conf_file"
1056- echo "$out$value" >>$conf_file
1057- fi
1058- ;;
1059- esac
1060- CONFIG_CHANGED="True"
1061-}
1062-
1063-# Upgrade Helpers
1064-nova_pre_upgrade() {
1065- # Pre-upgrade helper. Caller should pass the version of OpenStack we are
1066- # upgrading from.
1067- return 0 # Nothing to do here, yet.
1068-}
1069-
1070-nova_post_upgrade() {
1071- # Post-upgrade helper. Caller should pass the version of OpenStack we are
1072- # upgrading from.
1073- juju-log "$CHARM: Running post-upgrade hook: $upgrade_from -> folsom."
1074- # nothing to do here yet.
1075-}
1076
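For the sectioned Quantum configs, the removed Folsom helper shells out to an inline `ConfigParser` script. The equivalent pure-Python form (a sketch on strings rather than files, with a hypothetical function name) looks like:

```python
import configparser
import io

def set_ini_option(text, section, key, value):
    """Set key=value under [section] in INI text, as the inline
    ConfigParser branch of the removed helper did against files."""
    config = configparser.RawConfigParser()
    config.read_string(text)
    if not config.has_section(section):
        config.add_section(section)
    config.set(section, key, value)
    out = io.StringIO()
    config.write(out)
    return out.getvalue()

plugin_conf = "[OVS]\ntenant_network_type = vlan\n"
updated = set_ini_option(plugin_conf, "OVS", "local_ip", "10.5.0.1")
```

Note the `section` argument defaults to `DEFAULT` in the shell helper; `configparser` treats `DEFAULT` specially (it is not a real section), so a faithful port would need to skip `add_section` for it.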
1077=== removed symlink 'charmhelpers/contrib/openstack/nova/grizzly'
1078=== target was u'folsom'
1079=== removed file 'charmhelpers/contrib/openstack/nova/nova-common'
1080--- charmhelpers/contrib/openstack/nova/nova-common 2013-05-30 23:26:24 +0000
1081+++ charmhelpers/contrib/openstack/nova/nova-common 1970-01-01 00:00:00 +0000
1082@@ -1,147 +0,0 @@
1083-#!/bin/bash -e
1084-
1085-# Common utility functions used across all nova charms.
1086-
1087-CONFIG_CHANGED=False
1088-
1089-# Load the common OpenStack helper library.
1090-if [[ -e $CHARM_DIR/lib/openstack-common ]] ; then
1091- . $CHARM_DIR/lib/openstack-common
1092-else
1093-    juju-log "Couldn't load $CHARM_DIR/lib/openstack-common." && exit 1
1094-fi
1095-
1096-set_or_update() {
1097- # Update config flags in nova.conf or api-paste.ini.
1098- # Config layout changed in Folsom, so this is now OpenStack release specific.
1099- local rel=$(get_os_codename_package "nova-common")
1100- . $CHARM_DIR/lib/nova/$rel
1101- nova_set_or_update $@
1102-}
1103-
1104-function set_config_flags() {
1105- # Set user-defined nova.conf flags from deployment config
1106- juju-log "$CHARM: Processing config-flags."
1107- flags=$(config-get config-flags)
1108- if [[ "$flags" != "None" && -n "$flags" ]] ; then
1109- for f in $(echo $flags | sed -e 's/,/ /g') ; do
1110- k=$(echo $f | cut -d= -f1)
1111- v=$(echo $f | cut -d= -f2)
1112- set_or_update "$k" "$v"
1113- done
1114- fi
1115-}
1116-
1117-configure_volume_service() {
1118- local svc="$1"
1119- local cur_vers="$(get_os_codename_package "nova-common")"
1120- case "$svc" in
1121- "cinder")
1122- set_or_update "volume_api_class" "nova.volume.cinder.API" ;;
1123- "nova-volume")
1124- # nova-volume only supported before grizzly.
1125- [[ "$cur_vers" == "essex" ]] || [[ "$cur_vers" == "folsom" ]] &&
1126- set_or_update "volume_api_class" "nova.volume.api.API"
1127- ;;
1128- *) juju-log "$CHARM ERROR - configure_volume_service: Invalid service $svc"
1129- return 1 ;;
1130- esac
1131-}
1132-
1133-function configure_network_manager {
1134- local manager="$1"
1135- echo "$CHARM: configuring $manager network manager"
1136- case $1 in
1137- "FlatManager")
1138- set_or_update "network_manager" "nova.network.manager.FlatManager"
1139- ;;
1140- "FlatDHCPManager")
1141- set_or_update "network_manager" "nova.network.manager.FlatDHCPManager"
1142-
1143- if [[ "$CHARM" == "nova-compute" ]] ; then
1144- local flat_interface=$(config-get flat-interface)
1145- local ec2_host=$(relation-get ec2_host)
1146-      set_or_update flat_interface "$flat_interface"
1147- set_or_update ec2_dmz_host "$ec2_host"
1148-
1149- # Ensure flat_interface has link.
1150- if ip link show $flat_interface >/dev/null 2>&1 ; then
1151- ip link set $flat_interface up
1152- fi
1153-
1154- # work around (LP: #1035172)
1155- if [[ -e /dev/vhost-net ]] ; then
1156- iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM \
1157- --checksum-fill
1158- fi
1159- fi
1160-
1161- ;;
1162- "Quantum")
1163- local local_ip=$(get_ip `unit-get private-address`)
1164- [[ -n $local_ip ]] || {
1165- juju-log "Unable to resolve local IP address"
1166- exit 1
1167- }
1168- set_or_update "network_api_class" "nova.network.quantumv2.api.API"
1169- set_or_update "quantum_auth_strategy" "keystone"
1170- set_or_update "core_plugin" "$QUANTUM_CORE_PLUGIN" "$QUANTUM_CONF"
1171- set_or_update "bind_host" "0.0.0.0" "$QUANTUM_CONF"
1172- if [ "$QUANTUM_PLUGIN" == "ovs" ]; then
1173- set_or_update "tenant_network_type" "gre" $QUANTUM_PLUGIN_CONF "OVS"
1174- set_or_update "enable_tunneling" "True" $QUANTUM_PLUGIN_CONF "OVS"
1175- set_or_update "tunnel_id_ranges" "1:1000" $QUANTUM_PLUGIN_CONF "OVS"
1176- set_or_update "local_ip" "$local_ip" $QUANTUM_PLUGIN_CONF "OVS"
1177- fi
1178- ;;
1179- *) juju-log "ERROR: Invalid network manager $1" && exit 1 ;;
1180- esac
1181-}
1182-
1183-function trigger_remote_service_restarts() {
1184- # Trigger a service restart on all other nova nodes that have a relation
1185- # via the cloud-controller interface.
1186-
1187- # possible relations to other nova services.
1188- local relations="cloud-compute nova-volume-service"
1189-
1190- for rel in $relations; do
1191- local r_ids=$(relation-ids $rel)
1192- for r_id in $r_ids ; do
1193- juju-log "$CHARM: Triggering a service restart on relation $r_id."
1194- relation-set -r $r_id restart-trigger=$(uuid)
1195- done
1196- done
1197-}
1198-
1199-do_openstack_upgrade() {
1200- # update openstack components to those provided by a new installation source
1201- # it is assumed the calling hook has confirmed that the upgrade is sane.
1202- local rel="$1"
1203- shift
1204- local packages=$@
1205-
1206- orig_os_rel=$(get_os_codename_package "nova-common")
1207- new_rel=$(get_os_codename_install_source "$rel")
1208-
1209- # Backup the config directory.
1210- local stamp=$(date +"%Y%m%d%M%S")
1211- tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR
1212-
1213- # load the release helper library for pre/post upgrade hooks specific to the
1214- # release we are upgrading to.
1215- . $CHARM_DIR/lib/nova/$new_rel
1216-
1217- # new release specific pre-upgrade hook
1218- nova_pre_upgrade "$orig_os_rel"
1219-
1220- # Setup apt repository access and kick off the actual package upgrade.
1221- configure_install_source "$rel"
1222- apt-get update
1223- DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confold -y \
1224- install --no-install-recommends $packages
1225-
1226-  # new release specific post-upgrade hook
1227- nova_post_upgrade "$orig_os_rel"
1228-
1229-}
1230
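`set_config_flags` above splits a comma-delimited `config-flags` string into key/value pairs before applying each one. A small Python sketch of that parsing step (hypothetical name; note that `partition` keeps everything after the first `=`, whereas the shell's `cut -d= -f2` would truncate values containing `=`):

```python
def parse_config_flags(flags):
    """Split a 'k1=v1,k2=v2' config-flags string into a dict,
    mirroring the loop in the removed set_config_flags helper."""
    if not flags or flags == "None":
        # juju reports an unset option as the string "None"
        return {}
    pairs = {}
    for item in flags.split(","):
        key, _, value = item.strip().partition("=")
        if key:
            pairs[key] = value
    return pairs

parsed = parse_config_flags("cpu_mode=host-passthrough,force_config_drive=true")
```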
1231=== removed file 'charmhelpers/contrib/openstack/openstack-common'
1232--- charmhelpers/contrib/openstack/openstack-common 2013-05-30 23:26:24 +0000
1233+++ charmhelpers/contrib/openstack/openstack-common 1970-01-01 00:00:00 +0000
1234@@ -1,781 +0,0 @@
1235-#!/bin/bash -e
1236-
1237-# Common utility functions used across all OpenStack charms.
1238-
1239-error_out() {
1240- juju-log "$CHARM ERROR: $@"
1241- exit 1
1242-}
1243-
1244-function service_ctl_status {
1245- # Return 0 if a service is running, 1 otherwise.
1246- local svc="$1"
1247- local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }')
1248- case $status in
1249- "start") return 0 ;;
1250- "stop") return 1 ;;
1251- *) error_out "Unexpected status of service $svc: $status" ;;
1252- esac
1253-}
1254-
1255-function service_ctl {
1256- # control a specific service, or all (as defined by $SERVICES)
1257- # service restarts will only occur depending on global $CONFIG_CHANGED,
1258- # which should be updated in charm's set_or_update().
1259- local config_changed=${CONFIG_CHANGED:-True}
1260- if [[ $1 == "all" ]] ; then
1261- ctl="$SERVICES"
1262- else
1263- ctl="$1"
1264- fi
1265- action="$2"
1266- if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then
1267- error_out "ERROR service_ctl: Not enough arguments"
1268- fi
1269-
1270- for i in $ctl ; do
1271- case $action in
1272- "start")
1273- service_ctl_status $i || service $i start ;;
1274- "stop")
1275- service_ctl_status $i && service $i stop || return 0 ;;
1276- "restart")
1277- if [[ "$config_changed" == "True" ]] ; then
1278- service_ctl_status $i && service $i restart || service $i start
1279- fi
1280- ;;
1281- esac
1282- if [[ $? != 0 ]] ; then
1283- juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action"
1284- fi
1285- done
1286- # all configs should have been reloaded on restart of all services, reset
1287- # flag if its being used.
1288- if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] &&
1289- [[ "$ctl" == "all" ]]; then
1290- CONFIG_CHANGED="False"
1291- fi
1292-}
1293-
1294-function configure_install_source {
1295- # Setup and configure installation source based on a config flag.
1296- local src="$1"
1297-
1298- # Default to installing from the main Ubuntu archive.
1299- [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0
1300-
1301- . /etc/lsb-release
1302-
1303- # standard 'ppa:someppa/name' format.
1304- if [[ "${src:0:4}" == "ppa:" ]] ; then
1305- juju-log "$CHARM: Configuring installation from custom src ($src)"
1306- add-apt-repository -y "$src" || error_out "Could not configure PPA access."
1307- return 0
1308- fi
1309-
1310- # standard 'deb http://url/ubuntu main' entries. gpg key ids must
1311- # be appended to the end of url after a |, ie:
1312- # 'deb http://url/ubuntu main|$GPGKEYID'
1313- if [[ "${src:0:3}" == "deb" ]] ; then
1314- juju-log "$CHARM: Configuring installation from custom src URL ($src)"
1315- if echo "$src" | grep -q "|" ; then
1316-    # gpg key id tagged to end of url followed by a |
1317- url=$(echo $src | cut -d'|' -f1)
1318- key=$(echo $src | cut -d'|' -f2)
1319- juju-log "$CHARM: Importing repository key: $key"
1320- apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \
1321- juju-log "$CHARM WARN: Could not import key from keyserver: $key"
1322- else
1323- juju-log "$CHARM No repository key specified."
1324- url="$src"
1325- fi
1326- echo "$url" > /etc/apt/sources.list.d/juju_deb.list
1327- return 0
1328- fi
1329-
1330- # Cloud Archive
1331- if [[ "${src:0:6}" == "cloud:" ]] ; then
1332-
1333- # current os releases supported by the UCA.
1334- local cloud_archive_versions="folsom grizzly"
1335-
1336- local ca_rel=$(echo $src | cut -d: -f2)
1337- local u_rel=$(echo $ca_rel | cut -d- -f1)
1338- local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1)
1339-
1340- [[ "$u_rel" != "$DISTRIB_CODENAME" ]] &&
1341- error_out "Cannot install from Cloud Archive pocket $src " \
1342- "on this Ubuntu version ($DISTRIB_CODENAME)!"
1343-
1344- valid_release=""
1345- for rel in $cloud_archive_versions ; do
1346- if [[ "$os_rel" == "$rel" ]] ; then
1347- valid_release=1
1348- juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive."
1349- fi
1350- done
1351- if [[ -z "$valid_release" ]] ; then
1352- error_out "OpenStack release ($os_rel) not supported by "\
1353- "the Ubuntu Cloud Archive."
1354- fi
1355-
1356- # CA staging repos are standard PPAs.
1357- if echo $ca_rel | grep -q "staging" ; then
1358- add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging
1359- return 0
1360- fi
1361-
1362- # the others are LP-external deb repos.
1363- case "$ca_rel" in
1364- "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
1365- "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
1366- "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
1367- "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
1368- *) error_out "Invalid Cloud Archive repo specified: $src"
1369- esac
1370-
1371- apt-get -y install ubuntu-cloud-keyring
1372- entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main"
1373- echo "$entry" \
1374- >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list
1375- return 0
1376- fi
1377-
1378- error_out "Invalid installation source specified in config: $src"
1379-
1380-}
1381-
1382-get_os_codename_install_source() {
1383- # derive the openstack release provided by a supported installation source.
1384- local rel="$1"
1385- local codename="unknown"
1386- . /etc/lsb-release
1387-
1388- # map ubuntu releases to the openstack version shipped with it.
1389- if [[ "$rel" == "distro" ]] ; then
1390- case "$DISTRIB_CODENAME" in
1391- "oneiric") codename="diablo" ;;
1392- "precise") codename="essex" ;;
1393- "quantal") codename="folsom" ;;
1394- "raring") codename="grizzly" ;;
1395- esac
1396- fi
1397-
1398- # derive version from cloud archive strings.
1399- if [[ "${rel:0:6}" == "cloud:" ]] ; then
1400- rel=$(echo $rel | cut -d: -f2)
1401- local u_rel=$(echo $rel | cut -d- -f1)
1402- local ca_rel=$(echo $rel | cut -d- -f2)
1403- if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then
1404- case "$ca_rel" in
1405- "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging")
1406- codename="folsom" ;;
1407- "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging")
1408- codename="grizzly" ;;
1409- esac
1410- fi
1411- fi
1412-
1413- # have a guess based on the deb string provided
1414- if [[ "${rel:0:3}" == "deb" ]] || \
1415- [[ "${rel:0:3}" == "ppa" ]] ; then
1416- CODENAMES="diablo essex folsom grizzly havana"
1417- for cname in $CODENAMES; do
1418- if echo $rel | grep -q $cname; then
1419- codename=$cname
1420- fi
1421- done
1422- fi
1423- echo $codename
1424-}
1425-
1426-get_os_codename_package() {
1427- local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none"
1428- pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs
1429- case "${pkg_vers:0:6}" in
1430- "2011.2") echo "diablo" ;;
1431- "2012.1") echo "essex" ;;
1432- "2012.2") echo "folsom" ;;
1433- "2013.1") echo "grizzly" ;;
1434- "2013.2") echo "havana" ;;
1435- esac
1436-}
1437-
1438-get_os_version_codename() {
1439- case "$1" in
1440- "diablo") echo "2011.2" ;;
1441- "essex") echo "2012.1" ;;
1442- "folsom") echo "2012.2" ;;
1443- "grizzly") echo "2013.1" ;;
1444- "havana") echo "2013.2" ;;
1445- esac
1446-}
1447-
1448-get_ip() {
1449- dpkg -l | grep -q python-dnspython || {
1450- apt-get -y install python-dnspython 2>&1 > /dev/null
1451- }
1452- hostname=$1
1453- python -c "
1454-import dns.resolver
1455-import socket
1456-try:
1457- # Test to see if already an IPv4 address
1458- socket.inet_aton('$hostname')
1459- print '$hostname'
1460-except socket.error:
1461- try:
1462- answers = dns.resolver.query('$hostname', 'A')
1463- if answers:
1464- print answers[0].address
1465- except dns.resolver.NXDOMAIN:
1466- pass
1467-"
1468-}
1469-
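The removed `get_ip` passes an IPv4 literal through unchanged and otherwise resolves the hostname, installing and using python-dnspython at call time. A dependency-free sketch of the same behaviour using only the stdlib `socket` module (an approximation, since `gethostbyname` consults `/etc/hosts` as well as DNS):

```python
import socket

def get_ip(hostname):
    """Return hostname unchanged if it is already an IPv4 address,
    otherwise resolve it; None if resolution fails."""
    try:
        # already dotted-quad? (the removed helper used inet_aton the same way)
        socket.inet_aton(hostname)
        return hostname
    except OSError:
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            return None
```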
1470-# Common storage routines used by cinder, nova-volume and swift-storage.
1471-clean_storage() {
1472- # if configured to overwrite existing storage, we unmount the block-dev
1473- # if mounted and clear any previous pv signatures
1474- local block_dev="$1"
1475-  juju-log "Cleaning storage '$block_dev'"
1476- if grep -q "^$block_dev" /proc/mounts ; then
1477- mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }')
1478- juju-log "Unmounting $block_dev from $mp"
1479- umount "$mp" || error_out "ERROR: Could not unmount storage from $mp"
1480- fi
1481- if pvdisplay "$block_dev" >/dev/null 2>&1 ; then
1482- juju-log "Removing existing LVM PV signatures from $block_dev"
1483-
1484- # deactivate any volgroups that may be built on this dev
1485- vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }')
1486- if [[ -n "$vg" ]] ; then
1487- juju-log "Deactivating existing volume group: $vg"
1488- vgchange -an "$vg" ||
1489- error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?"
1490- fi
1491- echo "yes" | pvremove -ff "$block_dev" ||
1492- error_out "Could not pvremove $block_dev"
1493- else
1494- juju-log "Zapping disk of all GPT and MBR structures"
1495- sgdisk --zap-all $block_dev ||
1496- error_out "Unable to zap $block_dev"
1497- fi
1498-}
1499-
1500-function get_block_device() {
1501- # given a string, return full path to the block device for that
1502- # if input is not a block device, find a loopback device
1503- local input="$1"
1504-
1505- case "$input" in
1506- /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist."
1507- echo "$input"; return 0;;
1508- /*) :;;
1509- *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist."
1510- echo "/dev/$input"; return 0;;
1511- esac
1512-
1513- # this represents a file
1514- # support "/path/to/file|5G"
1515- local fpath size oifs="$IFS"
1516- if [ "${input#*|}" != "${input}" ]; then
1517- size=${input##*|}
1518- fpath=${input%|*}
1519- else
1520- fpath=${input}
1521- size=5G
1522- fi
1523-
1524- ## loop devices are not namespaced. This is bad for containers.
1525- ## it means that the output of 'losetup' may have the given $fpath
1526- ## in it, but that may not represent this containers $fpath, but
1527- ## another containers. To address that, we really need to
1528- ## allow some uniq container-id to be expanded within path.
1529- ## TODO: find a unique container-id that will be consistent for
1530- ## this container throughout its lifetime and expand it
1531- ## in the fpath.
1532- # fpath=${fpath//%{id}/$THAT_ID}
1533-
1534- local found=""
1535- # parse through 'losetup -a' output, looking for this file
1536- # output is expected to look like:
1537- # /dev/loop0: [0807]:961814 (/tmp/my.img)
1538- found=$(losetup -a |
1539- awk 'BEGIN { found=0; }
1540- $3 == f { sub(/:$/,"",$1); print $1; found=found+1; }
1541- END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \
1542- f="($fpath)")
1543-
1544- if [ $? -ne 0 ]; then
1545- echo "multiple devices found for $fpath: $found" 1>&2
1546- return 1;
1547- fi
1548-
1549- [ -n "$found" -a -b "$found" ] && { echo "$found"; return 1; }
1550-
1551- if [ -n "$found" ]; then
1552- echo "confused, $found is not a block device for $fpath";
1553- return 1;
1554- fi
1555-
1556- # no existing device was found, create one
1557- mkdir -p "${fpath%/*}"
1558- truncate --size "$size" "$fpath" ||
1559- { echo "failed to create $fpath of size $size"; return 1; }
1560-
1561- found=$(losetup --find --show "$fpath") ||
1562- { echo "failed to setup loop device for $fpath" 1>&2; return 1; }
1563-
1564- echo "$found"
1565- return 0
1566-}
1567-
1568-HAPROXY_CFG=/etc/haproxy/haproxy.cfg
1569-HAPROXY_DEFAULT=/etc/default/haproxy
1570-##########################################################################
1571-# Description: Configures HAProxy services for Openstack API's
1572-# Parameters:
1573-# Space delimited list of service:listen_port:api_port:mode combinations for which
1574-# haproxy service configuration should be generated for. The function
1575-# assumes the name of the peer relation is 'cluster' and that every
1576-# service unit in the peer relation is running the same services.
1577-#
1578-# Services that do not specify :mode in parameter will default to http.
1579-#
1580-# Example
1581-# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http
1582-##########################################################################
1583-configure_haproxy() {
1584- local address=`unit-get private-address`
1585- local name=${JUJU_UNIT_NAME////-}
1586- cat > $HAPROXY_CFG << EOF
1587-global
1588- log 127.0.0.1 local0
1589- log 127.0.0.1 local1 notice
1590- maxconn 20000
1591- user haproxy
1592- group haproxy
1593- spread-checks 0
1594-
1595-defaults
1596- log global
1597- mode http
1598- option httplog
1599- option dontlognull
1600- retries 3
1601- timeout queue 1000
1602- timeout connect 1000
1603- timeout client 30000
1604- timeout server 30000
1605-
1606-listen stats :8888
1607- mode http
1608- stats enable
1609- stats hide-version
1610- stats realm Haproxy\ Statistics
1611- stats uri /
1612- stats auth admin:password
1613-
1614-EOF
1615- for service in $@; do
1616- local service_name=$(echo $service | cut -d : -f 1)
1617- local haproxy_listen_port=$(echo $service | cut -d : -f 2)
1618- local api_listen_port=$(echo $service | cut -d : -f 3)
1619- local mode=$(echo $service | cut -d : -f 4)
1620- [[ -z "$mode" ]] && mode="http"
1621- juju-log "Adding haproxy configuration entry for $service "\
1622- "($haproxy_listen_port -> $api_listen_port)"
1623- cat >> $HAPROXY_CFG << EOF
1624-listen $service_name 0.0.0.0:$haproxy_listen_port
1625- balance roundrobin
1626- mode $mode
1627- option ${mode}log
1628- server $name $address:$api_listen_port check
1629-EOF
1630- local r_id=""
1631- local unit=""
1632- for r_id in `relation-ids cluster`; do
1633- for unit in `relation-list -r $r_id`; do
1634- local unit_name=${unit////-}
1635- local unit_address=`relation-get -r $r_id private-address $unit`
1636- if [ -n "$unit_address" ]; then
1637- echo " server $unit_name $unit_address:$api_listen_port check" \
1638- >> $HAPROXY_CFG
1639- fi
1640- done
1641- done
1642- done
1643- echo "ENABLED=1" > $HAPROXY_DEFAULT
1644- service haproxy restart
1645-}
1646-
1647-##########################################################################
1648-# Description: Query HA interface to determine if cluster is configured
1649-# Returns: 0 if configured, 1 if not configured
1650-##########################################################################
1651-is_clustered() {
1652- local r_id=""
1653- local unit=""
1654- for r_id in $(relation-ids ha); do
1655- if [ -n "$r_id" ]; then
1656- for unit in $(relation-list -r $r_id); do
1657- clustered=$(relation-get -r $r_id clustered $unit)
1658- if [ -n "$clustered" ]; then
1659- juju-log "Unit is haclustered"
1660- return 0
1661- fi
1662- done
1663- fi
1664- done
1665- juju-log "Unit is not haclustered"
1666- return 1
1667-}
1668-
1669-##########################################################################
1670-# Description: Return a list of all peers in cluster relations
1671-##########################################################################
1672-peer_units() {
1673- local peers=""
1674- local r_id=""
1675- for r_id in $(relation-ids cluster); do
1676- peers="$peers $(relation-list -r $r_id)"
1677- done
1678- echo $peers
1679-}
1680-
1681-##########################################################################
1682-# Description: Determines whether the current unit is the oldest of all
1683-# its peers - supports partial leader election
1684-# Returns: 0 if oldest, 1 if not
1685-##########################################################################
1686-oldest_peer() {
1687- peers=$1
1688- local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2)
1689- for peer in $peers; do
1690- echo "Comparing $JUJU_UNIT_NAME with peers: $peers"
1691- local r_unit_no=$(echo $peer | cut -d / -f 2)
1692- if (($r_unit_no<$l_unit_no)); then
1693- juju-log "Not oldest peer; deferring"
1694- return 1
1695- fi
1696- done
1697- juju-log "Oldest peer; might take charge?"
1698- return 0
1699-}
1700-
1701-##########################################################################
1702-# Description: Determines whether the current service unit is the
1703-# leader within a) a cluster of its peers or b) across a
1704-# set of unclustered peers.
1705-# Parameters: CRM resource to check ownership of if clustered
1706-# Returns: 0 if leader, 1 if not
1707-##########################################################################
1708-eligible_leader() {
1709- if is_clustered; then
1710- if ! is_leader $1; then
1711- juju-log 'Deferring action to CRM leader'
1712- return 1
1713- fi
1714- else
1715- peers=$(peer_units)
1716- if [ -n "$peers" ] && ! oldest_peer "$peers"; then
1717- juju-log 'Deferring action to oldest service unit.'
1718- return 1
1719- fi
1720- fi
1721- return 0
1722-}
1723-
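`oldest_peer`/`eligible_leader` above implement partial leader election by unit number: the lowest-numbered unit among the peers wins. The comparison reduces to a few lines of Python (hypothetical function, kept as a sketch of the shell logic rather than the cluster.py API):

```python
def oldest_peer(local_unit, peers):
    """True if local_unit has the lowest unit number among its peers --
    the partial leader election the removed shell helper performed."""
    local_no = int(local_unit.split("/")[1])
    # any peer with a smaller unit number means we defer to it
    return all(int(peer.split("/")[1]) >= local_no for peer in peers)

leader = oldest_peer("keystone/0", ["keystone/1", "keystone/2"])
not_leader = oldest_peer("keystone/2", ["keystone/0", "keystone/1"])
```

With no peers at all, `all()` over an empty list is True, matching the shell version's behaviour of taking charge when `peer_units` is empty.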
1724-##########################################################################
1725-# Description: Query Cluster peer interface to see if peered
1726-# Returns: 0 if peered, 1 if not peered
1727-##########################################################################
1728-is_peered() {
1729- local r_id=$(relation-ids cluster)
1730- if [ -n "$r_id" ]; then
1731- if [ -n "$(relation-list -r $r_id)" ]; then
1732- juju-log "Unit peered"
1733- return 0
1734- fi
1735- fi
1736- juju-log "Unit not peered"
1737- return 1
1738-}
1739-
1740-##########################################################################
1741-# Description: Determines whether host is owner of clustered services
1742-# Parameters: Name of CRM resource to check ownership of
1743-# Returns: 0 if leader, 1 if not leader
1744-##########################################################################
1745-is_leader() {
1746- hostname=`hostname`
1747- if [ -x /usr/sbin/crm ]; then
1748- if crm resource show $1 | grep -q $hostname; then
1749- juju-log "$hostname is cluster leader."
1750- return 0
1751- fi
1752- fi
1753- juju-log "$hostname is not cluster leader."
1754- return 1
1755-}
1756-
1757-##########################################################################
1758-# Description: Determines whether enough data has been provided in
1759-# configuration or relation data to configure HTTPS.
1760-# Parameters: None
1761-# Returns: 0 if HTTPS can be configured, 1 if not.
1762-##########################################################################
1763-https() {
1764- local r_id=""
1765- if [[ -n "$(config-get ssl_cert)" ]] &&
1766- [[ -n "$(config-get ssl_key)" ]] ; then
1767- return 0
1768- fi
1769- for r_id in $(relation-ids identity-service) ; do
1770- for unit in $(relation-list -r $r_id) ; do
1771- if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] &&
1772- [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] &&
1773- [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] &&
1774- [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then
1775- return 0
1776- fi
1777- done
1778- done
1779- return 1
1780-}
1781-
1782-##########################################################################
1783-# Description: For a given number of port mappings, configures apache2
1784-# HTTPS local reverse proxying using certificates and keys provided in
1785-# either configuration data (preferred) or relation data. Assumes ports
1786-# are not in use (calling charm should ensure that).
1787-# Parameters: Variable number of proxy port mappings as
1788-# $internal:$external.
1789-# Returns: 0 if reverse proxy(s) have been configured, 1 if not.
1790-##########################################################################
1791-enable_https() {
1792- local port_maps="$@"
1793- local http_restart=""
1794- juju-log "Enabling HTTPS for port mappings: $port_maps."
1795-
1796- # allow overriding of keystone provided certs with those set manually
1797- # in config.
1798- local cert=$(config-get ssl_cert)
1799- local key=$(config-get ssl_key)
1800- local ca_cert=""
1801- if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then
1802- juju-log "Inspecting identity-service relations for SSL certificate."
1803- local r_id=""
1804- cert=""
1805- key=""
1806- ca_cert=""
1807- for r_id in $(relation-ids identity-service) ; do
1808- for unit in $(relation-list -r $r_id) ; do
1809- [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)"
1810- [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)"
1811- [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)"
1812- done
1813- done
1814- [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di)
1815- [[ -n "$key" ]] && key=$(echo $key | base64 -di)
1816- [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di)
1817- else
1818- juju-log "Using SSL certificate provided in service config."
1819- fi
1820-
1821- [[ -z "$cert" ]] || [[ -z "$key" ]] &&
1822- juju-log "Expected but could not find SSL certificate data, not "\
1823- "configuring HTTPS!" && return 1
1824-
1825- apt-get -y install apache2
1826- a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" &&
1827- http_restart=1
1828-
1829- mkdir -p /etc/apache2/ssl/$CHARM
1830- echo "$cert" >/etc/apache2/ssl/$CHARM/cert
1831- echo "$key" >/etc/apache2/ssl/$CHARM/key
1832- if [[ -n "$ca_cert" ]] ; then
1833- juju-log "Installing Keystone supplied CA cert."
1834- echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
1835- update-ca-certificates --fresh
1836-
1837- # XXX TODO: Find a better way of exporting this?
1838- if [[ "$CHARM" == "nova-cloud-controller" ]] ; then
1839- [[ -e /var/www/keystone_juju_ca_cert.crt ]] &&
1840- rm -rf /var/www/keystone_juju_ca_cert.crt
1841- ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \
1842- /var/www/keystone_juju_ca_cert.crt
1843- fi
1844-
1845- fi
1846- for port_map in $port_maps ; do
1847- local ext_port=$(echo $port_map | cut -d: -f1)
1848- local int_port=$(echo $port_map | cut -d: -f2)
1849- juju-log "Creating apache2 reverse proxy vhost for $port_map."
1850- cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END
1851-Listen $ext_port
1852-NameVirtualHost *:$ext_port
1853-<VirtualHost *:$ext_port>
1854- ServerName $(unit-get private-address)
1855- SSLEngine on
1856- SSLCertificateFile /etc/apache2/ssl/$CHARM/cert
1857- SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key
1858- ProxyPass / http://localhost:$int_port/
1859- ProxyPassReverse / http://localhost:$int_port/
1860- ProxyPreserveHost on
1861-</VirtualHost>
1862-<Proxy *>
1863- Order deny,allow
1864- Allow from all
1865-</Proxy>
1866-<Location />
1867- Order allow,deny
1868- Allow from all
1869-</Location>
1870-END
1871- a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
1872- http_restart=1
1873- done
1874- if [[ -n "$http_restart" ]] ; then
1875- service apache2 restart
1876- fi
1877-}
1878-
1879-##########################################################################
1880-# Description: Ensure HTTPS reverse proxying is disabled for given port
1881-# mappings.
1882-# Parameters: Variable number of proxy port mappings as
1883-# $internal:$external.
1884-# Returns: 0 if reverse proxy is not active for all portmaps, 1 on error.
1885-##########################################################################
1886-disable_https() {
1887- local port_maps="$@"
1888- local http_restart=""
1889- juju-log "Ensuring HTTPS disabled for $port_maps."
1890- ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0
1891- for port_map in $port_maps ; do
1892- local ext_port=$(echo $port_map | cut -d: -f1)
1893- local int_port=$(echo $port_map | cut -d: -f2)
1894- if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then
1895- juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map."
1896- a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
1897- http_restart=1
1898- fi
1899- done
1900- if [[ -n "$http_restart" ]] ; then
1901- service apache2 restart
1902- fi
1903-}
1904-
1905-
1906-##########################################################################
1907-# Description: Ensures HTTPS is either enabled or disabled for given port
1908-# mapping.
1909-# Parameters: Variable number of proxy port mappings as
1910-# $internal:$external.
1911-# Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not.
1912-##########################################################################
1913-setup_https() {
1914- # configure https via apache reverse proxying either
1915- # using certs provided by config or keystone.
1916- [[ -z "$CHARM" ]] &&
1917- error_out "setup_https(): CHARM not set."
1918- if ! https ; then
1919- disable_https $@
1920- else
1921- enable_https $@
1922- fi
1923-}
1924-
1925-##########################################################################
1926-# Description: Determine correct API server listening port based on
1927-# existence of HTTPS reverse proxy and/or haproxy.
1928-# Parameters: The standard public port for given service.
1929-# Returns: The correct listening port for API service.
1930-##########################################################################
1931-determine_api_port() {
1932- local public_port="$1"
1933- local i=0
1934- ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1]
1935- https >/dev/null 2>&1 && i=$[$i + 1]
1936- echo $[$public_port - $[$i * 10]]
1937-}
1938-
1939-##########################################################################
1940-# Description: Determine correct proxy listening port based on public IP +
1941-# existence of HTTPS reverse proxy.
1942-# Parameters: The standard public port for given service.
1943-# Returns: The correct listening port for haproxy service public address.
1944-##########################################################################
1945-determine_haproxy_port() {
1946- local public_port="$1"
1947- local i=0
1948- https >/dev/null 2>&1 && i=$[$i + 1]
1949- echo $[$public_port - $[$i * 10]]
1950-}
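
For reference, the port arithmetic being dropped here is the same convention the Python cluster helpers keep: each fronting layer (haproxy when the unit is clustered, an apache HTTPS reverse proxy) steps the real listener back by 10 from the standard public port. A minimal Python sketch of that rule (function names mirror the shell helpers; the boolean arguments are illustrative, not part of this diff):

```python
def determine_api_port(public_port, clustered=False, https=False):
    """Step the API listener back by 10 for each proxy layer
    (haproxy when clustered, apache when HTTPS) in front of it."""
    offset = 0
    if clustered:
        offset += 1
    if https:
        offset += 1
    return public_port - (offset * 10)


def determine_haproxy_port(public_port, https=False):
    """haproxy itself sits one step back from the public port only
    when an HTTPS reverse proxy is terminating SSL in front of it."""
    return public_port - (10 if https else 0)
```

So a clustered, SSL-fronted glance-api with public port 9292 would actually bind on 9272, with haproxy on 9282 and apache on 9292.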
1951-
1952-##########################################################################
1953-# Description: Print the value for a given config option in an OpenStack
1954-# .ini style configuration file.
1955-# Parameters: File path, option to retrieve, optional
1956-# section name (default=DEFAULT)
1957-# Returns: Prints value if set, prints nothing otherwise.
1958-##########################################################################
1959-local_config_get() {
1960- # return config values set in openstack .ini config files.
1961- # default placeholders starting (eg, %AUTH_HOST%) treated as
1962- # unset values.
1963- local file="$1"
1964- local option="$2"
1965- local section="$3"
1966- [[ -z "$section" ]] && section="DEFAULT"
1967- python -c "
1968-import ConfigParser
1969-config = ConfigParser.RawConfigParser()
1970-config.read('$file')
1971-try:
1972- value = config.get('$section', '$option')
1973-except:
1974- print ''
1975- exit(0)
1976-if value.startswith('%'): exit(0)
1977-print value
1978-"
1979-}
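
The shell helper above shells back out to an inline Python snippet; in a pure-Python helper the same lookup is just a direct configparser call. A self-contained sketch of the equivalent behaviour, using the Python 3 module name (this era's code used the Python 2 `ConfigParser` spelling):

```python
from configparser import RawConfigParser, NoSectionError, NoOptionError


def local_config_get(path, option, section='DEFAULT'):
    """Return an option from an OpenStack .ini file, treating unset
    %PLACEHOLDER% defaults (e.g. %AUTH_HOST%) as missing, as the
    shell helper does."""
    config = RawConfigParser()
    config.read(path)
    try:
        value = config.get(section, option)
    except (NoSectionError, NoOptionError):
        return None
    if value.startswith('%'):
        # packaged default placeholder, never configured
        return None
    return value
```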
1980-
1981-##########################################################################
1982-# Description: Creates an rc file exporting environment variables to a
1983-# script_path local to the charm's installed directory.
1984-# Any charm scripts run outside the juju hook environment can source this
1985-# scriptrc to obtain updated config information necessary to perform health
1986-# checks or service changes
1987-#
1988-# Parameters:
1989-# An array of '=' delimited ENV_VAR:value combinations to export.
1990-# If optional script_path key is not provided in the array, script_path
1991-# defaults to scripts/scriptrc
1992-##########################################################################
1993-function save_script_rc {
1994- if [ ! -n "$JUJU_UNIT_NAME" ]; then
1995- echo "Error: Missing JUJU_UNIT_NAME environment variable"
1996- exit 1
1997- fi
1998- # our default unit_path
1999- unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/scripts/scriptrc"
2000- echo $unit_path
2001- tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc"
2002-
2003- echo "#!/bin/bash" > $tmp_rc
2004- for env_var in "${@}"
2005- do
2006- if `echo $env_var | grep -q script_path`; then
2007- # well then we need to reset the new unit-local script path
2008- unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/${env_var/script_path=/}"
2009- else
2010- echo "export $env_var" >> $tmp_rc
2011- fi
2012- done
2013- chmod 755 $tmp_rc
2014- mv $tmp_rc $unit_path
2015-}
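
A straight Python port of save_script_rc is the obvious follow-up once the shell helpers go; a sketch of the same behaviour is below. The `base` argument is added here purely so the example can run outside a deployed unit; the shell version hard-codes /var/lib/juju/units/&lt;unit&gt;/charm.

```python
import os


def save_script_rc(script_path='scripts/scriptrc',
                   base='/var/lib/juju/units', **env_vars):
    """Write an rc file exporting config as shell variables, so charm
    scripts run outside the hook environment can source it."""
    unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
    rc_path = os.path.join(base, unit_name, 'charm', script_path)
    os.makedirs(os.path.dirname(rc_path), exist_ok=True)
    with open(rc_path, 'wt') as rc:
        rc.write('#!/bin/bash\n')
        for key, value in env_vars.items():
            rc.write('export %s=%s\n' % (key, value))
    return rc_path
```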
2016
2017=== renamed file 'charmhelpers/contrib/openstack/openstack_utils.py' => 'charmhelpers/contrib/openstack/utils.py'
2018--- charmhelpers/contrib/openstack/openstack_utils.py 2013-07-09 18:48:53 +0000
2019+++ charmhelpers/contrib/openstack/utils.py 2013-07-11 19:29:39 +0000
2020@@ -9,6 +9,7 @@
2021
2022 from charmhelpers.core.hookenv import (
2023 config,
2024+ log as juju_log,
2025 )
2026
2027 from charmhelpers.core.host import (
2028@@ -45,12 +46,8 @@
2029 }
2030
2031
2032-def juju_log(msg):
2033- subprocess.check_call(['juju-log', msg])
2034-
2035-
2036 def error_out(msg):
2037- juju_log("FATAL ERROR: %s" % msg)
2038+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
2039 sys.exit(1)
2040
2041
2042
2043=== added file 'tests/contrib/hahelpers/test_apache_utils.py'
2044--- tests/contrib/hahelpers/test_apache_utils.py 1970-01-01 00:00:00 +0000
2045+++ tests/contrib/hahelpers/test_apache_utils.py 2013-07-11 19:29:39 +0000
2046@@ -0,0 +1,44 @@
2047+from mock import patch
2048+
2049+from testtools import TestCase
2050+
2051+import charmhelpers.contrib.hahelpers.apache as apache_utils
2052+
2053+
2054+class ApacheUtilsTests(TestCase):
2055+ def setUp(self):
2056+ super(ApacheUtilsTests, self).setUp()
2057+ [self._patch(m) for m in [
2058+ 'log',
2059+ 'config_get',
2060+ 'relation_get',
2061+ 'relation_ids',
2062+ 'relation_list',
2063+ ]]
2064+
2065+ def _patch(self, method):
2066+ _m = patch.object(apache_utils, method)
2067+ mock = _m.start()
2068+ self.addCleanup(_m.stop)
2069+ setattr(self, method, mock)
2070+
2071+ def test_get_cert_from_config(self):
2072+ '''Ensure cert and key from charm config override relation'''
2073+ self.config_get.side_effect = [
2074+ 'some_ca_cert', # config_get('ssl_cert')
2076+ 'some_ca_key', # config_get('ssl_key')
2076+ ]
2077+ result = apache_utils.get_cert()
2078+ self.assertEquals(('some_ca_cert', 'some_ca_key'), result)
2079+
2080+ def test_get_cert_from_relation(self):
2081+ self.config_get.return_value = None
2082+ self.relation_ids.return_value = 'identity-service:0'
2083+ self.relation_list.return_value = 'keystone/0'
2084+ self.relation_get.side_effect = [
2085+ 'keystone_provided_cert',
2086+ 'keystone_provided_key',
2087+ ]
2088+ result = apache_utils.get_cert()
2089+ self.assertEquals(('keystone_provided_cert', 'keystone_provided_key'),
2090+ result)
2091
2092=== added file 'tests/contrib/hahelpers/test_ceph_utils.py'
2093--- tests/contrib/hahelpers/test_ceph_utils.py 1970-01-01 00:00:00 +0000
2094+++ tests/contrib/hahelpers/test_ceph_utils.py 2013-07-11 19:29:39 +0000
2095@@ -0,0 +1,59 @@
2096+from mock import patch
2097+
2098+from testtools import TestCase
2099+
2100+import charmhelpers.contrib.hahelpers.ceph as ceph_utils
2101+
2102+
2103+LS_POOLS = """
2104+images
2105+volumes
2106+rbd
2107+"""
2108+
2109+
2110+class CephUtilsTests(TestCase):
2111+ def setUp(self):
2112+ super(CephUtilsTests, self).setUp()
2113+ [self._patch(m) for m in [
2114+ 'check_call',
2115+ 'log',
2116+ ]]
2117+
2118+ def _patch(self, method):
2119+ _m = patch.object(ceph_utils, method)
2120+ mock = _m.start()
2121+ self.addCleanup(_m.stop)
2122+ setattr(self, method, mock)
2123+
2124+ def test_create_keyring(self):
2125+ '''It creates a new ceph keyring'''
2126+ ceph_utils.create_keyring('cinder', 'cephkey')
2127+ _cmd = ['ceph-authtool', '/etc/ceph/ceph.client.cinder.keyring',
2128+ '--create-keyring', '--name=client.cinder',
2129+ '--add-key=cephkey']
2130+ self.check_call.assert_called_with(_cmd)
2131+
2132+ def test_create_pool(self):
2133+ '''It creates rados pool correctly'''
2134+ ceph_utils.create_pool(service='cinder', name='foo')
2135+ self.check_call.assert_called_with(
2136+ ['rados', '--id', 'cinder', 'mkpool', 'foo']
2137+ )
2138+
2139+ def test_keyring_path(self):
2140+ '''It correctly derives keyring path from service name'''
2141+ result = ceph_utils.keyring_path('cinder')
2142+ self.assertEquals('/etc/ceph/ceph.client.cinder.keyring', result)
2143+
2144+ @patch('commands.getstatusoutput')
2145+ def test_pool_exists(self, get_output):
2146+ '''It detects an rbd pool exists'''
2147+ get_output.return_value = (0, LS_POOLS)
2148+ self.assertTrue(ceph_utils.pool_exists('cinder', 'volumes'))
2149+
2150+ @patch('commands.getstatusoutput')
2151+ def test_pool_does_not_exist(self, get_output):
2152+ '''It detects an rbd pool does not exist'''
2153+ get_output.return_value = (0, LS_POOLS)
2154+ self.assertFalse(ceph_utils.pool_exists('cinder', 'foo'))
2155
2156=== modified file 'tests/contrib/hahelpers/test_cluster_utils.py'
2157--- tests/contrib/hahelpers/test_cluster_utils.py 2013-07-09 18:05:11 +0000
2158+++ tests/contrib/hahelpers/test_cluster_utils.py 2013-07-11 19:29:39 +0000
2159@@ -4,14 +4,14 @@
2160 from subprocess import CalledProcessError
2161 from testtools import TestCase
2162
2163-import charmhelpers.contrib.hahelpers.cluster_utils as cluster_utils
2164+import charmhelpers.contrib.hahelpers.cluster as cluster_utils
2165
2166
2167 class ClusterUtilsTests(TestCase):
2168 def setUp(self):
2169 super(ClusterUtilsTests, self).setUp()
2170 [self._patch(m) for m in [
2171- 'juju_log',
2172+ 'log',
2173 'relation_ids',
2174 'relation_list',
2175 'relation_get',
2176
2177=== modified file 'tests/contrib/openstack/test_openstack_utils.py'
2178--- tests/contrib/openstack/test_openstack_utils.py 2013-07-09 18:48:53 +0000
2179+++ tests/contrib/openstack/test_openstack_utils.py 2013-07-11 19:29:39 +0000
2180@@ -1,12 +1,13 @@
2181-from copy import copy
2182 import os
2183+import subprocess
2184 import unittest
2185+
2186+from copy import copy
2187 from testtools import TestCase
2188-import charmhelpers.contrib.openstack.openstack_utils as openstack
2189-import subprocess
2190-
2191 from mock import MagicMock, patch, call
2192
2193+import charmhelpers.contrib.openstack.utils as openstack
2194+
2195
2196 # mocked return of openstack.lsb_release()
2197 FAKE_RELEASE = {
2198@@ -76,7 +77,7 @@
2199 cache.__getitem__.side_effect = cache_get
2200 return cache
2201
2202- @patch('charmhelpers.contrib.openstack.openstack_utils.lsb_release')
2203+ @patch('charmhelpers.contrib.openstack.utils.lsb_release')
2204 def test_os_codename_from_install_source(self, mocked_lsb):
2205 '''Test mapping install source to OpenStack release name'''
2206 mocked_lsb.return_value = FAKE_RELEASE
2207@@ -108,14 +109,14 @@
2208 openstack.get_os_version_install_source('cloud:precise-grizzly')
2209 version.assert_called_with('grizzly')
2210
2211- @patch('charmhelpers.contrib.openstack.openstack_utils.lsb_release')
2212+ @patch('charmhelpers.contrib.openstack.utils.lsb_release')
2213 def test_os_codename_from_bad_install_source(self, mocked_lsb):
2214 '''Test mapping install source to OpenStack release name'''
2215 _fake_release = copy(FAKE_RELEASE)
2216 _fake_release['DISTRIB_CODENAME'] = 'natty'
2217
2218 mocked_lsb.return_value = _fake_release
2219- _e = 'charmhelpers.contrib.openstack.openstack_utils.error_out'
2220+ _e = 'charmhelpers.contrib.openstack.utils.error_out'
2221 with patch(_e) as mocked_err:
2222 openstack.get_os_codename_install_source('distro')
2223 _er = ('Could not derive openstack release for this Ubuntu '
2224@@ -127,7 +128,7 @@
2225 self.assertEquals(openstack.get_os_codename_version('2013.1'),
2226 'grizzly')
2227
2228- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2229+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2230 def test_os_codename_from_bad_version(self, mocked_error):
2231 '''Test mapping a bad OpenStack numerical versions to code name'''
2232 openstack.get_os_codename_version('2014.5.5')
2233@@ -140,7 +141,7 @@
2234 self.assertEquals(openstack.get_os_version_codename('folsom'),
2235 '2012.2')
2236
2237- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2238+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2239 def test_os_version_from_bad_codename(self, mocked_error):
2240 '''Test mapping a bad OpenStack codename to numerical version'''
2241 openstack.get_os_version_codename('foo')
2242@@ -157,7 +158,7 @@
2243 self.assertEquals(openstack.get_os_codename_package(pkg),
2244 vers['os_release'])
2245
2246- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2247+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2248 def test_os_codename_from_bad_package_version(self, mocked_error):
2249 '''Test deriving OpenStack codename for a poorly versioned package'''
2250 with patch('apt_pkg.Cache') as cache:
2251@@ -166,7 +167,7 @@
2252 _e = ('Could not determine OpenStack codename for version 2016.1')
2253 mocked_error.assert_called_with(_e)
2254
2255- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2256+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2257 def test_os_codename_from_bad_package(self, mocked_error):
2258 '''Test deriving OpenStack codename from an uninstalled package'''
2259 with patch('apt_pkg.Cache') as cache:
2260@@ -189,7 +190,7 @@
2261 openstack.get_os_codename_package('foo', fatal=False)
2262 )
2263
2264- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2265+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2266 def test_os_version_from_package(self, mocked_error):
2267 '''Test deriving OpenStack version from an installed package'''
2268 with patch('apt_pkg.Cache') as cache:
2269@@ -200,7 +201,7 @@
2270 self.assertEquals(openstack.get_os_version_package(pkg),
2271 vers['os_version'])
2272
2273- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2274+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2275 def test_os_version_from_bad_package(self, mocked_error):
2276 '''Test deriving OpenStack version from an uninstalled package'''
2277 with patch('apt_pkg.Cache') as cache:
2278@@ -223,20 +224,14 @@
2279 openstack.get_os_version_package('foo', fatal=False)
2280 )
2281
2282- def test_juju_log(self):
2283- '''Test shelling out to juju-log'''
2284- with patch('subprocess.check_call') as mocked_subprocess:
2285- openstack.juju_log('foo')
2286- mocked_subprocess.assert_called_with(['juju-log', 'foo'])
2287-
2288+ @patch.object(openstack, 'juju_log')
2289 @patch('sys.exit')
2290- def test_error_out(self, mocked_exit):
2291+ def test_error_out(self, mocked_exit, juju_log):
2292 '''Test erroring out'''
2293- with patch('subprocess.check_call') as mocked_subprocess:
2294- openstack.error_out('Everything broke.')
2295- _log = ['juju-log', 'FATAL ERROR: Everything broke.']
2296- mocked_subprocess.assert_called_with(_log)
2297- mocked_exit.assert_called_with(1)
2298+ openstack.error_out('Everything broke.')
2299+ _log = 'FATAL ERROR: Everything broke.'
2300+ juju_log.assert_called_with(_log, level='ERROR')
2301+ mocked_exit.assert_called_with(1)
2302
2303 def test_configure_install_source_distro(self):
2304 '''Test configuring installation from distro'''
2305@@ -251,8 +246,8 @@
2306 mock.assert_called_with(ex_cmd)
2307
2308 @patch('__builtin__.open')
2309- @patch('charmhelpers.contrib.openstack.openstack_utils.juju_log')
2310- @patch('charmhelpers.contrib.openstack.openstack_utils.import_key')
2311+ @patch('charmhelpers.contrib.openstack.utils.juju_log')
2312+ @patch('charmhelpers.contrib.openstack.utils.import_key')
2313 def test_configure_install_source_deb_url(self, _import, _log, _open):
2314 '''Test configuring installation source from deb repo url'''
2315 _file = MagicMock(spec=file)
2316@@ -267,12 +262,12 @@
2317 openstack.configure_installation_source(src)
2318 _file.__enter__().write.assert_called_with(src)
2319
2320- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2321+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2322 def test_configure_bad_install_source(self, _error):
2323 openstack.configure_installation_source('foo')
2324 _error.assert_called_with('Invalid openstack-release specified: foo')
2325
2326- @patch('charmhelpers.contrib.openstack.openstack_utils.lsb_release')
2327+ @patch('charmhelpers.contrib.openstack.utils.lsb_release')
2328 def test_configure_install_source_uca_staging(self, _lsb):
2329 '''Test configuring installation source from UCA staging sources'''
2330 _lsb.return_value = FAKE_RELEASE
2331@@ -285,8 +280,8 @@
2332 _subp.assert_called_with(cmd)
2333
2334 @patch('__builtin__.open')
2335- @patch('charmhelpers.contrib.openstack.openstack_utils.import_key')
2336- @patch('charmhelpers.contrib.openstack.openstack_utils.lsb_release')
2337+ @patch('charmhelpers.contrib.openstack.utils.import_key')
2338+ @patch('charmhelpers.contrib.openstack.utils.lsb_release')
2339 def test_configure_install_source_uca_repos(self, _lsb, _import, _open):
2340 '''Test configuring installation source from UCA sources'''
2341 _lsb.return_value = FAKE_RELEASE
2342@@ -301,7 +296,7 @@
2343 )
2344 _file.__enter__().write.assert_called_with(url)
2345
2346- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2347+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2348 def test_configure_install_source_bad_uca(self, mocked_error):
2349 '''Test configuring installation source from bad UCA source'''
2350 try:
2351@@ -321,7 +316,7 @@
2352 '--recv-keys', 'foo']
2353 _subp.assert_called_with(cmd)
2354
2355- @patch('charmhelpers.contrib.openstack.openstack_utils.error_out')
2356+ @patch('charmhelpers.contrib.openstack.utils.error_out')
2357 def test_import_bad_apt_key(self, mocked_error):
2358 '''Ensure error when importing apt key fails'''
2359 with patch('subprocess.check_call') as _subp:
