Merge lp:~andreserl/charms/precise/glance/port into lp:~openstack-charmers/charms/precise/glance/python-redux

Proposed by Andres Rodriguez
Status: Merged
Approved by: Adam Gandelman
Approved revision: 200
Merged at revision: 200
Proposed branch: lp:~andreserl/charms/precise/glance/port
Merge into: lp:~openstack-charmers/charms/precise/glance/python-redux
Diff against target: 3628 lines (+1690/-1475)
23 files modified
.coveragerc (+6/-0)
Makefile (+4/-0)
charm-helpers.yaml (+1/-1)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/apache_utils.py (+0/-196)
hooks/charmhelpers/contrib/hahelpers/ceph.py (+291/-0)
hooks/charmhelpers/contrib/hahelpers/ceph_utils.py (+0/-256)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+181/-0)
hooks/charmhelpers/contrib/hahelpers/cluster_utils.py (+0/-176)
hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py (+0/-55)
hooks/charmhelpers/contrib/hahelpers/utils.py (+0/-333)
hooks/charmhelpers/contrib/openstack/context.py (+29/-5)
hooks/charmhelpers/contrib/openstack/openstack_utils.py (+0/-270)
hooks/charmhelpers/contrib/openstack/templating.py (+133/-103)
hooks/charmhelpers/contrib/openstack/utils.py (+273/-0)
hooks/charmhelpers/core/hookenv.py (+3/-2)
hooks/charmhelpers/core/host.py (+39/-29)
hooks/glance_contexts.py (+1/-1)
hooks/glance_relations.py (+29/-42)
hooks/glance_utils.py (+4/-6)
tests/__init__.py (+3/-0)
tests/test_glance_relations.py (+517/-0)
tests/test_utils.py (+118/-0)
To merge this branch: bzr merge lp:~andreserl/charms/precise/glance/port
Reviewer Review Type Date Requested Status
Andres Rodriguez (community) Abstain
Adam Gandelman (community) Needs Fixing
Review via email: mp+180224@code.launchpad.net
Revision history for this message
Adam Gandelman (gandelman-a) wrote :

Generally looks good. A couple of things:

- Can you sync the latest charm-helpers? A lot has been added since, and we've already rearranged the layout of the helpers in contrib.cluster and contrib.openstack. Also, with the version of the helpers currently synced, I cannot run the tests without python-dnspython installed. Syncing the latest should resolve that, but it will require some modifications to your charm's imports to match the restructuring I mentioned.
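
For reference, the restructure is mostly a matter of import paths in the hooks, along these lines (a sketch; the exact symbols the glance hooks pull in may differ):

    # before the restructure
    from charmhelpers.contrib.hahelpers.cluster_utils import eligible_leader
    from charmhelpers.contrib.openstack.openstack_utils import configure_installation_source

    # after syncing the restructured charm-helpers
    from charmhelpers.contrib.hahelpers.cluster import eligible_leader
    from charmhelpers.contrib.openstack.utils import configure_installation_source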

- Needs some lint cleanup
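
A typical check, assuming flake8 is what the other charm branches use for linting:

    flake8 --exclude=hooks/charmhelpers hooks tests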

- Can you add a .coveragerc and adjust the Makefile to also report on code coverage? This will let you know what code you haven't hit with your tests. See the current nova pyredux branches for examples.
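
Something along these lines works (a sketch; the include pattern here is what makes sense for glance):

    # .coveragerc
    [report]
    include =
        hooks/glance_*

    # Makefile
    test:
    	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage tests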

It would be good to get glance_utils.py covered by tests too, but this is a good start and that shouldn't block merging.
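
When you get to it, the same mock-based pattern as test_glance_relations.py should work, e.g. (a sketch; migrate_database and the command it wraps are illustrative, not necessarily the actual glance_utils API):

    import unittest
    from mock import patch

    import glance_utils as utils

    class TestGlanceUtils(unittest.TestCase):

        @patch('glance_utils.check_call')
        def test_migrate_database(self, check_call):
            # hypothetical helper: assumes glance_utils wraps 'glance-manage db_sync'
            utils.migrate_database()
            check_call.assert_called_with(['glance-manage', 'db_sync'])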

review: Needs Fixing
197. By Andres Rodriguez

fix lint

198. By Andres Rodriguez

use latest charmhelpers and make modifications accordingly

199. By Andres Rodriguez

Add coveragerc

Revision history for this message
Andres Rodriguez (andreserl) wrote :

Adam,

Addressed:

1. Synced charm-helpers and adapted the imports to the restructured layout.
2. Lint cleanup done.
3. Added .coveragerc and the coverage reporting Makefile target.
4. Covering glance_utils.py with tests is on my TODO.

review: Approve
Revision history for this message
Andres Rodriguez (andreserl) :
review: Abstain
200. By Andres Rodriguez

really add coveragerc

Preview Diff

=== added file '.coveragerc'
--- .coveragerc 1970-01-01 00:00:00 +0000
+++ .coveragerc 2013-08-14 22:57:15 +0000
@@ -0,0 +1,6 @@
1[report]
2# Regexes for lines to exclude from consideration
3exclude_lines =
4 if __name__ == .__main__.:
5include=
6 hooks/glance_*
=== modified file 'Makefile'
--- Makefile 2013-07-11 14:17:30 +0000
+++ Makefile 2013-08-14 22:57:15 +0000
@@ -6,3 +6,7 @@
 
 sync:
 	@charm-helper-sync -c charm-helpers-sync.yaml
+
+test:
+	#@nosetests -svd tests/
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage tests
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml 2013-07-05 19:12:08 +0000
+++ charm-helpers.yaml 2013-08-14 22:57:15 +0000
@@ -1,4 +1,4 @@
-branch: lp:~openstack-charmers/charm-tools/pyrewrite-helpers
+branch: lp:charm-helpers
 destination: hooks/charmhelpers
 include:
     - core
=== added file 'hooks/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,58 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import subprocess
12
13from charmhelpers.core.hookenv import (
14 config as config_get,
15 relation_get,
16 relation_ids,
17 related_units as relation_list,
18 log,
19 INFO,
20)
21
22
23def get_cert():
24 cert = config_get('ssl_cert')
25 key = config_get('ssl_key')
26 if not (cert and key):
27 log("Inspecting identity-service relations for SSL certificate.",
28 level=INFO)
29 cert = key = None
30 for r_id in relation_ids('identity-service'):
31 for unit in relation_list(r_id):
32 if not cert:
33 cert = relation_get('ssl_cert',
34 rid=r_id, unit=unit)
35 if not key:
36 key = relation_get('ssl_key',
37 rid=r_id, unit=unit)
38 return (cert, key)
39
40
41def get_ca_cert():
42 ca_cert = None
43 log("Inspecting identity-service relations for CA SSL certificate.",
44 level=INFO)
45 for r_id in relation_ids('identity-service'):
46 for unit in relation_list(r_id):
47 if not ca_cert:
48 ca_cert = relation_get('ca_cert',
49 rid=r_id, unit=unit)
50 return ca_cert
51
52
53def install_ca_cert(ca_cert):
54 if ca_cert:
55 with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
56 'w') as crt:
57 crt.write(ca_cert)
58 subprocess.check_call(['update-ca-certificates', '--fresh'])
=== removed file 'hooks/charmhelpers/contrib/hahelpers/apache_utils.py'
--- hooks/charmhelpers/contrib/hahelpers/apache_utils.py 2013-06-24 16:05:57 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache_utils.py 1970-01-01 00:00:00 +0000
@@ -1,196 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11from utils import (
12 relation_ids,
13 relation_list,
14 relation_get,
15 render_template,
16 juju_log,
17 config_get,
18 install,
19 get_host_ip,
20 restart
21 )
22from cluster_utils import https
23
24import os
25import subprocess
26from base64 import b64decode
27
28APACHE_SITE_DIR = "/etc/apache2/sites-available"
29SITE_TEMPLATE = "apache2_site.tmpl"
30RELOAD_CHECK = "To activate the new configuration"
31
32
33def get_cert():
34 cert = config_get('ssl_cert')
35 key = config_get('ssl_key')
36 if not (cert and key):
37 juju_log('INFO',
38 "Inspecting identity-service relations for SSL certificate.")
39 cert = key = None
40 for r_id in relation_ids('identity-service'):
41 for unit in relation_list(r_id):
42 if not cert:
43 cert = relation_get('ssl_cert',
44 rid=r_id, unit=unit)
45 if not key:
46 key = relation_get('ssl_key',
47 rid=r_id, unit=unit)
48 return (cert, key)
49
50
51def get_ca_cert():
52 ca_cert = None
53 juju_log('INFO',
54 "Inspecting identity-service relations for CA SSL certificate.")
55 for r_id in relation_ids('identity-service'):
56 for unit in relation_list(r_id):
57 if not ca_cert:
58 ca_cert = relation_get('ca_cert',
59 rid=r_id, unit=unit)
60 return ca_cert
61
62
63def install_ca_cert(ca_cert):
64 if ca_cert:
65 with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
66 'w') as crt:
67 crt.write(ca_cert)
68 subprocess.check_call(['update-ca-certificates', '--fresh'])
69
70
71def enable_https(port_maps, namespace, cert, key, ca_cert=None):
72 '''
73 For a given number of port mappings, configures apache2
74 HTTPs local reverse proxying using certficates and keys provided in
75 either configuration data (preferred) or relation data. Assumes ports
76 are not in use (calling charm should ensure that).
77
78 port_maps: dict: external to internal port mappings
79 namespace: str: name of charm
80 '''
81 def _write_if_changed(path, new_content):
82 content = None
83 if os.path.exists(path):
84 with open(path, 'r') as f:
85 content = f.read().strip()
86 if content != new_content:
87 with open(path, 'w') as f:
88 f.write(new_content)
89 return True
90 else:
91 return False
92
93 juju_log('INFO', "Enabling HTTPS for port mappings: {}".format(port_maps))
94 http_restart = False
95
96 if cert:
97 cert = b64decode(cert)
98 if key:
99 key = b64decode(key)
100 if ca_cert:
101 ca_cert = b64decode(ca_cert)
102
103 if not cert and not key:
104 juju_log('ERROR',
105 "Expected but could not find SSL certificate data, not "
106 "configuring HTTPS!")
107 return False
108
109 install('apache2')
110 if RELOAD_CHECK in subprocess.check_output(['a2enmod', 'ssl',
111 'proxy', 'proxy_http']):
112 http_restart = True
113
114 ssl_dir = os.path.join('/etc/apache2/ssl', namespace)
115 if not os.path.exists(ssl_dir):
116 os.makedirs(ssl_dir)
117
118 if (_write_if_changed(os.path.join(ssl_dir, 'cert'), cert)):
119 http_restart = True
120 if (_write_if_changed(os.path.join(ssl_dir, 'key'), key)):
121 http_restart = True
122 os.chmod(os.path.join(ssl_dir, 'key'), 0600)
123
124 install_ca_cert(ca_cert)
125
126 sites_dir = '/etc/apache2/sites-available'
127 for ext_port, int_port in port_maps.items():
128 juju_log('INFO',
129 'Creating apache2 reverse proxy vhost'
130 ' for {}:{}'.format(ext_port,
131 int_port))
132 site = "{}_{}".format(namespace, ext_port)
133 site_path = os.path.join(sites_dir, site)
134 with open(site_path, 'w') as fsite:
135 context = {
136 "ext": ext_port,
137 "int": int_port,
138 "namespace": namespace,
139 "private_address": get_host_ip()
140 }
141 fsite.write(render_template(SITE_TEMPLATE,
142 context))
143
144 if RELOAD_CHECK in subprocess.check_output(['a2ensite', site]):
145 http_restart = True
146
147 if http_restart:
148 restart('apache2')
149
150 return True
151
152
153def disable_https(port_maps, namespace):
154 '''
155 Ensure HTTPS reverse proxying is disables for given port mappings
156
157 port_maps: dict: of ext -> int port mappings
158 namespace: str: name of chamr
159 '''
160 juju_log('INFO', 'Ensuring HTTPS disabled for {}'.format(port_maps))
161
162 if (not os.path.exists('/etc/apache2') or
163 not os.path.exists(os.path.join('/etc/apache2/ssl', namespace))):
164 return
165
166 http_restart = False
167 for ext_port in port_maps.keys():
168 if os.path.exists(os.path.join(APACHE_SITE_DIR,
169 "{}_{}".format(namespace,
170 ext_port))):
171 juju_log('INFO',
172 "Disabling HTTPS reverse proxy"
173 " for {} {}.".format(namespace,
174 ext_port))
175 if (RELOAD_CHECK in
176 subprocess.check_output(['a2dissite',
177 '{}_{}'.format(namespace,
178 ext_port)])):
179 http_restart = True
180
181 if http_restart:
182 restart(['apache2'])
183
184
185def setup_https(port_maps, namespace, cert, key, ca_cert=None):
186 '''
187 Ensures HTTPS is either enabled or disabled for given port
188 mapping.
189
190 port_maps: dict: of ext -> int port mappings
191 namespace: str: name of charm
192 '''
193 if not https:
194 disable_https(port_maps, namespace)
195 else:
196 enable_https(port_maps, namespace, cert, key, ca_cert)
=== added file 'hooks/charmhelpers/contrib/hahelpers/ceph.py'
--- hooks/charmhelpers/contrib/hahelpers/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/ceph.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,291 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import commands
12import os
13import shutil
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 ERROR
29)
30
31from charmhelpers.core.host import (
32 apt_install,
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 umount,
38)
39
40KEYRING = '/etc/ceph/ceph.client.%s.keyring'
41KEYFILE = '/etc/ceph/ceph.client.%s.key'
42
43CEPH_CONF = """[global]
44 auth supported = %(auth)s
45 keyring = %(keyring)s
46 mon host = %(mon_hosts)s
47"""
48
49
50def running(service):
51 # this local util can be dropped as soon the following branch lands
52 # in lp:charm-helpers
53 # https://code.launchpad.net/~gandelman-a/charm-helpers/service_running/
54 try:
55 output = check_output(['service', service, 'status'])
56 except CalledProcessError:
57 return False
58 else:
59 if ("start/running" in output or "is running" in output):
60 return True
61 else:
62 return False
63
64
65def install():
66 ceph_dir = "/etc/ceph"
67 if not os.path.isdir(ceph_dir):
68 os.mkdir(ceph_dir)
69 apt_install('ceph-common', fatal=True)
70
71
72def rbd_exists(service, pool, rbd_img):
73 (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %
74 (service, pool))
75 return rbd_img in out
76
77
78def create_rbd_image(service, pool, image, sizemb):
79 cmd = [
80 'rbd',
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)
91
92
93def pool_exists(service, name):
94 (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
95 return name in out
96
97
98def create_pool(service, name):
99 cmd = [
100 'rados',
101 '--id',
102 service,
103 'mkpool',
104 name
105 ]
106 check_call(cmd)
107
108
109def keyfile_path(service):
110 return KEYFILE % service
111
112
113def keyring_path(service):
114 return KEYRING % service
115
116
117def create_keyring(service, key):
118 keyring = keyring_path(service)
119 if os.path.exists(keyring):
120 log('ceph: Keyring exists at %s.' % keyring, level=INFO)
121 cmd = [
122 'ceph-authtool',
123 keyring,
124 '--create-keyring',
125 '--name=client.%s' % service,
126 '--add-key=%s' % key
127 ]
128 check_call(cmd)
129 log('ceph: Created new ring at %s.' % keyring, level=INFO)
130
131
132def create_key_file(service, key):
133 # create a file containing the key
134 keyfile = keyfile_path(service)
135 if os.path.exists(keyfile):
136 log('ceph: Keyfile exists at %s.' % keyfile, level=INFO)
137 fd = open(keyfile, 'w')
138 fd.write(key)
139 fd.close()
140 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
141
142
143def get_ceph_nodes():
144 hosts = []
145 for r_id in relation_ids('ceph'):
146 for unit in related_units(r_id):
147 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
148 return hosts
149
150
151def configure(service, key, auth):
152 create_keyring(service, key)
153 create_key_file(service, key)
154 hosts = get_ceph_nodes()
155 mon_hosts = ",".join(map(str, hosts))
156 keyring = keyring_path(service)
157 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
158 ceph_conf.write(CEPH_CONF % locals())
159 modprobe_kernel_module('rbd')
160
161
162def image_mapped(image_name):
163 (rc, out) = commands.getstatusoutput('rbd showmapped')
164 return image_name in out
165
166
167def map_block_storage(service, pool, image):
168 cmd = [
169 'rbd',
170 'map',
171 '%s/%s' % (pool, image),
172 '--user',
173 service,
174 '--secret',
175 keyfile_path(service),
176 ]
177 check_call(cmd)
178
179
180def filesystem_mounted(fs):
181 return fs in [f for m, f in mounts()]
182
183
184def make_filesystem(blk_device, fstype='ext4', timeout=10):
185 count = 0
186 e_noent = os.errno.ENOENT
187 while not os.path.exists(blk_device):
188 if count >= timeout:
189 log('ceph: gave up waiting on block device %s' % blk_device,
190 level=ERROR)
191 raise IOError(e_noent, os.strerror(e_noent), blk_device)
192 log('ceph: waiting for block device %s to appear' % blk_device,
193 level=INFO)
194 count += 1
195 time.sleep(1)
196 else:
197 log('ceph: Formatting block device %s as filesystem %s.' %
198 (blk_device, fstype), level=INFO)
199 check_call(['mkfs', '-t', fstype, blk_device])
200
201
202def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
203 # mount block device into /mnt
204 mount(blk_device, '/mnt')
205
206 # copy data to /mnt
207 try:
208 copy_files(data_src_dst, '/mnt')
209 except:
210 pass
211
212 # umount block device
213 umount('/mnt')
214
215 _dir = os.stat(data_src_dst)
216 uid = _dir.st_uid
217 gid = _dir.st_gid
218
219 # re-mount where the data should originally be
220 mount(blk_device, data_src_dst, persist=True)
221
222 # ensure original ownership of new mount.
223 cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
224 check_call(cmd)
225
226
227# TODO: re-use
228def modprobe_kernel_module(module):
229 log('ceph: Loading kernel module', level=INFO)
230 cmd = ['modprobe', module]
231 check_call(cmd)
232 cmd = 'echo %s >> /etc/modules' % module
233 check_call(cmd, shell=True)
234
235
236def copy_files(src, dst, symlinks=False, ignore=None):
237 for item in os.listdir(src):
238 s = os.path.join(src, item)
239 d = os.path.join(dst, item)
240 if os.path.isdir(s):
241 shutil.copytree(s, d, symlinks, ignore)
242 else:
243 shutil.copy2(s, d)
244
245
246def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
247 blk_device, fstype, system_services=[]):
248 """
249 To be called from the current cluster leader.
250 Ensures given pool and RBD image exists, is mapped to a block device,
251 and the device is formatted and mounted at the given mount_point.
252
253 If formatting a device for the first time, data existing at mount_point
254 will be migrated to the RBD device before being remounted.
255
256 All services listed in system_services will be stopped prior to data
257 migration and restarted when complete.
258 """
259 # Ensure pool, RBD image, RBD mappings are in place.
260 if not pool_exists(service, pool):
261 log('ceph: Creating new pool %s.' % pool, level=INFO)
262 create_pool(service, pool)
263
264 if not rbd_exists(service, pool, rbd_img):
265 log('ceph: Creating RBD image (%s).' % rbd_img, level=INFO)
266 create_rbd_image(service, pool, rbd_img, sizemb)
267
268 if not image_mapped(rbd_img):
269 log('ceph: Mapping RBD Image as a Block Device.', level=INFO)
270 map_block_storage(service, pool, rbd_img)
271
272 # make file system
273 # TODO: What happens if for whatever reason this is run again and
274 # the data is already in the rbd device and/or is mounted??
275 # When it is mounted already, it will fail to make the fs
276 # XXX: This is really sketchy! Need to at least add an fstab entry
277 # otherwise this hook will blow away existing data if its executed
278 # after a reboot.
279 if not filesystem_mounted(mount_point):
280 make_filesystem(blk_device, fstype)
281
282 for svc in system_services:
283 if running(svc):
284 log('Stopping services %s prior to migrating data.' % svc,
285 level=INFO)
286 service_stop(svc)
287
288 place_data_on_ceph(service, blk_device, mount_point, fstype)
289
290 for svc in system_services:
291 service_start(svc)
=== removed file 'hooks/charmhelpers/contrib/hahelpers/ceph_utils.py'
--- hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 2013-06-24 16:05:57 +0000
+++ hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 1970-01-01 00:00:00 +0000
@@ -1,256 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import commands
12import subprocess
13import os
14import shutil
15import utils
16
17KEYRING = '/etc/ceph/ceph.client.%s.keyring'
18KEYFILE = '/etc/ceph/ceph.client.%s.key'
19
20CEPH_CONF = """[global]
21 auth supported = %(auth)s
22 keyring = %(keyring)s
23 mon host = %(mon_hosts)s
24"""
25
26
27def execute(cmd):
28 subprocess.check_call(cmd)
29
30
31def execute_shell(cmd):
32 subprocess.check_call(cmd, shell=True)
33
34
35def install():
36 ceph_dir = "/etc/ceph"
37 if not os.path.isdir(ceph_dir):
38 os.mkdir(ceph_dir)
39 utils.install('ceph-common')
40
41
42def rbd_exists(service, pool, rbd_img):
43 (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\
44 (service, pool))
45 return rbd_img in out
46
47
48def create_rbd_image(service, pool, image, sizemb):
49 cmd = [
50 'rbd',
51 'create',
52 image,
53 '--size',
54 str(sizemb),
55 '--id',
56 service,
57 '--pool',
58 pool
59 ]
60 execute(cmd)
61
62
63def pool_exists(service, name):
64 (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
65 return name in out
66
67
68def create_pool(service, name):
69 cmd = [
70 'rados',
71 '--id',
72 service,
73 'mkpool',
74 name
75 ]
76 execute(cmd)
77
78
79def keyfile_path(service):
80 return KEYFILE % service
81
82
83def keyring_path(service):
84 return KEYRING % service
85
86
87def create_keyring(service, key):
88 keyring = keyring_path(service)
89 if os.path.exists(keyring):
90 utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
91 cmd = [
92 'ceph-authtool',
93 keyring,
94 '--create-keyring',
95 '--name=client.%s' % service,
96 '--add-key=%s' % key
97 ]
98 execute(cmd)
99 utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
100
101
102def create_key_file(service, key):
103 # create a file containing the key
104 keyfile = keyfile_path(service)
105 if os.path.exists(keyfile):
106 utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
107 fd = open(keyfile, 'w')
108 fd.write(key)
109 fd.close()
110 utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
111
112
113def get_ceph_nodes():
114 hosts = []
115 for r_id in utils.relation_ids('ceph'):
116 for unit in utils.relation_list(r_id):
117 hosts.append(utils.relation_get('private-address',
118 unit=unit, rid=r_id))
119 return hosts
120
121
122def configure(service, key, auth):
123 create_keyring(service, key)
124 create_key_file(service, key)
125 hosts = get_ceph_nodes()
126 mon_hosts = ",".join(map(str, hosts))
127 keyring = keyring_path(service)
128 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
129 ceph_conf.write(CEPH_CONF % locals())
130 modprobe_kernel_module('rbd')
131
132
133def image_mapped(image_name):
134 (rc, out) = commands.getstatusoutput('rbd showmapped')
135 return image_name in out
136
137
138def map_block_storage(service, pool, image):
139 cmd = [
140 'rbd',
141 'map',
142 '%s/%s' % (pool, image),
143 '--user',
144 service,
145 '--secret',
146 keyfile_path(service),
147 ]
148 execute(cmd)
149
150
151def filesystem_mounted(fs):
152 return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
153
154
155def make_filesystem(blk_device, fstype='ext4'):
156 utils.juju_log('INFO',
157 'ceph: Formatting block device %s as filesystem %s.' %\
158 (blk_device, fstype))
159 cmd = ['mkfs', '-t', fstype, blk_device]
160 execute(cmd)
161
162
163def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
164 # mount block device into /mnt
165 cmd = ['mount', '-t', fstype, blk_device, '/mnt']
166 execute(cmd)
167
168 # copy data to /mnt
169 try:
170 copy_files(data_src_dst, '/mnt')
171 except:
172 pass
173
174 # umount block device
175 cmd = ['umount', '/mnt']
176 execute(cmd)
177
178 _dir = os.stat(data_src_dst)
179 uid = _dir.st_uid
180 gid = _dir.st_gid
181
182 # re-mount where the data should originally be
183 cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
184 execute(cmd)
185
186 # ensure original ownership of new mount.
187 cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
188 execute(cmd)
189
190
191# TODO: re-use
192def modprobe_kernel_module(module):
193 utils.juju_log('INFO', 'Loading kernel module')
194 cmd = ['modprobe', module]
195 execute(cmd)
196 cmd = 'echo %s >> /etc/modules' % module
197 execute_shell(cmd)
198
199
200def copy_files(src, dst, symlinks=False, ignore=None):
201 for item in os.listdir(src):
202 s = os.path.join(src, item)
203 d = os.path.join(dst, item)
204 if os.path.isdir(s):
205 shutil.copytree(s, d, symlinks, ignore)
206 else:
207 shutil.copy2(s, d)
208
209
210def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
211 blk_device, fstype, system_services=[]):
212 """
213 To be called from the current cluster leader.
214 Ensures given pool and RBD image exists, is mapped to a block device,
215 and the device is formatted and mounted at the given mount_point.
216
217 If formatting a device for the first time, data existing at mount_point
218 will be migrated to the RBD device before being remounted.
219
220 All services listed in system_services will be stopped prior to data
221 migration and restarted when complete.
222 """
223 # Ensure pool, RBD image, RBD mappings are in place.
224 if not pool_exists(service, pool):
225 utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
226 create_pool(service, pool)
227
228 if not rbd_exists(service, pool, rbd_img):
229 utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
230 create_rbd_image(service, pool, rbd_img, sizemb)
231
232 if not image_mapped(rbd_img):
233 utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
234 map_block_storage(service, pool, rbd_img)
235
236 # make file system
237 # TODO: What happens if for whatever reason this is run again and
238 # the data is already in the rbd device and/or is mounted??
239 # When it is mounted already, it will fail to make the fs
240 # XXX: This is really sketchy! Need to at least add an fstab entry
241 # otherwise this hook will blow away existing data if its executed
242 # after a reboot.
243 if not filesystem_mounted(mount_point):
244 make_filesystem(blk_device, fstype)
245
246 for svc in system_services:
247 if utils.running(svc):
248 utils.juju_log('INFO',
249 'Stopping services %s prior to migrating '\
250 'data' % svc)
251 utils.stop(svc)
252
253 place_data_on_ceph(service, blk_device, mount_point, fstype)
254
255 for svc in system_services:
256 utils.start(svc)
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,181 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# James Page <james.page@ubuntu.com>
6# Adam Gandelman <adamg@ubuntu.com>
7#
8
9import subprocess
10import os
11
12from socket import gethostname as get_unit_hostname
13
14from charmhelpers.core.hookenv import (
15 log,
16 relation_ids,
17 related_units as relation_list,
18 relation_get,
19 config as config_get,
20 INFO,
21 ERROR,
22 unit_get,
23)
24
25
26class HAIncompleteConfig(Exception):
27 pass
28
29
30def is_clustered():
31 for r_id in (relation_ids('ha') or []):
32 for unit in (relation_list(r_id) or []):
33 clustered = relation_get('clustered',
34 rid=r_id,
35 unit=unit)
36 if clustered:
37 return True
38 return False
39
40
41def is_leader(resource):
42 cmd = [
43 "crm", "resource",
44 "show", resource
45 ]
46 try:
47 status = subprocess.check_output(cmd)
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if get_unit_hostname() in status:
52 return True
53 else:
54 return False
55
56
57def peer_units():
58 peers = []
59 for r_id in (relation_ids('cluster') or []):
60 for unit in (relation_list(r_id) or []):
61 peers.append(unit)
62 return peers
63
64
65def oldest_peer(peers):
66 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
67 for peer in peers:
68 remote_unit_no = int(peer.split('/')[1])
69 if remote_unit_no < local_unit_no:
70 return False
71 return True
72
73
74def eligible_leader(resource):
75 if is_clustered():
76 if not is_leader(resource):
77 log('Deferring action to CRM leader.', level=INFO)
78 return False
79 else:
80 peers = peer_units()
81 if peers and not oldest_peer(peers):
82 log('Deferring action to oldest service unit.', level=INFO)
83 return False
84 return True
85
86
87def https():
88 '''
89 Determines whether enough data has been provided in configuration
90 or relation data to configure HTTPS
91 .
92 returns: boolean
93 '''
94 if config_get('use-https') == "yes":
95 return True
96 if config_get('ssl_cert') and config_get('ssl_key'):
97 return True
98 for r_id in relation_ids('identity-service'):
99 for unit in relation_list(r_id):
100 if None not in [
101 relation_get('https_keystone', rid=r_id, unit=unit),
102 relation_get('ssl_cert', rid=r_id, unit=unit),
103 relation_get('ssl_key', rid=r_id, unit=unit),
104 relation_get('ca_cert', rid=r_id, unit=unit),
105 ]:
106 return True
107 return False
108
109
110def determine_api_port(public_port):
111 '''
112 Determine correct API server listening port based on
113 existence of HTTPS reverse proxy and/or haproxy.
114
115 public_port: int: standard public port for given service
116
117 returns: int: the correct listening port for the API service
118 '''
119 i = 0
120 if len(peer_units()) > 0 or is_clustered():
121 i += 1
122 if https():
123 i += 1
124 return public_port - (i * 10)
125
126
127def determine_haproxy_port(public_port):
128 '''
129 Description: Determine correct proxy listening port based on public IP +
130 existence of HTTPS reverse proxy.
131
132 public_port: int: standard public port for given service
133
134 returns: int: the correct listening port for the HAProxy service
135 '''
136 i = 0
137 if https():
138 i += 1
139 return public_port - (i * 10)
140
141
142def get_hacluster_config():
143 '''
144 Obtains all relevant configuration from charm configuration required
145 for initiating a relation to hacluster:
146
147 ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
148
149 returns: dict: A dict containing settings keyed by setting name.
150 raises: HAIncompleteConfig if settings are missing.
151 '''
152 settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
153 conf = {}
154 for setting in settings:
155 conf[setting] = config_get(setting)
156 missing = []
157 [missing.append(s) for s, v in conf.iteritems() if v is None]
158 if missing:
159 log('Insufficient config data to configure hacluster.', level=ERROR)
160 raise HAIncompleteConfig
161 return conf
162
163
164def canonical_url(configs, vip_setting='vip'):
165 '''
166 Returns the correct HTTP URL to this host given the state of HTTPS
167 configuration and hacluster.
168
169 :configs : OSTemplateRenderer: A config tempating object to inspect for
170 a complete https context.
171 :vip_setting: str: Setting in charm config that specifies
172 VIP address.
173 '''
174 scheme = 'http'
175 if 'https' in configs.complete_contexts():
176 scheme = 'https'
177 if is_clustered():
178 addr = config_get(vip_setting)
179 else:
180 addr = unit_get('private-address')
181 return '%s://%s' % (scheme, addr)
=== removed file 'hooks/charmhelpers/contrib/hahelpers/cluster_utils.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 2013-07-09 17:24:46 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 1970-01-01 00:00:00 +0000
@@ -1,176 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11from utils import (
12 juju_log,
13 relation_ids,
14 relation_list,
15 relation_get,
16 get_unit_hostname,
17 config_get
18)
19import subprocess
20import os
21
22
23class HAIncompleteConfig(Exception):
24 pass
25
26
27def is_clustered():
28 for r_id in (relation_ids('ha') or []):
29 for unit in (relation_list(r_id) or []):
30 clustered = relation_get('clustered',
31 rid=r_id,
32 unit=unit)
33 if clustered:
34 return True
35 return False
36
37
38def is_leader(resource):
39 cmd = [
40 "crm", "resource",
41 "show", resource
42 ]
43 try:
44 status = subprocess.check_output(cmd)
45 except subprocess.CalledProcessError:
46 return False
47 else:
48 if get_unit_hostname() in status:
49 return True
50 else:
51 return False
52
53
54def peer_units():
55 peers = []
56 for r_id in (relation_ids('cluster') or []):
57 for unit in (relation_list(r_id) or []):
58 peers.append(unit)
59 return peers
60
61
62def oldest_peer(peers):
63 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
64 for peer in peers:
65 remote_unit_no = int(peer.split('/')[1])
66 if remote_unit_no < local_unit_no:
67 return False
68 return True
69
70
71def eligible_leader(resource):
72 if is_clustered():
73 if not is_leader(resource):
74 juju_log('INFO', 'Deferring action to CRM leader.')
75 return False
76 else:
77 peers = peer_units()
78 if peers and not oldest_peer(peers):
79 juju_log('INFO', 'Deferring action to oldest service unit.')
80 return False
81 return True
82
83
84def https():
85 '''
86 Determines whether enough data has been provided in configuration
87 or relation data to configure HTTPS
88 .
89 returns: boolean
90 '''
91 if config_get('use-https') == "yes":
92 return True
93 if config_get('ssl_cert') and config_get('ssl_key'):
94 return True
95 for r_id in relation_ids('identity-service'):
96 for unit in relation_list(r_id):
97 if (relation_get('https_keystone', rid=r_id, unit=unit) and
98 relation_get('ssl_cert', rid=r_id, unit=unit) and
99 relation_get('ssl_key', rid=r_id, unit=unit) and
100 relation_get('ca_cert', rid=r_id, unit=unit)):
101 return True
102 return False
103
104
105def determine_api_port(public_port):
106 '''
107 Determine correct API server listening port based on
108 existence of HTTPS reverse proxy and/or haproxy.
109
110 public_port: int: standard public port for given service
111
112 returns: int: the correct listening port for the API service
113 '''
114 i = 0
115 if len(peer_units()) > 0 or is_clustered():
116 i += 1
117 if https():
118 i += 1
119 return public_port - (i * 10)
120
121
122def determine_haproxy_port(public_port):
123 '''
124 Description: Determine correct proxy listening port based on public IP +
125 existence of HTTPS reverse proxy.
126
127 public_port: int: standard public port for given service
128
129 returns: int: the correct listening port for the HAProxy service
130 '''
131 i = 0
132 if https():
133 i += 1
134 return public_port - (i * 10)
135
136
137def get_hacluster_config():
138 '''
139 Obtains all relevant configuration from charm configuration required
140 for initiating a relation to hacluster:
141
142 ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
143
144 returns: dict: A dict containing settings keyed by setting name.
145 raises: HAIncompleteConfig if settings are missing.
146 '''
147 settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
148 conf = {}
149 for setting in settings:
150 conf[setting] = config_get(setting)
151 missing = []
152 [missing.append(s) for s, v in conf.iteritems() if v is None]
153 if missing:
154 juju_log('Insufficient config data to configure hacluster.')
155 raise HAIncompleteConfig
156 return conf
157
158
159def canonical_url(configs, vip_setting='vip'):
160 '''
161 Returns the correct HTTP URL to this host given the state of HTTPS
162 configuration and hacluster.
163
164 :configs : OSTemplateRenderer: A config tempating object to inspect for
165 a complete https context.
166 :vip_setting: str: Setting in charm config that specifies
167 VIP address.
168 '''
169 scheme = 'http'
170 if 'https' in configs.complete_contexts():
171 scheme = 'https'
172 if is_clustered():
173 addr = config_get(vip_setting)
174 else:
175 addr = get_unit_hostname()
176 return '%s://%s' % (scheme, addr)
=== removed file 'hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py'
--- hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 2013-06-24 16:05:57 +0000
+++ hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 1970-01-01 00:00:00 +0000
@@ -1,55 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11from utils import (
12 relation_ids,
13 relation_list,
14 relation_get,
15 unit_get,
16 reload,
17 render_template
18 )
19import os
20
21HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
22HAPROXY_DEFAULT = '/etc/default/haproxy'
23
24
25def configure_haproxy(service_ports):
26 '''
27 Configure HAProxy based on the current peers in the service
28 cluster using the provided port map:
29
30 "swift": [ 8080, 8070 ]
31
32 HAproxy will also be reloaded/started if required
33
34 service_ports: dict: dict of lists of [ frontend, backend ]
35 '''
36 cluster_hosts = {}
37 cluster_hosts[os.getenv('JUJU_UNIT_NAME').replace('/', '-')] = \
38 unit_get('private-address')
39 for r_id in relation_ids('cluster'):
40 for unit in relation_list(r_id):
41 cluster_hosts[unit.replace('/', '-')] = \
42 relation_get(attribute='private-address',
43 rid=r_id,
44 unit=unit)
45 context = {
46 'units': cluster_hosts,
47 'service_ports': service_ports
48 }
49 with open(HAPROXY_CONF, 'w') as f:
50 f.write(render_template(os.path.basename(HAPROXY_CONF),
51 context))
52 with open(HAPROXY_DEFAULT, 'w') as f:
53 f.write('ENABLED=1')
54
55 reload('haproxy')
=== removed file 'hooks/charmhelpers/contrib/hahelpers/utils.py'
--- hooks/charmhelpers/contrib/hahelpers/utils.py 2013-06-24 16:05:57 +0000
+++ hooks/charmhelpers/contrib/hahelpers/utils.py 1970-01-01 00:00:00 +0000
@@ -1,333 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Paul Collins <paul.collins@canonical.com>
9# Adam Gandelman <adamg@ubuntu.com>
10#
11
12import json
13import os
14import subprocess
15import socket
16import sys
17
18
19def do_hooks(hooks):
20 hook = os.path.basename(sys.argv[0])
21
22 try:
23 hook_func = hooks[hook]
24 except KeyError:
25 juju_log('INFO',
26 "This charm doesn't know how to handle '{}'.".format(hook))
27 else:
28 hook_func()
29
30
31def install(*pkgs):
32 cmd = [
33 'apt-get',
34 '-y',
35 'install'
36 ]
37 for pkg in pkgs:
38 cmd.append(pkg)
39 subprocess.check_call(cmd)
40
41TEMPLATES_DIR = 'templates'
42
43try:
44 import jinja2
45except ImportError:
46 install('python-jinja2')
47 import jinja2
48
49try:
50 import dns.resolver
51except ImportError:
52 install('python-dnspython')
53 import dns.resolver
54
55
56def render_template(template_name, context, template_dir=TEMPLATES_DIR):
57 templates = jinja2.Environment(
58 loader=jinja2.FileSystemLoader(template_dir)
59 )
60 template = templates.get_template(template_name)
61 return template.render(context)
62
63CLOUD_ARCHIVE = \
64""" # Ubuntu Cloud Archive
65deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
66"""
67
68CLOUD_ARCHIVE_POCKETS = {
69 'folsom': 'precise-updates/folsom',
70 'folsom/updates': 'precise-updates/folsom',
71 'folsom/proposed': 'precise-proposed/folsom',
72 'grizzly': 'precise-updates/grizzly',
73 'grizzly/updates': 'precise-updates/grizzly',
74 'grizzly/proposed': 'precise-proposed/grizzly'
75 }
76
77
78def configure_source():
79 source = str(config_get('openstack-origin'))
80 if not source:
81 return
82 if source.startswith('ppa:'):
83 cmd = [
84 'add-apt-repository',
85 source
86 ]
87 subprocess.check_call(cmd)
88 if source.startswith('cloud:'):
89 # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
90 # cloud:precise-folsom/updates or cloud:precise-folsom/proposed
91 install('ubuntu-cloud-keyring')
92 pocket = source.split(':')[1]
93 pocket = pocket.split('-')[1]
94 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
95 apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
96 if source.startswith('deb'):
97 l = len(source.split('|'))
98 if l == 2:
99 (apt_line, key) = source.split('|')
100 cmd = [
101 'apt-key',
102 'adv', '--keyserver keyserver.ubuntu.com',
103 '--recv-keys', key
104 ]
105 subprocess.check_call(cmd)
106 elif l == 1:
107 apt_line = source
108
109 with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
110 apt.write(apt_line + "\n")
111 cmd = [
112 'apt-get',
113 'update'
114 ]
115 subprocess.check_call(cmd)
116
117# Protocols
118TCP = 'TCP'
119UDP = 'UDP'
120
121
122def expose(port, protocol='TCP'):
123 cmd = [
124 'open-port',
125 '{}/{}'.format(port, protocol)
126 ]
127 subprocess.check_call(cmd)
128
129
130def juju_log(severity, message):
131 cmd = [
132 'juju-log',
133 '--log-level', severity,
134 message
135 ]
136 subprocess.check_call(cmd)
137
138
139cache = {}
140
141
142def cached(func):
143 def wrapper(*args, **kwargs):
144 global cache
145 key = str((func, args, kwargs))
146 try:
147 return cache[key]
148 except KeyError:
149 res = func(*args, **kwargs)
150 cache[key] = res
151 return res
152 return wrapper
153
154
155@cached
156def relation_ids(relation):
157 cmd = [
158 'relation-ids',
159 relation
160 ]
161 result = str(subprocess.check_output(cmd)).split()
162 if result == "":
163 return None
164 else:
165 return result
166
167
168@cached
169def relation_list(rid):
170 cmd = [
171 'relation-list',
172 '-r', rid,
173 ]
174 result = str(subprocess.check_output(cmd)).split()
175 if result == "":
176 return None
177 else:
178 return result
179
180
181@cached
182def relation_get(attribute, unit=None, rid=None):
183 cmd = [
184 'relation-get',
185 ]
186 if rid:
187 cmd.append('-r')
188 cmd.append(rid)
189 cmd.append(attribute)
190 if unit:
191 cmd.append(unit)
192 value = subprocess.check_output(cmd).strip() # IGNORE:E1103
193 if value == "":
194 return None
195 else:
196 return value
197
198
199@cached
200def relation_get_dict(relation_id=None, remote_unit=None):
201 """Obtain all relation data as dict by way of JSON"""
202 cmd = [
203 'relation-get', '--format=json'
204 ]
205 if relation_id:
206 cmd.append('-r')
207 cmd.append(relation_id)
208 if remote_unit:
209 remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
210 os.environ['JUJU_REMOTE_UNIT'] = remote_unit
211 j = subprocess.check_output(cmd)
212 if remote_unit and remote_unit_orig:
213 os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
214 d = json.loads(j)
215 settings = {}
216 # convert unicode to strings
217 for k, v in d.iteritems():
218 settings[str(k)] = str(v)
219 return settings
220
221
222def relation_set(**kwargs):
223 cmd = [
224 'relation-set'
225 ]
226 args = []
227 for k, v in kwargs.items():
228 if k == 'rid':
229 if v:
230 cmd.append('-r')
231 cmd.append(v)
232 else:
233 args.append('{}={}'.format(k, v))
234 cmd += args
235 subprocess.check_call(cmd)
236
237
238@cached
239def unit_get(attribute):
240 cmd = [
241 'unit-get',
242 attribute
243 ]
244 value = subprocess.check_output(cmd).strip() # IGNORE:E1103
245 if value == "":
246 return None
247 else:
248 return value
249
250
251@cached
252def config_get(attribute):
253 cmd = [
254 'config-get',
255 '--format',
256 'json',
257 ]
258 out = subprocess.check_output(cmd).strip() # IGNORE:E1103
259 cfg = json.loads(out)
260
261 try:
262 return cfg[attribute]
263 except KeyError:
264 return None
265
266
267@cached
268def get_unit_hostname():
269 return socket.gethostname()
270
271
272@cached
273def get_host_ip(hostname=None):
274 hostname = hostname or unit_get('private-address')
275 try:
276 # Test to see if already an IPv4 address
277 socket.inet_aton(hostname)
278 return hostname
279 except socket.error:
280 answers = dns.resolver.query(hostname, 'A')
281 if answers:
282 return answers[0].address
283 return None
284
285
286def _svc_control(service, action):
287 subprocess.check_call(['service', service, action])
288
289
290def restart(*services):
291 for service in services:
292 _svc_control(service, 'restart')
293
294
295def stop(*services):
296 for service in services:
297 _svc_control(service, 'stop')
298
299
300def start(*services):
301 for service in services:
302 _svc_control(service, 'start')
303
304
305def reload(*services):
306 for service in services:
307 try:
308 _svc_control(service, 'reload')
309 except subprocess.CalledProcessError:
310 # Reload failed - either service does not support reload
311 # or it was not running - restart will fixup most things
312 _svc_control(service, 'restart')
313
314
315def running(service):
316 try:
317 output = subprocess.check_output(['service', service, 'status'])
318 except subprocess.CalledProcessError:
319 return False
320 else:
321 if ("start/running" in output or
322 "is running" in output):
323 return True
324 else:
325 return False
326
327
328def is_relation_made(relation, key='private-address'):
329 for r_id in (relation_ids(relation) or []):
330 for unit in (relation_list(r_id) or []):
331 if relation_get(key, rid=r_id, unit=unit):
332 return True
333 return False
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2013-07-05 19:12:08 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2013-08-14 22:57:15 +0000
@@ -16,7 +16,7 @@
     unit_get,
 )
 
-from charmhelpers.contrib.hahelpers.cluster_utils import (
+from charmhelpers.contrib.hahelpers.cluster import (
     determine_api_port,
     determine_haproxy_port,
     https,
@@ -24,7 +24,7 @@
     peer_units,
 )
 
-from charmhelpers.contrib.hahelpers.apache_utils import (
+from charmhelpers.contrib.hahelpers.apache import (
     get_cert,
     get_ca_cert,
 )
@@ -42,7 +42,7 @@
         if v is None or v == '':
             _missing.append(k)
     if _missing:
-        print 'Missing required data: %s' % ' '.join(_missing)
+        log('Missing required data: %s' % ' '.join(_missing), level='INFO')
         return False
     return True
 
@@ -106,8 +106,8 @@
         'admin_password': relation_get('service_password', rid=rid,
                                        unit=unit),
         # XXX: Hard-coded http.
         'service_protocol': 'http',
         'auth_protocol': 'http',
     }
     if not context_complete(ctxt):
         return {}
@@ -202,6 +202,30 @@
         with open('/etc/default/haproxy', 'w') as out:
             out.write('ENABLED=1\n')
         return ctxt
+        log('HAProxy context is incomplete, this unit has no peers.')
+        return {}
+
+
+class ImageServiceContext(OSContextGenerator):
+    interfaces = ['image-servce']
+
+    def __call__(self):
+        '''
+        Obtains the glance API server from the image-service relation. Useful
+        in nova and cinder (currently).
+        '''
+        log('Generating template context for image-service.')
+        rids = relation_ids('image-service')
+        if not rids:
+            return {}
+        for rid in rids:
+            for unit in related_units(rid):
+                api_server = relation_get('glance-api-server',
+                                          rid=rid, unit=unit)
+                if api_server:
+                    return {'glance_api_servers': api_server}
+        log('ImageService context is incomplete. '
+            'Missing required relation data.')
         return {}
 
 
=== removed file 'hooks/charmhelpers/contrib/openstack/openstack_utils.py'
--- hooks/charmhelpers/contrib/openstack/openstack_utils.py 2013-07-11 17:29:14 +0000
+++ hooks/charmhelpers/contrib/openstack/openstack_utils.py 1970-01-01 00:00:00 +0000
@@ -1,270 +0,0 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4
5import apt_pkg as apt
6import subprocess
7import os
8import sys
9
10from distutils.version import StrictVersion
11
12from charmhelpers.core.hookenv import (
13 config,
14 charm_dir,
15)
16
17CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
18CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
19
20ubuntu_openstack_release = {
21 'oneiric': 'diablo',
22 'precise': 'essex',
23 'quantal': 'folsom',
24 'raring': 'grizzly',
25}
26
27
28openstack_codenames = {
29 '2011.2': 'diablo',
30 '2012.1': 'essex',
31 '2012.2': 'folsom',
32 '2013.1': 'grizzly',
33 '2013.2': 'havana',
34}
35
36# The ugly duckling
37swift_codenames = {
38 '1.4.3': 'diablo',
39 '1.4.8': 'essex',
40 '1.7.4': 'folsom',
41 '1.7.6': 'grizzly',
42 '1.7.7': 'grizzly',
43 '1.8.0': 'grizzly',
44}
45
46
47def juju_log(msg):
48 subprocess.check_call(['juju-log', msg])
49
50
51def error_out(msg):
52 juju_log("FATAL ERROR: %s" % msg)
53 sys.exit(1)
54
55
56def lsb_release():
57 '''Return /etc/lsb-release in a dict'''
58 lsb = open('/etc/lsb-release', 'r')
59 d = {}
60 for l in lsb:
61 k, v = l.split('=')
62 d[k.strip()] = v.strip()
63 return d
64
65
66def get_os_codename_install_source(src):
67 '''Derive OpenStack release codename from a given installation source.'''
68 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
69
70 rel = ''
71 if src == 'distro':
72 try:
73 rel = ubuntu_openstack_release[ubuntu_rel]
74 except KeyError:
75 e = 'Could not derive openstack release for '\
76 'this Ubuntu release: %s' % ubuntu_rel
77 error_out(e)
78 return rel
79
80 if src.startswith('cloud:'):
81 ca_rel = src.split(':')[1]
82 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
83 return ca_rel
84
85 # Best guess match based on deb string provided
86 if src.startswith('deb') or src.startswith('ppa'):
87 for k, v in openstack_codenames.iteritems():
88 if v in src:
89 return v
90
91
92def get_os_version_install_source(src):
93 codename = get_os_codename_install_source(src)
94 return get_os_version_codename(codename)
95
96
97def get_os_codename_version(vers):
98 '''Determine OpenStack codename from version number.'''
99 try:
100 return openstack_codenames[vers]
101 except KeyError:
102 e = 'Could not determine OpenStack codename for version %s' % vers
103 error_out(e)
104
105
106def get_os_version_codename(codename):
107 '''Determine OpenStack version number from codename.'''
108 for k, v in openstack_codenames.iteritems():
109 if v == codename:
110 return k
111 e = 'Could not derive OpenStack version for '\
112 'codename: %s' % codename
113 error_out(e)
114
115
116def get_os_codename_package(pkg, fatal=True):
117 '''Derive OpenStack release codename from an installed package.'''
118 apt.init()
119 cache = apt.Cache()
120
121 try:
122 pkg = cache[pkg]
123 except:
124 if not fatal:
125 return None
126 e = 'Could not determine version of installed package: %s' % pkg
127 error_out(e)
128
129 if pkg.current_ver != None:
130 vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
131 else:
132 return None
133
134 try:
135 if 'swift' in pkg.name:
136 vers = vers[:5]
137 return swift_codenames[vers]
138 else:
139 vers = vers[:6]
140 return openstack_codenames[vers]
141 except KeyError:
142 e = 'Could not determine OpenStack codename for version %s' % vers
143 error_out(e)
144
145
146def get_os_version_package(pkg, fatal=True):
147 '''Derive OpenStack version number from an installed package.'''
148 codename = get_os_codename_package(pkg, fatal=fatal)
149
150 if not codename:
151 return None
152
153 if 'swift' in pkg:
154 vers_map = swift_codenames
155 else:
156 vers_map = openstack_codenames
157
158 for version, cname in vers_map.iteritems():
159 if cname == codename:
160 return version
161 #e = "Could not determine OpenStack version for package: %s" % pkg
162 #error_out(e)
163
164
165def import_key(keyid):
166 cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
167 "--recv-keys %s" % keyid
168 try:
169 subprocess.check_call(cmd.split(' '))
170 except subprocess.CalledProcessError:
171 error_out("Error importing repo key %s" % keyid)
172
173
174def configure_installation_source(rel):
175 '''Configure apt installation source.'''
176 if rel == 'distro':
177 return
178 elif rel[:4] == "ppa:":
179 src = rel
180 subprocess.check_call(["add-apt-repository", "-y", src])
181 elif rel[:3] == "deb":
182 l = len(rel.split('|'))
183 if l == 2:
184 src, key = rel.split('|')
185 juju_log("Importing PPA key from keyserver for %s" % src)
186 import_key(key)
187 elif l == 1:
188 src = rel
189 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
190 f.write(src)
191 elif rel[:6] == 'cloud:':
192 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
193 rel = rel.split(':')[1]
194 u_rel = rel.split('-')[0]
195 ca_rel = rel.split('-')[1]
196
197 if u_rel != ubuntu_rel:
198 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
199 'version (%s)' % (ca_rel, ubuntu_rel)
200 error_out(e)
201
202 if 'staging' in ca_rel:
203 # staging is just a regular PPA.
204 os_rel = ca_rel.split('/')[0]
205 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
206 cmd = 'add-apt-repository -y %s' % ppa
207 subprocess.check_call(cmd.split(' '))
208 return
209
210 # map charm config options to actual archive pockets.
211 pockets = {
212 'folsom': 'precise-updates/folsom',
213 'folsom/updates': 'precise-updates/folsom',
214 'folsom/proposed': 'precise-proposed/folsom',
215 'grizzly': 'precise-updates/grizzly',
216 'grizzly/updates': 'precise-updates/grizzly',
217 'grizzly/proposed': 'precise-proposed/grizzly'
218 }
219
220 try:
221 pocket = pockets[ca_rel]
222 except KeyError:
223 e = 'Invalid Cloud Archive release specified: %s' % rel
224 error_out(e)
225
226 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
227 # TODO: Replace key import with cloud archive keyring pkg.
228 import_key(CLOUD_ARCHIVE_KEY_ID)
229
230 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
231 f.write(src)
232 else:
233 error_out("Invalid openstack-release specified: %s" % rel)
234
235
236def save_script_rc(script_path="scripts/scriptrc", **env_vars):
237 """
238 Write an rc file in the charm-delivered directory containing
239 exported environment variables provided by env_vars. Any charm scripts run
240 outside the juju hook environment can source this scriptrc to obtain
241 updated config information necessary to perform health checks or
242 service changes.
243 """
244 unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
245 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
246 if not os.path.exists(os.path.dirname(juju_rc_path)):
247 os.mkdir(os.path.dirname(juju_rc_path))
248 with open(juju_rc_path, 'wb') as rc_script:
249 rc_script.write(
250 "#!/bin/bash\n")
251 [rc_script.write('export %s=%s\n' % (u, p))
252 for u, p in env_vars.iteritems() if u != "script_path"]
253
254
255def openstack_upgrade_available(package):
256 """
257 Determines if an OpenStack upgrade is available from installation
258 source, based on version of installed package.
259
260 :param package: str: Name of installed package.
261
262 :returns: bool: : Returns True if configured installation source offers
263 a newer version of package.
264
265 """
266
267 src = config('openstack-origin')
268 cur_vers = get_os_version_package(package)
269 available_vers = get_os_version_install_source(src)
270 return StrictVersion(available_vers) > StrictVersion(cur_vers)
=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 2013-07-05 19:12:08 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-08-14 22:57:15 +0000
@@ -1,85 +1,20 @@
-import logging
 import os
 
+from charmhelpers.core.host import apt_install
+
+from charmhelpers.core.hookenv import (
+    log,
+    ERROR,
+    INFO
+)
+
+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
+
 try:
-    import jinja2
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment
 except ImportError:
-    pass
-
-
-logging.basicConfig(level=logging.INFO)
-
-"""
-
-WIP Abstract templating system for the OpenStack charms.
-
-The idea is that an openstack charm can register a number of config files
-associated with common context generators. The context generators are
-responsible for inspecting charm config/relation data/deployment state
-and presenting correct context to the template. Generic context generators
-could live somewhere in charmhelpers.contrib.openstack, and each
-charm can implement their own specific ones as well.
-
-Ideally a charm would register all its config files somewhere in its namespace,
-eg cinder_utils.py:
-
-from charmhelpers.contrib.openstack import templating, context
-
-config_files = {
-    '/etc/cinder/cinder.conf': [context.shared_db,
-                                context.amqp,
-                                context.ceph],
-    '/etc/cinder/api-paste.ini': [context.identity_service]
-}
-
-configs = templating.OSConfigRenderer(template_dir='templates/')
-
-[configs.register(k, v) for k, v in config_files.iteritems()]
-
-Hooks can then render config files as need, eg:
-
-def config_changed():
-    configs.render_all()
-
-def db_changed():
-    configs.render('/etc/cinder/cinder.conf')
-    check_call(['cinder-manage', 'db', 'sync'])
-
-This would look very similar for nova/glance/etc.
-
-
-The OSTemplteLoader is responsible for creating a jinja2.ChoiceLoader that
-should help reduce fragmentation of a charms' templates across OpenStack
-releases, so we do not need to maintain many copies of templates or juggle
-symlinks. The constructed loader lets the template be loaded from the most
-recent OS release-specific template dir or a base template dir.
-
-For example, say cinder has no changes in config structure across any OS
-releases, all OS releases share the same templates from the base directory:
-
-
-templates/api-paste.ini
-templates/cinder.conf
-
-Then, since Grizzly and beyond, cinder.conf's format has changed:
-
-templates/api-paste.ini
-templates/cinder.conf
-templates/grizzly/cinder.conf
-
-
-Grizzly and beyond will load from templates/grizzly, but any release prior will
-load from templates/. If some change in Icehouse breaks config format again:
-
-templates/api-paste.ini
-templates/cinder.conf
-templates/grizzly/cinder.conf
-templates/icehouse/cinder.conf
-
-Icehouse and beyond will load from icehouse/, Grizzly + Havan from grizzly/,
-previous releases from the base templates/
-
-"""
+    # python-jinja2 may not be installed yet, or we're running unittests.
+    FileSystemLoader = ChoiceLoader = Environment = None
 
 
 class OSConfigException(Exception):
@@ -107,36 +42,36 @@
     jinja2.FilesystemLoaders, ordered in descending
     order by OpenStack release.
     """
-    tmpl_dirs = (
-        ('essex', os.path.join(templates_dir, 'essex')),
-        ('folsom', os.path.join(templates_dir, 'folsom')),
-        ('grizzly', os.path.join(templates_dir, 'grizzly')),
-        ('havana', os.path.join(templates_dir, 'havana')),
-        ('icehouse', os.path.join(templates_dir, 'icehouse')),
-    )
+    tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
+                 for rel in OPENSTACK_CODENAMES.itervalues()]
 
     if not os.path.isdir(templates_dir):
-        logging.error('Templates directory not found @ %s.' % templates_dir)
+        log('Templates directory not found @ %s.' % templates_dir,
+            level=ERROR)
         raise OSConfigException
 
     # the bottom contains templates_dir and possibly a common templates dir
     # shipped with the helper.
-    loaders = [jinja2.FileSystemLoader(templates_dir)]
+    loaders = [FileSystemLoader(templates_dir)]
     helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
     if os.path.isdir(helper_templates):
-        loaders.append(jinja2.FileSystemLoader(helper_templates))
+        loaders.append(FileSystemLoader(helper_templates))
 
     for rel, tmpl_dir in tmpl_dirs:
         if os.path.isdir(tmpl_dir):
-            loaders.insert(0, jinja2.FileSystemLoader(tmpl_dir))
+            loaders.insert(0, FileSystemLoader(tmpl_dir))
         if rel == os_release:
             break
-    logging.info('Creating choice loader with dirs: %s' %
-                 [l.searchpath for l in loaders])
-    return jinja2.ChoiceLoader(loaders)
+    log('Creating choice loader with dirs: %s' %
+        [l.searchpath for l in loaders], level=INFO)
+    return ChoiceLoader(loaders)
 
 
 class OSConfigTemplate(object):
+    """
+    Associates a config file template with a list of context generators.
+    Responsible for constructing a template context based on those generators.
+    """
     def __init__(self, config_file, contexts):
         self.config_file = config_file
 
@@ -170,46 +105,141 @@
 
 
 class OSConfigRenderer(object):
+    """
+    This class provides a common templating system to be used by OpenStack
+    charms. It is intended to help charms share common code and templates,
+    and ease the burden of managing config templates across multiple OpenStack
+    releases.
+
+    Basic usage:
+        # import some common context generators from charmhelpers
+        from charmhelpers.contrib.openstack import context
+
+        # Create a renderer object for a specific OS release.
+        configs = OSConfigRenderer(templates_dir='/tmp/templates',
+                                   openstack_release='folsom')
+        # register some config files with context generators.
+        configs.register(config_file='/etc/nova/nova.conf',
+                         contexts=[context.SharedDBContext(),
+                                   context.AMQPContext()])
+        configs.register(config_file='/etc/nova/api-paste.ini',
+                         contexts=[context.IdentityServiceContext()])
+        configs.register(config_file='/etc/haproxy/haproxy.conf',
+                         contexts=[context.HAProxyContext()])
+        # write out a single config
+        configs.write('/etc/nova/nova.conf')
+        # write out all registered configs
+        configs.write_all()
+
+    Details:
+
+    OpenStack Releases and template loading
+    ---------------------------------------
+    When the object is instantiated, it is associated with a specific OS
+    release. This dictates how the template loader will be constructed.
+
+    The constructed loader attempts to load the template from several places
+    in the following order:
+    - from the most recent OS release-specific template dir (if one exists)
+    - the base templates_dir
+    - a template directory shipped in the charm with this helper file.
+
+
+    For the example above, '/tmp/templates' contains the following structure:
+        /tmp/templates/nova.conf
+        /tmp/templates/api-paste.ini
+        /tmp/templates/grizzly/api-paste.ini
+        /tmp/templates/havana/api-paste.ini
+
+    Since it was registered with the grizzly release, it first searches
+    the grizzly directory for nova.conf, then the templates dir.
+
+    When writing api-paste.ini, it will find the template in the grizzly
+    directory.
+
+    If the object were created with folsom, it would fall back to the
+    base templates dir for its api-paste.ini template.
+
+    This system should help manage changes in config files through
+    openstack releases, allowing charms to fall back to the most recently
+    updated config template for a given release.
+
+    The haproxy.conf, since it is not shipped in the templates dir, will
+    be loaded from the module directory's template directory, eg
+    $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
+    us to ship common templates (haproxy, apache) with the helpers.
+
+    Context generators
+    ---------------------------------------
+    Context generators are used to generate template contexts during hook
+    execution. Doing so may require inspecting service relations, charm
+    config, etc. When registered, a config file is associated with a list
+    of generators. When a template is rendered and written, all context
+    generators are called in a chain to generate the context dictionary
+    passed to the jinja2 template. See context.py for more info.
+    """
     def __init__(self, templates_dir, openstack_release):
         if not os.path.isdir(templates_dir):
-            logging.error('Could not locate templates dir %s' % templates_dir)
+            log('Could not locate templates dir %s' % templates_dir,
+                level=ERROR)
             raise OSConfigException
+
         self.templates_dir = templates_dir
         self.openstack_release = openstack_release
         self.templates = {}
         self._tmpl_env = None
 
+        if None in [Environment, ChoiceLoader, FileSystemLoader]:
+            # if this code is running, the object is created pre-install hook.
+            # jinja2 shouldn't get touched until the module is reloaded on next
+            # hook execution, with proper jinja2 bits successfully imported.
+            apt_install('python-jinja2')
+
     def register(self, config_file, contexts):
+        """
+        Register a config file with a list of context generators to be called
+        during rendering.
+        """
         self.templates[config_file] = OSConfigTemplate(config_file=config_file,
                                                        contexts=contexts)
-        logging.info('Registered config file: %s' % config_file)
+        log('Registered config file: %s' % config_file, level=INFO)
 
     def _get_tmpl_env(self):
         if not self._tmpl_env:
             loader = get_loader(self.templates_dir, self.openstack_release)
-            self._tmpl_env = jinja2.Environment(loader=loader)
+            self._tmpl_env = Environment(loader=loader)
+
+    def _get_template(self, template):
+        self._get_tmpl_env()
+        template = self._tmpl_env.get_template(template)
+        log('Loaded template from %s' % template.filename, level=INFO)
+        return template
 
     def render(self, config_file):
         if config_file not in self.templates:
-            logging.error('Config not registered: %s' % config_file)
+            log('Config not registered: %s' % config_file, level=ERROR)
             raise OSConfigException
         ctxt = self.templates[config_file].context()
         _tmpl = os.path.basename(config_file)
-        logging.info('Rendering from template: %s' % _tmpl)
-        self._get_tmpl_env()
-        _tmpl = self._tmpl_env.get_template(_tmpl)
-        logging.info('Loaded template from %s' % _tmpl.filename)
-        return _tmpl.render(ctxt)
+        log('Rendering from template: %s' % _tmpl, level=INFO)
+        template = self._get_template(_tmpl)
+        return template.render(ctxt)
 
     def write(self, config_file):
+        """
+        Write a single config file, raises if config file is not registered.
+        """
         if config_file not in self.templates:
-            logging.error('Config not registered: %s' % config_file)
+            log('Config not registered: %s' % config_file, level=ERROR)
             raise OSConfigException
         with open(config_file, 'wb') as out:
             out.write(self.render(config_file))
-        logging.info('Wrote template %s.' % config_file)
+        log('Wrote template %s.' % config_file, level=INFO)
 
     def write_all(self):
+        """
+        Write out all registered config files.
+        """
         [self.write(k) for k in self.templates.iterkeys()]
 
     def set_release(self, openstack_release):
 
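
For reviewers who want to poke at the loader fallback without deploying a charm, here is a minimal sketch (not part of this diff; the scratch directory layout and helper name are invented for illustration) of the release-directory resolution that get_loader() implements:

    import os
    import tempfile

    from jinja2 import ChoiceLoader, Environment, FileSystemLoader

    # Scratch tree: a base cinder.conf plus a grizzly-specific override.
    base = tempfile.mkdtemp()
    os.mkdir(os.path.join(base, 'grizzly'))
    open(os.path.join(base, 'cinder.conf'), 'w').write('base template')
    open(os.path.join(base, 'grizzly', 'cinder.conf'), 'w').write('grizzly template')

    def loader_for(os_release, releases=('essex', 'folsom', 'grizzly', 'havana')):
        # Same search order as get_loader(): release dirs are stacked
        # newest-first on top of the base dir, stopping at the requested
        # release, so ChoiceLoader tries them most-specific first.
        loaders = [FileSystemLoader(base)]
        for rel in releases:
            rel_dir = os.path.join(base, rel)
            if os.path.isdir(rel_dir):
                loaders.insert(0, FileSystemLoader(rel_dir))
            if rel == os_release:
                break
        return ChoiceLoader(loaders)

    # havana has no override, so it picks up the most recent one (grizzly):
    print(Environment(loader=loader_for('havana')).get_template('cinder.conf').render())
    # folsom predates the override and falls back to the base template:
    print(Environment(loader=loader_for('folsom')).get_template('cinder.conf').render())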
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,273 @@
+#!/usr/bin/python
+
+# Common python helper functions used for OpenStack charms.
+
+from collections import OrderedDict
+
+import apt_pkg as apt
+import subprocess
+import os
+import sys
+
+from charmhelpers.core.hookenv import (
+    config,
+    log as juju_log,
+    charm_dir,
+)
+
+from charmhelpers.core.host import (
+    lsb_release,
+    apt_install,
+)
+
+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
+
+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
+    ('oneiric', 'diablo'),
+    ('precise', 'essex'),
+    ('quantal', 'folsom'),
+    ('raring', 'grizzly'),
+    ('saucy', 'havana'),
+])
+
+
+OPENSTACK_CODENAMES = OrderedDict([
+    ('2011.2', 'diablo'),
+    ('2012.1', 'essex'),
+    ('2012.2', 'folsom'),
+    ('2013.1', 'grizzly'),
+    ('2013.2', 'havana'),
+    ('2014.1', 'icehouse'),
+])
+
+# The ugly duckling
+SWIFT_CODENAMES = {
+    '1.4.3': 'diablo',
+    '1.4.8': 'essex',
+    '1.7.4': 'folsom',
+    '1.7.6': 'grizzly',
+    '1.7.7': 'grizzly',
+    '1.8.0': 'grizzly',
+    '1.9.0': 'havana',
+    '1.9.1': 'havana',
+}
+
+
+def error_out(msg):
+    juju_log("FATAL ERROR: %s" % msg, level='ERROR')
+    sys.exit(1)
+
+
+def get_os_codename_install_source(src):
+    '''Derive OpenStack release codename from a given installation source.'''
+    ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
+    rel = ''
+    if src == 'distro':
+        try:
+            rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
+        except KeyError:
+            e = 'Could not derive openstack release for '\
+                'this Ubuntu release: %s' % ubuntu_rel
+            error_out(e)
+        return rel
+
+    if src.startswith('cloud:'):
+        ca_rel = src.split(':')[1]
+        ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
+        return ca_rel
+
+    # Best guess match based on deb string provided
+    if src.startswith('deb') or src.startswith('ppa'):
+        for k, v in OPENSTACK_CODENAMES.iteritems():
+            if v in src:
+                return v
+
+
+def get_os_version_install_source(src):
+    codename = get_os_codename_install_source(src)
+    return get_os_version_codename(codename)
+
+
+def get_os_codename_version(vers):
+    '''Determine OpenStack codename from version number.'''
+    try:
+        return OPENSTACK_CODENAMES[vers]
+    except KeyError:
+        e = 'Could not determine OpenStack codename for version %s' % vers
+        error_out(e)
+
+
+def get_os_version_codename(codename):
+    '''Determine OpenStack version number from codename.'''
+    for k, v in OPENSTACK_CODENAMES.iteritems():
+        if v == codename:
+            return k
+    e = 'Could not derive OpenStack version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_os_codename_package(package, fatal=True):
+    '''Derive OpenStack release codename from an installed package.'''
+    apt.init()
+    cache = apt.Cache()
+
+    try:
+        pkg = cache[package]
+    except:
+        if not fatal:
+            return None
+        # the package is unknown to the current apt cache.
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        if not fatal:
+            return None
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
+
+    try:
+        if 'swift' in pkg.name:
+            vers = vers[:5]
+            return SWIFT_CODENAMES[vers]
+        else:
+            vers = vers[:6]
+            return OPENSTACK_CODENAMES[vers]
+    except KeyError:
+        e = 'Could not determine OpenStack codename for version %s' % vers
+        error_out(e)
+
+
+def get_os_version_package(pkg, fatal=True):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg, fatal=fatal)
+
+    if not codename:
+        return None
+
+    if 'swift' in pkg:
+        vers_map = SWIFT_CODENAMES
+    else:
+        vers_map = OPENSTACK_CODENAMES
+
+    for version, cname in vers_map.iteritems():
+        if cname == codename:
+            return version
+    #e = "Could not determine OpenStack version for package: %s" % pkg
+    #error_out(e)
+
+
+def import_key(keyid):
+    cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
+          "--recv-keys %s" % keyid
+    try:
+        subprocess.check_call(cmd.split(' '))
+    except subprocess.CalledProcessError:
+        error_out("Error importing repo key %s" % keyid)
+
+
+def configure_installation_source(rel):
+    '''Configure apt installation source.'''
+    if rel == 'distro':
+        return
+    elif rel[:4] == "ppa:":
+        src = rel
+        subprocess.check_call(["add-apt-repository", "-y", src])
+    elif rel[:3] == "deb":
+        l = len(rel.split('|'))
+        if l == 2:
+            src, key = rel.split('|')
+            juju_log("Importing PPA key from keyserver for %s" % src)
+            import_key(key)
+        elif l == 1:
+            src = rel
+        with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
+            f.write(src)
+    elif rel[:6] == 'cloud:':
+        ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
+        rel = rel.split(':')[1]
+        u_rel = rel.split('-')[0]
+        ca_rel = rel.split('-')[1]
+
+        if u_rel != ubuntu_rel:
+            e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
+                'version (%s)' % (ca_rel, ubuntu_rel)
+            error_out(e)
+
+        if 'staging' in ca_rel:
+            # staging is just a regular PPA.
+            os_rel = ca_rel.split('/')[0]
+            ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
+            cmd = 'add-apt-repository -y %s' % ppa
+            subprocess.check_call(cmd.split(' '))
+            return
+
+        # map charm config options to actual archive pockets.
+        pockets = {
+            'folsom': 'precise-updates/folsom',
+            'folsom/updates': 'precise-updates/folsom',
+            'folsom/proposed': 'precise-proposed/folsom',
+            'grizzly': 'precise-updates/grizzly',
+            'grizzly/updates': 'precise-updates/grizzly',
+            'grizzly/proposed': 'precise-proposed/grizzly',
+            'havana': 'precise-updates/havana',
+            'havana/updates': 'precise-updates/havana',
+            'havana/proposed': 'precise-proposed/havana',
+        }
+
+        try:
+            pocket = pockets[ca_rel]
+        except KeyError:
+            e = 'Invalid Cloud Archive release specified: %s' % rel
+            error_out(e)
+
+        src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
+        apt_install('ubuntu-cloud-keyring', fatal=True)
+
+        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
+            f.write(src)
+    else:
+        error_out("Invalid openstack-release specified: %s" % rel)
+
+
+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    juju_rc_path = "%s/%s" % (charm_dir(), script_path)
+    if not os.path.exists(os.path.dirname(juju_rc_path)):
+        os.mkdir(os.path.dirname(juju_rc_path))
+    with open(juju_rc_path, 'wb') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+            for u, p in env_vars.iteritems() if u != "script_path"]
+
+
+def openstack_upgrade_available(package):
+    """
+    Determines if an OpenStack upgrade is available from installation
+    source, based on version of installed package.
+
+    :param package: str: Name of installed package.
+
+    :returns: bool: : Returns True if configured installation source offers
+                      a newer version of package.
+
+    """
+
+    src = config('openstack-origin')
+    cur_vers = get_os_version_package(package)
+    available_vers = get_os_version_install_source(src)
+    apt.init()
+    return apt.version_compare(available_vers, cur_vers) == 1
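
One behavioural note on openstack_upgrade_available() above: it now defers to apt_pkg's Debian version comparison instead of the StrictVersion check in the removed openstack_utils.py. A quick sketch of the semantics it relies on (hypothetical version strings; apt_pkg.version_compare is only documented to return a positive number, zero, or a negative number, so `> 0` would be the defensive form of the `== 1` test above):

    import apt_pkg

    apt_pkg.init()
    print(apt_pkg.version_compare('2013.2', '2013.1'))  # positive: newer available
    print(apt_pkg.version_compare('2013.1', '2013.1'))  # zero: same version
    print(apt_pkg.version_compare('2012.2', '2013.1'))  # negative: no upgrade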
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2013-07-05 16:17:32 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-08-14 22:57:15 +0000
@@ -197,7 +197,7 @@
     relid_cmd_line = ['relation-ids', '--format=json']
     if reltype is not None:
         relid_cmd_line.append(reltype)
-        return json.loads(subprocess.check_output(relid_cmd_line))
+        return json.loads(subprocess.check_output(relid_cmd_line)) or []
     return []
 
 
@@ -208,7 +208,7 @@
     units_cmd_line = ['relation-list', '--format=json']
     if relid is not None:
         units_cmd_line.extend(('-r', relid))
-    return json.loads(subprocess.check_output(units_cmd_line))
+    return json.loads(subprocess.check_output(units_cmd_line)) or []
 
 
 @cached
@@ -335,5 +335,6 @@
         return decorated
     return wrapper
 
+
 def charm_dir():
     return os.environ.get('CHARM_DIR')
 
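
The `or []` guards above address a concrete failure mode: with no relations established, `relation-ids --format=json` can emit `null`, and json.loads turns that into None, which then blows up callers iterating the result. A self-contained illustration:

    import json

    output = 'null'  # stand-in for subprocess.check_output(relid_cmd_line)
    for rid in json.loads(output) or []:  # iterates [] instead of raising TypeError
        print(rid)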
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2013-07-05 16:17:32 +0000
+++ hooks/charmhelpers/core/host.py 2013-08-14 22:57:15 +0000
@@ -9,12 +9,14 @@
 import os
 import pwd
 import grp
+import random
+import string
 import subprocess
 import hashlib
 
 from collections import OrderedDict
 
-from hookenv import log, execution_environment
+from hookenv import log
 
 
 def service_start(service_name):
@@ -39,6 +41,18 @@
     return subprocess.call(cmd) == 0
 
 
+def service_running(service):
+    try:
+        output = subprocess.check_output(['service', service, 'status'])
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if ("start/running" in output or "is running" in output):
+            return True
+        else:
+            return False
+
+
 def adduser(username, password=None, shell='/bin/bash', system_user=False):
     """Add a user"""
     try:
@@ -74,36 +88,33 @@
 
 def rsync(from_path, to_path, flags='-r', options=None):
     """Replicate the contents of a path"""
-    context = execution_environment()
     options = options or ['--delete', '--executability']
     cmd = ['/usr/bin/rsync', flags]
     cmd.extend(options)
-    cmd.append(from_path.format(**context))
-    cmd.append(to_path.format(**context))
+    cmd.append(from_path)
+    cmd.append(to_path)
     log(" ".join(cmd))
     return subprocess.check_output(cmd).strip()
 
 
 def symlink(source, destination):
     """Create a symbolic link"""
-    context = execution_environment()
     log("Symlinking {} as {}".format(source, destination))
     cmd = [
         'ln',
         '-sf',
-        source.format(**context),
-        destination.format(**context)
+        source,
+        destination,
     ]
     subprocess.check_call(cmd)
 
 
 def mkdir(path, owner='root', group='root', perms=0555, force=False):
     """Create a directory"""
-    context = execution_environment()
     log("Making dir {} {}:{} {:o}".format(path, owner, group,
                                           perms))
-    uid = pwd.getpwnam(owner.format(**context)).pw_uid
-    gid = grp.getgrnam(group.format(**context)).gr_gid
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
     realpath = os.path.abspath(path)
     if os.path.exists(realpath):
         if force and not os.path.isdir(realpath):
@@ -114,28 +125,15 @@
     os.chown(realpath, uid, gid)
 
 
-def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs):
+def write_file(path, content, owner='root', group='root', perms=0444):
     """Create or overwrite a file with the contents of a string"""
-    context = execution_environment()
-    context.update(kwargs)
-    log("Writing file {} {}:{} {:o}".format(path, owner, group,
-                                            perms))
-    uid = pwd.getpwnam(owner.format(**context)).pw_uid
-    gid = grp.getgrnam(group.format(**context)).gr_gid
-    with open(path.format(**context), 'w') as target:
+    log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    with open(path, 'w') as target:
         os.fchown(target.fileno(), uid, gid)
         os.fchmod(target.fileno(), perms)
-        target.write(fmtstr.format(**context))
-
-
-def render_template_file(source, destination, **kwargs):
-    """Create or overwrite a file using a template"""
-    log("Rendering template {} for {}".format(source,
-                                              destination))
-    context = execution_environment()
-    with open(source.format(**context), 'r') as template:
-        write_file(destination.format(**context), template.read(),
-                   **kwargs)
+        target.write(content)
 
 
 def filter_installed_packages(packages):
@@ -271,3 +269,15 @@
         k, v = l.split('=')
         d[k.strip()] = v.strip()
     return d
+
+
+def pwgen(length=None):
+    '''Generate a random password.'''
+    if length is None:
+        length = random.choice(range(35, 45))
+    alphanumeric_chars = [
+        l for l in (string.letters + string.digits)
+        if l not in 'l0QD1vAEIOUaeiou']
+    random_chars = [
+        random.choice(alphanumeric_chars) for _ in range(length)]
+    return(''.join(random_chars))
 
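
A hypothetical caller of the simplified helpers above (paths and ownership invented for illustration; needs root to actually run). The key API change is that write_file() now takes the final content string, since the implicit str.format(**execution_environment()) templating is gone, so callers render their own content first:

    from charmhelpers.core.host import mkdir, write_file

    mkdir('/etc/glance', owner='root', group='glance', perms=0750)
    write_file('/etc/glance/banner.txt', 'managed by the glance charm\n',
               owner='root', group='glance', perms=0640)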
=== modified file 'hooks/glance_contexts.py'
--- hooks/glance_contexts.py 2013-07-05 21:13:02 +0000
+++ hooks/glance_contexts.py 2013-08-14 22:57:15 +0000
@@ -8,7 +8,7 @@
     ApacheSSLContext as SSLContext,
 )
 
-from charmhelpers.contrib.hahelpers.cluster_utils import (
+from charmhelpers.contrib.hahelpers.cluster import (
     determine_api_port,
     determine_haproxy_port,
 )
 
=== modified file 'hooks/glance_relations.py'
--- hooks/glance_relations.py 2013-07-11 15:07:15 +0000
+++ hooks/glance_relations.py 2013-08-14 22:57:15 +0000
@@ -13,7 +13,6 @@
     PACKAGES,
     SERVICES,
     CHARM,
-    SERVICE_NAME,
     GLANCE_REGISTRY_CONF,
     GLANCE_REGISTRY_PASTE_INI,
     GLANCE_API_CONF,
@@ -22,12 +21,13 @@
     CEPH_CONF, )
 
 from charmhelpers.core.hookenv import (
-    config as charm_conf,
+    config,
     Hooks,
     log as juju_log,
     relation_get,
     relation_set,
     relation_ids,
+    service_name,
     unit_get,
     UnregisteredHookError, )
 
@@ -37,16 +37,14 @@
     apt_update,
     service_stop, )
 
-from charmhelpers.contrib.hahelpers.cluster_utils import (
+from charmhelpers.contrib.hahelpers.cluster import (
     eligible_leader,
     is_clustered, )
 
-from charmhelpers.contrib.openstack.openstack_utils import (
+from charmhelpers.contrib.openstack.utils import (
     configure_installation_source,
     get_os_codename_package,
-    get_os_codename_install_source,
-    get_os_version_codename,
-    save_script_rc,
+    openstack_upgrade_available,
     lsb_release, )
 
 from subprocess import (
@@ -58,13 +56,12 @@
 
 CONFIGS = register_configs()
 
-config = charm_conf()
 
 @hooks.hook('install')
 def install_hook():
     juju_log('Installing glance packages')
 
-    src = config['openstack-origin']
+    src = config('openstack-origin')
     if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and
        src == 'distro'):
         src = 'cloud:precise-folsom'
@@ -82,7 +79,7 @@
 
 @hooks.hook('shared-db-relation-joined')
 def db_joined():
-    relation_set(database=config['database'], username=config['database-user'],
+    relation_set(database=config('database'), username=config('database-user'),
                  hostname=unit_get('private-address'))
 
 
@@ -123,13 +120,13 @@
 
     host = unit_get('private-address')
     if is_clustered():
-        host = config["vip"]
+        host = config("vip")
 
     relation_data = {
-        'glance-api-server': "%s://%s:9292" % (scheme, host), }
+        'glance_api_server': "%s://%s:9292" % (scheme, host), }
 
-    juju_log("%s: image-service_joined: To peer glance-api-server=%s" %
-             (CHARM, relation_data['glance-api-server']))
+    juju_log("%s: image-service_joined: To peer glance_api_server=%s" %
+             (CHARM, relation_data['glance_api_server']))
 
     relation_set(relation_id=relation_id, **relation_data)
 
@@ -164,7 +161,7 @@
         juju_log('ceph relation incomplete. Peer not ready?')
         return
 
-    if not ensure_ceph_keyring(service=SERVICE_NAME):
+    if not ensure_ceph_keyring(service=service_name()):
         juju_log('Could not create ceph keyring: peer not ready?')
         return
 
@@ -172,7 +169,7 @@
     CONFIGS.write(CEPH_CONF)
 
     if eligible_leader(CLUSTER_RES):
-        ensure_ceph_pool(service=SERVICE_NAME)
+        ensure_ceph_pool(service=service_name())
 
 
 @hooks.hook('identity-service-relation-joined')
@@ -187,13 +184,13 @@
 
     host = unit_get('private-address')
     if is_clustered():
-        host = config["vip"]
+        host = config("vip")
 
     url = "%s://%s:9292" % (scheme, host)
 
     relation_data = {
         'service': 'glance',
-        'region': config['region'],
+        'region': config('region'),
         'public_url': url,
         'admin_url': url,
         'internal_url': url, }
@@ -226,25 +223,16 @@
 @hooks.hook('config-changed')
 @restart_on_change(restart_map())
 def config_changed():
-    # Determine whether or not we should do an upgrade, based on whether or not
-    # the version offered in openstack-origin is greater than what is installed
-    install_src = config["openstack-origin"]
-    available = get_os_codename_install_source(install_src)
-    installed = get_os_codename_package("glance-common")
-
-    if (available and
-        get_os_version_codename(available) >
-        get_os_version_codename(installed)):
-        juju_log('%s: Upgrading OpenStack release: %s -> %s' %
-                 (CHARM, installed, available))
+    if openstack_upgrade_available('glance-common'):
+        juju_log('Upgrading OpenStack release')
         do_openstack_upgrade(CONFIGS)
 
     configure_https()
 
-    env_vars = {'OPENSTACK_PORT_MCASTPORT': config["ha-mcastport"],
-                'OPENSTACK_SERVICE_API': "glance-api",
-                'OPENSTACK_SERVICE_REGISTRY': "glance-registry"}
-    save_script_rc(**env_vars)
+    #env_vars = {'OPENSTACK_PORT_MCASTPORT': config("ha-mcastport"),
+    #            'OPENSTACK_SERVICE_API': "glance-api",
+    #            'OPENSTACK_SERVICE_REGISTRY': "glance-registry"}
+    #save_script_rc(**env_vars)
 
 
 @hooks.hook('cluster-relation-changed')
@@ -261,11 +249,11 @@
 
 @hooks.hook('ha-relation-joined')
 def ha_relation_joined():
-    corosync_bindiface = config["ha-bindiface"]
-    corosync_mcastport = config["ha-mcastport"]
-    vip = config["vip"]
-    vip_iface = config["vip_iface"]
-    vip_cidr = config["vip_cidr"]
+    corosync_bindiface = config("ha-bindiface")
+    corosync_mcastport = config("ha-mcastport")
+    vip = config("vip")
+    vip_iface = config("vip_iface")
+    vip_cidr = config("vip_cidr")
 
     #if vip and vip_iface and vip_cidr and \
     #   corosync_bindiface and corosync_mcastport:
@@ -299,7 +287,7 @@
         juju_log('glance subordinate is not fully clustered.')
         return
     if eligible_leader(CLUSTER_RES):
-        host = config["vip"]
+        host = config("vip")
         scheme = "http"
         if 'https' in CONFIGS.complete_contexts():
             scheme = "https"
@@ -309,14 +297,14 @@
     for r_id in relation_ids('identity-service'):
         relation_set(relation_id=r_id,
                      service="glance",
-                     region=config["region"],
+                     region=config("region"),
                      public_url=url,
                      admin_url=url,
                      internal_url=url)
 
     for r_id in relation_ids('image-service'):
         relation_data = {
-            'glance-api-server': url, }
+            'glance_api_server': url, }
         relation_set(relation_id=r_id, **relation_data)
 
 
@@ -333,7 +321,6 @@
     else:
         cmd = ['a2dissite', 'openstack_https_frontend']
     check_call(cmd)
-    return
 
     for r_id in relation_ids('identity-service'):
         keystone_joined(relation_id=r_id)
 
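
The recurring config['key'] -> config('key') change above follows from dropping the module-level `config = charm_conf()` snapshot: hookenv.config() is now called per-key when the hook runs, which is also what lets the unit tests below stub it with `self.config.side_effect = self.test_config.get`. A minimal sketch of the pattern (the function name is illustrative; the config key is from this charm):

    from charmhelpers.core.hookenv import config

    def describe_origin():
        # Read at call time, so tests can patch config() before the hook runs.
        return 'installing from %s' % config('openstack-origin')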
=== modified file 'hooks/glance_utils.py'
--- hooks/glance_utils.py 2013-07-11 14:51:24 +0000
+++ hooks/glance_utils.py 2013-08-14 22:57:15 +0000
@@ -16,24 +16,23 @@
     log as juju_log,
     relation_get,
     relation_ids,
-    related_units,
-    service_name, )
+    related_units, )
 
 from charmhelpers.contrib.openstack import (
     templating,
     context, )
 
-from charmhelpers.contrib.hahelpers.cluster_utils import (
+from charmhelpers.contrib.hahelpers.cluster import (
     eligible_leader,
 )
 
-from charmhelpers.contrib.hahelpers.ceph_utils import (
+from charmhelpers.contrib.hahelpers.ceph import (
     create_keyring as ceph_create_keyring,
     create_pool as ceph_create_pool,
     keyring_path as ceph_keyring_path,
     pool_exists as ceph_pool_exists, )
 
-from charmhelpers.contrib.openstack.openstack_utils import (
+from charmhelpers.contrib.openstack.utils import (
     get_os_codename_install_source,
     get_os_codename_package,
     configure_installation_source, )
@@ -48,7 +47,6 @@
     "glance-api", "glance-registry", ]
 
 CHARM = "glance"
-SERVICE_NAME = service_name()
 
 GLANCE_REGISTRY_CONF = "/etc/glance/glance-registry.conf"
 GLANCE_REGISTRY_PASTE_INI = "/etc/glance/glance-registry-paste.ini"
 
=== added directory 'tests'
=== added file 'tests/__init__.py'
--- tests/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/__init__.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,3 @@
+import sys
+
+sys.path.append('hooks/')
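
A note on why tests/__init__.py appends hooks/ to sys.path: the hook modules import charmhelpers as a top-level package, and charmhelpers is vendored inside hooks/ in this branch, so the tests must make that copy importable. A minimal reproduction, assuming the charm's directory layout and run from the charm root:

    import sys

    sys.path.append('hooks/')               # makes hooks/charmhelpers importable
    from charmhelpers.core import hookenv   # resolves to the vendored copy
    print(hookenv.__file__)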
=== added file 'tests/test_glance_relations.py'
--- tests/test_glance_relations.py 1970-01-01 00:00:00 +0000
+++ tests/test_glance_relations.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,517 @@
+from mock import call, patch, MagicMock
+
+from tests.test_utils import CharmTestCase
+
+import hooks.glance_utils as utils
+
+_reg = utils.register_configs
+_map = utils.restart_map
+
+utils.register_configs = MagicMock()
+utils.restart_map = MagicMock()
+
+import hooks.glance_relations as relations
+
+utils.register_configs = _reg
+utils.restart_map = _map
+
+TO_PATCH = [
+    # charmhelpers.core.hookenv
+    'Hooks',
+    'config',
+    'juju_log',
+    'relation_ids',
+    'relation_set',
+    'relation_get',
+    'service_name',
+    'unit_get',
+    # charmhelpers.core.host
+    'apt_install',
+    'apt_update',
+    'restart_on_change',
+    'service_stop',
+    # charmhelpers.contrib.openstack.utils
+    'configure_installation_source',
+    'get_os_codename_package',
+    'openstack_upgrade_available',
+    # charmhelpers.contrib.hahelpers.cluster
+    'eligible_leader',
+    'is_clustered',
+    # glance_utils
+    'restart_map',
+    'register_configs',
+    'do_openstack_upgrade',
+    'migrate_database',
+    'ensure_ceph_keyring',
+    'ensure_ceph_pool',
+    # other
+    'getstatusoutput',
+    'check_call',
+]
+
+
+class GlanceRelationTests(CharmTestCase):
+    def setUp(self):
+        super(GlanceRelationTests, self).setUp(relations, TO_PATCH)
+        self.config.side_effect = self.test_config.get
+
+    def test_install_hook(self):
+        repo = 'cloud:precise-grizzly'
+        self.test_config.set('openstack-origin', repo)
+        self.service_stop.return_value = True
+        relations.install_hook()
+        self.configure_installation_source.assert_called_with(repo)
+        self.assertTrue(self.apt_update.called)
+        self.apt_install.assert_called_with(['apache2', 'glance', 'python-mysqldb',
+                                             'python-swift', 'python-keystone',
+                                             'uuid', 'haproxy'])
+
+    def test_db_joined(self):
+        self.unit_get.return_value = 'glance.foohost.com'
+        relations.db_joined()
+        self.relation_set.assert_called_with(database='glance', username='glance',
+                                             hostname='glance.foohost.com')
+        self.unit_get.assert_called_with('private-address')
+
+    @patch.object(relations, 'CONFIGS')
+    def test_db_changed_missing_relation_data(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = []
+        relations.db_changed()
+        self.juju_log.assert_called_with(
+            'shared-db relation incomplete. Peer not ready?'
+        )
+
+    def _shared_db_test(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['shared-db']
+        configs.write = MagicMock()
+        relations.db_changed()
+
+    @patch.object(relations, 'CONFIGS')
+    def test_db_changed_no_essex(self, configs):
+        self._shared_db_test(configs)
+        self.assertEquals([call('/etc/glance/glance-registry.conf'),
+                           call('/etc/glance/glance-api.conf')],
+                          configs.write.call_args_list)
+        self.juju_log.assert_called_with(
+            'Cluster leader, performing db sync'
+        )
+        self.migrate_database.assert_called_with()
+
+    @patch.object(relations, 'CONFIGS')
+    def test_db_changed_with_essex_not_setting_version_control(self, configs):
+        self.get_os_codename_package.return_value = "essex"
+        self.getstatusoutput.return_value = (0, "version")
+        self._shared_db_test(configs)
+        self.assertEquals([call('/etc/glance/glance-registry.conf')],
+                          configs.write.call_args_list)
+        self.juju_log.assert_called_with(
+            'Cluster leader, performing db sync'
+        )
+        self.migrate_database.assert_called_with()
+
+    @patch.object(relations, 'CONFIGS')
+    def test_db_changed_with_essex_setting_version_control(self, configs):
+        self.get_os_codename_package.return_value = "essex"
+        self.getstatusoutput.return_value = (1, "version")
+        self._shared_db_test(configs)
+        self.assertEquals([call('/etc/glance/glance-registry.conf')],
+                          configs.write.call_args_list)
+        self.check_call.assert_called_with(
+            ["glance-manage", "version_control", "0"]
+        )
+        self.juju_log.assert_called_with(
+            'Cluster leader, performing db sync'
+        )
+        self.migrate_database.assert_called_with()
+
+    @patch.object(relations, 'CONFIGS')
+    def test_image_service_joined_clustered_with_https(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.is_clustered.return_value = True
+        self.test_config.set('vip', '10.10.10.10')
+        relations.image_service_joined()
+        self.assertTrue(self.eligible_leader.called)
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(relation_id=None,
+                                             glance_api_server="https://10.10.10.10:9292")
+
+    @patch.object(relations, 'CONFIGS')
+    def test_image_service_joined_not_clustered_with_https(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.is_clustered.return_value = False
+        relations.image_service_joined()
+        self.assertTrue(self.eligible_leader.called)
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(relation_id=None,
+                                             glance_api_server="https://glance.foohost.com:9292")
+
+    @patch.object(relations, 'CONFIGS')
+    def test_image_service_joined_clustered_with_http(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.is_clustered.return_value = True
+        self.test_config.set('vip', '10.10.10.10')
+        relations.image_service_joined()
+        self.assertTrue(self.eligible_leader.called)
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(relation_id=None,
+                                             glance_api_server="http://10.10.10.10:9292")
+
+    @patch.object(relations, 'CONFIGS')
+    def test_image_service_joined_not_clustered_with_http(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = []
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.is_clustered.return_value = False
+        relations.image_service_joined()
+        self.assertTrue(self.eligible_leader.called)
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(relation_id=None,
+                                             glance_api_server="http://glance.foohost.com:9292")
+
+    @patch.object(relations, 'CONFIGS')
+    def test_object_store_joined_without_identity_service(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        relations.object_store_joined()
+        self.juju_log.assert_called_with(
+            'Deferring swift stora configuration until '
+            'an identity-service relation exists'
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_object_store_joined_with_identity_service_without_object_store(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['identity-service']
+        configs.write = MagicMock()
+        relations.object_store_joined()
+        self.juju_log.assert_called_with(
+            'swift relation incomplete'
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_object_store_joined_with_identity_service_with_object_store(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['identity-service', 'object-store']
+        configs.write = MagicMock()
+        relations.object_store_joined()
+        self.assertEquals([call('/etc/glance/glance-api.conf')],
+                          configs.write.call_args_list)
+
+    @patch('os.mkdir')
+    @patch('os.path.isdir')
+    def test_ceph_joined(self, isdir, mkdir):
+        isdir.return_value = False
+        relations.ceph_joined()
+        mkdir.assert_called_with('/etc/ceph')
+        self.apt_install.assert_called_with(['ceph-common', 'python-ceph'])
+
+    @patch.object(relations, 'CONFIGS')
+    def test_ceph_changed_missing_relation_data(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = []
+        configs.write = MagicMock()
+        relations.ceph_changed()
+        self.juju_log.assert_called_with(
+            'ceph relation incomplete. Peer not ready?'
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_ceph_changed_no_keyring(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['ceph']
+        configs.write = MagicMock()
+        self.ensure_ceph_keyring.return_value = False
+        relations.ceph_changed()
+        self.juju_log.assert_called_with(
+            'Could not create ceph keyring: peer not ready?'
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_ceph_changed_with_key_and_relation_data(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['ceph']
+        configs.write = MagicMock()
+        self.ensure_ceph_keyring.return_value = True
+        relations.ceph_changed()
+        self.assertEquals([call('/etc/glance/glance-api.conf'),
+                           call('/etc/ceph/ceph.conf')],
+                          configs.write.call_args_list)
+        self.ensure_ceph_pool.assert_called_with(service=self.service_name())
+
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_joined_not_clustered(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.test_config.set('region', 'FirstRegion')
+        self.is_clustered.return_value = False
+        relations.keystone_joined()
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(
+            relation_id=None,
+            service='glance',
+            region='FirstRegion',
+            public_url='http://glance.foohost.com:9292',
+            admin_url='http://glance.foohost.com:9292',
+            internal_url='http://glance.foohost.com:9292',
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_joined_clustered(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.test_config.set('region', 'FirstRegion')
+        self.test_config.set('vip', '10.10.10.10')
+        self.is_clustered.return_value = True
+        relations.keystone_joined()
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(
+            relation_id=None,
+            service='glance',
+            region='FirstRegion',
+            public_url='http://10.10.10.10:9292',
+            admin_url='http://10.10.10.10:9292',
+            internal_url='http://10.10.10.10:9292',
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_joined_not_clustered_with_https(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.test_config.set('region', 'FirstRegion')
+        self.is_clustered.return_value = False
+        relations.keystone_joined()
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(
+            relation_id=None,
+            service='glance',
+            region='FirstRegion',
+            public_url='https://glance.foohost.com:9292',
+            admin_url='https://glance.foohost.com:9292',
+            internal_url='https://glance.foohost.com:9292',
+        )
+
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_joined_clustered_with_https(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.unit_get.return_value = 'glance.foohost.com'
+        self.test_config.set('region', 'FirstRegion')
+        self.test_config.set('vip', '10.10.10.10')
+        self.is_clustered.return_value = True
+        relations.keystone_joined()
+        self.unit_get.assert_called_with('private-address')
+        self.relation_set.assert_called_with(
+            relation_id=None,
+            service='glance',
+            region='FirstRegion',
+            public_url='https://10.10.10.10:9292',
+            admin_url='https://10.10.10.10:9292',
+            internal_url='https://10.10.10.10:9292',
+        )
+
+    @patch.object(relations, 'configure_https')
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_changed_no_object_store_relation(self, configs, configure_https):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['identity-service']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = False
+        relations.keystone_changed()
+        self.assertEquals([call('/etc/glance/glance-api.conf'),
+                           call('/etc/glance/glance-registry.conf'),
+                           call('/etc/glance/glance-api-paste.ini'),
+                           call('/etc/glance/glance-registry-paste.ini')],
+                          configs.write.call_args_list)
+        self.assertTrue(configure_https.called)
+
+    @patch.object(relations, 'configure_https')
+    @patch.object(relations, 'object_store_joined')
+    @patch.object(relations, 'CONFIGS')
+    def test_keystone_changed_with_object_store_relation(self, configs, object_store_joined, configure_https):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['identity-service']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = True
+        relations.keystone_changed()
+        self.assertEquals([call('/etc/glance/glance-api.conf'),
+                           call('/etc/glance/glance-registry.conf'),
+                           call('/etc/glance/glance-api-paste.ini'),
+                           call('/etc/glance/glance-registry-paste.ini')],
+                          configs.write.call_args_list)
+        object_store_joined.assert_called_with()
+        self.assertTrue(configure_https.called)
+
+    @patch.object(relations, 'configure_https')
+    def test_config_changed_no_openstack_upgrade(self, configure_https):
+        self.openstack_upgrade_available.return_value = False
+        relations.config_changed()
+        self.assertTrue(configure_https.called)
+
+    @patch.object(relations, 'configure_https')
+    def test_config_changed_with_openstack_upgrade(self, configure_https):
+        self.openstack_upgrade_available.return_value = True
+        relations.config_changed()
+        self.juju_log.assert_called_with(
+            'Upgrading OpenStack release'
+        )
+        self.assertTrue(self.do_openstack_upgrade.called)
+        self.assertTrue(configure_https.called)
+
+    @patch.object(relations, 'CONFIGS')
+    def test_cluster_changed(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['cluster']
+        configs.write = MagicMock()
+        relations.cluster_changed()
+        self.assertEquals([call('/etc/glance/glance-api.conf'),
+                           call('/etc/haproxy/haproxy.cfg')],
+                          configs.write.call_args_list)
+
+    @patch.object(relations, 'cluster_changed')
+    def test_upgrade_charm(self, cluster_changed):
+        relations.upgrade_charm()
+        cluster_changed.assert_called_with()
+
+    def test_ha_relation_joined(self):
+        self.test_config.set('ha-bindiface', 'em0')
+        self.test_config.set('ha-mcastport', '8080')
+        self.test_config.set('vip', '10.10.10.10')
+        self.test_config.set('vip_iface', 'em1')
+        self.test_config.set('vip_cidr', '24')
+        relations.ha_relation_joined()
+        args = {
+            'corosync_bindiface': 'em0',
+            'corosync_mcastport': '8080',
+            'init_services': {'res_glance_haproxy': 'haproxy'},
+            'resources': {'res_glance_vip': 'ocf:heartbeat:IPaddr2',
+                          'res_glance_haproxy': 'lsb:haproxy'},
+            'resource_params': {'res_glance_vip': 'params ip="10.10.10.10" cidr_netmask="24" nic="em1"',
+                                'res_glance_haproxy': 'op monitor interval="5s"'},
+            'clones': {'cl_glance_haproxy': 'res_glance_haproxy'}
+        }
+        self.relation_set.assert_called_with(**args)
+
+    def test_ha_relation_changed_not_clustered(self):
+        self.relation_get.return_value = False
+        relations.ha_relation_changed()
+        self.juju_log.assert_called_with('glance subordinate is not fully clustered.')
+
+    @patch.object(relations, 'CONFIGS')
+    def test_ha_relation_changed_with_https(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.relation_get.return_value = True
+        self.test_config.set('vip', '10.10.10.10')
+        self.test_config.set('region', 'FirstRegion')
+        self.relation_ids.return_value = ['relation-made:0']
+        relations.ha_relation_changed()
+        self.juju_log.assert_called_with('glance: Cluster configured, notifying other services')
+        self.assertEquals([call('identity-service'), call('image-service')],
+                          self.relation_ids.call_args_list)
+        ex = [
+            call(service='glance',
+                 region='FirstRegion',
+                 public_url='https://10.10.10.10:9292',
+                 internal_url='https://10.10.10.10:9292',
+                 relation_id='relation-made:0',
+                 admin_url='https://10.10.10.10:9292'),
+            call(glance_api_server='https://10.10.10.10:9292',
+                 relation_id='relation-made:0')
+        ]
+        self.assertEquals(ex, self.relation_set.call_args_list)
+
+    @patch.object(relations, 'CONFIGS')
+    def test_ha_relation_changed_with_http(self, configs):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.relation_get.return_value = True
+        self.test_config.set('vip', '10.10.10.10')
+        self.test_config.set('region', 'FirstRegion')
+        self.relation_ids.return_value = ['relation-made:0']
+        relations.ha_relation_changed()
+        self.juju_log.assert_called_with('glance: Cluster configured, notifying other services')
+        self.assertEquals([call('identity-service'), call('image-service')],
+                          self.relation_ids.call_args_list)
+        ex = [
+            call(service='glance',
+                 region='FirstRegion',
+                 public_url='http://10.10.10.10:9292',
+                 internal_url='http://10.10.10.10:9292',
+                 relation_id='relation-made:0',
+                 admin_url='http://10.10.10.10:9292'),
+            call(glance_api_server='http://10.10.10.10:9292',
+                 relation_id='relation-made:0')
+        ]
+        self.assertEquals(ex, self.relation_set.call_args_list)
+
+    @patch.object(relations, 'keystone_joined')
+    @patch.object(relations, 'CONFIGS')
+    def test_configure_https_enable_with_identity_service(self, configs, keystone_joined):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = ['identity-service:0']
+        relations.configure_https()
+        cmd = ['a2ensite', 'openstack_https_frontend']
+        self.check_call.assert_called_with(cmd)
+        keystone_joined.assert_called_with(relation_id='identity-service:0')
+
+    @patch.object(relations, 'keystone_joined')
+    @patch.object(relations, 'CONFIGS')
+    def test_configure_https_disable_with_keystone_joined(self, configs, keystone_joined):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = ['identity-service:0']
+        relations.configure_https()
+        cmd = ['a2dissite', 'openstack_https_frontend']
+        self.check_call.assert_called_with(cmd)
+        keystone_joined.assert_called_with(relation_id='identity-service:0')
+
+    @patch.object(relations, 'image_service_joined')
+    @patch.object(relations, 'CONFIGS')
+    def test_configure_https_enable_with_image_service(self, configs, image_service_joined):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = ['image-service:0']
+        relations.configure_https()
+        cmd = ['a2ensite', 'openstack_https_frontend']
+        self.check_call.assert_called_with(cmd)
+        image_service_joined.assert_called_with(relation_id='image-service:0')
+
+    @patch.object(relations, 'image_service_joined')
+    @patch.object(relations, 'CONFIGS')
+    def test_configure_https_disable_with_image_service(self, configs, image_service_joined):
+        configs.complete_contexts = MagicMock()
+        configs.complete_contexts.return_value = ['']
+        configs.write = MagicMock()
+        self.relation_ids.return_value = ['image-service:0']
+        relations.configure_https()
+        cmd = ['a2dissite', 'openstack_https_frontend']
+        self.check_call.assert_called_with(cmd)
+        image_service_joined.assert_called_with(relation_id='image-service:0')
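
The MagicMock swap at the top of test_glance_relations.py exists because glance_relations.py calls register_configs() and restart_map() at import time (for CONFIGS and the restart_on_change decorator). The same pattern in isolation, with a restore step so other tests see the real helpers:

    from mock import MagicMock

    import hooks.glance_utils as utils

    # Stub out the functions the module-under-test executes at import time,
    # then restore them once the import has completed.
    _reg, _map = utils.register_configs, utils.restart_map
    utils.register_configs = MagicMock()
    utils.restart_map = MagicMock()
    try:
        import hooks.glance_relations as relations
    finally:
        utils.register_configs, utils.restart_map = _reg, _map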
=== added file 'tests/test_utils.py'
--- tests/test_utils.py 1970-01-01 00:00:00 +0000
+++ tests/test_utils.py 2013-08-14 22:57:15 +0000
@@ -0,0 +1,118 @@
+import logging
+import unittest
+import os
+import yaml
+
+from contextlib import contextmanager
+from mock import patch, MagicMock
+
+
+def load_config():
+    '''
+    Walk backwards from __file__ looking for config.yaml, load and return the
+    'options' section.
+    '''
+    config = None
+    f = __file__
+    while config is None:
+        d = os.path.dirname(f)
+        if os.path.isfile(os.path.join(d, 'config.yaml')):
+            config = os.path.join(d, 'config.yaml')
+            break
+        f = d
+
+    if not config:
+        logging.error('Could not find config.yaml in any parent directory '
+                      'of %s. ' % __file__)
+        raise Exception
+
+    return yaml.safe_load(open(config).read())['options']
+
+
+def get_default_config():
+    '''
+    Load default charm config from config.yaml return as a dict.
+    If no default is set in config.yaml, its value is None.
+    '''
+    default_config = {}
+    config = load_config()
+    for k, v in config.iteritems():
+        if 'default' in v:
+            default_config[k] = v['default']
+        else:
+            default_config[k] = None
+    return default_config
+
+
+class CharmTestCase(unittest.TestCase):
+    def setUp(self, obj, patches):
+        super(CharmTestCase, self).setUp()
+        self.patches = patches
+        self.obj = obj
+        self.test_config = TestConfig()
+        self.test_relation = TestRelation()
+        self.patch_all()
+
+    def patch(self, method):
+        _m = patch.object(self.obj, method)
+        mock = _m.start()
+        self.addCleanup(_m.stop)
+        return mock
+
+    def patch_all(self):
+        for method in self.patches:
+            setattr(self, method, self.patch(method))
+
+
+class TestConfig(object):
+    def __init__(self):
+        self.config = get_default_config()
+
+    def get(self, attr=None):
+        if not attr:
+            return self.get_all()
+        try:
+            return self.config[attr]
+        except KeyError:
+            return None
+
+    def get_all(self):
+        return self.config
+
+    def set(self, attr, value):
+        if attr not in self.config:
+            raise KeyError
+        self.config[attr] = value
+
+
+class TestRelation(object):
+    def __init__(self, relation_data={}):
+        self.relation_data = relation_data
+
+    def set(self, relation_data):
+        self.relation_data = relation_data
+
+    def get(self, attr=None, unit=None, rid=None):
+        if attr is None:
+            return self.relation_data
+        elif attr in self.relation_data:
+            return self.relation_data[attr]
+        return None
+
+
+@contextmanager
+def patch_open():
+    '''Patch open() to allow mocking both open() itself and the file that is
+    yielded.
+
+    Yields the mock for "open" and "file", respectively.'''
+    mock_open = MagicMock(spec=open)
+    mock_file = MagicMock(spec=file)
+
+    @contextmanager
+    def stub_open(*args, **kwargs):
+        mock_open(*args, **kwargs)
+        yield mock_file
+
+    with patch('__builtin__.open', stub_open):
+        yield mock_open, mock_file
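
patch_open() above composes with the with-statement on both sides; a hypothetical test body using it to assert on both the open() call and what was written to the yielded file object:

    from tests.test_utils import patch_open

    with patch_open() as (mock_open, mock_file):
        with open('/etc/glance/glance-api.conf', 'wb') as out:
            out.write('rendered config')
        mock_open.assert_called_with('/etc/glance/glance-api.conf', 'wb')
        mock_file.write.assert_called_with('rendered config')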
