Merge lp:~andreserl/charms/precise/glance/port into lp:~openstack-charmers/charms/precise/glance/python-redux

- Series: Precise Pangolin (12.04)
- Source branch: port
- Merge into: python-redux

Proposed by: Andres Rodriguez
| Status | Merged |
|---|---|
| Approved by | Adam Gandelman |
| Approved revision | 200 |
| Merged at revision | 200 |
| Proposed branch | lp:~andreserl/charms/precise/glance/port |
| Merge into | lp:~openstack-charmers/charms/precise/glance/python-redux |
Diff against target: 3628 lines (+1690/-1475), 23 files modified

- .coveragerc (+6/-0)
- Makefile (+4/-0)
- charm-helpers.yaml (+1/-1)
- hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
- hooks/charmhelpers/contrib/hahelpers/apache_utils.py (+0/-196)
- hooks/charmhelpers/contrib/hahelpers/ceph.py (+291/-0)
- hooks/charmhelpers/contrib/hahelpers/ceph_utils.py (+0/-256)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+181/-0)
- hooks/charmhelpers/contrib/hahelpers/cluster_utils.py (+0/-176)
- hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py (+0/-55)
- hooks/charmhelpers/contrib/hahelpers/utils.py (+0/-333)
- hooks/charmhelpers/contrib/openstack/context.py (+29/-5)
- hooks/charmhelpers/contrib/openstack/openstack_utils.py (+0/-270)
- hooks/charmhelpers/contrib/openstack/templating.py (+133/-103)
- hooks/charmhelpers/contrib/openstack/utils.py (+273/-0)
- hooks/charmhelpers/core/hookenv.py (+3/-2)
- hooks/charmhelpers/core/host.py (+39/-29)
- hooks/glance_contexts.py (+1/-1)
- hooks/glance_relations.py (+29/-42)
- hooks/glance_utils.py (+4/-6)
- tests/__init__.py (+3/-0)
- tests/test_glance_relations.py (+517/-0)
- tests/test_utils.py (+118/-0)
To merge this branch: bzr merge lp:~andreserl/charms/precise/glance/port

Related bugs: none listed
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Andres Rodriguez (community) | | | Abstain |
| Adam Gandelman (community) | | | Needs Fixing |

Review via email: mp+180224@code.launchpad.net
Commit message
Description of the change
197. By Andres Rodriguez: fix lint
198. By Andres Rodriguez: use latest charmhelpers and make modifications accordingly
199. By Andres Rodriguez: Add coveragerc
Revision history for this message
Andres Rodriguez (andreserl) wrote:
Adam,
Addressed:
1. sync charm-helpers and adapt to correctly import.
2. lint cleanup done
3. Add coveragerc
4. Is in my TODO.
review: Approve
Revision history for this message
Andres Rodriguez (andreserl):
review: Abstain
200. By Andres Rodriguez: really add coveragerc
Preview Diff
1 | === added file '.coveragerc' |
2 | --- .coveragerc 1970-01-01 00:00:00 +0000 |
3 | +++ .coveragerc 2013-08-14 22:57:15 +0000 |
4 | @@ -0,0 +1,6 @@ |
5 | +[report] |
6 | +# Regexes for lines to exclude from consideration |
7 | +exclude_lines = |
8 | + if __name__ == .__main__.: |
9 | +include= |
10 | + hooks/glance_* |
11 | |
12 | === modified file 'Makefile' |
13 | --- Makefile 2013-07-11 14:17:30 +0000 |
14 | +++ Makefile 2013-08-14 22:57:15 +0000 |
15 | @@ -6,3 +6,7 @@ |
16 | |
17 | sync: |
18 | @charm-helper-sync -c charm-helpers-sync.yaml |
19 | + |
20 | +test: |
21 | + #@nosetests -svd tests/ |
22 | + @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage tests |
23 | |
24 | === modified file 'charm-helpers.yaml' |
25 | --- charm-helpers.yaml 2013-07-05 19:12:08 +0000 |
26 | +++ charm-helpers.yaml 2013-08-14 22:57:15 +0000 |
27 | @@ -1,4 +1,4 @@ |
28 | -branch: lp:~openstack-charmers/charm-tools/pyrewrite-helpers |
29 | +branch: lp:charm-helpers |
30 | destination: hooks/charmhelpers |
31 | include: |
32 | - core |
33 | |
34 | === added file 'hooks/__init__.py' |
35 | === added file 'hooks/charmhelpers/contrib/hahelpers/apache.py' |
36 | --- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000 |
37 | +++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-08-14 22:57:15 +0000 |
38 | @@ -0,0 +1,58 @@ |
39 | +# |
40 | +# Copyright 2012 Canonical Ltd. |
41 | +# |
42 | +# This file is sourced from lp:openstack-charm-helpers |
43 | +# |
44 | +# Authors: |
45 | +# James Page <james.page@ubuntu.com> |
46 | +# Adam Gandelman <adamg@ubuntu.com> |
47 | +# |
48 | + |
49 | +import subprocess |
50 | + |
51 | +from charmhelpers.core.hookenv import ( |
52 | + config as config_get, |
53 | + relation_get, |
54 | + relation_ids, |
55 | + related_units as relation_list, |
56 | + log, |
57 | + INFO, |
58 | +) |
59 | + |
60 | + |
61 | +def get_cert(): |
62 | + cert = config_get('ssl_cert') |
63 | + key = config_get('ssl_key') |
64 | + if not (cert and key): |
65 | + log("Inspecting identity-service relations for SSL certificate.", |
66 | + level=INFO) |
67 | + cert = key = None |
68 | + for r_id in relation_ids('identity-service'): |
69 | + for unit in relation_list(r_id): |
70 | + if not cert: |
71 | + cert = relation_get('ssl_cert', |
72 | + rid=r_id, unit=unit) |
73 | + if not key: |
74 | + key = relation_get('ssl_key', |
75 | + rid=r_id, unit=unit) |
76 | + return (cert, key) |
77 | + |
78 | + |
79 | +def get_ca_cert(): |
80 | + ca_cert = None |
81 | + log("Inspecting identity-service relations for CA SSL certificate.", |
82 | + level=INFO) |
83 | + for r_id in relation_ids('identity-service'): |
84 | + for unit in relation_list(r_id): |
85 | + if not ca_cert: |
86 | + ca_cert = relation_get('ca_cert', |
87 | + rid=r_id, unit=unit) |
88 | + return ca_cert |
89 | + |
90 | + |
91 | +def install_ca_cert(ca_cert): |
92 | + if ca_cert: |
93 | + with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', |
94 | + 'w') as crt: |
95 | + crt.write(ca_cert) |
96 | + subprocess.check_call(['update-ca-certificates', '--fresh']) |
97 | |
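The new apache.py prefers SSL material from charm config and falls back to scanning identity-service relation data. A minimal sketch of that config-first, relation-fallback pattern, with the hookenv calls stubbed out as plain dicts (the `config` and `relations` structures here are illustrative, not part of the charm):

```python
# Sketch of the certificate lookup in apache.py's get_cert(). The real code
# calls charmhelpers.core.hookenv (config, relation_ids, relation_get);
# here the hook environment is replaced with dicts so the flow can run
# anywhere.

config = {'ssl_cert': None, 'ssl_key': None}  # juju config, empty here
relations = {  # identity-service relation data, keyed by (rid, unit)
    ('identity-service:0', 'keystone/0'): {'ssl_cert': 'CERT',
                                           'ssl_key': 'KEY'},
}

def get_cert(config, relations):
    """Prefer cert/key from charm config; else scan relation data."""
    cert, key = config.get('ssl_cert'), config.get('ssl_key')
    if not (cert and key):
        cert = key = None
        for (rid, unit), data in relations.items():
            cert = cert or data.get('ssl_cert')
            key = key or data.get('ssl_key')
    return cert, key

print(get_cert(config, relations))  # falls back to the relation data
```

The same scan-all-relations shape appears again in get_ca_cert() and in cluster.py's https().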
98 | === removed file 'hooks/charmhelpers/contrib/hahelpers/apache_utils.py' |
99 | --- hooks/charmhelpers/contrib/hahelpers/apache_utils.py 2013-06-24 16:05:57 +0000 |
100 | +++ hooks/charmhelpers/contrib/hahelpers/apache_utils.py 1970-01-01 00:00:00 +0000 |
101 | @@ -1,196 +0,0 @@ |
102 | -# |
103 | -# Copyright 2012 Canonical Ltd. |
104 | -# |
105 | -# This file is sourced from lp:openstack-charm-helpers |
106 | -# |
107 | -# Authors: |
108 | -# James Page <james.page@ubuntu.com> |
109 | -# Adam Gandelman <adamg@ubuntu.com> |
110 | -# |
111 | - |
112 | -from utils import ( |
113 | - relation_ids, |
114 | - relation_list, |
115 | - relation_get, |
116 | - render_template, |
117 | - juju_log, |
118 | - config_get, |
119 | - install, |
120 | - get_host_ip, |
121 | - restart |
122 | - ) |
123 | -from cluster_utils import https |
124 | - |
125 | -import os |
126 | -import subprocess |
127 | -from base64 import b64decode |
128 | - |
129 | -APACHE_SITE_DIR = "/etc/apache2/sites-available" |
130 | -SITE_TEMPLATE = "apache2_site.tmpl" |
131 | -RELOAD_CHECK = "To activate the new configuration" |
132 | - |
133 | - |
134 | -def get_cert(): |
135 | - cert = config_get('ssl_cert') |
136 | - key = config_get('ssl_key') |
137 | - if not (cert and key): |
138 | - juju_log('INFO', |
139 | - "Inspecting identity-service relations for SSL certificate.") |
140 | - cert = key = None |
141 | - for r_id in relation_ids('identity-service'): |
142 | - for unit in relation_list(r_id): |
143 | - if not cert: |
144 | - cert = relation_get('ssl_cert', |
145 | - rid=r_id, unit=unit) |
146 | - if not key: |
147 | - key = relation_get('ssl_key', |
148 | - rid=r_id, unit=unit) |
149 | - return (cert, key) |
150 | - |
151 | - |
152 | -def get_ca_cert(): |
153 | - ca_cert = None |
154 | - juju_log('INFO', |
155 | - "Inspecting identity-service relations for CA SSL certificate.") |
156 | - for r_id in relation_ids('identity-service'): |
157 | - for unit in relation_list(r_id): |
158 | - if not ca_cert: |
159 | - ca_cert = relation_get('ca_cert', |
160 | - rid=r_id, unit=unit) |
161 | - return ca_cert |
162 | - |
163 | - |
164 | -def install_ca_cert(ca_cert): |
165 | - if ca_cert: |
166 | - with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', |
167 | - 'w') as crt: |
168 | - crt.write(ca_cert) |
169 | - subprocess.check_call(['update-ca-certificates', '--fresh']) |
170 | - |
171 | - |
172 | -def enable_https(port_maps, namespace, cert, key, ca_cert=None): |
173 | - ''' |
174 | - For a given number of port mappings, configures apache2 |
175 | - HTTPs local reverse proxying using certficates and keys provided in |
176 | - either configuration data (preferred) or relation data. Assumes ports |
177 | - are not in use (calling charm should ensure that). |
178 | - |
179 | - port_maps: dict: external to internal port mappings |
180 | - namespace: str: name of charm |
181 | - ''' |
182 | - def _write_if_changed(path, new_content): |
183 | - content = None |
184 | - if os.path.exists(path): |
185 | - with open(path, 'r') as f: |
186 | - content = f.read().strip() |
187 | - if content != new_content: |
188 | - with open(path, 'w') as f: |
189 | - f.write(new_content) |
190 | - return True |
191 | - else: |
192 | - return False |
193 | - |
194 | - juju_log('INFO', "Enabling HTTPS for port mappings: {}".format(port_maps)) |
195 | - http_restart = False |
196 | - |
197 | - if cert: |
198 | - cert = b64decode(cert) |
199 | - if key: |
200 | - key = b64decode(key) |
201 | - if ca_cert: |
202 | - ca_cert = b64decode(ca_cert) |
203 | - |
204 | - if not cert and not key: |
205 | - juju_log('ERROR', |
206 | - "Expected but could not find SSL certificate data, not " |
207 | - "configuring HTTPS!") |
208 | - return False |
209 | - |
210 | - install('apache2') |
211 | - if RELOAD_CHECK in subprocess.check_output(['a2enmod', 'ssl', |
212 | - 'proxy', 'proxy_http']): |
213 | - http_restart = True |
214 | - |
215 | - ssl_dir = os.path.join('/etc/apache2/ssl', namespace) |
216 | - if not os.path.exists(ssl_dir): |
217 | - os.makedirs(ssl_dir) |
218 | - |
219 | - if (_write_if_changed(os.path.join(ssl_dir, 'cert'), cert)): |
220 | - http_restart = True |
221 | - if (_write_if_changed(os.path.join(ssl_dir, 'key'), key)): |
222 | - http_restart = True |
223 | - os.chmod(os.path.join(ssl_dir, 'key'), 0600) |
224 | - |
225 | - install_ca_cert(ca_cert) |
226 | - |
227 | - sites_dir = '/etc/apache2/sites-available' |
228 | - for ext_port, int_port in port_maps.items(): |
229 | - juju_log('INFO', |
230 | - 'Creating apache2 reverse proxy vhost' |
231 | - ' for {}:{}'.format(ext_port, |
232 | - int_port)) |
233 | - site = "{}_{}".format(namespace, ext_port) |
234 | - site_path = os.path.join(sites_dir, site) |
235 | - with open(site_path, 'w') as fsite: |
236 | - context = { |
237 | - "ext": ext_port, |
238 | - "int": int_port, |
239 | - "namespace": namespace, |
240 | - "private_address": get_host_ip() |
241 | - } |
242 | - fsite.write(render_template(SITE_TEMPLATE, |
243 | - context)) |
244 | - |
245 | - if RELOAD_CHECK in subprocess.check_output(['a2ensite', site]): |
246 | - http_restart = True |
247 | - |
248 | - if http_restart: |
249 | - restart('apache2') |
250 | - |
251 | - return True |
252 | - |
253 | - |
254 | -def disable_https(port_maps, namespace): |
255 | - ''' |
256 | - Ensure HTTPS reverse proxying is disables for given port mappings |
257 | - |
258 | - port_maps: dict: of ext -> int port mappings |
259 | - namespace: str: name of chamr |
260 | - ''' |
261 | - juju_log('INFO', 'Ensuring HTTPS disabled for {}'.format(port_maps)) |
262 | - |
263 | - if (not os.path.exists('/etc/apache2') or |
264 | - not os.path.exists(os.path.join('/etc/apache2/ssl', namespace))): |
265 | - return |
266 | - |
267 | - http_restart = False |
268 | - for ext_port in port_maps.keys(): |
269 | - if os.path.exists(os.path.join(APACHE_SITE_DIR, |
270 | - "{}_{}".format(namespace, |
271 | - ext_port))): |
272 | - juju_log('INFO', |
273 | - "Disabling HTTPS reverse proxy" |
274 | - " for {} {}.".format(namespace, |
275 | - ext_port)) |
276 | - if (RELOAD_CHECK in |
277 | - subprocess.check_output(['a2dissite', |
278 | - '{}_{}'.format(namespace, |
279 | - ext_port)])): |
280 | - http_restart = True |
281 | - |
282 | - if http_restart: |
283 | - restart(['apache2']) |
284 | - |
285 | - |
286 | -def setup_https(port_maps, namespace, cert, key, ca_cert=None): |
287 | - ''' |
288 | - Ensures HTTPS is either enabled or disabled for given port |
289 | - mapping. |
290 | - |
291 | - port_maps: dict: of ext -> int port mappings |
292 | - namespace: str: name of charm |
293 | - ''' |
294 | - if not https: |
295 | - disable_https(port_maps, namespace) |
296 | - else: |
297 | - enable_https(port_maps, namespace, cert, key, ca_cert) |
298 | |
299 | === added file 'hooks/charmhelpers/contrib/hahelpers/ceph.py' |
300 | --- hooks/charmhelpers/contrib/hahelpers/ceph.py 1970-01-01 00:00:00 +0000 |
301 | +++ hooks/charmhelpers/contrib/hahelpers/ceph.py 2013-08-14 22:57:15 +0000 |
302 | @@ -0,0 +1,291 @@ |
303 | +# |
304 | +# Copyright 2012 Canonical Ltd. |
305 | +# |
306 | +# This file is sourced from lp:openstack-charm-helpers |
307 | +# |
308 | +# Authors: |
309 | +# James Page <james.page@ubuntu.com> |
310 | +# Adam Gandelman <adamg@ubuntu.com> |
311 | +# |
312 | + |
313 | +import commands |
314 | +import os |
315 | +import shutil |
316 | +import time |
317 | + |
318 | +from subprocess import ( |
319 | + check_call, |
320 | + check_output, |
321 | + CalledProcessError |
322 | +) |
323 | + |
324 | +from charmhelpers.core.hookenv import ( |
325 | + relation_get, |
326 | + relation_ids, |
327 | + related_units, |
328 | + log, |
329 | + INFO, |
330 | + ERROR |
331 | +) |
332 | + |
333 | +from charmhelpers.core.host import ( |
334 | + apt_install, |
335 | + mount, |
336 | + mounts, |
337 | + service_start, |
338 | + service_stop, |
339 | + umount, |
340 | +) |
341 | + |
342 | +KEYRING = '/etc/ceph/ceph.client.%s.keyring' |
343 | +KEYFILE = '/etc/ceph/ceph.client.%s.key' |
344 | + |
345 | +CEPH_CONF = """[global] |
346 | + auth supported = %(auth)s |
347 | + keyring = %(keyring)s |
348 | + mon host = %(mon_hosts)s |
349 | +""" |
350 | + |
351 | + |
352 | +def running(service): |
353 | + # this local util can be dropped as soon the following branch lands |
354 | + # in lp:charm-helpers |
355 | + # https://code.launchpad.net/~gandelman-a/charm-helpers/service_running/ |
356 | + try: |
357 | + output = check_output(['service', service, 'status']) |
358 | + except CalledProcessError: |
359 | + return False |
360 | + else: |
361 | + if ("start/running" in output or "is running" in output): |
362 | + return True |
363 | + else: |
364 | + return False |
365 | + |
366 | + |
367 | +def install(): |
368 | + ceph_dir = "/etc/ceph" |
369 | + if not os.path.isdir(ceph_dir): |
370 | + os.mkdir(ceph_dir) |
371 | + apt_install('ceph-common', fatal=True) |
372 | + |
373 | + |
374 | +def rbd_exists(service, pool, rbd_img): |
375 | + (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' % |
376 | + (service, pool)) |
377 | + return rbd_img in out |
378 | + |
379 | + |
380 | +def create_rbd_image(service, pool, image, sizemb): |
381 | + cmd = [ |
382 | + 'rbd', |
383 | + 'create', |
384 | + image, |
385 | + '--size', |
386 | + str(sizemb), |
387 | + '--id', |
388 | + service, |
389 | + '--pool', |
390 | + pool |
391 | + ] |
392 | + check_call(cmd) |
393 | + |
394 | + |
395 | +def pool_exists(service, name): |
396 | + (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) |
397 | + return name in out |
398 | + |
399 | + |
400 | +def create_pool(service, name): |
401 | + cmd = [ |
402 | + 'rados', |
403 | + '--id', |
404 | + service, |
405 | + 'mkpool', |
406 | + name |
407 | + ] |
408 | + check_call(cmd) |
409 | + |
410 | + |
411 | +def keyfile_path(service): |
412 | + return KEYFILE % service |
413 | + |
414 | + |
415 | +def keyring_path(service): |
416 | + return KEYRING % service |
417 | + |
418 | + |
419 | +def create_keyring(service, key): |
420 | + keyring = keyring_path(service) |
421 | + if os.path.exists(keyring): |
422 | + log('ceph: Keyring exists at %s.' % keyring, level=INFO) |
423 | + cmd = [ |
424 | + 'ceph-authtool', |
425 | + keyring, |
426 | + '--create-keyring', |
427 | + '--name=client.%s' % service, |
428 | + '--add-key=%s' % key |
429 | + ] |
430 | + check_call(cmd) |
431 | + log('ceph: Created new ring at %s.' % keyring, level=INFO) |
432 | + |
433 | + |
434 | +def create_key_file(service, key): |
435 | + # create a file containing the key |
436 | + keyfile = keyfile_path(service) |
437 | + if os.path.exists(keyfile): |
438 | + log('ceph: Keyfile exists at %s.' % keyfile, level=INFO) |
439 | + fd = open(keyfile, 'w') |
440 | + fd.write(key) |
441 | + fd.close() |
442 | + log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) |
443 | + |
444 | + |
445 | +def get_ceph_nodes(): |
446 | + hosts = [] |
447 | + for r_id in relation_ids('ceph'): |
448 | + for unit in related_units(r_id): |
449 | + hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
450 | + return hosts |
451 | + |
452 | + |
453 | +def configure(service, key, auth): |
454 | + create_keyring(service, key) |
455 | + create_key_file(service, key) |
456 | + hosts = get_ceph_nodes() |
457 | + mon_hosts = ",".join(map(str, hosts)) |
458 | + keyring = keyring_path(service) |
459 | + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
460 | + ceph_conf.write(CEPH_CONF % locals()) |
461 | + modprobe_kernel_module('rbd') |
462 | + |
463 | + |
464 | +def image_mapped(image_name): |
465 | + (rc, out) = commands.getstatusoutput('rbd showmapped') |
466 | + return image_name in out |
467 | + |
468 | + |
469 | +def map_block_storage(service, pool, image): |
470 | + cmd = [ |
471 | + 'rbd', |
472 | + 'map', |
473 | + '%s/%s' % (pool, image), |
474 | + '--user', |
475 | + service, |
476 | + '--secret', |
477 | + keyfile_path(service), |
478 | + ] |
479 | + check_call(cmd) |
480 | + |
481 | + |
482 | +def filesystem_mounted(fs): |
483 | + return fs in [f for m, f in mounts()] |
484 | + |
485 | + |
486 | +def make_filesystem(blk_device, fstype='ext4', timeout=10): |
487 | + count = 0 |
488 | + e_noent = os.errno.ENOENT |
489 | + while not os.path.exists(blk_device): |
490 | + if count >= timeout: |
491 | + log('ceph: gave up waiting on block device %s' % blk_device, |
492 | + level=ERROR) |
493 | + raise IOError(e_noent, os.strerror(e_noent), blk_device) |
494 | + log('ceph: waiting for block device %s to appear' % blk_device, |
495 | + level=INFO) |
496 | + count += 1 |
497 | + time.sleep(1) |
498 | + else: |
499 | + log('ceph: Formatting block device %s as filesystem %s.' % |
500 | + (blk_device, fstype), level=INFO) |
501 | + check_call(['mkfs', '-t', fstype, blk_device]) |
502 | + |
503 | + |
504 | +def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): |
505 | + # mount block device into /mnt |
506 | + mount(blk_device, '/mnt') |
507 | + |
508 | + # copy data to /mnt |
509 | + try: |
510 | + copy_files(data_src_dst, '/mnt') |
511 | + except: |
512 | + pass |
513 | + |
514 | + # umount block device |
515 | + umount('/mnt') |
516 | + |
517 | + _dir = os.stat(data_src_dst) |
518 | + uid = _dir.st_uid |
519 | + gid = _dir.st_gid |
520 | + |
521 | + # re-mount where the data should originally be |
522 | + mount(blk_device, data_src_dst, persist=True) |
523 | + |
524 | + # ensure original ownership of new mount. |
525 | + cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] |
526 | + check_call(cmd) |
527 | + |
528 | + |
529 | +# TODO: re-use |
530 | +def modprobe_kernel_module(module): |
531 | + log('ceph: Loading kernel module', level=INFO) |
532 | + cmd = ['modprobe', module] |
533 | + check_call(cmd) |
534 | + cmd = 'echo %s >> /etc/modules' % module |
535 | + check_call(cmd, shell=True) |
536 | + |
537 | + |
538 | +def copy_files(src, dst, symlinks=False, ignore=None): |
539 | + for item in os.listdir(src): |
540 | + s = os.path.join(src, item) |
541 | + d = os.path.join(dst, item) |
542 | + if os.path.isdir(s): |
543 | + shutil.copytree(s, d, symlinks, ignore) |
544 | + else: |
545 | + shutil.copy2(s, d) |
546 | + |
547 | + |
548 | +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
549 | + blk_device, fstype, system_services=[]): |
550 | + """ |
551 | + To be called from the current cluster leader. |
552 | + Ensures given pool and RBD image exists, is mapped to a block device, |
553 | + and the device is formatted and mounted at the given mount_point. |
554 | + |
555 | + If formatting a device for the first time, data existing at mount_point |
556 | + will be migrated to the RBD device before being remounted. |
557 | + |
558 | + All services listed in system_services will be stopped prior to data |
559 | + migration and restarted when complete. |
560 | + """ |
561 | + # Ensure pool, RBD image, RBD mappings are in place. |
562 | + if not pool_exists(service, pool): |
563 | + log('ceph: Creating new pool %s.' % pool, level=INFO) |
564 | + create_pool(service, pool) |
565 | + |
566 | + if not rbd_exists(service, pool, rbd_img): |
567 | + log('ceph: Creating RBD image (%s).' % rbd_img, level=INFO) |
568 | + create_rbd_image(service, pool, rbd_img, sizemb) |
569 | + |
570 | + if not image_mapped(rbd_img): |
571 | + log('ceph: Mapping RBD Image as a Block Device.', level=INFO) |
572 | + map_block_storage(service, pool, rbd_img) |
573 | + |
574 | + # make file system |
575 | + # TODO: What happens if for whatever reason this is run again and |
576 | + # the data is already in the rbd device and/or is mounted?? |
577 | + # When it is mounted already, it will fail to make the fs |
578 | + # XXX: This is really sketchy! Need to at least add an fstab entry |
579 | + # otherwise this hook will blow away existing data if its executed |
580 | + # after a reboot. |
581 | + if not filesystem_mounted(mount_point): |
582 | + make_filesystem(blk_device, fstype) |
583 | + |
584 | + for svc in system_services: |
585 | + if running(svc): |
586 | + log('Stopping services %s prior to migrating data.' % svc, |
587 | + level=INFO) |
588 | + service_stop(svc) |
589 | + |
590 | + place_data_on_ceph(service, blk_device, mount_point, fstype) |
591 | + |
592 | + for svc in system_services: |
593 | + service_start(svc) |
594 | |
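make_filesystem() in the new ceph.py polls until the block device appears before running mkfs. One detail worth noting: the diff spells the errno constant `os.errno.ENOENT`, which only resolves on Python 2 as a side effect of `os` importing `errno`; the portable spelling is `errno.ENOENT`. A sketch of the wait loop using the portable form, with `sleep` injectable so it can be exercised without a real device:

```python
import errno
import os
import time

def wait_for_block_device(blk_device, timeout=10, sleep=time.sleep):
    """Poll until blk_device exists, as make_filesystem() does before mkfs.

    Raises IOError(ENOENT) after `timeout` polls. `sleep` is injectable
    purely so this sketch is testable; the charm helper hard-codes
    time.sleep(1).
    """
    count = 0
    while not os.path.exists(blk_device):
        if count >= timeout:
            raise IOError(errno.ENOENT, os.strerror(errno.ENOENT),
                          blk_device)
        count += 1
        sleep(1)
```

In the charm helper the loop's `else` clause then formats the device with `mkfs -t <fstype>`.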
595 | === removed file 'hooks/charmhelpers/contrib/hahelpers/ceph_utils.py' |
596 | --- hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 2013-06-24 16:05:57 +0000 |
597 | +++ hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 1970-01-01 00:00:00 +0000 |
598 | @@ -1,256 +0,0 @@ |
599 | -# |
600 | -# Copyright 2012 Canonical Ltd. |
601 | -# |
602 | -# This file is sourced from lp:openstack-charm-helpers |
603 | -# |
604 | -# Authors: |
605 | -# James Page <james.page@ubuntu.com> |
606 | -# Adam Gandelman <adamg@ubuntu.com> |
607 | -# |
608 | - |
609 | -import commands |
610 | -import subprocess |
611 | -import os |
612 | -import shutil |
613 | -import utils |
614 | - |
615 | -KEYRING = '/etc/ceph/ceph.client.%s.keyring' |
616 | -KEYFILE = '/etc/ceph/ceph.client.%s.key' |
617 | - |
618 | -CEPH_CONF = """[global] |
619 | - auth supported = %(auth)s |
620 | - keyring = %(keyring)s |
621 | - mon host = %(mon_hosts)s |
622 | -""" |
623 | - |
624 | - |
625 | -def execute(cmd): |
626 | - subprocess.check_call(cmd) |
627 | - |
628 | - |
629 | -def execute_shell(cmd): |
630 | - subprocess.check_call(cmd, shell=True) |
631 | - |
632 | - |
633 | -def install(): |
634 | - ceph_dir = "/etc/ceph" |
635 | - if not os.path.isdir(ceph_dir): |
636 | - os.mkdir(ceph_dir) |
637 | - utils.install('ceph-common') |
638 | - |
639 | - |
640 | -def rbd_exists(service, pool, rbd_img): |
641 | - (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\ |
642 | - (service, pool)) |
643 | - return rbd_img in out |
644 | - |
645 | - |
646 | -def create_rbd_image(service, pool, image, sizemb): |
647 | - cmd = [ |
648 | - 'rbd', |
649 | - 'create', |
650 | - image, |
651 | - '--size', |
652 | - str(sizemb), |
653 | - '--id', |
654 | - service, |
655 | - '--pool', |
656 | - pool |
657 | - ] |
658 | - execute(cmd) |
659 | - |
660 | - |
661 | -def pool_exists(service, name): |
662 | - (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) |
663 | - return name in out |
664 | - |
665 | - |
666 | -def create_pool(service, name): |
667 | - cmd = [ |
668 | - 'rados', |
669 | - '--id', |
670 | - service, |
671 | - 'mkpool', |
672 | - name |
673 | - ] |
674 | - execute(cmd) |
675 | - |
676 | - |
677 | -def keyfile_path(service): |
678 | - return KEYFILE % service |
679 | - |
680 | - |
681 | -def keyring_path(service): |
682 | - return KEYRING % service |
683 | - |
684 | - |
685 | -def create_keyring(service, key): |
686 | - keyring = keyring_path(service) |
687 | - if os.path.exists(keyring): |
688 | - utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) |
689 | - cmd = [ |
690 | - 'ceph-authtool', |
691 | - keyring, |
692 | - '--create-keyring', |
693 | - '--name=client.%s' % service, |
694 | - '--add-key=%s' % key |
695 | - ] |
696 | - execute(cmd) |
697 | - utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) |
698 | - |
699 | - |
700 | -def create_key_file(service, key): |
701 | - # create a file containing the key |
702 | - keyfile = keyfile_path(service) |
703 | - if os.path.exists(keyfile): |
704 | - utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) |
705 | - fd = open(keyfile, 'w') |
706 | - fd.write(key) |
707 | - fd.close() |
708 | - utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) |
709 | - |
710 | - |
711 | -def get_ceph_nodes(): |
712 | - hosts = [] |
713 | - for r_id in utils.relation_ids('ceph'): |
714 | - for unit in utils.relation_list(r_id): |
715 | - hosts.append(utils.relation_get('private-address', |
716 | - unit=unit, rid=r_id)) |
717 | - return hosts |
718 | - |
719 | - |
720 | -def configure(service, key, auth): |
721 | - create_keyring(service, key) |
722 | - create_key_file(service, key) |
723 | - hosts = get_ceph_nodes() |
724 | - mon_hosts = ",".join(map(str, hosts)) |
725 | - keyring = keyring_path(service) |
726 | - with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
727 | - ceph_conf.write(CEPH_CONF % locals()) |
728 | - modprobe_kernel_module('rbd') |
729 | - |
730 | - |
731 | -def image_mapped(image_name): |
732 | - (rc, out) = commands.getstatusoutput('rbd showmapped') |
733 | - return image_name in out |
734 | - |
735 | - |
736 | -def map_block_storage(service, pool, image): |
737 | - cmd = [ |
738 | - 'rbd', |
739 | - 'map', |
740 | - '%s/%s' % (pool, image), |
741 | - '--user', |
742 | - service, |
743 | - '--secret', |
744 | - keyfile_path(service), |
745 | - ] |
746 | - execute(cmd) |
747 | - |
748 | - |
749 | -def filesystem_mounted(fs): |
750 | - return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
751 | - |
752 | - |
753 | -def make_filesystem(blk_device, fstype='ext4'): |
754 | - utils.juju_log('INFO', |
755 | - 'ceph: Formatting block device %s as filesystem %s.' %\ |
756 | - (blk_device, fstype)) |
757 | - cmd = ['mkfs', '-t', fstype, blk_device] |
758 | - execute(cmd) |
759 | - |
760 | - |
761 | -def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): |
762 | - # mount block device into /mnt |
763 | - cmd = ['mount', '-t', fstype, blk_device, '/mnt'] |
764 | - execute(cmd) |
765 | - |
766 | - # copy data to /mnt |
767 | - try: |
768 | - copy_files(data_src_dst, '/mnt') |
769 | - except: |
770 | - pass |
771 | - |
772 | - # umount block device |
773 | - cmd = ['umount', '/mnt'] |
774 | - execute(cmd) |
775 | - |
776 | - _dir = os.stat(data_src_dst) |
777 | - uid = _dir.st_uid |
778 | - gid = _dir.st_gid |
779 | - |
780 | - # re-mount where the data should originally be |
781 | - cmd = ['mount', '-t', fstype, blk_device, data_src_dst] |
782 | - execute(cmd) |
783 | - |
784 | - # ensure original ownership of new mount. |
785 | - cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] |
786 | - execute(cmd) |
787 | - |
788 | - |
789 | -# TODO: re-use |
790 | -def modprobe_kernel_module(module): |
791 | - utils.juju_log('INFO', 'Loading kernel module') |
792 | - cmd = ['modprobe', module] |
793 | - execute(cmd) |
794 | - cmd = 'echo %s >> /etc/modules' % module |
795 | - execute_shell(cmd) |
796 | - |
797 | - |
798 | -def copy_files(src, dst, symlinks=False, ignore=None): |
799 | - for item in os.listdir(src): |
800 | - s = os.path.join(src, item) |
801 | - d = os.path.join(dst, item) |
802 | - if os.path.isdir(s): |
803 | - shutil.copytree(s, d, symlinks, ignore) |
804 | - else: |
805 | - shutil.copy2(s, d) |
806 | - |
807 | - |
808 | -def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
809 | - blk_device, fstype, system_services=[]): |
810 | - """ |
811 | - To be called from the current cluster leader. |
812 | - Ensures given pool and RBD image exists, is mapped to a block device, |
813 | - and the device is formatted and mounted at the given mount_point. |
814 | - |
815 | - If formatting a device for the first time, data existing at mount_point |
816 | - will be migrated to the RBD device before being remounted. |
817 | - |
818 | - All services listed in system_services will be stopped prior to data |
819 | - migration and restarted when complete. |
820 | - """ |
821 | - # Ensure pool, RBD image, RBD mappings are in place. |
822 | - if not pool_exists(service, pool): |
823 | - utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) |
824 | - create_pool(service, pool) |
825 | - |
826 | - if not rbd_exists(service, pool, rbd_img): |
827 | - utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) |
828 | - create_rbd_image(service, pool, rbd_img, sizemb) |
829 | - |
830 | - if not image_mapped(rbd_img): |
831 | - utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') |
832 | - map_block_storage(service, pool, rbd_img) |
833 | - |
834 | - # make file system |
835 | - # TODO: What happens if for whatever reason this is run again and |
836 | - # the data is already in the rbd device and/or is mounted?? |
837 | - # When it is mounted already, it will fail to make the fs |
838 | - # XXX: This is really sketchy! Need to at least add an fstab entry |
839 | - # otherwise this hook will blow away existing data if its executed |
840 | - # after a reboot. |
841 | - if not filesystem_mounted(mount_point): |
842 | - make_filesystem(blk_device, fstype) |
843 | - |
844 | - for svc in system_services: |
845 | - if utils.running(svc): |
846 | - utils.juju_log('INFO', |
847 | - 'Stopping services %s prior to migrating '\ |
848 | - 'data' % svc) |
849 | - utils.stop(svc) |
850 | - |
851 | - place_data_on_ceph(service, blk_device, mount_point, fstype) |
852 | - |
853 | - for svc in system_services: |
854 | - utils.start(svc) |
855 | |
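Both the removed ceph_utils.py and the freshly synced ceph.py shell out through `commands.getstatusoutput()` (e.g. in rbd_exists and pool_exists); `commands` is a Python 2-only module, removed in Python 3. A portable equivalent can be built on subprocess, sketched here with an injectable runner so the lookup logic is testable without rados installed (the `run=` parameter is an illustration, not part of the charm API):

```python
import subprocess

def getstatusoutput(cmd):
    """Return (exit_status, combined_output) like commands.getstatusoutput."""
    try:
        out = subprocess.check_output(cmd, shell=True,
                                      stderr=subprocess.STDOUT)
        status = 0
    except subprocess.CalledProcessError as e:
        out, status = e.output, e.returncode
    if isinstance(out, bytes):
        out = out.decode()
    return status, out.rstrip('\n')

def pool_exists(service, name, run=getstatusoutput):
    """Mirror of ceph.py's pool_exists(): grep the pool name out of lspools."""
    rc, out = run("rados --id %s lspools" % service)
    return name in out
```

Python 3's subprocess module actually ships `subprocess.getstatusoutput()` with the same contract, so on modern charms the shim is unnecessary.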
856 | === added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
857 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000 |
858 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-08-14 22:57:15 +0000 |
859 | @@ -0,0 +1,181 @@ |
860 | +# |
861 | +# Copyright 2012 Canonical Ltd. |
862 | +# |
863 | +# Authors: |
864 | +# James Page <james.page@ubuntu.com> |
865 | +# Adam Gandelman <adamg@ubuntu.com> |
866 | +# |
867 | + |
868 | +import subprocess |
869 | +import os |
870 | + |
871 | +from socket import gethostname as get_unit_hostname |
872 | + |
873 | +from charmhelpers.core.hookenv import ( |
874 | + log, |
875 | + relation_ids, |
876 | + related_units as relation_list, |
877 | + relation_get, |
878 | + config as config_get, |
879 | + INFO, |
880 | + ERROR, |
881 | + unit_get, |
882 | +) |
883 | + |
884 | + |
885 | +class HAIncompleteConfig(Exception): |
886 | + pass |
887 | + |
888 | + |
889 | +def is_clustered(): |
890 | + for r_id in (relation_ids('ha') or []): |
891 | + for unit in (relation_list(r_id) or []): |
892 | + clustered = relation_get('clustered', |
893 | + rid=r_id, |
894 | + unit=unit) |
895 | + if clustered: |
896 | + return True |
897 | + return False |
898 | + |
899 | + |
900 | +def is_leader(resource): |
901 | + cmd = [ |
902 | + "crm", "resource", |
903 | + "show", resource |
904 | + ] |
905 | + try: |
906 | + status = subprocess.check_output(cmd) |
907 | + except subprocess.CalledProcessError: |
908 | + return False |
909 | + else: |
910 | + if get_unit_hostname() in status: |
911 | + return True |
912 | + else: |
913 | + return False |
914 | + |
915 | + |
916 | +def peer_units(): |
917 | + peers = [] |
918 | + for r_id in (relation_ids('cluster') or []): |
919 | + for unit in (relation_list(r_id) or []): |
920 | + peers.append(unit) |
921 | + return peers |
922 | + |
923 | + |
924 | +def oldest_peer(peers): |
925 | + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
926 | + for peer in peers: |
927 | + remote_unit_no = int(peer.split('/')[1]) |
928 | + if remote_unit_no < local_unit_no: |
929 | + return False |
930 | + return True |
931 | + |
932 | + |
933 | +def eligible_leader(resource): |
934 | + if is_clustered(): |
935 | + if not is_leader(resource): |
936 | + log('Deferring action to CRM leader.', level=INFO) |
937 | + return False |
938 | + else: |
939 | + peers = peer_units() |
940 | + if peers and not oldest_peer(peers): |
941 | + log('Deferring action to oldest service unit.', level=INFO) |
942 | + return False |
943 | + return True |
944 | + |
945 | + |
946 | +def https(): |
947 | + ''' |
948 | + Determines whether enough data has been provided in configuration |
949 | + or relation data to configure HTTPS |
950 | + . |
951 | + returns: boolean |
952 | + ''' |
953 | + if config_get('use-https') == "yes": |
954 | + return True |
955 | + if config_get('ssl_cert') and config_get('ssl_key'): |
956 | + return True |
957 | + for r_id in relation_ids('identity-service'): |
958 | + for unit in relation_list(r_id): |
959 | + if None not in [ |
960 | + relation_get('https_keystone', rid=r_id, unit=unit), |
961 | + relation_get('ssl_cert', rid=r_id, unit=unit), |
962 | + relation_get('ssl_key', rid=r_id, unit=unit), |
963 | + relation_get('ca_cert', rid=r_id, unit=unit), |
964 | + ]: |
965 | + return True |
966 | + return False |
967 | + |
968 | + |
969 | +def determine_api_port(public_port): |
970 | + ''' |
971 | + Determine correct API server listening port based on |
972 | + existence of HTTPS reverse proxy and/or haproxy. |
973 | + |
974 | + public_port: int: standard public port for given service |
975 | + |
976 | + returns: int: the correct listening port for the API service |
977 | + ''' |
978 | + i = 0 |
979 | + if len(peer_units()) > 0 or is_clustered(): |
980 | + i += 1 |
981 | + if https(): |
982 | + i += 1 |
983 | + return public_port - (i * 10) |
984 | + |
985 | + |
986 | +def determine_haproxy_port(public_port): |
987 | + ''' |
988 | + Description: Determine correct proxy listening port based on public IP + |
989 | + existence of HTTPS reverse proxy. |
990 | + |
991 | + public_port: int: standard public port for given service |
992 | + |
993 | + returns: int: the correct listening port for the HAProxy service |
994 | + ''' |
995 | + i = 0 |
996 | + if https(): |
997 | + i += 1 |
998 | + return public_port - (i * 10) |
999 | + |
1000 | + |
1001 | +def get_hacluster_config(): |
1002 | + ''' |
1003 | + Obtains all relevant configuration from charm configuration required |
1004 | + for initiating a relation to hacluster: |
1005 | + |
1006 | + ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
1007 | + |
1008 | + returns: dict: A dict containing settings keyed by setting name. |
1009 | + raises: HAIncompleteConfig if settings are missing. |
1010 | + ''' |
1011 | + settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
1012 | + conf = {} |
1013 | + for setting in settings: |
1014 | + conf[setting] = config_get(setting) |
1015 | + missing = [] |
1016 | + [missing.append(s) for s, v in conf.iteritems() if v is None] |
1017 | + if missing: |
1018 | + log('Insufficient config data to configure hacluster.', level=ERROR) |
1019 | + raise HAIncompleteConfig |
1020 | + return conf |
1021 | + |
1022 | + |
1023 | +def canonical_url(configs, vip_setting='vip'): |
1024 | + ''' |
1025 | + Returns the correct HTTP URL to this host given the state of HTTPS |
1026 | + configuration and hacluster. |
1027 | + |
1028 | + :configs : OSTemplateRenderer: A config tempating object to inspect for |
1029 | + a complete https context. |
1030 | + :vip_setting: str: Setting in charm config that specifies |
1031 | + VIP address. |
1032 | + ''' |
1033 | + scheme = 'http' |
1034 | + if 'https' in configs.complete_contexts(): |
1035 | + scheme = 'https' |
1036 | + if is_clustered(): |
1037 | + addr = config_get(vip_setting) |
1038 | + else: |
1039 | + addr = unit_get('private-address') |
1040 | + return '%s://%s' % (scheme, addr) |
1041 | |
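The port arithmetic in `determine_api_port` and `determine_haproxy_port` above can be exercised standalone. This is a hedged reimplementation for illustration, not the charm helper itself — the `clustered` and `use_https` flags stand in for the live `peer_units()`/`is_clustered()`/`https()` relation checks:

```python
def determine_api_port(public_port, clustered=False, use_https=False):
    """Offset the public port by 10 for each proxy layer in front:
    haproxy (when peered/clustered) and an HTTPS reverse proxy."""
    offset = 0
    if clustered:
        offset += 1
    if use_https:
        offset += 1
    return public_port - (offset * 10)


def determine_haproxy_port(public_port, use_https=False):
    """haproxy sits behind the HTTPS proxy only, so at most one offset."""
    return public_port - (10 if use_https else 0)


# glance-api's public port is 9292; behind haproxy + SSL the API
# process itself binds two steps down, and haproxy one step down.
print(determine_api_port(9292, clustered=True, use_https=True))   # 9272
print(determine_haproxy_port(9292, use_https=True))               # 9282
```

The fixed step of 10 means a service's well-known port always faces clients, while each internal layer claims a predictable lower port.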
1042 | === removed file 'hooks/charmhelpers/contrib/hahelpers/cluster_utils.py' |
1043 | --- hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 2013-07-09 17:24:46 +0000 |
1044 | +++ hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 1970-01-01 00:00:00 +0000 |
1045 | @@ -1,176 +0,0 @@ |
1046 | -# |
1047 | -# Copyright 2012 Canonical Ltd. |
1048 | -# |
1049 | -# This file is sourced from lp:openstack-charm-helpers |
1050 | -# |
1051 | -# Authors: |
1052 | -# James Page <james.page@ubuntu.com> |
1053 | -# Adam Gandelman <adamg@ubuntu.com> |
1054 | -# |
1055 | - |
1056 | -from utils import ( |
1057 | - juju_log, |
1058 | - relation_ids, |
1059 | - relation_list, |
1060 | - relation_get, |
1061 | - get_unit_hostname, |
1062 | - config_get |
1063 | -) |
1064 | -import subprocess |
1065 | -import os |
1066 | - |
1067 | - |
1068 | -class HAIncompleteConfig(Exception): |
1069 | - pass |
1070 | - |
1071 | - |
1072 | -def is_clustered(): |
1073 | - for r_id in (relation_ids('ha') or []): |
1074 | - for unit in (relation_list(r_id) or []): |
1075 | - clustered = relation_get('clustered', |
1076 | - rid=r_id, |
1077 | - unit=unit) |
1078 | - if clustered: |
1079 | - return True |
1080 | - return False |
1081 | - |
1082 | - |
1083 | -def is_leader(resource): |
1084 | - cmd = [ |
1085 | - "crm", "resource", |
1086 | - "show", resource |
1087 | - ] |
1088 | - try: |
1089 | - status = subprocess.check_output(cmd) |
1090 | - except subprocess.CalledProcessError: |
1091 | - return False |
1092 | - else: |
1093 | - if get_unit_hostname() in status: |
1094 | - return True |
1095 | - else: |
1096 | - return False |
1097 | - |
1098 | - |
1099 | -def peer_units(): |
1100 | - peers = [] |
1101 | - for r_id in (relation_ids('cluster') or []): |
1102 | - for unit in (relation_list(r_id) or []): |
1103 | - peers.append(unit) |
1104 | - return peers |
1105 | - |
1106 | - |
1107 | -def oldest_peer(peers): |
1108 | - local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
1109 | - for peer in peers: |
1110 | - remote_unit_no = int(peer.split('/')[1]) |
1111 | - if remote_unit_no < local_unit_no: |
1112 | - return False |
1113 | - return True |
1114 | - |
1115 | - |
1116 | -def eligible_leader(resource): |
1117 | - if is_clustered(): |
1118 | - if not is_leader(resource): |
1119 | - juju_log('INFO', 'Deferring action to CRM leader.') |
1120 | - return False |
1121 | - else: |
1122 | - peers = peer_units() |
1123 | - if peers and not oldest_peer(peers): |
1124 | - juju_log('INFO', 'Deferring action to oldest service unit.') |
1125 | - return False |
1126 | - return True |
1127 | - |
1128 | - |
1129 | -def https(): |
1130 | - ''' |
1131 | - Determines whether enough data has been provided in configuration |
1132 | - or relation data to configure HTTPS |
1133 | - . |
1134 | - returns: boolean |
1135 | - ''' |
1136 | - if config_get('use-https') == "yes": |
1137 | - return True |
1138 | - if config_get('ssl_cert') and config_get('ssl_key'): |
1139 | - return True |
1140 | - for r_id in relation_ids('identity-service'): |
1141 | - for unit in relation_list(r_id): |
1142 | - if (relation_get('https_keystone', rid=r_id, unit=unit) and |
1143 | - relation_get('ssl_cert', rid=r_id, unit=unit) and |
1144 | - relation_get('ssl_key', rid=r_id, unit=unit) and |
1145 | - relation_get('ca_cert', rid=r_id, unit=unit)): |
1146 | - return True |
1147 | - return False |
1148 | - |
1149 | - |
1150 | -def determine_api_port(public_port): |
1151 | - ''' |
1152 | - Determine correct API server listening port based on |
1153 | - existence of HTTPS reverse proxy and/or haproxy. |
1154 | - |
1155 | - public_port: int: standard public port for given service |
1156 | - |
1157 | - returns: int: the correct listening port for the API service |
1158 | - ''' |
1159 | - i = 0 |
1160 | - if len(peer_units()) > 0 or is_clustered(): |
1161 | - i += 1 |
1162 | - if https(): |
1163 | - i += 1 |
1164 | - return public_port - (i * 10) |
1165 | - |
1166 | - |
1167 | -def determine_haproxy_port(public_port): |
1168 | - ''' |
1169 | - Description: Determine correct proxy listening port based on public IP + |
1170 | - existence of HTTPS reverse proxy. |
1171 | - |
1172 | - public_port: int: standard public port for given service |
1173 | - |
1174 | - returns: int: the correct listening port for the HAProxy service |
1175 | - ''' |
1176 | - i = 0 |
1177 | - if https(): |
1178 | - i += 1 |
1179 | - return public_port - (i * 10) |
1180 | - |
1181 | - |
1182 | -def get_hacluster_config(): |
1183 | - ''' |
1184 | - Obtains all relevant configuration from charm configuration required |
1185 | - for initiating a relation to hacluster: |
1186 | - |
1187 | - ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
1188 | - |
1189 | - returns: dict: A dict containing settings keyed by setting name. |
1190 | - raises: HAIncompleteConfig if settings are missing. |
1191 | - ''' |
1192 | - settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
1193 | - conf = {} |
1194 | - for setting in settings: |
1195 | - conf[setting] = config_get(setting) |
1196 | - missing = [] |
1197 | - [missing.append(s) for s, v in conf.iteritems() if v is None] |
1198 | - if missing: |
1199 | - juju_log('Insufficient config data to configure hacluster.') |
1200 | - raise HAIncompleteConfig |
1201 | - return conf |
1202 | - |
1203 | - |
1204 | -def canonical_url(configs, vip_setting='vip'): |
1205 | - ''' |
1206 | - Returns the correct HTTP URL to this host given the state of HTTPS |
1207 | - configuration and hacluster. |
1208 | - |
1209 | - :configs : OSTemplateRenderer: A config tempating object to inspect for |
1210 | - a complete https context. |
1211 | - :vip_setting: str: Setting in charm config that specifies |
1212 | - VIP address. |
1213 | - ''' |
1214 | - scheme = 'http' |
1215 | - if 'https' in configs.complete_contexts(): |
1216 | - scheme = 'https' |
1217 | - if is_clustered(): |
1218 | - addr = config_get(vip_setting) |
1219 | - else: |
1220 | - addr = get_unit_hostname() |
1221 | - return '%s://%s' % (scheme, addr) |
1222 | |
1223 | === removed file 'hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py' |
1224 | --- hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 2013-06-24 16:05:57 +0000 |
1225 | +++ hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 1970-01-01 00:00:00 +0000 |
1226 | @@ -1,55 +0,0 @@ |
1227 | -# |
1228 | -# Copyright 2012 Canonical Ltd. |
1229 | -# |
1230 | -# This file is sourced from lp:openstack-charm-helpers |
1231 | -# |
1232 | -# Authors: |
1233 | -# James Page <james.page@ubuntu.com> |
1234 | -# Adam Gandelman <adamg@ubuntu.com> |
1235 | -# |
1236 | - |
1237 | -from utils import ( |
1238 | - relation_ids, |
1239 | - relation_list, |
1240 | - relation_get, |
1241 | - unit_get, |
1242 | - reload, |
1243 | - render_template |
1244 | - ) |
1245 | -import os |
1246 | - |
1247 | -HAPROXY_CONF = '/etc/haproxy/haproxy.cfg' |
1248 | -HAPROXY_DEFAULT = '/etc/default/haproxy' |
1249 | - |
1250 | - |
1251 | -def configure_haproxy(service_ports): |
1252 | - ''' |
1253 | - Configure HAProxy based on the current peers in the service |
1254 | - cluster using the provided port map: |
1255 | - |
1256 | - "swift": [ 8080, 8070 ] |
1257 | - |
1258 | - HAproxy will also be reloaded/started if required |
1259 | - |
1260 | - service_ports: dict: dict of lists of [ frontend, backend ] |
1261 | - ''' |
1262 | - cluster_hosts = {} |
1263 | - cluster_hosts[os.getenv('JUJU_UNIT_NAME').replace('/', '-')] = \ |
1264 | - unit_get('private-address') |
1265 | - for r_id in relation_ids('cluster'): |
1266 | - for unit in relation_list(r_id): |
1267 | - cluster_hosts[unit.replace('/', '-')] = \ |
1268 | - relation_get(attribute='private-address', |
1269 | - rid=r_id, |
1270 | - unit=unit) |
1271 | - context = { |
1272 | - 'units': cluster_hosts, |
1273 | - 'service_ports': service_ports |
1274 | - } |
1275 | - with open(HAPROXY_CONF, 'w') as f: |
1276 | - f.write(render_template(os.path.basename(HAPROXY_CONF), |
1277 | - context)) |
1278 | - with open(HAPROXY_DEFAULT, 'w') as f: |
1279 | - f.write('ENABLED=1') |
1280 | - |
1281 | - reload('haproxy') |
1282 | |
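The removed `configure_haproxy` built its template context by mapping sanitized unit names to private addresses before rendering the config. That mapping step can be sketched on its own (the unit names and addresses below are invented for the example):

```python
def build_cluster_hosts(local_unit, local_addr, peers):
    """Map juju unit names ('glance/0') to haproxy-safe names
    ('glance-0'), each keyed to that unit's private address."""
    hosts = {local_unit.replace('/', '-'): local_addr}
    for unit, addr in peers.items():
        hosts[unit.replace('/', '-')] = addr
    return hosts


hosts = build_cluster_hosts('glance/0', '10.0.0.1',
                            {'glance/1': '10.0.0.2',
                             'glance/2': '10.0.0.3'})
print(hosts['glance-1'])  # 10.0.0.2
```

The `/` to `-` substitution matters because haproxy server names (and most config identifiers) cannot contain slashes, while juju unit names always do.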
1283 | === removed file 'hooks/charmhelpers/contrib/hahelpers/utils.py' |
1284 | --- hooks/charmhelpers/contrib/hahelpers/utils.py 2013-06-24 16:05:57 +0000 |
1285 | +++ hooks/charmhelpers/contrib/hahelpers/utils.py 1970-01-01 00:00:00 +0000 |
1286 | @@ -1,333 +0,0 @@ |
1287 | -# |
1288 | -# Copyright 2012 Canonical Ltd. |
1289 | -# |
1290 | -# This file is sourced from lp:openstack-charm-helpers |
1291 | -# |
1292 | -# Authors: |
1293 | -# James Page <james.page@ubuntu.com> |
1294 | -# Paul Collins <paul.collins@canonical.com> |
1295 | -# Adam Gandelman <adamg@ubuntu.com> |
1296 | -# |
1297 | - |
1298 | -import json |
1299 | -import os |
1300 | -import subprocess |
1301 | -import socket |
1302 | -import sys |
1303 | - |
1304 | - |
1305 | -def do_hooks(hooks): |
1306 | - hook = os.path.basename(sys.argv[0]) |
1307 | - |
1308 | - try: |
1309 | - hook_func = hooks[hook] |
1310 | - except KeyError: |
1311 | - juju_log('INFO', |
1312 | - "This charm doesn't know how to handle '{}'.".format(hook)) |
1313 | - else: |
1314 | - hook_func() |
1315 | - |
1316 | - |
1317 | -def install(*pkgs): |
1318 | - cmd = [ |
1319 | - 'apt-get', |
1320 | - '-y', |
1321 | - 'install' |
1322 | - ] |
1323 | - for pkg in pkgs: |
1324 | - cmd.append(pkg) |
1325 | - subprocess.check_call(cmd) |
1326 | - |
1327 | -TEMPLATES_DIR = 'templates' |
1328 | - |
1329 | -try: |
1330 | - import jinja2 |
1331 | -except ImportError: |
1332 | - install('python-jinja2') |
1333 | - import jinja2 |
1334 | - |
1335 | -try: |
1336 | - import dns.resolver |
1337 | -except ImportError: |
1338 | - install('python-dnspython') |
1339 | - import dns.resolver |
1340 | - |
1341 | - |
1342 | -def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
1343 | - templates = jinja2.Environment( |
1344 | - loader=jinja2.FileSystemLoader(template_dir) |
1345 | - ) |
1346 | - template = templates.get_template(template_name) |
1347 | - return template.render(context) |
1348 | - |
1349 | -CLOUD_ARCHIVE = \ |
1350 | -""" # Ubuntu Cloud Archive |
1351 | -deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1352 | -""" |
1353 | - |
1354 | -CLOUD_ARCHIVE_POCKETS = { |
1355 | - 'folsom': 'precise-updates/folsom', |
1356 | - 'folsom/updates': 'precise-updates/folsom', |
1357 | - 'folsom/proposed': 'precise-proposed/folsom', |
1358 | - 'grizzly': 'precise-updates/grizzly', |
1359 | - 'grizzly/updates': 'precise-updates/grizzly', |
1360 | - 'grizzly/proposed': 'precise-proposed/grizzly' |
1361 | - } |
1362 | - |
1363 | - |
1364 | -def configure_source(): |
1365 | - source = str(config_get('openstack-origin')) |
1366 | - if not source: |
1367 | - return |
1368 | - if source.startswith('ppa:'): |
1369 | - cmd = [ |
1370 | - 'add-apt-repository', |
1371 | - source |
1372 | - ] |
1373 | - subprocess.check_call(cmd) |
1374 | - if source.startswith('cloud:'): |
1375 | - # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg: |
1376 | - # cloud:precise-folsom/updates or cloud:precise-folsom/proposed |
1377 | - install('ubuntu-cloud-keyring') |
1378 | - pocket = source.split(':')[1] |
1379 | - pocket = pocket.split('-')[1] |
1380 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1381 | - apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])) |
1382 | - if source.startswith('deb'): |
1383 | - l = len(source.split('|')) |
1384 | - if l == 2: |
1385 | - (apt_line, key) = source.split('|') |
1386 | - cmd = [ |
1387 | - 'apt-key', |
1388 | - 'adv', '--keyserver keyserver.ubuntu.com', |
1389 | - '--recv-keys', key |
1390 | - ] |
1391 | - subprocess.check_call(cmd) |
1392 | - elif l == 1: |
1393 | - apt_line = source |
1394 | - |
1395 | - with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt: |
1396 | - apt.write(apt_line + "\n") |
1397 | - cmd = [ |
1398 | - 'apt-get', |
1399 | - 'update' |
1400 | - ] |
1401 | - subprocess.check_call(cmd) |
1402 | - |
1403 | -# Protocols |
1404 | -TCP = 'TCP' |
1405 | -UDP = 'UDP' |
1406 | - |
1407 | - |
1408 | -def expose(port, protocol='TCP'): |
1409 | - cmd = [ |
1410 | - 'open-port', |
1411 | - '{}/{}'.format(port, protocol) |
1412 | - ] |
1413 | - subprocess.check_call(cmd) |
1414 | - |
1415 | - |
1416 | -def juju_log(severity, message): |
1417 | - cmd = [ |
1418 | - 'juju-log', |
1419 | - '--log-level', severity, |
1420 | - message |
1421 | - ] |
1422 | - subprocess.check_call(cmd) |
1423 | - |
1424 | - |
1425 | -cache = {} |
1426 | - |
1427 | - |
1428 | -def cached(func): |
1429 | - def wrapper(*args, **kwargs): |
1430 | - global cache |
1431 | - key = str((func, args, kwargs)) |
1432 | - try: |
1433 | - return cache[key] |
1434 | - except KeyError: |
1435 | - res = func(*args, **kwargs) |
1436 | - cache[key] = res |
1437 | - return res |
1438 | - return wrapper |
1439 | - |
1440 | - |
1441 | -@cached |
1442 | -def relation_ids(relation): |
1443 | - cmd = [ |
1444 | - 'relation-ids', |
1445 | - relation |
1446 | - ] |
1447 | - result = str(subprocess.check_output(cmd)).split() |
1448 | - if result == "": |
1449 | - return None |
1450 | - else: |
1451 | - return result |
1452 | - |
1453 | - |
1454 | -@cached |
1455 | -def relation_list(rid): |
1456 | - cmd = [ |
1457 | - 'relation-list', |
1458 | - '-r', rid, |
1459 | - ] |
1460 | - result = str(subprocess.check_output(cmd)).split() |
1461 | - if result == "": |
1462 | - return None |
1463 | - else: |
1464 | - return result |
1465 | - |
1466 | - |
1467 | -@cached |
1468 | -def relation_get(attribute, unit=None, rid=None): |
1469 | - cmd = [ |
1470 | - 'relation-get', |
1471 | - ] |
1472 | - if rid: |
1473 | - cmd.append('-r') |
1474 | - cmd.append(rid) |
1475 | - cmd.append(attribute) |
1476 | - if unit: |
1477 | - cmd.append(unit) |
1478 | - value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1479 | - if value == "": |
1480 | - return None |
1481 | - else: |
1482 | - return value |
1483 | - |
1484 | - |
1485 | -@cached |
1486 | -def relation_get_dict(relation_id=None, remote_unit=None): |
1487 | - """Obtain all relation data as dict by way of JSON""" |
1488 | - cmd = [ |
1489 | - 'relation-get', '--format=json' |
1490 | - ] |
1491 | - if relation_id: |
1492 | - cmd.append('-r') |
1493 | - cmd.append(relation_id) |
1494 | - if remote_unit: |
1495 | - remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None) |
1496 | - os.environ['JUJU_REMOTE_UNIT'] = remote_unit |
1497 | - j = subprocess.check_output(cmd) |
1498 | - if remote_unit and remote_unit_orig: |
1499 | - os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig |
1500 | - d = json.loads(j) |
1501 | - settings = {} |
1502 | - # convert unicode to strings |
1503 | - for k, v in d.iteritems(): |
1504 | - settings[str(k)] = str(v) |
1505 | - return settings |
1506 | - |
1507 | - |
1508 | -def relation_set(**kwargs): |
1509 | - cmd = [ |
1510 | - 'relation-set' |
1511 | - ] |
1512 | - args = [] |
1513 | - for k, v in kwargs.items(): |
1514 | - if k == 'rid': |
1515 | - if v: |
1516 | - cmd.append('-r') |
1517 | - cmd.append(v) |
1518 | - else: |
1519 | - args.append('{}={}'.format(k, v)) |
1520 | - cmd += args |
1521 | - subprocess.check_call(cmd) |
1522 | - |
1523 | - |
1524 | -@cached |
1525 | -def unit_get(attribute): |
1526 | - cmd = [ |
1527 | - 'unit-get', |
1528 | - attribute |
1529 | - ] |
1530 | - value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1531 | - if value == "": |
1532 | - return None |
1533 | - else: |
1534 | - return value |
1535 | - |
1536 | - |
1537 | -@cached |
1538 | -def config_get(attribute): |
1539 | - cmd = [ |
1540 | - 'config-get', |
1541 | - '--format', |
1542 | - 'json', |
1543 | - ] |
1544 | - out = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1545 | - cfg = json.loads(out) |
1546 | - |
1547 | - try: |
1548 | - return cfg[attribute] |
1549 | - except KeyError: |
1550 | - return None |
1551 | - |
1552 | - |
1553 | -@cached |
1554 | -def get_unit_hostname(): |
1555 | - return socket.gethostname() |
1556 | - |
1557 | - |
1558 | -@cached |
1559 | -def get_host_ip(hostname=None): |
1560 | - hostname = hostname or unit_get('private-address') |
1561 | - try: |
1562 | - # Test to see if already an IPv4 address |
1563 | - socket.inet_aton(hostname) |
1564 | - return hostname |
1565 | - except socket.error: |
1566 | - answers = dns.resolver.query(hostname, 'A') |
1567 | - if answers: |
1568 | - return answers[0].address |
1569 | - return None |
1570 | - |
1571 | - |
1572 | -def _svc_control(service, action): |
1573 | - subprocess.check_call(['service', service, action]) |
1574 | - |
1575 | - |
1576 | -def restart(*services): |
1577 | - for service in services: |
1578 | - _svc_control(service, 'restart') |
1579 | - |
1580 | - |
1581 | -def stop(*services): |
1582 | - for service in services: |
1583 | - _svc_control(service, 'stop') |
1584 | - |
1585 | - |
1586 | -def start(*services): |
1587 | - for service in services: |
1588 | - _svc_control(service, 'start') |
1589 | - |
1590 | - |
1591 | -def reload(*services): |
1592 | - for service in services: |
1593 | - try: |
1594 | - _svc_control(service, 'reload') |
1595 | - except subprocess.CalledProcessError: |
1596 | - # Reload failed - either service does not support reload |
1597 | - # or it was not running - restart will fixup most things |
1598 | - _svc_control(service, 'restart') |
1599 | - |
1600 | - |
1601 | -def running(service): |
1602 | - try: |
1603 | - output = subprocess.check_output(['service', service, 'status']) |
1604 | - except subprocess.CalledProcessError: |
1605 | - return False |
1606 | - else: |
1607 | - if ("start/running" in output or |
1608 | - "is running" in output): |
1609 | - return True |
1610 | - else: |
1611 | - return False |
1612 | - |
1613 | - |
1614 | -def is_relation_made(relation, key='private-address'): |
1615 | - for r_id in (relation_ids(relation) or []): |
1616 | - for unit in (relation_list(r_id) or []): |
1617 | - if relation_get(key, rid=r_id, unit=unit): |
1618 | - return True |
1619 | - return False |
1620 | |
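The `@cached` decorator in the removed utils module memoizes hook-tool calls by a stringified `(func, args, kwargs)` key, which is safe because hook-tool output is stable within a single hook run. A minimal standalone sketch of the same pattern (the `relation_ids` stub below is hypothetical, replacing the real subprocess call):

```python
cache = {}


def cached(func):
    """Memoize by a string key of (func, args, kwargs)."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


calls = []


@cached
def relation_ids(relation):
    calls.append(relation)              # track real invocations
    return ['%s:0' % relation]


relation_ids('cluster')
relation_ids('cluster')                 # second call served from cache
print(len(calls))  # 1
```

Keying on `str(...)` rather than the objects themselves sidesteps hashability issues with dict kwargs, at the cost of treating distinct objects with equal reprs as the same call.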
1621 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
1622 | --- hooks/charmhelpers/contrib/openstack/context.py 2013-07-05 19:12:08 +0000 |
1623 | +++ hooks/charmhelpers/contrib/openstack/context.py 2013-08-14 22:57:15 +0000 |
1624 | @@ -16,7 +16,7 @@ |
1625 | unit_get, |
1626 | ) |
1627 | |
1628 | -from charmhelpers.contrib.hahelpers.cluster_utils import ( |
1629 | +from charmhelpers.contrib.hahelpers.cluster import ( |
1630 | determine_api_port, |
1631 | determine_haproxy_port, |
1632 | https, |
1633 | @@ -24,7 +24,7 @@ |
1634 | peer_units, |
1635 | ) |
1636 | |
1637 | -from charmhelpers.contrib.hahelpers.apache_utils import ( |
1638 | +from charmhelpers.contrib.hahelpers.apache import ( |
1639 | get_cert, |
1640 | get_ca_cert, |
1641 | ) |
1642 | @@ -42,7 +42,7 @@ |
1643 | if v is None or v == '': |
1644 | _missing.append(k) |
1645 | if _missing: |
1646 | - print 'Missing required data: %s' % ' '.join(_missing) |
1647 | + log('Missing required data: %s' % ' '.join(_missing), level='INFO') |
1648 | return False |
1649 | return True |
1650 | |
1651 | @@ -106,8 +106,8 @@ |
1652 | 'admin_password': relation_get('service_password', rid=rid, |
1653 | unit=unit), |
1654 | # XXX: Hard-coded http. |
1655 | - 'service_protocol': 'http', |
1656 | - 'auth_protocol': 'http', |
1657 | + 'service_protocol': 'http', |
1658 | + 'auth_protocol': 'http', |
1659 | } |
1660 | if not context_complete(ctxt): |
1661 | return {} |
1662 | @@ -202,6 +202,30 @@ |
1663 | with open('/etc/default/haproxy', 'w') as out: |
1664 | out.write('ENABLED=1\n') |
1665 | return ctxt |
1666 | + log('HAProxy context is incomplete, this unit has no peers.') |
1667 | + return {} |
1668 | + |
1669 | + |
1670 | +class ImageServiceContext(OSContextGenerator): |
1671 | + interfaces = ['image-servce'] |
1672 | + |
1673 | + def __call__(self): |
1674 | + ''' |
1675 | + Obtains the glance API server from the image-service relation. Useful |
1676 | + in nova and cinder (currently). |
1677 | + ''' |
1678 | + log('Generating template context for image-service.') |
1679 | + rids = relation_ids('image-service') |
1680 | + if not rids: |
1681 | + return {} |
1682 | + for rid in rids: |
1683 | + for unit in related_units(rid): |
1684 | + api_server = relation_get('glance-api-server', |
1685 | + rid=rid, unit=unit) |
1686 | + if api_server: |
1687 | + return {'glance_api_servers': api_server} |
1688 | + log('ImageService context is incomplete. ' |
1689 | + 'Missing required relation data.') |
1690 | return {} |
1691 | |
1692 | |
1693 | |
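The `context_complete` check touched in the hunk above (where a bare `print` is swapped for `log`) simply rejects a template context if any required value is missing or empty. Its behavior can be sketched like this, with hypothetical field names standing in for real relation data:

```python
def context_complete(ctxt):
    """A template context is usable only when every required value
    arrived over the relation (neither None nor empty string)."""
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    if missing:
        # the real helper logs this at INFO via charmhelpers' log()
        print('Missing required data: %s' % ' '.join(missing))
        return False
    return True


print(context_complete({'service_host': '10.0.0.5',
                        'service_port': 5000}))                  # True
print(context_complete({'service_host': None,
                        'service_port': ''}))                    # False
```

Context generators that fail this check return `{}`, so the templating layer can tell an incomplete interface apart from a complete one and skip rendering until the relation data lands.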
1694 | === removed file 'hooks/charmhelpers/contrib/openstack/openstack_utils.py' |
1695 | --- hooks/charmhelpers/contrib/openstack/openstack_utils.py 2013-07-11 17:29:14 +0000 |
1696 | +++ hooks/charmhelpers/contrib/openstack/openstack_utils.py 1970-01-01 00:00:00 +0000 |
1697 | @@ -1,270 +0,0 @@ |
1698 | -#!/usr/bin/python |
1699 | - |
1700 | -# Common python helper functions used for OpenStack charms. |
1701 | - |
1702 | -import apt_pkg as apt |
1703 | -import subprocess |
1704 | -import os |
1705 | -import sys |
1706 | - |
1707 | -from distutils.version import StrictVersion |
1708 | - |
1709 | -from charmhelpers.core.hookenv import ( |
1710 | - config, |
1711 | - charm_dir, |
1712 | -) |
1713 | - |
1714 | -CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
1715 | -CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
1716 | - |
1717 | -ubuntu_openstack_release = { |
1718 | - 'oneiric': 'diablo', |
1719 | - 'precise': 'essex', |
1720 | - 'quantal': 'folsom', |
1721 | - 'raring': 'grizzly', |
1722 | -} |
1723 | - |
1724 | - |
1725 | -openstack_codenames = { |
1726 | - '2011.2': 'diablo', |
1727 | - '2012.1': 'essex', |
1728 | - '2012.2': 'folsom', |
1729 | - '2013.1': 'grizzly', |
1730 | - '2013.2': 'havana', |
1731 | -} |
1732 | - |
1733 | -# The ugly duckling |
1734 | -swift_codenames = { |
1735 | - '1.4.3': 'diablo', |
1736 | - '1.4.8': 'essex', |
1737 | - '1.7.4': 'folsom', |
1738 | - '1.7.6': 'grizzly', |
1739 | - '1.7.7': 'grizzly', |
1740 | - '1.8.0': 'grizzly', |
1741 | -} |
1742 | - |
1743 | - |
1744 | -def juju_log(msg): |
1745 | - subprocess.check_call(['juju-log', msg]) |
1746 | - |
1747 | - |
1748 | -def error_out(msg): |
1749 | - juju_log("FATAL ERROR: %s" % msg) |
1750 | - sys.exit(1) |
1751 | - |
1752 | - |
1753 | -def lsb_release(): |
1754 | - '''Return /etc/lsb-release in a dict''' |
1755 | - lsb = open('/etc/lsb-release', 'r') |
1756 | - d = {} |
1757 | - for l in lsb: |
1758 | - k, v = l.split('=') |
1759 | - d[k.strip()] = v.strip() |
1760 | - return d |
1761 | - |
1762 | - |
1763 | -def get_os_codename_install_source(src): |
1764 | - '''Derive OpenStack release codename from a given installation source.''' |
1765 | - ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
1766 | - |
1767 | - rel = '' |
1768 | - if src == 'distro': |
1769 | - try: |
1770 | - rel = ubuntu_openstack_release[ubuntu_rel] |
1771 | - except KeyError: |
1772 | - e = 'Could not derive openstack release for '\ |
1773 | - 'this Ubuntu release: %s' % ubuntu_rel |
1774 | - error_out(e) |
1775 | - return rel |
1776 | - |
1777 | - if src.startswith('cloud:'): |
1778 | - ca_rel = src.split(':')[1] |
1779 | - ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
1780 | - return ca_rel |
1781 | - |
1782 | - # Best guess match based on deb string provided |
1783 | - if src.startswith('deb') or src.startswith('ppa'): |
1784 | - for k, v in openstack_codenames.iteritems(): |
1785 | - if v in src: |
1786 | - return v |
1787 | - |
1788 | - |
1789 | -def get_os_version_install_source(src): |
1790 | - codename = get_os_codename_install_source(src) |
1791 | - return get_os_version_codename(codename) |
1792 | - |
1793 | - |
1794 | -def get_os_codename_version(vers): |
1795 | - '''Determine OpenStack codename from version number.''' |
1796 | - try: |
1797 | - return openstack_codenames[vers] |
1798 | - except KeyError: |
1799 | - e = 'Could not determine OpenStack codename for version %s' % vers |
1800 | - error_out(e) |
1801 | - |
1802 | - |
1803 | -def get_os_version_codename(codename): |
1804 | - '''Determine OpenStack version number from codename.''' |
1805 | - for k, v in openstack_codenames.iteritems(): |
1806 | - if v == codename: |
1807 | - return k |
1808 | - e = 'Could not derive OpenStack version for '\ |
1809 | - 'codename: %s' % codename |
1810 | - error_out(e) |
1811 | - |
1812 | - |
1813 | -def get_os_codename_package(pkg, fatal=True): |
1814 | - '''Derive OpenStack release codename from an installed package.''' |
1815 | - apt.init() |
1816 | - cache = apt.Cache() |
1817 | - |
1818 | - try: |
1819 | - pkg = cache[pkg] |
1820 | - except: |
1821 | - if not fatal: |
1822 | - return None |
1823 | - e = 'Could not determine version of installed package: %s' % pkg |
1824 | - error_out(e) |
1825 | - |
1826 | - if pkg.current_ver != None: |
1827 | - vers = apt.UpstreamVersion(pkg.current_ver.ver_str) |
1828 | - else: |
1829 | - return None |
1830 | - |
1831 | - try: |
1832 | - if 'swift' in pkg.name: |
1833 | - vers = vers[:5] |
1834 | - return swift_codenames[vers] |
1835 | - else: |
1836 | - vers = vers[:6] |
1837 | - return openstack_codenames[vers] |
1838 | - except KeyError: |
1839 | - e = 'Could not determine OpenStack codename for version %s' % vers |
1840 | - error_out(e) |
1841 | - |
1842 | - |
1843 | -def get_os_version_package(pkg, fatal=True): |
1844 | - '''Derive OpenStack version number from an installed package.''' |
1845 | - codename = get_os_codename_package(pkg, fatal=fatal) |
1846 | - |
1847 | - if not codename: |
1848 | - return None |
1849 | - |
1850 | - if 'swift' in pkg: |
1851 | - vers_map = swift_codenames |
1852 | - else: |
1853 | - vers_map = openstack_codenames |
1854 | - |
1855 | - for version, cname in vers_map.iteritems(): |
1856 | - if cname == codename: |
1857 | - return version |
1858 | - #e = "Could not determine OpenStack version for package: %s" % pkg |
1859 | - #error_out(e) |
1860 | - |
1861 | - |
1862 | -def import_key(keyid): |
1863 | - cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ |
1864 | - "--recv-keys %s" % keyid |
1865 | - try: |
1866 | - subprocess.check_call(cmd.split(' ')) |
1867 | - except subprocess.CalledProcessError: |
1868 | - error_out("Error importing repo key %s" % keyid) |
1869 | - |
1870 | - |
1871 | -def configure_installation_source(rel): |
1872 | - '''Configure apt installation source.''' |
1873 | - if rel == 'distro': |
1874 | - return |
1875 | - elif rel[:4] == "ppa:": |
1876 | - src = rel |
1877 | - subprocess.check_call(["add-apt-repository", "-y", src]) |
1878 | - elif rel[:3] == "deb": |
1879 | - l = len(rel.split('|')) |
1880 | - if l == 2: |
1881 | - src, key = rel.split('|') |
1882 | - juju_log("Importing PPA key from keyserver for %s" % src) |
1883 | - import_key(key) |
1884 | - elif l == 1: |
1885 | - src = rel |
1886 | - with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
1887 | - f.write(src) |
1888 | - elif rel[:6] == 'cloud:': |
1889 | - ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
1890 | - rel = rel.split(':')[1] |
1891 | - u_rel = rel.split('-')[0] |
1892 | - ca_rel = rel.split('-')[1] |
1893 | - |
1894 | - if u_rel != ubuntu_rel: |
1895 | - e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
1896 | - 'version (%s)' % (ca_rel, ubuntu_rel) |
1897 | - error_out(e) |
1898 | - |
1899 | - if 'staging' in ca_rel: |
1900 | - # staging is just a regular PPA. |
1901 | - os_rel = ca_rel.split('/')[0] |
1902 | - ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
1903 | - cmd = 'add-apt-repository -y %s' % ppa |
1904 | - subprocess.check_call(cmd.split(' ')) |
1905 | - return |
1906 | - |
1907 | - # map charm config options to actual archive pockets. |
1908 | - pockets = { |
1909 | - 'folsom': 'precise-updates/folsom', |
1910 | - 'folsom/updates': 'precise-updates/folsom', |
1911 | - 'folsom/proposed': 'precise-proposed/folsom', |
1912 | - 'grizzly': 'precise-updates/grizzly', |
1913 | - 'grizzly/updates': 'precise-updates/grizzly', |
1914 | - 'grizzly/proposed': 'precise-proposed/grizzly' |
1915 | - } |
1916 | - |
1917 | - try: |
1918 | - pocket = pockets[ca_rel] |
1919 | - except KeyError: |
1920 | - e = 'Invalid Cloud Archive release specified: %s' % rel |
1921 | - error_out(e) |
1922 | - |
1923 | - src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
1924 | - # TODO: Replace key import with cloud archive keyring pkg. |
1925 | - import_key(CLOUD_ARCHIVE_KEY_ID) |
1926 | - |
1927 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
1928 | - f.write(src) |
1929 | - else: |
1930 | - error_out("Invalid openstack-release specified: %s" % rel) |
1931 | - |
1932 | - |
1933 | -def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
1934 | - """ |
1935 | - Write an rc file in the charm-delivered directory containing |
1936 | - exported environment variables provided by env_vars. Any charm scripts run |
1937 | - outside the juju hook environment can source this scriptrc to obtain |
1938 | - updated config information necessary to perform health checks or |
1939 | - service changes. |
1940 | - """ |
1941 | - unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-') |
1942 | - juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
1943 | - if not os.path.exists(os.path.dirname(juju_rc_path)): |
1944 | - os.mkdir(os.path.dirname(juju_rc_path)) |
1945 | - with open(juju_rc_path, 'wb') as rc_script: |
1946 | - rc_script.write( |
1947 | - "#!/bin/bash\n") |
1948 | - [rc_script.write('export %s=%s\n' % (u, p)) |
1949 | - for u, p in env_vars.iteritems() if u != "script_path"] |
1950 | - |
1951 | - |
1952 | -def openstack_upgrade_available(package): |
1953 | - """ |
1954 | - Determines if an OpenStack upgrade is available from installation |
1955 | - source, based on version of installed package. |
1956 | - |
1957 | - :param package: str: Name of installed package. |
1958 | - |
1959 | - :returns: bool: : Returns True if configured installation source offers |
1960 | - a newer version of package. |
1961 | - |
1962 | - """ |
1963 | - |
1964 | - src = config('openstack-origin') |
1965 | - cur_vers = get_os_version_package(package) |
1966 | - available_vers = get_os_version_install_source(src) |
1967 | - return StrictVersion(available_vers) > StrictVersion(cur_vers) |
1968 | |
1969 | === modified file 'hooks/charmhelpers/contrib/openstack/templating.py' |
1970 | --- hooks/charmhelpers/contrib/openstack/templating.py 2013-07-05 19:12:08 +0000 |
1971 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2013-08-14 22:57:15 +0000 |
1972 | @@ -1,85 +1,20 @@ |
1973 | -import logging |
1974 | import os |
1975 | |
1976 | +from charmhelpers.core.host import apt_install |
1977 | + |
1978 | +from charmhelpers.core.hookenv import ( |
1979 | + log, |
1980 | + ERROR, |
1981 | + INFO |
1982 | +) |
1983 | + |
1984 | +from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
1985 | + |
1986 | try: |
1987 | - import jinja2 |
1988 | + from jinja2 import FileSystemLoader, ChoiceLoader, Environment |
1989 | except ImportError: |
1990 | - pass |
1991 | - |
1992 | - |
1993 | -logging.basicConfig(level=logging.INFO) |
1994 | - |
1995 | -""" |
1996 | - |
1997 | -WIP Abstract templating system for the OpenStack charms. |
1998 | - |
1999 | -The idea is that an openstack charm can register a number of config files |
2000 | -associated with common context generators. The context generators are |
2001 | -responsible for inspecting charm config/relation data/deployment state |
2002 | -and presenting correct context to the template. Generic context generators |
2003 | -could live somewhere in charmhelpers.contrib.openstack, and each |
2004 | -charm can implement their own specific ones as well. |
2005 | - |
2006 | -Ideally a charm would register all its config files somewhere in its namespace, |
2007 | -eg cinder_utils.py: |
2008 | - |
2009 | -from charmhelpers.contrib.openstack import templating, context |
2010 | - |
2011 | -config_files = { |
2012 | - '/etc/cinder/cinder.conf': [context.shared_db, |
2013 | - context.amqp, |
2014 | - context.ceph], |
2015 | - '/etc/cinder/api-paste.ini': [context.identity_service] |
2016 | -} |
2017 | - |
2018 | -configs = templating.OSConfigRenderer(template_dir='templates/') |
2019 | - |
2020 | -[configs.register(k, v) for k, v in config_files.iteritems()] |
2021 | - |
2022 | -Hooks can then render config files as need, eg: |
2023 | - |
2024 | -def config_changed(): |
2025 | - configs.render_all() |
2026 | - |
2027 | -def db_changed(): |
2028 | - configs.render('/etc/cinder/cinder.conf') |
2029 | - check_call(['cinder-manage', 'db', 'sync']) |
2030 | - |
2031 | -This would look very similar for nova/glance/etc. |
2032 | - |
2033 | - |
2034 | -The OSTemplteLoader is responsible for creating a jinja2.ChoiceLoader that |
2035 | -should help reduce fragmentation of a charms' templates across OpenStack |
2036 | -releases, so we do not need to maintain many copies of templates or juggle |
2037 | -symlinks. The constructed loader lets the template be loaded from the most |
2038 | -recent OS release-specific template dir or a base template dir. |
2039 | - |
2040 | -For example, say cinder has no changes in config structure across any OS |
2041 | -releases, all OS releases share the same templates from the base directory: |
2042 | - |
2043 | - |
2044 | -templates/api-paste.ini |
2045 | -templates/cinder.conf |
2046 | - |
2047 | -Then, since Grizzly and beyond, cinder.conf's format has changed: |
2048 | - |
2049 | -templates/api-paste.ini |
2050 | -templates/cinder.conf |
2051 | -templates/grizzly/cinder.conf |
2052 | - |
2053 | - |
2054 | -Grizzly and beyond will load from templates/grizzly, but any release prior will |
2055 | -load from templates/. If some change in Icehouse breaks config format again: |
2056 | - |
2057 | -templates/api-paste.ini |
2058 | -templates/cinder.conf |
2059 | -templates/grizzly/cinder.conf |
2060 | -templates/icehouse/cinder.conf |
2061 | - |
2062 | -Icehouse and beyond will load from icehouse/, Grizzly + Havan from grizzly/, |
2063 | -previous releases from the base templates/ |
2064 | - |
2065 | -""" |
2066 | + # python-jinja2 may not be installed yet, or we're running unittests. |
2067 | + FileSystemLoader = ChoiceLoader = Environment = None |
2068 | |
2069 | |
2070 | class OSConfigException(Exception): |
2071 | @@ -107,36 +42,36 @@ |
2072 |  jinja2.FileSystemLoaders, ordered in descending |
2073 | order by OpenStack release. |
2074 | """ |
2075 | - tmpl_dirs = ( |
2076 | - ('essex', os.path.join(templates_dir, 'essex')), |
2077 | - ('folsom', os.path.join(templates_dir, 'folsom')), |
2078 | - ('grizzly', os.path.join(templates_dir, 'grizzly')), |
2079 | - ('havana', os.path.join(templates_dir, 'havana')), |
2080 | - ('icehouse', os.path.join(templates_dir, 'icehouse')), |
2081 | - ) |
2082 | + tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
2083 | + for rel in OPENSTACK_CODENAMES.itervalues()] |
2084 | |
2085 | if not os.path.isdir(templates_dir): |
2086 | - logging.error('Templates directory not found @ %s.' % templates_dir) |
2087 | + log('Templates directory not found @ %s.' % templates_dir, |
2088 | + level=ERROR) |
2089 | raise OSConfigException |
2090 | |
2091 |  # the bottom contains templates_dir and possibly a common templates dir |
2092 | # shipped with the helper. |
2093 | - loaders = [jinja2.FileSystemLoader(templates_dir)] |
2094 | + loaders = [FileSystemLoader(templates_dir)] |
2095 | helper_templates = os.path.join(os.path.dirname(__file__), 'templates') |
2096 | if os.path.isdir(helper_templates): |
2097 | - loaders.append(jinja2.FileSystemLoader(helper_templates)) |
2098 | + loaders.append(FileSystemLoader(helper_templates)) |
2099 | |
2100 | for rel, tmpl_dir in tmpl_dirs: |
2101 | if os.path.isdir(tmpl_dir): |
2102 | - loaders.insert(0, jinja2.FileSystemLoader(tmpl_dir)) |
2103 | + loaders.insert(0, FileSystemLoader(tmpl_dir)) |
2104 | if rel == os_release: |
2105 | break |
2106 | - logging.info('Creating choice loader with dirs: %s' % |
2107 | - [l.searchpath for l in loaders]) |
2108 | - return jinja2.ChoiceLoader(loaders) |
2109 | + log('Creating choice loader with dirs: %s' % |
2110 | + [l.searchpath for l in loaders], level=INFO) |
2111 | + return ChoiceLoader(loaders) |
2112 | |
2113 | |
2114 | class OSConfigTemplate(object): |
2115 | + """ |
2116 | + Associates a config file template with a list of context generators. |
2117 | + Responsible for constructing a template context based on those generators. |
2118 | + """ |
2119 | def __init__(self, config_file, contexts): |
2120 | self.config_file = config_file |
2121 | |
2122 | @@ -170,46 +105,141 @@ |
2123 | |
2124 | |
2125 | class OSConfigRenderer(object): |
2126 | + """ |
2127 | + This class provides a common templating system to be used by OpenStack |
2128 | + charms. It is intended to help charms share common code and templates, |
2129 | + and ease the burden of managing config templates across multiple OpenStack |
2130 | + releases. |
2131 | + |
2132 | + Basic usage: |
2133 | + # import some common context generators from charmhelpers |
2134 | + from charmhelpers.contrib.openstack import context |
2135 | + |
2136 | + # Create a renderer object for a specific OS release. |
2137 | + configs = OSConfigRenderer(templates_dir='/tmp/templates', |
2138 | + openstack_release='folsom') |
2139 | + # register some config files with context generators. |
2140 | + configs.register(config_file='/etc/nova/nova.conf', |
2141 | + contexts=[context.SharedDBContext(), |
2142 | + context.AMQPContext()]) |
2143 | + configs.register(config_file='/etc/nova/api-paste.ini', |
2144 | + contexts=[context.IdentityServiceContext()]) |
2145 | + configs.register(config_file='/etc/haproxy/haproxy.conf', |
2146 | + contexts=[context.HAProxyContext()]) |
2147 | + # write out a single config |
2148 | + configs.write('/etc/nova/nova.conf') |
2149 | + # write out all registered configs |
2150 | + configs.write_all() |
2151 | + |
2152 | + Details: |
2153 | + |
2154 | + OpenStack Releases and template loading |
2155 | + --------------------------------------- |
2156 | + When the object is instantiated, it is associated with a specific OS |
2157 | + release. This dictates how the template loader will be constructed. |
2158 | + |
2159 | + The constructed loader attempts to load the template from several places |
2160 | + in the following order: |
2161 | + - from the most recent OS release-specific template dir (if one exists) |
2162 | + - the base templates_dir |
2163 | + - a template directory shipped in the charm with this helper file. |
2164 | + |
2165 | + |
2166 | + For the example above, '/tmp/templates' contains the following structure: |
2167 | + /tmp/templates/nova.conf |
2168 | + /tmp/templates/api-paste.ini |
2169 | + /tmp/templates/grizzly/api-paste.ini |
2170 | + /tmp/templates/havana/api-paste.ini |
2171 | + |
2172 | + Since it was registered with the grizzly release, it first searches |
2173 | + the grizzly directory for nova.conf, then the templates dir. |
2174 | + |
2175 | + When writing api-paste.ini, it will find the template in the grizzly |
2176 | + directory. |
2177 | + |
2178 | + If the object were created with folsom, it would fall back to the |
2179 | + base templates dir for its api-paste.ini template. |
2180 | + |
2181 | + This system should help manage changes in config files through |
2182 | + openstack releases, allowing charms to fall back to the most recently |
2183 | + updated config template for a given release. |
2184 | + |
2185 | + The haproxy.conf, since it is not shipped in the templates dir, will |
2186 | + be loaded from the module directory's template directory, eg |
2187 | + $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows |
2188 | + us to ship common templates (haproxy, apache) with the helpers. |
2189 | + |
2190 | + Context generators |
2191 | + --------------------------------------- |
2192 | + Context generators are used to generate template contexts during hook |
2193 | + execution. Doing so may require inspecting service relations, charm |
2194 | + config, etc. When registered, a config file is associated with a list |
2195 | + of generators. When a template is rendered and written, all context |
2196 | + generators are called in a chain to generate the context dictionary |
2197 | + passed to the jinja2 template. See context.py for more info. |
2198 | + """ |
2199 | def __init__(self, templates_dir, openstack_release): |
2200 | if not os.path.isdir(templates_dir): |
2201 | - logging.error('Could not locate templates dir %s' % templates_dir) |
2202 | + log('Could not locate templates dir %s' % templates_dir, |
2203 | + level=ERROR) |
2204 | raise OSConfigException |
2205 | + |
2206 | self.templates_dir = templates_dir |
2207 | self.openstack_release = openstack_release |
2208 | self.templates = {} |
2209 | self._tmpl_env = None |
2210 | |
2211 | + if None in [Environment, ChoiceLoader, FileSystemLoader]: |
2212 | + # if this code is running, the object is created pre-install hook. |
2213 | + # jinja2 shouldn't get touched until the module is reloaded on next |
2214 | + # hook execution, with proper jinja2 bits successfully imported. |
2215 | + apt_install('python-jinja2') |
2216 | + |
2217 | def register(self, config_file, contexts): |
2218 | + """ |
2219 | + Register a config file with a list of context generators to be called |
2220 | + during rendering. |
2221 | + """ |
2222 | self.templates[config_file] = OSConfigTemplate(config_file=config_file, |
2223 | contexts=contexts) |
2224 | - logging.info('Registered config file: %s' % config_file) |
2225 | + log('Registered config file: %s' % config_file, level=INFO) |
2226 | |
2227 | def _get_tmpl_env(self): |
2228 | if not self._tmpl_env: |
2229 | loader = get_loader(self.templates_dir, self.openstack_release) |
2230 | - self._tmpl_env = jinja2.Environment(loader=loader) |
2231 | + self._tmpl_env = Environment(loader=loader) |
2232 | + |
2233 | + def _get_template(self, template): |
2234 | + self._get_tmpl_env() |
2235 | + template = self._tmpl_env.get_template(template) |
2236 | + log('Loaded template from %s' % template.filename, level=INFO) |
2237 | + return template |
2238 | |
2239 | def render(self, config_file): |
2240 | if config_file not in self.templates: |
2241 | - logging.error('Config not registered: %s' % config_file) |
2242 | + log('Config not registered: %s' % config_file, level=ERROR) |
2243 | raise OSConfigException |
2244 | ctxt = self.templates[config_file].context() |
2245 | _tmpl = os.path.basename(config_file) |
2246 | - logging.info('Rendering from template: %s' % _tmpl) |
2247 | - self._get_tmpl_env() |
2248 | - _tmpl = self._tmpl_env.get_template(_tmpl) |
2249 | - logging.info('Loaded template from %s' % _tmpl.filename) |
2250 | - return _tmpl.render(ctxt) |
2251 | + log('Rendering from template: %s' % _tmpl, level=INFO) |
2252 | + template = self._get_template(_tmpl) |
2253 | + return template.render(ctxt) |
2254 | |
2255 | def write(self, config_file): |
2256 | + """ |
2257 | + Write a single config file, raises if config file is not registered. |
2258 | + """ |
2259 | if config_file not in self.templates: |
2260 | - logging.error('Config not registered: %s' % config_file) |
2261 | + log('Config not registered: %s' % config_file, level=ERROR) |
2262 | raise OSConfigException |
2263 | with open(config_file, 'wb') as out: |
2264 | out.write(self.render(config_file)) |
2265 | - logging.info('Wrote template %s.' % config_file) |
2266 | + log('Wrote template %s.' % config_file, level=INFO) |
2267 | |
2268 | def write_all(self): |
2269 | + """ |
2270 | + Write out all registered config files. |
2271 | + """ |
2272 | [self.write(k) for k in self.templates.iterkeys()] |
2273 | |
2274 | def set_release(self, openstack_release): |
2275 | |
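The release-ordered template fallback that `get_loader` builds in templating.py above can be modelled without jinja2. This is a minimal illustrative sketch, not the charm-helpers code itself: `search_dirs`, `RELEASES`, and the `existing_dirs` parameter are assumptions introduced for the example, standing in for the real `ChoiceLoader` construction.

```python
import os

# Release codenames in ascending order, mirroring OPENSTACK_CODENAMES values.
RELEASES = ['diablo', 'essex', 'folsom', 'grizzly', 'havana', 'icehouse']


def search_dirs(templates_dir, os_release, existing_dirs):
    """Return template directories in the order a loader would try them:
    most recent release-specific dir first, base templates_dir last."""
    dirs = [templates_dir]  # base dir is the final fallback
    for rel in RELEASES:
        rel_dir = os.path.join(templates_dir, rel)
        if rel_dir in existing_dirs:
            dirs.insert(0, rel_dir)  # a newer release dir shadows older ones
        if rel == os_release:
            break  # never consult dirs for releases newer than ours
    return dirs
```

For a renderer created with `havana` where `grizzly/` and `havana/` subdirectories exist, the search order is `havana/`, then `grizzly/`, then the base dir; a `folsom` renderer would skip both and use only the base dir.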
2276 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' |
2277 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
2278 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2013-08-14 22:57:15 +0000 |
2279 | @@ -0,0 +1,273 @@ |
2280 | +#!/usr/bin/python |
2281 | + |
2282 | +# Common python helper functions used for OpenStack charms. |
2283 | + |
2284 | +from collections import OrderedDict |
2285 | + |
2286 | +import apt_pkg as apt |
2287 | +import subprocess |
2288 | +import os |
2289 | +import sys |
2290 | + |
2291 | +from charmhelpers.core.hookenv import ( |
2292 | + config, |
2293 | + log as juju_log, |
2294 | + charm_dir, |
2295 | +) |
2296 | + |
2297 | +from charmhelpers.core.host import ( |
2298 | + lsb_release, |
2299 | + apt_install, |
2300 | +) |
2301 | + |
2302 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
2303 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
2304 | + |
2305 | +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
2306 | + ('oneiric', 'diablo'), |
2307 | + ('precise', 'essex'), |
2308 | + ('quantal', 'folsom'), |
2309 | + ('raring', 'grizzly'), |
2310 | + ('saucy', 'havana'), |
2311 | +]) |
2312 | + |
2313 | + |
2314 | +OPENSTACK_CODENAMES = OrderedDict([ |
2315 | + ('2011.2', 'diablo'), |
2316 | + ('2012.1', 'essex'), |
2317 | + ('2012.2', 'folsom'), |
2318 | + ('2013.1', 'grizzly'), |
2319 | + ('2013.2', 'havana'), |
2320 | + ('2014.1', 'icehouse'), |
2321 | +]) |
2322 | + |
2323 | +# The ugly duckling |
2324 | +SWIFT_CODENAMES = { |
2325 | + '1.4.3': 'diablo', |
2326 | + '1.4.8': 'essex', |
2327 | + '1.7.4': 'folsom', |
2328 | + '1.7.6': 'grizzly', |
2329 | + '1.7.7': 'grizzly', |
2330 | + '1.8.0': 'grizzly', |
2331 | + '1.9.0': 'havana', |
2332 | + '1.9.1': 'havana', |
2333 | +} |
2334 | + |
2335 | + |
2336 | +def error_out(msg): |
2337 | + juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
2338 | + sys.exit(1) |
2339 | + |
2340 | + |
2341 | +def get_os_codename_install_source(src): |
2342 | + '''Derive OpenStack release codename from a given installation source.''' |
2343 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
2344 | + rel = '' |
2345 | + if src == 'distro': |
2346 | + try: |
2347 | + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
2348 | + except KeyError: |
2349 | + e = 'Could not derive openstack release for '\ |
2350 | + 'this Ubuntu release: %s' % ubuntu_rel |
2351 | + error_out(e) |
2352 | + return rel |
2353 | + |
2354 | + if src.startswith('cloud:'): |
2355 | + ca_rel = src.split(':')[1] |
2356 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
2357 | + return ca_rel |
2358 | + |
2359 | + # Best guess match based on deb string provided |
2360 | + if src.startswith('deb') or src.startswith('ppa'): |
2361 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
2362 | + if v in src: |
2363 | + return v |
2364 | + |
2365 | + |
2366 | +def get_os_version_install_source(src): |
2367 | + codename = get_os_codename_install_source(src) |
2368 | + return get_os_version_codename(codename) |
2369 | + |
2370 | + |
2371 | +def get_os_codename_version(vers): |
2372 | + '''Determine OpenStack codename from version number.''' |
2373 | + try: |
2374 | + return OPENSTACK_CODENAMES[vers] |
2375 | + except KeyError: |
2376 | + e = 'Could not determine OpenStack codename for version %s' % vers |
2377 | + error_out(e) |
2378 | + |
2379 | + |
2380 | +def get_os_version_codename(codename): |
2381 | + '''Determine OpenStack version number from codename.''' |
2382 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
2383 | + if v == codename: |
2384 | + return k |
2385 | + e = 'Could not derive OpenStack version for '\ |
2386 | + 'codename: %s' % codename |
2387 | + error_out(e) |
2388 | + |
2389 | + |
2390 | +def get_os_codename_package(package, fatal=True): |
2391 | + '''Derive OpenStack release codename from an installed package.''' |
2392 | + apt.init() |
2393 | + cache = apt.Cache() |
2394 | + |
2395 | + try: |
2396 | + pkg = cache[package] |
2397 | + except: |
2398 | + if not fatal: |
2399 | + return None |
2400 | + # the package is unknown to the current apt cache. |
2401 | + e = 'Could not determine version of package with no installation '\ |
2402 | + 'candidate: %s' % package |
2403 | + error_out(e) |
2404 | + |
2405 | + if not pkg.current_ver: |
2406 | + if not fatal: |
2407 | + return None |
2408 | + # package is known, but no version is currently installed. |
2409 | + e = 'Could not determine version of uninstalled package: %s' % package |
2410 | + error_out(e) |
2411 | + |
2412 | + vers = apt.UpstreamVersion(pkg.current_ver.ver_str) |
2413 | + |
2414 | + try: |
2415 | + if 'swift' in pkg.name: |
2416 | + vers = vers[:5] |
2417 | + return SWIFT_CODENAMES[vers] |
2418 | + else: |
2419 | + vers = vers[:6] |
2420 | + return OPENSTACK_CODENAMES[vers] |
2421 | + except KeyError: |
2422 | + e = 'Could not determine OpenStack codename for version %s' % vers |
2423 | + error_out(e) |
2424 | + |
2425 | + |
2426 | +def get_os_version_package(pkg, fatal=True): |
2427 | + '''Derive OpenStack version number from an installed package.''' |
2428 | + codename = get_os_codename_package(pkg, fatal=fatal) |
2429 | + |
2430 | + if not codename: |
2431 | + return None |
2432 | + |
2433 | + if 'swift' in pkg: |
2434 | + vers_map = SWIFT_CODENAMES |
2435 | + else: |
2436 | + vers_map = OPENSTACK_CODENAMES |
2437 | + |
2438 | + for version, cname in vers_map.iteritems(): |
2439 | + if cname == codename: |
2440 | + return version |
2441 | + #e = "Could not determine OpenStack version for package: %s" % pkg |
2442 | + #error_out(e) |
2443 | + |
2444 | + |
2445 | +def import_key(keyid): |
2446 | + cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ |
2447 | + "--recv-keys %s" % keyid |
2448 | + try: |
2449 | + subprocess.check_call(cmd.split(' ')) |
2450 | + except subprocess.CalledProcessError: |
2451 | + error_out("Error importing repo key %s" % keyid) |
2452 | + |
2453 | + |
2454 | +def configure_installation_source(rel): |
2455 | + '''Configure apt installation source.''' |
2456 | + if rel == 'distro': |
2457 | + return |
2458 | + elif rel[:4] == "ppa:": |
2459 | + src = rel |
2460 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
2461 | + elif rel[:3] == "deb": |
2462 | + l = len(rel.split('|')) |
2463 | + if l == 2: |
2464 | + src, key = rel.split('|') |
2465 | + juju_log("Importing PPA key from keyserver for %s" % src) |
2466 | + import_key(key) |
2467 | + elif l == 1: |
2468 | + src = rel |
2469 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
2470 | + f.write(src) |
2471 | + elif rel[:6] == 'cloud:': |
2472 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
2473 | + rel = rel.split(':')[1] |
2474 | + u_rel = rel.split('-')[0] |
2475 | + ca_rel = rel.split('-')[1] |
2476 | + |
2477 | + if u_rel != ubuntu_rel: |
2478 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
2479 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
2480 | + error_out(e) |
2481 | + |
2482 | + if 'staging' in ca_rel: |
2483 | + # staging is just a regular PPA. |
2484 | + os_rel = ca_rel.split('/')[0] |
2485 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
2486 | + cmd = 'add-apt-repository -y %s' % ppa |
2487 | + subprocess.check_call(cmd.split(' ')) |
2488 | + return |
2489 | + |
2490 | + # map charm config options to actual archive pockets. |
2491 | + pockets = { |
2492 | + 'folsom': 'precise-updates/folsom', |
2493 | + 'folsom/updates': 'precise-updates/folsom', |
2494 | + 'folsom/proposed': 'precise-proposed/folsom', |
2495 | + 'grizzly': 'precise-updates/grizzly', |
2496 | + 'grizzly/updates': 'precise-updates/grizzly', |
2497 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
2498 | + 'havana': 'precise-updates/havana', |
2499 | + 'havana/updates': 'precise-updates/havana', |
2500 | + 'havana/proposed': 'precise-proposed/havana', |
2501 | + } |
2502 | + |
2503 | + try: |
2504 | + pocket = pockets[ca_rel] |
2505 | + except KeyError: |
2506 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
2507 | + error_out(e) |
2508 | + |
2509 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
2510 | + apt_install('ubuntu-cloud-keyring', fatal=True) |
2511 | + |
2512 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
2513 | + f.write(src) |
2514 | + else: |
2515 | + error_out("Invalid openstack-release specified: %s" % rel) |
2516 | + |
2517 | + |
2518 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
2519 | + """ |
2520 | + Write an rc file in the charm-delivered directory containing |
2521 | + exported environment variables provided by env_vars. Any charm scripts run |
2522 | + outside the juju hook environment can source this scriptrc to obtain |
2523 | + updated config information necessary to perform health checks or |
2524 | + service changes. |
2525 | + """ |
2526 | + juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
2527 | + if not os.path.exists(os.path.dirname(juju_rc_path)): |
2528 | + os.mkdir(os.path.dirname(juju_rc_path)) |
2529 | + with open(juju_rc_path, 'wb') as rc_script: |
2530 | + rc_script.write( |
2531 | + "#!/bin/bash\n") |
2532 | + [rc_script.write('export %s=%s\n' % (u, p)) |
2533 | + for u, p in env_vars.iteritems() if u != "script_path"] |
2534 | + |
2535 | + |
2536 | +def openstack_upgrade_available(package): |
2537 | + """ |
2538 | + Determines if an OpenStack upgrade is available from installation |
2539 | + source, based on version of installed package. |
2540 | + |
2541 | + :param package: str: Name of installed package. |
2542 | + |
2543 | + :returns: bool: Returns True if configured installation source offers |
2544 | + a newer version of package. |
2545 | + |
2546 | + """ |
2547 | + |
2548 | + src = config('openstack-origin') |
2549 | + cur_vers = get_os_version_package(package) |
2550 | + available_vers = get_os_version_install_source(src) |
2551 | + apt.init() |
2552 | + return apt.version_compare(available_vers, cur_vers) == 1 |
2553 | |
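The codename/version mappings that utils.py introduces are plain dictionary lookups plus string parsing of the installation source. The sketch below mirrors the reverse lookup in `get_os_version_codename` and the `cloud:` parsing in `get_os_codename_install_source`; the function names here are illustrative, and `.items()` is used in place of the charm's Python 2 `.iteritems()`.

```python
from collections import OrderedDict

OPENSTACK_CODENAMES = OrderedDict([
    ('2011.2', 'diablo'), ('2012.1', 'essex'), ('2012.2', 'folsom'),
    ('2013.1', 'grizzly'), ('2013.2', 'havana'), ('2014.1', 'icehouse'),
])


def version_for_codename(codename):
    """Reverse lookup: codename -> version, as get_os_version_codename does."""
    for version, name in OPENSTACK_CODENAMES.items():
        if name == codename:
            return version
    raise ValueError('unknown codename: %s' % codename)


def codename_from_cloud_source(src, ubuntu_rel='precise'):
    """Parse e.g. 'cloud:precise-folsom/updates' down to 'folsom'."""
    pocket = src.split(':')[1]            # 'precise-folsom/updates'
    return pocket.split('%s-' % ubuntu_rel)[1].split('/')[0]
```

`openstack_upgrade_available` then only has to compare the version derived from the configured source against the version of the installed package.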
2554 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
2555 | --- hooks/charmhelpers/core/hookenv.py 2013-07-05 16:17:32 +0000 |
2556 | +++ hooks/charmhelpers/core/hookenv.py 2013-08-14 22:57:15 +0000 |
2557 | @@ -197,7 +197,7 @@ |
2558 | relid_cmd_line = ['relation-ids', '--format=json'] |
2559 | if reltype is not None: |
2560 | relid_cmd_line.append(reltype) |
2561 | - return json.loads(subprocess.check_output(relid_cmd_line)) |
2562 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
2563 | return [] |
2564 | |
2565 | |
2566 | @@ -208,7 +208,7 @@ |
2567 | units_cmd_line = ['relation-list', '--format=json'] |
2568 | if relid is not None: |
2569 | units_cmd_line.extend(('-r', relid)) |
2570 | - return json.loads(subprocess.check_output(units_cmd_line)) |
2571 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
2572 | |
2573 | |
2574 | @cached |
2575 | @@ -335,5 +335,6 @@ |
2576 | return decorated |
2577 | return wrapper |
2578 | |
2579 | + |
2580 | def charm_dir(): |
2581 | return os.environ.get('CHARM_DIR') |
2582 | |
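The `or []` added to `relation_ids` and `relation_list` above guards against `relation-ids --format=json` printing `null` when no relations exist, which `json.loads` turns into `None`. A minimal sketch of the pattern (the helper name is made up for the example):

```python
import json


def parse_relation_ids(cli_output):
    """Normalise the JSON output of 'relation-ids --format=json':
    'null' (no relations) becomes an empty list instead of None."""
    return json.loads(cli_output) or []
```

Without the guard, callers iterating over the result would raise `TypeError: 'NoneType' object is not iterable` on units with no relations.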
2583 | === modified file 'hooks/charmhelpers/core/host.py' |
2584 | --- hooks/charmhelpers/core/host.py 2013-07-05 16:17:32 +0000 |
2585 | +++ hooks/charmhelpers/core/host.py 2013-08-14 22:57:15 +0000 |
2586 | @@ -9,12 +9,14 @@ |
2587 | import os |
2588 | import pwd |
2589 | import grp |
2590 | +import random |
2591 | +import string |
2592 | import subprocess |
2593 | import hashlib |
2594 | |
2595 | from collections import OrderedDict |
2596 | |
2597 | -from hookenv import log, execution_environment |
2598 | +from hookenv import log |
2599 | |
2600 | |
2601 | def service_start(service_name): |
2602 | @@ -39,6 +41,18 @@ |
2603 | return subprocess.call(cmd) == 0 |
2604 | |
2605 | |
2606 | +def service_running(service): |
2607 | + try: |
2608 | + output = subprocess.check_output(['service', service, 'status']) |
2609 | + except subprocess.CalledProcessError: |
2610 | + return False |
2611 | + else: |
2612 | + if ("start/running" in output or "is running" in output): |
2613 | + return True |
2614 | + else: |
2615 | + return False |
2616 | + |
2617 | + |
2618 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
2619 | """Add a user""" |
2620 | try: |
2621 | @@ -74,36 +88,33 @@ |
2622 | |
2623 | def rsync(from_path, to_path, flags='-r', options=None): |
2624 | """Replicate the contents of a path""" |
2625 | - context = execution_environment() |
2626 | options = options or ['--delete', '--executability'] |
2627 | cmd = ['/usr/bin/rsync', flags] |
2628 | cmd.extend(options) |
2629 | - cmd.append(from_path.format(**context)) |
2630 | - cmd.append(to_path.format(**context)) |
2631 | + cmd.append(from_path) |
2632 | + cmd.append(to_path) |
2633 | log(" ".join(cmd)) |
2634 | return subprocess.check_output(cmd).strip() |
2635 | |
2636 | |
2637 | def symlink(source, destination): |
2638 | """Create a symbolic link""" |
2639 | - context = execution_environment() |
2640 | log("Symlinking {} as {}".format(source, destination)) |
2641 | cmd = [ |
2642 | 'ln', |
2643 | '-sf', |
2644 | - source.format(**context), |
2645 | - destination.format(**context) |
2646 | + source, |
2647 | + destination, |
2648 | ] |
2649 | subprocess.check_call(cmd) |
2650 | |
2651 | |
2652 | def mkdir(path, owner='root', group='root', perms=0555, force=False): |
2653 | """Create a directory""" |
2654 | - context = execution_environment() |
2655 | log("Making dir {} {}:{} {:o}".format(path, owner, group, |
2656 | perms)) |
2657 | - uid = pwd.getpwnam(owner.format(**context)).pw_uid |
2658 | - gid = grp.getgrnam(group.format(**context)).gr_gid |
2659 | + uid = pwd.getpwnam(owner).pw_uid |
2660 | + gid = grp.getgrnam(group).gr_gid |
2661 | realpath = os.path.abspath(path) |
2662 | if os.path.exists(realpath): |
2663 | if force and not os.path.isdir(realpath): |
2664 | @@ -114,28 +125,15 @@ |
2665 | os.chown(realpath, uid, gid) |
2666 | |
2667 | |
2668 | -def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs): |
2669 | +def write_file(path, content, owner='root', group='root', perms=0444): |
2670 | """Create or overwrite a file with the contents of a string""" |
2671 | - context = execution_environment() |
2672 | - context.update(kwargs) |
2673 | - log("Writing file {} {}:{} {:o}".format(path, owner, group, |
2674 | - perms)) |
2675 | - uid = pwd.getpwnam(owner.format(**context)).pw_uid |
2676 | - gid = grp.getgrnam(group.format(**context)).gr_gid |
2677 | - with open(path.format(**context), 'w') as target: |
2678 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
2679 | + uid = pwd.getpwnam(owner).pw_uid |
2680 | + gid = grp.getgrnam(group).gr_gid |
2681 | + with open(path, 'w') as target: |
2682 | os.fchown(target.fileno(), uid, gid) |
2683 | os.fchmod(target.fileno(), perms) |
2684 | - target.write(fmtstr.format(**context)) |
2685 | - |
2686 | - |
2687 | -def render_template_file(source, destination, **kwargs): |
2688 | - """Create or overwrite a file using a template""" |
2689 | - log("Rendering template {} for {}".format(source, |
2690 | - destination)) |
2691 | - context = execution_environment() |
2692 | - with open(source.format(**context), 'r') as template: |
2693 | - write_file(destination.format(**context), template.read(), |
2694 | - **kwargs) |
2695 | + target.write(content) |
2696 | |
2697 | |
2698 | def filter_installed_packages(packages): |
2699 | @@ -271,3 +269,15 @@ |
2700 | k, v = l.split('=') |
2701 | d[k.strip()] = v.strip() |
2702 | return d |
2703 | + |
2704 | + |
2705 | +def pwgen(length=None): |
2706 | + '''Generate a random password.''' |
2707 | + if length is None: |
2708 | + length = random.choice(range(35, 45)) |
2709 | + alphanumeric_chars = [ |
2710 | + l for l in (string.letters + string.digits) |
2711 | + if l not in 'l0QD1vAEIOUaeiou'] |
2712 | + random_chars = [ |
2713 | + random.choice(alphanumeric_chars) for _ in range(length)] |
2714 | + return(''.join(random_chars)) |
2715 | |
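The `pwgen()` helper added above targets Python 2 (`string.letters`). A minimal Python 3 sketch of the same idea, for readers following along on a modern interpreter (the `AMBIGUOUS` name is this sketch's own, not part of the charm-helpers API):

```python
import random
import string

# Characters the helper above excludes: visually ambiguous glyphs plus
# vowels (which also avoids generating accidental words).
AMBIGUOUS = set('l0QD1vAEIOUaeiou')


def pwgen(length=None):
    """Generate a random alphanumeric password.

    Python 3 port of the helper above: string.letters is Python 2
    only, so string.ascii_letters is used here instead.
    """
    if length is None:
        # random.choice(range(35, 45)) picks an int from 35..44 inclusive
        length = random.choice(range(35, 45))
    chars = [c for c in string.ascii_letters + string.digits
             if c not in AMBIGUOUS]
    return ''.join(random.choice(chars) for _ in range(length))
```

Note this is suitable for charm-internal service passwords, not as a general cryptographic primitive (`random` is not a CSPRNG).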
2716 | === modified file 'hooks/glance_contexts.py' |
2717 | --- hooks/glance_contexts.py 2013-07-05 21:13:02 +0000 |
2718 | +++ hooks/glance_contexts.py 2013-08-14 22:57:15 +0000 |
2719 | @@ -8,7 +8,7 @@ |
2720 | ApacheSSLContext as SSLContext, |
2721 | ) |
2722 | |
2723 | -from charmhelpers.contrib.hahelpers.cluster_utils import ( |
2724 | +from charmhelpers.contrib.hahelpers.cluster import ( |
2725 | determine_api_port, |
2726 | determine_haproxy_port, |
2727 | ) |
2728 | |
2729 | === modified file 'hooks/glance_relations.py' |
2730 | --- hooks/glance_relations.py 2013-07-11 15:07:15 +0000 |
2731 | +++ hooks/glance_relations.py 2013-08-14 22:57:15 +0000 |
2732 | @@ -13,7 +13,6 @@ |
2733 | PACKAGES, |
2734 | SERVICES, |
2735 | CHARM, |
2736 | - SERVICE_NAME, |
2737 | GLANCE_REGISTRY_CONF, |
2738 | GLANCE_REGISTRY_PASTE_INI, |
2739 | GLANCE_API_CONF, |
2740 | @@ -22,12 +21,13 @@ |
2741 | CEPH_CONF, ) |
2742 | |
2743 | from charmhelpers.core.hookenv import ( |
2744 | - config as charm_conf, |
2745 | + config, |
2746 | Hooks, |
2747 | log as juju_log, |
2748 | relation_get, |
2749 | relation_set, |
2750 | relation_ids, |
2751 | + service_name, |
2752 | unit_get, |
2753 | UnregisteredHookError, ) |
2754 | |
2755 | @@ -37,16 +37,14 @@ |
2756 | apt_update, |
2757 | service_stop, ) |
2758 | |
2759 | -from charmhelpers.contrib.hahelpers.cluster_utils import ( |
2760 | +from charmhelpers.contrib.hahelpers.cluster import ( |
2761 | eligible_leader, |
2762 | is_clustered, ) |
2763 | |
2764 | -from charmhelpers.contrib.openstack.openstack_utils import ( |
2765 | +from charmhelpers.contrib.openstack.utils import ( |
2766 | configure_installation_source, |
2767 | get_os_codename_package, |
2768 | - get_os_codename_install_source, |
2769 | - get_os_version_codename, |
2770 | - save_script_rc, |
2771 | + openstack_upgrade_available, |
2772 | lsb_release, ) |
2773 | |
2774 | from subprocess import ( |
2775 | @@ -58,13 +56,12 @@ |
2776 | |
2777 | CONFIGS = register_configs() |
2778 | |
2779 | -config = charm_conf() |
2780 | |
2781 | @hooks.hook('install') |
2782 | def install_hook(): |
2783 | juju_log('Installing glance packages') |
2784 | |
2785 | - src = config['openstack-origin'] |
2786 | + src = config('openstack-origin') |
2787 | if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and |
2788 | src == 'distro'): |
2789 | src = 'cloud:precise-folsom' |
2790 | @@ -82,7 +79,7 @@ |
2791 | |
2792 | @hooks.hook('shared-db-relation-joined') |
2793 | def db_joined(): |
2794 | - relation_set(database=config['database'], username=config['database-user'], |
2795 | + relation_set(database=config('database'), username=config('database-user'), |
2796 | hostname=unit_get('private-address')) |
2797 | |
2798 | |
2799 | @@ -123,13 +120,13 @@ |
2800 | |
2801 | host = unit_get('private-address') |
2802 | if is_clustered(): |
2803 | - host = config["vip"] |
2804 | + host = config("vip") |
2805 | |
2806 | relation_data = { |
2807 | - 'glance-api-server': "%s://%s:9292" % (scheme, host), } |
2808 | + 'glance_api_server': "%s://%s:9292" % (scheme, host), } |
2809 | |
2810 | - juju_log("%s: image-service_joined: To peer glance-api-server=%s" % |
2811 | - (CHARM, relation_data['glance-api-server'])) |
2812 | + juju_log("%s: image-service_joined: To peer glance_api_server=%s" % |
2813 | + (CHARM, relation_data['glance_api_server'])) |
2814 | |
2815 | relation_set(relation_id=relation_id, **relation_data) |
2816 | |
2817 | @@ -164,7 +161,7 @@ |
2818 | juju_log('ceph relation incomplete. Peer not ready?') |
2819 | return |
2820 | |
2821 | - if not ensure_ceph_keyring(service=SERVICE_NAME): |
2822 | + if not ensure_ceph_keyring(service=service_name()): |
2823 | juju_log('Could not create ceph keyring: peer not ready?') |
2824 | return |
2825 | |
2826 | @@ -172,7 +169,7 @@ |
2827 | CONFIGS.write(CEPH_CONF) |
2828 | |
2829 | if eligible_leader(CLUSTER_RES): |
2830 | - ensure_ceph_pool(service=SERVICE_NAME) |
2831 | + ensure_ceph_pool(service=service_name()) |
2832 | |
2833 | |
2834 | @hooks.hook('identity-service-relation-joined') |
2835 | @@ -187,13 +184,13 @@ |
2836 | |
2837 | host = unit_get('private-address') |
2838 | if is_clustered(): |
2839 | - host = config["vip"] |
2840 | + host = config("vip") |
2841 | |
2842 | url = "%s://%s:9292" % (scheme, host) |
2843 | |
2844 | relation_data = { |
2845 | 'service': 'glance', |
2846 | - 'region': config['region'], |
2847 | + 'region': config('region'), |
2848 | 'public_url': url, |
2849 | 'admin_url': url, |
2850 | 'internal_url': url, } |
2851 | @@ -226,25 +223,16 @@ |
2852 | @hooks.hook('config-changed') |
2853 | @restart_on_change(restart_map()) |
2854 | def config_changed(): |
2855 | - # Determine whether or not we should do an upgrade, based on whether or not |
2856 | - # the version offered in openstack-origin is greater than what is installed |
2857 | - install_src = config["openstack-origin"] |
2858 | - available = get_os_codename_install_source(install_src) |
2859 | - installed = get_os_codename_package("glance-common") |
2860 | - |
2861 | - if (available and |
2862 | - get_os_version_codename(available) > |
2863 | - get_os_version_codename(installed)): |
2864 | - juju_log('%s: Upgrading OpenStack release: %s -> %s' % |
2865 | - (CHARM, installed, available)) |
2866 | + if openstack_upgrade_available('glance-common'): |
2867 | + juju_log('Upgrading OpenStack release') |
2868 | do_openstack_upgrade(CONFIGS) |
2869 | |
2870 | configure_https() |
2871 | |
2872 | - env_vars = {'OPENSTACK_PORT_MCASTPORT': config["ha-mcastport"], |
2873 | - 'OPENSTACK_SERVICE_API': "glance-api", |
2874 | - 'OPENSTACK_SERVICE_REGISTRY': "glance-registry"} |
2875 | - save_script_rc(**env_vars) |
2876 | + #env_vars = {'OPENSTACK_PORT_MCASTPORT': config("ha-mcastport"), |
2877 | + # 'OPENSTACK_SERVICE_API': "glance-api", |
2878 | + # 'OPENSTACK_SERVICE_REGISTRY': "glance-registry"} |
2879 | + #save_script_rc(**env_vars) |
2880 | |
2881 | |
2882 | @hooks.hook('cluster-relation-changed') |
2883 | @@ -261,11 +249,11 @@ |
2884 | |
2885 | @hooks.hook('ha-relation-joined') |
2886 | def ha_relation_joined(): |
2887 | - corosync_bindiface = config["ha-bindiface"] |
2888 | - corosync_mcastport = config["ha-mcastport"] |
2889 | - vip = config["vip"] |
2890 | - vip_iface = config["vip_iface"] |
2891 | - vip_cidr = config["vip_cidr"] |
2892 | + corosync_bindiface = config("ha-bindiface") |
2893 | + corosync_mcastport = config("ha-mcastport") |
2894 | + vip = config("vip") |
2895 | + vip_iface = config("vip_iface") |
2896 | + vip_cidr = config("vip_cidr") |
2897 | |
2898 | #if vip and vip_iface and vip_cidr and \ |
2899 | # corosync_bindiface and corosync_mcastport: |
2900 | @@ -299,7 +287,7 @@ |
2901 | juju_log('glance subordinate is not fully clustered.') |
2902 | return |
2903 | if eligible_leader(CLUSTER_RES): |
2904 | - host = config["vip"] |
2905 | + host = config("vip") |
2906 | scheme = "http" |
2907 | if 'https' in CONFIGS.complete_contexts(): |
2908 | scheme = "https" |
2909 | @@ -309,14 +297,14 @@ |
2910 | for r_id in relation_ids('identity-service'): |
2911 | relation_set(relation_id=r_id, |
2912 | service="glance", |
2913 | - region=config["region"], |
2914 | + region=config("region"), |
2915 | public_url=url, |
2916 | admin_url=url, |
2917 | internal_url=url) |
2918 | |
2919 | for r_id in relation_ids('image-service'): |
2920 | relation_data = { |
2921 | - 'glance-api-server': url, } |
2922 | + 'glance_api_server': url, } |
2923 | relation_set(relation_id=r_id, **relation_data) |
2924 | |
2925 | |
2926 | @@ -333,7 +321,6 @@ |
2927 | else: |
2928 | cmd = ['a2dissite', 'openstack_https_frontend'] |
2929 | check_call(cmd) |
2930 | - return |
2931 | |
2932 | for r_id in relation_ids('identity-service'): |
2933 | keystone_joined(relation_id=r_id) |
2934 | |
2935 | === modified file 'hooks/glance_utils.py' |
2936 | --- hooks/glance_utils.py 2013-07-11 14:51:24 +0000 |
2937 | +++ hooks/glance_utils.py 2013-08-14 22:57:15 +0000 |
2938 | @@ -16,24 +16,23 @@ |
2939 | log as juju_log, |
2940 | relation_get, |
2941 | relation_ids, |
2942 | - related_units, |
2943 | - service_name, ) |
2944 | + related_units, ) |
2945 | |
2946 | from charmhelpers.contrib.openstack import ( |
2947 | templating, |
2948 | context, ) |
2949 | |
2950 | -from charmhelpers.contrib.hahelpers.cluster_utils import ( |
2951 | +from charmhelpers.contrib.hahelpers.cluster import ( |
2952 | eligible_leader, |
2953 | ) |
2954 | |
2955 | -from charmhelpers.contrib.hahelpers.ceph_utils import ( |
2956 | +from charmhelpers.contrib.hahelpers.ceph import ( |
2957 | create_keyring as ceph_create_keyring, |
2958 | create_pool as ceph_create_pool, |
2959 | keyring_path as ceph_keyring_path, |
2960 | pool_exists as ceph_pool_exists, ) |
2961 | |
2962 | -from charmhelpers.contrib.openstack.openstack_utils import ( |
2963 | +from charmhelpers.contrib.openstack.utils import ( |
2964 | get_os_codename_install_source, |
2965 | get_os_codename_package, |
2966 | configure_installation_source, ) |
2967 | @@ -48,7 +47,6 @@ |
2968 | "glance-api", "glance-registry", ] |
2969 | |
2970 | CHARM = "glance" |
2971 | -SERVICE_NAME = service_name() |
2972 | |
2973 | GLANCE_REGISTRY_CONF = "/etc/glance/glance-registry.conf" |
2974 | GLANCE_REGISTRY_PASTE_INI = "/etc/glance/glance-registry-paste.ini" |
2975 | |
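The hunk above drops the module-level `SERVICE_NAME = service_name()` constant in favour of calling `service_name()` at each point of use. A hypothetical sketch of why the lazy call matters (names and environment handling here are illustrative, not the real hookenv internals):

```python
# Hypothetical sketch: a module-level constant freezes the value at
# import time and raises outside a hook environment, while a call at
# the point of use succeeds once the hook runs (and can be mocked in
# tests).
_env = {}  # stand-in for os.environ inside a Juju hook


def service_name():
    if 'JUJU_UNIT_NAME' not in _env:
        raise KeyError('not running inside a hook environment')
    return _env['JUJU_UNIT_NAME'].split('/')[0]

# SERVICE_NAME = service_name()   # at import time this would raise


def ensure_ceph_pool():
    # resolved only when the hook actually executes
    return 'pool for %s' % service_name()

_env['JUJU_UNIT_NAME'] = 'glance/0'
```

This is the same import-time-side-effect concern that drives the mocking arrangement in the test module below.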
2976 | === added directory 'tests' |
2977 | === added file 'tests/__init__.py' |
2978 | --- tests/__init__.py 1970-01-01 00:00:00 +0000 |
2979 | +++ tests/__init__.py 2013-08-14 22:57:15 +0000 |
2980 | @@ -0,0 +1,3 @@ |
2981 | +import sys |
2982 | + |
2983 | +sys.path.append('hooks/') |
2984 | |
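The test module that follows swaps `utils.register_configs` and `utils.restart_map` for mocks *before* importing `glance_relations`, then restores them, because `glance_relations` runs `CONFIGS = register_configs()` at module load. A minimal illustration of that save/mock/import/restore pattern, using a hypothetical stand-in module rather than the real charm code:

```python
import types
from unittest.mock import MagicMock

# Hypothetical stand-in for hooks.glance_utils; the real module's
# register_configs() needs a live hook environment.
utils = types.ModuleType('fake_glance_utils')
utils.register_configs = lambda: 'real-configs'

_reg = utils.register_configs         # keep a reference to the real one
utils.register_configs = MagicMock()  # mock it before the import...

# ... `import glance_relations` would run here, so its module-level
# `CONFIGS = register_configs()` picks up the mock:
configs_at_import = utils.register_configs()

utils.register_configs = _reg         # ...and restore it afterwards
```

Restoring the originals after the import keeps the mocks scoped to module load; per-test mocking is then handled by `CharmTestCase` and `TO_PATCH`.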
2985 | === added file 'tests/test_glance_relations.py' |
2986 | --- tests/test_glance_relations.py 1970-01-01 00:00:00 +0000 |
2987 | +++ tests/test_glance_relations.py 2013-08-14 22:57:15 +0000 |
2988 | @@ -0,0 +1,517 @@ |
2989 | +from mock import call, patch, MagicMock |
2990 | + |
2991 | +from tests.test_utils import CharmTestCase |
2992 | + |
2993 | +import hooks.glance_utils as utils |
2994 | + |
2995 | +_reg = utils.register_configs |
2996 | +_map = utils.restart_map |
2997 | + |
2998 | +utils.register_configs = MagicMock() |
2999 | +utils.restart_map = MagicMock() |
3000 | + |
3001 | +import hooks.glance_relations as relations |
3002 | + |
3003 | +utils.register_configs = _reg |
3004 | +utils.restart_map = _map |
3005 | + |
3006 | +TO_PATCH = [ |
3007 | + # charmhelpers.core.hookenv |
3008 | + 'Hooks', |
3009 | + 'config', |
3010 | + 'juju_log', |
3011 | + 'relation_ids', |
3012 | + 'relation_set', |
3013 | + 'relation_get', |
3014 | + 'service_name', |
3015 | + 'unit_get', |
3016 | + # charmhelpers.core.host |
3017 | + 'apt_install', |
3018 | + 'apt_update', |
3019 | + 'restart_on_change', |
3020 | + 'service_stop', |
3021 | + #charmhelpers.contrib.openstack.utils |
3022 | + 'configure_installation_source', |
3023 | + 'get_os_codename_package', |
3024 | + 'openstack_upgrade_available', |
3025 | + # charmhelpers.contrib.hahelpers.cluster_utils |
3026 | + 'eligible_leader', |
3027 | + 'is_clustered', |
3028 | + # glance_utils |
3029 | + 'restart_map', |
3030 | + 'register_configs', |
3031 | + 'do_openstack_upgrade', |
3032 | + 'migrate_database', |
3033 | + 'ensure_ceph_keyring', |
3034 | + 'ensure_ceph_pool', |
3035 | + # other |
3036 | + 'getstatusoutput', |
3037 | + 'check_call', |
3038 | +] |
3039 | + |
3040 | + |
3041 | +class GlanceRelationTests(CharmTestCase): |
3042 | + def setUp(self): |
3043 | + super(GlanceRelationTests, self).setUp(relations, TO_PATCH) |
3044 | + self.config.side_effect = self.test_config.get |
3045 | + |
3046 | + def test_install_hook(self): |
3047 | + repo = 'cloud:precise-grizzly' |
3048 | + self.test_config.set('openstack-origin', repo) |
3049 | + self.service_stop.return_value = True |
3050 | + relations.install_hook() |
3051 | + self.configure_installation_source.assert_called_with(repo) |
3052 | + self.assertTrue(self.apt_update.called) |
3053 | + self.apt_install.assert_called_with(['apache2', 'glance', 'python-mysqldb', |
3054 | + 'python-swift', 'python-keystone', |
3055 | + 'uuid', 'haproxy']) |
3056 | + |
3057 | + def test_db_joined(self): |
3058 | + self.unit_get.return_value = 'glance.foohost.com' |
3059 | + relations.db_joined() |
3060 | + self.relation_set.assert_called_with(database='glance', username='glance', |
3061 | + hostname='glance.foohost.com') |
3062 | + self.unit_get.assert_called_with('private-address') |
3063 | + |
3064 | + @patch.object(relations, 'CONFIGS') |
3065 | + def test_db_changed_missing_relation_data(self, configs): |
3066 | + configs.complete_contexts = MagicMock() |
3067 | + configs.complete_contexts.return_value = [] |
3068 | + relations.db_changed() |
3069 | + self.juju_log.assert_called_with( |
3070 | + 'shared-db relation incomplete. Peer not ready?' |
3071 | + ) |
3072 | + |
3073 | + def _shared_db_test(self, configs): |
3074 | + configs.complete_contexts = MagicMock() |
3075 | + configs.complete_contexts.return_value = ['shared-db'] |
3076 | + configs.write = MagicMock() |
3077 | + relations.db_changed() |
3078 | + |
3079 | + @patch.object(relations, 'CONFIGS') |
3080 | + def test_db_changed_no_essex(self, configs): |
3081 | + self._shared_db_test(configs) |
3082 | + self.assertEquals([call('/etc/glance/glance-registry.conf'), |
3083 | + call('/etc/glance/glance-api.conf')], |
3084 | + configs.write.call_args_list) |
3085 | + self.juju_log.assert_called_with( |
3086 | + 'Cluster leader, performing db sync' |
3087 | + ) |
3088 | + self.migrate_database.assert_called_with() |
3089 | + |
3090 | + @patch.object(relations, 'CONFIGS') |
3091 | + def test_db_changed_with_essex_not_setting_version_control(self, configs): |
3092 | + self.get_os_codename_package.return_value = "essex" |
3093 | + self.getstatusoutput.return_value = (0, "version") |
3094 | + self._shared_db_test(configs) |
3095 | + self.assertEquals([call('/etc/glance/glance-registry.conf')], |
3096 | + configs.write.call_args_list) |
3097 | + self.juju_log.assert_called_with( |
3098 | + 'Cluster leader, performing db sync' |
3099 | + ) |
3100 | + self.migrate_database.assert_called_with() |
3101 | + |
3102 | + @patch.object(relations, 'CONFIGS') |
3103 | + def test_db_changed_with_essex_setting_version_control(self, configs): |
3104 | + self.get_os_codename_package.return_value = "essex" |
3105 | + self.getstatusoutput.return_value = (1, "version") |
3106 | + self._shared_db_test(configs) |
3107 | + self.assertEquals([call('/etc/glance/glance-registry.conf')], |
3108 | + configs.write.call_args_list) |
3109 | + self.check_call.assert_called_with( |
3110 | + ["glance-manage", "version_control", "0"] |
3111 | + ) |
3112 | + self.juju_log.assert_called_with( |
3113 | + 'Cluster leader, performing db sync' |
3114 | + ) |
3115 | + self.migrate_database.assert_called_with() |
3116 | + |
3117 | + @patch.object(relations, 'CONFIGS') |
3118 | + def test_image_service_joined_clustered_with_https(self, configs): |
3119 | + configs.complete_contexts = MagicMock() |
3120 | + configs.complete_contexts.return_value = ['https'] |
3121 | + configs.write = MagicMock() |
3122 | + self.unit_get.return_value = 'glance.foohost.com' |
3123 | + self.is_clustered.return_value = True |
3124 | + self.test_config.set('vip', '10.10.10.10') |
3125 | + relations.image_service_joined() |
3126 | + self.assertTrue(self.eligible_leader.called) |
3127 | + self.unit_get.assert_called_with('private-address') |
3128 | + self.relation_set.assert_called_with(relation_id=None, |
3129 | + glance_api_server="https://10.10.10.10:9292") |
3130 | + |
3131 | + @patch.object(relations, 'CONFIGS') |
3132 | + def test_image_service_joined_not_clustered_with_https(self, configs): |
3133 | + configs.complete_contexts = MagicMock() |
3134 | + configs.complete_contexts.return_value = ['https'] |
3135 | + configs.write = MagicMock() |
3136 | + self.unit_get.return_value = 'glance.foohost.com' |
3137 | + self.is_clustered.return_value = False |
3138 | + relations.image_service_joined() |
3139 | + self.assertTrue(self.eligible_leader.called) |
3140 | + self.unit_get.assert_called_with('private-address') |
3141 | + self.relation_set.assert_called_with(relation_id=None, |
3142 | + glance_api_server="https://glance.foohost.com:9292") |
3143 | + |
3144 | + @patch.object(relations, 'CONFIGS') |
3145 | + def test_image_service_joined_clustered_with_http(self, configs): |
3146 | + configs.complete_contexts = MagicMock() |
3147 | + configs.complete_contexts.return_value = [''] |
3148 | + configs.write = MagicMock() |
3149 | + self.unit_get.return_value = 'glance.foohost.com' |
3150 | + self.is_clustered.return_value = True |
3151 | + self.test_config.set('vip', '10.10.10.10') |
3152 | + relations.image_service_joined() |
3153 | + self.assertTrue(self.eligible_leader.called) |
3154 | + self.unit_get.assert_called_with('private-address') |
3155 | + self.relation_set.assert_called_with(relation_id=None, |
3156 | + glance_api_server="http://10.10.10.10:9292") |
3157 | + |
3158 | + @patch.object(relations, 'CONFIGS') |
3159 | + def test_image_service_joined_not_clustered_with_http(self, configs): |
3160 | + configs.complete_contexts = MagicMock() |
3161 | + configs.complete_contexts.return_value = [] |
3162 | + configs.write = MagicMock() |
3163 | + self.unit_get.return_value = 'glance.foohost.com' |
3164 | + self.is_clustered.return_value = False |
3165 | + relations.image_service_joined() |
3166 | + self.assertTrue(self.eligible_leader.called) |
3167 | + self.unit_get.assert_called_with('private-address') |
3168 | + self.relation_set.assert_called_with(relation_id=None, |
3169 | + glance_api_server="http://glance.foohost.com:9292") |
3170 | + |
3171 | + @patch.object(relations, 'CONFIGS') |
3172 | + def test_object_store_joined_without_identity_service(self, configs): |
3173 | + configs.complete_contexts = MagicMock() |
3174 | + configs.complete_contexts.return_value = [''] |
3175 | + configs.write = MagicMock() |
3176 | + relations.object_store_joined() |
3177 | + self.juju_log.assert_called_with( |
3178 | + 'Deferring swift stora configuration until ' |
3179 | + 'an identity-service relation exists' |
3180 | + ) |
3181 | + |
3182 | + @patch.object(relations, 'CONFIGS') |
3183 | + def test_object_store_joined_with_identity_service_without_object_store(self, configs): |
3184 | + configs.complete_contexts = MagicMock() |
3185 | + configs.complete_contexts.return_value = ['identity-service'] |
3186 | + configs.write = MagicMock() |
3187 | + relations.object_store_joined() |
3188 | + self.juju_log.assert_called_with( |
3189 | + 'swift relation incomplete' |
3190 | + ) |
3191 | + |
3192 | + @patch.object(relations, 'CONFIGS') |
3193 | + def test_object_store_joined_with_identity_service_with_object_store(self, configs): |
3194 | + configs.complete_contexts = MagicMock() |
3195 | + configs.complete_contexts.return_value = ['identity-service', 'object-store'] |
3196 | + configs.write = MagicMock() |
3197 | + relations.object_store_joined() |
3198 | + self.assertEquals([call('/etc/glance/glance-api.conf')], |
3199 | + configs.write.call_args_list) |
3200 | + |
3201 | + @patch('os.mkdir') |
3202 | + @patch('os.path.isdir') |
3203 | + def test_ceph_joined(self, isdir, mkdir): |
3204 | + isdir.return_value = False |
3205 | + relations.ceph_joined() |
3206 | + mkdir.assert_called_with('/etc/ceph') |
3207 | + self.apt_install.assert_called_with(['ceph-common', 'python-ceph']) |
3208 | + |
3209 | + @patch.object(relations, 'CONFIGS') |
3210 | + def test_ceph_changed_missing_relation_data(self, configs): |
3211 | + configs.complete_contexts = MagicMock() |
3212 | + configs.complete_contexts.return_value = [] |
3213 | + configs.write = MagicMock() |
3214 | + relations.ceph_changed() |
3215 | + self.juju_log.assert_called_with( |
3216 | + 'ceph relation incomplete. Peer not ready?' |
3217 | + ) |
3218 | + |
3219 | + @patch.object(relations, 'CONFIGS') |
3220 | + def test_ceph_changed_no_keyring(self, configs): |
3221 | + configs.complete_contexts = MagicMock() |
3222 | + configs.complete_contexts.return_value = ['ceph'] |
3223 | + configs.write = MagicMock() |
3224 | + self.ensure_ceph_keyring.return_value = False |
3225 | + relations.ceph_changed() |
3226 | + self.juju_log.assert_called_with( |
3227 | + 'Could not create ceph keyring: peer not ready?' |
3228 | + ) |
3229 | + |
3230 | + @patch.object(relations, 'CONFIGS') |
3231 | + def test_ceph_changed_with_key_and_relation_data(self, configs): |
3232 | + configs.complete_contexts = MagicMock() |
3233 | + configs.complete_contexts.return_value = ['ceph'] |
3234 | + configs.write = MagicMock() |
3235 | + self.ensure_ceph_keyring.return_value = True |
3236 | + relations.ceph_changed() |
3237 | + self.assertEquals([call('/etc/glance/glance-api.conf'), |
3238 | + call('/etc/ceph/ceph.conf')], |
3239 | + configs.write.call_args_list) |
3240 | + self.ensure_ceph_pool.assert_called_with(service=self.service_name()) |
3241 | + |
3242 | + @patch.object(relations, 'CONFIGS') |
3243 | + def test_keystone_joined_not_clustered(self, configs): |
3244 | + configs.complete_contexts = MagicMock() |
3245 | + configs.complete_contexts.return_value = [''] |
3246 | + configs.write = MagicMock() |
3247 | + self.unit_get.return_value = 'glance.foohost.com' |
3248 | + self.test_config.set('region', 'FirstRegion') |
3249 | + self.is_clustered.return_value = False |
3250 | + relations.keystone_joined() |
3251 | + self.unit_get.assert_called_with('private-address') |
3252 | + self.relation_set.assert_called_with( |
3253 | + relation_id=None, |
3254 | + service='glance', |
3255 | + region='FirstRegion', |
3256 | + public_url='http://glance.foohost.com:9292', |
3257 | + admin_url='http://glance.foohost.com:9292', |
3258 | + internal_url='http://glance.foohost.com:9292', |
3259 | + ) |
3260 | + |
3261 | + @patch.object(relations, 'CONFIGS') |
3262 | + def test_keystone_joined_clustered(self, configs): |
3263 | + configs.complete_contexts = MagicMock() |
3264 | + configs.complete_contexts.return_value = [''] |
3265 | + configs.write = MagicMock() |
3266 | + self.unit_get.return_value = 'glance.foohost.com' |
3267 | + self.test_config.set('region', 'FirstRegion') |
3268 | + self.test_config.set('vip', '10.10.10.10') |
3269 | + self.is_clustered.return_value = True |
3270 | + relations.keystone_joined() |
3271 | + self.unit_get.assert_called_with('private-address') |
3272 | + self.relation_set.assert_called_with( |
3273 | + relation_id=None, |
3274 | + service='glance', |
3275 | + region='FirstRegion', |
3276 | + public_url='http://10.10.10.10:9292', |
3277 | + admin_url='http://10.10.10.10:9292', |
3278 | + internal_url='http://10.10.10.10:9292', |
3279 | + ) |
3280 | + |
3281 | + |
3282 | + @patch.object(relations, 'CONFIGS') |
3283 | + def test_keystone_joined_not_clustered_with_https(self, configs): |
3284 | + configs.complete_contexts = MagicMock() |
3285 | + configs.complete_contexts.return_value = ['https'] |
3286 | + configs.write = MagicMock() |
3287 | + self.unit_get.return_value = 'glance.foohost.com' |
3288 | + self.test_config.set('region', 'FirstRegion') |
3289 | + self.is_clustered.return_value = False |
3290 | + relations.keystone_joined() |
3291 | + self.unit_get.assert_called_with('private-address') |
3292 | + self.relation_set.assert_called_with( |
3293 | + relation_id=None, |
3294 | + service='glance', |
3295 | + region='FirstRegion', |
3296 | + public_url='https://glance.foohost.com:9292', |
3297 | + admin_url='https://glance.foohost.com:9292', |
3298 | + internal_url='https://glance.foohost.com:9292', |
3299 | + ) |
3300 | + |
3301 | + @patch.object(relations, 'CONFIGS') |
3302 | + def test_keystone_joined_clustered_with_https(self, configs): |
3303 | + configs.complete_contexts = MagicMock() |
3304 | + configs.complete_contexts.return_value = ['https'] |
3305 | + configs.write = MagicMock() |
3306 | + self.unit_get.return_value = 'glance.foohost.com' |
3307 | + self.test_config.set('region', 'FirstRegion') |
3308 | + self.test_config.set('vip', '10.10.10.10') |
3309 | + self.is_clustered.return_value = True |
3310 | + relations.keystone_joined() |
3311 | + self.unit_get.assert_called_with('private-address') |
3312 | + self.relation_set.assert_called_with( |
3313 | + relation_id=None, |
3314 | + service='glance', |
3315 | + region='FirstRegion', |
3316 | + public_url='https://10.10.10.10:9292', |
3317 | + admin_url='https://10.10.10.10:9292', |
3318 | + internal_url='https://10.10.10.10:9292', |
3319 | + ) |
3320 | + |
3321 | + @patch.object(relations, 'configure_https') |
3322 | + @patch.object(relations, 'CONFIGS') |
3323 | + def test_keystone_changed_no_object_store_relation(self, configs, configure_https): |
3324 | + configs.complete_contexts = MagicMock() |
3325 | + configs.complete_contexts.return_value = ['identity-service'] |
3326 | + configs.write = MagicMock() |
3327 | + self.relation_ids.return_value = False |
3328 | + relations.keystone_changed() |
3329 | + self.assertEquals([call('/etc/glance/glance-api.conf'), |
3330 | + call('/etc/glance/glance-registry.conf'), |
3331 | + call('/etc/glance/glance-api-paste.ini'), |
3332 | + call('/etc/glance/glance-registry-paste.ini')], |
3333 | + configs.write.call_args_list) |
3334 | + self.assertTrue(configure_https.called) |
3335 | + |
3336 | + @patch.object(relations, 'configure_https') |
3337 | + @patch.object(relations, 'object_store_joined') |
3338 | + @patch.object(relations, 'CONFIGS') |
3339 | + def test_keystone_changed_with_object_store_relation(self, configs, object_store_joined, configure_https): |
3340 | + configs.complete_contexts = MagicMock() |
3341 | + configs.complete_contexts.return_value = ['identity-service'] |
3342 | + configs.write = MagicMock() |
3343 | + self.relation_ids.return_value = True |
3344 | + relations.keystone_changed() |
3345 | + self.assertEquals([call('/etc/glance/glance-api.conf'), |
3346 | + call('/etc/glance/glance-registry.conf'), |
3347 | + call('/etc/glance/glance-api-paste.ini'), |
3348 | + call('/etc/glance/glance-registry-paste.ini')], |
3349 | + configs.write.call_args_list) |
3350 | + object_store_joined.assert_called_with() |
3351 | + self.assertTrue(configure_https.called) |
3352 | + |
3353 | + @patch.object(relations, 'configure_https') |
3354 | + def test_config_changed_no_openstack_upgrade(self, configure_https): |
3355 | + self.openstack_upgrade_available.return_value = False |
3356 | + relations.config_changed() |
3357 | + self.assertTrue(configure_https.called) |
3358 | + |
3359 | + @patch.object(relations, 'configure_https') |
3360 | + def test_config_changed_with_openstack_upgrade(self, configure_https): |
3361 | + self.openstack_upgrade_available.return_value = True |
3362 | + relations.config_changed() |
3363 | + self.juju_log.assert_called_with( |
3364 | + 'Upgrading OpenStack release' |
3365 | + ) |
3366 | + self.assertTrue(self.do_openstack_upgrade.called) |
3367 | + self.assertTrue(configure_https.called) |
3368 | + |
3369 | + @patch.object(relations, 'CONFIGS') |
3370 | + def test_cluster_changed(self, configs): |
3371 | + configs.complete_contexts = MagicMock() |
3372 | + configs.complete_contexts.return_value = ['cluster'] |
3373 | + configs.write = MagicMock() |
3374 | + relations.cluster_changed() |
3375 | + self.assertEquals([call('/etc/glance/glance-api.conf'), |
3376 | + call('/etc/haproxy/haproxy.cfg')], |
3377 | + configs.write.call_args_list) |
3378 | + |
3379 | + @patch.object(relations, 'cluster_changed') |
3380 | + def test_upgrade_charm(self, cluster_changed): |
3381 | + relations.upgrade_charm() |
3382 | + cluster_changed.assert_called_with() |
3383 | + |
3384 | + def test_ha_relation_joined(self): |
3385 | + self.test_config.set('ha-bindiface', 'em0') |
3386 | + self.test_config.set('ha-mcastport', '8080') |
3387 | + self.test_config.set('vip', '10.10.10.10') |
3388 | + self.test_config.set('vip_iface', 'em1') |
3389 | + self.test_config.set('vip_cidr', '24') |
3390 | + relations.ha_relation_joined() |
3391 | + args = { |
3392 | + 'corosync_bindiface': 'em0', |
3393 | + 'corosync_mcastport': '8080', |
3394 | + 'init_services': {'res_glance_haproxy': 'haproxy'}, |
3395 | + 'resources': {'res_glance_vip': 'ocf:heartbeat:IPaddr2', |
3396 | + 'res_glance_haproxy': 'lsb:haproxy'}, |
3397 | + 'resource_params': {'res_glance_vip': 'params ip="10.10.10.10" cidr_netmask="24" nic="em1"', |
3398 | + 'res_glance_haproxy': 'op monitor interval="5s"'}, |
3399 | + 'clones': {'cl_glance_haproxy': 'res_glance_haproxy'} |
3400 | + } |
3401 | + self.relation_set.assert_called_with(**args) |
3402 | + |
3403 | + def test_ha_relation_changed_not_clustered(self): |
3404 | + self.relation_get.return_value = False |
3405 | + relations.ha_relation_changed() |
3406 | + self.juju_log.assert_called_with('glance subordinate is not fully clustered.') |
3407 | + |
3408 | + @patch.object(relations, 'CONFIGS') |
3409 | + def test_ha_relation_changed_with_https(self, configs): |
3410 | + configs.complete_contexts = MagicMock() |
3411 | + configs.complete_contexts.return_value = ['https'] |
3412 | + configs.write = MagicMock() |
3413 | + self.relation_get.return_value = True |
3414 | + self.test_config.set('vip', '10.10.10.10') |
3415 | + self.test_config.set('region', 'FirstRegion') |
3416 | + self.relation_ids.return_value = ['relation-made:0'] |
3417 | + relations.ha_relation_changed() |
3418 | + self.juju_log.assert_called_with('glance: Cluster configured, notifying other services') |
3419 | + self.assertEquals([call('identity-service'), call('image-service')], |
3420 | + self.relation_ids.call_args_list) |
3421 | + ex = [ |
3422 | + call(service='glance', |
3423 | + region='FirstRegion', |
3424 | + public_url='https://10.10.10.10:9292', |
3425 | + internal_url='https://10.10.10.10:9292', |
3426 | + relation_id='relation-made:0', |
3427 | + admin_url='https://10.10.10.10:9292'), |
3428 | + call(glance_api_server='https://10.10.10.10:9292', |
3429 | + relation_id='relation-made:0') |
3430 | + ] |
3431 | + self.assertEquals(ex, self.relation_set.call_args_list) |
3432 | + |
3433 | + @patch.object(relations, 'CONFIGS') |
3434 | + def test_ha_relation_changed_with_http(self, configs): |
3435 | + configs.complete_contexts = MagicMock() |
3436 | + configs.complete_contexts.return_value = [''] |
3437 | + configs.write = MagicMock() |
3438 | + self.relation_get.return_value = True |
3439 | + self.test_config.set('vip', '10.10.10.10') |
3440 | + self.test_config.set('region', 'FirstRegion') |
3441 | + self.relation_ids.return_value = ['relation-made:0'] |
3442 | + relations.ha_relation_changed() |
3443 | + self.juju_log.assert_called_with('glance: Cluster configured, notifying other services') |
3444 | + self.assertEquals([call('identity-service'), call('image-service')], |
3445 | + self.relation_ids.call_args_list) |
3446 | + ex = [ |
3447 | + call(service='glance', |
3448 | + region='FirstRegion', |
3449 | + public_url='http://10.10.10.10:9292', |
3450 | + internal_url='http://10.10.10.10:9292', |
3451 | + relation_id='relation-made:0', |
3452 | + admin_url='http://10.10.10.10:9292'), |
3453 | + call(glance_api_server='http://10.10.10.10:9292', |
3454 | + relation_id='relation-made:0') |
3455 | + ] |
3456 | + self.assertEquals(ex, self.relation_set.call_args_list) |
3457 | + |
3458 | + @patch.object(relations, 'keystone_joined') |
3459 | + @patch.object(relations, 'CONFIGS') |
3460 | + def test_configure_https_enable_with_identity_service(self, configs, keystone_joined): |
3461 | + configs.complete_contexts = MagicMock() |
3462 | + configs.complete_contexts.return_value = ['https'] |
3463 | + configs.write = MagicMock() |
3464 | + self.relation_ids.return_value = ['identity-service:0'] |
3465 | + relations.configure_https() |
3466 | + cmd = ['a2ensite', 'openstack_https_frontend'] |
3467 | + self.check_call.assert_called_with(cmd) |
3468 | + keystone_joined.assert_called_with(relation_id='identity-service:0') |
3469 | + |
3470 | + @patch.object(relations, 'keystone_joined') |
3471 | + @patch.object(relations, 'CONFIGS') |
3472 | + def test_configure_https_disable_with_keystone_joined(self, configs, keystone_joined): |
3473 | + configs.complete_contexts = MagicMock() |
3474 | + configs.complete_contexts.return_value = [''] |
3475 | + configs.write = MagicMock() |
3476 | + self.relation_ids.return_value = ['identity-service:0'] |
3477 | + relations.configure_https() |
3478 | + cmd = ['a2dissite', 'openstack_https_frontend'] |
3479 | + self.check_call.assert_called_with(cmd) |
3480 | + keystone_joined.assert_called_with(relation_id='identity-service:0') |
3481 | + |
3482 | + @patch.object(relations, 'image_service_joined') |
3483 | + @patch.object(relations, 'CONFIGS') |
3484 | + def test_configure_https_enable_with_image_service(self, configs, image_service_joined): |
3485 | + configs.complete_contexts = MagicMock() |
3486 | + configs.complete_contexts.return_value = ['https'] |
3487 | + configs.write = MagicMock() |
3488 | + self.relation_ids.return_value = ['image-service:0'] |
3489 | + relations.configure_https() |
3490 | + cmd = ['a2ensite', 'openstack_https_frontend'] |
3491 | + self.check_call.assert_called_with(cmd) |
3492 | + image_service_joined.assert_called_with(relation_id='image-service:0') |
3493 | + |
3494 | + @patch.object(relations, 'image_service_joined') |
3495 | + @patch.object(relations, 'CONFIGS') |
3496 | + def test_configure_https_disable_with_image_service(self, configs, image_service_joined): |
3497 | + configs.complete_contexts = MagicMock() |
3498 | + configs.complete_contexts.return_value = [''] |
3499 | + configs.write = MagicMock() |
3500 | + self.relation_ids.return_value = ['image-service:0'] |
3501 | + relations.configure_https() |
3502 | + cmd = ['a2dissite', 'openstack_https_frontend'] |
3503 | + self.check_call.assert_called_with(cmd) |
3504 | + image_service_joined.assert_called_with(relation_id='image-service:0') |
3505 | + |
3506 | |
3507 | === added file 'tests/test_utils.py' |
3508 | --- tests/test_utils.py 1970-01-01 00:00:00 +0000 |
3509 | +++ tests/test_utils.py 2013-08-14 22:57:15 +0000 |
3510 | @@ -0,0 +1,118 @@ |
3511 | +import logging |
3512 | +import unittest |
3513 | +import os |
3514 | +import yaml |
3515 | + |
3516 | +from contextlib import contextmanager |
3517 | +from mock import patch, MagicMock |
3518 | + |
3519 | + |
3520 | +def load_config(): |
3521 | + ''' |
3522 | +    Walk backwards from __file__ looking for config.yaml; load and return |
3523 | +    the 'options' section. |
3524 | + ''' |
3525 | + config = None |
3526 | + f = __file__ |
3527 | + while config is None: |
3528 | + d = os.path.dirname(f) |
3529 | + if os.path.isfile(os.path.join(d, 'config.yaml')): |
3530 | + config = os.path.join(d, 'config.yaml') |
3531 | + break |
3532 | + f = d |
3533 | + |
3534 | + if not config: |
3535 | + logging.error('Could not find config.yaml in any parent directory ' |
3536 | +                      'of %s.' % __file__) |
3537 | + raise Exception |
3538 | + |
3539 | + return yaml.safe_load(open(config).read())['options'] |
3540 | + |
3541 | + |
3542 | +def get_default_config(): |
3543 | + ''' |
3544 | + Load default charm config from config.yaml return as a dict. |
3545 | + If no default is set in config.yaml, its value is None. |
3546 | + ''' |
3547 | + default_config = {} |
3548 | + config = load_config() |
3549 | + for k, v in config.iteritems(): |
3550 | + if 'default' in v: |
3551 | + default_config[k] = v['default'] |
3552 | + else: |
3553 | + default_config[k] = None |
3554 | + return default_config |
3555 | + |
3556 | + |
3557 | +class CharmTestCase(unittest.TestCase): |
3558 | + def setUp(self, obj, patches): |
3559 | + super(CharmTestCase, self).setUp() |
3560 | + self.patches = patches |
3561 | + self.obj = obj |
3562 | + self.test_config = TestConfig() |
3563 | + self.test_relation = TestRelation() |
3564 | + self.patch_all() |
3565 | + |
3566 | + def patch(self, method): |
3567 | + _m = patch.object(self.obj, method) |
3568 | + mock = _m.start() |
3569 | + self.addCleanup(_m.stop) |
3570 | + return mock |
3571 | + |
3572 | + def patch_all(self): |
3573 | + for method in self.patches: |
3574 | + setattr(self, method, self.patch(method)) |
3575 | + |
3576 | + |
3577 | +class TestConfig(object): |
3578 | + def __init__(self): |
3579 | + self.config = get_default_config() |
3580 | + |
3581 | + def get(self, attr=None): |
3582 | + if not attr: |
3583 | + return self.get_all() |
3584 | + try: |
3585 | + return self.config[attr] |
3586 | + except KeyError: |
3587 | + return None |
3588 | + |
3589 | + def get_all(self): |
3590 | + return self.config |
3591 | + |
3592 | + def set(self, attr, value): |
3593 | + if attr not in self.config: |
3594 | + raise KeyError |
3595 | + self.config[attr] = value |
3596 | + |
3597 | + |
3598 | +class TestRelation(object): |
3599 | +    def __init__(self, relation_data=None): |
3600 | +        self.relation_data = relation_data or {} |
3601 | + |
3602 | + def set(self, relation_data): |
3603 | + self.relation_data = relation_data |
3604 | + |
3605 | + def get(self, attr=None, unit=None, rid=None): |
3606 | +        if attr is None: |
3607 | + return self.relation_data |
3608 | + elif attr in self.relation_data: |
3609 | + return self.relation_data[attr] |
3610 | + return None |
3611 | + |
3612 | + |
3613 | +@contextmanager |
3614 | +def patch_open(): |
3615 | + '''Patch open() to allow mocking both open() itself and the file that is |
3616 | + yielded. |
3617 | + |
3618 | + Yields the mock for "open" and "file", respectively.''' |
3619 | + mock_open = MagicMock(spec=open) |
3620 | + mock_file = MagicMock(spec=file) |
3621 | + |
3622 | + @contextmanager |
3623 | + def stub_open(*args, **kwargs): |
3624 | + mock_open(*args, **kwargs) |
3625 | + yield mock_file |
3626 | + |
3627 | + with patch('__builtin__.open', stub_open): |
3628 | + yield mock_open, mock_file |
Generally looks good. A couple of things:
- Can you sync the latest charm-helpers? A lot has been added, and we've already rearranged the layout of the helpers in contrib.cluster and contrib.openstack. Also, with the currently synced version of the helpers, I cannot run the tests without python-dnspython installed. Syncing the latest should resolve that, but it will require some modifications to your charm to import things properly per the restructuring I mentioned.
- Needs some lint cleanup
- Can you add a .coveragerc and adjust the Makefile to also report on code coverage? This will let you know what code you haven't hit with your tests. See the current nova pyredux branches for examples on that.
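For reference, a minimal sketch of what that could look like; the exact section contents are an assumption based on this branch's layout, and the nova pyredux branches remain the authoritative example:

```ini
# .coveragerc: restrict coverage measurement to the charm's own code,
# excluding the synced charm-helpers tree.
[report]
include =
    hooks/glance_*.py
omit =
    hooks/charmhelpers/*
```

The Makefile would then run the suite under the coverage plugin, e.g. `nosetests --with-coverage --cover-package=hooks tests/`, so every test run prints a per-module coverage report.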
It would be good to get glance_utils.py covered by tests, too, but this is a good start and shouldn't block the merge.
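As a sketch of what such coverage could look like, the patch/addCleanup pattern from CharmTestCase applies directly; the `fake_utils` module and `restart_map` return value below are placeholders, not the real glance_utils API:

```python
import types
import unittest
from unittest.mock import patch

# Placeholder standing in for the glance_utils module under test.
fake_utils = types.SimpleNamespace(restart_map=lambda: {'real': 'thing'})


class GlanceUtilsTest(unittest.TestCase):
    def patch(self, obj, attr):
        # Same pattern as CharmTestCase.patch(): start the patch and
        # register its stop() so it is undone after each test.
        _m = patch.object(obj, attr)
        mock = _m.start()
        self.addCleanup(_m.stop)
        return mock

    def test_restart_map(self):
        restart_map = self.patch(fake_utils, 'restart_map')
        restart_map.return_value = {
            '/etc/glance/glance-api.conf': ['glance-api']}
        self.assertEqual(fake_utils.restart_map(),
                         {'/etc/glance/glance-api.conf': ['glance-api']})
```

Because the patch is stopped in cleanup, each test sees the module in its original state, which keeps individual tests independent of ordering.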