Merge lp:~james-page/charms/precise/quantum-gateway/nvp into lp:~charmers/charms/precise/quantum-gateway/trunk

Proposed by James Page
Status: Merged
Merged at revision: 34
Proposed branch: lp:~james-page/charms/precise/quantum-gateway/nvp
Merge into: lp:~charmers/charms/precise/quantum-gateway/trunk
Diff against target: 3998 lines (+2442/-1228)
29 files modified
.bzrignore (+1/-0)
.coveragerc (+6/-0)
.pydevproject (+1/-1)
Makefile (+14/-0)
README.md (+1/-3)
charm-helpers-sync.yaml (+7/-0)
config.yaml (+1/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/ceph.py (+278/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+180/-0)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+72/-0)
hooks/charmhelpers/contrib/openstack/context.py (+271/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+261/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+273/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+269/-0)
hooks/lib/cluster_utils.py (+0/-130)
hooks/lib/openstack_common.py (+0/-230)
hooks/lib/utils.py (+0/-359)
hooks/quantum_contexts.py (+66/-0)
hooks/quantum_hooks.py (+104/-291)
hooks/quantum_utils.py (+224/-141)
setup.cfg (+5/-0)
templates/dhcp_agent.ini (+5/-0)
templates/evacuate_unit.py (+0/-70)
templates/l3_agent.ini (+1/-1)
templates/metadata_agent.ini (+1/-1)
templates/nova.conf (+1/-1)
To merge this branch: bzr merge lp:~james-page/charms/precise/quantum-gateway/nvp
Reviewer: charmers (status: Pending)
Review via email: mp+173954@code.launchpad.net

Description of the change

Add support for the NVP quantum plugin.

In an NVP deployment, the quantum gateway provides DHCP and Metadata services
to instances. L3 routing is provided by NVP itself.
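As a sketch of how the new option is exercised (the service and file names below are illustrative; the only charm-config change in this branch is the new `nvp` value for the existing `plugin` option):

```yaml
# deploy-config.yaml (hypothetical filename, passed to `juju deploy --config`)
quantum-gateway:
  plugin: nvp   # new value added by this branch; 'ovs' remains the default
```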

40. By James Page

First phase of charm-helper refactoring

41. By James Page

Redux to use agreed structure and OS templating

42. By James Page

Update TODO to remove implemented items.

43. By James Page

Drop use of keys

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2013-07-19 09:54:29 +0000
4@@ -0,0 +1,1 @@
5+.coverage
6
7=== added file '.coveragerc'
8--- .coveragerc 1970-01-01 00:00:00 +0000
9+++ .coveragerc 2013-07-19 09:54:29 +0000
10@@ -0,0 +1,6 @@
11+[report]
12+# Regexes for lines to exclude from consideration
13+exclude_lines =
14+ if __name__ == .__main__.:
15+include=
16+ hooks/quantum_*
17
18=== modified file '.pydevproject'
19--- .pydevproject 2013-04-12 16:19:51 +0000
20+++ .pydevproject 2013-07-19 09:54:29 +0000
21@@ -4,6 +4,6 @@
22 <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
23 <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
24 <path>/quantum-gateway/hooks</path>
25-<path>/quantum-gateway/templates</path>
26+<path>/quantum-gateway/unit_tests</path>
27 </pydev_pathproperty>
28 </pydev_project>
29
30=== added file 'Makefile'
31--- Makefile 1970-01-01 00:00:00 +0000
32+++ Makefile 2013-07-19 09:54:29 +0000
33@@ -0,0 +1,14 @@
34+#!/usr/bin/make
35+PYTHON := /usr/bin/env python
36+
37+lint:
38+ @flake8 --exclude hooks/charmhelpers hooks
39+ @flake8 --exclude hooks/charmhelpers unit_tests
40+ @charm proof
41+
42+test:
43+ @echo Starting tests...
44+ @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
45+
46+sync:
47+ @charm-helper-sync -c charm-helpers-sync.yaml
48
49=== modified file 'README.md'
50--- README.md 2012-12-06 10:22:24 +0000
51+++ README.md 2013-07-19 09:54:29 +0000
52@@ -1,4 +1,4 @@
53-Overview
54+
55 --------
56
57 Quantum provides flexible software defined networking (SDN) for OpenStack.
58@@ -54,5 +54,3 @@
59
60 * Provide more network configuration use cases.
61 * Support VLAN in addition to GRE+OpenFlow for L2 separation.
62- * High Avaliability.
63- * Support for propriety plugins for Quantum.
64
65=== added file 'charm-helpers-sync.yaml'
66--- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000
67+++ charm-helpers-sync.yaml 2013-07-19 09:54:29 +0000
68@@ -0,0 +1,7 @@
69+branch: lp:charm-helpers
70+destination: hooks/charmhelpers
71+include:
72+ - core
73+ - contrib.openstack
74+ - contrib.hahelpers
75+ - contrib.network.ovs
76
77=== modified file 'config.yaml'
78--- config.yaml 2013-03-20 16:08:54 +0000
79+++ config.yaml 2013-07-19 09:54:29 +0000
80@@ -7,6 +7,7 @@
81 Supported values include:
82 .
83 ovs - OpenVSwitch
84+ nvp - Nicira NVP
85 ext-port:
86 type: string
87 description: |
88
89=== modified symlink 'hooks/amqp-relation-changed'
90=== target changed u'hooks.py' => u'quantum_hooks.py'
91=== modified symlink 'hooks/amqp-relation-joined'
92=== target changed u'hooks.py' => u'quantum_hooks.py'
93=== added directory 'hooks/charmhelpers'
94=== added file 'hooks/charmhelpers/__init__.py'
95=== added directory 'hooks/charmhelpers/contrib'
96=== added file 'hooks/charmhelpers/contrib/__init__.py'
97=== added directory 'hooks/charmhelpers/contrib/hahelpers'
98=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
99=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
100--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
101+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-07-19 09:54:29 +0000
102@@ -0,0 +1,58 @@
103+#
104+# Copyright 2012 Canonical Ltd.
105+#
106+# This file is sourced from lp:openstack-charm-helpers
107+#
108+# Authors:
109+# James Page <james.page@ubuntu.com>
110+# Adam Gandelman <adamg@ubuntu.com>
111+#
112+
113+import subprocess
114+
115+from charmhelpers.core.hookenv import (
116+ config as config_get,
117+ relation_get,
118+ relation_ids,
119+ related_units as relation_list,
120+ log,
121+ INFO,
122+)
123+
124+
125+def get_cert():
126+ cert = config_get('ssl_cert')
127+ key = config_get('ssl_key')
128+ if not (cert and key):
129+ log("Inspecting identity-service relations for SSL certificate.",
130+ level=INFO)
131+ cert = key = None
132+ for r_id in relation_ids('identity-service'):
133+ for unit in relation_list(r_id):
134+ if not cert:
135+ cert = relation_get('ssl_cert',
136+ rid=r_id, unit=unit)
137+ if not key:
138+ key = relation_get('ssl_key',
139+ rid=r_id, unit=unit)
140+ return (cert, key)
141+
142+
143+def get_ca_cert():
144+ ca_cert = None
145+ log("Inspecting identity-service relations for CA SSL certificate.",
146+ level=INFO)
147+ for r_id in relation_ids('identity-service'):
148+ for unit in relation_list(r_id):
149+ if not ca_cert:
150+ ca_cert = relation_get('ca_cert',
151+ rid=r_id, unit=unit)
152+ return ca_cert
153+
154+
155+def install_ca_cert(ca_cert):
156+ if ca_cert:
157+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
158+ 'w') as crt:
159+ crt.write(ca_cert)
160+ subprocess.check_call(['update-ca-certificates', '--fresh'])
161
162=== added file 'hooks/charmhelpers/contrib/hahelpers/ceph.py'
163--- hooks/charmhelpers/contrib/hahelpers/ceph.py 1970-01-01 00:00:00 +0000
164+++ hooks/charmhelpers/contrib/hahelpers/ceph.py 2013-07-19 09:54:29 +0000
165@@ -0,0 +1,278 @@
166+#
167+# Copyright 2012 Canonical Ltd.
168+#
169+# This file is sourced from lp:openstack-charm-helpers
170+#
171+# Authors:
172+# James Page <james.page@ubuntu.com>
173+# Adam Gandelman <adamg@ubuntu.com>
174+#
175+
176+import commands
177+import os
178+import shutil
179+
180+from subprocess import (
181+ check_call,
182+ check_output,
183+ CalledProcessError
184+)
185+
186+from charmhelpers.core.hookenv import (
187+ relation_get,
188+ relation_ids,
189+ related_units,
190+ log,
191+ INFO,
192+)
193+
194+from charmhelpers.core.host import (
195+ apt_install,
196+ mount,
197+ mounts,
198+ service_start,
199+ service_stop,
200+ umount,
201+)
202+
203+KEYRING = '/etc/ceph/ceph.client.%s.keyring'
204+KEYFILE = '/etc/ceph/ceph.client.%s.key'
205+
206+CEPH_CONF = """[global]
207+ auth supported = %(auth)s
208+ keyring = %(keyring)s
209+ mon host = %(mon_hosts)s
210+"""
211+
212+
213+def running(service):
214+ # this local util can be dropped as soon the following branch lands
215+ # in lp:charm-helpers
216+ # https://code.launchpad.net/~gandelman-a/charm-helpers/service_running/
217+ try:
218+ output = check_output(['service', service, 'status'])
219+ except CalledProcessError:
220+ return False
221+ else:
222+ if ("start/running" in output or "is running" in output):
223+ return True
224+ else:
225+ return False
226+
227+
228+def install():
229+ ceph_dir = "/etc/ceph"
230+ if not os.path.isdir(ceph_dir):
231+ os.mkdir(ceph_dir)
232+ apt_install('ceph-common', fatal=True)
233+
234+
235+def rbd_exists(service, pool, rbd_img):
236+ (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %
237+ (service, pool))
238+ return rbd_img in out
239+
240+
241+def create_rbd_image(service, pool, image, sizemb):
242+ cmd = [
243+ 'rbd',
244+ 'create',
245+ image,
246+ '--size',
247+ str(sizemb),
248+ '--id',
249+ service,
250+ '--pool',
251+ pool
252+ ]
253+ check_call(cmd)
254+
255+
256+def pool_exists(service, name):
257+ (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
258+ return name in out
259+
260+
261+def create_pool(service, name):
262+ cmd = [
263+ 'rados',
264+ '--id',
265+ service,
266+ 'mkpool',
267+ name
268+ ]
269+ check_call(cmd)
270+
271+
272+def keyfile_path(service):
273+ return KEYFILE % service
274+
275+
276+def keyring_path(service):
277+ return KEYRING % service
278+
279+
280+def create_keyring(service, key):
281+ keyring = keyring_path(service)
282+ if os.path.exists(keyring):
283+ log('ceph: Keyring exists at %s.' % keyring, level=INFO)
284+ cmd = [
285+ 'ceph-authtool',
286+ keyring,
287+ '--create-keyring',
288+ '--name=client.%s' % service,
289+ '--add-key=%s' % key
290+ ]
291+ check_call(cmd)
292+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
293+
294+
295+def create_key_file(service, key):
296+ # create a file containing the key
297+ keyfile = keyfile_path(service)
298+ if os.path.exists(keyfile):
299+ log('ceph: Keyfile exists at %s.' % keyfile, level=INFO)
300+ fd = open(keyfile, 'w')
301+ fd.write(key)
302+ fd.close()
303+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
304+
305+
306+def get_ceph_nodes():
307+ hosts = []
308+ for r_id in relation_ids('ceph'):
309+ for unit in related_units(r_id):
310+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
311+ return hosts
312+
313+
314+def configure(service, key, auth):
315+ create_keyring(service, key)
316+ create_key_file(service, key)
317+ hosts = get_ceph_nodes()
318+ mon_hosts = ",".join(map(str, hosts))
319+ keyring = keyring_path(service)
320+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
321+ ceph_conf.write(CEPH_CONF % locals())
322+ modprobe_kernel_module('rbd')
323+
324+
325+def image_mapped(image_name):
326+ (rc, out) = commands.getstatusoutput('rbd showmapped')
327+ return image_name in out
328+
329+
330+def map_block_storage(service, pool, image):
331+ cmd = [
332+ 'rbd',
333+ 'map',
334+ '%s/%s' % (pool, image),
335+ '--user',
336+ service,
337+ '--secret',
338+ keyfile_path(service),
339+ ]
340+ check_call(cmd)
341+
342+
343+def filesystem_mounted(fs):
344+ return fs in [f for m, f in mounts()]
345+
346+
347+def make_filesystem(blk_device, fstype='ext4'):
348+ log('ceph: Formatting block device %s as filesystem %s.' %
349+ (blk_device, fstype), level=INFO)
350+ cmd = ['mkfs', '-t', fstype, blk_device]
351+ check_call(cmd)
352+
353+
354+def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
355+ # mount block device into /mnt
356+ mount(blk_device, '/mnt')
357+
358+ # copy data to /mnt
359+ try:
360+ copy_files(data_src_dst, '/mnt')
361+ except:
362+ pass
363+
364+ # umount block device
365+ umount('/mnt')
366+
367+ _dir = os.stat(data_src_dst)
368+ uid = _dir.st_uid
369+ gid = _dir.st_gid
370+
371+ # re-mount where the data should originally be
372+ mount(blk_device, data_src_dst, persist=True)
373+
374+ # ensure original ownership of new mount.
375+ cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
376+ check_call(cmd)
377+
378+
379+# TODO: re-use
380+def modprobe_kernel_module(module):
381+ log('ceph: Loading kernel module', level=INFO)
382+ cmd = ['modprobe', module]
383+ check_call(cmd)
384+ cmd = 'echo %s >> /etc/modules' % module
385+ check_call(cmd, shell=True)
386+
387+
388+def copy_files(src, dst, symlinks=False, ignore=None):
389+ for item in os.listdir(src):
390+ s = os.path.join(src, item)
391+ d = os.path.join(dst, item)
392+ if os.path.isdir(s):
393+ shutil.copytree(s, d, symlinks, ignore)
394+ else:
395+ shutil.copy2(s, d)
396+
397+
398+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
399+ blk_device, fstype, system_services=[]):
400+ """
401+ To be called from the current cluster leader.
402+ Ensures given pool and RBD image exists, is mapped to a block device,
403+ and the device is formatted and mounted at the given mount_point.
404+
405+ If formatting a device for the first time, data existing at mount_point
406+ will be migrated to the RBD device before being remounted.
407+
408+ All services listed in system_services will be stopped prior to data
409+ migration and restarted when complete.
410+ """
411+ # Ensure pool, RBD image, RBD mappings are in place.
412+ if not pool_exists(service, pool):
413+ log('ceph: Creating new pool %s.' % pool, level=INFO)
414+ create_pool(service, pool)
415+
416+ if not rbd_exists(service, pool, rbd_img):
417+ log('ceph: Creating RBD image (%s).' % rbd_img, level=INFO)
418+ create_rbd_image(service, pool, rbd_img, sizemb)
419+
420+ if not image_mapped(rbd_img):
421+ log('ceph: Mapping RBD Image as a Block Device.', level=INFO)
422+ map_block_storage(service, pool, rbd_img)
423+
424+ # make file system
425+ # TODO: What happens if for whatever reason this is run again and
426+ # the data is already in the rbd device and/or is mounted??
427+ # When it is mounted already, it will fail to make the fs
428+ # XXX: This is really sketchy! Need to at least add an fstab entry
429+ # otherwise this hook will blow away existing data if its executed
430+ # after a reboot.
431+ if not filesystem_mounted(mount_point):
432+ make_filesystem(blk_device, fstype)
433+
434+ for svc in system_services:
435+ if running(svc):
436+ log('Stopping services %s prior to migrating data.' % svc,
437+ level=INFO)
438+ service_stop(svc)
439+
440+ place_data_on_ceph(service, blk_device, mount_point, fstype)
441+
442+ for svc in system_services:
443+ service_start(svc)
444
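The `configure()` function in the ceph helper above renders `CEPH_CONF` via `%`-formatting against `locals()`; the same rendering can be restated standalone, with no ceph installation needed (template text copied from the hunk, helper function name is ours):

```python
# Standalone rendering of the ceph.conf template from the hunk above.
CEPH_CONF = """[global]
 auth supported = %(auth)s
 keyring = %(keyring)s
 mon host = %(mon_hosts)s
"""


def render_ceph_conf(auth, keyring, hosts):
    # mirrors configure(): mon hosts are comma-joined before interpolation
    mon_hosts = ",".join(map(str, hosts))
    return CEPH_CONF % {'auth': auth, 'keyring': keyring,
                        'mon_hosts': mon_hosts}
```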
445=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
446--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
447+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-07-19 09:54:29 +0000
448@@ -0,0 +1,180 @@
449+#
450+# Copyright 2012 Canonical Ltd.
451+#
452+# Authors:
453+# James Page <james.page@ubuntu.com>
454+# Adam Gandelman <adamg@ubuntu.com>
455+#
456+
457+import subprocess
458+import os
459+
460+from socket import gethostname as get_unit_hostname
461+
462+from charmhelpers.core.hookenv import (
463+ log,
464+ relation_ids,
465+ related_units as relation_list,
466+ relation_get,
467+ config as config_get,
468+ INFO,
469+ ERROR,
470+)
471+
472+
473+class HAIncompleteConfig(Exception):
474+ pass
475+
476+
477+def is_clustered():
478+ for r_id in (relation_ids('ha') or []):
479+ for unit in (relation_list(r_id) or []):
480+ clustered = relation_get('clustered',
481+ rid=r_id,
482+ unit=unit)
483+ if clustered:
484+ return True
485+ return False
486+
487+
488+def is_leader(resource):
489+ cmd = [
490+ "crm", "resource",
491+ "show", resource
492+ ]
493+ try:
494+ status = subprocess.check_output(cmd)
495+ except subprocess.CalledProcessError:
496+ return False
497+ else:
498+ if get_unit_hostname() in status:
499+ return True
500+ else:
501+ return False
502+
503+
504+def peer_units():
505+ peers = []
506+ for r_id in (relation_ids('cluster') or []):
507+ for unit in (relation_list(r_id) or []):
508+ peers.append(unit)
509+ return peers
510+
511+
512+def oldest_peer(peers):
513+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
514+ for peer in peers:
515+ remote_unit_no = int(peer.split('/')[1])
516+ if remote_unit_no < local_unit_no:
517+ return False
518+ return True
519+
520+
521+def eligible_leader(resource):
522+ if is_clustered():
523+ if not is_leader(resource):
524+ log('Deferring action to CRM leader.', level=INFO)
525+ return False
526+ else:
527+ peers = peer_units()
528+ if peers and not oldest_peer(peers):
529+ log('Deferring action to oldest service unit.', level=INFO)
530+ return False
531+ return True
532+
533+
534+def https():
535+ '''
536+ Determines whether enough data has been provided in configuration
537+ or relation data to configure HTTPS
538+ .
539+ returns: boolean
540+ '''
541+ if config_get('use-https') == "yes":
542+ return True
543+ if config_get('ssl_cert') and config_get('ssl_key'):
544+ return True
545+ for r_id in relation_ids('identity-service'):
546+ for unit in relation_list(r_id):
547+ if None not in [
548+ relation_get('https_keystone', rid=r_id, unit=unit),
549+ relation_get('ssl_cert', rid=r_id, unit=unit),
550+ relation_get('ssl_key', rid=r_id, unit=unit),
551+ relation_get('ca_cert', rid=r_id, unit=unit),
552+ ]:
553+ return True
554+ return False
555+
556+
557+def determine_api_port(public_port):
558+ '''
559+ Determine correct API server listening port based on
560+ existence of HTTPS reverse proxy and/or haproxy.
561+
562+ public_port: int: standard public port for given service
563+
564+ returns: int: the correct listening port for the API service
565+ '''
566+ i = 0
567+ if len(peer_units()) > 0 or is_clustered():
568+ i += 1
569+ if https():
570+ i += 1
571+ return public_port - (i * 10)
572+
573+
574+def determine_haproxy_port(public_port):
575+ '''
576+ Description: Determine correct proxy listening port based on public IP +
577+ existence of HTTPS reverse proxy.
578+
579+ public_port: int: standard public port for given service
580+
581+ returns: int: the correct listening port for the HAProxy service
582+ '''
583+ i = 0
584+ if https():
585+ i += 1
586+ return public_port - (i * 10)
587+
588+
589+def get_hacluster_config():
590+ '''
591+ Obtains all relevant configuration from charm configuration required
592+ for initiating a relation to hacluster:
593+
594+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
595+
596+ returns: dict: A dict containing settings keyed by setting name.
597+ raises: HAIncompleteConfig if settings are missing.
598+ '''
599+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
600+ conf = {}
601+ for setting in settings:
602+ conf[setting] = config_get(setting)
603+ missing = []
604+ [missing.append(s) for s, v in conf.iteritems() if v is None]
605+ if missing:
606+ log('Insufficient config data to configure hacluster.', level=ERROR)
607+ raise HAIncompleteConfig
608+ return conf
609+
610+
611+def canonical_url(configs, vip_setting='vip'):
612+ '''
613+ Returns the correct HTTP URL to this host given the state of HTTPS
614+ configuration and hacluster.
615+
616+ :configs : OSTemplateRenderer: A config tempating object to inspect for
617+ a complete https context.
618+ :vip_setting: str: Setting in charm config that specifies
619+ VIP address.
620+ '''
621+ scheme = 'http'
622+ if 'https' in configs.complete_contexts():
623+ scheme = 'https'
624+ if is_clustered():
625+ addr = config_get(vip_setting)
626+ else:
627+ addr = get_unit_hostname()
628+ return '%s://%s' % (scheme, addr)
629
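The listening-port arithmetic in `determine_api_port`/`determine_haproxy_port` above reduces to stepping the public port down by 10 for each front-end in play; a self-contained restatement (booleans stand in for the `peer_units()`/`is_clustered()`/`https()` checks, function names are ours):

```python
def api_port(public_port, clustered=False, has_https=False):
    # one -10 step if haproxy sits in front, another if apache terminates SSL
    steps = int(clustered) + int(has_https)
    return public_port - steps * 10


def haproxy_port(public_port, has_https=False):
    # haproxy itself only steps aside for the SSL reverse proxy
    return public_port - (10 if has_https else 0)
```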
630=== added directory 'hooks/charmhelpers/contrib/network'
631=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
632=== added directory 'hooks/charmhelpers/contrib/network/ovs'
633=== added file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
634--- hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000
635+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2013-07-19 09:54:29 +0000
636@@ -0,0 +1,72 @@
637+''' Helpers for interacting with OpenvSwitch '''
638+import subprocess
639+import os
640+from charmhelpers.core.hookenv import (
641+ log, WARNING
642+)
643+from charmhelpers.core.host import (
644+ service
645+)
646+
647+
648+def add_bridge(name):
649+ ''' Add the named bridge to openvswitch '''
650+ log('Creating bridge {}'.format(name))
651+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
652+
653+
654+def del_bridge(name):
655+ ''' Delete the named bridge from openvswitch '''
656+ log('Deleting bridge {}'.format(name))
657+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
658+
659+
660+def add_bridge_port(name, port):
661+ ''' Add a port to the named openvswitch bridge '''
662+ log('Adding port {} to bridge {}'.format(port, name))
663+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
664+ name, port])
665+ subprocess.check_call(["ip", "link", "set", port, "up"])
666+
667+
668+def del_bridge_port(name, port):
669+ ''' Delete a port from the named openvswitch bridge '''
670+ log('Deleting port {} from bridge {}'.format(port, name))
671+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
672+ name, port])
673+ subprocess.check_call(["ip", "link", "set", port, "down"])
674+
675+
676+def set_manager(manager):
677+ ''' Set the controller for the local openvswitch '''
678+ log('Setting manager for local ovs to {}'.format(manager))
679+ subprocess.check_call(['ovs-vsctl', 'set-manager',
680+ 'ssl:{}'.format(manager)])
681+
682+
683+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
684+
685+
686+def get_certificate():
687+ ''' Read openvswitch certificate from disk '''
688+ if os.path.exists(CERT_PATH):
689+ log('Reading ovs certificate from {}'.format(CERT_PATH))
690+ with open(CERT_PATH, 'r') as cert:
691+ full_cert = cert.read()
692+ begin_marker = "-----BEGIN CERTIFICATE-----"
693+ end_marker = "-----END CERTIFICATE-----"
694+ begin_index = full_cert.find(begin_marker)
695+ end_index = full_cert.rfind(end_marker)
696+ if end_index == -1 or begin_index == -1:
697+ raise RuntimeError("Certificate does not contain valid begin"
698+ " and end markers.")
699+ full_cert = full_cert[begin_index:(end_index + len(end_marker))]
700+ return full_cert
701+ else:
702+ log('Certificate not found', level=WARNING)
703+ return None
704+
705+
706+def full_restart():
707+ ''' Full restart and reload of openvswitch '''
708+ service('force-reload-kmod', 'openvswitch-switch')
709
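The marker-trimming in `get_certificate()` above is pure string handling and can be checked without Open vSwitch installed; a sketch with the file access dropped (same begin/end markers, function name is ours):

```python
BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"


def trim_cert(full_cert):
    # keep only the first BEGIN marker through the last END marker,
    # raising on input lacking either marker, as get_certificate() does
    begin_index = full_cert.find(BEGIN)
    end_index = full_cert.rfind(END)
    if begin_index == -1 or end_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(END)]
```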
710=== added directory 'hooks/charmhelpers/contrib/openstack'
711=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
712=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
713--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
714+++ hooks/charmhelpers/contrib/openstack/context.py 2013-07-19 09:54:29 +0000
715@@ -0,0 +1,271 @@
716+import os
717+
718+from base64 import b64decode
719+
720+from subprocess import (
721+ check_call
722+)
723+
724+from charmhelpers.core.hookenv import (
725+ config,
726+ local_unit,
727+ log,
728+ relation_get,
729+ relation_ids,
730+ related_units,
731+ unit_get,
732+)
733+
734+from charmhelpers.contrib.hahelpers.cluster import (
735+ determine_api_port,
736+ determine_haproxy_port,
737+ https,
738+ is_clustered,
739+ peer_units,
740+)
741+
742+from charmhelpers.contrib.hahelpers.apache import (
743+ get_cert,
744+ get_ca_cert,
745+)
746+
747+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
748+
749+
750+class OSContextError(Exception):
751+ pass
752+
753+
754+def context_complete(ctxt):
755+ _missing = []
756+ for k, v in ctxt.iteritems():
757+ if v is None or v == '':
758+ _missing.append(k)
759+ if _missing:
760+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
761+ return False
762+ return True
763+
764+
765+class OSContextGenerator(object):
766+ interfaces = []
767+
768+ def __call__(self):
769+ raise NotImplementedError
770+
771+
772+class SharedDBContext(OSContextGenerator):
773+ interfaces = ['shared-db']
774+
775+ def __call__(self):
776+ log('Generating template context for shared-db')
777+ conf = config()
778+ try:
779+ database = conf['database']
780+ username = conf['database-user']
781+ except KeyError as e:
782+ log('Could not generate shared_db context. '
783+ 'Missing required charm config options: %s.' % e)
784+ raise OSContextError
785+ ctxt = {}
786+ for rid in relation_ids('shared-db'):
787+ for unit in related_units(rid):
788+ ctxt = {
789+ 'database_host': relation_get('db_host', rid=rid,
790+ unit=unit),
791+ 'database': database,
792+ 'database_user': username,
793+ 'database_password': relation_get('password', rid=rid,
794+ unit=unit)
795+ }
796+ if not context_complete(ctxt):
797+ return {}
798+ return ctxt
799+
800+
801+class IdentityServiceContext(OSContextGenerator):
802+ interfaces = ['identity-service']
803+
804+ def __call__(self):
805+ log('Generating template context for identity-service')
806+ ctxt = {}
807+
808+ for rid in relation_ids('identity-service'):
809+ for unit in related_units(rid):
810+ ctxt = {
811+ 'service_port': relation_get('service_port', rid=rid,
812+ unit=unit),
813+ 'service_host': relation_get('service_host', rid=rid,
814+ unit=unit),
815+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
816+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
817+ 'admin_tenant_name': relation_get('service_tenant',
818+ rid=rid, unit=unit),
819+ 'admin_user': relation_get('service_username', rid=rid,
820+ unit=unit),
821+ 'admin_password': relation_get('service_password', rid=rid,
822+ unit=unit),
823+ # XXX: Hard-coded http.
824+ 'service_protocol': 'http',
825+ 'auth_protocol': 'http',
826+ }
827+ if not context_complete(ctxt):
828+ return {}
829+ return ctxt
830+
831+
832+class AMQPContext(OSContextGenerator):
833+ interfaces = ['amqp']
834+
835+ def __call__(self):
836+ log('Generating template context for amqp')
837+ conf = config()
838+ try:
839+ username = conf['rabbit-user']
840+ vhost = conf['rabbit-vhost']
841+ except KeyError as e:
842+ log('Could not generate amqp context. '
843+ 'Missing required charm config options: %s.' % e)
844+ raise OSContextError
845+
846+ ctxt = {}
847+ for rid in relation_ids('amqp'):
848+ for unit in related_units(rid):
849+ if relation_get('clustered', rid=rid, unit=unit):
850+ rabbitmq_host = relation_get('vip', rid=rid, unit=unit)
851+ else:
852+ rabbitmq_host = relation_get('private-address',
853+ rid=rid, unit=unit)
854+ ctxt = {
855+ 'rabbitmq_host': rabbitmq_host,
856+ 'rabbitmq_user': username,
857+ 'rabbitmq_password': relation_get('password', rid=rid,
858+ unit=unit),
859+ 'rabbitmq_virtual_host': vhost,
860+ }
861+ if not context_complete(ctxt):
862+ return {}
863+ return ctxt
864+
865+
866+class CephContext(OSContextGenerator):
867+ interfaces = ['ceph']
868+
869+ def __call__(self):
870+ '''This generates context for /etc/ceph/ceph.conf templates'''
871+ log('Generating template context for ceph')
872+ mon_hosts = []
873+ auth = None
874+ for rid in relation_ids('ceph'):
875+ for unit in related_units(rid):
876+ mon_hosts.append(relation_get('private-address', rid=rid,
877+ unit=unit))
878+ auth = relation_get('auth', rid=rid, unit=unit)
879+
880+ ctxt = {
881+ 'mon_hosts': ' '.join(mon_hosts),
882+ 'auth': auth,
883+ }
884+ if not context_complete(ctxt):
885+ return {}
886+ return ctxt
887+
888+
889+class HAProxyContext(OSContextGenerator):
890+ interfaces = ['cluster']
891+
892+ def __call__(self):
893+ '''
894+ Builds half a context for the haproxy template, which describes
895+ all peers to be included in the cluster. Each charm needs to include
896+ its own context generator that describes the port mapping.
897+ '''
898+ if not relation_ids('cluster'):
899+ return {}
900+
901+ cluster_hosts = {}
902+ l_unit = local_unit().replace('/', '-')
903+ cluster_hosts[l_unit] = unit_get('private-address')
904+
905+ for rid in relation_ids('cluster'):
906+ for unit in related_units(rid):
907+ _unit = unit.replace('/', '-')
908+ addr = relation_get('private-address', rid=rid, unit=unit)
909+ cluster_hosts[_unit] = addr
910+
911+ ctxt = {
912+ 'units': cluster_hosts,
913+ }
914+ if len(cluster_hosts.keys()) > 1:
915+ # Enable haproxy when we have enough peers.
916+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
917+ with open('/etc/default/haproxy', 'w') as out:
918+ out.write('ENABLED=1\n')
919+ return ctxt
920+ log('HAProxy context is incomplete, this unit has no peers.')
921+ return {}
922+
923+
924+class ApacheSSLContext(OSContextGenerator):
925+ """
926+ Generates a context for an apache vhost configuration that configures
927+ HTTPS reverse proxying for one or many endpoints. Generated context
928+ looks something like:
929+ {
930+ 'namespace': 'cinder',
931+ 'private_address': 'iscsi.mycinderhost.com',
932+ 'endpoints': [(8776, 8766), (8777, 8767)]
933+ }
934+
935+ The endpoints list consists of tuples mapping external ports
936+ to internal ports.
937+ """
938+ interfaces = ['https']
939+
940+ # charms should inherit this context and set external ports
941+ # and service namespace accordingly.
942+ external_ports = []
943+ service_namespace = None
944+
945+ def enable_modules(self):
946+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
947+ check_call(cmd)
948+
949+ def configure_cert(self):
950+ if not os.path.isdir('/etc/apache2/ssl'):
951+ os.mkdir('/etc/apache2/ssl')
952+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
953+ if not os.path.isdir(ssl_dir):
954+ os.mkdir(ssl_dir)
955+ cert, key = get_cert()
956+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
957+ cert_out.write(b64decode(cert))
958+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
959+ key_out.write(b64decode(key))
960+ ca_cert = get_ca_cert()
961+ if ca_cert:
962+ with open(CA_CERT_PATH, 'w') as ca_out:
963+ ca_out.write(b64decode(ca_cert))
964+
965+ def __call__(self):
966+ if isinstance(self.external_ports, basestring):
967+ self.external_ports = [self.external_ports]
968+ if (not self.external_ports or not https()):
969+ return {}
970+
971+ self.configure_cert()
972+ self.enable_modules()
973+
974+ ctxt = {
975+ 'namespace': self.service_namespace,
976+ 'private_address': unit_get('private-address'),
977+ 'endpoints': []
978+ }
979+ for ext_port in self.external_ports:
980+ if peer_units() or is_clustered():
981+ int_port = determine_haproxy_port(ext_port)
982+ else:
983+ int_port = determine_api_port(ext_port)
984+ portmap = (int(ext_port), int(int_port))
985+ ctxt['endpoints'].append(portmap)
986+ return ctxt
987
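Every generator in context.py above is gated on `context_complete()`, whose contract is small enough to restate standalone (logging dropped; py3 `items()` used here in place of the diff's `iteritems()`):

```python
def context_complete(ctxt):
    # a context is usable only if every value is present and non-empty;
    # generators return {} (skip rendering) when this check fails
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    return not missing
```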
988=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
989=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
990--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
991+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-07-19 09:54:29 +0000
992@@ -0,0 +1,2 @@
993+# dummy __init__.py to fool syncer into thinking this is a syncable python
994+# module
995
996=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
997--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
998+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-07-19 09:54:29 +0000
999@@ -0,0 +1,261 @@
1000+import os
1001+
1002+from charmhelpers.core.host import apt_install
1003+
1004+from charmhelpers.core.hookenv import (
1005+ log,
1006+ ERROR,
1007+ INFO
1008+)
1009+
1010+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1011+
1012+try:
1013+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment
1014+except ImportError:
1015+ # python-jinja2 may not be installed yet, or we're running unittests.
1016+ FileSystemLoader = ChoiceLoader = Environment = None
1017+
1018+
1019+class OSConfigException(Exception):
1020+ pass
1021+
1022+
1023+def get_loader(templates_dir, os_release):
1024+ """
1025+ Create a jinja2.ChoiceLoader containing template dirs up to
1026+    and including os_release. If a release's template directory
1027+    is missing under templates_dir, it is omitted from the loader.
1028+ templates_dir is added to the bottom of the search list as a base
1029+ loading dir.
1030+
1031+ A charm may also ship a templates dir with this module
1032+ and it will be appended to the bottom of the search list, eg:
1033+ hooks/charmhelpers/contrib/openstack/templates.
1034+
1035+ :param templates_dir: str: Base template directory containing release
1036+ sub-directories.
1037+ :param os_release : str: OpenStack release codename to construct template
1038+ loader.
1039+
1040+ :returns : jinja2.ChoiceLoader constructed with a list of
1041+ jinja2.FilesystemLoaders, ordered in descending
1042+ order by OpenStack release.
1043+ """
1044+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1045+ for rel in OPENSTACK_CODENAMES.itervalues()]
1046+
1047+ if not os.path.isdir(templates_dir):
1048+ log('Templates directory not found @ %s.' % templates_dir,
1049+ level=ERROR)
1050+ raise OSConfigException
1051+
1052+    # the bottom contains templates_dir and possibly a common templates dir
1053+ # shipped with the helper.
1054+ loaders = [FileSystemLoader(templates_dir)]
1055+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1056+ if os.path.isdir(helper_templates):
1057+ loaders.append(FileSystemLoader(helper_templates))
1058+
1059+ for rel, tmpl_dir in tmpl_dirs:
1060+ if os.path.isdir(tmpl_dir):
1061+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1062+ if rel == os_release:
1063+ break
1064+ log('Creating choice loader with dirs: %s' %
1065+ [l.searchpath for l in loaders], level=INFO)
1066+ return ChoiceLoader(loaders)
1067+
1068+
1069+class OSConfigTemplate(object):
1070+ """
1071+ Associates a config file template with a list of context generators.
1072+ Responsible for constructing a template context based on those generators.
1073+ """
1074+ def __init__(self, config_file, contexts):
1075+ self.config_file = config_file
1076+
1077+ if hasattr(contexts, '__call__'):
1078+ self.contexts = [contexts]
1079+ else:
1080+ self.contexts = contexts
1081+
1082+ self._complete_contexts = []
1083+
1084+ def context(self):
1085+ ctxt = {}
1086+ for context in self.contexts:
1087+ _ctxt = context()
1088+ if _ctxt:
1089+ ctxt.update(_ctxt)
1090+ # track interfaces for every complete context.
1091+ [self._complete_contexts.append(interface)
1092+ for interface in context.interfaces
1093+ if interface not in self._complete_contexts]
1094+ return ctxt
1095+
1096+ def complete_contexts(self):
1097+ '''
1098+        Return a list of interfaces that have satisfied contexts.
1099+ '''
1100+ if self._complete_contexts:
1101+ return self._complete_contexts
1102+ self.context()
1103+ return self._complete_contexts
1104+
1105+
1106+class OSConfigRenderer(object):
1107+ """
1108+ This class provides a common templating system to be used by OpenStack
1109+ charms. It is intended to help charms share common code and templates,
1110+ and ease the burden of managing config templates across multiple OpenStack
1111+ releases.
1112+
1113+ Basic usage:
1114+    # import some common context generators from charmhelpers
1115+ from charmhelpers.contrib.openstack import context
1116+
1117+ # Create a renderer object for a specific OS release.
1118+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1119+ openstack_release='folsom')
1120+ # register some config files with context generators.
1121+ configs.register(config_file='/etc/nova/nova.conf',
1122+ contexts=[context.SharedDBContext(),
1123+ context.AMQPContext()])
1124+ configs.register(config_file='/etc/nova/api-paste.ini',
1125+ contexts=[context.IdentityServiceContext()])
1126+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1127+ contexts=[context.HAProxyContext()])
1128+ # write out a single config
1129+ configs.write('/etc/nova/nova.conf')
1130+ # write out all registered configs
1131+ configs.write_all()
1132+
1133+ Details:
1134+
1135+ OpenStack Releases and template loading
1136+ ---------------------------------------
1137+ When the object is instantiated, it is associated with a specific OS
1138+ release. This dictates how the template loader will be constructed.
1139+
1140+ The constructed loader attempts to load the template from several places
1141+ in the following order:
1142+ - from the most recent OS release-specific template dir (if one exists)
1143+ - the base templates_dir
1144+ - a template directory shipped in the charm with this helper file.
1145+
1146+
1147+ For the example above, '/tmp/templates' contains the following structure:
1148+ /tmp/templates/nova.conf
1149+ /tmp/templates/api-paste.ini
1150+ /tmp/templates/grizzly/api-paste.ini
1151+ /tmp/templates/havana/api-paste.ini
1152+
1153+    Since it was registered with the grizzly release, it first searches
1154+ the grizzly directory for nova.conf, then the templates dir.
1155+
1156+ When writing api-paste.ini, it will find the template in the grizzly
1157+ directory.
1158+
1159+ If the object were created with folsom, it would fall back to the
1160+ base templates dir for its api-paste.ini template.
1161+
1162+ This system should help manage changes in config files through
1163+ openstack releases, allowing charms to fall back to the most recently
1164+    updated config template for a given release.
1165+
1166+ The haproxy.conf, since it is not shipped in the templates dir, will
1167+    be loaded from the module directory's template directory, e.g.
1168+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1169+ us to ship common templates (haproxy, apache) with the helpers.
1170+
1171+ Context generators
1172+ ---------------------------------------
1173+ Context generators are used to generate template contexts during hook
1174+ execution. Doing so may require inspecting service relations, charm
1175+ config, etc. When registered, a config file is associated with a list
1176+ of generators. When a template is rendered and written, all context
1177+    generators are called in a chain to generate the context dictionary
1178+ passed to the jinja2 template. See context.py for more info.
1179+ """
1180+ def __init__(self, templates_dir, openstack_release):
1181+ if not os.path.isdir(templates_dir):
1182+ log('Could not locate templates dir %s' % templates_dir,
1183+ level=ERROR)
1184+ raise OSConfigException
1185+
1186+ self.templates_dir = templates_dir
1187+ self.openstack_release = openstack_release
1188+ self.templates = {}
1189+ self._tmpl_env = None
1190+
1191+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1192+ # if this code is running, the object is created pre-install hook.
1193+ # jinja2 shouldn't get touched until the module is reloaded on next
1194+ # hook execution, with proper jinja2 bits successfully imported.
1195+ apt_install('python-jinja2')
1196+
1197+ def register(self, config_file, contexts):
1198+ """
1199+ Register a config file with a list of context generators to be called
1200+ during rendering.
1201+ """
1202+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1203+ contexts=contexts)
1204+ log('Registered config file: %s' % config_file, level=INFO)
1205+
1206+ def _get_tmpl_env(self):
1207+ if not self._tmpl_env:
1208+ loader = get_loader(self.templates_dir, self.openstack_release)
1209+ self._tmpl_env = Environment(loader=loader)
1210+
1211+ def _get_template(self, template):
1212+ self._get_tmpl_env()
1213+ template = self._tmpl_env.get_template(template)
1214+ log('Loaded template from %s' % template.filename, level=INFO)
1215+ return template
1216+
1217+ def render(self, config_file):
1218+ if config_file not in self.templates:
1219+ log('Config not registered: %s' % config_file, level=ERROR)
1220+ raise OSConfigException
1221+ ctxt = self.templates[config_file].context()
1222+ _tmpl = os.path.basename(config_file)
1223+ log('Rendering from template: %s' % _tmpl, level=INFO)
1224+ template = self._get_template(_tmpl)
1225+ return template.render(ctxt)
1226+
1227+ def write(self, config_file):
1228+ """
1229+ Write a single config file, raises if config file is not registered.
1230+ """
1231+ if config_file not in self.templates:
1232+ log('Config not registered: %s' % config_file, level=ERROR)
1233+ raise OSConfigException
1234+ with open(config_file, 'wb') as out:
1235+ out.write(self.render(config_file))
1236+ log('Wrote template %s.' % config_file, level=INFO)
1237+
1238+ def write_all(self):
1239+ """
1240+ Write out all registered config files.
1241+ """
1242+ [self.write(k) for k in self.templates.iterkeys()]
1243+
1244+ def set_release(self, openstack_release):
1245+ """
1246+ Resets the template environment and generates a new template loader
1247+        based on the new openstack release.
1248+ """
1249+ self._tmpl_env = None
1250+ self.openstack_release = openstack_release
1251+ self._get_tmpl_env()
1252+
1253+ def complete_contexts(self):
1254+ '''
1255+ Returns a list of context interfaces that yield a complete context.
1256+ '''
1257+ interfaces = []
1258+ [interfaces.extend(i.complete_contexts())
1259+ for i in self.templates.itervalues()]
1260+ return interfaces
1261
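`get_loader` builds the search order that lets a release-specific template shadow the base copy. A stdlib-only sketch of that lookup (Python 3 here for brevity; `find_template` stands in for the jinja2 `ChoiceLoader` the helper actually constructs):

```python
import os
import tempfile

# Stdlib-only sketch of the search order get_loader() encodes with jinja2
# FileSystemLoaders: release dirs up to os_release (newest first), then
# the base templates dir as the final fallback.
def find_template(name, templates_dir, os_release, codenames):
    search = []
    for rel in codenames:              # oldest -> newest, as in OPENSTACK_CODENAMES
        d = os.path.join(templates_dir, rel)
        if os.path.isdir(d):
            search.insert(0, d)        # newer releases shadow older ones
        if rel == os_release:
            break
    search.append(templates_dir)       # base dir is searched last
    for d in search:
        path = os.path.join(d, name)
        if os.path.isfile(path):
            return path
    raise LookupError(name)

base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, 'grizzly'))
open(os.path.join(base, 'nova.conf'), 'w').close()
open(os.path.join(base, 'grizzly', 'api-paste.ini'), 'w').close()

codenames = ['essex', 'folsom', 'grizzly', 'havana']
# grizzly override wins for api-paste.ini; nova.conf falls back to base
print(find_template('api-paste.ini', base, 'grizzly', codenames))
print(find_template('nova.conf', base, 'grizzly', codenames))
```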
1262=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1263--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1264+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-07-19 09:54:29 +0000
1265@@ -0,0 +1,273 @@
1266+#!/usr/bin/python
1267+
1268+# Common python helper functions used for OpenStack charms.
1269+
1270+from collections import OrderedDict
1271+
1272+import apt_pkg as apt
1273+import subprocess
1274+import os
1275+import sys
1276+
1277+from charmhelpers.core.hookenv import (
1278+ config,
1279+ log as juju_log,
1280+ charm_dir,
1281+)
1282+
1283+from charmhelpers.core.host import (
1284+ lsb_release,
1285+ apt_install,
1286+)
1287+
1288+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1289+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1290+
1291+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1292+ ('oneiric', 'diablo'),
1293+ ('precise', 'essex'),
1294+ ('quantal', 'folsom'),
1295+ ('raring', 'grizzly'),
1296+ ('saucy', 'havana'),
1297+])
1298+
1299+
1300+OPENSTACK_CODENAMES = OrderedDict([
1301+ ('2011.2', 'diablo'),
1302+ ('2012.1', 'essex'),
1303+ ('2012.2', 'folsom'),
1304+ ('2013.1', 'grizzly'),
1305+ ('2013.2', 'havana'),
1306+ ('2014.1', 'icehouse'),
1307+])
1308+
1309+# The ugly duckling
1310+SWIFT_CODENAMES = {
1311+ '1.4.3': 'diablo',
1312+ '1.4.8': 'essex',
1313+ '1.7.4': 'folsom',
1314+ '1.7.6': 'grizzly',
1315+ '1.7.7': 'grizzly',
1316+ '1.8.0': 'grizzly',
1317+ '1.9.0': 'havana',
1318+ '1.9.1': 'havana',
1319+}
1320+
1321+
1322+def error_out(msg):
1323+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1324+ sys.exit(1)
1325+
1326+
1327+def get_os_codename_install_source(src):
1328+ '''Derive OpenStack release codename from a given installation source.'''
1329+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1330+ rel = ''
1331+ if src == 'distro':
1332+ try:
1333+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1334+ except KeyError:
1335+ e = 'Could not derive openstack release for '\
1336+ 'this Ubuntu release: %s' % ubuntu_rel
1337+ error_out(e)
1338+ return rel
1339+
1340+ if src.startswith('cloud:'):
1341+ ca_rel = src.split(':')[1]
1342+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1343+ return ca_rel
1344+
1345+ # Best guess match based on deb string provided
1346+ if src.startswith('deb') or src.startswith('ppa'):
1347+ for k, v in OPENSTACK_CODENAMES.iteritems():
1348+ if v in src:
1349+ return v
1350+
1351+
1352+def get_os_version_install_source(src):
1353+ codename = get_os_codename_install_source(src)
1354+ return get_os_version_codename(codename)
1355+
1356+
1357+def get_os_codename_version(vers):
1358+ '''Determine OpenStack codename from version number.'''
1359+ try:
1360+ return OPENSTACK_CODENAMES[vers]
1361+ except KeyError:
1362+ e = 'Could not determine OpenStack codename for version %s' % vers
1363+ error_out(e)
1364+
1365+
1366+def get_os_version_codename(codename):
1367+ '''Determine OpenStack version number from codename.'''
1368+ for k, v in OPENSTACK_CODENAMES.iteritems():
1369+ if v == codename:
1370+ return k
1371+ e = 'Could not derive OpenStack version for '\
1372+ 'codename: %s' % codename
1373+ error_out(e)
1374+
1375+
1376+def get_os_codename_package(package, fatal=True):
1377+ '''Derive OpenStack release codename from an installed package.'''
1378+ apt.init()
1379+ cache = apt.Cache()
1380+
1381+ try:
1382+ pkg = cache[package]
1383+ except:
1384+ if not fatal:
1385+ return None
1386+ # the package is unknown to the current apt cache.
1387+ e = 'Could not determine version of package with no installation '\
1388+ 'candidate: %s' % package
1389+ error_out(e)
1390+
1391+ if not pkg.current_ver:
1392+ if not fatal:
1393+ return None
1394+ # package is known, but no version is currently installed.
1395+ e = 'Could not determine version of uninstalled package: %s' % package
1396+ error_out(e)
1397+
1398+ vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
1399+
1400+ try:
1401+ if 'swift' in pkg.name:
1402+ vers = vers[:5]
1403+ return SWIFT_CODENAMES[vers]
1404+ else:
1405+ vers = vers[:6]
1406+ return OPENSTACK_CODENAMES[vers]
1407+ except KeyError:
1408+ e = 'Could not determine OpenStack codename for version %s' % vers
1409+ error_out(e)
1410+
1411+
1412+def get_os_version_package(pkg, fatal=True):
1413+ '''Derive OpenStack version number from an installed package.'''
1414+ codename = get_os_codename_package(pkg, fatal=fatal)
1415+
1416+ if not codename:
1417+ return None
1418+
1419+ if 'swift' in pkg:
1420+ vers_map = SWIFT_CODENAMES
1421+ else:
1422+ vers_map = OPENSTACK_CODENAMES
1423+
1424+ for version, cname in vers_map.iteritems():
1425+ if cname == codename:
1426+ return version
1427+ #e = "Could not determine OpenStack version for package: %s" % pkg
1428+ #error_out(e)
1429+
1430+
1431+def import_key(keyid):
1432+ cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
1433+ "--recv-keys %s" % keyid
1434+ try:
1435+ subprocess.check_call(cmd.split(' '))
1436+ except subprocess.CalledProcessError:
1437+ error_out("Error importing repo key %s" % keyid)
1438+
1439+
1440+def configure_installation_source(rel):
1441+ '''Configure apt installation source.'''
1442+ if rel == 'distro':
1443+ return
1444+ elif rel[:4] == "ppa:":
1445+ src = rel
1446+ subprocess.check_call(["add-apt-repository", "-y", src])
1447+ elif rel[:3] == "deb":
1448+ l = len(rel.split('|'))
1449+ if l == 2:
1450+ src, key = rel.split('|')
1451+ juju_log("Importing PPA key from keyserver for %s" % src)
1452+ import_key(key)
1453+ elif l == 1:
1454+ src = rel
1455+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
1456+ f.write(src)
1457+ elif rel[:6] == 'cloud:':
1458+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1459+ rel = rel.split(':')[1]
1460+ u_rel = rel.split('-')[0]
1461+ ca_rel = rel.split('-')[1]
1462+
1463+ if u_rel != ubuntu_rel:
1464+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
1465+ 'version (%s)' % (ca_rel, ubuntu_rel)
1466+ error_out(e)
1467+
1468+ if 'staging' in ca_rel:
1469+ # staging is just a regular PPA.
1470+ os_rel = ca_rel.split('/')[0]
1471+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
1472+ cmd = 'add-apt-repository -y %s' % ppa
1473+ subprocess.check_call(cmd.split(' '))
1474+ return
1475+
1476+ # map charm config options to actual archive pockets.
1477+ pockets = {
1478+ 'folsom': 'precise-updates/folsom',
1479+ 'folsom/updates': 'precise-updates/folsom',
1480+ 'folsom/proposed': 'precise-proposed/folsom',
1481+ 'grizzly': 'precise-updates/grizzly',
1482+ 'grizzly/updates': 'precise-updates/grizzly',
1483+ 'grizzly/proposed': 'precise-proposed/grizzly',
1484+ 'havana': 'precise-updates/havana',
1485+ 'havana/updates': 'precise-updates/havana',
1486+ 'havana/proposed': 'precise-proposed/havana',
1487+ }
1488+
1489+ try:
1490+ pocket = pockets[ca_rel]
1491+ except KeyError:
1492+ e = 'Invalid Cloud Archive release specified: %s' % rel
1493+ error_out(e)
1494+
1495+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
1496+ apt_install('ubuntu-cloud-keyring', fatal=True)
1497+
1498+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
1499+ f.write(src)
1500+ else:
1501+ error_out("Invalid openstack-release specified: %s" % rel)
1502+
1503+
1504+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
1505+ """
1506+ Write an rc file in the charm-delivered directory containing
1507+ exported environment variables provided by env_vars. Any charm scripts run
1508+ outside the juju hook environment can source this scriptrc to obtain
1509+ updated config information necessary to perform health checks or
1510+ service changes.
1511+ """
1512+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
1513+ if not os.path.exists(os.path.dirname(juju_rc_path)):
1514+ os.mkdir(os.path.dirname(juju_rc_path))
1515+ with open(juju_rc_path, 'wb') as rc_script:
1516+ rc_script.write(
1517+ "#!/bin/bash\n")
1518+ [rc_script.write('export %s=%s\n' % (u, p))
1519+ for u, p in env_vars.iteritems() if u != "script_path"]
1520+
1521+
1522+def openstack_upgrade_available(package):
1523+ """
1524+ Determines if an OpenStack upgrade is available from installation
1525+ source, based on version of installed package.
1526+
1527+ :param package: str: Name of installed package.
1528+
1529+ :returns: bool: : Returns True if configured installation source offers
1530+ a newer version of package.
1531+
1532+ """
1533+
1534+ src = config('openstack-origin')
1535+ cur_vers = get_os_version_package(package)
1536+ available_vers = get_os_version_install_source(src)
1537+ apt.init()
1538+ return apt.version_compare(available_vers, cur_vers) == 1
1539
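`get_os_codename_install_source` supports three source styles: `distro`, `cloud:` archive pockets, and `deb`/`ppa` strings matched best-effort. A stdlib-only sketch of the same mapping, using abbreviated copies of the lookup tables above:

```python
# Abbreviated copies of the tables defined above, for illustration.
UBUNTU_OPENSTACK_RELEASE = {'precise': 'essex', 'raring': 'grizzly'}
OPENSTACK_CODENAMES = {'2012.1': 'essex', '2013.1': 'grizzly'}

def codename_from_source(src, ubuntu_rel='precise'):
    if src == 'distro':
        return UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
    if src.startswith('cloud:'):
        # 'cloud:precise-grizzly/updates' -> 'grizzly'
        pocket = src.split(':')[1]
        return pocket.split('%s-' % ubuntu_rel)[1].split('/')[0]
    if src.startswith('deb') or src.startswith('ppa'):
        # best guess: first codename that appears in the string
        for codename in OPENSTACK_CODENAMES.values():
            if codename in src:
                return codename

print(codename_from_source('distro'))                         # essex
print(codename_from_source('cloud:precise-grizzly/updates'))  # grizzly
print(codename_from_source('ppa:openstack-ubuntu-testing/grizzly'))
```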
1540=== added directory 'hooks/charmhelpers/core'
1541=== added file 'hooks/charmhelpers/core/__init__.py'
1542=== added file 'hooks/charmhelpers/core/hookenv.py'
1543--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
1544+++ hooks/charmhelpers/core/hookenv.py 2013-07-19 09:54:29 +0000
1545@@ -0,0 +1,340 @@
1546+"Interactions with the Juju environment"
1547+# Copyright 2013 Canonical Ltd.
1548+#
1549+# Authors:
1550+# Charm Helpers Developers <juju@lists.ubuntu.com>
1551+
1552+import os
1553+import json
1554+import yaml
1555+import subprocess
1556+import UserDict
1557+
1558+CRITICAL = "CRITICAL"
1559+ERROR = "ERROR"
1560+WARNING = "WARNING"
1561+INFO = "INFO"
1562+DEBUG = "DEBUG"
1563+MARKER = object()
1564+
1565+cache = {}
1566+
1567+
1568+def cached(func):
1569+ ''' Cache return values for multiple executions of func + args
1570+
1571+ For example:
1572+
1573+ @cached
1574+ def unit_get(attribute):
1575+ pass
1576+
1577+ unit_get('test')
1578+
1579+ will cache the result of unit_get + 'test' for future calls.
1580+ '''
1581+ def wrapper(*args, **kwargs):
1582+ global cache
1583+ key = str((func, args, kwargs))
1584+ try:
1585+ return cache[key]
1586+ except KeyError:
1587+ res = func(*args, **kwargs)
1588+ cache[key] = res
1589+ return res
1590+ return wrapper
1591+
1592+
1593+def flush(key):
1594+ ''' Flushes any entries from function cache where the
1595+ key is found in the function+args '''
1596+ flush_list = []
1597+ for item in cache:
1598+ if key in item:
1599+ flush_list.append(item)
1600+ for item in flush_list:
1601+ del cache[item]
1602+
1603+
1604+def log(message, level=None):
1605+ "Write a message to the juju log"
1606+ command = ['juju-log']
1607+ if level:
1608+ command += ['-l', level]
1609+ command += [message]
1610+ subprocess.call(command)
1611+
1612+
1613+class Serializable(UserDict.IterableUserDict):
1614+ "Wrapper, an object that can be serialized to yaml or json"
1615+
1616+ def __init__(self, obj):
1617+ # wrap the object
1618+ UserDict.IterableUserDict.__init__(self)
1619+ self.data = obj
1620+
1621+ def __getattr__(self, attr):
1622+ # See if this object has attribute.
1623+ if attr in ("json", "yaml", "data"):
1624+ return self.__dict__[attr]
1625+ # Check for attribute in wrapped object.
1626+ got = getattr(self.data, attr, MARKER)
1627+ if got is not MARKER:
1628+ return got
1629+ # Proxy to the wrapped object via dict interface.
1630+ try:
1631+ return self.data[attr]
1632+ except KeyError:
1633+ raise AttributeError(attr)
1634+
1635+ def __getstate__(self):
1636+ # Pickle as a standard dictionary.
1637+ return self.data
1638+
1639+ def __setstate__(self, state):
1640+ # Unpickle into our wrapper.
1641+ self.data = state
1642+
1643+ def json(self):
1644+ "Serialize the object to json"
1645+ return json.dumps(self.data)
1646+
1647+ def yaml(self):
1648+ "Serialize the object to yaml"
1649+ return yaml.dump(self.data)
1650+
1651+
1652+def execution_environment():
1653+ """A convenient bundling of the current execution context"""
1654+ context = {}
1655+ context['conf'] = config()
1656+ if relation_id():
1657+ context['reltype'] = relation_type()
1658+ context['relid'] = relation_id()
1659+ context['rel'] = relation_get()
1660+ context['unit'] = local_unit()
1661+ context['rels'] = relations()
1662+ context['env'] = os.environ
1663+ return context
1664+
1665+
1666+def in_relation_hook():
1667+ "Determine whether we're running in a relation hook"
1668+ return 'JUJU_RELATION' in os.environ
1669+
1670+
1671+def relation_type():
1672+ "The scope for the current relation hook"
1673+ return os.environ.get('JUJU_RELATION', None)
1674+
1675+
1676+def relation_id():
1677+ "The relation ID for the current relation hook"
1678+ return os.environ.get('JUJU_RELATION_ID', None)
1679+
1680+
1681+def local_unit():
1682+ "Local unit ID"
1683+ return os.environ['JUJU_UNIT_NAME']
1684+
1685+
1686+def remote_unit():
1687+ "The remote unit for the current relation hook"
1688+ return os.environ['JUJU_REMOTE_UNIT']
1689+
1690+
1691+def service_name():
1692+ "The name service group this unit belongs to"
1693+ return local_unit().split('/')[0]
1694+
1695+
1696+@cached
1697+def config(scope=None):
1698+ "Juju charm configuration"
1699+ config_cmd_line = ['config-get']
1700+ if scope is not None:
1701+ config_cmd_line.append(scope)
1702+ config_cmd_line.append('--format=json')
1703+ try:
1704+ return json.loads(subprocess.check_output(config_cmd_line))
1705+ except ValueError:
1706+ return None
1707+
1708+
1709+@cached
1710+def relation_get(attribute=None, unit=None, rid=None):
1711+ _args = ['relation-get', '--format=json']
1712+ if rid:
1713+ _args.append('-r')
1714+ _args.append(rid)
1715+ _args.append(attribute or '-')
1716+ if unit:
1717+ _args.append(unit)
1718+ try:
1719+ return json.loads(subprocess.check_output(_args))
1720+ except ValueError:
1721+ return None
1722+
1723+
1724+def relation_set(relation_id=None, relation_settings={}, **kwargs):
1725+ relation_cmd_line = ['relation-set']
1726+ if relation_id is not None:
1727+ relation_cmd_line.extend(('-r', relation_id))
1728+ for k, v in (relation_settings.items() + kwargs.items()):
1729+ if v is None:
1730+ relation_cmd_line.append('{}='.format(k))
1731+ else:
1732+ relation_cmd_line.append('{}={}'.format(k, v))
1733+ subprocess.check_call(relation_cmd_line)
1734+ # Flush cache of any relation-gets for local unit
1735+ flush(local_unit())
1736+
1737+
1738+@cached
1739+def relation_ids(reltype=None):
1740+ "A list of relation_ids"
1741+ reltype = reltype or relation_type()
1742+ relid_cmd_line = ['relation-ids', '--format=json']
1743+ if reltype is not None:
1744+ relid_cmd_line.append(reltype)
1745+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
1746+ return []
1747+
1748+
1749+@cached
1750+def related_units(relid=None):
1751+ "A list of related units"
1752+ relid = relid or relation_id()
1753+ units_cmd_line = ['relation-list', '--format=json']
1754+ if relid is not None:
1755+ units_cmd_line.extend(('-r', relid))
1756+ return json.loads(subprocess.check_output(units_cmd_line)) or []
1757+
1758+
1759+@cached
1760+def relation_for_unit(unit=None, rid=None):
1761+ "Get the json represenation of a unit's relation"
1762+ unit = unit or remote_unit()
1763+ relation = relation_get(unit=unit, rid=rid)
1764+ for key in relation:
1765+ if key.endswith('-list'):
1766+ relation[key] = relation[key].split()
1767+ relation['__unit__'] = unit
1768+ return relation
1769+
1770+
1771+@cached
1772+def relations_for_id(relid=None):
1773+ "Get relations of a specific relation ID"
1774+ relation_data = []
1775+ relid = relid or relation_ids()
1776+ for unit in related_units(relid):
1777+ unit_data = relation_for_unit(unit, relid)
1778+ unit_data['__relid__'] = relid
1779+ relation_data.append(unit_data)
1780+ return relation_data
1781+
1782+
1783+@cached
1784+def relations_of_type(reltype=None):
1785+ "Get relations of a specific type"
1786+ relation_data = []
1787+ reltype = reltype or relation_type()
1788+ for relid in relation_ids(reltype):
1789+ for relation in relations_for_id(relid):
1790+ relation['__relid__'] = relid
1791+ relation_data.append(relation)
1792+ return relation_data
1793+
1794+
1795+@cached
1796+def relation_types():
1797+ "Get a list of relation types supported by this charm"
1798+ charmdir = os.environ.get('CHARM_DIR', '')
1799+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1800+ md = yaml.safe_load(mdf)
1801+ rel_types = []
1802+ for key in ('provides', 'requires', 'peers'):
1803+ section = md.get(key)
1804+ if section:
1805+ rel_types.extend(section.keys())
1806+ mdf.close()
1807+ return rel_types
1808+
1809+
1810+@cached
1811+def relations():
1812+ rels = {}
1813+ for reltype in relation_types():
1814+ relids = {}
1815+ for relid in relation_ids(reltype):
1816+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1817+ for unit in related_units(relid):
1818+ reldata = relation_get(unit=unit, rid=relid)
1819+ units[unit] = reldata
1820+ relids[relid] = units
1821+ rels[reltype] = relids
1822+ return rels
1823+
1824+
1825+def open_port(port, protocol="TCP"):
1826+ "Open a service network port"
1827+ _args = ['open-port']
1828+ _args.append('{}/{}'.format(port, protocol))
1829+ subprocess.check_call(_args)
1830+
1831+
1832+def close_port(port, protocol="TCP"):
1833+ "Close a service network port"
1834+ _args = ['close-port']
1835+ _args.append('{}/{}'.format(port, protocol))
1836+ subprocess.check_call(_args)
1837+
1838+
1839+@cached
1840+def unit_get(attribute):
1841+ _args = ['unit-get', '--format=json', attribute]
1842+ try:
1843+ return json.loads(subprocess.check_output(_args))
1844+ except ValueError:
1845+ return None
1846+
1847+
1848+def unit_private_ip():
1849+ return unit_get('private-address')
1850+
1851+
1852+class UnregisteredHookError(Exception):
1853+ pass
1854+
1855+
1856+class Hooks(object):
1857+ def __init__(self):
1858+ super(Hooks, self).__init__()
1859+ self._hooks = {}
1860+
1861+ def register(self, name, function):
1862+ self._hooks[name] = function
1863+
1864+ def execute(self, args):
1865+ hook_name = os.path.basename(args[0])
1866+ if hook_name in self._hooks:
1867+ self._hooks[hook_name]()
1868+ else:
1869+ raise UnregisteredHookError(hook_name)
1870+
1871+ def hook(self, *hook_names):
1872+ def wrapper(decorated):
1873+ for hook_name in hook_names:
1874+ self.register(hook_name, decorated)
1875+ else:
1876+ self.register(decorated.__name__, decorated)
1877+ if '_' in decorated.__name__:
1878+ self.register(
1879+ decorated.__name__.replace('_', '-'), decorated)
1880+ return decorated
1881+ return wrapper
1882+
1883+
1884+def charm_dir():
1885+ return os.environ.get('CHARM_DIR')
1886
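The `@cached` decorator above memoises hook-tool calls keyed on function plus arguments, and `flush()` drops matching entries after `relation_set` changes local data. A minimal runnable sketch of the pattern; the stub `unit_get` stands in for the real hook tool:

```python
# Minimal sketch of the hookenv.cached pattern: memoise results keyed on
# the (function, args, kwargs) tuple so repeated hook-tool invocations
# within one hook execution hit the subprocess only once.
cache = {}

def cached(func):
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@cached
def unit_get(attribute):           # stand-in for the real 'unit-get' tool
    calls.append(attribute)
    return 'value-for-%s' % attribute

unit_get('private-address')
unit_get('private-address')        # second call served from the cache
print(len(calls))                  # 1
```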
1887=== added file 'hooks/charmhelpers/core/host.py'
1888--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1889+++ hooks/charmhelpers/core/host.py 2013-07-19 09:54:29 +0000
1890@@ -0,0 +1,269 @@
1891+"""Tools for working with the host system"""
1892+# Copyright 2012 Canonical Ltd.
1893+#
1894+# Authors:
1895+# Nick Moffitt <nick.moffitt@canonical.com>
1896+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1897+
1898+import apt_pkg
1899+import os
1900+import pwd
1901+import grp
1902+import subprocess
1903+import hashlib
1904+
1905+from collections import OrderedDict
1906+
1907+from hookenv import log
1908+
1909+
1910+def service_start(service_name):
1911+ service('start', service_name)
1912+
1913+
1914+def service_stop(service_name):
1915+ service('stop', service_name)
1916+
1917+
1918+def service_restart(service_name):
1919+ service('restart', service_name)
1920+
1921+
1922+def service_reload(service_name, restart_on_failure=False):
1923+ if not service('reload', service_name) and restart_on_failure:
1924+ service('restart', service_name)
1925+
1926+
1927+def service(action, service_name):
1928+ cmd = ['service', service_name, action]
1929+ return subprocess.call(cmd) == 0
1930+
1931+
1932+def service_running(service):
1933+ try:
1934+ output = subprocess.check_output(['service', service, 'status'])
1935+ except subprocess.CalledProcessError:
1936+ return False
1937+ else:
1938+ if ("start/running" in output or "is running" in output):
1939+ return True
1940+ else:
1941+ return False
1942+
1943+
1944+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1945+ """Add a user"""
1946+ try:
1947+ user_info = pwd.getpwnam(username)
1948+ log('user {0} already exists!'.format(username))
1949+ except KeyError:
1950+ log('creating user {0}'.format(username))
1951+ cmd = ['useradd']
1952+ if system_user or password is None:
1953+ cmd.append('--system')
1954+ else:
1955+ cmd.extend([
1956+ '--create-home',
1957+ '--shell', shell,
1958+ '--password', password,
1959+ ])
1960+ cmd.append(username)
1961+ subprocess.check_call(cmd)
1962+ user_info = pwd.getpwnam(username)
1963+ return user_info
1964+
1965+
1966+def add_user_to_group(username, group):
1967+ """Add a user to a group"""
1968+ cmd = [
1969+ 'gpasswd', '-a',
1970+ username,
1971+ group
1972+ ]
1973+ log("Adding user {} to group {}".format(username, group))
1974+ subprocess.check_call(cmd)
1975+
1976+
1977+def rsync(from_path, to_path, flags='-r', options=None):
1978+ """Replicate the contents of a path"""
1979+ options = options or ['--delete', '--executability']
1980+ cmd = ['/usr/bin/rsync', flags]
1981+ cmd.extend(options)
1982+ cmd.append(from_path)
1983+ cmd.append(to_path)
1984+ log(" ".join(cmd))
1985+ return subprocess.check_output(cmd).strip()
1986+
1987+
1988+def symlink(source, destination):
1989+ """Create a symbolic link"""
1990+ log("Symlinking {} as {}".format(source, destination))
1991+ cmd = [
1992+ 'ln',
1993+ '-sf',
1994+ source,
1995+ destination,
1996+ ]
1997+ subprocess.check_call(cmd)
1998+
1999+
2000+def mkdir(path, owner='root', group='root', perms=0555, force=False):
2001+ """Create a directory"""
2002+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
2003+ perms))
2004+ uid = pwd.getpwnam(owner).pw_uid
2005+ gid = grp.getgrnam(group).gr_gid
2006+ realpath = os.path.abspath(path)
2007+ if os.path.exists(realpath):
2008+ if force and not os.path.isdir(realpath):
2009+ log("Removing non-directory file {} prior to mkdir()".format(path))
2010+ os.unlink(realpath)
2011+ else:
2012+ os.makedirs(realpath, perms)
2013+ os.chown(realpath, uid, gid)
2014+
2015+
2016+def write_file(path, content, owner='root', group='root', perms=0444):
2017+ """Create or overwrite a file with the contents of a string"""
2018+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2019+ uid = pwd.getpwnam(owner).pw_uid
2020+ gid = grp.getgrnam(group).gr_gid
2021+ with open(path, 'w') as target:
2022+ os.fchown(target.fileno(), uid, gid)
2023+ os.fchmod(target.fileno(), perms)
2024+ target.write(content)
2025+
2026+
2027+def filter_installed_packages(packages):
2028+ """Returns a list of packages that require installation"""
2029+ apt_pkg.init()
2030+ cache = apt_pkg.Cache()
2031+ _pkgs = []
2032+ for package in packages:
2033+ try:
2034+ p = cache[package]
2035+ p.current_ver or _pkgs.append(package)
2036+ except KeyError:
2037+ log('Package {} has no installation candidate.'.format(package),
2038+ level='WARNING')
2039+ _pkgs.append(package)
2040+ return _pkgs
2041+
2042+
2043+def apt_install(packages, options=None, fatal=False):
2044+ """Install one or more packages"""
2045+ options = options or []
2046+ cmd = ['apt-get', '-y']
2047+ cmd.extend(options)
2048+ cmd.append('install')
2049+ if isinstance(packages, basestring):
2050+ cmd.append(packages)
2051+ else:
2052+ cmd.extend(packages)
2053+ log("Installing {} with options: {}".format(packages,
2054+ options))
2055+ if fatal:
2056+ subprocess.check_call(cmd)
2057+ else:
2058+ subprocess.call(cmd)
2059+
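The option ordering in `apt_install` matters: extra options must land between `apt-get -y` and the `install` verb. A standalone sketch of just the command construction (the function name is illustrative, not part of charm-helpers; the original tests `basestring`, which is Python 2 — plain `str` is the Python 3 equivalent):

```python
def build_apt_cmd(packages, options=None):
    # Mirrors apt_install above: 'apt-get -y <options> install <packages>'
    options = options or []
    cmd = ['apt-get', '-y']
    cmd.extend(options)
    cmd.append('install')
    if isinstance(packages, str):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd

print(build_apt_cmd('quantum-dhcp-agent'))
# ['apt-get', '-y', 'install', 'quantum-dhcp-agent']
print(build_apt_cmd(['a', 'b'], options=['--no-install-recommends']))
# ['apt-get', '-y', '--no-install-recommends', 'install', 'a', 'b']
```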
2060+
2061+def apt_update(fatal=False):
2062+ """Update local apt cache"""
2063+ cmd = ['apt-get', 'update']
2064+ if fatal:
2065+ subprocess.check_call(cmd)
2066+ else:
2067+ subprocess.call(cmd)
2068+
2069+
2070+def mount(device, mountpoint, options=None, persist=False):
2071+ '''Mount a filesystem'''
2072+ cmd_args = ['mount']
2073+ if options is not None:
2074+ cmd_args.extend(['-o', options])
2075+ cmd_args.extend([device, mountpoint])
2076+ try:
2077+ subprocess.check_output(cmd_args)
2078+ except subprocess.CalledProcessError, e:
2079+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2080+ return False
2081+ if persist:
2082+ # TODO: update fstab
2083+ pass
2084+ return True
2085+
2086+
2087+def umount(mountpoint, persist=False):
2088+ '''Unmount a filesystem'''
2089+ cmd_args = ['umount', mountpoint]
2090+ try:
2091+ subprocess.check_output(cmd_args)
2092+ except subprocess.CalledProcessError, e:
2093+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2094+ return False
2095+ if persist:
2096+ # TODO: update fstab
2097+ pass
2098+ return True
2099+
2100+
2101+def mounts():
2102+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
2103+ with open('/proc/mounts') as f:
2104+ # [['/mount/point','/dev/path'],[...]]
2105+ system_mounts = [m[1::-1] for m in [l.strip().split()
2106+ for l in f.readlines()]]
2107+ return system_mounts
2108+
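The `m[1::-1]` slice in `mounts()` takes the first two whitespace-separated fields of each `/proc/mounts` line (device, mountpoint) and reverses them into `[mountpoint, device]`. The same parse, shown against a literal sample instead of the real file:

```python
def parse_mounts(text):
    # Each /proc/mounts line is "device mountpoint fstype options dump pass";
    # [1::-1] keeps fields 0 and 1 in reversed order.
    return [line.split()[1::-1] for line in text.strip().splitlines()]

sample = "/dev/sda1 / ext4 rw 0 0\nproc /proc proc rw 0 0\n"
print(parse_mounts(sample))  # [['/', '/dev/sda1'], ['/proc', 'proc']]
```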
2109+
2110+def file_hash(path):
2111+ ''' Generate a md5 hash of the contents of 'path' or None if not found '''
2112+ if os.path.exists(path):
2113+ h = hashlib.md5()
2114+ with open(path, 'r') as source:
2115+ h.update(source.read()) # IGNORE:E1101 - it does have update
2116+ return h.hexdigest()
2117+ else:
2118+ return None
2119+
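`file_hash` is the primitive that drives `restart_on_change` below: hash a config file's bytes, or return `None` when it does not exist yet. A self-contained sketch of the same behaviour (reading in binary mode here to avoid newline translation):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """Return the md5 hex digest of 'path', or None if it does not exist."""
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello')
print(file_hash(f.name) == hashlib.md5(b'hello').hexdigest())  # True
print(file_hash('/no/such/path'))  # None
os.unlink(f.name)
```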
2120+
2121+def restart_on_change(restart_map):
2122+ ''' Restart services based on configuration files changing
2123+
2124+ This function is used a decorator, for example
2125+
2126+ @restart_on_change({
2127+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
2128+ })
2129+ def ceph_client_changed():
2130+ ...
2131+
2132+ In this example, the cinder-api and cinder-volume services
2133+ would be restarted if /etc/ceph/ceph.conf is changed by the
2134+ ceph_client_changed function.
2135+ '''
2136+ def wrap(f):
2137+ def wrapped_f(*args):
2138+ checksums = {}
2139+ for path in restart_map:
2140+ checksums[path] = file_hash(path)
2141+ f(*args)
2142+ restarts = []
2143+ for path in restart_map:
2144+ if checksums[path] != file_hash(path):
2145+ restarts += restart_map[path]
2146+ for service_name in list(OrderedDict.fromkeys(restarts)):
2147+ service('restart', service_name)
2148+ return wrapped_f
2149+ return wrap
2150+
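The decorator above can be modelled without touching real files or services by injecting the hash and restart functions; a minimal sketch of the before/after checksum pattern (the `hash_fn`/`restart_fn` parameters are illustrative simplifications, not the charm-helpers signature):

```python
from collections import OrderedDict

def restart_on_change_sketch(restart_map, hash_fn, restart_fn):
    """Hash each watched path before and after the wrapped hook runs,
    then restart the services mapped to any path that changed."""
    def wrap(f):
        def wrapped(*args):
            before = {path: hash_fn(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if before[path] != hash_fn(path):
                    restarts += restart_map[path]
            # OrderedDict.fromkeys de-duplicates while preserving order,
            # so each service restarts at most once
            for name in OrderedDict.fromkeys(restarts):
                restart_fn(name)
        return wrapped
    return wrap

# Fake filesystem and service manager for demonstration
files = {'/etc/a.conf': 'v1', '/etc/b.conf': 'v1'}
restarted = []

@restart_on_change_sketch({'/etc/a.conf': ['svc-a'],
                           '/etc/b.conf': ['svc-a', 'svc-b']},
                          hash_fn=files.get, restart_fn=restarted.append)
def hook():
    files['/etc/b.conf'] = 'v2'  # only b.conf changes

hook()
print(restarted)  # ['svc-a', 'svc-b'] - each service restarted once
```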
2151+
2152+def lsb_release():
2153+ '''Return /etc/lsb-release in a dict'''
2154+ d = {}
2155+ with open('/etc/lsb-release', 'r') as lsb:
2156+ for l in lsb:
2157+ k, v = l.split('=')
2158+ d[k.strip()] = v.strip()
2159+ return d
2160
2161=== modified symlink 'hooks/cluster-relation-departed'
2162=== target changed u'hooks.py' => u'quantum_hooks.py'
2163=== modified symlink 'hooks/config-changed'
2164=== target changed u'hooks.py' => u'quantum_hooks.py'
2165=== modified symlink 'hooks/ha-relation-joined'
2166=== target changed u'hooks.py' => u'quantum_hooks.py'
2167=== modified symlink 'hooks/install'
2168=== target changed u'hooks.py' => u'quantum_hooks.py'
2169=== removed directory 'hooks/lib'
2170=== removed file 'hooks/lib/__init__.py'
2171=== removed file 'hooks/lib/cluster_utils.py'
2172--- hooks/lib/cluster_utils.py 2013-03-20 16:08:54 +0000
2173+++ hooks/lib/cluster_utils.py 1970-01-01 00:00:00 +0000
2174@@ -1,130 +0,0 @@
2175-#
2176-# Copyright 2012 Canonical Ltd.
2177-#
2178-# This file is sourced from lp:openstack-charm-helpers
2179-#
2180-# Authors:
2181-# James Page <james.page@ubuntu.com>
2182-# Adam Gandelman <adamg@ubuntu.com>
2183-#
2184-
2185-from lib.utils import (
2186- juju_log,
2187- relation_ids,
2188- relation_list,
2189- relation_get,
2190- get_unit_hostname,
2191- config_get
2192- )
2193-import subprocess
2194-import os
2195-
2196-
2197-def is_clustered():
2198- for r_id in (relation_ids('ha') or []):
2199- for unit in (relation_list(r_id) or []):
2200- clustered = relation_get('clustered',
2201- rid=r_id,
2202- unit=unit)
2203- if clustered:
2204- return True
2205- return False
2206-
2207-
2208-def is_leader(resource):
2209- cmd = [
2210- "crm", "resource",
2211- "show", resource
2212- ]
2213- try:
2214- status = subprocess.check_output(cmd)
2215- except subprocess.CalledProcessError:
2216- return False
2217- else:
2218- if get_unit_hostname() in status:
2219- return True
2220- else:
2221- return False
2222-
2223-
2224-def peer_units():
2225- peers = []
2226- for r_id in (relation_ids('cluster') or []):
2227- for unit in (relation_list(r_id) or []):
2228- peers.append(unit)
2229- return peers
2230-
2231-
2232-def oldest_peer(peers):
2233- local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
2234- for peer in peers:
2235- remote_unit_no = int(peer.split('/')[1])
2236- if remote_unit_no < local_unit_no:
2237- return False
2238- return True
2239-
2240-
2241-def eligible_leader(resource):
2242- if is_clustered():
2243- if not is_leader(resource):
2244- juju_log('INFO', 'Deferring action to CRM leader.')
2245- return False
2246- else:
2247- peers = peer_units()
2248- if peers and not oldest_peer(peers):
2249- juju_log('INFO', 'Deferring action to oldest service unit.')
2250- return False
2251- return True
2252-
2253-
2254-def https():
2255- '''
2256- Determines whether enough data has been provided in configuration
2257- or relation data to configure HTTPS
2258- .
2259- returns: boolean
2260- '''
2261- if config_get('use-https') == "yes":
2262- return True
2263- if config_get('ssl_cert') and config_get('ssl_key'):
2264- return True
2265- for r_id in relation_ids('identity-service'):
2266- for unit in relation_list(r_id):
2267- if (relation_get('https_keystone', rid=r_id, unit=unit) and
2268- relation_get('ssl_cert', rid=r_id, unit=unit) and
2269- relation_get('ssl_key', rid=r_id, unit=unit) and
2270- relation_get('ca_cert', rid=r_id, unit=unit)):
2271- return True
2272- return False
2273-
2274-
2275-def determine_api_port(public_port):
2276- '''
2277- Determine correct API server listening port based on
2278- existence of HTTPS reverse proxy and/or haproxy.
2279-
2280- public_port: int: standard public port for given service
2281-
2282- returns: int: the correct listening port for the API service
2283- '''
2284- i = 0
2285- if len(peer_units()) > 0 or is_clustered():
2286- i += 1
2287- if https():
2288- i += 1
2289- return public_port - (i * 10)
2290-
2291-
2292-def determine_haproxy_port(public_port):
2293- '''
2294- Description: Determine correct proxy listening port based on public IP +
2295- existence of HTTPS reverse proxy.
2296-
2297- public_port: int: standard public port for given service
2298-
2299- returns: int: the correct listening port for the HAProxy service
2300- '''
2301- i = 0
2302- if https():
2303- i += 1
2304- return public_port - (i * 10)
2305
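The removed `oldest_peer`/`determine_api_port` helpers (now provided by `charmhelpers.contrib.hahelpers.cluster`) encode two small pieces of arithmetic: leader election by lowest juju unit number, and stepping the API listen port back 10 per proxy layer. Both in isolation, as a sketch:

```python
def oldest_peer(local_unit, peers):
    """A unit is the 'oldest peer' (and thus eligible leader) when no
    peer has a lower unit number than its own."""
    local_no = int(local_unit.split('/')[1])
    return all(int(peer.split('/')[1]) >= local_no for peer in peers)

def determine_api_port(public_port, clustered, https):
    # Step back 10 ports per layer sitting in front of the API:
    # haproxy when clustered/peered, an HTTPS reverse proxy when enabled.
    offset = (10 if clustered else 0) + (10 if https else 0)
    return public_port - offset

print(oldest_peer('quantum-gateway/0', ['quantum-gateway/1', 'quantum-gateway/2']))  # True
print(oldest_peer('quantum-gateway/2', ['quantum-gateway/0', 'quantum-gateway/1']))  # False
print(determine_api_port(9696, clustered=True, https=False))  # 9686
```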
2306=== removed file 'hooks/lib/openstack_common.py'
2307--- hooks/lib/openstack_common.py 2013-05-22 22:24:48 +0000
2308+++ hooks/lib/openstack_common.py 1970-01-01 00:00:00 +0000
2309@@ -1,230 +0,0 @@
2310-#!/usr/bin/python
2311-
2312-# Common python helper functions used for OpenStack charms.
2313-
2314-import apt_pkg as apt
2315-import subprocess
2316-import os
2317-
2318-CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
2319-CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
2320-
2321-ubuntu_openstack_release = {
2322- 'oneiric': 'diablo',
2323- 'precise': 'essex',
2324- 'quantal': 'folsom',
2325- 'raring': 'grizzly',
2326-}
2327-
2328-
2329-openstack_codenames = {
2330- '2011.2': 'diablo',
2331- '2012.1': 'essex',
2332- '2012.2': 'folsom',
2333- '2013.1': 'grizzly',
2334- '2013.2': 'havana',
2335-}
2336-
2337-# The ugly duckling
2338-swift_codenames = {
2339- '1.4.3': 'diablo',
2340- '1.4.8': 'essex',
2341- '1.7.4': 'folsom',
2342- '1.7.6': 'grizzly',
2343- '1.7.7': 'grizzly',
2344- '1.8.0': 'grizzly',
2345-}
2346-
2347-
2348-def juju_log(msg):
2349- subprocess.check_call(['juju-log', msg])
2350-
2351-
2352-def error_out(msg):
2353- juju_log("FATAL ERROR: %s" % msg)
2354- exit(1)
2355-
2356-
2357-def lsb_release():
2358- '''Return /etc/lsb-release in a dict'''
2359- lsb = open('/etc/lsb-release', 'r')
2360- d = {}
2361- for l in lsb:
2362- k, v = l.split('=')
2363- d[k.strip()] = v.strip()
2364- return d
2365-
2366-
2367-def get_os_codename_install_source(src):
2368- '''Derive OpenStack release codename from a given installation source.'''
2369- ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2370-
2371- rel = ''
2372- if src == 'distro':
2373- try:
2374- rel = ubuntu_openstack_release[ubuntu_rel]
2375- except KeyError:
2376- e = 'Could not derive openstack release for '\
2377- 'this Ubuntu release: %s' % ubuntu_rel
2378- error_out(e)
2379- return rel
2380-
2381- if src.startswith('cloud:'):
2382- ca_rel = src.split(':')[1]
2383- ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
2384- return ca_rel
2385-
2386- # Best guess match based on deb string provided
2387- if src.startswith('deb') or src.startswith('ppa'):
2388- for k, v in openstack_codenames.iteritems():
2389- if v in src:
2390- return v
2391-
2392-
2393-def get_os_codename_version(vers):
2394- '''Determine OpenStack codename from version number.'''
2395- try:
2396- return openstack_codenames[vers]
2397- except KeyError:
2398- e = 'Could not determine OpenStack codename for version %s' % vers
2399- error_out(e)
2400-
2401-
2402-def get_os_version_codename(codename):
2403- '''Determine OpenStack version number from codename.'''
2404- for k, v in openstack_codenames.iteritems():
2405- if v == codename:
2406- return k
2407- e = 'Could not derive OpenStack version for '\
2408- 'codename: %s' % codename
2409- error_out(e)
2410-
2411-
2412-def get_os_codename_package(pkg):
2413- '''Derive OpenStack release codename from an installed package.'''
2414- apt.init()
2415- cache = apt.Cache()
2416- try:
2417- pkg = cache[pkg]
2418- except:
2419- e = 'Could not determine version of installed package: %s' % pkg
2420- error_out(e)
2421-
2422- vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
2423-
2424- try:
2425- if 'swift' in pkg.name:
2426- vers = vers[:5]
2427- return swift_codenames[vers]
2428- else:
2429- vers = vers[:6]
2430- return openstack_codenames[vers]
2431- except KeyError:
2432- e = 'Could not determine OpenStack codename for version %s' % vers
2433- error_out(e)
2434-
2435-
2436-def get_os_version_package(pkg):
2437- '''Derive OpenStack version number from an installed package.'''
2438- codename = get_os_codename_package(pkg)
2439-
2440- if 'swift' in pkg:
2441- vers_map = swift_codenames
2442- else:
2443- vers_map = openstack_codenames
2444-
2445- for version, cname in vers_map.iteritems():
2446- if cname == codename:
2447- return version
2448- e = "Could not determine OpenStack version for package: %s" % pkg
2449- error_out(e)
2450-
2451-
2452-def configure_installation_source(rel):
2453- '''Configure apt installation source.'''
2454-
2455- def _import_key(keyid):
2456- cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
2457- "--recv-keys %s" % keyid
2458- try:
2459- subprocess.check_call(cmd.split(' '))
2460- except subprocess.CalledProcessError:
2461- error_out("Error importing repo key %s" % keyid)
2462-
2463- if rel == 'distro':
2464- return
2465- elif rel[:4] == "ppa:":
2466- src = rel
2467- subprocess.check_call(["add-apt-repository", "-y", src])
2468- elif rel[:3] == "deb":
2469- l = len(rel.split('|'))
2470- if l == 2:
2471- src, key = rel.split('|')
2472- juju_log("Importing PPA key from keyserver for %s" % src)
2473- _import_key(key)
2474- elif l == 1:
2475- src = rel
2476- else:
2477- error_out("Invalid openstack-release: %s" % rel)
2478-
2479- with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2480- f.write(src)
2481- elif rel[:6] == 'cloud:':
2482- ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2483- rel = rel.split(':')[1]
2484- u_rel = rel.split('-')[0]
2485- ca_rel = rel.split('-')[1]
2486-
2487- if u_rel != ubuntu_rel:
2488- e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
2489- 'version (%s)' % (ca_rel, ubuntu_rel)
2490- error_out(e)
2491-
2492- if 'staging' in ca_rel:
2493- # staging is just a regular PPA.
2494- os_rel = ca_rel.split('/')[0]
2495- ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
2496- cmd = 'add-apt-repository -y %s' % ppa
2497- subprocess.check_call(cmd.split(' '))
2498- return
2499-
2500- # map charm config options to actual archive pockets.
2501- pockets = {
2502- 'folsom': 'precise-updates/folsom',
2503- 'folsom/updates': 'precise-updates/folsom',
2504- 'folsom/proposed': 'precise-proposed/folsom',
2505- 'grizzly': 'precise-updates/grizzly',
2506- 'grizzly/updates': 'precise-updates/grizzly',
2507- 'grizzly/proposed': 'precise-proposed/grizzly'
2508- }
2509-
2510- try:
2511- pocket = pockets[ca_rel]
2512- except KeyError:
2513- e = 'Invalid Cloud Archive release specified: %s' % rel
2514- error_out(e)
2515-
2516- src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
2517- _import_key(CLOUD_ARCHIVE_KEY_ID)
2518-
2519- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
2520- f.write(src)
2521- else:
2522- error_out("Invalid openstack-release specified: %s" % rel)
2523-
2524-
2525-def save_script_rc(script_path="scripts/scriptrc", **env_vars):
2526- """
2527- Write an rc file in the charm-delivered directory containing
2528- exported environment variables provided by env_vars. Any charm scripts run
2529- outside the juju hook environment can source this scriptrc to obtain
2530- updated config information necessary to perform health checks or
2531- service changes.
2532- """
2533- charm_dir = os.getenv('CHARM_DIR')
2534- juju_rc_path = "%s/%s" % (charm_dir, script_path)
2535- with open(juju_rc_path, 'wb') as rc_script:
2536- rc_script.write(
2537- "#!/bin/bash\n")
2538- [rc_script.write('export %s=%s\n' % (u, p))
2539- for u, p in env_vars.iteritems() if u != "script_path"]
2540
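All of the removed version helpers pivot on the version-to-codename dicts above; the reverse direction (`get_os_version_codename`) is a linear scan for a matching value. Reproduced as a self-contained sketch:

```python
openstack_codenames = {
    '2011.2': 'diablo',
    '2012.1': 'essex',
    '2012.2': 'folsom',
    '2013.1': 'grizzly',
    '2013.2': 'havana',
}

def get_os_version_codename(codename):
    # Reverse lookup: scan the version->codename map for the codename
    for version, name in openstack_codenames.items():
        if name == codename:
            return version
    raise KeyError('Could not derive version for codename: %s' % codename)

print(get_os_version_codename('grizzly'))  # 2013.1
```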
2541=== removed file 'hooks/lib/utils.py'
2542--- hooks/lib/utils.py 2013-04-12 15:39:37 +0000
2543+++ hooks/lib/utils.py 1970-01-01 00:00:00 +0000
2544@@ -1,359 +0,0 @@
2545-#
2546-# Copyright 2012 Canonical Ltd.
2547-#
2548-# This file is sourced from lp:openstack-charm-helpers
2549-#
2550-# Authors:
2551-# James Page <james.page@ubuntu.com>
2552-# Paul Collins <paul.collins@canonical.com>
2553-# Adam Gandelman <adamg@ubuntu.com>
2554-#
2555-
2556-import json
2557-import os
2558-import subprocess
2559-import socket
2560-import sys
2561-import hashlib
2562-
2563-
2564-def do_hooks(hooks):
2565- hook = os.path.basename(sys.argv[0])
2566-
2567- try:
2568- hook_func = hooks[hook]
2569- except KeyError:
2570- juju_log('INFO',
2571- "This charm doesn't know how to handle '{}'.".format(hook))
2572- else:
2573- hook_func()
2574-
2575-
2576-def install(*pkgs):
2577- cmd = [
2578- 'apt-get',
2579- '-y',
2580- 'install'
2581- ]
2582- for pkg in pkgs:
2583- cmd.append(pkg)
2584- subprocess.check_call(cmd)
2585-
2586-TEMPLATES_DIR = 'templates'
2587-
2588-try:
2589- import jinja2
2590-except ImportError:
2591- install('python-jinja2')
2592- import jinja2
2593-
2594-try:
2595- import dns.resolver
2596-except ImportError:
2597- install('python-dnspython')
2598- import dns.resolver
2599-
2600-
2601-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
2602- templates = jinja2.Environment(
2603- loader=jinja2.FileSystemLoader(template_dir)
2604- )
2605- template = templates.get_template(template_name)
2606- return template.render(context)
2607-
2608-CLOUD_ARCHIVE = \
2609-""" # Ubuntu Cloud Archive
2610-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2611-"""
2612-
2613-CLOUD_ARCHIVE_POCKETS = {
2614- 'folsom': 'precise-updates/folsom',
2615- 'folsom/updates': 'precise-updates/folsom',
2616- 'folsom/proposed': 'precise-proposed/folsom',
2617- 'grizzly': 'precise-updates/grizzly',
2618- 'grizzly/updates': 'precise-updates/grizzly',
2619- 'grizzly/proposed': 'precise-proposed/grizzly'
2620- }
2621-
2622-
2623-def configure_source():
2624- source = str(config_get('openstack-origin'))
2625- if not source:
2626- return
2627- if source.startswith('ppa:'):
2628- cmd = [
2629- 'add-apt-repository',
2630- source
2631- ]
2632- subprocess.check_call(cmd)
2633- if source.startswith('cloud:'):
2634- # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
2635- # cloud:precise-folsom/updates or cloud:precise-folsom/proposed
2636- install('ubuntu-cloud-keyring')
2637- pocket = source.split(':')[1]
2638- pocket = pocket.split('-')[1]
2639- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2640- apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
2641- if source.startswith('deb'):
2642- l = len(source.split('|'))
2643- if l == 2:
2644- (apt_line, key) = source.split('|')
2645- cmd = [
2646- 'apt-key',
2647- 'adv', '--keyserver keyserver.ubuntu.com',
2648- '--recv-keys', key
2649- ]
2650- subprocess.check_call(cmd)
2651- elif l == 1:
2652- apt_line = source
2653-
2654- with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
2655- apt.write(apt_line + "\n")
2656- cmd = [
2657- 'apt-get',
2658- 'update'
2659- ]
2660- subprocess.check_call(cmd)
2661-
2662-# Protocols
2663-TCP = 'TCP'
2664-UDP = 'UDP'
2665-
2666-
2667-def expose(port, protocol='TCP'):
2668- cmd = [
2669- 'open-port',
2670- '{}/{}'.format(port, protocol)
2671- ]
2672- subprocess.check_call(cmd)
2673-
2674-
2675-def juju_log(severity, message):
2676- cmd = [
2677- 'juju-log',
2678- '--log-level', severity,
2679- message
2680- ]
2681- subprocess.check_call(cmd)
2682-
2683-
2684-cache = {}
2685-
2686-
2687-def cached(func):
2688- def wrapper(*args, **kwargs):
2689- global cache
2690- key = str((func, args, kwargs))
2691- try:
2692- return cache[key]
2693- except KeyError:
2694- res = func(*args, **kwargs)
2695- cache[key] = res
2696- return res
2697- return wrapper
2698-
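The removed `cached` decorator memoises hook-tool lookups: shelling out to `relation-get` and friends is expensive, and their output is stable for the duration of one hook execution, so repeated calls with the same arguments can be served from a module-level dict. The pattern in isolation:

```python
cache = {}

def cached(func):
    """Memoise a function on str((func, args, kwargs))."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@cached
def lookup(name):
    calls.append(name)  # records how often the real work runs
    return name.upper()

print(lookup('db'), lookup('db'))  # DB DB
print(len(calls))  # 1 - the second call was served from the cache
```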
2699-
2700-@cached
2701-def relation_ids(relation):
2702- cmd = [
2703- 'relation-ids',
2704- relation
2705- ]
2706- result = str(subprocess.check_output(cmd)).split()
2707- if result == "":
2708- return None
2709- else:
2710- return result
2711-
2712-
2713-@cached
2714-def relation_list(rid):
2715- cmd = [
2716- 'relation-list',
2717- '-r', rid,
2718- ]
2719- result = str(subprocess.check_output(cmd)).split()
2720- if result == "":
2721- return None
2722- else:
2723- return result
2724-
2725-
2726-@cached
2727-def relation_get(attribute, unit=None, rid=None):
2728- cmd = [
2729- 'relation-get',
2730- ]
2731- if rid:
2732- cmd.append('-r')
2733- cmd.append(rid)
2734- cmd.append(attribute)
2735- if unit:
2736- cmd.append(unit)
2737- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
2738- if value == "":
2739- return None
2740- else:
2741- return value
2742-
2743-
2744-@cached
2745-def relation_get_dict(relation_id=None, remote_unit=None):
2746- """Obtain all relation data as dict by way of JSON"""
2747- cmd = [
2748- 'relation-get', '--format=json'
2749- ]
2750- if relation_id:
2751- cmd.append('-r')
2752- cmd.append(relation_id)
2753- if remote_unit:
2754- remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
2755- os.environ['JUJU_REMOTE_UNIT'] = remote_unit
2756- j = subprocess.check_output(cmd)
2757- if remote_unit and remote_unit_orig:
2758- os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
2759- d = json.loads(j)
2760- settings = {}
2761- # convert unicode to strings
2762- for k, v in d.iteritems():
2763- settings[str(k)] = str(v)
2764- return settings
2765-
2766-
2767-def relation_set(**kwargs):
2768- cmd = [
2769- 'relation-set'
2770- ]
2771- args = []
2772- for k, v in kwargs.items():
2773- if k == 'rid':
2774- if v:
2775- cmd.append('-r')
2776- cmd.append(v)
2777- else:
2778- args.append('{}={}'.format(k, v))
2779- cmd += args
2780- subprocess.check_call(cmd)
2781-
2782-
2783-@cached
2784-def unit_get(attribute):
2785- cmd = [
2786- 'unit-get',
2787- attribute
2788- ]
2789- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
2790- if value == "":
2791- return None
2792- else:
2793- return value
2794-
2795-
2796-@cached
2797-def config_get(attribute):
2798- cmd = [
2799- 'config-get',
2800- '--format',
2801- 'json',
2802- ]
2803- out = subprocess.check_output(cmd).strip() # IGNORE:E1103
2804- cfg = json.loads(out)
2805-
2806- try:
2807- return cfg[attribute]
2808- except KeyError:
2809- return None
2810-
2811-
2812-@cached
2813-def get_unit_hostname():
2814- return socket.gethostname()
2815-
2816-
2817-@cached
2818-def get_host_ip(hostname=unit_get('private-address')):
2819- try:
2820- # Test to see if already an IPv4 address
2821- socket.inet_aton(hostname)
2822- return hostname
2823- except socket.error:
2824- answers = dns.resolver.query(hostname, 'A')
2825- if answers:
2826- return answers[0].address
2827- return None
2828-
2829-
2830-def _svc_control(service, action):
2831- subprocess.check_call(['service', service, action])
2832-
2833-
2834-def restart(*services):
2835- for service in services:
2836- _svc_control(service, 'restart')
2837-
2838-
2839-def stop(*services):
2840- for service in services:
2841- _svc_control(service, 'stop')
2842-
2843-
2844-def start(*services):
2845- for service in services:
2846- _svc_control(service, 'start')
2847-
2848-
2849-def reload(*services):
2850- for service in services:
2851- try:
2852- _svc_control(service, 'reload')
2853- except subprocess.CalledProcessError:
2854- # Reload failed - either service does not support reload
2855- # or it was not running - restart will fixup most things
2856- _svc_control(service, 'restart')
2857-
2858-
2859-def running(service):
2860- try:
2861- output = subprocess.check_output(['service', service, 'status'])
2862- except subprocess.CalledProcessError:
2863- return False
2864- else:
2865- if ("start/running" in output or
2866- "is running" in output):
2867- return True
2868- else:
2869- return False
2870-
2871-
2872-def file_hash(path):
2873- if os.path.exists(path):
2874- h = hashlib.md5()
2875- with open(path, 'r') as source:
2876- h.update(source.read()) # IGNORE:E1101 - it does have update
2877- return h.hexdigest()
2878- else:
2879- return None
2880-
2881-
2882-def inteli_restart(restart_map):
2883- def wrap(f):
2884- def wrapped_f(*args):
2885- checksums = {}
2886- for path in restart_map:
2887- checksums[path] = file_hash(path)
2888- f(*args)
2889- restarts = []
2890- for path in restart_map:
2891- if checksums[path] != file_hash(path):
2892- restarts += restart_map[path]
2893- restart(*list(set(restarts)))
2894- return wrapped_f
2895- return wrap
2896-
2897-
2898-def is_relation_made(relation, key='private-address'):
2899- for r_id in (relation_ids(relation) or []):
2900- for unit in (relation_list(r_id) or []):
2901- if relation_get(key, rid=r_id, unit=unit):
2902- return True
2903- return False
2904
2905=== modified symlink 'hooks/quantum-network-service-relation-changed'
2906=== target changed u'hooks.py' => u'quantum_hooks.py'
2907=== added file 'hooks/quantum_contexts.py'
2908--- hooks/quantum_contexts.py 1970-01-01 00:00:00 +0000
2909+++ hooks/quantum_contexts.py 2013-07-19 09:54:29 +0000
2910@@ -0,0 +1,66 @@
2911+# vim: set ts=4:et
2912+from charmhelpers.core.hookenv import (
2913+ config,
2914+ relation_ids,
2915+ related_units,
2916+ relation_get,
2917+)
2918+from charmhelpers.contrib.openstack.context import (
2919+ OSContextGenerator,
2920+ context_complete
2921+)
2922+import quantum_utils as qutils
2923+
2924+
2925+class NetworkServiceContext(OSContextGenerator):
2926+ interfaces = ['quantum-network-service']
2927+
2928+ def __call__(self):
2929+ for rid in relation_ids('quantum-network-service'):
2930+ for unit in related_units(rid):
2931+ ctxt = {
2932+ 'keystone_host': relation_get('keystone_host',
2933+ rid=rid, unit=unit),
2934+ 'service_port': relation_get('service_port', rid=rid,
2935+ unit=unit),
2936+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
2937+ 'service_tenant': relation_get('service_tenant',
2938+ rid=rid, unit=unit),
2939+ 'service_username': relation_get('service_username',
2940+ rid=rid, unit=unit),
2941+ 'service_password': relation_get('service_password',
2942+ rid=rid, unit=unit),
2943+ 'quantum_host': relation_get('quantum_host',
2944+ rid=rid, unit=unit),
2945+ 'quantum_port': relation_get('quantum_port',
2946+ rid=rid, unit=unit),
2947+ 'quantum_url': relation_get('quantum_url',
2948+ rid=rid, unit=unit),
2949+ 'region': relation_get('region',
2950+ rid=rid, unit=unit),
2951+ # XXX: Hard-coded http.
2952+ 'service_protocol': 'http',
2953+ 'auth_protocol': 'http',
2954+ }
2955+ if context_complete(ctxt):
2956+ return ctxt
2957+ return {}
2958+
2959+
2960+class ExternalPortContext(OSContextGenerator):
2961+ def __call__(self):
2962+ if config('ext-port'):
2963+ return {"ext_port": config('ext-port')}
2964+ else:
2965+ return None
2966+
2967+
2968+class QuantumGatewayContext(OSContextGenerator):
2969+ def __call__(self):
2970+ ctxt = {
2971+ 'shared_secret': qutils.get_shared_secret(),
2972+ 'local_ip': qutils.get_host_ip(),
2973+ 'core_plugin': qutils.CORE_PLUGIN[config('plugin')],
2974+ 'plugin': config('plugin')
2975+ }
2976+ return ctxt
2977
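The context classes above all follow the same protocol: a callable that returns a template context dict when its relation/config data is complete, and an empty or `None` value otherwise, so the templating layer knows to skip rendering. A minimal model of that shape (the class and the injected `config` dict are illustrative stand-ins, and `context_complete` here is a simplified version of the charm-helpers function):

```python
def context_complete(ctxt):
    """A context is usable only when every required value is present."""
    return all(v is not None and v != '' for v in ctxt.values())

class ExternalPortContextSketch(object):
    """Mirrors ExternalPortContext above: emit a context only when the
    'ext-port' option is set."""
    def __init__(self, config):
        self.config = config  # stand-in for the charm config() lookup

    def __call__(self):
        if self.config.get('ext-port'):
            return {'ext_port': self.config['ext-port']}
        return None

print(ExternalPortContextSketch({'ext-port': 'eth1'})())  # {'ext_port': 'eth1'}
print(ExternalPortContextSketch({})())  # None
print(context_complete({'keystone_host': 'ks1', 'service_port': None}))  # False
```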
2978=== renamed file 'hooks/hooks.py' => 'hooks/quantum_hooks.py'
2979--- hooks/hooks.py 2013-05-22 23:02:16 +0000
2980+++ hooks/quantum_hooks.py 2013-07-19 09:54:29 +0000
2981@@ -1,313 +1,126 @@
2982 #!/usr/bin/python
2983
2984-import lib.utils as utils
2985-import lib.cluster_utils as cluster
2986-import lib.openstack_common as openstack
2987+from charmhelpers.core.hookenv import (
2988+ log, ERROR, WARNING,
2989+ config,
2990+ relation_get,
2991+ relation_set,
2992+ unit_get,
2993+ Hooks, UnregisteredHookError
2994+)
2995+from charmhelpers.core.host import (
2996+ apt_update,
2997+ apt_install,
2998+ filter_installed_packages,
2999+ restart_on_change
3000+)
3001+from charmhelpers.contrib.hahelpers.cluster import(
3002+ eligible_leader
3003+)
3004+from charmhelpers.contrib.hahelpers.apache import(
3005+ install_ca_cert
3006+)
3007+from charmhelpers.contrib.openstack.utils import (
3008+ configure_installation_source,
3009+ openstack_upgrade_available
3010+)
3011+
3012 import sys
3013-import quantum_utils as qutils
3014-import os
3015-
3016-PLUGIN = utils.config_get('plugin')
3017-
3018-
3019+from quantum_utils import (
3020+ register_configs,
3021+ restart_map,
3022+ do_openstack_upgrade,
3023+ get_packages,
3024+ get_early_packages,
3025+ valid_plugin,
3026+ RABBIT_USER,
3027+ RABBIT_VHOST,
3028+ DB_USER, QUANTUM_DB,
3029+ NOVA_DB_USER, NOVA_DB,
3030+ configure_ovs,
3031+ reassign_agent_resources,
3032+)
3033+
3034+hooks = Hooks()
3035+CONFIGS = register_configs()
3036+
3037+
3038+@hooks.hook('install')
3039 def install():
3040- utils.configure_source()
3041- if PLUGIN in qutils.GATEWAY_PKGS.keys():
3042- if PLUGIN == qutils.OVS:
3043- # Install OVS DKMS first to ensure that the ovs module
3044- # loaded supports GRE tunnels
3045- utils.install('openvswitch-datapath-dkms')
3046- utils.install(*qutils.GATEWAY_PKGS[PLUGIN])
3047+ configure_installation_source(config('openstack-origin'))
3048+ apt_update(fatal=True)
3049+ if valid_plugin():
3050+ apt_install(filter_installed_packages(get_early_packages()),
3051+ fatal=True)
3052+ apt_install(filter_installed_packages(get_packages()),
3053+ fatal=True)
3054 else:
3055- utils.juju_log('ERROR', 'Please provide a valid plugin config')
3056+ log('Please provide a valid plugin config', level=ERROR)
3057 sys.exit(1)
3058
3059
3060-@utils.inteli_restart(qutils.RESTART_MAP)
3061+@hooks.hook('config-changed')
3062+@restart_on_change(restart_map())
3063 def config_changed():
3064- src = utils.config_get('openstack-origin')
3065- available = openstack.get_os_codename_install_source(src)
3066- installed = openstack.get_os_codename_package('quantum-common')
3067- if (available and
3068- openstack.get_os_version_codename(available) > \
3069- openstack.get_os_version_codename(installed)):
3070- qutils.do_openstack_upgrade()
3071-
3072- if PLUGIN in qutils.GATEWAY_PKGS.keys():
3073- render_quantum_conf()
3074- render_dhcp_agent_conf()
3075- render_l3_agent_conf()
3076- render_metadata_agent_conf()
3077- render_metadata_api_conf()
3078- render_plugin_conf()
3079- render_ext_port_upstart()
3080- render_evacuate_unit()
3081- if PLUGIN == qutils.OVS:
3082- qutils.add_bridge(qutils.INT_BRIDGE)
3083- qutils.add_bridge(qutils.EXT_BRIDGE)
3084- ext_port = utils.config_get('ext-port')
3085- if ext_port:
3086- qutils.add_bridge_port(qutils.EXT_BRIDGE, ext_port)
3087+ if openstack_upgrade_available('quantum-common'):
3088+ do_openstack_upgrade(CONFIGS)
3089+ if valid_plugin():
3090+ CONFIGS.write_all()
3091+ configure_ovs()
3092 else:
3093- utils.juju_log('ERROR',
3094- 'Please provide a valid plugin config')
3095+ log('Please provide a valid plugin config', level=ERROR)
3096 sys.exit(1)
3097
3098
3099+@hooks.hook('upgrade-charm')
3100 def upgrade_charm():
3101 install()
3102 config_changed()
3103
3104
3105-def render_ext_port_upstart():
3106- if utils.config_get('ext-port'):
3107- with open(qutils.EXT_PORT_CONF, "w") as conf:
3108- conf.write(utils.render_template(
3109- os.path.basename(qutils.EXT_PORT_CONF),
3110- {"ext_port": utils.config_get('ext-port')}
3111- )
3112- )
3113- else:
3114- if os.path.exists(qutils.EXT_PORT_CONF):
3115- os.remove(qutils.EXT_PORT_CONF)
3116-
3117-
3118-def render_l3_agent_conf():
3119- context = get_keystone_conf()
3120- if (context and
3121- os.path.exists(qutils.L3_AGENT_CONF)):
3122- with open(qutils.L3_AGENT_CONF, "w") as conf:
3123- conf.write(utils.render_template(
3124- os.path.basename(qutils.L3_AGENT_CONF),
3125- context
3126- )
3127- )
3128-
3129-
3130-def render_dhcp_agent_conf():
3131- if (os.path.exists(qutils.DHCP_AGENT_CONF)):
3132- with open(qutils.DHCP_AGENT_CONF, "w") as conf:
3133- conf.write(utils.render_template(
3134- os.path.basename(qutils.DHCP_AGENT_CONF),
3135- {}
3136- )
3137- )
3138-
3139-
3140-def render_metadata_agent_conf():
3141- context = get_keystone_conf()
3142- if (context and
3143- os.path.exists(qutils.METADATA_AGENT_CONF)):
3144- context['local_ip'] = utils.get_host_ip()
3145- context['shared_secret'] = qutils.get_shared_secret()
3146- with open(qutils.METADATA_AGENT_CONF, "w") as conf:
3147- conf.write(utils.render_template(
3148- os.path.basename(qutils.METADATA_AGENT_CONF),
3149- context
3150- )
3151- )
3152-
3153-
3154-def render_quantum_conf():
3155- context = get_rabbit_conf()
3156- if (context and
3157- os.path.exists(qutils.QUANTUM_CONF)):
3158- context['core_plugin'] = \
3159- qutils.CORE_PLUGIN[PLUGIN]
3160- with open(qutils.QUANTUM_CONF, "w") as conf:
3161- conf.write(utils.render_template(
3162- os.path.basename(qutils.QUANTUM_CONF),
3163- context
3164- )
3165- )
3166-
3167-
3168-def render_plugin_conf():
3169- context = get_quantum_db_conf()
3170- if (context and
3171- os.path.exists(qutils.PLUGIN_CONF[PLUGIN])):
3172- context['local_ip'] = utils.get_host_ip()
3173- conf_file = qutils.PLUGIN_CONF[PLUGIN]
3174- with open(conf_file, "w") as conf:
3175- conf.write(utils.render_template(
3176- os.path.basename(conf_file),
3177- context
3178- )
3179- )
3180-
3181-
3182-def render_metadata_api_conf():
3183- context = get_nova_db_conf()
3184- r_context = get_rabbit_conf()
3185- q_context = get_keystone_conf()
3186- if (context and r_context and q_context and
3187- os.path.exists(qutils.NOVA_CONF)):
3188- context.update(r_context)
3189- context.update(q_context)
3190- context['shared_secret'] = qutils.get_shared_secret()
3191- with open(qutils.NOVA_CONF, "w") as conf:
3192- conf.write(utils.render_template(
3193- os.path.basename(qutils.NOVA_CONF),
3194- context
3195- )
3196- )
3197-
3198-
3199-def render_evacuate_unit():
3200- context = get_keystone_conf()
3201- if context:
3202- with open('/usr/local/bin/quantum-evacuate-unit', "w") as conf:
3203- conf.write(utils.render_template('evacuate_unit.py', context))
3204- os.chmod('/usr/local/bin/quantum-evacuate-unit', 0700)
3205-
3206-
3207-def get_keystone_conf():
3208- for relid in utils.relation_ids('quantum-network-service'):
3209- for unit in utils.relation_list(relid):
3210- conf = {
3211- "keystone_host": utils.relation_get('keystone_host',
3212- unit, relid),
3213- "service_port": utils.relation_get('service_port',
3214- unit, relid),
3215- "auth_port": utils.relation_get('auth_port', unit, relid),
3216- "service_username": utils.relation_get('service_username',
3217- unit, relid),
3218- "service_password": utils.relation_get('service_password',
3219- unit, relid),
3220- "service_tenant": utils.relation_get('service_tenant',
3221- unit, relid),
3222- "quantum_host": utils.relation_get('quantum_host',
3223- unit, relid),
3224- "quantum_port": utils.relation_get('quantum_port',
3225- unit, relid),
3226- "quantum_url": utils.relation_get('quantum_url',
3227- unit, relid),
3228- "region": utils.relation_get('region',
3229- unit, relid)
3230- }
3231- if None not in conf.itervalues():
3232- return conf
3233- return None
3234-
3235-
3236+@hooks.hook('shared-db-relation-joined')
3237 def db_joined():
3238- utils.relation_set(quantum_username=qutils.DB_USER,
3239- quantum_database=qutils.QUANTUM_DB,
3240- quantum_hostname=utils.unit_get('private-address'),
3241- nova_username=qutils.NOVA_DB_USER,
3242- nova_database=qutils.NOVA_DB,
3243- nova_hostname=utils.unit_get('private-address'))
3244-
3245-
3246-@utils.inteli_restart(qutils.RESTART_MAP)
3247-def db_changed():
3248- render_plugin_conf()
3249- render_metadata_api_conf()
3250-
3251-
3252-def get_quantum_db_conf():
3253- for relid in utils.relation_ids('shared-db'):
3254- for unit in utils.relation_list(relid):
3255- conf = {
3256- "host": utils.relation_get('db_host',
3257- unit, relid),
3258- "user": qutils.DB_USER,
3259- "password": utils.relation_get('quantum_password',
3260- unit, relid),
3261- "db": qutils.QUANTUM_DB
3262- }
3263- if None not in conf.itervalues():
3264- return conf
3265- return None
3266-
3267-
3268-def get_nova_db_conf():
3269- for relid in utils.relation_ids('shared-db'):
3270- for unit in utils.relation_list(relid):
3271- conf = {
3272- "host": utils.relation_get('db_host',
3273- unit, relid),
3274- "user": qutils.NOVA_DB_USER,
3275- "password": utils.relation_get('nova_password',
3276- unit, relid),
3277- "db": qutils.NOVA_DB
3278- }
3279- if None not in conf.itervalues():
3280- return conf
3281- return None
3282-
3283-
3284+ relation_set(quantum_username=DB_USER,
3285+ quantum_database=QUANTUM_DB,
3286+ quantum_hostname=unit_get('private-address'),
3287+ nova_username=NOVA_DB_USER,
3288+ nova_database=NOVA_DB,
3289+ nova_hostname=unit_get('private-address'))
3290+
3291+
3292+@hooks.hook('amqp-relation-joined')
3293 def amqp_joined():
3294- utils.relation_set(username=qutils.RABBIT_USER,
3295- vhost=qutils.RABBIT_VHOST)
3296-
3297-
3298-@utils.inteli_restart(qutils.RESTART_MAP)
3299-def amqp_changed():
3300- render_dhcp_agent_conf()
3301- render_quantum_conf()
3302- render_metadata_api_conf()
3303-
3304-
3305-def get_rabbit_conf():
3306- for relid in utils.relation_ids('amqp'):
3307- for unit in utils.relation_list(relid):
3308- conf = {
3309- "rabbit_host": utils.relation_get('private-address',
3310- unit, relid),
3311- "rabbit_virtual_host": qutils.RABBIT_VHOST,
3312- "rabbit_userid": qutils.RABBIT_USER,
3313- "rabbit_password": utils.relation_get('password',
3314- unit, relid)
3315- }
3316- clustered = utils.relation_get('clustered', unit, relid)
3317- if clustered:
3318- conf['rabbit_host'] = utils.relation_get('vip', unit, relid)
3319- if None not in conf.itervalues():
3320- return conf
3321- return None
3322-
3323-
3324-@utils.inteli_restart(qutils.RESTART_MAP)
3325+ relation_set(username=RABBIT_USER,
3326+ vhost=RABBIT_VHOST)
3327+
3328+
3329+@hooks.hook('shared-db-relation-changed',
3330+ 'amqp-relation-changed')
3331+@restart_on_change(restart_map())
3332+def db_amqp_changed():
3333+ CONFIGS.write_all()
3334+
3335+
3336+@hooks.hook('quantum-network-service-relation-changed')
3337+@restart_on_change(restart_map())
3338 def nm_changed():
3339- render_dhcp_agent_conf()
3340- render_l3_agent_conf()
3341- render_metadata_agent_conf()
3342- render_metadata_api_conf()
3343- render_evacuate_unit()
3344- store_ca_cert()
3345-
3346-
3347-def store_ca_cert():
3348- ca_cert = get_ca_cert()
3349- if ca_cert:
3350- qutils.install_ca(ca_cert)
3351-
3352-
3353-def get_ca_cert():
3354- for relid in utils.relation_ids('quantum-network-service'):
3355- for unit in utils.relation_list(relid):
3356- ca_cert = utils.relation_get('ca_cert', unit, relid)
3357- if ca_cert:
3358- return ca_cert
3359- return None
3360-
3361-
3362+ CONFIGS.write_all()
3363+ if relation_get('ca_cert'):
3364+ install_ca_cert(relation_get('ca_cert'))
3365+
3366+
3367+@hooks.hook("cluster-relation-departed")
3368 def cluster_departed():
3369- conf = get_keystone_conf()
3370- if conf and cluster.eligible_leader(None):
3371- qutils.reassign_agent_resources(conf)
3372-
3373-utils.do_hooks({
3374- "install": install,
3375- "config-changed": config_changed,
3376- "upgrade-charm": upgrade_charm,
3377- "shared-db-relation-joined": db_joined,
3378- "shared-db-relation-changed": db_changed,
3379- "amqp-relation-joined": amqp_joined,
3380- "amqp-relation-changed": amqp_changed,
3381- "quantum-network-service-relation-changed": nm_changed,
3382- "cluster-relation-departed": cluster_departed
3383- })
3384-
3385-sys.exit(0)
3386+ if config('plugin') == 'nvp':
3387+ log('Unable to re-assign agent resources for failed nodes with nvp',
3388+ level=WARNING)
3389+ return
3390+ if eligible_leader(None):
3391+ reassign_agent_resources()
3392+
3393+
3394+if __name__ == '__main__':
3395+ try:
3396+ hooks.execute(sys.argv)
3397+ except UnregisteredHookError as e:
3398+ log('Unknown hook {} - skipping.'.format(e))
3399
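The hunk above replaces the old `utils.do_hooks` dispatch table with charmhelpers' decorator-based `Hooks` registry. A minimal sketch of that dispatch pattern (a simplified stand-in for illustration, not the actual charmhelpers implementation):

```python
import os


class UnregisteredHookError(Exception):
    """Raised when Juju fires a hook no function was registered for."""


class Hooks(object):
    """Minimal decorator-based hook dispatcher."""

    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        # Register one function under one or more hook names.
        def wrapper(decorated):
            for name in hook_names:
                self._registry[name] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        # Juju invokes hooks via symlink, e.g. hooks/amqp-relation-joined,
        # so the hook name is the basename of argv[0].
        hook_name = os.path.basename(args[0])
        if hook_name not in self._registry:
            raise UnregisteredHookError(hook_name)
        self._registry[hook_name]()


hooks = Hooks()
fired = []


@hooks.hook('amqp-relation-joined')
def amqp_joined():
    fired.append('amqp-relation-joined')
```

`hooks.execute(['hooks/amqp-relation-joined'])` then dispatches to `amqp_joined`, while an unregistered name raises `UnregisteredHookError`, which the charm's `__main__` block catches and logs as an unknown hook.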
3400=== modified file 'hooks/quantum_utils.py'
3401--- hooks/quantum_utils.py 2013-05-25 21:11:15 +0000
3402+++ hooks/quantum_utils.py 2013-07-19 09:54:29 +0000
3403@@ -1,28 +1,54 @@
3404-import subprocess
3405 import os
3406 import uuid
3407-import base64
3408-import apt_pkg as apt
3409-from lib.utils import (
3410- juju_log as log,
3411- configure_source,
3412- config_get
3413- )
3414-
3415+import socket
3416+from charmhelpers.core.hookenv import (
3417+ log,
3418+ config,
3419+ unit_get,
3420+ cached
3421+)
3422+from charmhelpers.core.host import (
3423+ apt_install,
3424+ apt_update
3425+)
3426+from charmhelpers.contrib.network.ovs import (
3427+ add_bridge,
3428+ add_bridge_port
3429+)
3430+from charmhelpers.contrib.openstack.utils import (
3431+ configure_installation_source,
3432+ get_os_codename_package,
3433+ get_os_codename_install_source
3434+)
3435+import charmhelpers.contrib.openstack.context as context
3436+import charmhelpers.contrib.openstack.templating as templating
3437+import quantum_contexts
3438+from collections import OrderedDict
3439
3440 OVS = "ovs"
3441+NVP = "nvp"
3442
3443 OVS_PLUGIN = \
3444 "quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2"
3445+NVP_PLUGIN = \
3446+ "quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2"
3447 CORE_PLUGIN = {
3448 OVS: OVS_PLUGIN,
3449- }
3450+ NVP: NVP_PLUGIN
3451+}
3452+
3453+
3454+def valid_plugin():
3455+ return config('plugin') in CORE_PLUGIN
3456
3457 OVS_PLUGIN_CONF = \
3458 "/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini"
3459+NVP_PLUGIN_CONF = \
3460+ "/etc/quantum/plugins/nicira/nvp.ini"
3461 PLUGIN_CONF = {
3462 OVS: OVS_PLUGIN_CONF,
3463- }
3464+ NVP: NVP_PLUGIN_CONF
3465+}
3466
3467 GATEWAY_PKGS = {
3468 OVS: [
3469@@ -31,34 +57,128 @@
3470 "quantum-dhcp-agent",
3471 'python-mysqldb',
3472 "nova-api-metadata"
3473- ],
3474- }
3475-
3476-GATEWAY_AGENTS = {
3477- OVS: [
3478- "quantum-plugin-openvswitch-agent",
3479- "quantum-l3-agent",
3480+ ],
3481+ NVP: [
3482+ "openvswitch-switch",
3483 "quantum-dhcp-agent",
3484+ 'python-mysqldb',
3485 "nova-api-metadata"
3486- ],
3487- }
3488+ ]
3489+}
3490+
3491+EARLY_PACKAGES = {
3492+ OVS: ['openvswitch-datapath-dkms']
3493+}
3494+
3495+
3496+def get_early_packages():
3497+    '''Return a list of packages for pre-install based on the configured plugin'''
3498+ if config('plugin') in EARLY_PACKAGES:
3499+ return EARLY_PACKAGES[config('plugin')]
3500+ else:
3501+ return []
3502+
3503+
3504+def get_packages():
3505+ '''Return a list of packages for install based on the configured plugin'''
3506+ return GATEWAY_PKGS[config('plugin')]
3507
3508 EXT_PORT_CONF = '/etc/init/ext-port.conf'
3509-
3510-
3511-def get_os_version(package=None):
3512- apt.init()
3513- cache = apt.Cache()
3514- pkg = cache[package or 'quantum-common']
3515- if pkg.current_ver:
3516- return apt.upstream_version(pkg.current_ver.ver_str)
3517- else:
3518- return None
3519-
3520-
3521-if get_os_version('quantum-common') >= "2013.1":
3522- for plugin in GATEWAY_AGENTS:
3523- GATEWAY_AGENTS[plugin].append("quantum-metadata-agent")
3524+TEMPLATES = 'templates'
3525+
3526+QUANTUM_CONF = "/etc/quantum/quantum.conf"
3527+L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
3528+DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
3529+METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
3530+NOVA_CONF = "/etc/nova/nova.conf"
3531+
3532+SHARED_CONFIG_FILES = {
3533+ DHCP_AGENT_CONF: {
3534+ 'hook_contexts': [quantum_contexts.QuantumGatewayContext()],
3535+ 'services': ['quantum-dhcp-agent']
3536+ },
3537+ METADATA_AGENT_CONF: {
3538+ 'hook_contexts': [quantum_contexts.NetworkServiceContext()],
3539+ 'services': ['quantum-metadata-agent']
3540+ },
3541+ NOVA_CONF: {
3542+ 'hook_contexts': [context.AMQPContext(),
3543+ context.SharedDBContext(),
3544+ quantum_contexts.NetworkServiceContext(),
3545+ quantum_contexts.QuantumGatewayContext()],
3546+ 'services': ['nova-api-metadata']
3547+ },
3548+}
3549+
3550+OVS_CONFIG_FILES = {
3551+ QUANTUM_CONF: {
3552+ 'hook_contexts': [context.AMQPContext(),
3553+ quantum_contexts.QuantumGatewayContext()],
3554+ 'services': ['quantum-l3-agent',
3555+ 'quantum-dhcp-agent',
3556+ 'quantum-metadata-agent',
3557+ 'quantum-plugin-openvswitch-agent']
3558+ },
3559+ L3_AGENT_CONF: {
3560+ 'hook_contexts': [quantum_contexts.NetworkServiceContext()],
3561+ 'services': ['quantum-l3-agent']
3562+ },
3563+ # TODO: Check to see if this is actually required
3564+ OVS_PLUGIN_CONF: {
3565+ 'hook_contexts': [context.SharedDBContext(),
3566+ quantum_contexts.QuantumGatewayContext()],
3567+ 'services': ['quantum-plugin-openvswitch-agent']
3568+ },
3569+ EXT_PORT_CONF: {
3570+ 'hook_contexts': [quantum_contexts.ExternalPortContext()],
3571+ 'services': []
3572+ }
3573+}
3574+
3575+NVP_CONFIG_FILES = {
3576+ QUANTUM_CONF: {
3577+ 'hook_contexts': [context.AMQPContext()],
3578+ 'services': ['quantum-dhcp-agent', 'quantum-metadata-agent']
3579+ },
3580+}
3581+
3582+CONFIG_FILES = {
3583+    NVP: dict(SHARED_CONFIG_FILES, **NVP_CONFIG_FILES),
3584+    OVS: dict(SHARED_CONFIG_FILES, **OVS_CONFIG_FILES),
3585+}
3586+
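A Python pitfall lurks in merges like the `CONFIG_FILES` construction above: `dict.update()` mutates its receiver and returns `None`, so using its return value as a mapping value silently stores `None`. A toy sketch of the failure and a safe merge (dict names here are illustrative, not the charm's real config maps):

```python
SHARED = {'/etc/nova/nova.conf': ['nova-api-metadata']}
NVP_ONLY = {'/etc/quantum/quantum.conf': ['quantum-dhcp-agent']}

# Pitfall: update() mutates in place and returns None, so the
# mapping value ends up as None rather than the merged dict.
broken = {'nvp': dict(NVP_ONLY).update(SHARED)}

# Safe: build a fresh merged copy instead.
merged = dict(SHARED, **NVP_ONLY)
```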
3587+
3588+def register_configs():
3589+ ''' Register config files with their respective contexts. '''
3590+ release = get_os_codename_package('quantum-common', fatal=False) or \
3591+ 'essex'
3592+ configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
3593+ openstack_release=release)
3594+
3595+ plugin = config('plugin')
3596+ for conf in CONFIG_FILES[plugin]:
3597+        configs.register(conf, CONFIG_FILES[plugin][conf]['hook_contexts'])
3598+
3599+ return configs
3600+
3601+
3602+def restart_map():
3603+ '''
3604+ Determine the correct resource map to be passed to
3605+ charmhelpers.core.restart_on_change() based on the services configured.
3606+
3607+    :returns: dict: A dictionary mapping config file paths to lists of
3608+        services that should be restarted when the file changes.
3609+ '''
3610+ _map = []
3611+ for f, ctxt in CONFIG_FILES[config('plugin')].iteritems():
3612+ svcs = []
3613+ for svc in ctxt['services']:
3614+ svcs.append(svc)
3615+ if svcs:
3616+ _map.append((f, svcs))
3617+ return OrderedDict(_map)
3618+
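`restart_map()` flattens the per-plugin config metadata into the file-to-services mapping that `restart_on_change` consumes. The transformation in isolation, with toy data and Python 3 `items()` in place of the charm's Python 2 `iteritems()`:

```python
from collections import OrderedDict

# Toy stand-in for the charm's per-plugin CONFIG_FILES structure.
CONFIG_FILES = {
    '/etc/quantum/quantum.conf': {'services': ['quantum-dhcp-agent',
                                               'quantum-metadata-agent']},
    '/etc/init/ext-port.conf': {'services': []},  # no services: dropped
}


def restart_map(config_files):
    # Keep only files that actually map to services needing a restart.
    return OrderedDict((path, list(meta['services']))
                       for path, meta in config_files.items()
                       if meta['services'])
```

Files with an empty service list (such as the upstart `ext-port.conf`) are filtered out, so a change to them restarts nothing.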
3619
3620 DB_USER = "quantum"
3621 QUANTUM_DB = "quantum"
3622@@ -66,77 +186,12 @@
3623 NOVA_DB_USER = "nova"
3624 NOVA_DB = "nova"
3625
3626-QUANTUM_CONF = "/etc/quantum/quantum.conf"
3627-L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
3628-DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
3629-METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
3630-NOVA_CONF = "/etc/nova/nova.conf"
3631-
3632-RESTART_MAP = {
3633- QUANTUM_CONF: [
3634- 'quantum-l3-agent',
3635- 'quantum-dhcp-agent',
3636- 'quantum-metadata-agent',
3637- 'quantum-plugin-openvswitch-agent'
3638- ],
3639- DHCP_AGENT_CONF: [
3640- 'quantum-dhcp-agent'
3641- ],
3642- L3_AGENT_CONF: [
3643- 'quantum-l3-agent'
3644- ],
3645- METADATA_AGENT_CONF: [
3646- 'quantum-metadata-agent'
3647- ],
3648- OVS_PLUGIN_CONF: [
3649- 'quantum-plugin-openvswitch-agent'
3650- ],
3651- NOVA_CONF: [
3652- 'nova-api-metadata'
3653- ]
3654- }
3655-
3656 RABBIT_USER = "nova"
3657 RABBIT_VHOST = "nova"
3658
3659 INT_BRIDGE = "br-int"
3660 EXT_BRIDGE = "br-ex"
3661
3662-
3663-def add_bridge(name):
3664- status = subprocess.check_output(["ovs-vsctl", "show"])
3665- if "Bridge {}".format(name) not in status:
3666- log('INFO', 'Creating bridge {}'.format(name))
3667- subprocess.check_call(["ovs-vsctl", "add-br", name])
3668-
3669-
3670-def del_bridge(name):
3671- status = subprocess.check_output(["ovs-vsctl", "show"])
3672- if "Bridge {}".format(name) in status:
3673- log('INFO', 'Deleting bridge {}'.format(name))
3674- subprocess.check_call(["ovs-vsctl", "del-br", name])
3675-
3676-
3677-def add_bridge_port(name, port):
3678- status = subprocess.check_output(["ovs-vsctl", "show"])
3679- if ("Bridge {}".format(name) in status and
3680- "Interface \"{}\"".format(port) not in status):
3681- log('INFO',
3682- 'Adding port {} to bridge {}'.format(port, name))
3683- subprocess.check_call(["ovs-vsctl", "add-port", name, port])
3684- subprocess.check_call(["ip", "link", "set", port, "up"])
3685-
3686-
3687-def del_bridge_port(name, port):
3688- status = subprocess.check_output(["ovs-vsctl", "show"])
3689- if ("Bridge {}".format(name) in status and
3690- "Interface \"{}\"".format(port) in status):
3691- log('INFO',
3692- 'Deleting port {} from bridge {}'.format(port, name))
3693- subprocess.check_call(["ovs-vsctl", "del-port", name, port])
3694- subprocess.check_call(["ip", "link", "set", port, "down"])
3695-
3696-
3697 SHARED_SECRET = "/etc/quantum/secret.txt"
3698
3699
3700@@ -151,33 +206,22 @@
3701 secret = secret_file.read().strip()
3702 return secret
3703
3704-
3705-def flush_local_configuration():
3706- if os.path.exists('/usr/bin/quantum-netns-cleanup'):
3707- cmd = [
3708- "quantum-netns-cleanup",
3709- "--config-file=/etc/quantum/quantum.conf"
3710- ]
3711- for agent_conf in ['l3_agent.ini', 'dhcp_agent.ini']:
3712- agent_cmd = list(cmd)
3713- agent_cmd.append('--config-file=/etc/quantum/{}'\
3714- .format(agent_conf))
3715- subprocess.call(agent_cmd)
3716-
3717-
3718-def install_ca(ca_cert):
3719- with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
3720- 'w') as crt:
3721- crt.write(base64.b64decode(ca_cert))
3722- subprocess.check_call(['update-ca-certificates', '--fresh'])
3723-
3724 DHCP_AGENT = "DHCP Agent"
3725 L3_AGENT = "L3 Agent"
3726
3727
3728-def reassign_agent_resources(env):
3729+def reassign_agent_resources():
3730 ''' Use agent scheduler API to detect down agents and re-schedule '''
3731- from quantumclient.v2_0 import client
3732+ env = quantum_contexts.NetworkServiceContext()()
3733+ if not env:
3734+ log('Unable to re-assign resources at this time')
3735+ return
3736+ try:
3737+ from quantumclient.v2_0 import client
3738+ except ImportError:
3739+ ''' Try to import neutronclient instead for havana+ '''
3740+ from neutronclient.v2_0 import client
3741+
3742 # TODO: Fixup for https keystone
3743 auth_url = 'http://%(keystone_host)s:%(auth_port)s/v2.0' % env
3744 quantum = client.Client(username=env['service_username'],
3745@@ -192,7 +236,7 @@
3746 networks = {}
3747 for agent in agents['agents']:
3748 if not agent['alive']:
3749- log('INFO', 'DHCP Agent %s down' % agent['id'])
3750+ log('DHCP Agent %s down' % agent['id'])
3751 for network in \
3752 quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
3753 networks[network['id']] = agent['id']
3754@@ -203,7 +247,7 @@
3755 routers = {}
3756 for agent in agents['agents']:
3757 if not agent['alive']:
3758- log('INFO', 'L3 Agent %s down' % agent['id'])
3759+ log('L3 Agent %s down' % agent['id'])
3760 for router in \
3761 quantum.list_routers_on_l3_agent(agent['id'])['routers']:
3762 routers[router['id']] = agent['id']
3763@@ -213,8 +257,7 @@
3764 index = 0
3765 for router_id in routers:
3766 agent = index % len(l3_agents)
3767- log('INFO',
3768- 'Moving router %s from %s to %s' % \
3769+ log('Moving router %s from %s to %s' %
3770 (router_id, routers[router_id], l3_agents[agent]))
3771 quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
3772 router_id=router_id)
3773@@ -225,8 +268,7 @@
3774 index = 0
3775 for network_id in networks:
3776 agent = index % len(dhcp_agents)
3777- log('INFO',
3778- 'Moving network %s from %s to %s' % \
3779+ log('Moving network %s from %s to %s' %
3780 (network_id, networks[network_id], dhcp_agents[agent]))
3781 quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
3782 network_id=network_id)
3783@@ -234,16 +276,57 @@
3784 body={'network_id': network_id})
3785 index += 1
3786
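Both reassignment loops above spread orphaned resources over the surviving agents with an `index % len(agents)` round-robin. That distribution logic in isolation (hypothetical IDs, no Quantum API calls):

```python
def round_robin_assign(orphaned, live_agents):
    """Map each orphaned resource id to a live agent, round-robin."""
    assignment = {}
    for index, resource_id in enumerate(orphaned):
        assignment[resource_id] = live_agents[index % len(live_agents)]
    return assignment
```

With two live agents and three orphaned routers, the third router wraps back to the first agent, keeping the load roughly even.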
3787-def do_openstack_upgrade():
3788- configure_source()
3789- plugin = config_get('plugin')
3790- pkgs = []
3791- if plugin in GATEWAY_PKGS.keys():
3792- pkgs += GATEWAY_PKGS[plugin]
3793- if plugin == OVS:
3794- pkgs.append('openvswitch-datapath-dkms')
3795- cmd = ['apt-get', '-y',
3796- '--option', 'Dpkg::Options::=--force-confold',
3797- '--option', 'Dpkg::Options::=--force-confdef',
3798- 'install'] + pkgs
3799- subprocess.check_call(cmd)
3800+
3801+def do_openstack_upgrade(configs):
3802+ """
3803+ Perform an upgrade. Takes care of upgrading packages, rewriting
3804+ configs, database migrations and potentially any other post-upgrade
3805+ actions.
3806+
3807+ :param configs: The charms main OSConfigRenderer object.
3808+ """
3809+ new_src = config('openstack-origin')
3810+ new_os_rel = get_os_codename_install_source(new_src)
3811+
3812+ log('Performing OpenStack upgrade to %s.' % (new_os_rel))
3813+
3814+ configure_installation_source(new_src)
3815+ dpkg_opts = [
3816+ '--option', 'Dpkg::Options::=--force-confnew',
3817+ '--option', 'Dpkg::Options::=--force-confdef',
3818+ ]
3819+ apt_update(fatal=True)
3820+ apt_install(packages=GATEWAY_PKGS[config('plugin')], options=dpkg_opts,
3821+ fatal=True)
3822+
3823+ # set CONFIGS to load templates from new release
3824+ configs.set_release(openstack_release=new_os_rel)
3825+
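The two `Dpkg::Options` flags are what keep the upgrade non-interactive: dpkg resolves conffile conflicts itself instead of prompting. A sketch of the apt-get command line those options translate to (assembled by hand here; the package name is illustrative):

```python
def build_upgrade_cmd(packages):
    # --force-confnew: install the packaged conffile over a modified one;
    # --force-confdef: otherwise take dpkg's default action, never prompt.
    dpkg_opts = ['--option', 'Dpkg::Options::=--force-confnew',
                 '--option', 'Dpkg::Options::=--force-confdef']
    return ['apt-get', '--assume-yes'] + dpkg_opts + ['install'] + packages
```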
3826+
3827+@cached
3828+def get_host_ip(hostname=None):
3829+ try:
3830+ import dns.resolver
3831+ except ImportError:
3832+ apt_install('python-dnspython', fatal=True)
3833+ import dns.resolver
3834+ hostname = hostname or unit_get('private-address')
3835+ try:
3836+ # Test to see if already an IPv4 address
3837+ socket.inet_aton(hostname)
3838+ return hostname
3839+ except socket.error:
3840+ answers = dns.resolver.query(hostname, 'A')
3841+ if answers:
3842+ return answers[0].address
3843+
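`get_host_ip()` short-circuits when the unit's `private-address` already parses as IPv4, and only then falls back to a DNS `A` lookup. The address test on its own (note `socket.inet_aton` also accepts shorthand forms like `'10.0.1'`):

```python
import socket


def is_ipv4(value):
    """True if value parses as an IPv4 address (dotted-quad or shorthand)."""
    try:
        socket.inet_aton(value)
        return True
    except socket.error:
        return False
```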
3844+
3845+def configure_ovs():
3846+ if config('plugin') == OVS:
3847+ add_bridge(INT_BRIDGE)
3848+ add_bridge(EXT_BRIDGE)
3849+ ext_port = config('ext-port')
3850+ if ext_port:
3851+ add_bridge_port(EXT_BRIDGE, ext_port)
3852+ if config('plugin') == NVP:
3853+ add_bridge(INT_BRIDGE)
3854
3855=== modified symlink 'hooks/shared-db-relation-changed'
3856=== target changed u'hooks.py' => u'quantum_hooks.py'
3857=== modified symlink 'hooks/shared-db-relation-joined'
3858=== target changed u'hooks.py' => u'quantum_hooks.py'
3859=== modified symlink 'hooks/start'
3860=== target changed u'hooks.py' => u'quantum_hooks.py'
3861=== modified symlink 'hooks/stop'
3862=== target changed u'hooks.py' => u'quantum_hooks.py'
3863=== modified symlink 'hooks/upgrade-charm'
3864=== target changed u'hooks.py' => u'quantum_hooks.py'
3865=== added file 'setup.cfg'
3866--- setup.cfg 1970-01-01 00:00:00 +0000
3867+++ setup.cfg 2013-07-19 09:54:29 +0000
3868@@ -0,0 +1,5 @@
3869+[nosetests]
3870+verbosity=2
3871+with-coverage=1
3872+cover-erase=1
3873+cover-package=hooks
3874
3875=== modified file 'templates/dhcp_agent.ini'
3876--- templates/dhcp_agent.ini 2012-11-05 11:59:27 +0000
3877+++ templates/dhcp_agent.ini 2013-07-19 09:54:29 +0000
3878@@ -3,3 +3,8 @@
3879 interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
3880 dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
3881 root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
3882+{% if plugin == 'nvp' %}
3883+ovs_use_veth = True
3884+enable_metadata_network = True
3885+enable_isolated_metadata = True
3886+{% endif %}
3887
3888=== removed file 'templates/evacuate_unit.py'
3889--- templates/evacuate_unit.py 2013-04-12 16:27:28 +0000
3890+++ templates/evacuate_unit.py 1970-01-01 00:00:00 +0000
3891@@ -1,70 +0,0 @@
3892-#!/usr/bin/python
3893-
3894-import subprocess
3895-
3896-
3897-def log(priority, message):
3898- print "{}: {}".format(priority, message)
3899-
3900-DHCP_AGENT = "DHCP Agent"
3901-L3_AGENT = "L3 Agent"
3902-
3903-
3904-def evacuate_unit(unit):
3905- ''' Use agent scheduler API to detect down agents and re-schedule '''
3906- from quantumclient.v2_0 import client
3907- # TODO: Fixup for https keystone
3908- auth_url = 'http://{{ keystone_host }}:{{ auth_port }}/v2.0'
3909- quantum = client.Client(username='{{ service_username }}',
3910- password='{{ service_password }}',
3911- tenant_name='{{ service_tenant }}',
3912- auth_url=auth_url,
3913- region_name='{{ region }}')
3914-
3915- agents = quantum.list_agents(agent_type=DHCP_AGENT)
3916- dhcp_agents = []
3917- l3_agents = []
3918- networks = {}
3919- for agent in agents['agents']:
3920- if agent['alive'] and agent['host'] != unit:
3921- dhcp_agents.append(agent['id'])
3922- elif agent['host'] == unit:
3923- for network in \
3924- quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
3925- networks[network['id']] = agent['id']
3926-
3927- agents = quantum.list_agents(agent_type=L3_AGENT)
3928- routers = {}
3929- for agent in agents['agents']:
3930- if agent['alive'] and agent['host'] != unit:
3931- l3_agents.append(agent['id'])
3932- elif agent['host'] == unit:
3933- for router in \
3934- quantum.list_routers_on_l3_agent(agent['id'])['routers']:
3935- routers[router['id']] = agent['id']
3936-
3937- index = 0
3938- for router_id in routers:
3939- agent = index % len(l3_agents)
3940- log('INFO',
3941- 'Moving router %s from %s to %s' % \
3942- (router_id, routers[router_id], l3_agents[agent]))
3943- quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
3944- router_id=router_id)
3945- quantum.add_router_to_l3_agent(l3_agent=l3_agents[agent],
3946- body={'router_id': router_id})
3947- index += 1
3948-
3949- index = 0
3950- for network_id in networks:
3951- agent = index % len(dhcp_agents)
3952- log('INFO',
3953- 'Moving network %s from %s to %s' % \
3954- (network_id, networks[network_id], dhcp_agents[agent]))
3955- quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
3956- network_id=network_id)
3957- quantum.add_network_to_dhcp_agent(dhcp_agent=dhcp_agents[agent],
3958- body={'network_id': network_id})
3959- index += 1
3960-
3961-evacuate_unit(subprocess.check_output(['hostname', '-f']).strip())
3962
3963=== modified file 'templates/l3_agent.ini'
3964--- templates/l3_agent.ini 2013-01-22 18:36:19 +0000
3965+++ templates/l3_agent.ini 2013-07-19 09:54:29 +0000
3966@@ -1,6 +1,6 @@
3967 [DEFAULT]
3968 interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
3969-auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
3970+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
3971 auth_region = {{ region }}
3972 admin_tenant_name = {{ service_tenant }}
3973 admin_user = {{ service_username }}
3974
3975=== modified file 'templates/metadata_agent.ini'
3976--- templates/metadata_agent.ini 2013-01-22 18:36:19 +0000
3977+++ templates/metadata_agent.ini 2013-07-19 09:54:29 +0000
3978@@ -1,6 +1,6 @@
3979 [DEFAULT]
3980 debug = True
3981-auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
3982+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
3983 auth_region = {{ region }}
3984 admin_tenant_name = {{ service_tenant }}
3985 admin_user = {{ service_username }}
3986
3987=== modified file 'templates/nova.conf'
3988--- templates/nova.conf 2013-03-11 12:25:14 +0000
3989+++ templates/nova.conf 2013-07-19 09:54:29 +0000
3990@@ -22,4 +22,4 @@
3991 quantum_admin_tenant_name={{ service_tenant }}
3992 quantum_admin_username={{ service_username }}
3993 quantum_admin_password={{ service_password }}
3994-quantum_admin_auth_url=http://{{ keystone_host }}:{{ service_port }}/v2.0
3995+quantum_admin_auth_url={{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
3996
3997=== added directory 'unit_tests'
3998=== added file 'unit_tests/__init__.py'
