Merge lp:~openstack-charmers/charms/precise/quantum-gateway/python-redux into lp:~charmers/charms/precise/quantum-gateway/trunk

Proposed by Adam Gandelman
Status: Merged
Merged at revision: 34
Proposed branch: lp:~openstack-charmers/charms/precise/quantum-gateway/python-redux
Merge into: lp:~charmers/charms/precise/quantum-gateway/trunk
Diff against target: 5803 lines (+4032/-1295)
48 files modified
.bzrignore (+1/-0)
.coveragerc (+6/-0)
.pydevproject (+1/-1)
Makefile (+14/-0)
README.md (+10/-12)
charm-helpers-sync.yaml (+9/-0)
config.yaml (+11/-1)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+72/-0)
hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+241/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/lib/cluster_utils.py (+0/-130)
hooks/lib/openstack_common.py (+0/-230)
hooks/lib/utils.py (+0/-359)
hooks/quantum_contexts.py (+175/-0)
hooks/quantum_hooks.py (+114/-291)
hooks/quantum_utils.py (+337/-181)
metadata.yaml (+7/-7)
setup.cfg (+5/-0)
templates/evacuate_unit.py (+0/-70)
templates/folsom/dhcp_agent.ini (+5/-0)
templates/folsom/l3_agent.ini (+1/-1)
templates/folsom/metadata_agent.ini (+1/-1)
templates/folsom/nova.conf (+6/-6)
templates/folsom/ovs_quantum_plugin.ini (+1/-1)
templates/folsom/quantum.conf (+4/-4)
templates/havana/dhcp_agent.ini (+10/-0)
templates/havana/l3_agent.ini (+9/-0)
templates/havana/metadata_agent.ini (+17/-0)
templates/havana/neutron.conf (+20/-0)
templates/havana/nova.conf (+25/-0)
templates/havana/ovs_neutron_plugin.ini (+11/-0)
unit_tests/__init__.py (+2/-0)
unit_tests/test_quantum_contexts.py (+240/-0)
unit_tests/test_quantum_hooks.py (+154/-0)
unit_tests/test_quantum_utils.py (+202/-0)
unit_tests/test_utils.py (+97/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/quantum-gateway/python-redux
Reviewer: charmers (status: Pending)
Review via email: mp+191086@code.launchpad.net

Description of the change

Update of all Havana / Saucy / python-redux work:

* Full Python rewrite using the new OpenStack charm-helpers.

* Unit test coverage.

* Havana support.


Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2013-10-15 01:35:59 +0000
4@@ -0,0 +1,1 @@
5+.coverage
6
7=== added file '.coveragerc'
8--- .coveragerc 1970-01-01 00:00:00 +0000
9+++ .coveragerc 2013-10-15 01:35:59 +0000
10@@ -0,0 +1,6 @@
11+[report]
12+# Regexes for lines to exclude from consideration
13+exclude_lines =
14+ if __name__ == .__main__.:
15+include=
16+ hooks/quantum_*
17
18=== modified file '.pydevproject'
19--- .pydevproject 2013-04-12 16:19:51 +0000
20+++ .pydevproject 2013-10-15 01:35:59 +0000
21@@ -4,6 +4,6 @@
22 <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
23 <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
24 <path>/quantum-gateway/hooks</path>
25-<path>/quantum-gateway/templates</path>
26+<path>/quantum-gateway/unit_tests</path>
27 </pydev_pathproperty>
28 </pydev_project>
29
30=== added file 'Makefile'
31--- Makefile 1970-01-01 00:00:00 +0000
32+++ Makefile 2013-10-15 01:35:59 +0000
33@@ -0,0 +1,14 @@
34+#!/usr/bin/make
35+PYTHON := /usr/bin/env python
36+
37+lint:
38+ @flake8 --exclude hooks/charmhelpers hooks
39+ @flake8 --exclude hooks/charmhelpers unit_tests
40+ @charm proof
41+
42+test:
43+ @echo Starting tests...
44+ @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
45+
46+sync:
47+ @charm-helper-sync -c charm-helpers-sync.yaml
48
49=== modified file 'README.md'
50--- README.md 2012-12-06 10:22:24 +0000
51+++ README.md 2013-10-15 01:35:59 +0000
52@@ -1,7 +1,7 @@
53-Overview
54+
55 --------
56
57-Quantum provides flexible software defined networking (SDN) for OpenStack.
58+Neutron provides flexible software defined networking (SDN) for OpenStack.
59
60 This charm is designed to be used in conjunction with the rest of the OpenStack
61 related charms in the charm store) to virtualized the network that Nova Compute
62@@ -11,34 +11,34 @@
63 support all of the features as nova-network (such as multihost) so may not
64 be suitable for all.
65
66-Quantum supports a rich plugin/extension framework for propriety networking
67+Neutron supports a rich plugin/extension framework for propriety networking
68 solutions and supports (in core) Nicira NVP, NEC, Cisco and others...
69
70 The Openstack charms currently only support the fully free OpenvSwitch plugin
71 and implements the 'Provider Router with Private Networks' use case.
72
73-See the upstream [Quantum documentation](http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html)
74+See the upstream [Neutron documentation](http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html)
75 for more details.
76
77
78 Usage
79 -----
80
81-In order to use Quantum with Openstack, you will need to deploy the
82+In order to use Neutron with Openstack, you will need to deploy the
83 nova-compute and nova-cloud-controller charms with the network-manager
84-configuration set to 'Quantum':
85+configuration set to 'Neutron':
86
87 nova-cloud-controller:
88- network-manager: Quantum
89+ network-manager: Neutron
90
91 This decision must be made prior to deploying Openstack with Juju as
92-Quantum is deployed baked into these charms from install onwards:
93+Neutron is deployed baked into these charms from install onwards:
94
95 juju deploy nova-compute
96 juju deploy --config config.yaml nova-cloud-controller
97 juju add-relation nova-compute nova-cloud-controller
98
99-The Quantum Gateway can then be added to the deploying:
100+The Neutron Gateway can then be added to the deploying:
101
102 juju deploy quantum-gateway
103 juju add-relation quantum-gateway mysql
104@@ -47,12 +47,10 @@
105
106 The gateway provides two key services; L3 network routing and DHCP services.
107
108-These are both required in a fully functional Quantum Openstack deployment.
109+These are both required in a fully functional Neutron Openstack deployment.
110
111 TODO
112 ----
113
114 * Provide more network configuration use cases.
115 * Support VLAN in addition to GRE+OpenFlow for L2 separation.
116- * High Avaliability.
117- * Support for propriety plugins for Quantum.
118
119=== added file 'charm-helpers-sync.yaml'
120--- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000
121+++ charm-helpers-sync.yaml 2013-10-15 01:35:59 +0000
122@@ -0,0 +1,9 @@
123+branch: lp:charm-helpers
124+destination: hooks/charmhelpers
125+include:
126+ - core
127+ - fetch
128+ - contrib.openstack
129+ - contrib.hahelpers
130+ - contrib.network.ovs
131+ - payload.execd
132
133=== modified file 'config.yaml'
134--- config.yaml 2013-03-20 16:08:54 +0000
135+++ config.yaml 2013-10-15 01:35:59 +0000
136@@ -7,6 +7,7 @@
137 Supported values include:
138 .
139 ovs - OpenVSwitch
140+ nvp - Nicira NVP
141 ext-port:
142 type: string
143 description: |
144@@ -14,7 +15,7 @@
145 traffic to the external public network.
146 openstack-origin:
147 type: string
148- default: cloud:precise-folsom
149+ default: distro
150 description: |
151 Optional configuration to support use of additional sources such as:
152 .
153@@ -22,3 +23,12 @@
154 - cloud:precise-folsom/proposed
155 - cloud:precise-folsom
156 - deb http://my.archive.com/ubuntu main|KEYID
157+ .
158+ Note that quantum/neutron is only supports > Folsom.
159+ rabbit-user:
160+ type: string
161+ default: nova
162+ rabbit-vhost:
163+ type: string
164+ default: nova
165+
166
167=== modified symlink 'hooks/amqp-relation-changed'
168=== target changed u'hooks.py' => u'quantum_hooks.py'
169=== modified symlink 'hooks/amqp-relation-joined'
170=== target changed u'hooks.py' => u'quantum_hooks.py'
171=== added directory 'hooks/charmhelpers'
172=== added file 'hooks/charmhelpers/__init__.py'
173=== added directory 'hooks/charmhelpers/contrib'
174=== added file 'hooks/charmhelpers/contrib/__init__.py'
175=== added directory 'hooks/charmhelpers/contrib/hahelpers'
176=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
177=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
178--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
179+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-15 01:35:59 +0000
180@@ -0,0 +1,58 @@
181+#
182+# Copyright 2012 Canonical Ltd.
183+#
184+# This file is sourced from lp:openstack-charm-helpers
185+#
186+# Authors:
187+# James Page <james.page@ubuntu.com>
188+# Adam Gandelman <adamg@ubuntu.com>
189+#
190+
191+import subprocess
192+
193+from charmhelpers.core.hookenv import (
194+ config as config_get,
195+ relation_get,
196+ relation_ids,
197+ related_units as relation_list,
198+ log,
199+ INFO,
200+)
201+
202+
203+def get_cert():
204+ cert = config_get('ssl_cert')
205+ key = config_get('ssl_key')
206+ if not (cert and key):
207+ log("Inspecting identity-service relations for SSL certificate.",
208+ level=INFO)
209+ cert = key = None
210+ for r_id in relation_ids('identity-service'):
211+ for unit in relation_list(r_id):
212+ if not cert:
213+ cert = relation_get('ssl_cert',
214+ rid=r_id, unit=unit)
215+ if not key:
216+ key = relation_get('ssl_key',
217+ rid=r_id, unit=unit)
218+ return (cert, key)
219+
220+
221+def get_ca_cert():
222+ ca_cert = None
223+ log("Inspecting identity-service relations for CA SSL certificate.",
224+ level=INFO)
225+ for r_id in relation_ids('identity-service'):
226+ for unit in relation_list(r_id):
227+ if not ca_cert:
228+ ca_cert = relation_get('ca_cert',
229+ rid=r_id, unit=unit)
230+ return ca_cert
231+
232+
233+def install_ca_cert(ca_cert):
234+ if ca_cert:
235+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
236+ 'w') as crt:
237+ crt.write(ca_cert)
238+ subprocess.check_call(['update-ca-certificates', '--fresh'])
239
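Review note: `get_cert` prefers `ssl_cert`/`ssl_key` from charm config and only falls back to scanning identity-service relation data when either value is missing (resetting both before the scan). The fallback order can be sketched with plain dicts standing in for `config_get`/`relation_get`; the function name and data below are illustrative, not part of the charm-helpers API:

```python
def get_cert_from(config, relations):
    """Prefer ssl_cert/ssl_key from charm config; otherwise take the
    first values found across identity-service relation units."""
    cert, key = config.get('ssl_cert'), config.get('ssl_key')
    if cert and key:
        return (cert, key)
    # Mirrors the helper: both values are reset before the relation scan.
    cert = key = None
    for unit_data in relations:  # one dict per related unit
        cert = cert or unit_data.get('ssl_cert')
        key = key or unit_data.get('ssl_key')
    return (cert, key)

# Config wins when both values are present.
print(get_cert_from({'ssl_cert': 'C', 'ssl_key': 'K'}, []))  # ('C', 'K')
# Otherwise relation data is consulted unit by unit.
print(get_cert_from({}, [{'ssl_cert': 'RC'}, {'ssl_key': 'RK'}]))  # ('RC', 'RK')
```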
240=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
241--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
242+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-15 01:35:59 +0000
243@@ -0,0 +1,183 @@
244+#
245+# Copyright 2012 Canonical Ltd.
246+#
247+# Authors:
248+# James Page <james.page@ubuntu.com>
249+# Adam Gandelman <adamg@ubuntu.com>
250+#
251+
252+import subprocess
253+import os
254+
255+from socket import gethostname as get_unit_hostname
256+
257+from charmhelpers.core.hookenv import (
258+ log,
259+ relation_ids,
260+ related_units as relation_list,
261+ relation_get,
262+ config as config_get,
263+ INFO,
264+ ERROR,
265+ unit_get,
266+)
267+
268+
269+class HAIncompleteConfig(Exception):
270+ pass
271+
272+
273+def is_clustered():
274+ for r_id in (relation_ids('ha') or []):
275+ for unit in (relation_list(r_id) or []):
276+ clustered = relation_get('clustered',
277+ rid=r_id,
278+ unit=unit)
279+ if clustered:
280+ return True
281+ return False
282+
283+
284+def is_leader(resource):
285+ cmd = [
286+ "crm", "resource",
287+ "show", resource
288+ ]
289+ try:
290+ status = subprocess.check_output(cmd)
291+ except subprocess.CalledProcessError:
292+ return False
293+ else:
294+ if get_unit_hostname() in status:
295+ return True
296+ else:
297+ return False
298+
299+
300+def peer_units():
301+ peers = []
302+ for r_id in (relation_ids('cluster') or []):
303+ for unit in (relation_list(r_id) or []):
304+ peers.append(unit)
305+ return peers
306+
307+
308+def oldest_peer(peers):
309+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
310+ for peer in peers:
311+ remote_unit_no = int(peer.split('/')[1])
312+ if remote_unit_no < local_unit_no:
313+ return False
314+ return True
315+
316+
317+def eligible_leader(resource):
318+ if is_clustered():
319+ if not is_leader(resource):
320+ log('Deferring action to CRM leader.', level=INFO)
321+ return False
322+ else:
323+ peers = peer_units()
324+ if peers and not oldest_peer(peers):
325+ log('Deferring action to oldest service unit.', level=INFO)
326+ return False
327+ return True
328+
329+
330+def https():
331+ '''
332+ Determines whether enough data has been provided in configuration
333+ or relation data to configure HTTPS
334+ .
335+ returns: boolean
336+ '''
337+ if config_get('use-https') == "yes":
338+ return True
339+ if config_get('ssl_cert') and config_get('ssl_key'):
340+ return True
341+ for r_id in relation_ids('identity-service'):
342+ for unit in relation_list(r_id):
343+ rel_state = [
344+ relation_get('https_keystone', rid=r_id, unit=unit),
345+ relation_get('ssl_cert', rid=r_id, unit=unit),
346+ relation_get('ssl_key', rid=r_id, unit=unit),
347+ relation_get('ca_cert', rid=r_id, unit=unit),
348+ ]
349+ # NOTE: works around (LP: #1203241)
350+ if (None not in rel_state) and ('' not in rel_state):
351+ return True
352+ return False
353+
354+
355+def determine_api_port(public_port):
356+ '''
357+ Determine correct API server listening port based on
358+ existence of HTTPS reverse proxy and/or haproxy.
359+
360+ public_port: int: standard public port for given service
361+
362+ returns: int: the correct listening port for the API service
363+ '''
364+ i = 0
365+ if len(peer_units()) > 0 or is_clustered():
366+ i += 1
367+ if https():
368+ i += 1
369+ return public_port - (i * 10)
370+
371+
372+def determine_haproxy_port(public_port):
373+ '''
374+ Description: Determine correct proxy listening port based on public IP +
375+ existence of HTTPS reverse proxy.
376+
377+ public_port: int: standard public port for given service
378+
379+ returns: int: the correct listening port for the HAProxy service
380+ '''
381+ i = 0
382+ if https():
383+ i += 1
384+ return public_port - (i * 10)
385+
386+
387+def get_hacluster_config():
388+ '''
389+ Obtains all relevant configuration from charm configuration required
390+ for initiating a relation to hacluster:
391+
392+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
393+
394+ returns: dict: A dict containing settings keyed by setting name.
395+ raises: HAIncompleteConfig if settings are missing.
396+ '''
397+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
398+ conf = {}
399+ for setting in settings:
400+ conf[setting] = config_get(setting)
401+ missing = []
402+ [missing.append(s) for s, v in conf.iteritems() if v is None]
403+ if missing:
404+ log('Insufficient config data to configure hacluster.', level=ERROR)
405+ raise HAIncompleteConfig
406+ return conf
407+
408+
409+def canonical_url(configs, vip_setting='vip'):
410+ '''
411+ Returns the correct HTTP URL to this host given the state of HTTPS
412+ configuration and hacluster.
413+
414+ :configs : OSTemplateRenderer: A config tempating object to inspect for
415+ a complete https context.
416+ :vip_setting: str: Setting in charm config that specifies
417+ VIP address.
418+ '''
419+ scheme = 'http'
420+ if 'https' in configs.complete_contexts():
421+ scheme = 'https'
422+ if is_clustered():
423+ addr = config_get(vip_setting)
424+ else:
425+ addr = unit_get('private-address')
426+ return '%s://%s' % (scheme, addr)
427
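Review note: the port arithmetic in `determine_api_port`/`determine_haproxy_port` steps the listening port down by 10 for each proxy layer sitting in front of the service: haproxy (when the unit has peers or is clustered) and the apache HTTPS frontend. A minimal standalone restatement of that math (function names here are illustrative, not the charm-helpers API):

```python
def api_port(public_port, clustered=False, https=False):
    """The API service listens 10 below the public port per layer
    in front of it: haproxy and/or the HTTPS reverse proxy."""
    offset = (10 if clustered else 0) + (10 if https else 0)
    return public_port - offset


def haproxy_port(public_port, https=False):
    """haproxy itself only sits behind the optional HTTPS frontend."""
    return public_port - (10 if https else 0)


# e.g. a public port of 5000 behind both haproxy and SSL:
print(api_port(5000, clustered=True, https=True))  # 4980
```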
428=== added directory 'hooks/charmhelpers/contrib/network'
429=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
430=== added directory 'hooks/charmhelpers/contrib/network/ovs'
431=== added file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
432--- hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000
433+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2013-10-15 01:35:59 +0000
434@@ -0,0 +1,72 @@
435+''' Helpers for interacting with OpenvSwitch '''
436+import subprocess
437+import os
438+from charmhelpers.core.hookenv import (
439+ log, WARNING
440+)
441+from charmhelpers.core.host import (
442+ service
443+)
444+
445+
446+def add_bridge(name):
447+ ''' Add the named bridge to openvswitch '''
448+ log('Creating bridge {}'.format(name))
449+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
450+
451+
452+def del_bridge(name):
453+ ''' Delete the named bridge from openvswitch '''
454+ log('Deleting bridge {}'.format(name))
455+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
456+
457+
458+def add_bridge_port(name, port):
459+ ''' Add a port to the named openvswitch bridge '''
460+ log('Adding port {} to bridge {}'.format(port, name))
461+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
462+ name, port])
463+ subprocess.check_call(["ip", "link", "set", port, "up"])
464+
465+
466+def del_bridge_port(name, port):
467+ ''' Delete a port from the named openvswitch bridge '''
468+ log('Deleting port {} from bridge {}'.format(port, name))
469+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
470+ name, port])
471+ subprocess.check_call(["ip", "link", "set", port, "down"])
472+
473+
474+def set_manager(manager):
475+ ''' Set the controller for the local openvswitch '''
476+ log('Setting manager for local ovs to {}'.format(manager))
477+ subprocess.check_call(['ovs-vsctl', 'set-manager',
478+ 'ssl:{}'.format(manager)])
479+
480+
481+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
482+
483+
484+def get_certificate():
485+ ''' Read openvswitch certificate from disk '''
486+ if os.path.exists(CERT_PATH):
487+ log('Reading ovs certificate from {}'.format(CERT_PATH))
488+ with open(CERT_PATH, 'r') as cert:
489+ full_cert = cert.read()
490+ begin_marker = "-----BEGIN CERTIFICATE-----"
491+ end_marker = "-----END CERTIFICATE-----"
492+ begin_index = full_cert.find(begin_marker)
493+ end_index = full_cert.rfind(end_marker)
494+ if end_index == -1 or begin_index == -1:
495+ raise RuntimeError("Certificate does not contain valid begin"
496+ " and end markers.")
497+ full_cert = full_cert[begin_index:(end_index + len(end_marker))]
498+ return full_cert
499+ else:
500+ log('Certificate not found', level=WARNING)
501+ return None
502+
503+
504+def full_restart():
505+ ''' Full restart and reload of openvswitch '''
506+ service('force-reload-kmod', 'openvswitch-switch')
507
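Review note: `get_certificate` trims the OVS PEM file down to the bytes between the BEGIN/END markers and raises if either marker is absent. The slicing can be checked in isolation (a standalone sketch of that trimming step, not the charm-helpers function itself):

```python
BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"


def trim_cert(full_cert):
    """Return the substring from the BEGIN marker up to and including
    the END marker, raising if either marker is missing."""
    begin_index = full_cert.find(BEGIN)
    end_index = full_cert.rfind(END)
    if begin_index == -1 or end_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(END)]


pem = "leading junk\n" + BEGIN + "\nMIIB...\n" + END + "\ntrailer\n"
assert trim_cert(pem) == BEGIN + "\nMIIB...\n" + END
```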
508=== added directory 'hooks/charmhelpers/contrib/openstack'
509=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
510=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
511--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
512+++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 01:35:59 +0000
513@@ -0,0 +1,522 @@
514+import json
515+import os
516+
517+from base64 import b64decode
518+
519+from subprocess import (
520+ check_call
521+)
522+
523+
524+from charmhelpers.fetch import (
525+ apt_install,
526+ filter_installed_packages,
527+)
528+
529+from charmhelpers.core.hookenv import (
530+ config,
531+ local_unit,
532+ log,
533+ relation_get,
534+ relation_ids,
535+ related_units,
536+ unit_get,
537+ unit_private_ip,
538+ ERROR,
539+ WARNING,
540+)
541+
542+from charmhelpers.contrib.hahelpers.cluster import (
543+ determine_api_port,
544+ determine_haproxy_port,
545+ https,
546+ is_clustered,
547+ peer_units,
548+)
549+
550+from charmhelpers.contrib.hahelpers.apache import (
551+ get_cert,
552+ get_ca_cert,
553+)
554+
555+from charmhelpers.contrib.openstack.neutron import (
556+ neutron_plugin_attribute,
557+)
558+
559+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
560+
561+
562+class OSContextError(Exception):
563+ pass
564+
565+
566+def ensure_packages(packages):
567+ '''Install but do not upgrade required plugin packages'''
568+ required = filter_installed_packages(packages)
569+ if required:
570+ apt_install(required, fatal=True)
571+
572+
573+def context_complete(ctxt):
574+ _missing = []
575+ for k, v in ctxt.iteritems():
576+ if v is None or v == '':
577+ _missing.append(k)
578+ if _missing:
579+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
580+ return False
581+ return True
582+
583+
584+class OSContextGenerator(object):
585+ interfaces = []
586+
587+ def __call__(self):
588+ raise NotImplementedError
589+
590+
591+class SharedDBContext(OSContextGenerator):
592+ interfaces = ['shared-db']
593+
594+ def __init__(self, database=None, user=None, relation_prefix=None):
595+ '''
596+ Allows inspecting relation for settings prefixed with relation_prefix.
597+ This is useful for parsing access for multiple databases returned via
598+ the shared-db interface (eg, nova_password, quantum_password)
599+ '''
600+ self.relation_prefix = relation_prefix
601+ self.database = database
602+ self.user = user
603+
604+ def __call__(self):
605+ self.database = self.database or config('database')
606+ self.user = self.user or config('database-user')
607+ if None in [self.database, self.user]:
608+ log('Could not generate shared_db context. '
609+ 'Missing required charm config options. '
610+ '(database name and user)')
611+ raise OSContextError
612+ ctxt = {}
613+
614+ password_setting = 'password'
615+ if self.relation_prefix:
616+ password_setting = self.relation_prefix + '_password'
617+
618+ for rid in relation_ids('shared-db'):
619+ for unit in related_units(rid):
620+ passwd = relation_get(password_setting, rid=rid, unit=unit)
621+ ctxt = {
622+ 'database_host': relation_get('db_host', rid=rid,
623+ unit=unit),
624+ 'database': self.database,
625+ 'database_user': self.user,
626+ 'database_password': passwd,
627+ }
628+ if context_complete(ctxt):
629+ return ctxt
630+ return {}
631+
632+
633+class IdentityServiceContext(OSContextGenerator):
634+ interfaces = ['identity-service']
635+
636+ def __call__(self):
637+ log('Generating template context for identity-service')
638+ ctxt = {}
639+
640+ for rid in relation_ids('identity-service'):
641+ for unit in related_units(rid):
642+ ctxt = {
643+ 'service_port': relation_get('service_port', rid=rid,
644+ unit=unit),
645+ 'service_host': relation_get('service_host', rid=rid,
646+ unit=unit),
647+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
648+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
649+ 'admin_tenant_name': relation_get('service_tenant',
650+ rid=rid, unit=unit),
651+ 'admin_user': relation_get('service_username', rid=rid,
652+ unit=unit),
653+ 'admin_password': relation_get('service_password', rid=rid,
654+ unit=unit),
655+ # XXX: Hard-coded http.
656+ 'service_protocol': 'http',
657+ 'auth_protocol': 'http',
658+ }
659+ if context_complete(ctxt):
660+ return ctxt
661+ return {}
662+
663+
664+class AMQPContext(OSContextGenerator):
665+ interfaces = ['amqp']
666+
667+ def __call__(self):
668+ log('Generating template context for amqp')
669+ conf = config()
670+ try:
671+ username = conf['rabbit-user']
672+ vhost = conf['rabbit-vhost']
673+ except KeyError as e:
674+ log('Could not generate shared_db context. '
675+ 'Missing required charm config options: %s.' % e)
676+ raise OSContextError
677+
678+ ctxt = {}
679+ for rid in relation_ids('amqp'):
680+ for unit in related_units(rid):
681+ if relation_get('clustered', rid=rid, unit=unit):
682+ ctxt['clustered'] = True
683+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
684+ unit=unit)
685+ else:
686+ ctxt['rabbitmq_host'] = relation_get('private-address',
687+ rid=rid, unit=unit)
688+ ctxt.update({
689+ 'rabbitmq_user': username,
690+ 'rabbitmq_password': relation_get('password', rid=rid,
691+ unit=unit),
692+ 'rabbitmq_virtual_host': vhost,
693+ })
694+ if context_complete(ctxt):
695+ # Sufficient information found = break out!
696+ break
697+ # Used for active/active rabbitmq >= grizzly
698+ ctxt['rabbitmq_hosts'] = []
699+ for unit in related_units(rid):
700+ ctxt['rabbitmq_hosts'].append(relation_get('private-address',
701+ rid=rid, unit=unit))
702+ if not context_complete(ctxt):
703+ return {}
704+ else:
705+ return ctxt
706+
707+
708+class CephContext(OSContextGenerator):
709+ interfaces = ['ceph']
710+
711+ def __call__(self):
712+ '''This generates context for /etc/ceph/ceph.conf templates'''
713+ if not relation_ids('ceph'):
714+ return {}
715+ log('Generating template context for ceph')
716+ mon_hosts = []
717+ auth = None
718+ key = None
719+ for rid in relation_ids('ceph'):
720+ for unit in related_units(rid):
721+ mon_hosts.append(relation_get('private-address', rid=rid,
722+ unit=unit))
723+ auth = relation_get('auth', rid=rid, unit=unit)
724+ key = relation_get('key', rid=rid, unit=unit)
725+
726+ ctxt = {
727+ 'mon_hosts': ' '.join(mon_hosts),
728+ 'auth': auth,
729+ 'key': key,
730+ }
731+
732+ if not os.path.isdir('/etc/ceph'):
733+ os.mkdir('/etc/ceph')
734+
735+ if not context_complete(ctxt):
736+ return {}
737+
738+ ensure_packages(['ceph-common'])
739+
740+ return ctxt
741+
742+
743+class HAProxyContext(OSContextGenerator):
744+ interfaces = ['cluster']
745+
746+ def __call__(self):
747+ '''
748+ Builds half a context for the haproxy template, which describes
749+ all peers to be included in the cluster. Each charm needs to include
750+ its own context generator that describes the port mapping.
751+ '''
752+ if not relation_ids('cluster'):
753+ return {}
754+
755+ cluster_hosts = {}
756+ l_unit = local_unit().replace('/', '-')
757+ cluster_hosts[l_unit] = unit_get('private-address')
758+
759+ for rid in relation_ids('cluster'):
760+ for unit in related_units(rid):
761+ _unit = unit.replace('/', '-')
762+ addr = relation_get('private-address', rid=rid, unit=unit)
763+ cluster_hosts[_unit] = addr
764+
765+ ctxt = {
766+ 'units': cluster_hosts,
767+ }
768+ if len(cluster_hosts.keys()) > 1:
769+ # Enable haproxy when we have enough peers.
770+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
771+ with open('/etc/default/haproxy', 'w') as out:
772+ out.write('ENABLED=1\n')
773+ return ctxt
774+ log('HAProxy context is incomplete, this unit has no peers.')
775+ return {}
776+
777+
778+class ImageServiceContext(OSContextGenerator):
779+ interfaces = ['image-service']
780+
781+ def __call__(self):
782+ '''
783+ Obtains the glance API server from the image-service relation. Useful
784+ in nova and cinder (currently).
785+ '''
786+ log('Generating template context for image-service.')
787+ rids = relation_ids('image-service')
788+ if not rids:
789+ return {}
790+ for rid in rids:
791+ for unit in related_units(rid):
792+ api_server = relation_get('glance-api-server',
793+ rid=rid, unit=unit)
794+ if api_server:
795+ return {'glance_api_servers': api_server}
796+ log('ImageService context is incomplete. '
797+ 'Missing required relation data.')
798+ return {}
799+
800+
801+class ApacheSSLContext(OSContextGenerator):
802+ """
803+ Generates a context for an apache vhost configuration that configures
804+ HTTPS reverse proxying for one or many endpoints. Generated context
805+ looks something like:
806+ {
807+ 'namespace': 'cinder',
808+ 'private_address': 'iscsi.mycinderhost.com',
809+ 'endpoints': [(8776, 8766), (8777, 8767)]
810+ }
811+
812+ The endpoints list consists of a tuples mapping external ports
813+ to internal ports.
814+ """
815+ interfaces = ['https']
816+
817+ # charms should inherit this context and set external ports
818+ # and service namespace accordingly.
819+ external_ports = []
820+ service_namespace = None
821+
822+ def enable_modules(self):
823+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
824+ check_call(cmd)
825+
826+ def configure_cert(self):
827+ if not os.path.isdir('/etc/apache2/ssl'):
828+ os.mkdir('/etc/apache2/ssl')
829+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
830+ if not os.path.isdir(ssl_dir):
831+ os.mkdir(ssl_dir)
832+ cert, key = get_cert()
833+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
834+ cert_out.write(b64decode(cert))
835+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
836+ key_out.write(b64decode(key))
837+ ca_cert = get_ca_cert()
838+ if ca_cert:
839+ with open(CA_CERT_PATH, 'w') as ca_out:
840+ ca_out.write(b64decode(ca_cert))
841+ check_call(['update-ca-certificates'])
842+
843+ def __call__(self):
844+ if isinstance(self.external_ports, basestring):
845+ self.external_ports = [self.external_ports]
846+ if (not self.external_ports or not https()):
847+ return {}
848+
849+ self.configure_cert()
850+ self.enable_modules()
851+
852+ ctxt = {
853+ 'namespace': self.service_namespace,
854+ 'private_address': unit_get('private-address'),
855+ 'endpoints': []
856+ }
857+ for ext_port in self.external_ports:
858+ if peer_units() or is_clustered():
859+ int_port = determine_haproxy_port(ext_port)
860+ else:
861+ int_port = determine_api_port(ext_port)
862+ portmap = (int(ext_port), int(int_port))
863+ ctxt['endpoints'].append(portmap)
864+ return ctxt
865+
866+
867+class NeutronContext(object):
868+ interfaces = []
869+
870+ @property
871+ def plugin(self):
872+ return None
873+
874+ @property
875+ def network_manager(self):
876+ return None
877+
878+ @property
879+ def packages(self):
880+ return neutron_plugin_attribute(
881+ self.plugin, 'packages', self.network_manager)
882+
883+ @property
884+ def neutron_security_groups(self):
885+ return None
886+
887+ def _ensure_packages(self):
888+ [ensure_packages(pkgs) for pkgs in self.packages]
889+
890+ def _save_flag_file(self):
891+ if self.network_manager == 'quantum':
892+ _file = '/etc/nova/quantum_plugin.conf'
893+ else:
894+ _file = '/etc/nova/neutron_plugin.conf'
895+ with open(_file, 'wb') as out:
896+ out.write(self.plugin + '\n')
897+
898+ def ovs_ctxt(self):
899+ driver = neutron_plugin_attribute(self.plugin, 'driver',
900+ self.network_manager)
901+
902+ ovs_ctxt = {
903+ 'core_plugin': driver,
904+ 'neutron_plugin': 'ovs',
905+ 'neutron_security_groups': self.neutron_security_groups,
906+ 'local_ip': unit_private_ip(),
907+ }
908+
909+ return ovs_ctxt
910+
911+ def __call__(self):
912+ self._ensure_packages()
913+
914+ if self.network_manager not in ['quantum', 'neutron']:
915+ return {}
916+
917+ if not self.plugin:
918+ return {}
919+
920+ ctxt = {'network_manager': self.network_manager}
921+
922+ if self.plugin == 'ovs':
923+ ctxt.update(self.ovs_ctxt())
924+
925+ self._save_flag_file()
926+ return ctxt
927+
928+
929+class OSConfigFlagContext(OSContextGenerator):
930+ '''
931+ Responsible adding user-defined config-flags in charm config to a
932+ to a template context.
933+ '''
934+ def __call__(self):
935+ config_flags = config('config-flags')
936+ if not config_flags or config_flags in ['None', '']:
937+ return {}
938+ config_flags = config_flags.split(',')
939+ flags = {}
940+ for flag in config_flags:
941+ if '=' not in flag:
942+ log('Improperly formatted config-flag, expected k=v '
943+ 'got %s' % flag, level=WARNING)
944+ continue
945+ k, v = flag.split('=')
946+ flags[k.strip()] = v
947+ ctxt = {'user_config_flags': flags}
948+ return ctxt
949+
950+
951+class SubordinateConfigContext(OSContextGenerator):
952+ """
953+ Responsible for inspecting relations to subordinates that
954+ may be exporting required config via a json blob.
955+
956+ The subordinate interface allows subordinates to export their
957+ configuration requirements to the principle for multiple config
958+ files and multiple serivces. Ie, a subordinate that has interfaces
959+ to both glance and nova may export to following yaml blob as json:
960+
961+ glance:
962+ /etc/glance/glance-api.conf:
963+ sections:
964+ DEFAULT:
965+ - [key1, value1]
966+ /etc/glance/glance-registry.conf:
967+ MYSECTION:
968+ - [key2, value2]
969+ nova:
970+ /etc/nova/nova.conf:
971+ sections:
972+ DEFAULT:
973+ - [key3, value3]
974+
975+
976+ It is then up to the principal charms to subscribe this context to
977+ the service+config file it is interested in. Configuration data will
978+ be available in the template context, in glance's case, as:
979+ ctxt = {
980+ ... other context ...
981+ 'subordinate_config': {
982+ 'DEFAULT': {
983+ 'key1': 'value1',
984+ },
985+ 'MYSECTION': {
986+ 'key2': 'value2',
987+ },
988+ }
989+ }
990+
991+ """
992+ def __init__(self, service, config_file, interface):
993+ """
994+ :param service : Service name key to query in any subordinate
995+ data found
996+ :param config_file : Service's config file to query sections
997+ :param interface : Subordinate interface to inspect
998+ """
999+ self.service = service
1000+ self.config_file = config_file
1001+ self.interface = interface
1002+
1003+ def __call__(self):
1004+ ctxt = {}
1005+ for rid in relation_ids(self.interface):
1006+ for unit in related_units(rid):
1007+ sub_config = relation_get('subordinate_configuration',
1008+ rid=rid, unit=unit)
1009+ if sub_config and sub_config != '':
1010+ try:
1011+ sub_config = json.loads(sub_config)
1012+ except ValueError:
1013+ log('Could not parse JSON from subordinate_config '
1014+ 'setting from %s' % rid, level=ERROR)
1015+ continue
1016+
1017+ if self.service not in sub_config:
1018+ log('Found subordinate_config on %s but it contained '
1019+ 'nothing for %s service' % (rid, self.service))
1020+ continue
1021+
1022+ sub_config = sub_config[self.service]
1023+ if self.config_file not in sub_config:
1024+ log('Found subordinate_config on %s but it contained '
1025+ 'nothing for %s' % (rid, self.config_file))
1026+ continue
1027+
1028+ sub_config = sub_config[self.config_file]
1029+ for k, v in sub_config.iteritems():
1030+ ctxt[k] = v
1031+
1032+ if not ctxt:
1033+ ctxt['sections'] = {}
1034+
1035+ return ctxt
1036
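The lookup chain in `SubordinateConfigContext.__call__` (service key, then config-file key, then sections) can be sketched standalone; `extract_sub_config` is a hypothetical helper standing in for the relation traversal, fed the glance example from the docstring:

```python
import json

def extract_sub_config(blob, service, config_file):
    # Mimic SubordinateConfigContext.__call__ for one relation setting.
    data = json.loads(blob)
    if service not in data or config_file not in data[service]:
        return {'sections': {}}  # the context falls back to empty sections
    return data[service][config_file]

# A subordinate exporting the glance example from the docstring above.
blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]}
        }
    }
})
ctxt = extract_sub_config(blob, 'glance', '/etc/glance/glance-api.conf')
```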
1037=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1038--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
1039+++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 01:35:59 +0000
1040@@ -0,0 +1,117 @@
1041+# Various utilities for dealing with Neutron and the renaming from Quantum.
1042+
1043+from subprocess import check_output
1044+
1045+from charmhelpers.core.hookenv import (
1046+ config,
1047+ log,
1048+ ERROR,
1049+)
1050+
1051+from charmhelpers.contrib.openstack.utils import os_release
1052+
1053+
1054+def headers_package():
1055+ """Ensures correct linux-headers for running kernel are installed,
1056+ for building DKMS package"""
1057+ kver = check_output(['uname', '-r']).strip()
1058+ return 'linux-headers-%s' % kver
1059+
1060+
1061+# legacy
1062+def quantum_plugins():
1063+ from charmhelpers.contrib.openstack import context
1064+ return {
1065+ 'ovs': {
1066+ 'config': '/etc/quantum/plugins/openvswitch/'
1067+ 'ovs_quantum_plugin.ini',
1068+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
1069+ 'OVSQuantumPluginV2',
1070+ 'contexts': [
1071+ context.SharedDBContext(user=config('neutron-database-user'),
1072+ database=config('neutron-database'),
1073+ relation_prefix='neutron')],
1074+ 'services': ['quantum-plugin-openvswitch-agent'],
1075+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
1076+ ['quantum-plugin-openvswitch-agent']],
1077+ },
1078+ 'nvp': {
1079+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
1080+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
1081+ 'QuantumPlugin.NvpPluginV2',
1082+ 'services': [],
1083+ 'packages': [],
1084+ }
1085+ }
1086+
1087+
1088+def neutron_plugins():
1089+ from charmhelpers.contrib.openstack import context
1090+ return {
1091+ 'ovs': {
1092+ 'config': '/etc/neutron/plugins/openvswitch/'
1093+ 'ovs_neutron_plugin.ini',
1094+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
1095+ 'OVSNeutronPluginV2',
1096+ 'contexts': [
1097+ context.SharedDBContext(user=config('neutron-database-user'),
1098+ database=config('neutron-database'),
1099+ relation_prefix='neutron')],
1100+ 'services': ['neutron-plugin-openvswitch-agent'],
1101+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
1102+ ['neutron-plugin-openvswitch-agent']],
1103+ },
1104+ 'nvp': {
1105+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
1106+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
1107+ 'NeutronPlugin.NvpPluginV2',
1108+ 'services': [],
1109+ 'packages': [],
1110+ }
1111+ }
1112+
1113+
1114+def neutron_plugin_attribute(plugin, attr, net_manager=None):
1115+ manager = net_manager or network_manager()
1116+ if manager == 'quantum':
1117+ plugins = quantum_plugins()
1118+ elif manager == 'neutron':
1119+ plugins = neutron_plugins()
1120+ else:
1121+ log('Error: Network manager does not support plugins.')
1122+ raise Exception
1123+
1124+ try:
1125+ _plugin = plugins[plugin]
1126+ except KeyError:
1127+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
1128+ raise Exception
1129+
1130+ try:
1131+ return _plugin[attr]
1132+ except KeyError:
1133+ return None
1134+
1135+
1136+def network_manager():
1137+ '''
1138+ Deals with the renaming of Quantum to Neutron in H and any situations
1139+ that require compatibility (e.g., deploying H with network-manager=quantum,
1140+ upgrading from G).
1141+ '''
1142+ release = os_release('nova-common')
1143+ manager = config('network-manager').lower()
1144+
1145+ if manager not in ['quantum', 'neutron']:
1146+ return manager
1147+
1148+ if release in ['essex']:
1149+ # E does not support neutron
1150+ log('Neutron networking not supported in Essex.', level=ERROR)
1151+ raise Exception
1152+ elif release in ['folsom', 'grizzly']:
1153+ # neutron is named quantum in F and G
1154+ return 'quantum'
1155+ else:
1156+ # ensure accurate naming for all releases post-H
1157+ return 'neutron'
1158
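The release/config matrix handled by `network_manager()` above is worth pinning down; a standalone sketch where `resolve_network_manager` is a hypothetical rename, with `release` replacing the `os_release('nova-common')` lookup and `configured` the charm config value:

```python
def resolve_network_manager(release, configured):
    # Standalone sketch of network_manager() from neutron.py.
    configured = configured.lower()
    if configured not in ['quantum', 'neutron']:
        return configured          # e.g. flatdhcpmanager passes through
    if release == 'essex':
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ['folsom', 'grizzly']:
        return 'quantum'           # neutron is named quantum in F and G
    return 'neutron'               # accurate naming for Havana onward
```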
1159=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
1160=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
1161--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
1162+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 01:35:59 +0000
1163@@ -0,0 +1,2 @@
1164+# dummy __init__.py to fool syncer into thinking this is a syncable python
1165+# module
1166
1167=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
1168--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
1169+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 01:35:59 +0000
1170@@ -0,0 +1,280 @@
1171+import os
1172+
1173+from charmhelpers.fetch import apt_install
1174+
1175+from charmhelpers.core.hookenv import (
1176+ log,
1177+ ERROR,
1178+ INFO
1179+)
1180+
1181+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1182+
1183+try:
1184+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
1185+except ImportError:
1186+ # python-jinja2 may not be installed yet, or we're running unittests.
1187+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
1188+
1189+
1190+class OSConfigException(Exception):
1191+ pass
1192+
1193+
1194+def get_loader(templates_dir, os_release):
1195+ """
1196+ Create a jinja2.ChoiceLoader containing template dirs up to
1197+ and including os_release. If a release's template directory
1198+ is missing under templates_dir, it will be omitted from the loader.
1199+ templates_dir is added to the bottom of the search list as a base
1200+ loading dir.
1201+
1202+ A charm may also ship a templates dir with this module
1203+ and it will be appended to the bottom of the search list, eg:
1204+ hooks/charmhelpers/contrib/openstack/templates.
1205+
1206+ :param templates_dir: str: Base template directory containing release
1207+ sub-directories.
1208+ :param os_release : str: OpenStack release codename to construct template
1209+ loader.
1210+
1211+ :returns : jinja2.ChoiceLoader constructed with a list of
1212+ jinja2.FilesystemLoaders, ordered in descending
1213+ order by OpenStack release.
1214+ """
1215+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1216+ for rel in OPENSTACK_CODENAMES.itervalues()]
1217+
1218+ if not os.path.isdir(templates_dir):
1219+ log('Templates directory not found @ %s.' % templates_dir,
1220+ level=ERROR)
1221+ raise OSConfigException
1222+
1223+ # the bottom contains templates_dir and possibly a common templates dir
1224+ # shipped with the helper.
1225+ loaders = [FileSystemLoader(templates_dir)]
1226+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1227+ if os.path.isdir(helper_templates):
1228+ loaders.append(FileSystemLoader(helper_templates))
1229+
1230+ for rel, tmpl_dir in tmpl_dirs:
1231+ if os.path.isdir(tmpl_dir):
1232+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1233+ if rel == os_release:
1234+ break
1235+ log('Creating choice loader with dirs: %s' %
1236+ [l.searchpath for l in loaders], level=INFO)
1237+ return ChoiceLoader(loaders)
1238+
1239+
1240+class OSConfigTemplate(object):
1241+ """
1242+ Associates a config file template with a list of context generators.
1243+ Responsible for constructing a template context based on those generators.
1244+ """
1245+ def __init__(self, config_file, contexts):
1246+ self.config_file = config_file
1247+
1248+ if hasattr(contexts, '__call__'):
1249+ self.contexts = [contexts]
1250+ else:
1251+ self.contexts = contexts
1252+
1253+ self._complete_contexts = []
1254+
1255+ def context(self):
1256+ ctxt = {}
1257+ for context in self.contexts:
1258+ _ctxt = context()
1259+ if _ctxt:
1260+ ctxt.update(_ctxt)
1261+ # track interfaces for every complete context.
1262+ [self._complete_contexts.append(interface)
1263+ for interface in context.interfaces
1264+ if interface not in self._complete_contexts]
1265+ return ctxt
1266+
1267+ def complete_contexts(self):
1268+ '''
1269+ Return a list of interfaces that have satisfied contexts.
1270+ '''
1271+ if self._complete_contexts:
1272+ return self._complete_contexts
1273+ self.context()
1274+ return self._complete_contexts
1275+
1276+
1277+class OSConfigRenderer(object):
1278+ """
1279+ This class provides a common templating system to be used by OpenStack
1280+ charms. It is intended to help charms share common code and templates,
1281+ and ease the burden of managing config templates across multiple OpenStack
1282+ releases.
1283+
1284+ Basic usage:
1285+ # import some common context generators from charmhelpers
1286+ from charmhelpers.contrib.openstack import context
1287+
1288+ # Create a renderer object for a specific OS release.
1289+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1290+ openstack_release='folsom')
1291+ # register some config files with context generators.
1292+ configs.register(config_file='/etc/nova/nova.conf',
1293+ contexts=[context.SharedDBContext(),
1294+ context.AMQPContext()])
1295+ configs.register(config_file='/etc/nova/api-paste.ini',
1296+ contexts=[context.IdentityServiceContext()])
1297+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1298+ contexts=[context.HAProxyContext()])
1299+ # write out a single config
1300+ configs.write('/etc/nova/nova.conf')
1301+ # write out all registered configs
1302+ configs.write_all()
1303+
1304+ Details:
1305+
1306+ OpenStack Releases and template loading
1307+ ---------------------------------------
1308+ When the object is instantiated, it is associated with a specific OS
1309+ release. This dictates how the template loader will be constructed.
1310+
1311+ The constructed loader attempts to load the template from several places
1312+ in the following order:
1313+ - from the most recent OS release-specific template dir (if one exists)
1314+ - the base templates_dir
1315+ - a template directory shipped in the charm with this helper file.
1316+
1317+
1318+ For the example above, '/tmp/templates' contains the following structure:
1319+ /tmp/templates/nova.conf
1320+ /tmp/templates/api-paste.ini
1321+ /tmp/templates/grizzly/api-paste.ini
1322+ /tmp/templates/havana/api-paste.ini
1323+
1324+ Since it was registered with the grizzly release, it first searches
1325+ the grizzly directory for nova.conf, then the templates dir.
1326+
1327+ When writing api-paste.ini, it will find the template in the grizzly
1328+ directory.
1329+
1330+ If the object were created with folsom, it would fall back to the
1331+ base templates dir for its api-paste.ini template.
1332+
1333+ This system should help manage changes in config files through
1334+ openstack releases, allowing charms to fall back to the most recently
1335+ updated config template for a given release.
1336+
1337+ The haproxy.conf, since it is not shipped in the templates dir, will
1338+ be loaded from the module directory's template directory, eg
1339+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1340+ us to ship common templates (haproxy, apache) with the helpers.
1341+
1342+ Context generators
1343+ ---------------------------------------
1344+ Context generators are used to generate template contexts during hook
1345+ execution. Doing so may require inspecting service relations, charm
1346+ config, etc. When registered, a config file is associated with a list
1347+ of generators. When a template is rendered and written, all context
1348+ generators are called in a chain to generate the context dictionary
1349+ passed to the jinja2 template. See context.py for more info.
1350+ """
1351+ def __init__(self, templates_dir, openstack_release):
1352+ if not os.path.isdir(templates_dir):
1353+ log('Could not locate templates dir %s' % templates_dir,
1354+ level=ERROR)
1355+ raise OSConfigException
1356+
1357+ self.templates_dir = templates_dir
1358+ self.openstack_release = openstack_release
1359+ self.templates = {}
1360+ self._tmpl_env = None
1361+
1362+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1363+ # if this code is running, the object is created pre-install hook.
1364+ # jinja2 shouldn't get touched until the module is reloaded on next
1365+ # hook execution, with proper jinja2 bits successfully imported.
1366+ apt_install('python-jinja2')
1367+
1368+ def register(self, config_file, contexts):
1369+ """
1370+ Register a config file with a list of context generators to be called
1371+ during rendering.
1372+ """
1373+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1374+ contexts=contexts)
1375+ log('Registered config file: %s' % config_file, level=INFO)
1376+
1377+ def _get_tmpl_env(self):
1378+ if not self._tmpl_env:
1379+ loader = get_loader(self.templates_dir, self.openstack_release)
1380+ self._tmpl_env = Environment(loader=loader)
1381+
1382+ def _get_template(self, template):
1383+ self._get_tmpl_env()
1384+ template = self._tmpl_env.get_template(template)
1385+ log('Loaded template from %s' % template.filename, level=INFO)
1386+ return template
1387+
1388+ def render(self, config_file):
1389+ if config_file not in self.templates:
1390+ log('Config not registered: %s' % config_file, level=ERROR)
1391+ raise OSConfigException
1392+ ctxt = self.templates[config_file].context()
1393+
1394+ _tmpl = os.path.basename(config_file)
1395+ try:
1396+ template = self._get_template(_tmpl)
1397+ except exceptions.TemplateNotFound:
1398+ # if no template is found with basename, try looking for it
1399+ # using a munged full path, eg:
1400+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
1401+ _tmpl = '_'.join(config_file.split('/')[1:])
1402+ try:
1403+ template = self._get_template(_tmpl)
1404+ except exceptions.TemplateNotFound as e:
1405+ log('Could not load template from %s as %s or %s.' %
1406+ (self.templates_dir, os.path.basename(config_file), _tmpl),
1407+ level=ERROR)
1408+ raise e
1409+
1410+ log('Rendering from template: %s' % _tmpl, level=INFO)
1411+ return template.render(ctxt)
1412+
1413+ def write(self, config_file):
1414+ """
1415+ Write a single config file, raises if config file is not registered.
1416+ """
1417+ if config_file not in self.templates:
1418+ log('Config not registered: %s' % config_file, level=ERROR)
1419+ raise OSConfigException
1420+
1421+ _out = self.render(config_file)
1422+
1423+ with open(config_file, 'wb') as out:
1424+ out.write(_out)
1425+
1426+ log('Wrote template %s.' % config_file, level=INFO)
1427+
1428+ def write_all(self):
1429+ """
1430+ Write out all registered config files.
1431+ """
1432+ [self.write(k) for k in self.templates.iterkeys()]
1433+
1434+ def set_release(self, openstack_release):
1435+ """
1436+ Resets the template environment and generates a new template loader
1437+ based on the new openstack release.
1438+ """
1439+ self._tmpl_env = None
1440+ self.openstack_release = openstack_release
1441+ self._get_tmpl_env()
1442+
1443+ def complete_contexts(self):
1444+ '''
1445+ Returns a list of context interfaces that yield a complete context.
1446+ '''
1447+ interfaces = []
1448+ [interfaces.extend(i.complete_contexts())
1449+ for i in self.templates.itervalues()]
1450+ return interfaces
1451
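The search ordering built by `get_loader()` (newest applicable release dir first, base `templates_dir` last) can be reproduced as a pure function; `search_order` is a hypothetical sketch where `existing` stands in for the `os.path.isdir` checks and the codename table is abridged:

```python
import os

CODENAMES = ['essex', 'folsom', 'grizzly', 'havana']  # ascending, abridged

def search_order(templates_dir, os_release, existing):
    # Reproduce get_loader()'s loaders.insert(0, ...) ordering.
    order = [templates_dir]
    for rel in CODENAMES:
        d = os.path.join(templates_dir, rel)
        if d in existing:
            order.insert(0, d)
        if rel == os_release:
            break
    return order
```

So an object registered with grizzly never consults the havana directory, matching the `OSConfigRenderer` example above.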
1452=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1453--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1454+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 01:35:59 +0000
1455@@ -0,0 +1,365 @@
1456+#!/usr/bin/python
1457+
1458+# Common python helper functions used for OpenStack charms.
1459+from collections import OrderedDict
1460+
1461+import apt_pkg as apt
1462+import subprocess
1463+import os
1464+import socket
1465+import sys
1466+
1467+from charmhelpers.core.hookenv import (
1468+ config,
1469+ log as juju_log,
1470+ charm_dir,
1471+)
1472+
1473+from charmhelpers.core.host import (
1474+ lsb_release,
1475+)
1476+
1477+from charmhelpers.fetch import (
1478+ apt_install,
1479+)
1480+
1481+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1482+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1483+
1484+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1485+ ('oneiric', 'diablo'),
1486+ ('precise', 'essex'),
1487+ ('quantal', 'folsom'),
1488+ ('raring', 'grizzly'),
1489+ ('saucy', 'havana'),
1490+])
1491+
1492+
1493+OPENSTACK_CODENAMES = OrderedDict([
1494+ ('2011.2', 'diablo'),
1495+ ('2012.1', 'essex'),
1496+ ('2012.2', 'folsom'),
1497+ ('2013.1', 'grizzly'),
1498+ ('2013.2', 'havana'),
1499+ ('2014.1', 'icehouse'),
1500+])
1501+
1502+# The ugly duckling
1503+SWIFT_CODENAMES = OrderedDict([
1504+ ('1.4.3', 'diablo'),
1505+ ('1.4.8', 'essex'),
1506+ ('1.7.4', 'folsom'),
1507+ ('1.8.0', 'grizzly'),
1508+ ('1.7.7', 'grizzly'),
1509+ ('1.7.6', 'grizzly'),
1510+ ('1.10.0', 'havana'),
1511+ ('1.9.1', 'havana'),
1512+ ('1.9.0', 'havana'),
1513+])
1514+
1515+
1516+def error_out(msg):
1517+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1518+ sys.exit(1)
1519+
1520+
1521+def get_os_codename_install_source(src):
1522+ '''Derive OpenStack release codename from a given installation source.'''
1523+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1524+ rel = ''
1525+ if src == 'distro':
1526+ try:
1527+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1528+ except KeyError:
1529+ e = 'Could not derive openstack release for '\
1530+ 'this Ubuntu release: %s' % ubuntu_rel
1531+ error_out(e)
1532+ return rel
1533+
1534+ if src.startswith('cloud:'):
1535+ ca_rel = src.split(':')[1]
1536+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1537+ return ca_rel
1538+
1539+ # Best guess match based on deb string provided
1540+ if src.startswith('deb') or src.startswith('ppa'):
1541+ for k, v in OPENSTACK_CODENAMES.iteritems():
1542+ if v in src:
1543+ return v
1544+
1545+
1546+def get_os_version_install_source(src):
1547+ codename = get_os_codename_install_source(src)
1548+ return get_os_version_codename(codename)
1549+
1550+
1551+def get_os_codename_version(vers):
1552+ '''Determine OpenStack codename from version number.'''
1553+ try:
1554+ return OPENSTACK_CODENAMES[vers]
1555+ except KeyError:
1556+ e = 'Could not determine OpenStack codename for version %s' % vers
1557+ error_out(e)
1558+
1559+
1560+def get_os_version_codename(codename):
1561+ '''Determine OpenStack version number from codename.'''
1562+ for k, v in OPENSTACK_CODENAMES.iteritems():
1563+ if v == codename:
1564+ return k
1565+ e = 'Could not derive OpenStack version for '\
1566+ 'codename: %s' % codename
1567+ error_out(e)
1568+
1569+
1570+def get_os_codename_package(package, fatal=True):
1571+ '''Derive OpenStack release codename from an installed package.'''
1572+ apt.init()
1573+ cache = apt.Cache()
1574+
1575+ try:
1576+ pkg = cache[package]
1577+ except:
1578+ if not fatal:
1579+ return None
1580+ # the package is unknown to the current apt cache.
1581+ e = 'Could not determine version of package with no installation '\
1582+ 'candidate: %s' % package
1583+ error_out(e)
1584+
1585+ if not pkg.current_ver:
1586+ if not fatal:
1587+ return None
1588+ # package is known, but no version is currently installed.
1589+ e = 'Could not determine version of uninstalled package: %s' % package
1590+ error_out(e)
1591+
1592+ vers = apt.upstream_version(pkg.current_ver.ver_str)
1593+
1594+ try:
1595+ if 'swift' in pkg.name:
1596+ swift_vers = vers[:5]
1597+ if swift_vers not in SWIFT_CODENAMES:
1598+ # Deal with 1.10.0 upward
1599+ swift_vers = vers[:6]
1600+ return SWIFT_CODENAMES[swift_vers]
1601+ else:
1602+ vers = vers[:6]
1603+ return OPENSTACK_CODENAMES[vers]
1604+ except KeyError:
1605+ e = 'Could not determine OpenStack codename for version %s' % vers
1606+ error_out(e)
1607+
1608+
1609+def get_os_version_package(pkg, fatal=True):
1610+ '''Derive OpenStack version number from an installed package.'''
1611+ codename = get_os_codename_package(pkg, fatal=fatal)
1612+
1613+ if not codename:
1614+ return None
1615+
1616+ if 'swift' in pkg:
1617+ vers_map = SWIFT_CODENAMES
1618+ else:
1619+ vers_map = OPENSTACK_CODENAMES
1620+
1621+ for version, cname in vers_map.iteritems():
1622+ if cname == codename:
1623+ return version
1624+ #e = "Could not determine OpenStack version for package: %s" % pkg
1625+ #error_out(e)
1626+
1627+
1628+os_rel = None
1629+
1630+
1631+def os_release(package, base='essex'):
1632+ '''
1633+ Returns OpenStack release codename from a cached global.
1634+ If the codename can not be determined from either an installed package or
1635+ the installation source, the earliest release supported by the charm should
1636+ be returned.
1637+ '''
1638+ global os_rel
1639+ if os_rel:
1640+ return os_rel
1641+ os_rel = (get_os_codename_package(package, fatal=False) or
1642+ get_os_codename_install_source(config('openstack-origin')) or
1643+ base)
1644+ return os_rel
1645+
1646+
1647+def import_key(keyid):
1648+ cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
1649+ "--recv-keys %s" % keyid
1650+ try:
1651+ subprocess.check_call(cmd.split(' '))
1652+ except subprocess.CalledProcessError:
1653+ error_out("Error importing repo key %s" % keyid)
1654+
1655+
1656+def configure_installation_source(rel):
1657+ '''Configure apt installation source.'''
1658+ if rel == 'distro':
1659+ return
1660+ elif rel[:4] == "ppa:":
1661+ src = rel
1662+ subprocess.check_call(["add-apt-repository", "-y", src])
1663+ elif rel[:3] == "deb":
1664+ l = len(rel.split('|'))
1665+ if l == 2:
1666+ src, key = rel.split('|')
1667+ juju_log("Importing PPA key from keyserver for %s" % src)
1668+ import_key(key)
1669+ elif l == 1:
1670+ src = rel
1671+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
1672+ f.write(src)
1673+ elif rel[:6] == 'cloud:':
1674+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1675+ rel = rel.split(':')[1]
1676+ u_rel = rel.split('-')[0]
1677+ ca_rel = rel.split('-')[1]
1678+
1679+ if u_rel != ubuntu_rel:
1680+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
1681+ 'version (%s)' % (ca_rel, ubuntu_rel)
1682+ error_out(e)
1683+
1684+ if 'staging' in ca_rel:
1685+ # staging is just a regular PPA.
1686+ os_rel = ca_rel.split('/')[0]
1687+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
1688+ cmd = 'add-apt-repository -y %s' % ppa
1689+ subprocess.check_call(cmd.split(' '))
1690+ return
1691+
1692+ # map charm config options to actual archive pockets.
1693+ pockets = {
1694+ 'folsom': 'precise-updates/folsom',
1695+ 'folsom/updates': 'precise-updates/folsom',
1696+ 'folsom/proposed': 'precise-proposed/folsom',
1697+ 'grizzly': 'precise-updates/grizzly',
1698+ 'grizzly/updates': 'precise-updates/grizzly',
1699+ 'grizzly/proposed': 'precise-proposed/grizzly',
1700+ 'havana': 'precise-updates/havana',
1701+ 'havana/updates': 'precise-updates/havana',
1702+ 'havana/proposed': 'precise-proposed/havana',
1703+ }
1704+
1705+ try:
1706+ pocket = pockets[ca_rel]
1707+ except KeyError:
1708+ e = 'Invalid Cloud Archive release specified: %s' % rel
1709+ error_out(e)
1710+
1711+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
1712+ apt_install('ubuntu-cloud-keyring', fatal=True)
1713+
1714+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
1715+ f.write(src)
1716+ else:
1717+ error_out("Invalid openstack-release specified: %s" % rel)
1718+
1719+
1720+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
1721+ """
1722+ Write an rc file in the charm-delivered directory containing
1723+ exported environment variables provided by env_vars. Any charm scripts run
1724+ outside the juju hook environment can source this scriptrc to obtain
1725+ updated config information necessary to perform health checks or
1726+ service changes.
1727+ """
1728+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
1729+ if not os.path.exists(os.path.dirname(juju_rc_path)):
1730+ os.mkdir(os.path.dirname(juju_rc_path))
1731+ with open(juju_rc_path, 'wb') as rc_script:
1732+ rc_script.write(
1733+ "#!/bin/bash\n")
1734+ [rc_script.write('export %s=%s\n' % (u, p))
1735+ for u, p in env_vars.iteritems() if u != "script_path"]
1736+
1737+
1738+def openstack_upgrade_available(package):
1739+ """
1740+ Determines if an OpenStack upgrade is available from installation
1741+ source, based on version of installed package.
1742+
1743+ :param package: str: Name of installed package.
1744+
1745+ :returns: bool: : Returns True if configured installation source offers
1746+ a newer version of package.
1747+
1748+ """
1749+
1750+ src = config('openstack-origin')
1751+ cur_vers = get_os_version_package(package)
1752+ available_vers = get_os_version_install_source(src)
1753+ apt.init()
1754+ return apt.version_compare(available_vers, cur_vers) == 1
1755+
1756+
1757+def is_ip(address):
1758+ """
1759+ Returns True if address is a valid IP address.
1760+ """
1761+ try:
1762+ # Test to see if already an IPv4 address
1763+ socket.inet_aton(address)
1764+ return True
1765+ except socket.error:
1766+ return False
1767+
1768+
1769+def ns_query(address):
1770+ try:
1771+ import dns.resolver
1772+ except ImportError:
1773+ apt_install('python-dnspython')
1774+ import dns.resolver
1775+
1776+ if isinstance(address, dns.name.Name):
1777+ rtype = 'PTR'
1778+ elif isinstance(address, basestring):
1779+ rtype = 'A'
1780+
1781+ answers = dns.resolver.query(address, rtype)
1782+ if answers:
1783+ return str(answers[0])
1784+ return None
1785+
1786+
1787+def get_host_ip(hostname):
1788+ """
1789+ Resolves the IP for a given hostname, or returns
1790+ the input if it is already an IP.
1791+ """
1792+ if is_ip(hostname):
1793+ return hostname
1794+
1795+ return ns_query(hostname)
1796+
1797+
1798+def get_hostname(address):
1799+ """
1800+ Resolves hostname for given IP, or returns the input
1801+ if it is already a hostname.
1802+ """
1803+ if not is_ip(address):
1804+ return address
1805+
1806+ try:
1807+ import dns.reversename
1808+ except ImportError:
1809+ apt_install('python-dnspython')
1810+ import dns.reversename
1811+
1812+ rev = dns.reversename.from_address(address)
1813+ result = ns_query(rev)
1814+ if not result:
1815+ return None
1816+
1817+ # strip trailing .
1818+ if result.endswith('.'):
1819+ return result[:-1]
1820+ return result
1821
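The `openstack-origin` parsing in `get_os_codename_install_source()` above splits `cloud:` pockets on the Ubuntu series prefix; a standalone sketch (`codename_from_source` is hypothetical, with the distro table abridged and no apt/lsb access):

```python
def codename_from_source(src, ubuntu_rel='precise'):
    # Sketch of get_os_codename_install_source() from utils.py.
    ubuntu_openstack = {'precise': 'essex', 'quantal': 'folsom'}  # abridged
    if src == 'distro':
        return ubuntu_openstack[ubuntu_rel]
    if src.startswith('cloud:'):
        # 'cloud:precise-havana/updates' -> 'havana'
        pocket = src.split(':')[1]
        return pocket.split('%s-' % ubuntu_rel)[1].split('/')[0]
    return None
```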
1822=== added directory 'hooks/charmhelpers/core'
1823=== added file 'hooks/charmhelpers/core/__init__.py'
1824=== added file 'hooks/charmhelpers/core/hookenv.py'
1825--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
1826+++ hooks/charmhelpers/core/hookenv.py 2013-10-15 01:35:59 +0000
1827@@ -0,0 +1,340 @@
1828+"Interactions with the Juju environment"
1829+# Copyright 2013 Canonical Ltd.
1830+#
1831+# Authors:
1832+# Charm Helpers Developers <juju@lists.ubuntu.com>
1833+
1834+import os
1835+import json
1836+import yaml
1837+import subprocess
1838+import UserDict
1839+
1840+CRITICAL = "CRITICAL"
1841+ERROR = "ERROR"
1842+WARNING = "WARNING"
1843+INFO = "INFO"
1844+DEBUG = "DEBUG"
1845+MARKER = object()
1846+
1847+cache = {}
1848+
1849+
1850+def cached(func):
1851+ ''' Cache return values for multiple executions of func + args
1852+
1853+ For example:
1854+
1855+ @cached
1856+ def unit_get(attribute):
1857+ pass
1858+
1859+ unit_get('test')
1860+
1861+ will cache the result of unit_get + 'test' for future calls.
1862+ '''
1863+ def wrapper(*args, **kwargs):
1864+ global cache
1865+ key = str((func, args, kwargs))
1866+ try:
1867+ return cache[key]
1868+ except KeyError:
1869+ res = func(*args, **kwargs)
1870+ cache[key] = res
1871+ return res
1872+ return wrapper
1873+
1874+
1875+def flush(key):
1876+ ''' Flushes any entries from function cache where the
1877+ key is found in the function+args '''
1878+ flush_list = []
1879+ for item in cache:
1880+ if key in item:
1881+ flush_list.append(item)
1882+ for item in flush_list:
1883+ del cache[item]
1884+
1885+
1886+def log(message, level=None):
1887+ "Write a message to the juju log"
1888+ command = ['juju-log']
1889+ if level:
1890+ command += ['-l', level]
1891+ command += [message]
1892+ subprocess.call(command)
1893+
1894+
1895+class Serializable(UserDict.IterableUserDict):
1896+ "Wrapper, an object that can be serialized to yaml or json"
1897+
1898+ def __init__(self, obj):
1899+ # wrap the object
1900+ UserDict.IterableUserDict.__init__(self)
1901+ self.data = obj
1902+
1903+ def __getattr__(self, attr):
1904+ # See if this object has attribute.
1905+ if attr in ("json", "yaml", "data"):
1906+ return self.__dict__[attr]
1907+ # Check for attribute in wrapped object.
1908+ got = getattr(self.data, attr, MARKER)
1909+ if got is not MARKER:
1910+ return got
1911+ # Proxy to the wrapped object via dict interface.
1912+ try:
1913+ return self.data[attr]
1914+ except KeyError:
1915+ raise AttributeError(attr)
1916+
1917+ def __getstate__(self):
1918+ # Pickle as a standard dictionary.
1919+ return self.data
1920+
1921+ def __setstate__(self, state):
1922+ # Unpickle into our wrapper.
1923+ self.data = state
1924+
1925+ def json(self):
1926+ "Serialize the object to json"
1927+ return json.dumps(self.data)
1928+
1929+ def yaml(self):
1930+ "Serialize the object to yaml"
1931+ return yaml.dump(self.data)
1932+
1933+
1934+def execution_environment():
1935+ """A convenient bundling of the current execution context"""
1936+ context = {}
1937+ context['conf'] = config()
1938+ if relation_id():
1939+ context['reltype'] = relation_type()
1940+ context['relid'] = relation_id()
1941+ context['rel'] = relation_get()
1942+ context['unit'] = local_unit()
1943+ context['rels'] = relations()
1944+ context['env'] = os.environ
1945+ return context
1946+
1947+
1948+def in_relation_hook():
1949+ "Determine whether we're running in a relation hook"
1950+ return 'JUJU_RELATION' in os.environ
1951+
1952+
1953+def relation_type():
1954+ "The scope for the current relation hook"
1955+ return os.environ.get('JUJU_RELATION', None)
1956+
1957+
1958+def relation_id():
1959+ "The relation ID for the current relation hook"
1960+ return os.environ.get('JUJU_RELATION_ID', None)
1961+
1962+
1963+def local_unit():
1964+ "Local unit ID"
1965+ return os.environ['JUJU_UNIT_NAME']
1966+
1967+
1968+def remote_unit():
1969+ "The remote unit for the current relation hook"
1970+ return os.environ['JUJU_REMOTE_UNIT']
1971+
1972+
1973+def service_name():
1974+ "The name service group this unit belongs to"
1975+ return local_unit().split('/')[0]
1976+
1977+
1978+@cached
1979+def config(scope=None):
1980+ "Juju charm configuration"
1981+ config_cmd_line = ['config-get']
1982+ if scope is not None:
1983+ config_cmd_line.append(scope)
1984+ config_cmd_line.append('--format=json')
1985+ try:
1986+ return json.loads(subprocess.check_output(config_cmd_line))
1987+ except ValueError:
1988+ return None
1989+
1990+
1991+@cached
1992+def relation_get(attribute=None, unit=None, rid=None):
1993+ _args = ['relation-get', '--format=json']
1994+ if rid:
1995+ _args.append('-r')
1996+ _args.append(rid)
1997+ _args.append(attribute or '-')
1998+ if unit:
1999+ _args.append(unit)
2000+ try:
2001+ return json.loads(subprocess.check_output(_args))
2002+ except ValueError:
2003+ return None
2004+
2005+
2006+def relation_set(relation_id=None, relation_settings={}, **kwargs):
2007+ relation_cmd_line = ['relation-set']
2008+ if relation_id is not None:
2009+ relation_cmd_line.extend(('-r', relation_id))
2010+ for k, v in (relation_settings.items() + kwargs.items()):
2011+ if v is None:
2012+ relation_cmd_line.append('{}='.format(k))
2013+ else:
2014+ relation_cmd_line.append('{}={}'.format(k, v))
2015+ subprocess.check_call(relation_cmd_line)
2016+ # Flush cache of any relation-gets for local unit
2017+ flush(local_unit())
2018+
2019+
2020+@cached
2021+def relation_ids(reltype=None):
2022+ "A list of relation_ids"
2023+ reltype = reltype or relation_type()
2024+ relid_cmd_line = ['relation-ids', '--format=json']
2025+ if reltype is not None:
2026+ relid_cmd_line.append(reltype)
2027+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
2029+
2030+
2031+@cached
2032+def related_units(relid=None):
2033+ "A list of related units"
2034+ relid = relid or relation_id()
2035+ units_cmd_line = ['relation-list', '--format=json']
2036+ if relid is not None:
2037+ units_cmd_line.extend(('-r', relid))
2038+ return json.loads(subprocess.check_output(units_cmd_line)) or []
2039+
2040+
2041+@cached
2042+def relation_for_unit(unit=None, rid=None):
2043+ "Get the json representation of a unit's relation"
2044+ unit = unit or remote_unit()
2045+ relation = relation_get(unit=unit, rid=rid)
2046+ for key in relation:
2047+ if key.endswith('-list'):
2048+ relation[key] = relation[key].split()
2049+ relation['__unit__'] = unit
2050+ return relation
2051+
2052+
2053+@cached
2054+def relations_for_id(relid=None):
2055+ "Get relations of a specific relation ID"
2056+ relation_data = []
2057+ relid = relid or relation_id()
2058+ for unit in related_units(relid):
2059+ unit_data = relation_for_unit(unit, relid)
2060+ unit_data['__relid__'] = relid
2061+ relation_data.append(unit_data)
2062+ return relation_data
2063+
2064+
2065+@cached
2066+def relations_of_type(reltype=None):
2067+ "Get relations of a specific type"
2068+ relation_data = []
2069+ reltype = reltype or relation_type()
2070+ for relid in relation_ids(reltype):
2071+ for relation in relations_for_id(relid):
2072+ relation['__relid__'] = relid
2073+ relation_data.append(relation)
2074+ return relation_data
2075+
2076+
2077+@cached
2078+def relation_types():
2079+ "Get a list of relation types supported by this charm"
2080+ charmdir = os.environ.get('CHARM_DIR', '')
2081+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2082+ md = yaml.safe_load(mdf)
2083+ rel_types = []
2084+ for key in ('provides', 'requires', 'peers'):
2085+ section = md.get(key)
2086+ if section:
2087+ rel_types.extend(section.keys())
2088+ mdf.close()
2089+ return rel_types
2090+
2091+
2092+@cached
2093+def relations():
2094+ rels = {}
2095+ for reltype in relation_types():
2096+ relids = {}
2097+ for relid in relation_ids(reltype):
2098+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
2099+ for unit in related_units(relid):
2100+ reldata = relation_get(unit=unit, rid=relid)
2101+ units[unit] = reldata
2102+ relids[relid] = units
2103+ rels[reltype] = relids
2104+ return rels
2105+
2106+
2107+def open_port(port, protocol="TCP"):
2108+ "Open a service network port"
2109+ _args = ['open-port']
2110+ _args.append('{}/{}'.format(port, protocol))
2111+ subprocess.check_call(_args)
2112+
2113+
2114+def close_port(port, protocol="TCP"):
2115+ "Close a service network port"
2116+ _args = ['close-port']
2117+ _args.append('{}/{}'.format(port, protocol))
2118+ subprocess.check_call(_args)
2119+
2120+
2121+@cached
2122+def unit_get(attribute):
2123+ _args = ['unit-get', '--format=json', attribute]
2124+ try:
2125+ return json.loads(subprocess.check_output(_args))
2126+ except ValueError:
2127+ return None
2128+
2129+
2130+def unit_private_ip():
2131+ return unit_get('private-address')
2132+
2133+
2134+class UnregisteredHookError(Exception):
2135+ pass
2136+
2137+
2138+class Hooks(object):
2139+ def __init__(self):
2140+ super(Hooks, self).__init__()
2141+ self._hooks = {}
2142+
2143+ def register(self, name, function):
2144+ self._hooks[name] = function
2145+
2146+ def execute(self, args):
2147+ hook_name = os.path.basename(args[0])
2148+ if hook_name in self._hooks:
2149+ self._hooks[hook_name]()
2150+ else:
2151+ raise UnregisteredHookError(hook_name)
2152+
2153+ def hook(self, *hook_names):
2154+ def wrapper(decorated):
2155+ for hook_name in hook_names:
2156+ self.register(hook_name, decorated)
2157+ else:
2158+ self.register(decorated.__name__, decorated)
2159+ if '_' in decorated.__name__:
2160+ self.register(
2161+ decorated.__name__.replace('_', '-'), decorated)
2162+ return decorated
2163+ return wrapper
2164+
2165+
2166+def charm_dir():
2167+ return os.environ.get('CHARM_DIR')
2168
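The `Hooks` class added above dispatches on the basename of the executed hook file, since Juju invokes a charm through symlinks named after each hook. A minimal Python 3 sketch of the same registration/dispatch pattern (the charm code itself is Python 2; hook names here are illustrative):

```python
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    """Map hook names to handler functions, as hookenv.Hooks does."""
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        # Juju runs the charm via a symlink named after the hook.
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        return self._hooks[hook_name]()

    def hook(self, *hook_names):
        def wrapper(decorated):
            for hook_name in hook_names:
                self.register(hook_name, decorated)
            # Also register under the function's own name, translating
            # underscores to hyphens to match Juju's hook naming.
            self.register(decorated.__name__, decorated)
            if '_' in decorated.__name__:
                self.register(decorated.__name__.replace('_', '-'),
                              decorated)
            return decorated
        return wrapper


hooks = Hooks()


@hooks.hook('config-changed')
def config_changed():
    return 'reconfigured'


# Dispatch as if Juju had run the hooks/config-changed symlink.
print(hooks.execute(['hooks/config-changed']))
```

This is why `quantum_hooks.py` can serve every hook through one entry point: `hooks.execute(sys.argv)` resolves the symlink name back to the decorated function.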
2169=== added file 'hooks/charmhelpers/core/host.py'
2170--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
2171+++ hooks/charmhelpers/core/host.py 2013-10-15 01:35:59 +0000
2172@@ -0,0 +1,241 @@
2173+"""Tools for working with the host system"""
2174+# Copyright 2012 Canonical Ltd.
2175+#
2176+# Authors:
2177+# Nick Moffitt <nick.moffitt@canonical.com>
2178+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
2179+
2180+import os
2181+import pwd
2182+import grp
2183+import random
2184+import string
2185+import subprocess
2186+import hashlib
2187+
2188+from collections import OrderedDict
2189+
2190+from hookenv import log
2191+
2192+
2193+def service_start(service_name):
2194+ return service('start', service_name)
2195+
2196+
2197+def service_stop(service_name):
2198+ return service('stop', service_name)
2199+
2200+
2201+def service_restart(service_name):
2202+ return service('restart', service_name)
2203+
2204+
2205+def service_reload(service_name, restart_on_failure=False):
2206+ service_result = service('reload', service_name)
2207+ if not service_result and restart_on_failure:
2208+ service_result = service('restart', service_name)
2209+ return service_result
2210+
2211+
2212+def service(action, service_name):
2213+ cmd = ['service', service_name, action]
2214+ return subprocess.call(cmd) == 0
2215+
2216+
2217+def service_running(service):
2218+ try:
2219+ output = subprocess.check_output(['service', service, 'status'])
2220+ except subprocess.CalledProcessError:
2221+ return False
2222+ else:
2223+ if ("start/running" in output or "is running" in output):
2224+ return True
2225+ else:
2226+ return False
2227+
2228+
2229+def adduser(username, password=None, shell='/bin/bash', system_user=False):
2230+ """Add a user"""
2231+ try:
2232+ user_info = pwd.getpwnam(username)
2233+ log('user {0} already exists!'.format(username))
2234+ except KeyError:
2235+ log('creating user {0}'.format(username))
2236+ cmd = ['useradd']
2237+ if system_user or password is None:
2238+ cmd.append('--system')
2239+ else:
2240+ cmd.extend([
2241+ '--create-home',
2242+ '--shell', shell,
2243+ '--password', password,
2244+ ])
2245+ cmd.append(username)
2246+ subprocess.check_call(cmd)
2247+ user_info = pwd.getpwnam(username)
2248+ return user_info
2249+
2250+
2251+def add_user_to_group(username, group):
2252+ """Add a user to a group"""
2253+ cmd = [
2254+ 'gpasswd', '-a',
2255+ username,
2256+ group
2257+ ]
2258+ log("Adding user {} to group {}".format(username, group))
2259+ subprocess.check_call(cmd)
2260+
2261+
2262+def rsync(from_path, to_path, flags='-r', options=None):
2263+ """Replicate the contents of a path"""
2264+ options = options or ['--delete', '--executability']
2265+ cmd = ['/usr/bin/rsync', flags]
2266+ cmd.extend(options)
2267+ cmd.append(from_path)
2268+ cmd.append(to_path)
2269+ log(" ".join(cmd))
2270+ return subprocess.check_output(cmd).strip()
2271+
2272+
2273+def symlink(source, destination):
2274+ """Create a symbolic link"""
2275+ log("Symlinking {} as {}".format(source, destination))
2276+ cmd = [
2277+ 'ln',
2278+ '-sf',
2279+ source,
2280+ destination,
2281+ ]
2282+ subprocess.check_call(cmd)
2283+
2284+
2285+def mkdir(path, owner='root', group='root', perms=0555, force=False):
2286+ """Create a directory"""
2287+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
2288+ perms))
2289+ uid = pwd.getpwnam(owner).pw_uid
2290+ gid = grp.getgrnam(group).gr_gid
2291+ realpath = os.path.abspath(path)
2292+ if os.path.exists(realpath):
2293+ if force and not os.path.isdir(realpath):
2294+ log("Removing non-directory file {} prior to mkdir()".format(path))
2295+ os.unlink(realpath)
2296+ else:
2297+ os.makedirs(realpath, perms)
2298+ os.chown(realpath, uid, gid)
2299+
2300+
2301+def write_file(path, content, owner='root', group='root', perms=0444):
2302+ """Create or overwrite a file with the contents of a string"""
2303+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2304+ uid = pwd.getpwnam(owner).pw_uid
2305+ gid = grp.getgrnam(group).gr_gid
2306+ with open(path, 'w') as target:
2307+ os.fchown(target.fileno(), uid, gid)
2308+ os.fchmod(target.fileno(), perms)
2309+ target.write(content)
2310+
2311+
2312+def mount(device, mountpoint, options=None, persist=False):
2313+ '''Mount a filesystem'''
2314+ cmd_args = ['mount']
2315+ if options is not None:
2316+ cmd_args.extend(['-o', options])
2317+ cmd_args.extend([device, mountpoint])
2318+ try:
2319+ subprocess.check_output(cmd_args)
2320+ except subprocess.CalledProcessError, e:
2321+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2322+ return False
2323+ if persist:
2324+ # TODO: update fstab
2325+ pass
2326+ return True
2327+
2328+
2329+def umount(mountpoint, persist=False):
2330+ '''Unmount a filesystem'''
2331+ cmd_args = ['umount', mountpoint]
2332+ try:
2333+ subprocess.check_output(cmd_args)
2334+ except subprocess.CalledProcessError, e:
2335+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2336+ return False
2337+ if persist:
2338+ # TODO: update fstab
2339+ pass
2340+ return True
2341+
2342+
2343+def mounts():
2344+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
2345+ with open('/proc/mounts') as f:
2346+ # [['/mount/point','/dev/path'],[...]]
2347+ system_mounts = [m[1::-1] for m in [l.strip().split()
2348+ for l in f.readlines()]]
2349+ return system_mounts
2350+
2351+
2352+def file_hash(path):
2353+ ''' Generate a md5 hash of the contents of 'path' or None if not found '''
2354+ if os.path.exists(path):
2355+ h = hashlib.md5()
2356+ with open(path, 'r') as source:
2357+ h.update(source.read()) # IGNORE:E1101 - it does have update
2358+ return h.hexdigest()
2359+ else:
2360+ return None
2361+
2362+
2363+def restart_on_change(restart_map):
2364+ ''' Restart services based on configuration files changing
2365+
2366+ This function is used as a decorator, for example
2367+
2368+ @restart_on_change({
2369+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
2370+ })
2371+ def ceph_client_changed():
2372+ ...
2373+
2374+ In this example, the cinder-api and cinder-volume services
2375+ would be restarted if /etc/ceph/ceph.conf is changed by the
2376+ ceph_client_changed function.
2377+ '''
2378+ def wrap(f):
2379+ def wrapped_f(*args):
2380+ checksums = {}
2381+ for path in restart_map:
2382+ checksums[path] = file_hash(path)
2383+ f(*args)
2384+ restarts = []
2385+ for path in restart_map:
2386+ if checksums[path] != file_hash(path):
2387+ restarts += restart_map[path]
2388+ for service_name in list(OrderedDict.fromkeys(restarts)):
2389+ service('restart', service_name)
2390+ return wrapped_f
2391+ return wrap
2392+
2393+
2394+def lsb_release():
2395+ '''Return /etc/lsb-release in a dict'''
2396+ d = {}
2397+ with open('/etc/lsb-release', 'r') as lsb:
2398+ for l in lsb:
2399+ k, v = l.split('=')
2400+ d[k.strip()] = v.strip()
2401+ return d
2402+
2403+
2404+def pwgen(length=None):
2405+ '''Generate a random password.'''
2406+ if length is None:
2407+ length = random.choice(range(35, 45))
2408+ alphanumeric_chars = [
2409+ l for l in (string.letters + string.digits)
2410+ if l not in 'l0QD1vAEIOUaeiou']
2411+ random_chars = [
2412+ random.choice(alphanumeric_chars) for _ in range(length)]
2413+ return(''.join(random_chars))
2414
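`restart_on_change` in host.py above checksums each watched file before the wrapped hook runs and again after, restarting only the services whose config actually changed. A runnable Python 3 sketch of the same before/after-hash pattern, with the `service restart` call stubbed out and the file/service names purely illustrative:

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

restarted = []  # stands in for 'service <name> restart' calls


def file_hash(path):
    """md5 of a file's contents, or None if the file is absent."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()


def restart_on_change(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            # Snapshot checksums, run the hook, then compare.
            checksums = {path: file_hash(path) for path in restart_map}
            result = f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as host.py does.
            for service_name in OrderedDict.fromkeys(restarts):
                restarted.append(service_name)
            return result
        return wrapped_f
    return wrap


conf = os.path.join(tempfile.mkdtemp(), 'ceph.conf')
open(conf, 'w').close()


@restart_on_change({conf: ['cinder-api', 'cinder-volume']})
def ceph_client_changed():
    with open(conf, 'w') as f:
        f.write('[global]\n')


ceph_client_changed()
print(restarted)
```

The `OrderedDict.fromkeys` step matters when several watched files map to the same service: it restarts each service once, in first-seen order.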
2415=== added directory 'hooks/charmhelpers/fetch'
2416=== added file 'hooks/charmhelpers/fetch/__init__.py'
2417--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2418+++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 01:35:59 +0000
2419@@ -0,0 +1,209 @@
2420+import importlib
2421+from yaml import safe_load
2422+from charmhelpers.core.host import (
2423+ lsb_release
2424+)
2425+from urlparse import (
2426+ urlparse,
2427+ urlunparse,
2428+)
2429+import subprocess
2430+from charmhelpers.core.hookenv import (
2431+ config,
2432+ log,
2433+)
2434+import apt_pkg
2435+
2436+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2437+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2438+"""
2439+PROPOSED_POCKET = """# Proposed
2440+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2441+"""
2442+
2443+
2444+def filter_installed_packages(packages):
2445+ """Returns a list of packages that require installation"""
2446+ apt_pkg.init()
2447+ cache = apt_pkg.Cache()
2448+ _pkgs = []
2449+ for package in packages:
2450+ try:
2451+ p = cache[package]
2452+ p.current_ver or _pkgs.append(package)
2453+ except KeyError:
2454+ log('Package {} has no installation candidate.'.format(package),
2455+ level='WARNING')
2456+ _pkgs.append(package)
2457+ return _pkgs
2458+
2459+
2460+def apt_install(packages, options=None, fatal=False):
2461+ """Install one or more packages"""
2462+ options = options or []
2463+ cmd = ['apt-get', '-y']
2464+ cmd.extend(options)
2465+ cmd.append('install')
2466+ if isinstance(packages, basestring):
2467+ cmd.append(packages)
2468+ else:
2469+ cmd.extend(packages)
2470+ log("Installing {} with options: {}".format(packages,
2471+ options))
2472+ if fatal:
2473+ subprocess.check_call(cmd)
2474+ else:
2475+ subprocess.call(cmd)
2476+
2477+
2478+def apt_update(fatal=False):
2479+ """Update local apt cache"""
2480+ cmd = ['apt-get', 'update']
2481+ if fatal:
2482+ subprocess.check_call(cmd)
2483+ else:
2484+ subprocess.call(cmd)
2485+
2486+
2487+def apt_purge(packages, fatal=False):
2488+ """Purge one or more packages"""
2489+ cmd = ['apt-get', '-y', 'purge']
2490+ if isinstance(packages, basestring):
2491+ cmd.append(packages)
2492+ else:
2493+ cmd.extend(packages)
2494+ log("Purging {}".format(packages))
2495+ if fatal:
2496+ subprocess.check_call(cmd)
2497+ else:
2498+ subprocess.call(cmd)
2499+
2500+
2501+def add_source(source, key=None):
2502+ if ((source.startswith('ppa:') or
2503+ source.startswith('http:'))):
2504+ subprocess.check_call(['add-apt-repository', '--yes', source])
2505+ elif source.startswith('cloud:'):
2506+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
2507+ fatal=True)
2508+ pocket = source.split(':')[-1]
2509+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2510+ apt.write(CLOUD_ARCHIVE.format(pocket))
2511+ elif source == 'proposed':
2512+ release = lsb_release()['DISTRIB_CODENAME']
2513+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2514+ apt.write(PROPOSED_POCKET.format(release))
2515+ if key:
2516+ subprocess.check_call(['apt-key', 'import', key])
2517+
2518+
2519+class SourceConfigError(Exception):
2520+ pass
2521+
2522+
2523+def configure_sources(update=False,
2524+ sources_var='install_sources',
2525+ keys_var='install_keys'):
2526+ """
2527+ Configure multiple sources from charm configuration
2528+
2529+ Example config:
2530+ install_sources:
2531+ - "ppa:foo"
2532+ - "http://example.com/repo precise main"
2533+ install_keys:
2534+ - null
2535+ - "a1b2c3d4"
2536+
2537+ Note that 'null' (a.k.a. None) should not be quoted.
2538+ """
2539+ sources = safe_load(config(sources_var))
2540+ keys = safe_load(config(keys_var))
2541+ if isinstance(sources, basestring) and isinstance(keys, basestring):
2542+ add_source(sources, keys)
2543+ else:
2544+ if not len(sources) == len(keys):
2545+ msg = 'Install sources and keys lists are different lengths'
2546+ raise SourceConfigError(msg)
2547+ for src_num in range(len(sources)):
2548+ add_source(sources[src_num], keys[src_num])
2549+ if update:
2550+ apt_update(fatal=True)
2551+
2552+# The order of this list is very important. Handlers should be listed in from
2553+# least- to most-specific URL matching.
2554+FETCH_HANDLERS = (
2555+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2556+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2557+)
2558+
2559+
2560+class UnhandledSource(Exception):
2561+ pass
2562+
2563+
2564+def install_remote(source):
2565+ """
2566+ Install a file tree from a remote source
2567+
2568+ The specified source should be a url of the form:
2569+ scheme://[host]/path[#[option=value][&...]]
2570+
2571+ Schemes supported are based on this module's submodules
2572+ Options supported are submodule-specific"""
2573+ # We ONLY check for True here because can_handle may return a string
2574+ # explaining why it can't handle a given source.
2575+ handlers = [h for h in plugins() if h.can_handle(source) is True]
2576+ installed_to = None
2577+ for handler in handlers:
2578+ try:
2579+ installed_to = handler.install(source)
2580+ except UnhandledSource:
2581+ pass
2582+ if not installed_to:
2583+ raise UnhandledSource("No handler found for source {}".format(source))
2584+ return installed_to
2585+
2586+
2587+def install_from_config(config_var_name):
2588+ charm_config = config()
2589+ source = charm_config[config_var_name]
2590+ return install_remote(source)
2591+
2592+
2593+class BaseFetchHandler(object):
2594+ """Base class for FetchHandler implementations in fetch plugins"""
2595+ def can_handle(self, source):
2596+ """Returns True if the source can be handled. Otherwise returns
2597+ a string explaining why it cannot"""
2598+ return "Wrong source type"
2599+
2600+ def install(self, source):
2601+ """Try to download and unpack the source. Return the path to the
2602+ unpacked files or raise UnhandledSource."""
2603+ raise UnhandledSource("Wrong source type {}".format(source))
2604+
2605+ def parse_url(self, url):
2606+ return urlparse(url)
2607+
2608+ def base_url(self, url):
2609+ """Return url without querystring or fragment"""
2610+ parts = list(self.parse_url(url))
2611+ parts[4:] = ['' for i in parts[4:]]
2612+ return urlunparse(parts)
2613+
2614+
2615+def plugins(fetch_handlers=None):
2616+ if not fetch_handlers:
2617+ fetch_handlers = FETCH_HANDLERS
2618+ plugin_list = []
2619+ for handler_name in fetch_handlers:
2620+ package, classname = handler_name.rsplit('.', 1)
2621+ try:
2622+ handler_class = getattr(importlib.import_module(package), classname)
2623+ plugin_list.append(handler_class())
2624+ except (ImportError, AttributeError):
2625+ # Skip missing plugins so that they can be omitted from
2626+ # installation if desired
2627+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
2628+ return plugin_list
2629
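A subtle point in `install_remote` above: `can_handle` returns either `True` or a *string* explaining the refusal, so eligibility must be tested with `is True` rather than plain truthiness. A standalone sketch of that selection rule (the handler classes here are simplified stand-ins for the real fetch submodules):

```python
class UnhandledSource(Exception):
    pass


class BzrHandler(object):
    """Accepts bzr+ssh: and lp: URLs, as BzrUrlFetchHandler does."""
    def can_handle(self, source):
        if source.startswith(('bzr+ssh:', 'lp:')):
            return True
        return "Wrong source type"

    def install(self, source):
        return ('bzr', source)


class ArchiveHandler(object):
    """Accepts archive URLs, as ArchiveUrlFetchHandler does."""
    def can_handle(self, source):
        if source.startswith(('http:', 'https:', 'ftp:')):
            return True
        return ('archive', source) and ('archive', source)[0] and "Wrong source type"

    def install(self, source):
        return ('archive', source)


def install_remote(source, handlers):
    # Compare with 'is True': a rejection-reason string is truthy too.
    eligible = [h for h in handlers if h.can_handle(source) is True]
    for handler in eligible:
        try:
            return handler.install(source)
        except UnhandledSource:
            continue
    raise UnhandledSource("No handler found for source {}".format(source))


handlers = [ArchiveHandler(), BzrHandler()]
print(install_remote('lp:charm-helpers', handlers))
```

If truthiness were used instead of `is True`, every handler would appear eligible, since the "Wrong source type" rejection string is itself truthy.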
2630=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
2631--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
2632+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 01:35:59 +0000
2633@@ -0,0 +1,48 @@
2634+import os
2635+import urllib2
2636+from charmhelpers.fetch import (
2637+ BaseFetchHandler,
2638+ UnhandledSource
2639+)
2640+from charmhelpers.payload.archive import (
2641+ get_archive_handler,
2642+ extract,
2643+)
2644+from charmhelpers.core.host import mkdir
2645+
2646+
2647+class ArchiveUrlFetchHandler(BaseFetchHandler):
2648+ """Handler for archives via generic URLs"""
2649+ def can_handle(self, source):
2650+ url_parts = self.parse_url(source)
2651+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
2652+ return "Wrong source type"
2653+ if get_archive_handler(self.base_url(source)):
2654+ return True
2655+ return False
2656+
2657+ def download(self, source, dest):
2658+ # propagate all exceptions
2659+ # URLError, OSError, etc
2660+ response = urllib2.urlopen(source)
2661+ try:
2662+ with open(dest, 'w') as dest_file:
2663+ dest_file.write(response.read())
2664+ except Exception as e:
2665+ if os.path.isfile(dest):
2666+ os.unlink(dest)
2667+ raise e
2668+
2669+ def install(self, source):
2670+ url_parts = self.parse_url(source)
2671+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2672+ if not os.path.exists(dest_dir):
2673+ mkdir(dest_dir, perms=0755)
2674+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2675+ try:
2676+ self.download(source, dld_file)
2677+ except urllib2.URLError as e:
2678+ raise UnhandledSource(e.reason)
2679+ except OSError as e:
2680+ raise UnhandledSource(e.strerror)
2681+ return extract(dld_file)
2682
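`ArchiveUrlFetchHandler.download` above takes care to unlink a partially written file before re-raising, so a failed fetch never leaves a truncated archive behind. The same pattern in Python 3 (`urllib.request` standing in for the charm's `urllib2`), demonstrated with a local `file://` URL so no network is needed:

```python
import os
import tempfile
from urllib.request import urlopen  # stands in for Python 2's urllib2


def download(source, dest):
    """Fetch source into dest; remove any partial file on failure."""
    response = urlopen(source)
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(response.read())
    except Exception:
        # Don't leave a truncated download behind.
        if os.path.isfile(dest):
            os.unlink(dest)
        raise


# Demonstrate against a throwaway local file.
fd, src = tempfile.mkstemp()
os.write(fd, b'payload')
os.close(fd)
dest = src + '.fetched'
download('file://' + src, dest)
with open(dest, 'rb') as f:
    fetched = f.read()
print(fetched)
```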
2683=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
2684--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
2685+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 01:35:59 +0000
2686@@ -0,0 +1,49 @@
2687+import os
2688+from charmhelpers.fetch import (
2689+ BaseFetchHandler,
2690+ UnhandledSource
2691+)
2692+from charmhelpers.core.host import mkdir
2693+
2694+try:
2695+ from bzrlib.branch import Branch
2696+except ImportError:
2697+ from charmhelpers.fetch import apt_install
2698+ apt_install("python-bzrlib")
2699+ from bzrlib.branch import Branch
2700+
2701+class BzrUrlFetchHandler(BaseFetchHandler):
2702+ """Handler for bazaar branches via generic and lp URLs"""
2703+ def can_handle(self, source):
2704+ url_parts = self.parse_url(source)
2705+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
2706+ return False
2707+ else:
2708+ return True
2709+
2710+ def branch(self, source, dest):
2711+ url_parts = self.parse_url(source)
2712+ # If we use lp:branchname scheme we need to load plugins
2713+ if not self.can_handle(source):
2714+ raise UnhandledSource("Cannot handle {}".format(source))
2715+ if url_parts.scheme == "lp":
2716+ from bzrlib.plugin import load_plugins
2717+ load_plugins()
2718+ try:
2719+ remote_branch = Branch.open(source)
2720+ remote_branch.bzrdir.sprout(dest).open_branch()
2721+ except Exception as e:
2722+ raise e
2723+
2724+ def install(self, source):
2725+ url_parts = self.parse_url(source)
2726+ branch_name = url_parts.path.strip("/").split("/")[-1]
2727+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
2728+ if not os.path.exists(dest_dir):
2729+ mkdir(dest_dir, perms=0755)
2730+ try:
2731+ self.branch(source, dest_dir)
2732+ except OSError as e:
2733+ raise UnhandledSource(e.strerror)
2734+ return dest_dir
2735+
2736
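`BzrUrlFetchHandler.install` above derives the destination directory from the last path segment of the branch URL, sprouting into `$CHARM_DIR/fetched/<branch_name>`. A small sketch of that path derivation (`urllib.parse` standing in for the Python 2 `urlparse` module; the `charm_dir` default is a hypothetical example value):

```python
import os
from urllib.parse import urlparse  # urlparse module in the Python 2 code


def branch_dest(source, charm_dir='/var/lib/juju/unit-x/charm'):
    """Where the handler would sprout the branch: the URL's last
    path segment under <charm_dir>/fetched/."""
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)


print(branch_dest('lp:charm-helpers'))
```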
2737=== added directory 'hooks/charmhelpers/payload'
2738=== added file 'hooks/charmhelpers/payload/__init__.py'
2739--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
2740+++ hooks/charmhelpers/payload/__init__.py 2013-10-15 01:35:59 +0000
2741@@ -0,0 +1,1 @@
2742+"Tools for working with files injected into a charm just before deployment."
2743
2744=== added file 'hooks/charmhelpers/payload/execd.py'
2745--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
2746+++ hooks/charmhelpers/payload/execd.py 2013-10-15 01:35:59 +0000
2747@@ -0,0 +1,50 @@
2748+#!/usr/bin/env python
2749+
2750+import os
2751+import sys
2752+import subprocess
2753+from charmhelpers.core import hookenv
2754+
2755+
2756+def default_execd_dir():
2757+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
2758+
2759+
2760+def execd_module_paths(execd_dir=None):
2761+ """Generate a list of full paths to modules within execd_dir."""
2762+ if not execd_dir:
2763+ execd_dir = default_execd_dir()
2764+
2765+ if not os.path.exists(execd_dir):
2766+ return
2767+
2768+ for subpath in os.listdir(execd_dir):
2769+ module = os.path.join(execd_dir, subpath)
2770+ if os.path.isdir(module):
2771+ yield module
2772+
2773+
2774+def execd_submodule_paths(command, execd_dir=None):
2775+ """Generate a list of full paths to the specified command within execd_dir.
2776+ """
2777+ for module_path in execd_module_paths(execd_dir):
2778+ path = os.path.join(module_path, command)
2779+ if os.access(path, os.X_OK) and os.path.isfile(path):
2780+ yield path
2781+
2782+
2783+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
2784+ """Run command for each module within execd_dir which defines it."""
2785+ for submodule_path in execd_submodule_paths(command, execd_dir):
2786+ try:
2787+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
2788+ except subprocess.CalledProcessError as e:
2789+ hookenv.log("Error ({}) running {}. Output: {}".format(
2790+ e.returncode, e.cmd, e.output))
2791+ if die_on_error:
2792+ sys.exit(e.returncode)
2793+
2794+
2795+def execd_preinstall(execd_dir=None):
2796+ """Run charm-pre-install for each module within execd_dir."""
2797+ execd_run('charm-pre-install', execd_dir=execd_dir)
2798
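The execd helpers above scan `$CHARM_DIR/exec.d` for subdirectories, then run each executable `<module>/charm-pre-install` they contain. A self-contained Python 3 sketch of that scan-and-run flow against a throwaway directory (POSIX assumed; the module and script names are illustrative):

```python
import os
import stat
import subprocess
import tempfile


def execd_submodule_paths(command, execd_dir):
    """Yield each executable <execd_dir>/<module>/<command>."""
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        if not os.path.isdir(module):
            continue
        path = os.path.join(module, command)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            yield path


# Build a throwaway exec.d layout with one runnable module.
execd = tempfile.mkdtemp()
module = os.path.join(execd, 'my-module')
os.mkdir(module)
script = os.path.join(module, 'charm-pre-install')
with open(script, 'w') as f:
    f.write('#!/bin/sh\nexit 0\n')
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

found = list(execd_submodule_paths('charm-pre-install', execd))
for path in found:
    subprocess.check_call([path])
print(len(found))
```

This is the mechanism `execd_preinstall` relies on: files dropped into `exec.d/*/charm-pre-install` just before deployment run ahead of the charm's own install hook.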
2799=== modified symlink 'hooks/cluster-relation-departed'
2800=== target changed u'hooks.py' => u'quantum_hooks.py'
2801=== modified symlink 'hooks/config-changed'
2802=== target changed u'hooks.py' => u'quantum_hooks.py'
2803=== modified symlink 'hooks/ha-relation-joined'
2804=== target changed u'hooks.py' => u'quantum_hooks.py'
2805=== modified symlink 'hooks/install'
2806=== target changed u'hooks.py' => u'quantum_hooks.py'
2807=== removed directory 'hooks/lib'
2808=== removed file 'hooks/lib/__init__.py'
2809=== removed file 'hooks/lib/cluster_utils.py'
2810--- hooks/lib/cluster_utils.py 2013-03-20 16:08:54 +0000
2811+++ hooks/lib/cluster_utils.py 1970-01-01 00:00:00 +0000
2812@@ -1,130 +0,0 @@
2813-#
2814-# Copyright 2012 Canonical Ltd.
2815-#
2816-# This file is sourced from lp:openstack-charm-helpers
2817-#
2818-# Authors:
2819-# James Page <james.page@ubuntu.com>
2820-# Adam Gandelman <adamg@ubuntu.com>
2821-#
2822-
2823-from lib.utils import (
2824- juju_log,
2825- relation_ids,
2826- relation_list,
2827- relation_get,
2828- get_unit_hostname,
2829- config_get
2830- )
2831-import subprocess
2832-import os
2833-
2834-
2835-def is_clustered():
2836- for r_id in (relation_ids('ha') or []):
2837- for unit in (relation_list(r_id) or []):
2838- clustered = relation_get('clustered',
2839- rid=r_id,
2840- unit=unit)
2841- if clustered:
2842- return True
2843- return False
2844-
2845-
2846-def is_leader(resource):
2847- cmd = [
2848- "crm", "resource",
2849- "show", resource
2850- ]
2851- try:
2852- status = subprocess.check_output(cmd)
2853- except subprocess.CalledProcessError:
2854- return False
2855- else:
2856- if get_unit_hostname() in status:
2857- return True
2858- else:
2859- return False
2860-
2861-
2862-def peer_units():
2863- peers = []
2864- for r_id in (relation_ids('cluster') or []):
2865- for unit in (relation_list(r_id) or []):
2866- peers.append(unit)
2867- return peers
2868-
2869-
2870-def oldest_peer(peers):
2871- local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
2872- for peer in peers:
2873- remote_unit_no = int(peer.split('/')[1])
2874- if remote_unit_no < local_unit_no:
2875- return False
2876- return True
2877-
2878-
2879-def eligible_leader(resource):
2880- if is_clustered():
2881- if not is_leader(resource):
2882- juju_log('INFO', 'Deferring action to CRM leader.')
2883- return False
2884- else:
2885- peers = peer_units()
2886- if peers and not oldest_peer(peers):
2887- juju_log('INFO', 'Deferring action to oldest service unit.')
2888- return False
2889- return True
2890-
2891-
2892-def https():
2893- '''
2894- Determines whether enough data has been provided in configuration
2895- or relation data to configure HTTPS
2896- .
2897- returns: boolean
2898- '''
2899- if config_get('use-https') == "yes":
2900- return True
2901- if config_get('ssl_cert') and config_get('ssl_key'):
2902- return True
2903- for r_id in relation_ids('identity-service'):
2904- for unit in relation_list(r_id):
2905- if (relation_get('https_keystone', rid=r_id, unit=unit) and
2906- relation_get('ssl_cert', rid=r_id, unit=unit) and
2907- relation_get('ssl_key', rid=r_id, unit=unit) and
2908- relation_get('ca_cert', rid=r_id, unit=unit)):
2909- return True
2910- return False
2911-
2912-
2913-def determine_api_port(public_port):
2914- '''
2915- Determine correct API server listening port based on
2916- existence of HTTPS reverse proxy and/or haproxy.
2917-
2918- public_port: int: standard public port for given service
2919-
2920- returns: int: the correct listening port for the API service
2921- '''
2922- i = 0
2923- if len(peer_units()) > 0 or is_clustered():
2924- i += 1
2925- if https():
2926- i += 1
2927- return public_port - (i * 10)
2928-
2929-
2930-def determine_haproxy_port(public_port):
2931- '''
2932- Description: Determine correct proxy listening port based on public IP +
2933- existence of HTTPS reverse proxy.
2934-
2935- public_port: int: standard public port for given service
2936-
2937- returns: int: the correct listening port for the HAProxy service
2938- '''
2939- i = 0
2940- if https():
2941- i += 1
2942- return public_port - (i * 10)
2943
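The removed `cluster_utils.py` computed listening ports by stepping the public port back 10 for each fronting layer (haproxy when clustered, plus an HTTPS reverse proxy). A pure-function sketch of that arithmetic, with the cluster/HTTPS detection replaced by explicit flags since the real helpers read relation data:

```python
def determine_api_port(public_port, clustered=False, https=False):
    """Step the API listen port back 10 per fronting layer,
    as the removed determine_api_port did."""
    offset = 0
    if clustered:  # haproxy sits in front of the API
        offset += 1
    if https:      # an HTTPS reverse proxy sits in front as well
        offset += 1
    return public_port - (offset * 10)


def determine_haproxy_port(public_port, https=False):
    """haproxy itself only steps back for the HTTPS proxy."""
    return public_port - (10 if https else 0)


# e.g. a 5000 public port with both haproxy and SSL termination:
print(determine_api_port(5000, clustered=True, https=True))  # 4980
```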
2944=== removed file 'hooks/lib/openstack_common.py'
2945--- hooks/lib/openstack_common.py 2013-05-22 22:24:48 +0000
2946+++ hooks/lib/openstack_common.py 1970-01-01 00:00:00 +0000
2947@@ -1,230 +0,0 @@
2948-#!/usr/bin/python
2949-
2950-# Common python helper functions used for OpenStack charms.
2951-
2952-import apt_pkg as apt
2953-import subprocess
2954-import os
2955-
2956-CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
2957-CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
2958-
2959-ubuntu_openstack_release = {
2960- 'oneiric': 'diablo',
2961- 'precise': 'essex',
2962- 'quantal': 'folsom',
2963- 'raring': 'grizzly',
2964-}
2965-
2966-
2967-openstack_codenames = {
2968- '2011.2': 'diablo',
2969- '2012.1': 'essex',
2970- '2012.2': 'folsom',
2971- '2013.1': 'grizzly',
2972- '2013.2': 'havana',
2973-}
2974-
2975-# The ugly duckling
2976-swift_codenames = {
2977- '1.4.3': 'diablo',
2978- '1.4.8': 'essex',
2979- '1.7.4': 'folsom',
2980- '1.7.6': 'grizzly',
2981- '1.7.7': 'grizzly',
2982- '1.8.0': 'grizzly',
2983-}
2984-
2985-
2986-def juju_log(msg):
2987- subprocess.check_call(['juju-log', msg])
2988-
2989-
2990-def error_out(msg):
2991- juju_log("FATAL ERROR: %s" % msg)
2992- exit(1)
2993-
2994-
2995-def lsb_release():
2996- '''Return /etc/lsb-release in a dict'''
2997- lsb = open('/etc/lsb-release', 'r')
2998- d = {}
2999- for l in lsb:
3000- k, v = l.split('=')
3001- d[k.strip()] = v.strip()
3002- return d
3003-
3004-
3005-def get_os_codename_install_source(src):
3006- '''Derive OpenStack release codename from a given installation source.'''
3007- ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3008-
3009- rel = ''
3010- if src == 'distro':
3011- try:
3012- rel = ubuntu_openstack_release[ubuntu_rel]
3013- except KeyError:
3014- e = 'Code not derive openstack release for '\
3015- 'this Ubuntu release: %s' % rel
3016- error_out(e)
3017- return rel
3018-
3019- if src.startswith('cloud:'):
3020- ca_rel = src.split(':')[1]
3021- ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
3022- return ca_rel
3023-
3024- # Best guess match based on deb string provided
3025- if src.startswith('deb') or src.startswith('ppa'):
3026- for k, v in openstack_codenames.iteritems():
3027- if v in src:
3028- return v
3029-
3030-
3031-def get_os_codename_version(vers):
3032- '''Determine OpenStack codename from version number.'''
3033- try:
3034- return openstack_codenames[vers]
3035- except KeyError:
3036- e = 'Could not determine OpenStack codename for version %s' % vers
3037- error_out(e)
3038-
3039-
3040-def get_os_version_codename(codename):
3041- '''Determine OpenStack version number from codename.'''
3042- for k, v in openstack_codenames.iteritems():
3043- if v == codename:
3044- return k
3045- e = 'Could not derive OpenStack version for '\
3046- 'codename: %s' % codename
3047- error_out(e)
3048-
3049-
3050-def get_os_codename_package(pkg):
3051- '''Derive OpenStack release codename from an installed package.'''
3052- apt.init()
3053- cache = apt.Cache()
3054- try:
3055- pkg = cache[pkg]
3056- except:
3057- e = 'Could not determine version of installed package: %s' % pkg
3058- error_out(e)
3059-
3060- vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
3061-
3062- try:
3063- if 'swift' in pkg.name:
3064- vers = vers[:5]
3065- return swift_codenames[vers]
3066- else:
3067- vers = vers[:6]
3068- return openstack_codenames[vers]
3069- except KeyError:
3070- e = 'Could not determine OpenStack codename for version %s' % vers
3071- error_out(e)
3072-
3073-
3074-def get_os_version_package(pkg):
3075- '''Derive OpenStack version number from an installed package.'''
3076- codename = get_os_codename_package(pkg)
3077-
3078- if 'swift' in pkg:
3079- vers_map = swift_codenames
3080- else:
3081- vers_map = openstack_codenames
3082-
3083- for version, cname in vers_map.iteritems():
3084- if cname == codename:
3085- return version
3086- e = "Could not determine OpenStack version for package: %s" % pkg
3087- error_out(e)
3088-
3089-
3090-def configure_installation_source(rel):
3091- '''Configure apt installation source.'''
3092-
3093- def _import_key(keyid):
3094- cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
3095- "--recv-keys %s" % keyid
3096- try:
3097- subprocess.check_call(cmd.split(' '))
3098- except subprocess.CalledProcessError:
3099- error_out("Error importing repo key %s" % keyid)
3100-
3101- if rel == 'distro':
3102- return
3103- elif rel[:4] == "ppa:":
3104- src = rel
3105- subprocess.check_call(["add-apt-repository", "-y", src])
3106- elif rel[:3] == "deb":
3107- l = len(rel.split('|'))
3108- if l == 2:
3109- src, key = rel.split('|')
3110- juju_log("Importing PPA key from keyserver for %s" % src)
3111- _import_key(key)
3112- elif l == 1:
3113- src = rel
3114- else:
3115- error_out("Invalid openstack-release: %s" % rel)
3116-
3117- with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
3118- f.write(src)
3119- elif rel[:6] == 'cloud:':
3120- ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3121- rel = rel.split(':')[1]
3122- u_rel = rel.split('-')[0]
3123- ca_rel = rel.split('-')[1]
3124-
3125- if u_rel != ubuntu_rel:
3126- e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
3127- 'version (%s)' % (ca_rel, ubuntu_rel)
3128- error_out(e)
3129-
3130- if 'staging' in ca_rel:
3131- # staging is just a regular PPA.
3132- os_rel = ca_rel.split('/')[0]
3133- ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
3134- cmd = 'add-apt-repository -y %s' % ppa
3135- subprocess.check_call(cmd.split(' '))
3136- return
3137-
3138- # map charm config options to actual archive pockets.
3139- pockets = {
3140- 'folsom': 'precise-updates/folsom',
3141- 'folsom/updates': 'precise-updates/folsom',
3142- 'folsom/proposed': 'precise-proposed/folsom',
3143- 'grizzly': 'precise-updates/grizzly',
3144- 'grizzly/updates': 'precise-updates/grizzly',
3145- 'grizzly/proposed': 'precise-proposed/grizzly'
3146- }
3147-
3148- try:
3149- pocket = pockets[ca_rel]
3150- except KeyError:
3151- e = 'Invalid Cloud Archive release specified: %s' % rel
3152- error_out(e)
3153-
3154- src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
3155- _import_key(CLOUD_ARCHIVE_KEY_ID)
3156-
3157- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
3158- f.write(src)
3159- else:
3160- error_out("Invalid openstack-release specified: %s" % rel)
3161-
3162-
3163-def save_script_rc(script_path="scripts/scriptrc", **env_vars):
3164- """
3165- Write an rc file in the charm-delivered directory containing
3166- exported environment variables provided by env_vars. Any charm scripts run
3167- outside the juju hook environment can source this scriptrc to obtain
3168- updated config information necessary to perform health checks or
3169- service changes.
3170- """
3171- charm_dir = os.getenv('CHARM_DIR')
3172- juju_rc_path = "%s/%s" % (charm_dir, script_path)
3173- with open(juju_rc_path, 'wb') as rc_script:
3174- rc_script.write(
3175- "#!/bin/bash\n")
3176- [rc_script.write('export %s=%s\n' % (u, p))
3177- for u, p in env_vars.iteritems() if u != "script_path"]
3178
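The `cloud:` branch of `configure_installation_source` above splits the config value into an Ubuntu series and a Cloud Archive pocket before mapping it to an apt line. A standalone sketch of that split (the helper name is illustrative, not part of the charm):

```python
def parse_cloud_source(src):
    # Split e.g. 'cloud:precise-folsom/updates' into the Ubuntu series
    # ('precise') and the Cloud Archive pocket ('folsom/updates'), the
    # same split performed in configure_installation_source() above.
    rel = src.split(':')[1]
    u_rel, ca_rel = rel.split('-', 1)
    return u_rel, ca_rel
```

The series is then checked against `lsb_release()['DISTRIB_CODENAME']` and the pocket looked up in the `pockets` map.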
3179=== removed file 'hooks/lib/utils.py'
3180--- hooks/lib/utils.py 2013-04-12 15:39:37 +0000
3181+++ hooks/lib/utils.py 1970-01-01 00:00:00 +0000
3182@@ -1,359 +0,0 @@
3183-#
3184-# Copyright 2012 Canonical Ltd.
3185-#
3186-# This file is sourced from lp:openstack-charm-helpers
3187-#
3188-# Authors:
3189-# James Page <james.page@ubuntu.com>
3190-# Paul Collins <paul.collins@canonical.com>
3191-# Adam Gandelman <adamg@ubuntu.com>
3192-#
3193-
3194-import json
3195-import os
3196-import subprocess
3197-import socket
3198-import sys
3199-import hashlib
3200-
3201-
3202-def do_hooks(hooks):
3203- hook = os.path.basename(sys.argv[0])
3204-
3205- try:
3206- hook_func = hooks[hook]
3207- except KeyError:
3208- juju_log('INFO',
3209- "This charm doesn't know how to handle '{}'.".format(hook))
3210- else:
3211- hook_func()
3212-
3213-
3214-def install(*pkgs):
3215- cmd = [
3216- 'apt-get',
3217- '-y',
3218- 'install'
3219- ]
3220- for pkg in pkgs:
3221- cmd.append(pkg)
3222- subprocess.check_call(cmd)
3223-
3224-TEMPLATES_DIR = 'templates'
3225-
3226-try:
3227- import jinja2
3228-except ImportError:
3229- install('python-jinja2')
3230- import jinja2
3231-
3232-try:
3233- import dns.resolver
3234-except ImportError:
3235- install('python-dnspython')
3236- import dns.resolver
3237-
3238-
3239-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
3240- templates = jinja2.Environment(
3241- loader=jinja2.FileSystemLoader(template_dir)
3242- )
3243- template = templates.get_template(template_name)
3244- return template.render(context)
3245-
3246-CLOUD_ARCHIVE = \
3247-""" # Ubuntu Cloud Archive
3248-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3249-"""
3250-
3251-CLOUD_ARCHIVE_POCKETS = {
3252- 'folsom': 'precise-updates/folsom',
3253- 'folsom/updates': 'precise-updates/folsom',
3254- 'folsom/proposed': 'precise-proposed/folsom',
3255- 'grizzly': 'precise-updates/grizzly',
3256- 'grizzly/updates': 'precise-updates/grizzly',
3257- 'grizzly/proposed': 'precise-proposed/grizzly'
3258- }
3259-
3260-
3261-def configure_source():
3262- source = str(config_get('openstack-origin'))
3263- if not source:
3264- return
3265- if source.startswith('ppa:'):
3266- cmd = [
3267- 'add-apt-repository',
3268- source
3269- ]
3270- subprocess.check_call(cmd)
3271- if source.startswith('cloud:'):
3272- # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
3273- # cloud:precise-folsom/updates or cloud:precise-folsom/proposed
3274- install('ubuntu-cloud-keyring')
3275- pocket = source.split(':')[1]
3276- pocket = pocket.split('-')[1]
3277- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
3278- apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
3279- if source.startswith('deb'):
3280- l = len(source.split('|'))
3281- if l == 2:
3282- (apt_line, key) = source.split('|')
3283- cmd = [
3284- 'apt-key',
3285- 'adv', '--keyserver keyserver.ubuntu.com',
3286- '--recv-keys', key
3287- ]
3288- subprocess.check_call(cmd)
3289- elif l == 1:
3290- apt_line = source
3291-
3292- with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
3293- apt.write(apt_line + "\n")
3294- cmd = [
3295- 'apt-get',
3296- 'update'
3297- ]
3298- subprocess.check_call(cmd)
3299-
3300-# Protocols
3301-TCP = 'TCP'
3302-UDP = 'UDP'
3303-
3304-
3305-def expose(port, protocol='TCP'):
3306- cmd = [
3307- 'open-port',
3308- '{}/{}'.format(port, protocol)
3309- ]
3310- subprocess.check_call(cmd)
3311-
3312-
3313-def juju_log(severity, message):
3314- cmd = [
3315- 'juju-log',
3316- '--log-level', severity,
3317- message
3318- ]
3319- subprocess.check_call(cmd)
3320-
3321-
3322-cache = {}
3323-
3324-
3325-def cached(func):
3326- def wrapper(*args, **kwargs):
3327- global cache
3328- key = str((func, args, kwargs))
3329- try:
3330- return cache[key]
3331- except KeyError:
3332- res = func(*args, **kwargs)
3333- cache[key] = res
3334- return res
3335- return wrapper
3336-
3337-
3338-@cached
3339-def relation_ids(relation):
3340- cmd = [
3341- 'relation-ids',
3342- relation
3343- ]
3344- result = str(subprocess.check_output(cmd)).split()
3345- if result == "":
3346- return None
3347- else:
3348- return result
3349-
3350-
3351-@cached
3352-def relation_list(rid):
3353- cmd = [
3354- 'relation-list',
3355- '-r', rid,
3356- ]
3357- result = str(subprocess.check_output(cmd)).split()
3358- if result == "":
3359- return None
3360- else:
3361- return result
3362-
3363-
3364-@cached
3365-def relation_get(attribute, unit=None, rid=None):
3366- cmd = [
3367- 'relation-get',
3368- ]
3369- if rid:
3370- cmd.append('-r')
3371- cmd.append(rid)
3372- cmd.append(attribute)
3373- if unit:
3374- cmd.append(unit)
3375- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
3376- if value == "":
3377- return None
3378- else:
3379- return value
3380-
3381-
3382-@cached
3383-def relation_get_dict(relation_id=None, remote_unit=None):
3384- """Obtain all relation data as dict by way of JSON"""
3385- cmd = [
3386- 'relation-get', '--format=json'
3387- ]
3388- if relation_id:
3389- cmd.append('-r')
3390- cmd.append(relation_id)
3391- if remote_unit:
3392- remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
3393- os.environ['JUJU_REMOTE_UNIT'] = remote_unit
3394- j = subprocess.check_output(cmd)
3395- if remote_unit and remote_unit_orig:
3396- os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
3397- d = json.loads(j)
3398- settings = {}
3399- # convert unicode to strings
3400- for k, v in d.iteritems():
3401- settings[str(k)] = str(v)
3402- return settings
3403-
3404-
3405-def relation_set(**kwargs):
3406- cmd = [
3407- 'relation-set'
3408- ]
3409- args = []
3410- for k, v in kwargs.items():
3411- if k == 'rid':
3412- if v:
3413- cmd.append('-r')
3414- cmd.append(v)
3415- else:
3416- args.append('{}={}'.format(k, v))
3417- cmd += args
3418- subprocess.check_call(cmd)
3419-
3420-
3421-@cached
3422-def unit_get(attribute):
3423- cmd = [
3424- 'unit-get',
3425- attribute
3426- ]
3427- value = subprocess.check_output(cmd).strip() # IGNORE:E1103
3428- if value == "":
3429- return None
3430- else:
3431- return value
3432-
3433-
3434-@cached
3435-def config_get(attribute):
3436- cmd = [
3437- 'config-get',
3438- '--format',
3439- 'json',
3440- ]
3441- out = subprocess.check_output(cmd).strip() # IGNORE:E1103
3442- cfg = json.loads(out)
3443-
3444- try:
3445- return cfg[attribute]
3446- except KeyError:
3447- return None
3448-
3449-
3450-@cached
3451-def get_unit_hostname():
3452- return socket.gethostname()
3453-
3454-
3455-@cached
3456-def get_host_ip(hostname=unit_get('private-address')):
3457- try:
3458- # Test to see if already an IPv4 address
3459- socket.inet_aton(hostname)
3460- return hostname
3461- except socket.error:
3462- answers = dns.resolver.query(hostname, 'A')
3463- if answers:
3464- return answers[0].address
3465- return None
3466-
3467-
3468-def _svc_control(service, action):
3469- subprocess.check_call(['service', service, action])
3470-
3471-
3472-def restart(*services):
3473- for service in services:
3474- _svc_control(service, 'restart')
3475-
3476-
3477-def stop(*services):
3478- for service in services:
3479- _svc_control(service, 'stop')
3480-
3481-
3482-def start(*services):
3483- for service in services:
3484- _svc_control(service, 'start')
3485-
3486-
3487-def reload(*services):
3488- for service in services:
3489- try:
3490- _svc_control(service, 'reload')
3491- except subprocess.CalledProcessError:
3492- # Reload failed - either service does not support reload
3493- # or it was not running - restart will fixup most things
3494- _svc_control(service, 'restart')
3495-
3496-
3497-def running(service):
3498- try:
3499- output = subprocess.check_output(['service', service, 'status'])
3500- except subprocess.CalledProcessError:
3501- return False
3502- else:
3503- if ("start/running" in output or
3504- "is running" in output):
3505- return True
3506- else:
3507- return False
3508-
3509-
3510-def file_hash(path):
3511- if os.path.exists(path):
3512- h = hashlib.md5()
3513- with open(path, 'r') as source:
3514- h.update(source.read()) # IGNORE:E1101 - it does have update
3515- return h.hexdigest()
3516- else:
3517- return None
3518-
3519-
3520-def inteli_restart(restart_map):
3521- def wrap(f):
3522- def wrapped_f(*args):
3523- checksums = {}
3524- for path in restart_map:
3525- checksums[path] = file_hash(path)
3526- f(*args)
3527- restarts = []
3528- for path in restart_map:
3529- if checksums[path] != file_hash(path):
3530- restarts += restart_map[path]
3531- restart(*list(set(restarts)))
3532- return wrapped_f
3533- return wrap
3534-
3535-
3536-def is_relation_made(relation, key='private-address'):
3537- for r_id in (relation_ids(relation) or []):
3538- for unit in (relation_list(r_id) or []):
3539- if relation_get(key, rid=r_id, unit=unit):
3540- return True
3541- return False
3542
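The `file_hash`/`inteli_restart` pair in the removed `utils.py` implements the restart-on-config-change pattern that this branch replaces with charmhelpers' `restart_on_change`. A self-contained sketch of the technique, assuming an injected `restart_func` for illustration (the charm instead shells out to `service <name> restart`):

```python
import hashlib
import os


def file_hash(path):
    # md5 of the file contents, or None if the file is missing,
    # mirroring file_hash() above.
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()


def restart_on_change(restart_map, restart_func):
    # Decorator: snapshot config-file hashes, run the wrapped hook,
    # then restart every service whose config file changed.
    def wrap(f):
        def wrapped(*args, **kwargs):
            before = {path: file_hash(path) for path in restart_map}
            result = f(*args, **kwargs)
            restarts = set()
            for path, services in restart_map.items():
                if before[path] != file_hash(path):
                    restarts.update(services)
            for svc in restarts:
                restart_func(svc)
            return result
        return wrapped
    return wrap
```

The restart set is deduplicated so a hook that rewrites several files restarts each affected service only once.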
3543=== modified symlink 'hooks/quantum-network-service-relation-changed'
3544=== target changed u'hooks.py' => u'quantum_hooks.py'
3545=== added file 'hooks/quantum_contexts.py'
3546--- hooks/quantum_contexts.py 1970-01-01 00:00:00 +0000
3547+++ hooks/quantum_contexts.py 2013-10-15 01:35:59 +0000
3548@@ -0,0 +1,175 @@
3549+# vim: set ts=4:et
3550+import os
3551+import uuid
3552+import socket
3553+from charmhelpers.core.hookenv import (
3554+ config,
3555+ relation_ids,
3556+ related_units,
3557+ relation_get,
3558+ unit_get,
3559+ cached,
3560+)
3561+from charmhelpers.fetch import (
3562+ apt_install,
3563+)
3564+from charmhelpers.contrib.openstack.context import (
3565+ OSContextGenerator,
3566+ context_complete
3567+)
3568+from charmhelpers.contrib.openstack.utils import (
3569+ get_os_codename_install_source
3570+)
3571+
3572+DB_USER = "quantum"
3573+QUANTUM_DB = "quantum"
3574+NOVA_DB_USER = "nova"
3575+NOVA_DB = "nova"
3576+
3577+QUANTUM_OVS_PLUGIN = \
3578+ "quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2"
3579+QUANTUM_NVP_PLUGIN = \
3580+ "quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2"
3581+NEUTRON_OVS_PLUGIN = \
3582+ "neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2"
3583+NEUTRON_NVP_PLUGIN = \
3584+ "neutron.plugins.nicira.nicira_nvp_plugin.NeutronPlugin.NvpPluginV2"
3585+NEUTRON = 'neutron'
3586+QUANTUM = 'quantum'
3587+
3588+
3589+def networking_name():
3590+ ''' Determine whether neutron or quantum should be used for name '''
3591+ if get_os_codename_install_source(config('openstack-origin')) >= 'havana':
3592+ return NEUTRON
3593+ else:
3594+ return QUANTUM
3595+
3596+OVS = 'ovs'
3597+NVP = 'nvp'
3598+
3599+CORE_PLUGIN = {
3600+ QUANTUM: {
3601+ OVS: QUANTUM_OVS_PLUGIN,
3602+ NVP: QUANTUM_NVP_PLUGIN
3603+ },
3604+ NEUTRON: {
3605+ OVS: NEUTRON_OVS_PLUGIN,
3606+ NVP: NEUTRON_NVP_PLUGIN
3607+ },
3608+}
3609+
3610+
3611+def core_plugin():
3612+ return CORE_PLUGIN[networking_name()][config('plugin')]
3613+
3614+
3615+class NetworkServiceContext(OSContextGenerator):
3616+ interfaces = ['quantum-network-service']
3617+
3618+ def __call__(self):
3619+ for rid in relation_ids('quantum-network-service'):
3620+ for unit in related_units(rid):
3621+ ctxt = {
3622+ 'keystone_host': relation_get('keystone_host',
3623+ rid=rid, unit=unit),
3624+ 'service_port': relation_get('service_port', rid=rid,
3625+ unit=unit),
3626+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
3627+ 'service_tenant': relation_get('service_tenant',
3628+ rid=rid, unit=unit),
3629+ 'service_username': relation_get('service_username',
3630+ rid=rid, unit=unit),
3631+ 'service_password': relation_get('service_password',
3632+ rid=rid, unit=unit),
3633+ 'quantum_host': relation_get('quantum_host',
3634+ rid=rid, unit=unit),
3635+ 'quantum_port': relation_get('quantum_port',
3636+ rid=rid, unit=unit),
3637+ 'quantum_url': relation_get('quantum_url',
3638+ rid=rid, unit=unit),
3639+ 'region': relation_get('region',
3640+ rid=rid, unit=unit),
3641+ # XXX: Hard-coded http.
3642+ 'service_protocol': 'http',
3643+ 'auth_protocol': 'http',
3644+ }
3645+ if context_complete(ctxt):
3646+ return ctxt
3647+ return {}
3648+
3649+
3650+class ExternalPortContext(OSContextGenerator):
3651+ def __call__(self):
3652+ if config('ext-port'):
3653+ return {"ext_port": config('ext-port')}
3654+ else:
3655+ return None
3656+
3657+
3658+class QuantumGatewayContext(OSContextGenerator):
3659+ def __call__(self):
3660+ ctxt = {
3661+ 'shared_secret': get_shared_secret(),
3662+ 'local_ip': get_host_ip(), # XXX: data network impact
3663+ 'core_plugin': core_plugin(),
3664+ 'plugin': config('plugin')
3665+ }
3666+ return ctxt
3667+
3668+
3669+class QuantumSharedDBContext(OSContextGenerator):
3670+ interfaces = ['shared-db']
3671+
3672+ def __call__(self):
3673+ for rid in relation_ids('shared-db'):
3674+ for unit in related_units(rid):
3675+ ctxt = {
3676+ 'database_host': relation_get('db_host', rid=rid,
3677+ unit=unit),
3678+ 'quantum_db': QUANTUM_DB,
3679+ 'quantum_user': DB_USER,
3680+ 'quantum_password': relation_get('quantum_password',
3681+ rid=rid, unit=unit),
3682+ 'nova_db': NOVA_DB,
3683+ 'nova_user': NOVA_DB_USER,
3684+ 'nova_password': relation_get('nova_password', rid=rid,
3685+ unit=unit)
3686+ }
3687+ if context_complete(ctxt):
3688+ return ctxt
3689+ return {}
3690+
3691+
3692+@cached
3693+def get_host_ip(hostname=None):
3694+ try:
3695+ import dns.resolver
3696+ except ImportError:
3697+ apt_install('python-dnspython', fatal=True)
3698+ import dns.resolver
3699+ hostname = hostname or unit_get('private-address')
3700+ try:
3701+ # Test to see if already an IPv4 address
3702+ socket.inet_aton(hostname)
3703+ return hostname
3704+ except socket.error:
3705+ answers = dns.resolver.query(hostname, 'A')
3706+ if answers:
3707+ return answers[0].address
3708+
3709+
3710+SHARED_SECRET = "/etc/{}/secret.txt"
3711+
3712+
3713+def get_shared_secret():
3714+ secret = None
3715+ _path = SHARED_SECRET.format(networking_name())
3716+ if not os.path.exists(_path):
3717+ secret = str(uuid.uuid4())
3718+ with open(_path, 'w') as secret_file:
3719+ secret_file.write(secret)
3720+ else:
3721+ with open(_path, 'r') as secret_file:
3722+ secret = secret_file.read().strip()
3723+ return secret
3724
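`get_shared_secret` above follows a generate-once-then-reuse pattern for the metadata proxy secret. A minimal sketch with the path parameterized for illustration (the charm derives it from `networking_name()` via `SHARED_SECRET`):

```python
import os
import uuid


def get_shared_secret(path):
    # First call: generate a random secret and persist it.
    # Later calls: return the stored value, so every hook invocation
    # (and every service reading the file) sees the same secret.
    if not os.path.exists(path):
        secret = str(uuid.uuid4())
        with open(path, 'w') as secret_file:
            secret_file.write(secret)
        return secret
    with open(path) as secret_file:
        return secret_file.read().strip()
```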
3725=== renamed file 'hooks/hooks.py' => 'hooks/quantum_hooks.py'
3726--- hooks/hooks.py 2013-05-22 23:02:16 +0000
3727+++ hooks/quantum_hooks.py 2013-10-15 01:35:59 +0000
3728@@ -1,313 +1,136 @@
3729 #!/usr/bin/python
3730
3731-import lib.utils as utils
3732-import lib.cluster_utils as cluster
3733-import lib.openstack_common as openstack
3734+from charmhelpers.core.hookenv import (
3735+ log, ERROR, WARNING,
3736+ config,
3737+ relation_get,
3738+ relation_set,
3739+ unit_get,
3740+ Hooks, UnregisteredHookError
3741+)
3742+from charmhelpers.fetch import (
3743+ apt_update,
3744+ apt_install,
3745+ filter_installed_packages,
3746+)
3747+from charmhelpers.core.host import (
3748+ restart_on_change,
3749+ lsb_release
3750+)
3751+from charmhelpers.contrib.hahelpers.cluster import(
3752+ eligible_leader
3753+)
3754+from charmhelpers.contrib.hahelpers.apache import(
3755+ install_ca_cert
3756+)
3757+from charmhelpers.contrib.openstack.utils import (
3758+ configure_installation_source,
3759+ openstack_upgrade_available
3760+)
3761+from charmhelpers.payload.execd import execd_preinstall
3762+
3763 import sys
3764-import quantum_utils as qutils
3765-import os
3766-
3767-PLUGIN = utils.config_get('plugin')
3768-
3769-
3770+from quantum_utils import (
3771+ register_configs,
3772+ restart_map,
3773+ do_openstack_upgrade,
3774+ get_packages,
3775+ get_early_packages,
3776+ get_common_package,
3777+ valid_plugin,
3778+ configure_ovs,
3779+ reassign_agent_resources,
3780+)
3781+from quantum_contexts import (
3782+ DB_USER, QUANTUM_DB,
3783+ NOVA_DB_USER, NOVA_DB,
3784+)
3785+
3786+hooks = Hooks()
3787+CONFIGS = register_configs()
3788+
3789+
3790+@hooks.hook('install')
3791 def install():
3792- utils.configure_source()
3793- if PLUGIN in qutils.GATEWAY_PKGS.keys():
3794- if PLUGIN == qutils.OVS:
3795- # Install OVS DKMS first to ensure that the ovs module
3796- # loaded supports GRE tunnels
3797- utils.install('openvswitch-datapath-dkms')
3798- utils.install(*qutils.GATEWAY_PKGS[PLUGIN])
3799+ execd_preinstall()
3800+ src = config('openstack-origin')
3801+ if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and
3802+ src == 'distro'):
3803+ src = 'cloud:precise-folsom'
3804+ configure_installation_source(src)
3805+ apt_update(fatal=True)
3806+ if valid_plugin():
3807+ apt_install(filter_installed_packages(get_early_packages()),
3808+ fatal=True)
3809+ apt_install(filter_installed_packages(get_packages()),
3810+ fatal=True)
3811 else:
3812- utils.juju_log('ERROR', 'Please provide a valid plugin config')
3813+ log('Please provide a valid plugin config', level=ERROR)
3814 sys.exit(1)
3815
3816
3817-@utils.inteli_restart(qutils.RESTART_MAP)
3818+@hooks.hook('config-changed')
3819+@restart_on_change(restart_map())
3820 def config_changed():
3821- src = utils.config_get('openstack-origin')
3822- available = openstack.get_os_codename_install_source(src)
3823- installed = openstack.get_os_codename_package('quantum-common')
3824- if (available and
3825- openstack.get_os_version_codename(available) > \
3826- openstack.get_os_version_codename(installed)):
3827- qutils.do_openstack_upgrade()
3828-
3829- if PLUGIN in qutils.GATEWAY_PKGS.keys():
3830- render_quantum_conf()
3831- render_dhcp_agent_conf()
3832- render_l3_agent_conf()
3833- render_metadata_agent_conf()
3834- render_metadata_api_conf()
3835- render_plugin_conf()
3836- render_ext_port_upstart()
3837- render_evacuate_unit()
3838- if PLUGIN == qutils.OVS:
3839- qutils.add_bridge(qutils.INT_BRIDGE)
3840- qutils.add_bridge(qutils.EXT_BRIDGE)
3841- ext_port = utils.config_get('ext-port')
3842- if ext_port:
3843- qutils.add_bridge_port(qutils.EXT_BRIDGE, ext_port)
3844+ if openstack_upgrade_available(get_common_package()):
3845+ do_openstack_upgrade(CONFIGS)
3846+ if valid_plugin():
3847+ CONFIGS.write_all()
3848+ configure_ovs()
3849 else:
3850- utils.juju_log('ERROR',
3851- 'Please provide a valid plugin config')
3852+ log('Please provide a valid plugin config', level=ERROR)
3853 sys.exit(1)
3854
3855
3856+@hooks.hook('upgrade-charm')
3857 def upgrade_charm():
3858 install()
3859 config_changed()
3860
3861
3862-def render_ext_port_upstart():
3863- if utils.config_get('ext-port'):
3864- with open(qutils.EXT_PORT_CONF, "w") as conf:
3865- conf.write(utils.render_template(
3866- os.path.basename(qutils.EXT_PORT_CONF),
3867- {"ext_port": utils.config_get('ext-port')}
3868- )
3869- )
3870- else:
3871- if os.path.exists(qutils.EXT_PORT_CONF):
3872- os.remove(qutils.EXT_PORT_CONF)
3873-
3874-
3875-def render_l3_agent_conf():
3876- context = get_keystone_conf()
3877- if (context and
3878- os.path.exists(qutils.L3_AGENT_CONF)):
3879- with open(qutils.L3_AGENT_CONF, "w") as conf:
3880- conf.write(utils.render_template(
3881- os.path.basename(qutils.L3_AGENT_CONF),
3882- context
3883- )
3884- )
3885-
3886-
3887-def render_dhcp_agent_conf():
3888- if (os.path.exists(qutils.DHCP_AGENT_CONF)):
3889- with open(qutils.DHCP_AGENT_CONF, "w") as conf:
3890- conf.write(utils.render_template(
3891- os.path.basename(qutils.DHCP_AGENT_CONF),
3892- {}
3893- )
3894- )
3895-
3896-
3897-def render_metadata_agent_conf():
3898- context = get_keystone_conf()
3899- if (context and
3900- os.path.exists(qutils.METADATA_AGENT_CONF)):
3901- context['local_ip'] = utils.get_host_ip()
3902- context['shared_secret'] = qutils.get_shared_secret()
3903- with open(qutils.METADATA_AGENT_CONF, "w") as conf:
3904- conf.write(utils.render_template(
3905- os.path.basename(qutils.METADATA_AGENT_CONF),
3906- context
3907- )
3908- )
3909-
3910-
3911-def render_quantum_conf():
3912- context = get_rabbit_conf()
3913- if (context and
3914- os.path.exists(qutils.QUANTUM_CONF)):
3915- context['core_plugin'] = \
3916- qutils.CORE_PLUGIN[PLUGIN]
3917- with open(qutils.QUANTUM_CONF, "w") as conf:
3918- conf.write(utils.render_template(
3919- os.path.basename(qutils.QUANTUM_CONF),
3920- context
3921- )
3922- )
3923-
3924-
3925-def render_plugin_conf():
3926- context = get_quantum_db_conf()
3927- if (context and
3928- os.path.exists(qutils.PLUGIN_CONF[PLUGIN])):
3929- context['local_ip'] = utils.get_host_ip()
3930- conf_file = qutils.PLUGIN_CONF[PLUGIN]
3931- with open(conf_file, "w") as conf:
3932- conf.write(utils.render_template(
3933- os.path.basename(conf_file),
3934- context
3935- )
3936- )
3937-
3938-
3939-def render_metadata_api_conf():
3940- context = get_nova_db_conf()
3941- r_context = get_rabbit_conf()
3942- q_context = get_keystone_conf()
3943- if (context and r_context and q_context and
3944- os.path.exists(qutils.NOVA_CONF)):
3945- context.update(r_context)
3946- context.update(q_context)
3947- context['shared_secret'] = qutils.get_shared_secret()
3948- with open(qutils.NOVA_CONF, "w") as conf:
3949- conf.write(utils.render_template(
3950- os.path.basename(qutils.NOVA_CONF),
3951- context
3952- )
3953- )
3954-
3955-
3956-def render_evacuate_unit():
3957- context = get_keystone_conf()
3958- if context:
3959- with open('/usr/local/bin/quantum-evacuate-unit', "w") as conf:
3960- conf.write(utils.render_template('evacuate_unit.py', context))
3961- os.chmod('/usr/local/bin/quantum-evacuate-unit', 0700)
3962-
3963-
3964-def get_keystone_conf():
3965- for relid in utils.relation_ids('quantum-network-service'):
3966- for unit in utils.relation_list(relid):
3967- conf = {
3968- "keystone_host": utils.relation_get('keystone_host',
3969- unit, relid),
3970- "service_port": utils.relation_get('service_port',
3971- unit, relid),
3972- "auth_port": utils.relation_get('auth_port', unit, relid),
3973- "service_username": utils.relation_get('service_username',
3974- unit, relid),
3975- "service_password": utils.relation_get('service_password',
3976- unit, relid),
3977- "service_tenant": utils.relation_get('service_tenant',
3978- unit, relid),
3979- "quantum_host": utils.relation_get('quantum_host',
3980- unit, relid),
3981- "quantum_port": utils.relation_get('quantum_port',
3982- unit, relid),
3983- "quantum_url": utils.relation_get('quantum_url',
3984- unit, relid),
3985- "region": utils.relation_get('region',
3986- unit, relid)
3987- }
3988- if None not in conf.itervalues():
3989- return conf
3990- return None
3991-
3992-
3993+@hooks.hook('shared-db-relation-joined')
3994 def db_joined():
3995- utils.relation_set(quantum_username=qutils.DB_USER,
3996- quantum_database=qutils.QUANTUM_DB,
3997- quantum_hostname=utils.unit_get('private-address'),
3998- nova_username=qutils.NOVA_DB_USER,
3999- nova_database=qutils.NOVA_DB,
4000- nova_hostname=utils.unit_get('private-address'))
4001-
4002-
4003-@utils.inteli_restart(qutils.RESTART_MAP)
4004-def db_changed():
4005- render_plugin_conf()
4006- render_metadata_api_conf()
4007-
4008-
4009-def get_quantum_db_conf():
4010- for relid in utils.relation_ids('shared-db'):
4011- for unit in utils.relation_list(relid):
4012- conf = {
4013- "host": utils.relation_get('db_host',
4014- unit, relid),
4015- "user": qutils.DB_USER,
4016- "password": utils.relation_get('quantum_password',
4017- unit, relid),
4018- "db": qutils.QUANTUM_DB
4019- }
4020- if None not in conf.itervalues():
4021- return conf
4022- return None
4023-
4024-
4025-def get_nova_db_conf():
4026- for relid in utils.relation_ids('shared-db'):
4027- for unit in utils.relation_list(relid):
4028- conf = {
4029- "host": utils.relation_get('db_host',
4030- unit, relid),
4031- "user": qutils.NOVA_DB_USER,
4032- "password": utils.relation_get('nova_password',
4033- unit, relid),
4034- "db": qutils.NOVA_DB
4035- }
4036- if None not in conf.itervalues():
4037- return conf
4038- return None
4039-
4040-
4041+ relation_set(quantum_username=DB_USER,
4042+ quantum_database=QUANTUM_DB,
4043+ quantum_hostname=unit_get('private-address'),
4044+ nova_username=NOVA_DB_USER,
4045+ nova_database=NOVA_DB,
4046+ nova_hostname=unit_get('private-address'))
4047+
4048+
4049+@hooks.hook('amqp-relation-joined')
4050 def amqp_joined():
4051- utils.relation_set(username=qutils.RABBIT_USER,
4052- vhost=qutils.RABBIT_VHOST)
4053-
4054-
4055-@utils.inteli_restart(qutils.RESTART_MAP)
4056-def amqp_changed():
4057- render_dhcp_agent_conf()
4058- render_quantum_conf()
4059- render_metadata_api_conf()
4060-
4061-
4062-def get_rabbit_conf():
4063- for relid in utils.relation_ids('amqp'):
4064- for unit in utils.relation_list(relid):
4065- conf = {
4066- "rabbit_host": utils.relation_get('private-address',
4067- unit, relid),
4068- "rabbit_virtual_host": qutils.RABBIT_VHOST,
4069- "rabbit_userid": qutils.RABBIT_USER,
4070- "rabbit_password": utils.relation_get('password',
4071- unit, relid)
4072- }
4073- clustered = utils.relation_get('clustered', unit, relid)
4074- if clustered:
4075- conf['rabbit_host'] = utils.relation_get('vip', unit, relid)
4076- if None not in conf.itervalues():
4077- return conf
4078- return None
4079-
4080-
4081-@utils.inteli_restart(qutils.RESTART_MAP)
4082+ relation_set(username=config('rabbit-user'),
4083+ vhost=config('rabbit-vhost'))
4084+
4085+
4086+@hooks.hook('shared-db-relation-changed',
4087+ 'amqp-relation-changed')
4088+@restart_on_change(restart_map())
4089+def db_amqp_changed():
4090+ CONFIGS.write_all()
4091+
4092+
4093+@hooks.hook('quantum-network-service-relation-changed')
4094+@restart_on_change(restart_map())
4095 def nm_changed():
4096- render_dhcp_agent_conf()
4097- render_l3_agent_conf()
4098- render_metadata_agent_conf()
4099- render_metadata_api_conf()
4100- render_evacuate_unit()
4101- store_ca_cert()
4102-
4103-
4104-def store_ca_cert():
4105- ca_cert = get_ca_cert()
4106- if ca_cert:
4107- qutils.install_ca(ca_cert)
4108-
4109-
4110-def get_ca_cert():
4111- for relid in utils.relation_ids('quantum-network-service'):
4112- for unit in utils.relation_list(relid):
4113- ca_cert = utils.relation_get('ca_cert', unit, relid)
4114- if ca_cert:
4115- return ca_cert
4116- return None
4117-
4118-
4119+ CONFIGS.write_all()
4120+ if relation_get('ca_cert'):
4121+ install_ca_cert(relation_get('ca_cert'))
4122+
4123+
4124+@hooks.hook("cluster-relation-departed")
4125 def cluster_departed():
4126- conf = get_keystone_conf()
4127- if conf and cluster.eligible_leader(None):
4128- qutils.reassign_agent_resources(conf)
4129-
4130-utils.do_hooks({
4131- "install": install,
4132- "config-changed": config_changed,
4133- "upgrade-charm": upgrade_charm,
4134- "shared-db-relation-joined": db_joined,
4135- "shared-db-relation-changed": db_changed,
4136- "amqp-relation-joined": amqp_joined,
4137- "amqp-relation-changed": amqp_changed,
4138- "quantum-network-service-relation-changed": nm_changed,
4139- "cluster-relation-departed": cluster_departed
4140- })
4141-
4142-sys.exit(0)
4143+ if config('plugin') == 'nvp':
4144+ log('Unable to re-assign agent resources for failed nodes with nvp',
4145+ level=WARNING)
4146+ return
4147+ if eligible_leader(None):
4148+ reassign_agent_resources()
4149+
4150+
4151+if __name__ == '__main__':
4152+ try:
4153+ hooks.execute(sys.argv)
4154+ except UnregisteredHookError as e:
4155+ log('Unknown hook {} - skipping.'.format(e))
4156
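The `@hooks.hook(...)` decorators in the rewritten `quantum_hooks.py` rely on charmhelpers' `Hooks` registry, which dispatches on the basename of the invoked hook script (Juju symlinks each hook name to the same Python file). A simplified sketch of that dispatch mechanism (charmhelpers raises `UnregisteredHookError` where this sketch raises `KeyError`):

```python
import os


class Hooks(object):
    # Hook functions register under one or more hook names; execute()
    # looks up the handler by the basename of sys.argv[0].
    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        def wrapper(func):
            for name in hook_names:
                self._registry[name] = func
            return func
        return wrapper

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._registry:
            raise KeyError('Unknown hook: %s' % hook_name)
        self._registry[hook_name]()


hooks = Hooks()
fired = []


@hooks.hook('shared-db-relation-changed', 'amqp-relation-changed')
def db_amqp_changed():
    fired.append('db_amqp_changed')


hooks.execute(['hooks/amqp-relation-changed'])
```

Registering one function under several names is what lets the branch collapse `db_changed` and `amqp_changed` into the single `db_amqp_changed` handler above.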
4157=== modified file 'hooks/quantum_utils.py'
4158--- hooks/quantum_utils.py 2013-05-25 21:11:15 +0000
4159+++ hooks/quantum_utils.py 2013-10-15 01:35:59 +0000
4160@@ -1,183 +1,312 @@
4161-import subprocess
4162-import os
4163-import uuid
4164-import base64
4165-import apt_pkg as apt
4166-from lib.utils import (
4167- juju_log as log,
4168- configure_source,
4169- config_get
4170- )
4171-
4172-
4173-OVS = "ovs"
4174-
4175-OVS_PLUGIN = \
4176- "quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2"
4177-CORE_PLUGIN = {
4178- OVS: OVS_PLUGIN,
4179- }
4180-
4181-OVS_PLUGIN_CONF = \
4182+from charmhelpers.core.host import service_running
4183+from charmhelpers.core.hookenv import (
4184+ log,
4185+ config,
4186+)
4187+from charmhelpers.fetch import (
4188+ apt_install,
4189+ apt_update
4190+)
4191+from charmhelpers.contrib.network.ovs import (
4192+ add_bridge,
4193+ add_bridge_port,
4194+ full_restart,
4195+)
4196+from charmhelpers.contrib.openstack.utils import (
4197+ configure_installation_source,
4198+ get_os_codename_install_source,
4199+ get_os_codename_package
4200+)
4201+
4202+import charmhelpers.contrib.openstack.context as context
4203+import charmhelpers.contrib.openstack.templating as templating
4204+from charmhelpers.contrib.openstack.neutron import headers_package
4205+from quantum_contexts import (
4206+ CORE_PLUGIN, OVS, NVP,
4207+ NEUTRON, QUANTUM,
4208+ networking_name,
4209+ QuantumGatewayContext,
4210+ NetworkServiceContext,
4211+ QuantumSharedDBContext,
4212+ ExternalPortContext,
4213+)
4214+
4215+
4216+def valid_plugin():
4217+ return config('plugin') in CORE_PLUGIN[networking_name()]
4218+
4219+QUANTUM_OVS_PLUGIN_CONF = \
4220 "/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini"
4221-PLUGIN_CONF = {
4222- OVS: OVS_PLUGIN_CONF,
4223- }
4224+QUANTUM_NVP_PLUGIN_CONF = \
4225+ "/etc/quantum/plugins/nicira/nvp.ini"
4226+QUANTUM_PLUGIN_CONF = {
4227+ OVS: QUANTUM_OVS_PLUGIN_CONF,
4228+ NVP: QUANTUM_NVP_PLUGIN_CONF
4229+}
4230+
4231+NEUTRON_OVS_PLUGIN_CONF = \
4232+ "/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
4233+NEUTRON_NVP_PLUGIN_CONF = \
4234+ "/etc/neutron/plugins/nicira/nvp.ini"
4235+NEUTRON_PLUGIN_CONF = {
4236+ OVS: NEUTRON_OVS_PLUGIN_CONF,
4237+ NVP: NEUTRON_NVP_PLUGIN_CONF
4238+}
4239+
4240+QUANTUM_GATEWAY_PKGS = {
4241+ OVS: [
4242+ "quantum-plugin-openvswitch-agent",
4243+ "quantum-l3-agent",
4244+ "quantum-dhcp-agent",
4245+ 'python-mysqldb',
4246+ "nova-api-metadata"
4247+ ],
4248+ NVP: [
4249+ "openvswitch-switch",
4250+ "quantum-dhcp-agent",
4251+ 'python-mysqldb',
4252+ "nova-api-metadata"
4253+ ]
4254+}
4255+
4256+NEUTRON_GATEWAY_PKGS = {
4257+ OVS: [
4258+ "neutron-plugin-openvswitch-agent",
4259+ "openvswitch-switch",
4260+ "neutron-l3-agent",
4261+ "neutron-dhcp-agent",
4262+ 'python-mysqldb',
4263+ 'python-oslo.config', # Force upgrade
4264+ "nova-api-metadata"
4265+ ],
4266+ NVP: [
4267+ "openvswitch-switch",
4268+ "neutron-dhcp-agent",
4269+ 'python-mysqldb',
4270+ 'python-oslo.config', # Force upgrade
4271+ "nova-api-metadata"
4272+ ]
4273+}
4274
4275 GATEWAY_PKGS = {
4276- OVS: [
4277- "quantum-plugin-openvswitch-agent",
4278- "quantum-l3-agent",
4279- "quantum-dhcp-agent",
4280- 'python-mysqldb',
4281- "nova-api-metadata"
4282- ],
4283- }
4284-
4285-GATEWAY_AGENTS = {
4286- OVS: [
4287- "quantum-plugin-openvswitch-agent",
4288- "quantum-l3-agent",
4289- "quantum-dhcp-agent",
4290- "nova-api-metadata"
4291- ],
4292- }
4293+ QUANTUM: QUANTUM_GATEWAY_PKGS,
4294+ NEUTRON: NEUTRON_GATEWAY_PKGS,
4295+}
4296+
4297+EARLY_PACKAGES = {
4298+ OVS: ['openvswitch-datapath-dkms']
4299+}
4300+
4301+
4302+def get_early_packages():
4303+    '''Return a list of packages for pre-install based on the configured plugin'''
4304+ if config('plugin') in EARLY_PACKAGES:
4305+ pkgs = EARLY_PACKAGES[config('plugin')]
4306+ else:
4307+ return []
4308+
4309+    # ensure kernel headers are installed to build any required dkms packages
4310+ if [p for p in pkgs if 'dkms' in p]:
4311+ return pkgs + [headers_package()]
4312+ return pkgs
4313+
4314+
4315+def get_packages():
4316+ '''Return a list of packages for install based on the configured plugin'''
4317+ return GATEWAY_PKGS[networking_name()][config('plugin')]
4318+
4319+
4320+def get_common_package():
4321+ if get_os_codename_package('quantum-common', fatal=False) is not None:
4322+ return 'quantum-common'
4323+ else:
4324+ return 'neutron-common'
4325
4326 EXT_PORT_CONF = '/etc/init/ext-port.conf'
4327-
4328-
4329-def get_os_version(package=None):
4330- apt.init()
4331- cache = apt.Cache()
4332- pkg = cache[package or 'quantum-common']
4333- if pkg.current_ver:
4334- return apt.upstream_version(pkg.current_ver.ver_str)
4335- else:
4336- return None
4337-
4338-
4339-if get_os_version('quantum-common') >= "2013.1":
4340- for plugin in GATEWAY_AGENTS:
4341- GATEWAY_AGENTS[plugin].append("quantum-metadata-agent")
4342-
4343-DB_USER = "quantum"
4344-QUANTUM_DB = "quantum"
4345-KEYSTONE_SERVICE = "quantum"
4346-NOVA_DB_USER = "nova"
4347-NOVA_DB = "nova"
4348+TEMPLATES = 'templates'
4349
4350 QUANTUM_CONF = "/etc/quantum/quantum.conf"
4351-L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
4352-DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
4353-METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
4354+QUANTUM_L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
4355+QUANTUM_DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
4356+QUANTUM_METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
4357+
4358+NEUTRON_CONF = "/etc/neutron/neutron.conf"
4359+NEUTRON_L3_AGENT_CONF = "/etc/neutron/l3_agent.ini"
4360+NEUTRON_DHCP_AGENT_CONF = "/etc/neutron/dhcp_agent.ini"
4361+NEUTRON_METADATA_AGENT_CONF = "/etc/neutron/metadata_agent.ini"
4362+
4363 NOVA_CONF = "/etc/nova/nova.conf"
4364
4365-RESTART_MAP = {
4366- QUANTUM_CONF: [
4367- 'quantum-l3-agent',
4368- 'quantum-dhcp-agent',
4369- 'quantum-metadata-agent',
4370- 'quantum-plugin-openvswitch-agent'
4371- ],
4372- DHCP_AGENT_CONF: [
4373- 'quantum-dhcp-agent'
4374- ],
4375- L3_AGENT_CONF: [
4376- 'quantum-l3-agent'
4377- ],
4378- METADATA_AGENT_CONF: [
4379- 'quantum-metadata-agent'
4380- ],
4381- OVS_PLUGIN_CONF: [
4382- 'quantum-plugin-openvswitch-agent'
4383- ],
4384- NOVA_CONF: [
4385- 'nova-api-metadata'
4386- ]
4387- }
4388-
4389-RABBIT_USER = "nova"
4390-RABBIT_VHOST = "nova"
4391+NOVA_CONFIG_FILES = {
4392+ NOVA_CONF: {
4393+ 'hook_contexts': [context.AMQPContext(),
4394+ QuantumSharedDBContext(),
4395+ NetworkServiceContext(),
4396+ QuantumGatewayContext()],
4397+ 'services': ['nova-api-metadata']
4398+ },
4399+}
4400+
4401+QUANTUM_SHARED_CONFIG_FILES = {
4402+ QUANTUM_DHCP_AGENT_CONF: {
4403+ 'hook_contexts': [QuantumGatewayContext()],
4404+ 'services': ['quantum-dhcp-agent']
4405+ },
4406+ QUANTUM_METADATA_AGENT_CONF: {
4407+ 'hook_contexts': [NetworkServiceContext(),
4408+ QuantumGatewayContext()],
4409+ 'services': ['quantum-metadata-agent']
4410+ },
4411+}
4412+QUANTUM_SHARED_CONFIG_FILES.update(NOVA_CONFIG_FILES)
4413+
4414+NEUTRON_SHARED_CONFIG_FILES = {
4415+ NEUTRON_DHCP_AGENT_CONF: {
4416+ 'hook_contexts': [QuantumGatewayContext()],
4417+ 'services': ['neutron-dhcp-agent']
4418+ },
4419+ NEUTRON_METADATA_AGENT_CONF: {
4420+ 'hook_contexts': [NetworkServiceContext(),
4421+ QuantumGatewayContext()],
4422+ 'services': ['neutron-metadata-agent']
4423+ },
4424+}
4425+NEUTRON_SHARED_CONFIG_FILES.update(NOVA_CONFIG_FILES)
4426+
4427+QUANTUM_OVS_CONFIG_FILES = {
4428+ QUANTUM_CONF: {
4429+ 'hook_contexts': [context.AMQPContext(),
4430+ QuantumGatewayContext()],
4431+ 'services': ['quantum-l3-agent',
4432+ 'quantum-dhcp-agent',
4433+ 'quantum-metadata-agent',
4434+ 'quantum-plugin-openvswitch-agent']
4435+ },
4436+ QUANTUM_L3_AGENT_CONF: {
4437+ 'hook_contexts': [NetworkServiceContext()],
4438+ 'services': ['quantum-l3-agent']
4439+ },
4440+ # TODO: Check to see if this is actually required
4441+ QUANTUM_OVS_PLUGIN_CONF: {
4442+ 'hook_contexts': [QuantumSharedDBContext(),
4443+ QuantumGatewayContext()],
4444+ 'services': ['quantum-plugin-openvswitch-agent']
4445+ },
4446+ EXT_PORT_CONF: {
4447+ 'hook_contexts': [ExternalPortContext()],
4448+ 'services': []
4449+ }
4450+}
4451+QUANTUM_OVS_CONFIG_FILES.update(QUANTUM_SHARED_CONFIG_FILES)
4452+
4453+NEUTRON_OVS_CONFIG_FILES = {
4454+ NEUTRON_CONF: {
4455+ 'hook_contexts': [context.AMQPContext(),
4456+ QuantumGatewayContext()],
4457+ 'services': ['neutron-l3-agent',
4458+ 'neutron-dhcp-agent',
4459+ 'neutron-metadata-agent',
4460+ 'neutron-plugin-openvswitch-agent']
4461+ },
4462+ NEUTRON_L3_AGENT_CONF: {
4463+ 'hook_contexts': [NetworkServiceContext()],
4464+ 'services': ['neutron-l3-agent']
4465+ },
4466+ # TODO: Check to see if this is actually required
4467+ NEUTRON_OVS_PLUGIN_CONF: {
4468+ 'hook_contexts': [QuantumSharedDBContext(),
4469+ QuantumGatewayContext()],
4470+ 'services': ['neutron-plugin-openvswitch-agent']
4471+ },
4472+ EXT_PORT_CONF: {
4473+ 'hook_contexts': [ExternalPortContext()],
4474+ 'services': []
4475+ }
4476+}
4477+NEUTRON_OVS_CONFIG_FILES.update(NEUTRON_SHARED_CONFIG_FILES)
4478+
4479+QUANTUM_NVP_CONFIG_FILES = {
4480+ QUANTUM_CONF: {
4481+ 'hook_contexts': [context.AMQPContext()],
4482+ 'services': ['quantum-dhcp-agent', 'quantum-metadata-agent']
4483+ },
4484+}
4485+QUANTUM_NVP_CONFIG_FILES.update(QUANTUM_SHARED_CONFIG_FILES)
4486+
4487+NEUTRON_NVP_CONFIG_FILES = {
4488+ NEUTRON_CONF: {
4489+ 'hook_contexts': [context.AMQPContext()],
4490+ 'services': ['neutron-dhcp-agent', 'neutron-metadata-agent']
4491+ },
4492+}
4493+NEUTRON_NVP_CONFIG_FILES.update(NEUTRON_SHARED_CONFIG_FILES)
4494+
4495+CONFIG_FILES = {
4496+ QUANTUM: {
4497+ NVP: QUANTUM_NVP_CONFIG_FILES,
4498+ OVS: QUANTUM_OVS_CONFIG_FILES,
4499+ },
4500+ NEUTRON: {
4501+ NVP: NEUTRON_NVP_CONFIG_FILES,
4502+ OVS: NEUTRON_OVS_CONFIG_FILES,
4503+ },
4504+}
4505+
4506+
4507+def register_configs():
4508+ ''' Register config files with their respective contexts. '''
4509+ release = get_os_codename_install_source(config('openstack-origin'))
4510+ configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
4511+ openstack_release=release)
4512+
4513+ plugin = config('plugin')
4514+ name = networking_name()
4515+ for conf in CONFIG_FILES[name][plugin]:
4516+ configs.register(conf,
4517+ CONFIG_FILES[name][plugin][conf]['hook_contexts'])
4518+
4519+ return configs
4520+
4521+
4522+def restart_map():
4523+ '''
4524+ Determine the correct resource map to be passed to
4525+    charmhelpers.core.host.restart_on_change() based on the services configured.
4526+
4527+    :returns: dict: A dictionary mapping config files to lists of services
4528+              that should be restarted when the file changes.
4529+ '''
4530+ _map = {}
4531+ name = networking_name()
4532+ for f, ctxt in CONFIG_FILES[name][config('plugin')].iteritems():
4533+ svcs = []
4534+ for svc in ctxt['services']:
4535+ svcs.append(svc)
4536+ if svcs:
4537+ _map[f] = svcs
4538+ return _map
4539+
4540
4541 INT_BRIDGE = "br-int"
4542 EXT_BRIDGE = "br-ex"
4543
4544-
4545-def add_bridge(name):
4546- status = subprocess.check_output(["ovs-vsctl", "show"])
4547- if "Bridge {}".format(name) not in status:
4548- log('INFO', 'Creating bridge {}'.format(name))
4549- subprocess.check_call(["ovs-vsctl", "add-br", name])
4550-
4551-
4552-def del_bridge(name):
4553- status = subprocess.check_output(["ovs-vsctl", "show"])
4554- if "Bridge {}".format(name) in status:
4555- log('INFO', 'Deleting bridge {}'.format(name))
4556- subprocess.check_call(["ovs-vsctl", "del-br", name])
4557-
4558-
4559-def add_bridge_port(name, port):
4560- status = subprocess.check_output(["ovs-vsctl", "show"])
4561- if ("Bridge {}".format(name) in status and
4562- "Interface \"{}\"".format(port) not in status):
4563- log('INFO',
4564- 'Adding port {} to bridge {}'.format(port, name))
4565- subprocess.check_call(["ovs-vsctl", "add-port", name, port])
4566- subprocess.check_call(["ip", "link", "set", port, "up"])
4567-
4568-
4569-def del_bridge_port(name, port):
4570- status = subprocess.check_output(["ovs-vsctl", "show"])
4571- if ("Bridge {}".format(name) in status and
4572- "Interface \"{}\"".format(port) in status):
4573- log('INFO',
4574- 'Deleting port {} from bridge {}'.format(port, name))
4575- subprocess.check_call(["ovs-vsctl", "del-port", name, port])
4576- subprocess.check_call(["ip", "link", "set", port, "down"])
4577-
4578-
4579-SHARED_SECRET = "/etc/quantum/secret.txt"
4580-
4581-
4582-def get_shared_secret():
4583- secret = None
4584- if not os.path.exists(SHARED_SECRET):
4585- secret = str(uuid.uuid4())
4586- with open(SHARED_SECRET, 'w') as secret_file:
4587- secret_file.write(secret)
4588- else:
4589- with open(SHARED_SECRET, 'r') as secret_file:
4590- secret = secret_file.read().strip()
4591- return secret
4592-
4593-
4594-def flush_local_configuration():
4595- if os.path.exists('/usr/bin/quantum-netns-cleanup'):
4596- cmd = [
4597- "quantum-netns-cleanup",
4598- "--config-file=/etc/quantum/quantum.conf"
4599- ]
4600- for agent_conf in ['l3_agent.ini', 'dhcp_agent.ini']:
4601- agent_cmd = list(cmd)
4602- agent_cmd.append('--config-file=/etc/quantum/{}'\
4603- .format(agent_conf))
4604- subprocess.call(agent_cmd)
4605-
4606-
4607-def install_ca(ca_cert):
4608- with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
4609- 'w') as crt:
4610- crt.write(base64.b64decode(ca_cert))
4611- subprocess.check_call(['update-ca-certificates', '--fresh'])
4612-
4613 DHCP_AGENT = "DHCP Agent"
4614 L3_AGENT = "L3 Agent"
4615
4616
4617-def reassign_agent_resources(env):
4618+# TODO: make work with neutron
4619+def reassign_agent_resources():
4620 ''' Use agent scheduler API to detect down agents and re-schedule '''
4621- from quantumclient.v2_0 import client
4622+ env = NetworkServiceContext()()
4623+ if not env:
4624+ log('Unable to re-assign resources at this time')
4625+ return
4626+ try:
4627+ from quantumclient.v2_0 import client
4628+ except ImportError:
4629+        # fall back to neutronclient for havana and later
4630+ from neutronclient.v2_0 import client
4631+
4632 # TODO: Fixup for https keystone
4633 auth_url = 'http://%(keystone_host)s:%(auth_port)s/v2.0' % env
4634 quantum = client.Client(username=env['service_username'],
4635@@ -192,7 +321,7 @@
4636 networks = {}
4637 for agent in agents['agents']:
4638 if not agent['alive']:
4639- log('INFO', 'DHCP Agent %s down' % agent['id'])
4640+ log('DHCP Agent %s down' % agent['id'])
4641 for network in \
4642 quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
4643 networks[network['id']] = agent['id']
4644@@ -203,7 +332,7 @@
4645 routers = {}
4646 for agent in agents['agents']:
4647 if not agent['alive']:
4648- log('INFO', 'L3 Agent %s down' % agent['id'])
4649+ log('L3 Agent %s down' % agent['id'])
4650 for router in \
4651 quantum.list_routers_on_l3_agent(agent['id'])['routers']:
4652 routers[router['id']] = agent['id']
4653@@ -213,8 +342,7 @@
4654 index = 0
4655 for router_id in routers:
4656 agent = index % len(l3_agents)
4657- log('INFO',
4658- 'Moving router %s from %s to %s' % \
4659+ log('Moving router %s from %s to %s' %
4660 (router_id, routers[router_id], l3_agents[agent]))
4661 quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
4662 router_id=router_id)
4663@@ -225,8 +353,7 @@
4664 index = 0
4665 for network_id in networks:
4666 agent = index % len(dhcp_agents)
4667- log('INFO',
4668- 'Moving network %s from %s to %s' % \
4669+ log('Moving network %s from %s to %s' %
4670 (network_id, networks[network_id], dhcp_agents[agent]))
4671 quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
4672 network_id=network_id)
4673@@ -234,16 +361,45 @@
4674 body={'network_id': network_id})
4675 index += 1
4676
4677-def do_openstack_upgrade():
4678- configure_source()
4679- plugin = config_get('plugin')
4680- pkgs = []
4681- if plugin in GATEWAY_PKGS.keys():
4682- pkgs += GATEWAY_PKGS[plugin]
4683- if plugin == OVS:
4684- pkgs.append('openvswitch-datapath-dkms')
4685- cmd = ['apt-get', '-y',
4686- '--option', 'Dpkg::Options::=--force-confold',
4687- '--option', 'Dpkg::Options::=--force-confdef',
4688- 'install'] + pkgs
4689- subprocess.check_call(cmd)
4690+
4691+def do_openstack_upgrade(configs):
4692+ """
4693+ Perform an upgrade. Takes care of upgrading packages, rewriting
4694+ configs, database migrations and potentially any other post-upgrade
4695+ actions.
4696+
4697+ :param configs: The charms main OSConfigRenderer object.
4698+ """
4699+ new_src = config('openstack-origin')
4700+ new_os_rel = get_os_codename_install_source(new_src)
4701+
4702+ log('Performing OpenStack upgrade to %s.' % (new_os_rel))
4703+
4704+ configure_installation_source(new_src)
4705+ dpkg_opts = [
4706+ '--option', 'Dpkg::Options::=--force-confnew',
4707+ '--option', 'Dpkg::Options::=--force-confdef',
4708+ ]
4709+ apt_update(fatal=True)
4710+ apt_install(packages=get_early_packages(),
4711+ options=dpkg_opts,
4712+ fatal=True)
4713+ apt_install(packages=get_packages(),
4714+ options=dpkg_opts,
4715+ fatal=True)
4716+
4717+ # set CONFIGS to load templates from new release
4718+ configs.set_release(openstack_release=new_os_rel)
4719+
4720+
4721+def configure_ovs():
4722+ if not service_running('openvswitch-switch'):
4723+ full_restart()
4724+ if config('plugin') == OVS:
4725+ add_bridge(INT_BRIDGE)
4726+ add_bridge(EXT_BRIDGE)
4727+ ext_port = config('ext-port')
4728+ if ext_port:
4729+ add_bridge_port(EXT_BRIDGE, ext_port)
4730+ if config('plugin') == NVP:
4731+ add_bridge(INT_BRIDGE)
4732
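quantum_utils.py now derives `restart_map()` from the nested `CONFIG_FILES[name][plugin]` table instead of maintaining a separate hand-written RESTART_MAP. A minimal sketch of that derivation, using a small stand-in table of the same shape (not the charm's full data):

```python
# Stand-in table shaped like CONFIG_FILES[name][plugin]: each config file
# maps to the contexts that render it and the services that must restart
# when the rendered file changes.
CONFIG_FILES = {
    'neutron': {
        'ovs': {
            '/etc/neutron/neutron.conf': {
                'hook_contexts': ['amqp', 'gateway'],
                'services': ['neutron-l3-agent', 'neutron-dhcp-agent'],
            },
            '/etc/init/ext-port.conf': {
                'hook_contexts': ['ext-port'],
                'services': [],  # upstart job: nothing to restart
            },
        },
    },
}


def restart_map(name='neutron', plugin='ovs'):
    """Map each config file to the services restarted when it changes.

    Files with an empty service list are dropped, mirroring the charm's
    restart_map(): restart_on_change() only acts on real services.
    """
    _map = {}
    for conf, ctxt in CONFIG_FILES[name][plugin].items():
        if ctxt['services']:
            _map[conf] = list(ctxt['services'])
    return _map
```

Keeping the file-to-context and file-to-service mappings in one structure means `register_configs()` and `restart_map()` can never drift apart, which was possible with the old parallel dictionaries.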
4733=== modified symlink 'hooks/shared-db-relation-changed'
4734=== target changed u'hooks.py' => u'quantum_hooks.py'
4735=== modified symlink 'hooks/shared-db-relation-joined'
4736=== target changed u'hooks.py' => u'quantum_hooks.py'
4737=== modified symlink 'hooks/start'
4738=== target changed u'hooks.py' => u'quantum_hooks.py'
4739=== modified symlink 'hooks/stop'
4740=== target changed u'hooks.py' => u'quantum_hooks.py'
4741=== modified symlink 'hooks/upgrade-charm'
4742=== target changed u'hooks.py' => u'quantum_hooks.py'
4743=== modified file 'metadata.yaml'
4744--- metadata.yaml 2013-03-20 16:08:54 +0000
4745+++ metadata.yaml 2013-10-15 01:35:59 +0000
4746@@ -1,18 +1,18 @@
4747 name: quantum-gateway
4748-summary: Virtual Networking for OpenStack - Quantum Gateway
4749+summary: Virtual Networking for OpenStack - Neutron Gateway
4750 maintainer: James Page <james.page@ubuntu.com>
4751 description: |
4752- Quantum is a virtual network service for Openstack, and a part of
4753+  Neutron is a virtual network service for OpenStack, and a part of
4754 Netstack. Just like OpenStack Nova provides an API to dynamically
4755- request and configure virtual servers, Quantum provides an API to
4756+ request and configure virtual servers, Neutron provides an API to
4757 dynamically request and configure virtual networks. These networks
4758 connect "interfaces" from other OpenStack services (e.g., virtual NICs
4759- from Nova VMs). The Quantum API supports extensions to provide
4760+ from Nova VMs). The Neutron API supports extensions to provide
4761 advanced network capabilities (e.g., QoS, ACLs, network monitoring,
4762 etc.)
4763 .
4764- This charm provides central Quantum networking services as part
4765- of a Quantum based Openstack deployment
4766+ This charm provides central Neutron networking services as part
4767+  of a Neutron based OpenStack deployment
4768 provides:
4769 quantum-network-service:
4770 interface: quantum
4771@@ -23,4 +23,4 @@
4772 interface: rabbitmq
4773 peers:
4774 cluster:
4775- interface: quantum-gateway-ha
4776\ No newline at end of file
4777+ interface: quantum-gateway-ha
4778
4779=== added file 'setup.cfg'
4780--- setup.cfg 1970-01-01 00:00:00 +0000
4781+++ setup.cfg 2013-10-15 01:35:59 +0000
4782@@ -0,0 +1,5 @@
4783+[nosetests]
4784+verbosity=2
4785+with-coverage=1
4786+cover-erase=1
4787+cover-package=hooks
4788
4789=== removed file 'templates/evacuate_unit.py'
4790--- templates/evacuate_unit.py 2013-04-12 16:27:28 +0000
4791+++ templates/evacuate_unit.py 1970-01-01 00:00:00 +0000
4792@@ -1,70 +0,0 @@
4793-#!/usr/bin/python
4794-
4795-import subprocess
4796-
4797-
4798-def log(priority, message):
4799- print "{}: {}".format(priority, message)
4800-
4801-DHCP_AGENT = "DHCP Agent"
4802-L3_AGENT = "L3 Agent"
4803-
4804-
4805-def evacuate_unit(unit):
4806- ''' Use agent scheduler API to detect down agents and re-schedule '''
4807- from quantumclient.v2_0 import client
4808- # TODO: Fixup for https keystone
4809- auth_url = 'http://{{ keystone_host }}:{{ auth_port }}/v2.0'
4810- quantum = client.Client(username='{{ service_username }}',
4811- password='{{ service_password }}',
4812- tenant_name='{{ service_tenant }}',
4813- auth_url=auth_url,
4814- region_name='{{ region }}')
4815-
4816- agents = quantum.list_agents(agent_type=DHCP_AGENT)
4817- dhcp_agents = []
4818- l3_agents = []
4819- networks = {}
4820- for agent in agents['agents']:
4821- if agent['alive'] and agent['host'] != unit:
4822- dhcp_agents.append(agent['id'])
4823- elif agent['host'] == unit:
4824- for network in \
4825- quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
4826- networks[network['id']] = agent['id']
4827-
4828- agents = quantum.list_agents(agent_type=L3_AGENT)
4829- routers = {}
4830- for agent in agents['agents']:
4831- if agent['alive'] and agent['host'] != unit:
4832- l3_agents.append(agent['id'])
4833- elif agent['host'] == unit:
4834- for router in \
4835- quantum.list_routers_on_l3_agent(agent['id'])['routers']:
4836- routers[router['id']] = agent['id']
4837-
4838- index = 0
4839- for router_id in routers:
4840- agent = index % len(l3_agents)
4841- log('INFO',
4842- 'Moving router %s from %s to %s' % \
4843- (router_id, routers[router_id], l3_agents[agent]))
4844- quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
4845- router_id=router_id)
4846- quantum.add_router_to_l3_agent(l3_agent=l3_agents[agent],
4847- body={'router_id': router_id})
4848- index += 1
4849-
4850- index = 0
4851- for network_id in networks:
4852- agent = index % len(dhcp_agents)
4853- log('INFO',
4854- 'Moving network %s from %s to %s' % \
4855- (network_id, networks[network_id], dhcp_agents[agent]))
4856- quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
4857- network_id=network_id)
4858- quantum.add_network_to_dhcp_agent(dhcp_agent=dhcp_agents[agent],
4859- body={'network_id': network_id})
4860- index += 1
4861-
4862-evacuate_unit(subprocess.check_output(['hostname', '-f']).strip())
4863
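The removed evacuate_unit.py template and the new `reassign_agent_resources()` share one technique: collect the networks/routers held by dead (or evacuating) agents, then deal them out to the live agents round-robin via a modulo index. A self-contained sketch of just that distribution step, with hypothetical ids and no API calls:

```python
def round_robin_assign(orphaned, live_agents):
    """Distribute orphaned resource ids across live agents round-robin.

    orphaned: dict of resource_id -> old (dead) agent id.
    Returns (resource_id, old_agent, new_agent) moves, mirroring the
    remove-from/add-to agent calls the charm issues per resource.
    """
    moves = []
    for index, (resource_id, old_agent) in enumerate(sorted(orphaned.items())):
        new_agent = live_agents[index % len(live_agents)]
        moves.append((resource_id, old_agent, new_agent))
    return moves


# Three routers stranded on a dead agent, two healthy L3 agents remaining.
orphaned = {'router-1': 'agent-down',
            'router-2': 'agent-down',
            'router-3': 'agent-down'}
moves = round_robin_assign(orphaned, ['agent-a', 'agent-b'])
```

The modulo walk is what spreads load evenly: with two live agents the third router wraps back to the first agent, just as `index % len(l3_agents)` does in the diff above.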
4864=== added directory 'templates/folsom'
4865=== renamed file 'templates/dhcp_agent.ini' => 'templates/folsom/dhcp_agent.ini'
4866--- templates/dhcp_agent.ini 2012-11-05 11:59:27 +0000
4867+++ templates/folsom/dhcp_agent.ini 2013-10-15 01:35:59 +0000
4868@@ -3,3 +3,8 @@
4869 interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
4870 dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
4871 root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
4872+{% if plugin == 'nvp' %}
4873+ovs_use_veth = True
4874+enable_metadata_network = True
4875+enable_isolated_metadata = True
4876+{% endif %}
4877
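The `{% if plugin == 'nvp' %}` block above is a Jinja2 conditional; the charm's OSConfigRenderer feeds such templates with dicts produced by contexts like QuantumGatewayContext. A minimal rendering sketch using jinja2 directly, outside the charm's templating machinery (template text abbreviated for illustration):

```python
from jinja2 import Template

# Abbreviated stand-in for templates/folsom/dhcp_agent.ini.
DHCP_AGENT_TMPL = """\
[DEFAULT]
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
{% if plugin == 'nvp' %}
ovs_use_veth = True
enable_metadata_network = True
enable_isolated_metadata = True
{% endif %}
"""


def render_dhcp_agent(plugin):
    # In the charm, 'plugin' would come from config('plugin') via a context.
    return Template(DHCP_AGENT_TMPL).render(plugin=plugin)
```

With `plugin='nvp'` the metadata-network options are emitted; for the OVS plugin the block collapses and the agent keeps its defaults, which is exactly how one template file serves both plugins.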
4878=== renamed file 'templates/l3_agent.ini' => 'templates/folsom/l3_agent.ini'
4879--- templates/l3_agent.ini 2013-01-22 18:36:19 +0000
4880+++ templates/folsom/l3_agent.ini 2013-10-15 01:35:59 +0000
4881@@ -1,6 +1,6 @@
4882 [DEFAULT]
4883 interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
4884-auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
4885+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
4886 auth_region = {{ region }}
4887 admin_tenant_name = {{ service_tenant }}
4888 admin_user = {{ service_username }}
4889
4890=== renamed file 'templates/metadata_agent.ini' => 'templates/folsom/metadata_agent.ini'
4891--- templates/metadata_agent.ini 2013-01-22 18:36:19 +0000
4892+++ templates/folsom/metadata_agent.ini 2013-10-15 01:35:59 +0000
4893@@ -1,6 +1,6 @@
4894 [DEFAULT]
4895 debug = True
4896-auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
4897+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
4898 auth_region = {{ region }}
4899 admin_tenant_name = {{ service_tenant }}
4900 admin_user = {{ service_username }}
4901
4902=== renamed file 'templates/nova.conf' => 'templates/folsom/nova.conf'
4903--- templates/nova.conf 2013-03-11 12:25:14 +0000
4904+++ templates/folsom/nova.conf 2013-10-15 01:35:59 +0000
4905@@ -7,14 +7,14 @@
4906 api_paste_config=/etc/nova/api-paste.ini
4907 enabled_apis=metadata
4908 multi_host=True
4909-sql_connection=mysql://{{ user }}:{{ password }}@{{ host }}/{{ db }}
4910+sql_connection=mysql://{{ nova_user }}:{{ nova_password }}@{{ database_host }}/{{ nova_db }}
4911 quantum_metadata_proxy_shared_secret={{ shared_secret }}
4912 service_quantum_metadata_proxy=True
4913 # Access to message bus
4914-rabbit_userid={{ rabbit_userid }}
4915-rabbit_virtual_host={{ rabbit_virtual_host }}
4916-rabbit_host={{ rabbit_host }}
4917-rabbit_password={{ rabbit_password }}
4918+rabbit_userid={{ rabbitmq_user }}
4919+rabbit_virtual_host={{ rabbitmq_virtual_host }}
4920+rabbit_host={{ rabbitmq_host }}
4921+rabbit_password={{ rabbitmq_password }}
4922 # Access to quantum API services
4923 network_api_class=nova.network.quantumv2.api.API
4924 quantum_auth_strategy=keystone
4925@@ -22,4 +22,4 @@
4926 quantum_admin_tenant_name={{ service_tenant }}
4927 quantum_admin_username={{ service_username }}
4928 quantum_admin_password={{ service_password }}
4929-quantum_admin_auth_url=http://{{ keystone_host }}:{{ service_port }}/v2.0
4930+quantum_admin_auth_url={{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
4931
4932=== renamed file 'templates/ovs_quantum_plugin.ini' => 'templates/folsom/ovs_quantum_plugin.ini'
4933--- templates/ovs_quantum_plugin.ini 2013-03-11 12:55:52 +0000
4934+++ templates/folsom/ovs_quantum_plugin.ini 2013-10-15 01:35:59 +0000
4935@@ -1,5 +1,5 @@
4936 [DATABASE]
4937-sql_connection = mysql://{{ user }}:{{ password }}@{{ host }}/{{ db }}?charset=utf8
4938+sql_connection = mysql://{{ quantum_user }}:{{ quantum_password }}@{{ database_host }}/{{ quantum_db }}?charset=utf8
4939 reconnect_interval = 2
4940 [OVS]
4941 local_ip = {{ local_ip }}
4942
4943=== renamed file 'templates/quantum.conf' => 'templates/folsom/quantum.conf'
4944--- templates/quantum.conf 2013-03-07 09:11:27 +0000
4945+++ templates/folsom/quantum.conf 2013-10-15 01:35:59 +0000
4946@@ -1,9 +1,9 @@
4947 [DEFAULT]
4948 verbose = True
4949-rabbit_userid = {{ rabbit_userid }}
4950-rabbit_virtual_host = {{ rabbit_virtual_host }}
4951-rabbit_host = {{ rabbit_host }}
4952-rabbit_password = {{ rabbit_password }}
4953+rabbit_userid = {{ rabbitmq_user }}
4954+rabbit_virtual_host = {{ rabbitmq_virtual_host }}
4955+rabbit_host = {{ rabbitmq_host }}
4956+rabbit_password = {{ rabbitmq_password }}
4957 debug = True
4958 bind_host = 0.0.0.0
4959 bind_port = 9696
4960
4961=== added directory 'templates/havana'
4962=== added file 'templates/havana/dhcp_agent.ini'
4963--- templates/havana/dhcp_agent.ini 1970-01-01 00:00:00 +0000
4964+++ templates/havana/dhcp_agent.ini 2013-10-15 01:35:59 +0000
4965@@ -0,0 +1,10 @@
4966+[DEFAULT]
4967+state_path = /var/lib/neutron
4968+interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
4969+dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
4970+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
4971+ovs_use_veth = True
4972+{% if plugin == 'nvp' %}
4973+enable_metadata_network = True
4974+enable_isolated_metadata = True
4975+{% endif %}
4976
4977=== added file 'templates/havana/l3_agent.ini'
4978--- templates/havana/l3_agent.ini 1970-01-01 00:00:00 +0000
4979+++ templates/havana/l3_agent.ini 2013-10-15 01:35:59 +0000
4980@@ -0,0 +1,9 @@
4981+[DEFAULT]
4982+interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
4983+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
4984+auth_region = {{ region }}
4985+admin_tenant_name = {{ service_tenant }}
4986+admin_user = {{ service_username }}
4987+admin_password = {{ service_password }}
4988+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
4989+ovs_use_veth = True
4990
4991=== added file 'templates/havana/metadata_agent.ini'
4992--- templates/havana/metadata_agent.ini 1970-01-01 00:00:00 +0000
4993+++ templates/havana/metadata_agent.ini 2013-10-15 01:35:59 +0000
4994@@ -0,0 +1,17 @@
4995+[DEFAULT]
4996+debug = True
4997+auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
4998+auth_region = {{ region }}
4999+admin_tenant_name = {{ service_tenant }}
5000+admin_user = {{ service_username }}
The diff has been truncated for viewing.
