Merge lp:~openstack-charmers/charms/precise/glance/python-redux into lp:~charmers/charms/precise/glance/trunk

Proposed by Adam Gandelman
Status: Merged
Merged at revision: 37
Proposed branch: lp:~openstack-charmers/charms/precise/glance/python-redux
Merge into: lp:~charmers/charms/precise/glance/trunk
Diff against target: 6608 lines (+4868/-1413)
47 files modified
.coveragerc (+6/-0)
Makefile (+11/-0)
README.md (+89/-0)
charm-helpers.yaml (+9/-0)
config.yaml (+12/-2)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+359/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+241/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/glance-common (+0/-133)
hooks/glance-relations (+0/-464)
hooks/glance_contexts.py (+89/-0)
hooks/glance_relations.py (+320/-0)
hooks/glance_utils.py (+193/-0)
hooks/lib/openstack-common (+0/-813)
metadata.yaml (+2/-0)
revision (+1/-1)
templates/ceph.conf (+12/-0)
templates/essex/glance-api-paste.ini (+51/-0)
templates/essex/glance-api.conf (+86/-0)
templates/essex/glance-registry-paste.ini (+28/-0)
templates/folsom/glance-api-paste.ini (+68/-0)
templates/folsom/glance-api.conf (+94/-0)
templates/folsom/glance-registry-paste.ini (+28/-0)
templates/glance-registry.conf (+19/-0)
templates/grizzly/glance-api-paste.ini (+68/-0)
templates/grizzly/glance-registry-paste.ini (+30/-0)
templates/haproxy.cfg (+37/-0)
templates/havana/glance-api-paste.ini (+71/-0)
templates/openstack_https_frontend (+23/-0)
unit_tests/__init__.py (+3/-0)
unit_tests/test_glance_relations.py (+401/-0)
unit_tests/test_utils.py (+118/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/glance/python-redux
Reviewer: charmers
Status: Pending
Review via email: mp+191082@code.launchpad.net

Description of the change

Update of all Havana / Saucy / python-redux work:

* Full python rewrite using new OpenStack charm-helpers.

* Test coverage

* Havana support
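
The rewrite replaces the monolithic hooks/glance-relations shell script with
hooks/glance_relations.py; the hook symlinks in the diff below now all point at
that module. As a rough sketch only (the actual hook bodies live in
glance_relations.py and are not reproduced here), the dispatch pattern enabled
by charm-helpers looks like this, assuming the Hooks helper from
charmhelpers.core.hookenv:

    #!/usr/bin/python
    # Sketch of charm-helpers hook dispatch; illustrative, not the merged code.
    import sys

    from charmhelpers.core.hookenv import Hooks, UnregisteredHookError, log

    hooks = Hooks()


    @hooks.hook('install')
    def install():
        # e.g. configure the apt source and install the glance packages.
        log('install hook (sketch)')


    @hooks.hook('config-changed')
    def config_changed():
        # e.g. re-render registered config templates when charm config changes.
        log('config-changed hook (sketch)')


    if __name__ == '__main__':
        try:
            # Each hook symlink invokes this module; dispatch on the hook name.
            hooks.execute(sys.argv)
        except UnregisteredHookError as e:
            log('Unknown hook {} - skipping.'.format(e))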


Preview Diff

=== added file '.coveragerc'
--- .coveragerc 1970-01-01 00:00:00 +0000
+++ .coveragerc 2013-10-15 01:35:02 +0000
@@ -0,0 +1,6 @@
1[report]
2# Regexes for lines to exclude from consideration
3exclude_lines =
4 if __name__ == .__main__.:
5include=
6 hooks/glance_*
07
=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2013-10-15 01:35:02 +0000
@@ -0,0 +1,11 @@
1#!/usr/bin/make
2
3lint:
4 @flake8 --exclude hooks/charmhelpers hooks
5 @charm proof
6
7sync:
8 @charm-helper-sync -c charm-helpers.yaml
9
10test:
11 @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
012
=== added file 'README.md'
--- README.md 1970-01-01 00:00:00 +0000
+++ README.md 2013-10-15 01:35:02 +0000
@@ -0,0 +1,89 @@
1Overview
2--------
3
4This charm provides the Glance image service for OpenStack. It is intended to
5be used alongside the other OpenStack components, starting with the Essex
6release in Ubuntu 12.04.
7
8Usage
9-----
10
11Glance may be deployed in a number of ways. This charm focuses on 3 main
12configurations. All require the existence of the other core OpenStack
13services deployed via Juju charms, specifically: mysql, keystone and
14nova-cloud-controller. The following assumes these services have already
15been deployed.
16
17Local Storage
18=============
19
20In this configuration, Glance uses the local storage available on the server
21to store image data:
22
23 juju deploy glance
24 juju add-relation glance keystone
25 juju add-relation glance mysql
26 juju add-relation glance nova-cloud-controller
27
28Swift backed storage
29====================
30
31Glance can also use Swift Object storage for image storage. Swift is often
32deployed as part of an OpenStack cloud and provides increased resilience and
33scale when compared to using local disk storage. This configuration assumes
34that you have already deployed Swift using the swift-proxy and swift-storage
35charms:
36
37 juju deploy glance
38 juju add-relation glance keystone
39 juju add-relation glance mysql
40 juju add-relation glance nova-cloud-controller
41 juju add-relation glance swift-proxy
42
43This configuration can be used to support Glance in HA/Scale-out deployments.
44
45Ceph backed storage
46===================
47
48In this configuration, Glance uses Ceph based object storage to provide
49scalable, resilient storage of images. This configuration assumes that you
50have already deployed Ceph using the ceph charm:
51
52 juju deploy glance
53 juju add-relation glance keystone
54 juju add-relation glance mysql
55 juju add-relation glance nova-cloud-controller
56 juju add-relation glance ceph
57
58This configuration can also be used to support Glance in HA/Scale-out
59deployments.
60
61Glance HA/Scale-out
62===================
63
64The Glance charm can also be used in a HA/scale-out configuration using
65the hacluster charm:
66
67 juju deploy -n 3 glance
68 juju deploy hacluster haglance
69 juju set glance vip=<virtual IP address to access glance over>
70 juju add-relation glance haglance
71 juju add-relation glance mysql
72 juju add-relation glance keystone
73 juju add-relation glance nova-cloud-controller
74 juju add-relation glance ceph|swift-proxy
75
76In this configuration, 3 service units host the Glance image service;
77API requests are load balanced across all 3 service units via the
78configured virtual IP address (which is also registered into Keystone
79as the endpoint for Glance).
80
81Note that Glance in this configuration must be used with either Ceph or
82Swift providing backing image storage.
83
84Contact Information
85-------------------
86
87Author: Adam Gandelman <adamg@canonical.com>
88Report bugs at: http://bugs.launchpad.net/charms
89Location: http://jujucharms.com
090
=== added file 'charm-helpers.yaml'
--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers.yaml 2013-10-15 01:35:02 +0000
@@ -0,0 +1,9 @@
1branch: lp:charm-helpers
2destination: hooks/charmhelpers
3include:
4 - core
5 - fetch
6 - contrib.openstack
7 - contrib.hahelpers
8 - contrib.storage.linux.ceph
9 - payload.execd
010
=== modified file 'config.yaml'
--- config.yaml 2013-09-18 09:12:13 +0000
+++ config.yaml 2013-10-15 01:35:02 +0000
@@ -14,11 +14,11 @@
       Note that updating this setting to a source that is known to
       provide a later version of OpenStack will trigger a software
       upgrade.
-  db-user:
+  database-user:
     default: glance
     type: string
     description: Database username
-  glance-db:
+  database:
     default: glance
     type: string
     description: Glance database name.
@@ -26,6 +26,16 @@
     default: RegionOne
     type: string
     description: OpenStack Region
+  ceph-osd-replication-count:
+    default: 2
+    type: int
+    description: |
+      This value dictates the number of replicas ceph must make of any
+      object it stores within the images rbd pool. Of course, this only
+      applies if using Ceph as a backend store. Note that once the images
+      rbd pool has been created, changing this value will not have any
+      effect (although it can be changed in ceph by manually configuring
+      your ceph cluster).
   # HA configuration settings
   vip:
     type: string
=== added file 'hooks/__init__.py'
=== added symlink 'hooks/ceph-relation-broken'
=== target is u'glance_relations.py'
=== modified symlink 'hooks/ceph-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/ceph-relation-joined'
=== target changed u'glance-relations' => u'glance_relations.py'
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,58 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import subprocess
12
13from charmhelpers.core.hookenv import (
14 config as config_get,
15 relation_get,
16 relation_ids,
17 related_units as relation_list,
18 log,
19 INFO,
20)
21
22
23def get_cert():
24 cert = config_get('ssl_cert')
25 key = config_get('ssl_key')
26 if not (cert and key):
27 log("Inspecting identity-service relations for SSL certificate.",
28 level=INFO)
29 cert = key = None
30 for r_id in relation_ids('identity-service'):
31 for unit in relation_list(r_id):
32 if not cert:
33 cert = relation_get('ssl_cert',
34 rid=r_id, unit=unit)
35 if not key:
36 key = relation_get('ssl_key',
37 rid=r_id, unit=unit)
38 return (cert, key)
39
40
41def get_ca_cert():
42 ca_cert = None
43 log("Inspecting identity-service relations for CA SSL certificate.",
44 level=INFO)
45 for r_id in relation_ids('identity-service'):
46 for unit in relation_list(r_id):
47 if not ca_cert:
48 ca_cert = relation_get('ca_cert',
49 rid=r_id, unit=unit)
50 return ca_cert
51
52
53def install_ca_cert(ca_cert):
54 if ca_cert:
55 with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
56 'w') as crt:
57 crt.write(ca_cert)
58 subprocess.check_call(['update-ca-certificates', '--fresh'])
059
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,183 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# James Page <james.page@ubuntu.com>
6# Adam Gandelman <adamg@ubuntu.com>
7#
8
9import subprocess
10import os
11
12from socket import gethostname as get_unit_hostname
13
14from charmhelpers.core.hookenv import (
15 log,
16 relation_ids,
17 related_units as relation_list,
18 relation_get,
19 config as config_get,
20 INFO,
21 ERROR,
22 unit_get,
23)
24
25
26class HAIncompleteConfig(Exception):
27 pass
28
29
30def is_clustered():
31 for r_id in (relation_ids('ha') or []):
32 for unit in (relation_list(r_id) or []):
33 clustered = relation_get('clustered',
34 rid=r_id,
35 unit=unit)
36 if clustered:
37 return True
38 return False
39
40
41def is_leader(resource):
42 cmd = [
43 "crm", "resource",
44 "show", resource
45 ]
46 try:
47 status = subprocess.check_output(cmd)
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if get_unit_hostname() in status:
52 return True
53 else:
54 return False
55
56
57def peer_units():
58 peers = []
59 for r_id in (relation_ids('cluster') or []):
60 for unit in (relation_list(r_id) or []):
61 peers.append(unit)
62 return peers
63
64
65def oldest_peer(peers):
66 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
67 for peer in peers:
68 remote_unit_no = int(peer.split('/')[1])
69 if remote_unit_no < local_unit_no:
70 return False
71 return True
72
73
74def eligible_leader(resource):
75 if is_clustered():
76 if not is_leader(resource):
77 log('Deferring action to CRM leader.', level=INFO)
78 return False
79 else:
80 peers = peer_units()
81 if peers and not oldest_peer(peers):
82 log('Deferring action to oldest service unit.', level=INFO)
83 return False
84 return True
85
86
87def https():
88 '''
89 Determines whether enough data has been provided in configuration
90 or relation data to configure HTTPS
91 .
92 returns: boolean
93 '''
94 if config_get('use-https') == "yes":
95 return True
96 if config_get('ssl_cert') and config_get('ssl_key'):
97 return True
98 for r_id in relation_ids('identity-service'):
99 for unit in relation_list(r_id):
100 rel_state = [
101 relation_get('https_keystone', rid=r_id, unit=unit),
102 relation_get('ssl_cert', rid=r_id, unit=unit),
103 relation_get('ssl_key', rid=r_id, unit=unit),
104 relation_get('ca_cert', rid=r_id, unit=unit),
105 ]
106 # NOTE: works around (LP: #1203241)
107 if (None not in rel_state) and ('' not in rel_state):
108 return True
109 return False
110
111
112def determine_api_port(public_port):
113 '''
114 Determine correct API server listening port based on
115 existence of HTTPS reverse proxy and/or haproxy.
116
117 public_port: int: standard public port for given service
118
119 returns: int: the correct listening port for the API service
120 '''
121 i = 0
122 if len(peer_units()) > 0 or is_clustered():
123 i += 1
124 if https():
125 i += 1
126 return public_port - (i * 10)
127
128
129def determine_haproxy_port(public_port):
130 '''
131 Description: Determine correct proxy listening port based on public IP +
132 existence of HTTPS reverse proxy.
133
134 public_port: int: standard public port for given service
135
136 returns: int: the correct listening port for the HAProxy service
137 '''
138 i = 0
139 if https():
140 i += 1
141 return public_port - (i * 10)
142
143
144def get_hacluster_config():
145 '''
146 Obtains all relevant configuration from charm configuration required
147 for initiating a relation to hacluster:
148
149 ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
150
151 returns: dict: A dict containing settings keyed by setting name.
152 raises: HAIncompleteConfig if settings are missing.
153 '''
154 settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
155 conf = {}
156 for setting in settings:
157 conf[setting] = config_get(setting)
158 missing = []
159 [missing.append(s) for s, v in conf.iteritems() if v is None]
160 if missing:
161 log('Insufficient config data to configure hacluster.', level=ERROR)
162 raise HAIncompleteConfig
163 return conf
164
165
166def canonical_url(configs, vip_setting='vip'):
167 '''
168 Returns the correct HTTP URL to this host given the state of HTTPS
169 configuration and hacluster.
170
171 :configs : OSTemplateRenderer: A config tempating object to inspect for
172 a complete https context.
173 :vip_setting: str: Setting in charm config that specifies
174 VIP address.
175 '''
176 scheme = 'http'
177 if 'https' in configs.complete_contexts():
178 scheme = 'https'
179 if is_clustered():
180 addr = config_get(vip_setting)
181 else:
182 addr = unit_get('private-address')
183 return '%s://%s' % (scheme, addr)
0184
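
Illustrative note: the port helpers above shift a service's well-known public
port down in steps of 10 so that haproxy and the Apache HTTPS frontend can sit
in front of the actual API listener on the same host. For glance's public API
port 9292, a clustered unit with HTTPS enabled works out as follows (sketch of
the arithmetic only, not charm code):

    # Illustrative only: how determine_haproxy_port/determine_api_port offset
    # glance's public port (9292) when peers exist and HTTPS is configured.
    public_port = 9292

    # determine_haproxy_port: one -10 step because https() is True.
    haproxy_port = public_port - 10          # 9282

    # determine_api_port: -10 for haproxy (peers/clustered) and -10 for HTTPS.
    api_port = public_port - 10 - 10         # 9272

    # Traffic flow: Apache SSL frontend on 9292 -> haproxy on 9282 ->
    # glance-api processes listening on 9272.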
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,522 @@
1import json
2import os
3
4from base64 import b64decode
5
6from subprocess import (
7 check_call
8)
9
10
11from charmhelpers.fetch import (
12 apt_install,
13 filter_installed_packages,
14)
15
16from charmhelpers.core.hookenv import (
17 config,
18 local_unit,
19 log,
20 relation_get,
21 relation_ids,
22 related_units,
23 unit_get,
24 unit_private_ip,
25 ERROR,
26 WARNING,
27)
28
29from charmhelpers.contrib.hahelpers.cluster import (
30 determine_api_port,
31 determine_haproxy_port,
32 https,
33 is_clustered,
34 peer_units,
35)
36
37from charmhelpers.contrib.hahelpers.apache import (
38 get_cert,
39 get_ca_cert,
40)
41
42from charmhelpers.contrib.openstack.neutron import (
43 neutron_plugin_attribute,
44)
45
46CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
47
48
49class OSContextError(Exception):
50 pass
51
52
53def ensure_packages(packages):
54 '''Install but do not upgrade required plugin packages'''
55 required = filter_installed_packages(packages)
56 if required:
57 apt_install(required, fatal=True)
58
59
60def context_complete(ctxt):
61 _missing = []
62 for k, v in ctxt.iteritems():
63 if v is None or v == '':
64 _missing.append(k)
65 if _missing:
66 log('Missing required data: %s' % ' '.join(_missing), level='INFO')
67 return False
68 return True
69
70
71class OSContextGenerator(object):
72 interfaces = []
73
74 def __call__(self):
75 raise NotImplementedError
76
77
78class SharedDBContext(OSContextGenerator):
79 interfaces = ['shared-db']
80
81 def __init__(self, database=None, user=None, relation_prefix=None):
82 '''
83 Allows inspecting relation for settings prefixed with relation_prefix.
84 This is useful for parsing access for multiple databases returned via
85 the shared-db interface (eg, nova_password, quantum_password)
86 '''
87 self.relation_prefix = relation_prefix
88 self.database = database
89 self.user = user
90
91 def __call__(self):
92 self.database = self.database or config('database')
93 self.user = self.user or config('database-user')
94 if None in [self.database, self.user]:
95 log('Could not generate shared_db context. '
96 'Missing required charm config options. '
97 '(database name and user)')
98 raise OSContextError
99 ctxt = {}
100
101 password_setting = 'password'
102 if self.relation_prefix:
103 password_setting = self.relation_prefix + '_password'
104
105 for rid in relation_ids('shared-db'):
106 for unit in related_units(rid):
107 passwd = relation_get(password_setting, rid=rid, unit=unit)
108 ctxt = {
109 'database_host': relation_get('db_host', rid=rid,
110 unit=unit),
111 'database': self.database,
112 'database_user': self.user,
113 'database_password': passwd,
114 }
115 if context_complete(ctxt):
116 return ctxt
117 return {}
118
119
120class IdentityServiceContext(OSContextGenerator):
121 interfaces = ['identity-service']
122
123 def __call__(self):
124 log('Generating template context for identity-service')
125 ctxt = {}
126
127 for rid in relation_ids('identity-service'):
128 for unit in related_units(rid):
129 ctxt = {
130 'service_port': relation_get('service_port', rid=rid,
131 unit=unit),
132 'service_host': relation_get('service_host', rid=rid,
133 unit=unit),
134 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
135 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
136 'admin_tenant_name': relation_get('service_tenant',
137 rid=rid, unit=unit),
138 'admin_user': relation_get('service_username', rid=rid,
139 unit=unit),
140 'admin_password': relation_get('service_password', rid=rid,
141 unit=unit),
142 # XXX: Hard-coded http.
143 'service_protocol': 'http',
144 'auth_protocol': 'http',
145 }
146 if context_complete(ctxt):
147 return ctxt
148 return {}
149
150
151class AMQPContext(OSContextGenerator):
152 interfaces = ['amqp']
153
154 def __call__(self):
155 log('Generating template context for amqp')
156 conf = config()
157 try:
158 username = conf['rabbit-user']
159 vhost = conf['rabbit-vhost']
160 except KeyError as e:
161 log('Could not generate amqp context. '
162 'Missing required charm config options: %s.' % e)
163 raise OSContextError
164
165 ctxt = {}
166 for rid in relation_ids('amqp'):
167 for unit in related_units(rid):
168 if relation_get('clustered', rid=rid, unit=unit):
169 ctxt['clustered'] = True
170 ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
171 unit=unit)
172 else:
173 ctxt['rabbitmq_host'] = relation_get('private-address',
174 rid=rid, unit=unit)
175 ctxt.update({
176 'rabbitmq_user': username,
177 'rabbitmq_password': relation_get('password', rid=rid,
178 unit=unit),
179 'rabbitmq_virtual_host': vhost,
180 })
181 if context_complete(ctxt):
182 # Sufficient information found = break out!
183 break
184 # Used for active/active rabbitmq >= grizzly
185 ctxt['rabbitmq_hosts'] = []
186 for unit in related_units(rid):
187 ctxt['rabbitmq_hosts'].append(relation_get('private-address',
188 rid=rid, unit=unit))
189 if not context_complete(ctxt):
190 return {}
191 else:
192 return ctxt
193
194
195class CephContext(OSContextGenerator):
196 interfaces = ['ceph']
197
198 def __call__(self):
199 '''This generates context for /etc/ceph/ceph.conf templates'''
200 if not relation_ids('ceph'):
201 return {}
202 log('Generating template context for ceph')
203 mon_hosts = []
204 auth = None
205 key = None
206 for rid in relation_ids('ceph'):
207 for unit in related_units(rid):
208 mon_hosts.append(relation_get('private-address', rid=rid,
209 unit=unit))
210 auth = relation_get('auth', rid=rid, unit=unit)
211 key = relation_get('key', rid=rid, unit=unit)
212
213 ctxt = {
214 'mon_hosts': ' '.join(mon_hosts),
215 'auth': auth,
216 'key': key,
217 }
218
219 if not os.path.isdir('/etc/ceph'):
220 os.mkdir('/etc/ceph')
221
222 if not context_complete(ctxt):
223 return {}
224
225 ensure_packages(['ceph-common'])
226
227 return ctxt
228
229
230class HAProxyContext(OSContextGenerator):
231 interfaces = ['cluster']
232
233 def __call__(self):
234 '''
235 Builds half a context for the haproxy template, which describes
236 all peers to be included in the cluster. Each charm needs to include
237 its own context generator that describes the port mapping.
238 '''
239 if not relation_ids('cluster'):
240 return {}
241
242 cluster_hosts = {}
243 l_unit = local_unit().replace('/', '-')
244 cluster_hosts[l_unit] = unit_get('private-address')
245
246 for rid in relation_ids('cluster'):
247 for unit in related_units(rid):
248 _unit = unit.replace('/', '-')
249 addr = relation_get('private-address', rid=rid, unit=unit)
250 cluster_hosts[_unit] = addr
251
252 ctxt = {
253 'units': cluster_hosts,
254 }
255 if len(cluster_hosts.keys()) > 1:
256 # Enable haproxy when we have enough peers.
257 log('Ensuring haproxy enabled in /etc/default/haproxy.')
258 with open('/etc/default/haproxy', 'w') as out:
259 out.write('ENABLED=1\n')
260 return ctxt
261 log('HAProxy context is incomplete, this unit has no peers.')
262 return {}
263
264
265class ImageServiceContext(OSContextGenerator):
266 interfaces = ['image-service']
267
268 def __call__(self):
269 '''
270 Obtains the glance API server from the image-service relation. Useful
271 in nova and cinder (currently).
272 '''
273 log('Generating template context for image-service.')
274 rids = relation_ids('image-service')
275 if not rids:
276 return {}
277 for rid in rids:
278 for unit in related_units(rid):
279 api_server = relation_get('glance-api-server',
280 rid=rid, unit=unit)
281 if api_server:
282 return {'glance_api_servers': api_server}
283 log('ImageService context is incomplete. '
284 'Missing required relation data.')
285 return {}
286
287
288class ApacheSSLContext(OSContextGenerator):
289 """
290 Generates a context for an apache vhost configuration that configures
291 HTTPS reverse proxying for one or many endpoints. Generated context
292 looks something like:
293 {
294 'namespace': 'cinder',
295 'private_address': 'iscsi.mycinderhost.com',
296 'endpoints': [(8776, 8766), (8777, 8767)]
297 }
298
299 The endpoints list consists of tuples mapping external ports
300 to internal ports.
301 """
302 interfaces = ['https']
303
304 # charms should inherit this context and set external ports
305 # and service namespace accordingly.
306 external_ports = []
307 service_namespace = None
308
309 def enable_modules(self):
310 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
311 check_call(cmd)
312
313 def configure_cert(self):
314 if not os.path.isdir('/etc/apache2/ssl'):
315 os.mkdir('/etc/apache2/ssl')
316 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
317 if not os.path.isdir(ssl_dir):
318 os.mkdir(ssl_dir)
319 cert, key = get_cert()
320 with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
321 cert_out.write(b64decode(cert))
322 with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
323 key_out.write(b64decode(key))
324 ca_cert = get_ca_cert()
325 if ca_cert:
326 with open(CA_CERT_PATH, 'w') as ca_out:
327 ca_out.write(b64decode(ca_cert))
328 check_call(['update-ca-certificates'])
329
330 def __call__(self):
331 if isinstance(self.external_ports, basestring):
332 self.external_ports = [self.external_ports]
333 if (not self.external_ports or not https()):
334 return {}
335
336 self.configure_cert()
337 self.enable_modules()
338
339 ctxt = {
340 'namespace': self.service_namespace,
341 'private_address': unit_get('private-address'),
342 'endpoints': []
343 }
344 for ext_port in self.external_ports:
345 if peer_units() or is_clustered():
346 int_port = determine_haproxy_port(ext_port)
347 else:
348 int_port = determine_api_port(ext_port)
349 portmap = (int(ext_port), int(int_port))
350 ctxt['endpoints'].append(portmap)
351 return ctxt
352
353
354class NeutronContext(object):
355 interfaces = []
356
357 @property
358 def plugin(self):
359 return None
360
361 @property
362 def network_manager(self):
363 return None
364
365 @property
366 def packages(self):
367 return neutron_plugin_attribute(
368 self.plugin, 'packages', self.network_manager)
369
370 @property
371 def neutron_security_groups(self):
372 return None
373
374 def _ensure_packages(self):
375 [ensure_packages(pkgs) for pkgs in self.packages]
376
377 def _save_flag_file(self):
378 if self.network_manager == 'quantum':
379 _file = '/etc/nova/quantum_plugin.conf'
380 else:
381 _file = '/etc/nova/neutron_plugin.conf'
382 with open(_file, 'wb') as out:
383 out.write(self.plugin + '\n')
384
385 def ovs_ctxt(self):
386 driver = neutron_plugin_attribute(self.plugin, 'driver',
387 self.network_manager)
388
389 ovs_ctxt = {
390 'core_plugin': driver,
391 'neutron_plugin': 'ovs',
392 'neutron_security_groups': self.neutron_security_groups,
393 'local_ip': unit_private_ip(),
394 }
395
396 return ovs_ctxt
397
398 def __call__(self):
399 self._ensure_packages()
400
401 if self.network_manager not in ['quantum', 'neutron']:
402 return {}
403
404 if not self.plugin:
405 return {}
406
407 ctxt = {'network_manager': self.network_manager}
408
409 if self.plugin == 'ovs':
410 ctxt.update(self.ovs_ctxt())
411
412 self._save_flag_file()
413 return ctxt
414
415
416class OSConfigFlagContext(OSContextGenerator):
417 '''
418 Responsible for adding user-defined config-flags in charm config
419 to a template context.
420 '''
421 def __call__(self):
422 config_flags = config('config-flags')
423 if not config_flags or config_flags in ['None', '']:
424 return {}
425 config_flags = config_flags.split(',')
426 flags = {}
427 for flag in config_flags:
428 if '=' not in flag:
429 log('Improperly formatted config-flag, expected k=v '
430 'got %s' % flag, level=WARNING)
431 continue
432 k, v = flag.split('=')
433 flags[k.strip()] = v
434 ctxt = {'user_config_flags': flags}
435 return ctxt
436
437
438class SubordinateConfigContext(OSContextGenerator):
439 """
440 Responsible for inspecting relations to subordinates that
441 may be exporting required config via a json blob.
442
443 The subordinate interface allows subordinates to export their
444 configuration requirements to the principal for multiple config
445 files and multiple services. Ie, a subordinate that has interfaces
446 to both glance and nova may export the following yaml blob as json:
447
448 glance:
449 /etc/glance/glance-api.conf:
450 sections:
451 DEFAULT:
452 - [key1, value1]
453 /etc/glance/glance-registry.conf:
454 MYSECTION:
455 - [key2, value2]
456 nova:
457 /etc/nova/nova.conf:
458 sections:
459 DEFAULT:
460 - [key3, value3]
461
462
463 It is then up to the principal charms to subscribe this context to
464 the service+config file it is interested in. Configuration data will
465 be available in the template context, in glance's case, as:
466 ctxt = {
467 ... other context ...
468 'subordinate_config': {
469 'DEFAULT': {
470 'key1': 'value1',
471 },
472 'MYSECTION': {
473 'key2': 'value2',
474 },
475 }
476 }
477
478 """
479 def __init__(self, service, config_file, interface):
480 """
481 :param service : Service name key to query in any subordinate
482 data found
483 :param config_file : Service's config file to query sections
484 :param interface : Subordinate interface to inspect
485 """
486 self.service = service
487 self.config_file = config_file
488 self.interface = interface
489
490 def __call__(self):
491 ctxt = {}
492 for rid in relation_ids(self.interface):
493 for unit in related_units(rid):
494 sub_config = relation_get('subordinate_configuration',
495 rid=rid, unit=unit)
496 if sub_config and sub_config != '':
497 try:
498 sub_config = json.loads(sub_config)
499 except:
500 log('Could not parse JSON from subordinate_config '
501 'setting from %s' % rid, level=ERROR)
502 continue
503
504 if self.service not in sub_config:
505 log('Found subordinate_config on %s but it contained '
506 'nothing for %s service' % (rid, self.service))
507 continue
508
509 sub_config = sub_config[self.service]
510 if self.config_file not in sub_config:
511 log('Found subordinate_config on %s but it contained '
512 'nothing for %s' % (rid, self.config_file))
513 continue
514
515 sub_config = sub_config[self.config_file]
516 for k, v in sub_config.iteritems():
517 ctxt[k] = v
518
519 if not ctxt:
520 ctxt['sections'] = {}
521
522 return ctxt
0523
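
Charms consume these helpers by shipping their own OSContextGenerator
subclasses (this branch adds hooks/glance_contexts.py for that purpose). A
hypothetical generator, not the actual glance_contexts.py contents, follows the
same pattern as the classes above:

    # Hypothetical charm-specific context generator; illustrative only, it is
    # not the contents of hooks/glance_contexts.py, just the pattern it follows.
    from charmhelpers.core.hookenv import config
    from charmhelpers.contrib.openstack.context import OSContextGenerator


    class LoggingContext(OSContextGenerator):
        # No relation data is needed, so no interfaces to mark complete.
        interfaces = []

        def __call__(self):
            # Map charm config options straight into template context keys.
            return {'verbose': config('verbose'), 'debug': config('debug')}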
=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,117 @@
1# Various utilities for dealing with Neutron and the renaming from Quantum.
2
3from subprocess import check_output
4
5from charmhelpers.core.hookenv import (
6 config,
7 log,
8 ERROR,
9)
10
11from charmhelpers.contrib.openstack.utils import os_release
12
13
14def headers_package():
15 """Ensures correct linux-headers for running kernel are installed,
16 for building DKMS package"""
17 kver = check_output(['uname', '-r']).strip()
18 return 'linux-headers-%s' % kver
19
20
21# legacy
22def quantum_plugins():
23 from charmhelpers.contrib.openstack import context
24 return {
25 'ovs': {
26 'config': '/etc/quantum/plugins/openvswitch/'
27 'ovs_quantum_plugin.ini',
28 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
29 'OVSQuantumPluginV2',
30 'contexts': [
31 context.SharedDBContext(user=config('neutron-database-user'),
32 database=config('neutron-database'),
33 relation_prefix='neutron')],
34 'services': ['quantum-plugin-openvswitch-agent'],
35 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
36 ['quantum-plugin-openvswitch-agent']],
37 },
38 'nvp': {
39 'config': '/etc/quantum/plugins/nicira/nvp.ini',
40 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
41 'QuantumPlugin.NvpPluginV2',
42 'services': [],
43 'packages': [],
44 }
45 }
46
47
48def neutron_plugins():
49 from charmhelpers.contrib.openstack import context
50 return {
51 'ovs': {
52 'config': '/etc/neutron/plugins/openvswitch/'
53 'ovs_neutron_plugin.ini',
54 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
55 'OVSNeutronPluginV2',
56 'contexts': [
57 context.SharedDBContext(user=config('neutron-database-user'),
58 database=config('neutron-database'),
59 relation_prefix='neutron')],
60 'services': ['neutron-plugin-openvswitch-agent'],
61 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
62 ['quantum-plugin-openvswitch-agent']],
63 },
64 'nvp': {
65 'config': '/etc/neutron/plugins/nicira/nvp.ini',
66 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
67 'NeutronPlugin.NvpPluginV2',
68 'services': [],
69 'packages': [],
70 }
71 }
72
73
74def neutron_plugin_attribute(plugin, attr, net_manager=None):
75 manager = net_manager or network_manager()
76 if manager == 'quantum':
77 plugins = quantum_plugins()
78 elif manager == 'neutron':
79 plugins = neutron_plugins()
80 else:
81 log('Error: Network manager does not support plugins.')
82 raise Exception
83
84 try:
85 _plugin = plugins[plugin]
86 except KeyError:
87 log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
88 raise Exception
89
90 try:
91 return _plugin[attr]
92 except KeyError:
93 return None
94
95
96def network_manager():
97 '''
98 Deals with the renaming of Quantum to Neutron in H and any situations
99 that require compatibility (eg, deploying H with network-manager=quantum,
100 upgrading from G).
101 '''
102 release = os_release('nova-common')
103 manager = config('network-manager').lower()
104
105 if manager not in ['quantum', 'neutron']:
106 return manager
107
108 if release in ['essex']:
109 # E does not support neutron
110 log('Neutron networking not supported in Essex.', level=ERROR)
111 raise Exception
112 elif release in ['folsom', 'grizzly']:
113 # neutron is named quantum in F and G
114 return 'quantum'
115 else:
116 # ensure accurate naming for all releases post-H
117 return 'neutron'
0118
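
neutron_plugin_attribute() above keys its lookups on the network manager name,
which network_manager() maps to 'quantum' on Folsom/Grizzly and 'neutron' from
Havana onwards. A short usage sketch (only meaningful inside a hook execution
environment, since the plugin dictionaries read charm config):

    # Sketch: resolving per-plugin metadata from the dictionaries above.
    from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute

    # On a Havana deployment the manager is 'neutron', so the OVS plugin
    # config resolves to the /etc/neutron path defined in neutron_plugins().
    conf = neutron_plugin_attribute('ovs', 'config', net_manager='neutron')
    # conf == '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'

    # Unknown attributes return None rather than raising.
    assert neutron_plugin_attribute('ovs', 'no-such-attr', 'neutron') is None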
=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,2 @@
1# dummy __init__.py to fool syncer into thinking this is a syncable python
2# module
03
=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,280 @@
1import os
2
3from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (
6 log,
7 ERROR,
8 INFO
9)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
12
13try:
14 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
15except ImportError:
16 # python-jinja2 may not be installed yet, or we're running unittests.
17 FileSystemLoader = ChoiceLoader = Environment = exceptions = None
18
19
20class OSConfigException(Exception):
21 pass
22
23
24def get_loader(templates_dir, os_release):
25 """
26 Create a jinja2.ChoiceLoader containing template dirs up to
27 and including os_release. If a release's template directory
28 is missing at templates_dir, it will be omitted from the loader.
29 templates_dir is added to the bottom of the search list as a base
30 loading dir.
31
32 A charm may also ship a templates dir with this module
33 and it will be appended to the bottom of the search list, eg:
34 hooks/charmhelpers/contrib/openstack/templates.
35
36 :param templates_dir: str: Base template directory containing release
37 sub-directories.
38 :param os_release : str: OpenStack release codename to construct template
39 loader.
40
41 :returns : jinja2.ChoiceLoader constructed with a list of
42 jinja2.FilesystemLoaders, ordered in descending
43 order by OpenStack release.
44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]
47
48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,
50 level=ERROR)
51 raise OSConfigException
52
53 # the bottom contains templates_dir and possibly a common templates dir
54 # shipped with the helper.
55 loaders = [FileSystemLoader(templates_dir)]
56 helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
57 if os.path.isdir(helper_templates):
58 loaders.append(FileSystemLoader(helper_templates))
59
60 for rel, tmpl_dir in tmpl_dirs:
61 if os.path.isdir(tmpl_dir):
62 loaders.insert(0, FileSystemLoader(tmpl_dir))
63 if rel == os_release:
64 break
65 log('Creating choice loader with dirs: %s' %
66 [l.searchpath for l in loaders], level=INFO)
67 return ChoiceLoader(loaders)
68
69
70class OSConfigTemplate(object):
71 """
72 Associates a config file template with a list of context generators.
73 Responsible for constructing a template context based on those generators.
74 """
75 def __init__(self, config_file, contexts):
76 self.config_file = config_file
77
78 if hasattr(contexts, '__call__'):
79 self.contexts = [contexts]
80 else:
81 self.contexts = contexts
82
83 self._complete_contexts = []
84
85 def context(self):
86 ctxt = {}
87 for context in self.contexts:
88 _ctxt = context()
89 if _ctxt:
90 ctxt.update(_ctxt)
91 # track interfaces for every complete context.
92 [self._complete_contexts.append(interface)
93 for interface in context.interfaces
94 if interface not in self._complete_contexts]
95 return ctxt
96
97 def complete_contexts(self):
98 '''
99 Return a list of interfaces that have satisfied contexts.
100 '''
101 if self._complete_contexts:
102 return self._complete_contexts
103 self.context()
104 return self._complete_contexts
105
106
107class OSConfigRenderer(object):
108 """
109 This class provides a common templating system to be used by OpenStack
110 charms. It is intended to help charms share common code and templates,
111 and ease the burden of managing config templates across multiple OpenStack
112 releases.
113
114 Basic usage:
115 # import some common context generators from charmhelpers
116 from charmhelpers.contrib.openstack import context
117
118 # Create a renderer object for a specific OS release.
119 configs = OSConfigRenderer(templates_dir='/tmp/templates',
120 openstack_release='folsom')
121 # register some config files with context generators.
122 configs.register(config_file='/etc/nova/nova.conf',
123 contexts=[context.SharedDBContext(),
124 context.AMQPContext()])
125 configs.register(config_file='/etc/nova/api-paste.ini',
126 contexts=[context.IdentityServiceContext()])
127 configs.register(config_file='/etc/haproxy/haproxy.conf',
128 contexts=[context.HAProxyContext()])
129 # write out a single config
130 configs.write('/etc/nova/nova.conf')
131 # write out all registered configs
132 configs.write_all()
133
134 Details:
135
136 OpenStack Releases and template loading
137 ---------------------------------------
138 When the object is instantiated, it is associated with a specific OS
139 release. This dictates how the template loader will be constructed.
140
141 The constructed loader attempts to load the template from several places
142 in the following order:
143 - from the most recent OS release-specific template dir (if one exists)
144 - the base templates_dir
145 - a template directory shipped in the charm with this helper file.
146
147
148 For the example above, '/tmp/templates' contains the following structure:
149 /tmp/templates/nova.conf
150 /tmp/templates/api-paste.ini
151 /tmp/templates/grizzly/api-paste.ini
152 /tmp/templates/havana/api-paste.ini
153
154 Since it was registered with the grizzly release, it first searches
155 the grizzly directory for nova.conf, then the templates dir.
156
157 When writing api-paste.ini, it will find the template in the grizzly
158 directory.
159
160 If the object were created with folsom, it would fall back to the
161 base templates dir for its api-paste.ini template.
162
163 This system should help manage changes in config files through
164 openstack releases, allowing charms to fall back to the most recently
165 updated config template for a given release.
166
167 The haproxy.conf, since it is not shipped in the templates dir, will
168 be loaded from the module directory's template directory, eg
169 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
170 us to ship common templates (haproxy, apache) with the helpers.
171
172 Context generators
173 ---------------------------------------
174 Context generators are used to generate template contexts during hook
175 execution. Doing so may require inspecting service relations, charm
176 config, etc. When registered, a config file is associated with a list
177 of generators. When a template is rendered and written, all context
178 generators are called in a chain to generate the context dictionary
179 passed to the jinja2 template. See context.py for more info.
180 """
181 def __init__(self, templates_dir, openstack_release):
182 if not os.path.isdir(templates_dir):
183 log('Could not locate templates dir %s' % templates_dir,
184 level=ERROR)
185 raise OSConfigException
186
187 self.templates_dir = templates_dir
188 self.openstack_release = openstack_release
189 self.templates = {}
190 self._tmpl_env = None
191
192 if None in [Environment, ChoiceLoader, FileSystemLoader]:
193 # if this code is running, the object is created pre-install hook.
194 # jinja2 shouldn't get touched until the module is reloaded on next
195 # hook execution, with proper jinja2 bits successfully imported.
196 apt_install('python-jinja2')
197
198 def register(self, config_file, contexts):
199 """
200 Register a config file with a list of context generators to be called
201 during rendering.
202 """
203 self.templates[config_file] = OSConfigTemplate(config_file=config_file,
204 contexts=contexts)
205 log('Registered config file: %s' % config_file, level=INFO)
206
207 def _get_tmpl_env(self):
208 if not self._tmpl_env:
209 loader = get_loader(self.templates_dir, self.openstack_release)
210 self._tmpl_env = Environment(loader=loader)
211
212 def _get_template(self, template):
213 self._get_tmpl_env()
214 template = self._tmpl_env.get_template(template)
215 log('Loaded template from %s' % template.filename, level=INFO)
216 return template
217
218 def render(self, config_file):
219 if config_file not in self.templates:
220 log('Config not registered: %s' % config_file, level=ERROR)
221 raise OSConfigException
222 ctxt = self.templates[config_file].context()
223
224 _tmpl = os.path.basename(config_file)
225 try:
226 template = self._get_template(_tmpl)
227 except exceptions.TemplateNotFound:
228 # if no template is found with basename, try looking for it
229 # using a munged full path, eg:
230 # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
231 _tmpl = '_'.join(config_file.split('/')[1:])
232 try:
233 template = self._get_template(_tmpl)
234 except exceptions.TemplateNotFound as e:
235 log('Could not load template from %s by %s or %s.' %
236 (self.templates_dir, os.path.basename(config_file), _tmpl),
237 level=ERROR)
238 raise e
239
240 log('Rendering from template: %s' % _tmpl, level=INFO)
241 return template.render(ctxt)
242
243 def write(self, config_file):
244 """
245 Write a single config file, raises if config file is not registered.
246 """
247 if config_file not in self.templates:
248 log('Config not registered: %s' % config_file, level=ERROR)
249 raise OSConfigException
250
251 _out = self.render(config_file)
252
253 with open(config_file, 'wb') as out:
254 out.write(_out)
255
256 log('Wrote template %s.' % config_file, level=INFO)
257
258 def write_all(self):
259 """
260 Write out all registered config files.
261 """
262 [self.write(k) for k in self.templates.iterkeys()]
263
264 def set_release(self, openstack_release):
265 """
266 Resets the template environment and generates a new template loader
267 based on the new openstack release.
268 """
269 self._tmpl_env = None
270 self.openstack_release = openstack_release
271 self._get_tmpl_env()
272
273 def complete_contexts(self):
274 '''
275 Returns a list of context interfaces that yield a complete context.
276 '''
277 interfaces = []
278 [interfaces.extend(i.complete_contexts())
279 for i in self.templates.itervalues()]
280 return interfaces
0281
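
Tying this to the templates added by this branch (templates/ with essex/,
folsom/, grizzly/ and havana/ subdirectories), a glance-flavoured sketch of the
renderer wiring might look like the following; the real registration lives in
hooks/glance_utils.py and may differ:

    # Sketch: registering glance config files with OSConfigRenderer.
    # Paths and context lists are illustrative, not the merged glance_utils.py.
    from charmhelpers.contrib.openstack import context, templating

    configs = templating.OSConfigRenderer(templates_dir='templates/',
                                          openstack_release='havana')
    configs.register('/etc/glance/glance-api.conf',
                     [context.SharedDBContext(),
                      context.IdentityServiceContext(),
                      context.CephContext()])
    configs.register('/etc/glance/glance-registry.conf',
                     [context.SharedDBContext(),
                      context.IdentityServiceContext()])
    # Loads templates/havana/<name> if present, otherwise falls back through
    # earlier release directories down to the base templates/ directory.
    configs.write_all()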
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,365 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict
5
6import apt_pkg as apt
7import subprocess
8import os
9import socket
10import sys
11
12from charmhelpers.core.hookenv import (
13 config,
14 log as juju_log,
15 charm_dir,
16)
17
18from charmhelpers.core.host import (
19 lsb_release,
20)
21
22from charmhelpers.fetch import (
23 apt_install,
24)
25
26CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
27CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
28
29UBUNTU_OPENSTACK_RELEASE = OrderedDict([
30 ('oneiric', 'diablo'),
31 ('precise', 'essex'),
32 ('quantal', 'folsom'),
33 ('raring', 'grizzly'),
34 ('saucy', 'havana'),
35])
36
37
38OPENSTACK_CODENAMES = OrderedDict([
39 ('2011.2', 'diablo'),
40 ('2012.1', 'essex'),
41 ('2012.2', 'folsom'),
42 ('2013.1', 'grizzly'),
43 ('2013.2', 'havana'),
44 ('2014.1', 'icehouse'),
45])
46
47# The ugly duckling
48SWIFT_CODENAMES = OrderedDict([
49 ('1.4.3', 'diablo'),
50 ('1.4.8', 'essex'),
51 ('1.7.4', 'folsom'),
52 ('1.8.0', 'grizzly'),
53 ('1.7.7', 'grizzly'),
54 ('1.7.6', 'grizzly'),
55 ('1.10.0', 'havana'),
56 ('1.9.1', 'havana'),
57 ('1.9.0', 'havana'),
58])
59
60
61def error_out(msg):
62 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
63 sys.exit(1)
64
65
66def get_os_codename_install_source(src):
67 '''Derive OpenStack release codename from a given installation source.'''
68 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
69 rel = ''
70 if src == 'distro':
71 try:
72 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
73 except KeyError:
74 e = 'Could not derive openstack release for '\
75 'this Ubuntu release: %s' % ubuntu_rel
76 error_out(e)
77 return rel
78
79 if src.startswith('cloud:'):
80 ca_rel = src.split(':')[1]
81 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
82 return ca_rel
83
84 # Best guess match based on deb string provided
85 if src.startswith('deb') or src.startswith('ppa'):
86 for k, v in OPENSTACK_CODENAMES.iteritems():
87 if v in src:
88 return v
89
90
91def get_os_version_install_source(src):
92 codename = get_os_codename_install_source(src)
93 return get_os_version_codename(codename)
94
95
96def get_os_codename_version(vers):
97 '''Determine OpenStack codename from version number.'''
98 try:
99 return OPENSTACK_CODENAMES[vers]
100 except KeyError:
101 e = 'Could not determine OpenStack codename for version %s' % vers
102 error_out(e)
103
104
105def get_os_version_codename(codename):
106 '''Determine OpenStack version number from codename.'''
107 for k, v in OPENSTACK_CODENAMES.iteritems():
108 if v == codename:
109 return k
110 e = 'Could not derive OpenStack version for '\
111 'codename: %s' % codename
112 error_out(e)
113
114
115def get_os_codename_package(package, fatal=True):
116 '''Derive OpenStack release codename from an installed package.'''
117 apt.init()
118 cache = apt.Cache()
119
120 try:
121 pkg = cache[package]
122 except:
123 if not fatal:
124 return None
125 # the package is unknown to the current apt cache.
126 e = 'Could not determine version of package with no installation '\
127 'candidate: %s' % package
128 error_out(e)
129
130 if not pkg.current_ver:
131 if not fatal:
132 return None
133 # package is known, but no version is currently installed.
134 e = 'Could not determine version of uninstalled package: %s' % package
135 error_out(e)
136
137 vers = apt.upstream_version(pkg.current_ver.ver_str)
138
139 try:
140 if 'swift' in pkg.name:
141 swift_vers = vers[:5]
142 if swift_vers not in SWIFT_CODENAMES:
143 # Deal with 1.10.0 upward
144 swift_vers = vers[:6]
145 return SWIFT_CODENAMES[swift_vers]
146 else:
147 vers = vers[:6]
148 return OPENSTACK_CODENAMES[vers]
149 except KeyError:
150 e = 'Could not determine OpenStack codename for version %s' % vers
151 error_out(e)
152
153
154def get_os_version_package(pkg, fatal=True):
155 '''Derive OpenStack version number from an installed package.'''
156 codename = get_os_codename_package(pkg, fatal=fatal)
157
158 if not codename:
159 return None
160
161 if 'swift' in pkg:
162 vers_map = SWIFT_CODENAMES
163 else:
164 vers_map = OPENSTACK_CODENAMES
165
166 for version, cname in vers_map.iteritems():
167 if cname == codename:
168 return version
169 #e = "Could not determine OpenStack version for package: %s" % pkg
170 #error_out(e)
171
172
173os_rel = None
174
175
176def os_release(package, base='essex'):
177 '''
178 Returns OpenStack release codename from a cached global.
179 If the codename can not be determined from either an installed package or
180 the installation source, the earliest release supported by the charm should
181 be returned.
182 '''
183 global os_rel
184 if os_rel:
185 return os_rel
186 os_rel = (get_os_codename_package(package, fatal=False) or
187 get_os_codename_install_source(config('openstack-origin')) or
188 base)
189 return os_rel
190
191
192def import_key(keyid):
193 cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
194 "--recv-keys %s" % keyid
195 try:
196 subprocess.check_call(cmd.split(' '))
197 except subprocess.CalledProcessError:
198 error_out("Error importing repo key %s" % keyid)
199
200
201def configure_installation_source(rel):
202 '''Configure apt installation source.'''
203 if rel == 'distro':
204 return
205 elif rel[:4] == "ppa:":
206 src = rel
207 subprocess.check_call(["add-apt-repository", "-y", src])
208 elif rel[:3] == "deb":
209 l = len(rel.split('|'))
210 if l == 2:
211 src, key = rel.split('|')
212 juju_log("Importing PPA key from keyserver for %s" % src)
213 import_key(key)
214 elif l == 1:
215 src = rel
216 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
217 f.write(src)
218 elif rel[:6] == 'cloud:':
219 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
220 rel = rel.split(':')[1]
221 u_rel = rel.split('-')[0]
222 ca_rel = rel.split('-')[1]
223
224 if u_rel != ubuntu_rel:
225 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
226 'version (%s)' % (ca_rel, ubuntu_rel)
227 error_out(e)
228
229 if 'staging' in ca_rel:
230 # staging is just a regular PPA.
231 os_rel = ca_rel.split('/')[0]
232 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
233 cmd = 'add-apt-repository -y %s' % ppa
234 subprocess.check_call(cmd.split(' '))
235 return
236
237 # map charm config options to actual archive pockets.
238 pockets = {
239 'folsom': 'precise-updates/folsom',
240 'folsom/updates': 'precise-updates/folsom',
241 'folsom/proposed': 'precise-proposed/folsom',
242 'grizzly': 'precise-updates/grizzly',
243 'grizzly/updates': 'precise-updates/grizzly',
244 'grizzly/proposed': 'precise-proposed/grizzly',
245 'havana': 'precise-updates/havana',
246 'havana/updates': 'precise-updates/havana',
247 'havana/proposed': 'precise-proposed/havana',
248 }
249
250 try:
251 pocket = pockets[ca_rel]
252 except KeyError:
253 e = 'Invalid Cloud Archive release specified: %s' % rel
254 error_out(e)
255
256 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
257 apt_install('ubuntu-cloud-keyring', fatal=True)
258
259 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
260 f.write(src)
261 else:
262 error_out("Invalid openstack-release specified: %s" % rel)
263
264
265def save_script_rc(script_path="scripts/scriptrc", **env_vars):
266 """
267 Write an rc file in the charm-delivered directory containing
268 exported environment variables provided by env_vars. Any charm scripts run
269 outside the juju hook environment can source this scriptrc to obtain
270 updated config information necessary to perform health checks or
271 service changes.
272 """
273 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
274 if not os.path.exists(os.path.dirname(juju_rc_path)):
275 os.mkdir(os.path.dirname(juju_rc_path))
276 with open(juju_rc_path, 'wb') as rc_script:
277 rc_script.write(
278 "#!/bin/bash\n")
279 [rc_script.write('export %s=%s\n' % (u, p))
280 for u, p in env_vars.iteritems() if u != "script_path"]
281
282
283def openstack_upgrade_available(package):
284 """
285 Determines if an OpenStack upgrade is available from installation
286 source, based on version of installed package.
287
288 :param package: str: Name of installed package.
289
290 :returns: bool: : Returns True if configured installation source offers
291 a newer version of package.
292
293 """
294
295 src = config('openstack-origin')
296 cur_vers = get_os_version_package(package)
297 available_vers = get_os_version_install_source(src)
298 apt.init()
299 return apt.version_compare(available_vers, cur_vers) == 1
300
301
302def is_ip(address):
303 """
304 Returns True if address is a valid IP address.
305 """
306 try:
307 # Test to see if already an IPv4 address
308 socket.inet_aton(address)
309 return True
310 except socket.error:
311 return False
312
313
314def ns_query(address):
315 try:
316 import dns.resolver
317 except ImportError:
318 apt_install('python-dnspython')
319 import dns.resolver
320
321 if isinstance(address, dns.name.Name):
322 rtype = 'PTR'
323 elif isinstance(address, basestring):
324 rtype = 'A'
325
326 answers = dns.resolver.query(address, rtype)
327 if answers:
328 return str(answers[0])
329 return None
330
331
332def get_host_ip(hostname):
333 """
334 Resolves the IP for a given hostname, or returns
335 the input if it is already an IP.
336 """
337 if is_ip(hostname):
338 return hostname
339
340 return ns_query(hostname)
341
342
343def get_hostname(address):
344 """
345 Resolves hostname for given IP, or returns the input
346 if it is already a hostname.
347 """
348 if not is_ip(address):
349 return address
350
351 try:
352 import dns.reversename
353 except ImportError:
354 apt_install('python-dnspython')
355 import dns.reversename
356
357 rev = dns.reversename.from_address(address)
358 result = ns_query(rev)
359 if not result:
360 return None
361
362 # strip trailing .
363 if result.endswith('.'):
364 return result[:-1]
365 return result
0366
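
The codename and version maps above drive both installation-source handling and
upgrade detection. A small sketch of how they resolve, assuming python-apt is
available and, for the cloud: form, a precise host:

    # Sketch: release detection with the helpers above.
    from charmhelpers.contrib.openstack.utils import (
        get_os_codename_install_source,
        get_os_version_codename,
    )

    # Cloud Archive pocket string -> codename (parsed on a precise host).
    codename = get_os_codename_install_source('cloud:precise-havana')  # 'havana'

    # Codename -> version number via OPENSTACK_CODENAMES.
    version = get_os_version_codename('havana')                        # '2013.2'

    # openstack_upgrade_available('glance-common') then compares the version
    # offered by openstack-origin against the installed package's version.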
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,359 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import os
12import shutil
13import json
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 WARNING,
29 ERROR
30)
31
32from charmhelpers.core.host import (
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 service_running,
38 umount,
39)
40
41from charmhelpers.fetch import (
42 apt_install,
43)
44
45KEYRING = '/etc/ceph/ceph.client.{}.keyring'
46KEYFILE = '/etc/ceph/ceph.client.{}.key'
47
48CEPH_CONF = """[global]
49 auth supported = {auth}
50 keyring = {keyring}
51 mon host = {mon_hosts}
52"""
53
54
55def install():
56 ''' Basic Ceph client installation '''
57 ceph_dir = "/etc/ceph"
58 if not os.path.exists(ceph_dir):
59 os.mkdir(ceph_dir)
60 apt_install('ceph-common', fatal=True)
61
62
63def rbd_exists(service, pool, rbd_img):
64 ''' Check to see if a RADOS block device exists '''
65 try:
66 out = check_output(['rbd', 'list', '--id', service,
67 '--pool', pool])
68 except CalledProcessError:
69 return False
70 else:
71 return rbd_img in out
72
73
74def create_rbd_image(service, pool, image, sizemb):
75 ''' Create a new RADOS block device '''
76 cmd = [
77 'rbd',
78 'create',
79 image,
80 '--size',
81 str(sizemb),
82 '--id',
83 service,
84 '--pool',
85 pool
86 ]
87 check_call(cmd)
88
89
90def pool_exists(service, name):
91 ''' Check to see if a RADOS pool already exists '''
92 try:
93 out = check_output(['rados', '--id', service, 'lspools'])
94 except CalledProcessError:
95 return False
96 else:
97 return name in out
98
99
100def get_osds(service):
101 '''
102 Return a list of all Ceph Object Storage Daemons
103 currently in the cluster
104 '''
105 return json.loads(check_output(['ceph', '--id', service,
106 'osd', 'ls', '--format=json']))
107
108
109def create_pool(service, name, replicas=2):
110 ''' Create a new RADOS pool '''
111 if pool_exists(service, name):
112 log("Ceph pool {} already exists, skipping creation".format(name),
113 level=WARNING)
114 return
115 # Calculate the number of placement groups based
116 # on upstream recommended best practices.
117 pgnum = (len(get_osds(service)) * 100 / replicas)
118 cmd = [
119 'ceph', '--id', service,
120 'osd', 'pool', 'create',
121 name, str(pgnum)
122 ]
123 check_call(cmd)
124 cmd = [
125 'ceph', '--id', service,
126 'osd', 'pool', 'set', name,
127 'size', str(replicas)
128 ]
129 check_call(cmd)
130
131
132def delete_pool(service, name):
133 ''' Delete a RADOS pool from ceph '''
134 cmd = [
135 'ceph', '--id', service,
136 'osd', 'pool', 'delete',
137 name, '--yes-i-really-really-mean-it'
138 ]
139 check_call(cmd)
140
141
142def _keyfile_path(service):
143 return KEYFILE.format(service)
144
145
146def _keyring_path(service):
147 return KEYRING.format(service)
148
149
150def create_keyring(service, key):
151 ''' Create a new Ceph keyring containing key'''
152 keyring = _keyring_path(service)
153 if os.path.exists(keyring):
154 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
155 return
156 cmd = [
157 'ceph-authtool',
158 keyring,
159 '--create-keyring',
160 '--name=client.{}'.format(service),
161 '--add-key={}'.format(key)
162 ]
163 check_call(cmd)
164 log('ceph: Created new ring at %s.' % keyring, level=INFO)
165
166
167def create_key_file(service, key):
168 ''' Create a file containing key '''
169 keyfile = _keyfile_path(service)
170 if os.path.exists(keyfile):
171 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
172 return
173 with open(keyfile, 'w') as fd:
174 fd.write(key)
175 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
176
177
178def get_ceph_nodes():
179    ''' Query named relation 'ceph' to determine current nodes '''
180 hosts = []
181 for r_id in relation_ids('ceph'):
182 for unit in related_units(r_id):
183 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
184 return hosts
185
186
187def configure(service, key, auth):
188 ''' Perform basic configuration of Ceph '''
189 create_keyring(service, key)
190 create_key_file(service, key)
191 hosts = get_ceph_nodes()
192 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
193 ceph_conf.write(CEPH_CONF.format(auth=auth,
194 keyring=_keyring_path(service),
195 mon_hosts=",".join(map(str, hosts))))
196 modprobe('rbd')
197
198
199def image_mapped(name):
200 ''' Determine whether a RADOS block device is mapped locally '''
201 try:
202 out = check_output(['rbd', 'showmapped'])
203 except CalledProcessError:
204 return False
205 else:
206 return name in out
207
208
209def map_block_storage(service, pool, image):
210 ''' Map a RADOS block device for local use '''
211 cmd = [
212 'rbd',
213 'map',
214 '{}/{}'.format(pool, image),
215 '--user',
216 service,
217 '--secret',
218 _keyfile_path(service),
219 ]
220 check_call(cmd)
221
222
223def filesystem_mounted(fs):
224    ''' Determine whether a filesystem is already mounted '''
225 return fs in [f for f, m in mounts()]
226
227
228def make_filesystem(blk_device, fstype='ext4', timeout=10):
229 ''' Make a new filesystem on the specified block device '''
230 count = 0
231 e_noent = os.errno.ENOENT
232 while not os.path.exists(blk_device):
233 if count >= timeout:
234 log('ceph: gave up waiting on block device %s' % blk_device,
235 level=ERROR)
236 raise IOError(e_noent, os.strerror(e_noent), blk_device)
237 log('ceph: waiting for block device %s to appear' % blk_device,
238 level=INFO)
239 count += 1
240 time.sleep(1)
241 else:
242 log('ceph: Formatting block device %s as filesystem %s.' %
243 (blk_device, fstype), level=INFO)
244 check_call(['mkfs', '-t', fstype, blk_device])
245
246
247def place_data_on_block_device(blk_device, data_src_dst):
248 ''' Migrate data in data_src_dst to blk_device and then remount '''
249 # mount block device into /mnt
250 mount(blk_device, '/mnt')
251 # copy data to /mnt
252 copy_files(data_src_dst, '/mnt')
253 # umount block device
254 umount('/mnt')
255    # Grab user/group IDs from the original source
256 _dir = os.stat(data_src_dst)
257 uid = _dir.st_uid
258 gid = _dir.st_gid
259 # re-mount where the data should originally be
260 # TODO: persist is currently a NO-OP in core.host
261 mount(blk_device, data_src_dst, persist=True)
262 # ensure original ownership of new mount.
263 os.chown(data_src_dst, uid, gid)
264
265
266# TODO: re-use
267def modprobe(module):
268 ''' Load a kernel module and configure for auto-load on reboot '''
269 log('ceph: Loading kernel module', level=INFO)
270 cmd = ['modprobe', module]
271 check_call(cmd)
272 with open('/etc/modules', 'r+') as modules:
273 if module not in modules.read():
274 modules.write(module)
275
276
277def copy_files(src, dst, symlinks=False, ignore=None):
278 ''' Copy files from src to dst '''
279 for item in os.listdir(src):
280 s = os.path.join(src, item)
281 d = os.path.join(dst, item)
282 if os.path.isdir(s):
283 shutil.copytree(s, d, symlinks, ignore)
284 else:
285 shutil.copy2(s, d)
286
287
288def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
289 blk_device, fstype, system_services=[]):
290 """
291 NOTE: This function must only be called from a single service unit for
292 the same rbd_img otherwise data loss will occur.
293
294 Ensures given pool and RBD image exists, is mapped to a block device,
295 and the device is formatted and mounted at the given mount_point.
296
297 If formatting a device for the first time, data existing at mount_point
298 will be migrated to the RBD device before being re-mounted.
299
300 All services listed in system_services will be stopped prior to data
301 migration and restarted when complete.
302 """
303 # Ensure pool, RBD image, RBD mappings are in place.
304 if not pool_exists(service, pool):
305 log('ceph: Creating new pool {}.'.format(pool))
306 create_pool(service, pool)
307
308 if not rbd_exists(service, pool, rbd_img):
309 log('ceph: Creating RBD image ({}).'.format(rbd_img))
310 create_rbd_image(service, pool, rbd_img, sizemb)
311
312 if not image_mapped(rbd_img):
313 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
314 map_block_storage(service, pool, rbd_img)
315
316 # make file system
317 # TODO: What happens if for whatever reason this is run again and
318 # the data is already in the rbd device and/or is mounted??
319 # When it is mounted already, it will fail to make the fs
320 # XXX: This is really sketchy! Need to at least add an fstab entry
321    # otherwise this hook will blow away existing data if it's executed
322 # after a reboot.
323 if not filesystem_mounted(mount_point):
324 make_filesystem(blk_device, fstype)
325
326 for svc in system_services:
327 if service_running(svc):
328 log('ceph: Stopping services {} prior to migrating data.'
329 .format(svc))
330 service_stop(svc)
331
332 place_data_on_block_device(blk_device, mount_point)
333
334 for svc in system_services:
335 log('ceph: Starting service {} after migrating data.'
336 .format(svc))
337 service_start(svc)
338
339
340def ensure_ceph_keyring(service, user=None, group=None):
341 '''
342 Ensures a ceph keyring is created for a named service
343 and optionally ensures user and group ownership.
344
345 Returns False if no ceph key is available in relation state.
346 '''
347 key = None
348 for rid in relation_ids('ceph'):
349 for unit in related_units(rid):
350 key = relation_get('key', rid=rid, unit=unit)
351 if key:
352 break
353 if not key:
354 return False
355 create_keyring(service=service, key=key)
356 keyring = _keyring_path(service)
357 if user and group:
358 check_call(['chown', '%s.%s' % (user, group), keyring])
359 return True
0360
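
A rough usage sketch for the Ceph helpers above; the hook body, pool/image names, size, mount point and device path below are illustrative only, not values taken from this charm:

    from charmhelpers.contrib.storage.linux.ceph import (
        ensure_ceph_keyring,
        ensure_ceph_storage,
    )
    from charmhelpers.core.hookenv import log, service_name

    def ceph_changed():
        svc = service_name()
        # Writes /etc/ceph/ceph.client.<svc>.keyring from the relation key;
        # returns False (so we defer) if no key is in relation state yet.
        if not ensure_ceph_keyring(service=svc, user='glance', group='glance'):
            log('ceph relation incomplete, deferring.')
            return
        # Creates the pool (pg count = OSDs * 100 / replicas) and RBD image if
        # missing, maps and formats the device, and migrates any data found at
        # the mount point, stopping/starting the listed services around it.
        ensure_ceph_storage(service=svc, pool='glance', rbd_img='glance',
                            sizemb=1024, mount_point='/var/lib/glance',
                            blk_device='/dev/rbd/glance/glance', fstype='ext4',
                            system_services=['glance-api', 'glance-registry'])
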
=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,62 @@
1
2import os
3import re
4
5from subprocess import (
6 check_call,
7 check_output,
8)
9
10
11##################################################
12# loopback device helpers.
13##################################################
14def loopback_devices():
15 '''
16 Parse through 'losetup -a' output to determine currently mapped
17 loopback devices. Output is expected to look like:
18
19 /dev/loop0: [0807]:961814 (/tmp/my.img)
20
21 :returns: dict: a dict mapping {loopback_dev: backing_file}
22 '''
23 loopbacks = {}
24 cmd = ['losetup', '-a']
25 devs = [d.strip().split(' ') for d in
26 check_output(cmd).splitlines() if d != '']
27 for dev, _, f in devs:
28 loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
29 return loopbacks
30
31
32def create_loopback(file_path):
33 '''
34 Create a loopback device for a given backing file.
35
36 :returns: str: Full path to new loopback device (eg, /dev/loop0)
37 '''
38 file_path = os.path.abspath(file_path)
39 check_call(['losetup', '--find', file_path])
40 for d, f in loopback_devices().iteritems():
41 if f == file_path:
42 return d
43
44
45def ensure_loopback_device(path, size):
46 '''
47 Ensure a loopback device exists for a given backing file path and size.
48    If a loopback device is not mapped to the file, a new one will be created.
49
50 TODO: Confirm size of found loopback device.
51
52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
53 '''
54 for d, f in loopback_devices().iteritems():
55 if f == path:
56 return d
57
58 if not os.path.exists(path):
59 cmd = ['truncate', '--size', size, path]
60 check_call(cmd)
61
62 return create_loopback(path)
063
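
A minimal sketch of ensure_loopback_device() (path and size are placeholders):

    from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device

    # If /srv/test.img is not already backing a loop device, it is created with
    # 'truncate --size 5G' and mapped via 'losetup --find'; the mapped device
    # path (e.g. '/dev/loop0') is returned either way.
    dev = ensure_loopback_device('/srv/test.img', '5G')
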
=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,88 @@
1from subprocess import (
2 CalledProcessError,
3 check_call,
4 check_output,
5 Popen,
6 PIPE,
7)
8
9
10##################################################
11# LVM helpers.
12##################################################
13def deactivate_lvm_volume_group(block_device):
14 '''
15    Deactivate any volume group associated with an LVM physical volume.
16
17 :param block_device: str: Full path to LVM physical volume
18 '''
19 vg = list_lvm_volume_group(block_device)
20 if vg:
21 cmd = ['vgchange', '-an', vg]
22 check_call(cmd)
23
24
25def is_lvm_physical_volume(block_device):
26 '''
27 Determine whether a block device is initialized as an LVM PV.
28
29 :param block_device: str: Full path of block device to inspect.
30
31 :returns: boolean: True if block device is a PV, False if not.
32 '''
33 try:
34 check_output(['pvdisplay', block_device])
35 return True
36 except CalledProcessError:
37 return False
38
39
40def remove_lvm_physical_volume(block_device):
41 '''
42 Remove LVM PV signatures from a given block device.
43
44 :param block_device: str: Full path of block device to scrub.
45 '''
46 p = Popen(['pvremove', '-ff', block_device],
47 stdin=PIPE)
48 p.communicate(input='y\n')
49
50
51def list_lvm_volume_group(block_device):
52 '''
53 List LVM volume group associated with a given block device.
54
55 Assumes block device is a valid LVM PV.
56
57 :param block_device: str: Full path of block device to inspect.
58
59 :returns: str: Name of volume group associated with block device or None
60 '''
61 vg = None
62 pvd = check_output(['pvdisplay', block_device]).splitlines()
63 for l in pvd:
64 if l.strip().startswith('VG Name'):
65 vg = ' '.join(l.split()).split(' ').pop()
66 return vg
67
68
69def create_lvm_physical_volume(block_device):
70 '''
71 Initialize a block device as an LVM physical volume.
72
73 :param block_device: str: Full path of block device to initialize.
74
75 '''
76 check_call(['pvcreate', block_device])
77
78
79def create_lvm_volume_group(volume_group, block_device):
80 '''
81 Create an LVM volume group backed by a given block device.
82
83 Assumes block device has already been initialized as an LVM PV.
84
85 :param volume_group: str: Name of volume group to create.
86 :block_device: str: Full path of PV-initialized block device.
87 '''
88 check_call(['vgcreate', volume_group, block_device])
089
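
An illustrative call order for the LVM helpers above (device and volume group names are placeholders; a real hook would typically guard this behind a config option):

    from charmhelpers.contrib.storage.linux.lvm import (
        create_lvm_physical_volume,
        create_lvm_volume_group,
        deactivate_lvm_volume_group,
        is_lvm_physical_volume,
        remove_lvm_physical_volume,
    )

    dev = '/dev/vdb'
    if is_lvm_physical_volume(dev):
        # Reclaim a previously initialized PV: deactivate its VG, then scrub
        # the LVM signatures with 'pvremove -ff'.
        deactivate_lvm_volume_group(dev)
        remove_lvm_physical_volume(dev)
    create_lvm_physical_volume(dev)
    create_lvm_volume_group('glance-vg', dev)
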
=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,25 @@
1from os import stat
2from stat import S_ISBLK
3
4from subprocess import (
5 check_call
6)
7
8
9def is_block_device(path):
10 '''
11 Confirm device at path is a valid block device node.
12
13 :returns: boolean: True if path is a block device, False if not.
14 '''
15 return S_ISBLK(stat(path).st_mode)
16
17
18def zap_disk(block_device):
19 '''
20 Clear a block device of partition table. Relies on sgdisk, which is
21    installed as part of the 'gdisk' package in Ubuntu.
22
23 :param block_device: str: Full path of block device to clean.
24 '''
25 check_call(['sgdisk', '--zap-all', block_device])
026
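
A small sketch combining the two helpers above (device path is a placeholder):

    from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk

    dev = '/dev/vdb'
    if is_block_device(dev):
        # 'sgdisk --zap-all' clears GPT and MBR partition structures, so only
        # run it against devices that are definitely safe to wipe.
        zap_disk(dev)
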
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,340 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import UserDict
12
13CRITICAL = "CRITICAL"
14ERROR = "ERROR"
15WARNING = "WARNING"
16INFO = "INFO"
17DEBUG = "DEBUG"
18MARKER = object()
19
20cache = {}
21
22
23def cached(func):
24 ''' Cache return values for multiple executions of func + args
25
26 For example:
27
28 @cached
29 def unit_get(attribute):
30 pass
31
32 unit_get('test')
33
34 will cache the result of unit_get + 'test' for future calls.
35 '''
36 def wrapper(*args, **kwargs):
37 global cache
38 key = str((func, args, kwargs))
39 try:
40 return cache[key]
41 except KeyError:
42 res = func(*args, **kwargs)
43 cache[key] = res
44 return res
45 return wrapper
46
47
48def flush(key):
49 ''' Flushes any entries from function cache where the
50 key is found in the function+args '''
51 flush_list = []
52 for item in cache:
53 if key in item:
54 flush_list.append(item)
55 for item in flush_list:
56 del cache[item]
57
58
59def log(message, level=None):
60 "Write a message to the juju log"
61 command = ['juju-log']
62 if level:
63 command += ['-l', level]
64 command += [message]
65 subprocess.call(command)
66
67
68class Serializable(UserDict.IterableUserDict):
69 "Wrapper, an object that can be serialized to yaml or json"
70
71 def __init__(self, obj):
72 # wrap the object
73 UserDict.IterableUserDict.__init__(self)
74 self.data = obj
75
76 def __getattr__(self, attr):
77 # See if this object has attribute.
78 if attr in ("json", "yaml", "data"):
79 return self.__dict__[attr]
80 # Check for attribute in wrapped object.
81 got = getattr(self.data, attr, MARKER)
82 if got is not MARKER:
83 return got
84 # Proxy to the wrapped object via dict interface.
85 try:
86 return self.data[attr]
87 except KeyError:
88 raise AttributeError(attr)
89
90 def __getstate__(self):
91 # Pickle as a standard dictionary.
92 return self.data
93
94 def __setstate__(self, state):
95 # Unpickle into our wrapper.
96 self.data = state
97
98 def json(self):
99 "Serialize the object to json"
100 return json.dumps(self.data)
101
102 def yaml(self):
103 "Serialize the object to yaml"
104 return yaml.dump(self.data)
105
106
107def execution_environment():
108 """A convenient bundling of the current execution context"""
109 context = {}
110 context['conf'] = config()
111 if relation_id():
112 context['reltype'] = relation_type()
113 context['relid'] = relation_id()
114 context['rel'] = relation_get()
115 context['unit'] = local_unit()
116 context['rels'] = relations()
117 context['env'] = os.environ
118 return context
119
120
121def in_relation_hook():
122 "Determine whether we're running in a relation hook"
123 return 'JUJU_RELATION' in os.environ
124
125
126def relation_type():
127 "The scope for the current relation hook"
128 return os.environ.get('JUJU_RELATION', None)
129
130
131def relation_id():
132 "The relation ID for the current relation hook"
133 return os.environ.get('JUJU_RELATION_ID', None)
134
135
136def local_unit():
137 "Local unit ID"
138 return os.environ['JUJU_UNIT_NAME']
139
140
141def remote_unit():
142 "The remote unit for the current relation hook"
143 return os.environ['JUJU_REMOTE_UNIT']
144
145
146def service_name():
147    "The name of the service group this unit belongs to"
148 return local_unit().split('/')[0]
149
150
151@cached
152def config(scope=None):
153 "Juju charm configuration"
154 config_cmd_line = ['config-get']
155 if scope is not None:
156 config_cmd_line.append(scope)
157 config_cmd_line.append('--format=json')
158 try:
159 return json.loads(subprocess.check_output(config_cmd_line))
160 except ValueError:
161 return None
162
163
164@cached
165def relation_get(attribute=None, unit=None, rid=None):
166 _args = ['relation-get', '--format=json']
167 if rid:
168 _args.append('-r')
169 _args.append(rid)
170 _args.append(attribute or '-')
171 if unit:
172 _args.append(unit)
173 try:
174 return json.loads(subprocess.check_output(_args))
175 except ValueError:
176 return None
177
178
179def relation_set(relation_id=None, relation_settings={}, **kwargs):
180 relation_cmd_line = ['relation-set']
181 if relation_id is not None:
182 relation_cmd_line.extend(('-r', relation_id))
183 for k, v in (relation_settings.items() + kwargs.items()):
184 if v is None:
185 relation_cmd_line.append('{}='.format(k))
186 else:
187 relation_cmd_line.append('{}={}'.format(k, v))
188 subprocess.check_call(relation_cmd_line)
189 # Flush cache of any relation-gets for local unit
190 flush(local_unit())
191
192
193@cached
194def relation_ids(reltype=None):
195 "A list of relation_ids"
196 reltype = reltype or relation_type()
197 relid_cmd_line = ['relation-ids', '--format=json']
198 if reltype is not None:
199 relid_cmd_line.append(reltype)
200 return json.loads(subprocess.check_output(relid_cmd_line)) or []
201 return []
202
203
204@cached
205def related_units(relid=None):
206 "A list of related units"
207 relid = relid or relation_id()
208 units_cmd_line = ['relation-list', '--format=json']
209 if relid is not None:
210 units_cmd_line.extend(('-r', relid))
211 return json.loads(subprocess.check_output(units_cmd_line)) or []
212
213
214@cached
215def relation_for_unit(unit=None, rid=None):
216    "Get the JSON representation of a unit's relation"
217 unit = unit or remote_unit()
218 relation = relation_get(unit=unit, rid=rid)
219 for key in relation:
220 if key.endswith('-list'):
221 relation[key] = relation[key].split()
222 relation['__unit__'] = unit
223 return relation
224
225
226@cached
227def relations_for_id(relid=None):
228 "Get relations of a specific relation ID"
229 relation_data = []
230 relid = relid or relation_ids()
231 for unit in related_units(relid):
232 unit_data = relation_for_unit(unit, relid)
233 unit_data['__relid__'] = relid
234 relation_data.append(unit_data)
235 return relation_data
236
237
238@cached
239def relations_of_type(reltype=None):
240 "Get relations of a specific type"
241 relation_data = []
242 reltype = reltype or relation_type()
243 for relid in relation_ids(reltype):
244 for relation in relations_for_id(relid):
245 relation['__relid__'] = relid
246 relation_data.append(relation)
247 return relation_data
248
249
250@cached
251def relation_types():
252 "Get a list of relation types supported by this charm"
253 charmdir = os.environ.get('CHARM_DIR', '')
254 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
255 md = yaml.safe_load(mdf)
256 rel_types = []
257 for key in ('provides', 'requires', 'peers'):
258 section = md.get(key)
259 if section:
260 rel_types.extend(section.keys())
261 mdf.close()
262 return rel_types
263
264
265@cached
266def relations():
267 rels = {}
268 for reltype in relation_types():
269 relids = {}
270 for relid in relation_ids(reltype):
271 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
272 for unit in related_units(relid):
273 reldata = relation_get(unit=unit, rid=relid)
274 units[unit] = reldata
275 relids[relid] = units
276 rels[reltype] = relids
277 return rels
278
279
280def open_port(port, protocol="TCP"):
281 "Open a service network port"
282 _args = ['open-port']
283 _args.append('{}/{}'.format(port, protocol))
284 subprocess.check_call(_args)
285
286
287def close_port(port, protocol="TCP"):
288 "Close a service network port"
289 _args = ['close-port']
290 _args.append('{}/{}'.format(port, protocol))
291 subprocess.check_call(_args)
292
293
294@cached
295def unit_get(attribute):
296 _args = ['unit-get', '--format=json', attribute]
297 try:
298 return json.loads(subprocess.check_output(_args))
299 except ValueError:
300 return None
301
302
303def unit_private_ip():
304 return unit_get('private-address')
305
306
307class UnregisteredHookError(Exception):
308 pass
309
310
311class Hooks(object):
312 def __init__(self):
313 super(Hooks, self).__init__()
314 self._hooks = {}
315
316 def register(self, name, function):
317 self._hooks[name] = function
318
319 def execute(self, args):
320 hook_name = os.path.basename(args[0])
321 if hook_name in self._hooks:
322 self._hooks[hook_name]()
323 else:
324 raise UnregisteredHookError(hook_name)
325
326 def hook(self, *hook_names):
327 def wrapper(decorated):
328 for hook_name in hook_names:
329 self.register(hook_name, decorated)
330 else:
331 self.register(decorated.__name__, decorated)
332 if '_' in decorated.__name__:
333 self.register(
334 decorated.__name__.replace('_', '-'), decorated)
335 return decorated
336 return wrapper
337
338
339def charm_dir():
340 return os.environ.get('CHARM_DIR')
0341
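
A sketch of the hook-dispatch pattern hookenv.Hooks provides, which the hook symlinks now pointing at glance_relations.py rely on; the hook bodies below are placeholders:

    import sys
    from charmhelpers.core.hookenv import (
        Hooks,
        UnregisteredHookError,
        config,
        log,
    )

    hooks = Hooks()

    @hooks.hook('install')
    def install():
        log('install hook, openstack-origin={}'.format(config('openstack-origin')))

    @hooks.hook('config-changed')
    def config_changed():
        pass

    if __name__ == '__main__':
        try:
            # Dispatches on os.path.basename(sys.argv[0]), i.e. the symlink name.
            hooks.execute(sys.argv)
        except UnregisteredHookError as e:
            log('Unknown hook {} - skipping.'.format(e))
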
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,241 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15
16from collections import OrderedDict
17
18from hookenv import log
19
20
21def service_start(service_name):
22 return service('start', service_name)
23
24
25def service_stop(service_name):
26 return service('stop', service_name)
27
28
29def service_restart(service_name):
30 return service('restart', service_name)
31
32
33def service_reload(service_name, restart_on_failure=False):
34 service_result = service('reload', service_name)
35 if not service_result and restart_on_failure:
36 service_result = service('restart', service_name)
37 return service_result
38
39
40def service(action, service_name):
41 cmd = ['service', service_name, action]
42 return subprocess.call(cmd) == 0
43
44
45def service_running(service):
46 try:
47 output = subprocess.check_output(['service', service, 'status'])
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if ("start/running" in output or "is running" in output):
52 return True
53 else:
54 return False
55
56
57def adduser(username, password=None, shell='/bin/bash', system_user=False):
58 """Add a user"""
59 try:
60 user_info = pwd.getpwnam(username)
61 log('user {0} already exists!'.format(username))
62 except KeyError:
63 log('creating user {0}'.format(username))
64 cmd = ['useradd']
65 if system_user or password is None:
66 cmd.append('--system')
67 else:
68 cmd.extend([
69 '--create-home',
70 '--shell', shell,
71 '--password', password,
72 ])
73 cmd.append(username)
74 subprocess.check_call(cmd)
75 user_info = pwd.getpwnam(username)
76 return user_info
77
78
79def add_user_to_group(username, group):
80 """Add a user to a group"""
81 cmd = [
82 'gpasswd', '-a',
83 username,
84 group
85 ]
86 log("Adding user {} to group {}".format(username, group))
87 subprocess.check_call(cmd)
88
89
90def rsync(from_path, to_path, flags='-r', options=None):
91 """Replicate the contents of a path"""
92 options = options or ['--delete', '--executability']
93 cmd = ['/usr/bin/rsync', flags]
94 cmd.extend(options)
95 cmd.append(from_path)
96 cmd.append(to_path)
97 log(" ".join(cmd))
98 return subprocess.check_output(cmd).strip()
99
100
101def symlink(source, destination):
102 """Create a symbolic link"""
103 log("Symlinking {} as {}".format(source, destination))
104 cmd = [
105 'ln',
106 '-sf',
107 source,
108 destination,
109 ]
110 subprocess.check_call(cmd)
111
112
113def mkdir(path, owner='root', group='root', perms=0555, force=False):
114 """Create a directory"""
115 log("Making dir {} {}:{} {:o}".format(path, owner, group,
116 perms))
117 uid = pwd.getpwnam(owner).pw_uid
118 gid = grp.getgrnam(group).gr_gid
119 realpath = os.path.abspath(path)
120 if os.path.exists(realpath):
121 if force and not os.path.isdir(realpath):
122 log("Removing non-directory file {} prior to mkdir()".format(path))
123 os.unlink(realpath)
124 else:
125 os.makedirs(realpath, perms)
126 os.chown(realpath, uid, gid)
127
128
129def write_file(path, content, owner='root', group='root', perms=0444):
130 """Create or overwrite a file with the contents of a string"""
131 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
132 uid = pwd.getpwnam(owner).pw_uid
133 gid = grp.getgrnam(group).gr_gid
134 with open(path, 'w') as target:
135 os.fchown(target.fileno(), uid, gid)
136 os.fchmod(target.fileno(), perms)
137 target.write(content)
138
139
140def mount(device, mountpoint, options=None, persist=False):
141 '''Mount a filesystem'''
142 cmd_args = ['mount']
143 if options is not None:
144 cmd_args.extend(['-o', options])
145 cmd_args.extend([device, mountpoint])
146 try:
147 subprocess.check_output(cmd_args)
148 except subprocess.CalledProcessError, e:
149 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
150 return False
151 if persist:
152 # TODO: update fstab
153 pass
154 return True
155
156
157def umount(mountpoint, persist=False):
158 '''Unmount a filesystem'''
159 cmd_args = ['umount', mountpoint]
160 try:
161 subprocess.check_output(cmd_args)
162 except subprocess.CalledProcessError, e:
163 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
164 return False
165 if persist:
166 # TODO: update fstab
167 pass
168 return True
169
170
171def mounts():
172 '''List of all mounted volumes as [[mountpoint,device],[...]]'''
173 with open('/proc/mounts') as f:
174 # [['/mount/point','/dev/path'],[...]]
175 system_mounts = [m[1::-1] for m in [l.strip().split()
176 for l in f.readlines()]]
177 return system_mounts
178
179
180def file_hash(path):
181 ''' Generate a md5 hash of the contents of 'path' or None if not found '''
182 if os.path.exists(path):
183 h = hashlib.md5()
184 with open(path, 'r') as source:
185 h.update(source.read()) # IGNORE:E1101 - it does have update
186 return h.hexdigest()
187 else:
188 return None
189
190
191def restart_on_change(restart_map):
192 ''' Restart services based on configuration files changing
193
194    This function is used as a decorator, for example
195
196 @restart_on_change({
197 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
198 })
199 def ceph_client_changed():
200 ...
201
202 In this example, the cinder-api and cinder-volume services
203 would be restarted if /etc/ceph/ceph.conf is changed by the
204 ceph_client_changed function.
205 '''
206 def wrap(f):
207 def wrapped_f(*args):
208 checksums = {}
209 for path in restart_map:
210 checksums[path] = file_hash(path)
211 f(*args)
212 restarts = []
213 for path in restart_map:
214 if checksums[path] != file_hash(path):
215 restarts += restart_map[path]
216 for service_name in list(OrderedDict.fromkeys(restarts)):
217 service('restart', service_name)
218 return wrapped_f
219 return wrap
220
221
222def lsb_release():
223 '''Return /etc/lsb-release in a dict'''
224 d = {}
225 with open('/etc/lsb-release', 'r') as lsb:
226 for l in lsb:
227 k, v = l.split('=')
228 d[k.strip()] = v.strip()
229 return d
230
231
232def pwgen(length=None):
233    '''Generate a random password.'''
234 if length is None:
235 length = random.choice(range(35, 45))
236 alphanumeric_chars = [
237 l for l in (string.letters + string.digits)
238 if l not in 'l0QD1vAEIOUaeiou']
239 random_chars = [
240 random.choice(alphanumeric_chars) for _ in range(length)]
241 return(''.join(random_chars))
0242
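
A sketch of restart_on_change() in use; the file-to-service map below is an example, not the charm's actual restart map:

    from charmhelpers.core.host import restart_on_change

    @restart_on_change({
        '/etc/glance/glance-api.conf': ['glance-api'],
        '/etc/glance/glance-registry.conf': ['glance-registry'],
    })
    def write_configs():
        # Render/write the files listed above here; after this function
        # returns, any service whose file hash changed is restarted once,
        # with duplicates removed.
        pass
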
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,209 @@
1import importlib
2from yaml import safe_load
3from charmhelpers.core.host import (
4 lsb_release
5)
6from urlparse import (
7 urlparse,
8 urlunparse,
9)
10import subprocess
11from charmhelpers.core.hookenv import (
12 config,
13 log,
14)
15import apt_pkg
16
17CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
18deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
19"""
20PROPOSED_POCKET = """# Proposed
21deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
22"""
23
24
25def filter_installed_packages(packages):
26 """Returns a list of packages that require installation"""
27 apt_pkg.init()
28 cache = apt_pkg.Cache()
29 _pkgs = []
30 for package in packages:
31 try:
32 p = cache[package]
33 p.current_ver or _pkgs.append(package)
34 except KeyError:
35 log('Package {} has no installation candidate.'.format(package),
36 level='WARNING')
37 _pkgs.append(package)
38 return _pkgs
39
40
41def apt_install(packages, options=None, fatal=False):
42 """Install one or more packages"""
43 options = options or []
44 cmd = ['apt-get', '-y']
45 cmd.extend(options)
46 cmd.append('install')
47 if isinstance(packages, basestring):
48 cmd.append(packages)
49 else:
50 cmd.extend(packages)
51 log("Installing {} with options: {}".format(packages,
52 options))
53 if fatal:
54 subprocess.check_call(cmd)
55 else:
56 subprocess.call(cmd)
57
58
59def apt_update(fatal=False):
60 """Update local apt cache"""
61 cmd = ['apt-get', 'update']
62 if fatal:
63 subprocess.check_call(cmd)
64 else:
65 subprocess.call(cmd)
66
67
68def apt_purge(packages, fatal=False):
69 """Purge one or more packages"""
70 cmd = ['apt-get', '-y', 'purge']
71 if isinstance(packages, basestring):
72 cmd.append(packages)
73 else:
74 cmd.extend(packages)
75 log("Purging {}".format(packages))
76 if fatal:
77 subprocess.check_call(cmd)
78 else:
79 subprocess.call(cmd)
80
81
82def add_source(source, key=None):
83 if ((source.startswith('ppa:') or
84 source.startswith('http:'))):
85 subprocess.check_call(['add-apt-repository', '--yes', source])
86 elif source.startswith('cloud:'):
87 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
88 fatal=True)
89 pocket = source.split(':')[-1]
90 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
91 apt.write(CLOUD_ARCHIVE.format(pocket))
92 elif source == 'proposed':
93 release = lsb_release()['DISTRIB_CODENAME']
94 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
95 apt.write(PROPOSED_POCKET.format(release))
96 if key:
97 subprocess.check_call(['apt-key', 'import', key])
98
99
100class SourceConfigError(Exception):
101 pass
102
103
104def configure_sources(update=False,
105 sources_var='install_sources',
106 keys_var='install_keys'):
107 """
108 Configure multiple sources from charm configuration
109
110 Example config:
111 install_sources:
112 - "ppa:foo"
113 - "http://example.com/repo precise main"
114 install_keys:
115 - null
116 - "a1b2c3d4"
117
118 Note that 'null' (a.k.a. None) should not be quoted.
119 """
120 sources = safe_load(config(sources_var))
121 keys = safe_load(config(keys_var))
122 if isinstance(sources, basestring) and isinstance(keys, basestring):
123 add_source(sources, keys)
124 else:
125 if not len(sources) == len(keys):
126 msg = 'Install sources and keys lists are different lengths'
127 raise SourceConfigError(msg)
128 for src_num in range(len(sources)):
129 add_source(sources[src_num], keys[src_num])
130 if update:
131 apt_update(fatal=True)
132
133# The order of this list is very important. Handlers should be listed in from
134# least- to most-specific URL matching.
135FETCH_HANDLERS = (
136 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
137 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
138)
139
140
141class UnhandledSource(Exception):
142 pass
143
144
145def install_remote(source):
146 """
147 Install a file tree from a remote source
148
149 The specified source should be a url of the form:
150 scheme://[host]/path[#[option=value][&...]]
151
152    Schemes supported are based on this module's submodules
153 Options supported are submodule-specific"""
154 # We ONLY check for True here because can_handle may return a string
155 # explaining why it can't handle a given source.
156 handlers = [h for h in plugins() if h.can_handle(source) is True]
157 installed_to = None
158 for handler in handlers:
159 try:
160 installed_to = handler.install(source)
161 except UnhandledSource:
162 pass
163 if not installed_to:
164 raise UnhandledSource("No handler found for source {}".format(source))
165 return installed_to
166
167
168def install_from_config(config_var_name):
169 charm_config = config()
170 source = charm_config[config_var_name]
171 return install_remote(source)
172
173
174class BaseFetchHandler(object):
175 """Base class for FetchHandler implementations in fetch plugins"""
176 def can_handle(self, source):
177 """Returns True if the source can be handled. Otherwise returns
178 a string explaining why it cannot"""
179 return "Wrong source type"
180
181 def install(self, source):
182 """Try to download and unpack the source. Return the path to the
183 unpacked files or raise UnhandledSource."""
184 raise UnhandledSource("Wrong source type {}".format(source))
185
186 def parse_url(self, url):
187 return urlparse(url)
188
189 def base_url(self, url):
190 """Return url without querystring or fragment"""
191 parts = list(self.parse_url(url))
192 parts[4:] = ['' for i in parts[4:]]
193 return urlunparse(parts)
194
195
196def plugins(fetch_handlers=None):
197 if not fetch_handlers:
198 fetch_handlers = FETCH_HANDLERS
199 plugin_list = []
200 for handler_name in fetch_handlers:
201 package, classname = handler_name.rsplit('.', 1)
202 try:
203 handler_class = getattr(importlib.import_module(package), classname)
204 plugin_list.append(handler_class())
205 except (ImportError, AttributeError):
206            # Skip missing plugins so that they can be omitted from
207 # installation if desired
208 log("FetchHandler {} not found, skipping plugin".format(handler_name))
209 return plugin_list
0210
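
A sketch of the usual install-time sequence built from the helpers above (the package list is illustrative):

    from charmhelpers.fetch import (
        apt_install,
        configure_sources,
        filter_installed_packages,
    )

    def install():
        # Reads install_sources/install_keys from charm config, writes the APT
        # sources (PPA, cloud archive or proposed pocket) and, with update=True,
        # runs 'apt-get update' fatally.
        configure_sources(update=True)
        # Only pass packages that are not already installed.
        apt_install(filter_installed_packages(['glance', 'haproxy']), fatal=True)
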
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,48 @@
1import os
2import urllib2
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.payload.archive import (
8 get_archive_handler,
9 extract,
10)
11from charmhelpers.core.host import mkdir
12
13
14class ArchiveUrlFetchHandler(BaseFetchHandler):
15 """Handler for archives via generic URLs"""
16 def can_handle(self, source):
17 url_parts = self.parse_url(source)
18 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
19 return "Wrong source type"
20 if get_archive_handler(self.base_url(source)):
21 return True
22 return False
23
24 def download(self, source, dest):
25        # propagate all exceptions
26 # URLError, OSError, etc
27 response = urllib2.urlopen(source)
28 try:
29 with open(dest, 'w') as dest_file:
30 dest_file.write(response.read())
31 except Exception as e:
32 if os.path.isfile(dest):
33 os.unlink(dest)
34 raise e
35
36 def install(self, source):
37 url_parts = self.parse_url(source)
38 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
39 if not os.path.exists(dest_dir):
40 mkdir(dest_dir, perms=0755)
41 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
42 try:
43 self.download(source, dld_file)
44 except urllib2.URLError as e:
45 raise UnhandledSource(e.reason)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return extract(dld_file)
049
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,49 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8try:
9 from bzrlib.branch import Branch
10except ImportError:
11 from charmhelpers.fetch import apt_install
12 apt_install("python-bzrlib")
13 from bzrlib.branch import Branch
14
15class BzrUrlFetchHandler(BaseFetchHandler):
16 """Handler for bazaar branches via generic and lp URLs"""
17 def can_handle(self, source):
18 url_parts = self.parse_url(source)
19 if url_parts.scheme not in ('bzr+ssh', 'lp'):
20 return False
21 else:
22 return True
23
24 def branch(self, source, dest):
25 url_parts = self.parse_url(source)
26 # If we use lp:branchname scheme we need to load plugins
27 if not self.can_handle(source):
28 raise UnhandledSource("Cannot handle {}".format(source))
29 if url_parts.scheme == "lp":
30 from bzrlib.plugin import load_plugins
31 load_plugins()
32 try:
33 remote_branch = Branch.open(source)
34 remote_branch.bzrdir.sprout(dest).open_branch()
35 except Exception as e:
36 raise e
37
38 def install(self, source):
39 url_parts = self.parse_url(source)
40 branch_name = url_parts.path.strip("/").split("/")[-1]
41 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
42 if not os.path.exists(dest_dir):
43 mkdir(dest_dir, perms=0755)
44 try:
45 self.branch(source, dest_dir)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return dest_dir
49
050
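
A short sketch of fetching a payload through the bzr handler via install_remote() (the branch URL is a placeholder); the branch is sprouted under $CHARM_DIR/fetched/<branch_name>:

    from charmhelpers.fetch import install_remote

    dest = install_remote('lp:~example/some-project/trunk')
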
=== added directory 'hooks/charmhelpers/payload'
=== added file 'hooks/charmhelpers/payload/__init__.py'
--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/__init__.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,1 @@
1"Tools for working with files injected into a charm just before deployment."
02
=== added file 'hooks/charmhelpers/payload/execd.py'
--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/execd.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,50 @@
1#!/usr/bin/env python
2
3import os
4import sys
5import subprocess
6from charmhelpers.core import hookenv
7
8
9def default_execd_dir():
10 return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
11
12
13def execd_module_paths(execd_dir=None):
14 """Generate a list of full paths to modules within execd_dir."""
15 if not execd_dir:
16 execd_dir = default_execd_dir()
17
18 if not os.path.exists(execd_dir):
19 return
20
21 for subpath in os.listdir(execd_dir):
22 module = os.path.join(execd_dir, subpath)
23 if os.path.isdir(module):
24 yield module
25
26
27def execd_submodule_paths(command, execd_dir=None):
28    """Generate a list of full paths to the specified command within execd_dir.
29 """
30 for module_path in execd_module_paths(execd_dir):
31 path = os.path.join(module_path, command)
32 if os.access(path, os.X_OK) and os.path.isfile(path):
33 yield path
34
35
36def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
37 """Run command for each module within execd_dir which defines it."""
38 for submodule_path in execd_submodule_paths(command, execd_dir):
39 try:
40 subprocess.check_call(submodule_path, shell=True, stderr=stderr)
41 except subprocess.CalledProcessError as e:
42 hookenv.log("Error ({}) running {}. Output: {}".format(
43 e.returncode, e.cmd, e.output))
44 if die_on_error:
45 sys.exit(e.returncode)
46
47
48def execd_preinstall(execd_dir=None):
49 """Run charm-pre-install for each module within execd_dir."""
50 execd_run('charm-pre-install', execd_dir=execd_dir)
051
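
A sketch of execd_preinstall() as the first step of an install hook, so that any exec.d/*/charm-pre-install payloads injected into the charm run before packages are installed (the surrounding hook is illustrative):

    from charmhelpers.payload.execd import execd_preinstall

    def install():
        # No-op if $CHARM_DIR/exec.d does not exist; otherwise runs each
        # executable exec.d/<module>/charm-pre-install in turn.
        execd_preinstall()
        # ... continue with configure_sources() / apt_install() as above
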
=== modified symlink 'hooks/cluster-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/cluster-relation-departed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/config-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== removed file 'hooks/glance-common'
--- hooks/glance-common 2013-06-03 18:39:29 +0000
+++ hooks/glance-common 1970-01-01 00:00:00 +0000
@@ -1,133 +0,0 @@
1#!/bin/bash
2
3CHARM="glance"
4
5SERVICES="glance-api glance-registry"
6PACKAGES="glance python-mysqldb python-swift python-keystone uuid haproxy"
7
8GLANCE_REGISTRY_CONF="/etc/glance/glance-registry.conf"
9GLANCE_REGISTRY_PASTE_INI="/etc/glance/glance-registry-paste.ini"
10GLANCE_API_CONF="/etc/glance/glance-api.conf"
11GLANCE_API_PASTE_INI="/etc/glance/glance-api-paste.ini"
12CONF_DIR="/etc/glance"
13HOOKS_DIR="$CHARM_DIR/hooks"
14
15# Flag used to track config changes.
16CONFIG_CHANGED="False"
17if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then
18 . $HOOKS_DIR/lib/openstack-common
19else
20 juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1
21fi
22
23function set_or_update {
24 local key="$1"
25 local value="$2"
26 local file="$3"
27 local section="$4"
28 local conf=""
29 [[ -z $key ]] && juju-log "ERROR: set_or_update(): value $value missing key" \
30 && exit 1
31 [[ -z $value ]] && juju-log "ERROR: set_or_update(): key $key missing value" \
32 && exit 1
33
34 case "$file" in
35 "api") conf=$GLANCE_API_CONF ;;
36 "api-paste") conf=$GLANCE_API_PASTE_INI ;;
37 "registry") conf=$GLANCE_REGISTRY_CONF ;;
38 "registry-paste") conf=$GLANCE_REGISTRY_PASTE_INI ;;
39 *) juju-log "ERROR: set_or_update(): Invalid or no config file specified." \
40 && exit 1 ;;
41 esac
42
43 [[ ! -e $conf ]] && juju-log "ERROR: set_or_update(): File not found $conf" \
44 && exit 1
45
46 if [[ "$(local_config_get "$conf" "$key" "$section")" == "$value" ]] ; then
47 juju-log "$CHARM: set_or_update(): $key=$value already set in $conf."
48 return 0
49 fi
50
51 cfg_set_or_update "$key" "$value" "$conf" "$section"
52 CONFIG_CHANGED="True"
53}
54
55do_openstack_upgrade() {
56 # update openstack components to those provided by a new installation source
57 # it is assumed the calling hook has confirmed that the upgrade is sane.
58 local rel="$1"
59 shift
60 local packages=$@
61 orig_os_rel=$(get_os_codename_package "glance-common")
62 new_rel=$(get_os_codename_install_source "$rel")
63
64 # Backup the config directory.
65 local stamp=$(date +"%Y%m%d%M%S")
66 tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR
67
68 # Setup apt repository access and kick off the actual package upgrade.
69 configure_install_source "$rel"
70 apt-get update
71 DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \
72 install --no-install-recommends $packages
73
74 # Update the new config files for existing relations.
75 local r_id=""
76
77 r_id=$(relation-ids shared-db)
78 if [[ -n "$r_id" ]] ; then
79 juju-log "$CHARM: Configuring database after upgrade to $rel."
80 db_changed $r_id
81 fi
82
83 r_id=$(relation-ids identity-service)
84 if [[ -n "$r_id" ]] ; then
85 juju-log "$CHARM: Configuring identity service after upgrade to $rel."
86 keystone_changed $r_id
87 fi
88
89 local ceph_ids="$(relation-ids ceph)"
90 [[ -n "$ceph_ids" ]] && apt-get -y install ceph-common python-ceph
91 for r_id in $ceph_ids ; do
92 for unit in $(relation-list -r $r_id) ; do
93 ceph_changed "$r_id" "$unit"
94 done
95 done
96
97 [[ -n "$(relation-ids object-store)" ]] && object-store_joined
98}
99
100configure_https() {
101 # request openstack-common setup reverse proxy mapping for API and registry
102 # servers
103 service_ctl glance-api stop
104 if [[ -n "$(peer_units)" ]] || is_clustered ; then
105 # haproxy may already be configured. need to push it back in the request
106 # pipeline in preparation for a change from:
107 # from: haproxy (9292) -> glance_api (9282)
108 # to: ssl (9292) -> haproxy (9291) -> glance_api (9272)
109 local next_server=$(determine_haproxy_port 9292)
110 local api_port=$(determine_api_port 9292)
111 configure_haproxy "glance_api:$next_server:$api_port"
112 else
113 # if not clustered, the glance-api is next in the pipeline.
114 local api_port=$(determine_api_port 9292)
115 local next_server=$api_port
116 fi
117
118 # setup https to point to either haproxy or directly to api server, depending.
119 setup_https 9292:$next_server
120
121 # configure servers to listen on new ports accordingly.
122 set_or_update bind_port "$api_port" "api"
123 service_ctl all start
124
125 local r_id=""
126 # (re)configure ks endpoint accordingly in ks and nova.
127 for r_id in $(relation-ids identity-service) ; do
128 keystone_joined "$r_id"
129 done
130 for r_id in $(relation-ids image-service) ; do
131 image-service_joined "$r_id"
132 done
133}
1340
=== removed file 'hooks/glance-relations'
--- hooks/glance-relations 2013-09-18 18:40:06 +0000
+++ hooks/glance-relations 1970-01-01 00:00:00 +0000
@@ -1,464 +0,0 @@
1#!/bin/bash -e
2
3HOOKS_DIR="$CHARM_DIR/hooks"
4ARG0=${0##*/}
5
6if [[ -e $HOOKS_DIR/glance-common ]] ; then
7 . $HOOKS_DIR/glance-common
8else
9 echo "ERROR: Could not load glance-common from $HOOKS_DIR"
10fi
11
12function install_hook {
13 juju-log "Installing glance packages"
14 apt-get -y install python-software-properties || exit 1
15
16 configure_install_source "$(config-get openstack-origin)"
17
18 apt-get update || exit 1
19 apt-get -y install $PACKAGES || exit 1
20
21 service_ctl all stop
22
23 # TODO: Make debug logging a config option.
24 set_or_update verbose True api
25 set_or_update debug True api
26 set_or_update verbose True registry
27 set_or_update debug True registry
28
29 configure_https
30}
31
32function db_joined {
33 local glance_db=$(config-get glance-db)
34 local db_user=$(config-get db-user)
35 local hostname=$(unit-get private-address)
36 juju-log "$CHARM - db_joined: requesting database access to $glance_db for "\
37 "$db_user@$hostname"
38 relation-set database=$glance_db username=$db_user hostname=$hostname
39}
40
41function db_changed {
42 # serves as the main shared-db changed hook but may also be called with a
43 # relation-id to configure new config files for existing relations.
44 local r_id="$1"
45 local r_args=""
46 if [[ -n "$r_id" ]] ; then
47 # set up environment for an existing relation to a single unit.
48 export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1)
49 export JUJU_RELATION="shared-db"
50 export JUJU_RELATION_ID="$r_id"
51 local r_args="-r $JUJU_RELATION_ID"
52 juju-log "$CHARM - db_changed: Running hook for existing relation to "\
53 "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID"
54 fi
55
56 local db_host=$(relation-get $r_args db_host)
57 local db_password=$(relation-get $r_args password)
58
59 if [[ -z "$db_host" ]] || [[ -z "$db_password" ]] ; then
60 juju-log "$CHARM - db_changed: db_host||db_password set, will retry."
61 exit 0
62 fi
63
64 local glance_db=$(config-get glance-db)
65 local db_user=$(config-get db-user)
66 local rel=$(get_os_codename_package glance-common)
67
68 if [[ -n "$r_id" ]] ; then
69 unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID
70 fi
71
72 juju-log "$CHARM - db_changed: Configuring glance.conf for access to $glance_db"
73
74 set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" registry
75
76 # since folsom, a db connection setting in glance-api.conf is required.
77 [[ "$rel" != "essex" ]] &&
78 set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" api
79
80 if eligible_leader 'res_glance_vip'; then
81 if [[ "$rel" == "essex" ]] ; then
82 # Essex required initializing new databases to version 0
83 if ! glance-manage db_version >/dev/null 2>&1; then
84 juju-log "Setting glance database version to 0"
85 glance-manage version_control 0
86 fi
87 fi
88 juju-log "$CHARM - db_changed: Running database migrations for $rel."
89 glance-manage db_sync
90 fi
91 service_ctl all restart
92}
93
94function image-service_joined {
95 # Check to see if unit is potential leader
96 local r_id="$1"
97 [[ -n "$r_id" ]] && r_id="-r $r_id"
98 eligible_leader 'res_glance_vip' || return 0
99 https && scheme="https" || scheme="http"
100 is_clustered && local host=$(config-get vip) ||
101 local host=$(unit-get private-address)
102 url="$scheme://$host:9292"
103 juju-log "glance: image-service_joined: To peer glance-api-server=$url"
104 relation-set $r_id glance-api-server=$url
105}
106
107function object-store_joined {
108 local relids="$(relation-ids identity-service)"
109 [[ -z "$relids" ]] && \
110 juju-log "$CHARM: Deferring swift store configuration until " \
111 "an identity-service relation exists." && exit 0
112
113 set_or_update default_store swift api
114 set_or_update swift_store_create_container_on_put true api
115
116 for relid in $relids ; do
117 local unit=$(relation-list -r $relid)
118 local svc_tenant=$(relation-get -r $relid service_tenant $unit)
119 local svc_username=$(relation-get -r $relid service_username $unit)
120 local svc_password=$(relation-get -r $relid service_password $unit)
121 local auth_host=$(relation-get -r $relid private-address $unit)
122 local port=$(relation-get -r $relid service_port $unit)
123 local auth_url=""
124
125 [[ -n "$auth_host" ]] && [[ -n "$port" ]] &&
126 auth_url="http://$auth_host:$port/v2.0/"
127
128 [[ -n "$svc_tenant" ]] && [[ -n "$svc_username" ]] &&
129 set_or_update swift_store_user "$svc_tenant:$svc_username" api
130 [[ -n "$svc_password" ]] &&
131 set_or_update swift_store_key "$svc_password" api
132 [[ -n "$auth_url" ]] &&
133 set_or_update swift_store_auth_address "$auth_url" api
134 done
135 service_ctl glance-api restart
136}
137
138function object-store_changed {
139 exit 0
140}
141
142function ceph_joined {
143 mkdir -p /etc/ceph
144 apt-get -y install ceph-common python-ceph || exit 1
145}
146
147function ceph_changed {
148 local r_id="$1"
149 local unit_id="$2"
150 local r_arg=""
151 [[ -n "$r_id" ]] && r_arg="-r $r_id"
152 SERVICE_NAME=`echo $JUJU_UNIT_NAME | cut -d / -f 1`
153 KEYRING=/etc/ceph/ceph.client.$SERVICE_NAME.keyring
154 KEY=`relation-get $r_arg key $unit_id`
155 if [ -n "$KEY" ]; then
156 # But only once
157 if [ ! -f $KEYRING ]; then
158 ceph-authtool $KEYRING \
159 --create-keyring --name=client.$SERVICE_NAME \
160 --add-key="$KEY"
161 chmod +r $KEYRING
162 fi
163 else
164 # No key - bail for the time being
165 exit 0
166 fi
167
168 MONS=`relation-list $r_arg`
169 mon_hosts=""
170 for mon in $MONS; do
171 mon_hosts="$mon_hosts $(get_ip $(relation-get $r_arg private-address $mon)):6789,"
172 done
173 cat > /etc/ceph/ceph.conf << EOF
174[global]
175 auth supported = $(relation-get $r_arg auth $unit_id)
176 keyring = /etc/ceph/\$cluster.\$name.keyring
177 mon host = $mon_hosts
178EOF
179
180 # Create the images pool if it does not already exist
181 if ! rados --id $SERVICE_NAME lspools | grep -q images; then
182 local num_osds=$(ceph --id $SERVICE_NAME osd ls| egrep "[^\s]"| wc -l)
183 local cfg_key='ceph-osd-replication-count'
184 local rep_count="$(config-get $cfg_key)"
185 if [ -z "$rep_count" ]
186 then
187 rep_count=2
188 juju-log "config returned empty string for $cfg_key - using value of 2"
189 fi
190 local num_pgs=$(((num_osds*100)/rep_count))
191 ceph --id $SERVICE_NAME osd pool create images $num_pgs $num_pgs
192 ceph --id $SERVICE_NAME osd pool set images size $rep_count
193 # TODO: set appropriate crush ruleset
194 fi
195
196 # Configure glance for ceph storage options
197 set_or_update default_store rbd api
198 set_or_update rbd_store_ceph_conf /etc/ceph/ceph.conf api
199 set_or_update rbd_store_user $SERVICE_NAME api
200 set_or_update rbd_store_pool images api
201 set_or_update rbd_store_chunk_size 8 api
202 # This option only applies to Grizzly.
203 [ "`get_os_codename_package "glance-common"`" = "grizzly" ] && \
204 set_or_update show_image_direct_url 'True' api
205
206 service_ctl glance-api restart
207}
208
209function keystone_joined {
210 # Leadership check
211 eligible_leader 'res_glance_vip' || return 0
212 local r_id="$1"
213 [[ -n "$r_id" ]] && r_id=" -r $r_id"
214
215 # determine correct endpoint URL
216 https && scheme="https" || scheme="http"
217 is_clustered && local host=$(config-get vip) ||
218 local host=$(unit-get private-address)
219 url="$scheme://$host:9292"
220
221 # advertise our API endpoint to keystone
222 relation-set service="glance" \
223 region="$(config-get region)" public_url=$url admin_url=$url internal_url=$url
224}
225
226function keystone_changed {
227 # serves as the main identity-service changed hook, but may also be called
228 # with a relation-id to configure new config files for existing relations.
229 local r_id="$1"
230 local r_args=""
231 if [[ -n "$r_id" ]] ; then
232 # set up environment for an existing relation to a single unit.
233 export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1)
234 export JUJU_RELATION="identity-service"
235 export JUJU_RELATION_ID="$r_id"
236 local r_args="-r $JUJU_RELATION_ID"
237 juju-log "$CHARM - db_changed: Running hook for existing relation to "\
238 "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID"
239 fi
240
241 token=$(relation-get $r_args $r_args admin_token)
242 service_port=$(relation-get $r_args service_port)
243 auth_port=$(relation-get $r_args auth_port)
244 service_username=$(relation-get $r_args service_username)
245 service_password=$(relation-get $r_args service_password)
246 service_tenant=$(relation-get $r_args service_tenant)
247 [[ -z "$token" ]] || [[ -z "$service_port" ]] || [[ -z "$auth_port" ]] ||
248 [[ -z "$service_username" ]] || [[ -z "$service_password" ]] ||
249 [[ -z "$service_tenant" ]] && juju-log "keystone_changed: Peer not ready" &&
250 exit 0
251 [[ "$token" == "-1" ]] &&
252 juju-log "keystone_changed: admin token error" && exit 1
253 juju-log "keystone_changed: Acquired admin. token"
254 keystone_host=$(relation-get $r_args auth_host)
255
256 if [[ -n "$r_id" ]] ; then
257 unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID
258 fi
259
260 set_or_update "flavor" "keystone" "api" "paste_deploy"
261 set_or_update "flavor" "keystone" "registry" "paste_deploy"
262
263 local sect="filter:authtoken"
264 for i in api-paste registry-paste ; do
265 set_or_update "service_host" "$keystone_host" $i $sect
266 set_or_update "service_port" "$service_port" $i $sect
267 set_or_update "auth_host" "$keystone_host" $i $sect
268 set_or_update "auth_port" "$auth_port" $i $sect
269 set_or_update "auth_uri" "http://$keystone_host:$service_port/" $i $sect
270 set_or_update "admin_token" "$token" $i $sect
271 set_or_update "admin_tenant_name" "$service_tenant" $i $sect
272 set_or_update "admin_user" "$service_username" $i $sect
273 set_or_update "admin_password" "$service_password" $i $sect
274 done
275 service_ctl all restart
276
277 # Configure any object-store / swift relations now that we have an
278 # identity-service
279 if [[ -n "$(relation-ids object-store)" ]] ; then
280 object-store_joined
281 fi
282
283 # possibly configure HTTPS for API and registry
284 configure_https
285}
286
287function config_changed() {
288 # Determine whether or not we should do an upgrade, based on whether or not
289 # the version offered in openstack-origin is greater than what is installed.
290
291 local install_src=$(config-get openstack-origin)
292 local cur=$(get_os_codename_package "glance-common")
293 local available=$(get_os_codename_install_source "$install_src")
294
295 if [[ "$available" != "unknown" ]] ; then
296 if dpkg --compare-versions $(get_os_version_codename "$cur") lt \
297 $(get_os_version_codename "$available") ; then
298 juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available."
299 do_openstack_upgrade "$install_src" $PACKAGES
300 fi
301 fi
302 configure_https
303 service_ctl all restart
304
305 # Save our scriptrc env variables for health checks
306 declare -a env_vars=(
307 "OPENSTACK_PORT_MCASTPORT=$(config-get ha-mcastport)"
308 'OPENSTACK_SERVICE_API=glance-api'
309 'OPENSTACK_SERVICE_REGISTRY=glance-registry')
310 save_script_rc ${env_vars[@]}
311}
312
313function cluster_changed() {
314 configure_haproxy "glance_api:9292"
315}
316
317function upgrade_charm() {
318 cluster_changed
319}
320
321function ha_relation_joined() {
322 local corosync_bindiface=`config-get ha-bindiface`
323 local corosync_mcastport=`config-get ha-mcastport`
324 local vip=`config-get vip`
325 local vip_iface=`config-get vip_iface`
326 local vip_cidr=`config-get vip_cidr`
327 if [ -n "$vip" ] && [ -n "$vip_iface" ] && \
328 [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \
329 [ -n "$corosync_mcastport" ]; then
330 # TODO: This feels horrible but the data required by the hacluster
331 # charm is quite complex and is python ast parsed.
332 resources="{
333'res_glance_vip':'ocf:heartbeat:IPaddr2',
334'res_glance_haproxy':'lsb:haproxy'
335}"
336 resource_params="{
337'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"',
338'res_glance_haproxy': 'op monitor interval=\"5s\"'
339}"
340 init_services="{
341'res_glance_haproxy':'haproxy'
342}"
343 groups="{
344'grp_glance_haproxy':'res_glance_vip res_glance_haproxy'
345}"
346 relation-set corosync_bindiface=$corosync_bindiface \
347 corosync_mcastport=$corosync_mcastport \
348 resources="$resources" resource_params="$resource_params" \
349 init_services="$init_services" groups="$groups"
350 else
351 juju-log "Insufficient configuration data to configure hacluster"
352 exit 1
353 fi
354}
355
356function ha_relation_changed() {
357 local clustered=`relation-get clustered`
358 if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then
359 local port=$((9292 + 10000))
360 local host=$(config-get vip)
361 local url="http://$host:$port"
362 for r_id in `relation-ids identity-service`; do
363 relation-set -r $r_id service="glance" \
364 region="$(config-get region)" \
365 public_url="$url" admin_url="$url" internal_url="$url"
366 done
367 for r_id in `relation-ids image-service`; do
368 relation-set -r $r_id \
369 glance-api-server="$host:$port"
370 done
371 fi
372}
373
374
375function cluster_changed() {
376 [[ -z "$(peer_units)" ]] &&
377 juju-log "cluster_changed() with no peers." && exit 0
378 local haproxy_port=$(determine_haproxy_port 9292)
379 local backend_port=$(determine_api_port 9292)
380 service glance-api stop
381 configure_haproxy "glance_api:$haproxy_port:$backend_port"
382 set_or_update bind_port "$backend_port" "api"
383 service glance-api start
384}
385
386function upgrade_charm() {
387 cluster_changed
388}
389
390function ha_relation_joined() {
391 local corosync_bindiface=`config-get ha-bindiface`
392 local corosync_mcastport=`config-get ha-mcastport`
393 local vip=`config-get vip`
394 local vip_iface=`config-get vip_iface`
395 local vip_cidr=`config-get vip_cidr`
396 if [ -n "$vip" ] && [ -n "$vip_iface" ] && \
397 [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \
398 [ -n "$corosync_mcastport" ]; then
399 # TODO: This feels horrible but the data required by the hacluster
400 # charm is quite complex and is python ast parsed.
401 resources="{
402'res_glance_vip':'ocf:heartbeat:IPaddr2',
403'res_glance_haproxy':'lsb:haproxy'
404}"
405 resource_params="{
406'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"',
407'res_glance_haproxy': 'op monitor interval=\"5s\"'
408}"
409 init_services="{
410'res_glance_haproxy':'haproxy'
411}"
412 clones="{
413'cl_glance_haproxy': 'res_glance_haproxy'
414}"
415 relation-set corosync_bindiface=$corosync_bindiface \
416 corosync_mcastport=$corosync_mcastport \
417 resources="$resources" resource_params="$resource_params" \
418 init_services="$init_services" clones="$clones"
419 else
420 juju-log "Insufficient configuration data to configure hacluster"
421 exit 1
422 fi
423}
424
425function ha_relation_changed() {
426 local clustered=`relation-get clustered`
427 if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then
428 local host=$(config-get vip)
429 https && local scheme="https" || local scheme="http"
430 local url="$scheme://$host:9292"
431
432 for r_id in `relation-ids identity-service`; do
433 relation-set -r $r_id service="glance" \
434 region="$(config-get region)" \
435 public_url="$url" admin_url="$url" internal_url="$url"
436 done
437 for r_id in `relation-ids image-service`; do
438 relation-set -r $r_id \
439 glance-api-server="$scheme://$host:9292"
440 done
441 fi
442}
443
444
445case $ARG0 in
446 "start"|"stop") service_ctl all $ARG0 ;;
447 "install") install_hook ;;
448 "config-changed") config_changed ;;
449 "shared-db-relation-joined") db_joined ;;
450 "shared-db-relation-changed") db_changed;;
451 "image-service-relation-joined") image-service_joined ;;
452 "image-service-relation-changed") exit 0 ;;
453 "object-store-relation-joined") object-store_joined ;;
454 "object-store-relation-changed") object-store_changed ;;
455 "identity-service-relation-joined") keystone_joined ;;
456 "identity-service-relation-changed") keystone_changed ;;
457 "ceph-relation-joined") ceph_joined;;
458 "ceph-relation-changed") ceph_changed;;
459 "cluster-relation-changed") cluster_changed ;;
460 "cluster-relation-departed") cluster_changed ;;
461 "ha-relation-joined") ha_relation_joined ;;
462 "ha-relation-changed") ha_relation_changed ;;
463 "upgrade-charm") upgrade_charm ;;
464esac
4650
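The bash hooks above patch individual options into installed config files with set_or_update and restart services through service_ctl; the Python rewrite that follows instead renders complete configuration files from release-specific template directories and registers them with an OSConfigRenderer. A rough, self-contained sketch of that style of release-based template lookup, loosely modelled on the charm-helpers templating behaviour (the names below are illustrative, not the library's API):

# Rough sketch of release-based template lookup, loosely modelled on the
# charm-helpers OSConfigRenderer behaviour.  Names here are illustrative.
import os

RELEASES = ['essex', 'folsom', 'grizzly', 'havana']


def template_search_path(base_dir, release):
    """Return template directories to search, most specific first.

    A template in templates/grizzly/ overrides one in templates/folsom/,
    which in turn overrides the release-agnostic templates/ fallback.
    """
    idx = RELEASES.index(release)
    dirs = [os.path.join(base_dir, r) for r in reversed(RELEASES[:idx + 1])]
    dirs.append(base_dir)   # plain templates/ as the final fallback
    return dirs


print(template_search_path('templates', 'grizzly'))
# ['templates/grizzly', 'templates/folsom', 'templates/essex', 'templates']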
=== added file 'hooks/glance_contexts.py'
--- hooks/glance_contexts.py 1970-01-01 00:00:00 +0000
+++ hooks/glance_contexts.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,89 @@
1from charmhelpers.core.hookenv import (
2 relation_ids,
3 related_units,
4 relation_get,
5 service_name,
6)
7
8from charmhelpers.contrib.openstack.context import (
9 OSContextGenerator,
10 ApacheSSLContext as SSLContext,
11)
12
13from charmhelpers.contrib.hahelpers.cluster import (
14 determine_api_port,
15 determine_haproxy_port,
16)
17
18
19def is_relation_made(relation, key='private-address'):
20 for r_id in relation_ids(relation):
21 for unit in related_units(r_id):
22 if relation_get(key, rid=r_id, unit=unit):
23 return True
24 return False
25
26
27class CephGlanceContext(OSContextGenerator):
28 interfaces = ['ceph-glance']
29
30 def __call__(self):
31 """
32 Used to generate template context to be added to glance-api.conf in
33 the presence of a ceph relation.
34 """
35 if not is_relation_made(relation="ceph",
36 key="key"):
37 return {}
38 service = service_name()
39 return {
40 # ensure_ceph_pool() creates pool based on service name.
41 'rbd_pool': service,
42 'rbd_user': service,
43 }
44
45
46class ObjectStoreContext(OSContextGenerator):
47 interfaces = ['object-store']
48
49 def __call__(self):
50 """
51 Used to generate template context to be added to glance-api.conf in
52 the presence of an 'object-store' relation.
53 """
54 if not relation_ids('object-store'):
55 return {}
56 return {
57 'swift_store': True,
58 }
59
60
61class HAProxyContext(OSContextGenerator):
62 interfaces = ['cluster']
63
64 def __call__(self):
65 '''
66 Extends the main charmhelpers HAProxyContext with a port mapping
67 specific to this charm.
68 Also used to extend glance-api.conf context with correct bind_port
69 '''
70 haproxy_port = determine_haproxy_port(9292)
71 api_port = determine_api_port(9292)
72
73 ctxt = {
74 'service_ports': {'glance_api': [haproxy_port, api_port]},
75 'bind_port': api_port,
76 }
77 return ctxt
78
79
80class ApacheSSLContext(SSLContext):
81 interfaces = ['https']
82 external_ports = [9292]
83 service_namespace = 'glance'
84
85 def __call__(self):
86 #from glance_utils import service_enabled
87 #if not service_enabled('glance-api'):
88 # return {}
89 return super(ApacheSSLContext, self).__call__()
090
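The context classes above follow the charm-helpers OSContextGenerator convention: calling a context returns a dict of template variables, and an empty dict signals that the backing relation is not yet complete. A minimal, self-contained sketch of that calling convention (the stub class and helper below are illustrative stand-ins, not the charm's code):

# Illustrative sketch of the OSContextGenerator calling convention used by
# the contexts above: a context returns {} until its relation data is
# complete.  These stubs stand in for the charm-helpers originals.

class StubObjectStoreContext(object):
    def __init__(self, object_store_related):
        self.object_store_related = object_store_related

    def __call__(self):
        # Mirrors ObjectStoreContext: nothing to render until the
        # object-store relation exists.
        if not self.object_store_related:
            return {}
        return {'swift_store': True}


def merged_context(contexts):
    """Merge the dicts produced by a list of context callables."""
    ctxt = {}
    for context in contexts:
        ctxt.update(context())
    return ctxt


print(merged_context([StubObjectStoreContext(True)]))   # {'swift_store': True}
print(merged_context([StubObjectStoreContext(False)]))  # {}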
=== added file 'hooks/glance_relations.py'
--- hooks/glance_relations.py 1970-01-01 00:00:00 +0000
+++ hooks/glance_relations.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,320 @@
1#!/usr/bin/python
2import os
3import sys
4
5from glance_utils import (
6 do_openstack_upgrade,
7 ensure_ceph_pool,
8 migrate_database,
9 register_configs,
10 restart_map,
11 CLUSTER_RES,
12 PACKAGES,
13 SERVICES,
14 CHARM,
15 GLANCE_REGISTRY_CONF,
16 GLANCE_REGISTRY_PASTE_INI,
17 GLANCE_API_CONF,
18 GLANCE_API_PASTE_INI,
19 HAPROXY_CONF,
20 CEPH_CONF, )
21
22from charmhelpers.core.hookenv import (
23 config,
24 Hooks,
25 log as juju_log,
26 open_port,
27 relation_get,
28 relation_set,
29 relation_ids,
30 service_name,
31 unit_get,
32 UnregisteredHookError, )
33
34from charmhelpers.core.host import restart_on_change, service_stop
35
36from charmhelpers.fetch import apt_install, apt_update
37
38from charmhelpers.contrib.hahelpers.cluster import (
39 canonical_url, eligible_leader, is_leader)
40
41from charmhelpers.contrib.openstack.utils import (
42 configure_installation_source,
43 get_os_codename_package,
44 openstack_upgrade_available,
45 lsb_release, )
46
47from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_keyring
48from charmhelpers.payload.execd import execd_preinstall
49
50from subprocess import (
51 check_call, )
52
53from commands import getstatusoutput
54
55hooks = Hooks()
56
57CONFIGS = register_configs()
58
59
60@hooks.hook('install')
61def install_hook():
62 juju_log('Installing glance packages')
63 execd_preinstall()
64 src = config('openstack-origin')
65 if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and
66 src == 'distro'):
67 src = 'cloud:precise-folsom'
68
69 configure_installation_source(src)
70
71 apt_update()
72 apt_install(PACKAGES)
73
74 for service in SERVICES:
75 service_stop(service)
76
77
78@hooks.hook('shared-db-relation-joined')
79def db_joined():
80 relation_set(database=config('database'), username=config('database-user'),
81 hostname=unit_get('private-address'))
82
83
84@hooks.hook('shared-db-relation-changed')
85@restart_on_change(restart_map())
86def db_changed():
87 rel = get_os_codename_package("glance-common")
88
89 if 'shared-db' not in CONFIGS.complete_contexts():
90 juju_log('shared-db relation incomplete. Peer not ready?')
91 return
92
93 CONFIGS.write(GLANCE_REGISTRY_CONF)
94 # since folsom, a db connection setting in glance-api.conf is required.
95 if rel != "essex":
96 CONFIGS.write(GLANCE_API_CONF)
97
98 if eligible_leader(CLUSTER_RES):
99 if rel == "essex":
100 (status, output) = getstatusoutput('glance-manage db_version')
101 if status != 0:
102 juju_log('Setting version_control to 0')
103 check_call(["glance-manage", "version_control", "0"])
104
105 juju_log('Cluster leader, performing db sync')
106 migrate_database()
107
108
109@hooks.hook('image-service-relation-joined')
110def image_service_joined(relation_id=None):
111 if not eligible_leader(CLUSTER_RES):
112 return
113
114 relation_data = {
115 'glance-api-server': canonical_url(CONFIGS) + ":9292"
116 }
117
118 juju_log("%s: image-service_joined: To peer glance-api-server=%s" %
119 (CHARM, relation_data['glance-api-server']))
120
121 relation_set(relation_id=relation_id, **relation_data)
122
123
124@hooks.hook('object-store-relation-joined')
125@restart_on_change(restart_map())
126def object_store_joined():
127
128 if 'identity-service' not in CONFIGS.complete_contexts():
129 juju_log('Deferring swift storage configuration until '
130 'an identity-service relation exists')
131 return
132
133 if 'object-store' not in CONFIGS.complete_contexts():
134 juju_log('swift relation incomplete')
135 return
136
137 CONFIGS.write(GLANCE_API_CONF)
138
139
140@hooks.hook('ceph-relation-joined')
141def ceph_joined():
142 if not os.path.isdir('/etc/ceph'):
143 os.mkdir('/etc/ceph')
144 apt_install(['ceph-common', 'python-ceph'])
145
146
147@hooks.hook('ceph-relation-changed')
148@restart_on_change(restart_map())
149def ceph_changed():
150 if 'ceph' not in CONFIGS.complete_contexts():
151 juju_log('ceph relation incomplete. Peer not ready?')
152 return
153
154 service = service_name()
155
156 if not ensure_ceph_keyring(service=service,
157 user='glance', group='glance'):
158 juju_log('Could not create ceph keyring: peer not ready?')
159 return
160
161 CONFIGS.write(GLANCE_API_CONF)
162 CONFIGS.write(CEPH_CONF)
163
164 if eligible_leader(CLUSTER_RES):
165 _config = config()
166 ensure_ceph_pool(service=service,
167 replicas=_config['ceph-osd-replication-count'])
168
169
170@hooks.hook('identity-service-relation-joined')
171def keystone_joined(relation_id=None):
172 if not eligible_leader(CLUSTER_RES):
173 juju_log('Deferring keystone_joined() to service leader.')
174 return
175
176 url = canonical_url(CONFIGS) + ":9292"
177 relation_data = {
178 'service': 'glance',
179 'region': config('region'),
180 'public_url': url,
181 'admin_url': url,
182 'internal_url': url, }
183
184 relation_set(relation_id=relation_id, **relation_data)
185
186
187@hooks.hook('identity-service-relation-changed')
188@restart_on_change(restart_map())
189def keystone_changed():
190 if 'identity-service' not in CONFIGS.complete_contexts():
191 juju_log('identity-service relation incomplete. Peer not ready?')
192 return
193
194 CONFIGS.write(GLANCE_API_CONF)
195 CONFIGS.write(GLANCE_REGISTRY_CONF)
196
197 CONFIGS.write(GLANCE_API_PASTE_INI)
198 CONFIGS.write(GLANCE_REGISTRY_PASTE_INI)
199
200 # Configure any object-store / swift relations now that we have an
201 # identity-service
202 if relation_ids('object-store'):
203 object_store_joined()
204
205 # possibly configure HTTPS for API and registry
206 configure_https()
207
208
209@hooks.hook('config-changed')
210@restart_on_change(restart_map())
211def config_changed():
212 if openstack_upgrade_available('glance-common'):
213 juju_log('Upgrading OpenStack release')
214 do_openstack_upgrade(CONFIGS)
215
216 open_port(9292)
217 configure_https()
218
219 #env_vars = {'OPENSTACK_PORT_MCASTPORT': config("ha-mcastport"),
220 # 'OPENSTACK_SERVICE_API': "glance-api",
221 # 'OPENSTACK_SERVICE_REGISTRY': "glance-registry"}
222 #save_script_rc(**env_vars)
223
224
225@hooks.hook('cluster-relation-changed')
226@restart_on_change(restart_map())
227def cluster_changed():
228 CONFIGS.write(GLANCE_API_CONF)
229 CONFIGS.write(HAPROXY_CONF)
230
231
232@hooks.hook('upgrade-charm')
233def upgrade_charm():
234 cluster_changed()
235
236
237@hooks.hook('ha-relation-joined')
238def ha_relation_joined():
239 corosync_bindiface = config("ha-bindiface")
240 corosync_mcastport = config("ha-mcastport")
241 vip = config("vip")
242 vip_iface = config("vip_iface")
243 vip_cidr = config("vip_cidr")
244
245 #if vip and vip_iface and vip_cidr and \
246 # corosync_bindiface and corosync_mcastport:
247
248 resources = {
249 'res_glance_vip': 'ocf:heartbeat:IPaddr2',
250 'res_glance_haproxy': 'lsb:haproxy', }
251
252 resource_params = {
253 'res_glance_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' %
254 (vip, vip_cidr, vip_iface),
255 'res_glance_haproxy': 'op monitor interval="5s"', }
256
257 init_services = {
258 'res_glance_haproxy': 'haproxy', }
259
260 clones = {
261 'cl_glance_haproxy': 'res_glance_haproxy', }
262
263 relation_set(init_services=init_services,
264 corosync_bindiface=corosync_bindiface,
265 corosync_mcastport=corosync_mcastport,
266 resources=resources,
267 resource_params=resource_params,
268 clones=clones)
269
270
271@hooks.hook('ha-relation-changed')
272def ha_relation_changed():
273 clustered = relation_get('clustered')
274 if not clustered or clustered in [None, 'None', '']:
275 juju_log('ha_changed: hacluster subordinate is not fully clustered.')
276 return
277 if not is_leader(CLUSTER_RES):
278 juju_log('ha_changed: hacluster complete but we are not leader.')
279 return
280
281 # reconfigure endpoint in keystone to point to clustered VIP.
282 [keystone_joined(rid) for rid in relation_ids('identity-service')]
283
284 # notify glance client services of reconfigured URL.
285 [image_service_joined(rid) for rid in relation_ids('image-service')]
286
287
288@hooks.hook('ceph-relation-broken',
289 'identity-service-relation-broken',
290 'object-store-relation-broken',
291 'shared-db-relation-broken')
292def relation_broken():
293 CONFIGS.write_all()
294
295
296def configure_https():
297 '''
298 Enables SSL API Apache config if appropriate and kicks
299 identity-service and image-service with any required
300 updates
301 '''
302 CONFIGS.write_all()
303 if 'https' in CONFIGS.complete_contexts():
304 cmd = ['a2ensite', 'openstack_https_frontend']
305 check_call(cmd)
306 else:
307 cmd = ['a2dissite', 'openstack_https_frontend']
308 check_call(cmd)
309
310 for r_id in relation_ids('identity-service'):
311 keystone_joined(relation_id=r_id)
312 for r_id in relation_ids('image-service'):
313 image_service_joined(relation_id=r_id)
314
315
316if __name__ == '__main__':
317 try:
318 hooks.execute(sys.argv)
319 except UnregisteredHookError as e:
320 juju_log('Unknown hook {} - skiping.'.format(e))
0321
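glance_relations.py dispatches through the charm-helpers Hooks registry: handlers are registered per hook name and hooks.execute(sys.argv) selects one by the basename of the invoked script, which is why the hook files below become symlinks to glance_relations.py. A rough approximation of that mechanism, written without charm-helpers (MiniHooks and UnregisteredHook are illustrative names, not the library's implementation):

# Rough approximation of the charm-helpers Hooks/@hooks.hook dispatch used by
# glance_relations.py; MiniHooks and UnregisteredHook are illustrative names,
# not the library's actual implementation.
import os
import sys


class UnregisteredHook(Exception):
    pass


class MiniHooks(object):
    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        def wrapper(func):
            for name in (hook_names or [func.__name__]):
                self._registry[name] = func
            return func
        return wrapper

    def execute(self, args):
        # Juju invokes each hook through a symlink named after the hook, so
        # the basename of argv[0] selects the handler.
        hook_name = os.path.basename(args[0])
        if hook_name not in self._registry:
            raise UnregisteredHook(hook_name)
        self._registry[hook_name]()


hooks = MiniHooks()


@hooks.hook('config-changed')
def config_changed():
    print('config-changed handled')


if __name__ == '__main__':
    try:
        hooks.execute(sys.argv)
    except UnregisteredHook as e:
        print('Unknown hook {} - skipping.'.format(e))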
=== added file 'hooks/glance_utils.py'
--- hooks/glance_utils.py 1970-01-01 00:00:00 +0000
+++ hooks/glance_utils.py 2013-10-15 01:35:02 +0000
@@ -0,0 +1,193 @@
1#!/usr/bin/python
2
3import os
4import subprocess
5
6import glance_contexts
7
8from collections import OrderedDict
9
10from charmhelpers.fetch import (
11 apt_install,
12 apt_update, )
13
14from charmhelpers.core.hookenv import (
15 config,
16 log as juju_log,
17 relation_ids)
18
19from charmhelpers.contrib.openstack import (
20 templating,
21 context, )
22
23from charmhelpers.contrib.hahelpers.cluster import (
24 eligible_leader,
25)
26
27from charmhelpers.contrib.storage.linux.ceph import (
28 create_pool as ceph_create_pool,
29 pool_exists as ceph_pool_exists)
30
31from charmhelpers.contrib.openstack.utils import (
32 get_os_codename_install_source,
33 get_os_codename_package,
34 configure_installation_source, )
35
36CLUSTER_RES = "res_glance_vip"
37
38PACKAGES = [
39 "apache2", "glance", "python-mysqldb", "python-swift",
40 "python-keystone", "uuid", "haproxy", ]
41
42SERVICES = [
43 "glance-api", "glance-registry", ]
44
45CHARM = "glance"
46
47GLANCE_REGISTRY_CONF = "/etc/glance/glance-registry.conf"
48GLANCE_REGISTRY_PASTE_INI = "/etc/glance/glance-registry-paste.ini"
49GLANCE_API_CONF = "/etc/glance/glance-api.conf"
50GLANCE_API_PASTE_INI = "/etc/glance/glance-api-paste.ini"
51CEPH_CONF = "/etc/ceph/ceph.conf"
52HAPROXY_CONF = "/etc/haproxy/haproxy.cfg"
53HTTPS_APACHE_CONF = "/etc/apache2/sites-available/openstack_https_frontend"
54HTTPS_APACHE_24_CONF = "/etc/apache2/sites-available/" \
55 "openstack_https_frontend.conf"
56
57CONF_DIR = "/etc/glance"
58
59TEMPLATES = 'templates/'
60
61CONFIG_FILES = OrderedDict([
62 (GLANCE_REGISTRY_CONF, {
63 'hook_contexts': [context.SharedDBContext(),
64 context.IdentityServiceContext()],
65 'services': ['glance-registry']
66 }),
67 (GLANCE_API_CONF, {
68 'hook_contexts': [context.SharedDBContext(),
69 context.IdentityServiceContext(),
70 glance_contexts.CephGlanceContext(),
71 glance_contexts.ObjectStoreContext(),
72 glance_contexts.HAProxyContext()],
73 'services': ['glance-api']
74 }),
75 (GLANCE_API_PASTE_INI, {
76 'hook_contexts': [context.IdentityServiceContext()],
77 'services': ['glance-api']
78 }),
79 (GLANCE_REGISTRY_PASTE_INI, {
80 'hook_contexts': [context.IdentityServiceContext()],
81 'services': ['glance-registry']
82 }),
83 (CEPH_CONF, {
84 'hook_contexts': [context.CephContext()],
85 'services': []
86 }),
87 (HAPROXY_CONF, {
88 'hook_contexts': [context.HAProxyContext(),
89 glance_contexts.HAProxyContext()],
90 'services': ['haproxy'],
91 }),
92 (HTTPS_APACHE_CONF, {
93 'hook_contexts': [glance_contexts.ApacheSSLContext()],
94 'services': ['apache2'],
95 }),
96 (HTTPS_APACHE_24_CONF, {
97 'hook_contexts': [glance_contexts.ApacheSSLContext()],
98 'services': ['apache2'],
99 })
100])
101
102
103def register_configs():
104 # Register config files with their respective contexts.
105 # Registration of some configs may not be required depending on
106 # the existence of certain relations.
107 release = get_os_codename_package('glance-common', fatal=False) or 'essex'
108 configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
109 openstack_release=release)
110
111 confs = [GLANCE_REGISTRY_CONF,
112 GLANCE_API_CONF,
113 GLANCE_API_PASTE_INI,
114 GLANCE_REGISTRY_PASTE_INI,
115 HAPROXY_CONF]
116
117 if relation_ids('ceph'):
118 if not os.path.isdir('/etc/ceph'):
119 os.mkdir('/etc/ceph')
120 confs.append(CEPH_CONF)
121
122 for conf in confs:
123 configs.register(conf, CONFIG_FILES[conf]['hook_contexts'])
124
125 if os.path.exists('/etc/apache2/conf-available'):
126 configs.register(HTTPS_APACHE_24_CONF,
127 CONFIG_FILES[HTTPS_APACHE_24_CONF]['hook_contexts'])
128 else:
129 configs.register(HTTPS_APACHE_CONF,
130 CONFIG_FILES[HTTPS_APACHE_CONF]['hook_contexts'])
131
132 return configs
133
134
135def migrate_database():
136 '''Runs glance-manage to initialize a new database or migrate existing'''
137 cmd = ['glance-manage', 'db_sync']
138 subprocess.check_call(cmd)
139
140
141def ensure_ceph_pool(service, replicas):
142 '''Creates a ceph pool for service if one does not exist'''
143 # TODO: candidate for moving somewhere sharable (e.g. charm-helpers).
144 if not ceph_pool_exists(service=service, name=service):
145 ceph_create_pool(service=service, name=service, replicas=replicas)
146
147
148def do_openstack_upgrade(configs):
149 """
150 Perform an upgrade of glance. Takes care of upgrading packages, rewriting
151 configs, database migration and potentially any other post-upgrade
152 actions.
153
154 :param configs: The charm's main OSConfigRenderer object.
155
156 """
157 new_src = config('openstack-origin')
158 new_os_rel = get_os_codename_install_source(new_src)
159
160 juju_log('Performing OpenStack upgrade to %s.' % (new_os_rel))
161
162 configure_installation_source(new_src)
163 dpkg_opts = [
164 '--option', 'Dpkg::Options::=--force-confnew',
165 '--option', 'Dpkg::Options::=--force-confdef',
166 ]
167 apt_update()
168 apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True)
169
170 # set CONFIGS to load templates from new release and regenerate config
171 configs.set_release(openstack_release=new_os_rel)
172 configs.write_all()
173
174 if eligible_leader(CLUSTER_RES):
175 migrate_database()
176
177
178def restart_map():
179 '''
180 Determine the correct resource map to be passed to
181 charmhelpers.core.restart_on_change() based on the services configured.
182
183 :returns: dict: A dictionary mapping config file to lists of services
184 that should be restarted when file changes.
185 '''
186 _map = []
187 for f, ctxt in CONFIG_FILES.iteritems():
188 svcs = []
189 for svc in ctxt['services']:
190 svcs.append(svc)
191 if svcs:
192 _map.append((f, svcs))
193 return OrderedDict(_map)
0194
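restart_map() above is consumed by the restart_on_change decorator imported in glance_relations.py: it snapshots the mapped files before a hook body runs and restarts the associated services only for files whose contents actually changed. A simplified, self-contained approximation of that idea (file_hash and restart below are stand-ins, not the charm-helpers implementation):

# Simplified approximation of the restart_on_change() behaviour that
# restart_map() feeds; file_hash() and restart() are stand-ins here, not the
# charm-helpers implementation.
import hashlib
import os
from functools import wraps


def file_hash(path):
    """Checksum a file, or return None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def restart(service):
    print('restarting %s' % service)   # real code would drive service(8)


def restart_on_change(restart_map):
    """Restart mapped services when a watched file changes during the hook."""
    def decorator(func):
        @wraps(func)
        def wrapped(*args, **kwargs):
            before = dict((path, file_hash(path)) for path in restart_map)
            result = func(*args, **kwargs)
            for path, services in restart_map.items():
                if file_hash(path) != before[path]:
                    for svc in services:
                        restart(svc)
            return result
        return wrapped
    return decorator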
=== modified symlink 'hooks/ha-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/ha-relation-joined'
=== target changed u'glance-relations' => u'glance_relations.py'
=== added symlink 'hooks/identity-service-relation-broken'
=== target is u'glance_relations.py'
=== modified symlink 'hooks/identity-service-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/identity-service-relation-joined'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/image-service-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/image-service-relation-joined'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/install'
=== target changed u'glance-relations' => u'glance_relations.py'
=== removed directory 'hooks/lib'
=== removed file 'hooks/lib/openstack-common'
--- hooks/lib/openstack-common 2013-06-03 18:39:29 +0000
+++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000
@@ -1,813 +0,0 @@
1#!/bin/bash -e
2
3# Common utility functions used across all OpenStack charms.
4
5error_out() {
6 juju-log "$CHARM ERROR: $@"
7 exit 1
8}
9
10function service_ctl_status {
11 # Return 0 if a service is running, 1 otherwise.
12 local svc="$1"
13 local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }')
14 case $status in
15 "start") return 0 ;;
16 "stop") return 1 ;;
17 *) error_out "Unexpected status of service $svc: $status" ;;
18 esac
19}
20
21function service_ctl {
22 # control a specific service, or all (as defined by $SERVICES)
23 # service restarts will only occur depending on global $CONFIG_CHANGED,
24 # which should be updated in charm's set_or_update().
25 local config_changed=${CONFIG_CHANGED:-True}
26 if [[ $1 == "all" ]] ; then
27 ctl="$SERVICES"
28 else
29 ctl="$1"
30 fi
31 action="$2"
32 if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then
33 error_out "ERROR service_ctl: Not enough arguments"
34 fi
35
36 for i in $ctl ; do
37 case $action in
38 "start")
39 service_ctl_status $i || service $i start ;;
40 "stop")
41 service_ctl_status $i && service $i stop || return 0 ;;
42 "restart")
43 if [[ "$config_changed" == "True" ]] ; then
44 service_ctl_status $i && service $i restart || service $i start
45 fi
46 ;;
47 esac
48 if [[ $? != 0 ]] ; then
49 juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action"
50 fi
51 done
52 # all configs should have been reloaded on restart of all services, reset
53 # flag if its being used.
54 if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] &&
55 [[ "$ctl" == "all" ]]; then
56 CONFIG_CHANGED="False"
57 fi
58}
59
60function configure_install_source {
61 # Setup and configure installation source based on a config flag.
62 local src="$1"
63
64 # Default to installing from the main Ubuntu archive.
65 [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0
66
67 . /etc/lsb-release
68
69 # standard 'ppa:someppa/name' format.
70 if [[ "${src:0:4}" == "ppa:" ]] ; then
71 juju-log "$CHARM: Configuring installation from custom src ($src)"
72 add-apt-repository -y "$src" || error_out "Could not configure PPA access."
73 return 0
74 fi
75
76 # standard 'deb http://url/ubuntu main' entries. gpg key ids must
77 # be appended to the end of url after a |, ie:
78 # 'deb http://url/ubuntu main|$GPGKEYID'
79 if [[ "${src:0:3}" == "deb" ]] ; then
80 juju-log "$CHARM: Configuring installation from custom src URL ($src)"
81 if echo "$src" | grep -q "|" ; then
82 # gpg key id tagged to end of url followed by a |
83 url=$(echo $src | cut -d'|' -f1)
84 key=$(echo $src | cut -d'|' -f2)
85 juju-log "$CHARM: Importing repository key: $key"
86 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \
87 juju-log "$CHARM WARN: Could not import key from keyserver: $key"
88 else
89 juju-log "$CHARM No repository key specified."
90 url="$src"
91 fi
92 echo "$url" > /etc/apt/sources.list.d/juju_deb.list
93 return 0
94 fi
95
96 # Cloud Archive
97 if [[ "${src:0:6}" == "cloud:" ]] ; then
98
99 # current os releases supported by the UCA.
100 local cloud_archive_versions="folsom grizzly"
101
102 local ca_rel=$(echo $src | cut -d: -f2)
103 local u_rel=$(echo $ca_rel | cut -d- -f1)
104 local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1)
105
106 [[ "$u_rel" != "$DISTRIB_CODENAME" ]] &&
107 error_out "Cannot install from Cloud Archive pocket $src " \
108 "on this Ubuntu version ($DISTRIB_CODENAME)!"
109
110 valid_release=""
111 for rel in $cloud_archive_versions ; do
112 if [[ "$os_rel" == "$rel" ]] ; then
113 valid_release=1
114 juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive."
115 fi
116 done
117 if [[ -z "$valid_release" ]] ; then
118 error_out "OpenStack release ($os_rel) not supported by "\
119 "the Ubuntu Cloud Archive."
120 fi
121
122 # CA staging repos are standard PPAs.
123 if echo $ca_rel | grep -q "staging" ; then
124 add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging
125 return 0
126 fi
127
128 # the others are LP-external deb repos.
129 case "$ca_rel" in
130 "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
131 "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
132 "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
133 "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
134 *) error_out "Invalid Cloud Archive repo specified: $src"
135 esac
136
137 apt-get -y install ubuntu-cloud-keyring
138 entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main"
139 echo "$entry" \
140 >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list
141 return 0
142 fi
143
144 error_out "Invalid installation source specified in config: $src"
145
146}
147
148get_os_codename_install_source() {
149 # derive the openstack release provided by a supported installation source.
150 local rel="$1"
151 local codename="unknown"
152 . /etc/lsb-release
153
154 # map ubuntu releases to the openstack version shipped with it.
155 if [[ "$rel" == "distro" ]] ; then
156 case "$DISTRIB_CODENAME" in
157 "oneiric") codename="diablo" ;;
158 "precise") codename="essex" ;;
159 "quantal") codename="folsom" ;;
160 "raring") codename="grizzly" ;;
161 esac
162 fi
163
164 # derive version from cloud archive strings.
165 if [[ "${rel:0:6}" == "cloud:" ]] ; then
166 rel=$(echo $rel | cut -d: -f2)
167 local u_rel=$(echo $rel | cut -d- -f1)
168 local ca_rel=$(echo $rel | cut -d- -f2)
169 if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then
170 case "$ca_rel" in
171 "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging")
172 codename="folsom" ;;
173 "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging")
174 codename="grizzly" ;;
175 esac
176 fi
177 fi
178
179 # have a guess based on the deb string provided
180 if [[ "${rel:0:3}" == "deb" ]] || \
181 [[ "${rel:0:3}" == "ppa" ]] ; then
182 CODENAMES="diablo essex folsom grizzly havana"
183 for cname in $CODENAMES; do
184 if echo $rel | grep -q $cname; then
185 codename=$cname
186 fi
187 done
188 fi
189 echo $codename
190}
191
192get_os_codename_package() {
193 local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none"
194 pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs
195 case "${pkg_vers:0:6}" in
196 "2011.2") echo "diablo" ;;
197 "2012.1") echo "essex" ;;
198 "2012.2") echo "folsom" ;;
199 "2013.1") echo "grizzly" ;;
200 "2013.2") echo "havana" ;;
201 esac
202}
203
204get_os_version_codename() {
205 case "$1" in
206 "diablo") echo "2011.2" ;;
207 "essex") echo "2012.1" ;;
208 "folsom") echo "2012.2" ;;
209 "grizzly") echo "2013.1" ;;
210 "havana") echo "2013.2" ;;
211 esac
212}
213
214get_ip() {
215 dpkg -l | grep -q python-dnspython || {
216 apt-get -y install python-dnspython 2>&1 > /dev/null
217 }
218 hostname=$1
219 python -c "
220import dns.resolver
221import socket
222try:
223 # Test to see if already an IPv4 address
224 socket.inet_aton('$hostname')
225 print '$hostname'
226except socket.error:
227 try:
228 answers = dns.resolver.query('$hostname', 'A')
229 if answers:
230 print answers[0].address
231 except dns.resolver.NXDOMAIN:
232 pass
233"
234}
235
236# Common storage routines used by cinder, nova-volume and swift-storage.
237clean_storage() {
238 # if configured to overwrite existing storage, we unmount the block-dev
239 # if mounted and clear any previous pv signatures
240 local block_dev="$1"
241 juju-log "Cleaning storage '$block_dev'"
242 if grep -q "^$block_dev" /proc/mounts ; then
243 mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }')
244 juju-log "Unmounting $block_dev from $mp"
245 umount "$mp" || error_out "ERROR: Could not unmount storage from $mp"
246 fi
247 if pvdisplay "$block_dev" >/dev/null 2>&1 ; then
248 juju-log "Removing existing LVM PV signatures from $block_dev"
249
250 # deactivate any volgroups that may be built on this dev
251 vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }')
252 if [[ -n "$vg" ]] ; then
253 juju-log "Deactivating existing volume group: $vg"
254 vgchange -an "$vg" ||
255 error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?"
256 fi
257 echo "yes" | pvremove -ff "$block_dev" ||
258 error_out "Could not pvremove $block_dev"
259 else
260 juju-log "Zapping disk of all GPT and MBR structures"
261 sgdisk --zap-all $block_dev ||
262 error_out "Unable to zap $block_dev"
263 fi
264}
265
266function get_block_device() {
267 # given a string, return full path to the block device for that
268 # if input is not a block device, find a loopback device
269 local input="$1"
270
271 case "$input" in
272 /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist."
273 echo "$input"; return 0;;
274 /*) :;;
275 *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist."
276 echo "/dev/$input"; return 0;;
277 esac
278
279 # this represents a file
280 # support "/path/to/file|5G"
281 local fpath size oifs="$IFS"
282 if [ "${input#*|}" != "${input}" ]; then
283 size=${input##*|}
284 fpath=${input%|*}
285 else
286 fpath=${input}
287 size=5G
288 fi
289
290 ## loop devices are not namespaced. This is bad for containers.
291 ## it means that the output of 'losetup' may have the given $fpath
292 ## in it, but that may not represent this containers $fpath, but
293 ## another containers. To address that, we really need to
294 ## allow some uniq container-id to be expanded within path.
295 ## TODO: find a unique container-id that will be consistent for
296 ## this container throughout its lifetime and expand it
297 ## in the fpath.
298 # fpath=${fpath//%{id}/$THAT_ID}
299
300 local found=""
301 # parse through 'losetup -a' output, looking for this file
302 # output is expected to look like:
303 # /dev/loop0: [0807]:961814 (/tmp/my.img)
304 found=$(losetup -a |
305 awk 'BEGIN { found=0; }
306 $3 == f { sub(/:$/,"",$1); print $1; found=found+1; }
307 END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \
308 f="($fpath)")
309
310 if [ $? -ne 0 ]; then
311 echo "multiple devices found for $fpath: $found" 1>&2
312 return 1;
313 fi
314
315 [ -n "$found" -a -b "$found" ] && { echo "$found"; return 1; }
316
317 if [ -n "$found" ]; then
318 echo "confused, $found is not a block device for $fpath";
319 return 1;
320 fi
321
322 # no existing device was found, create one
323 mkdir -p "${fpath%/*}"
324 truncate --size "$size" "$fpath" ||
325 { echo "failed to create $fpath of size $size"; return 1; }
326
327 found=$(losetup --find --show "$fpath") ||
328 { echo "failed to setup loop device for $fpath" 1>&2; return 1; }
329
330 echo "$found"
331 return 0
332}
333
334HAPROXY_CFG=/etc/haproxy/haproxy.cfg
335HAPROXY_DEFAULT=/etc/default/haproxy
336##########################################################################
337# Description: Configures HAProxy services for Openstack API's
338# Parameters:
339 # Space delimited list of service:haproxy_port:api_port:mode combinations for which
340 # haproxy service configuration should be generated. The function
341# assumes the name of the peer relation is 'cluster' and that every
342# service unit in the peer relation is running the same services.
343#
344# Services that do not specify :mode in parameter will default to http.
345#
346# Example
347# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http
348##########################################################################
349configure_haproxy() {
350 local address=`unit-get private-address`
351 local name=${JUJU_UNIT_NAME////-}
352 cat > $HAPROXY_CFG << EOF
353global
354 log 127.0.0.1 local0
355 log 127.0.0.1 local1 notice
356 maxconn 20000
357 user haproxy
358 group haproxy
359 spread-checks 0
360
361defaults
362 log global
363 mode http
364 option httplog
365 option dontlognull
366 retries 3
367 timeout queue 1000
368 timeout connect 1000
369 timeout client 30000
370 timeout server 30000
371
372listen stats :8888
373 mode http
374 stats enable
375 stats hide-version
376 stats realm Haproxy\ Statistics
377 stats uri /
378 stats auth admin:password
379
380EOF
381 for service in $@; do
382 local service_name=$(echo $service | cut -d : -f 1)
383 local haproxy_listen_port=$(echo $service | cut -d : -f 2)
384 local api_listen_port=$(echo $service | cut -d : -f 3)
385 local mode=$(echo $service | cut -d : -f 4)
386 [[ -z "$mode" ]] && mode="http"
387 juju-log "Adding haproxy configuration entry for $service "\
388 "($haproxy_listen_port -> $api_listen_port)"
389 cat >> $HAPROXY_CFG << EOF
390listen $service_name 0.0.0.0:$haproxy_listen_port
391 balance roundrobin
392 mode $mode
393 option ${mode}log
394 server $name $address:$api_listen_port check
395EOF
396 local r_id=""
397 local unit=""
398 for r_id in `relation-ids cluster`; do
399 for unit in `relation-list -r $r_id`; do
400 local unit_name=${unit////-}
401 local unit_address=`relation-get -r $r_id private-address $unit`
402 if [ -n "$unit_address" ]; then
403 echo " server $unit_name $unit_address:$api_listen_port check" \
404 >> $HAPROXY_CFG
405 fi
406 done
407 done
408 done
409 echo "ENABLED=1" > $HAPROXY_DEFAULT
410 service haproxy restart
411}
412
413##########################################################################
414 # Description: Query HA interface to determine if the cluster is configured
415# Returns: 0 if configured, 1 if not configured
416##########################################################################
417is_clustered() {
418 local r_id=""
419 local unit=""
420 for r_id in $(relation-ids ha); do
421 if [ -n "$r_id" ]; then
422 for unit in $(relation-list -r $r_id); do
423 clustered=$(relation-get -r $r_id clustered $unit)
424 if [ -n "$clustered" ]; then
425 juju-log "Unit is haclustered"
426 return 0
427 fi
428 done
429 fi
430 done
431 juju-log "Unit is not haclustered"
432 return 1
433}
434
435##########################################################################
436# Description: Return a list of all peers in cluster relations
437##########################################################################
438peer_units() {
439 local peers=""
440 local r_id=""
441 for r_id in $(relation-ids cluster); do
442 peers="$peers $(relation-list -r $r_id)"
443 done
444 echo $peers
445}
446
447##########################################################################
448# Description: Determines whether the current unit is the oldest of all
449# its peers - supports partial leader election
450# Returns: 0 if oldest, 1 if not
451##########################################################################
452oldest_peer() {
453 peers=$1
454 local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2)
455 for peer in $peers; do
456 echo "Comparing $JUJU_UNIT_NAME with peers: $peers"
457 local r_unit_no=$(echo $peer | cut -d / -f 2)
458 if (($r_unit_no<$l_unit_no)); then
459 juju-log "Not oldest peer; deferring"
460 return 1
461 fi
462 done
463 juju-log "Oldest peer; might take charge?"
464 return 0
465}
466
The diff has been truncated for viewing.