Merge lp:~openstack-charmers/charms/precise/glance/python-redux into lp:~charmers/charms/precise/glance/trunk
Proposed by: Adam Gandelman
Status: Merged
Merged at revision: 37
Proposed branch: lp:~openstack-charmers/charms/precise/glance/python-redux
Merge into: lp:~charmers/charms/precise/glance/trunk
Diff against target: 6608 lines (+4868/-1413), 47 files modified:
  .coveragerc (+6/-0)
  Makefile (+11/-0)
  README.md (+89/-0)
  charm-helpers.yaml (+9/-0)
  config.yaml (+12/-2)
  hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
  hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
  hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
  hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
  hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
  hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
  hooks/charmhelpers/contrib/storage/linux/ceph.py (+359/-0)
  hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
  hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
  hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
  hooks/charmhelpers/core/hookenv.py (+340/-0)
  hooks/charmhelpers/core/host.py (+241/-0)
  hooks/charmhelpers/fetch/__init__.py (+209/-0)
  hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
  hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
  hooks/charmhelpers/payload/__init__.py (+1/-0)
  hooks/charmhelpers/payload/execd.py (+50/-0)
  hooks/glance-common (+0/-133)
  hooks/glance-relations (+0/-464)
  hooks/glance_contexts.py (+89/-0)
  hooks/glance_relations.py (+320/-0)
  hooks/glance_utils.py (+193/-0)
  hooks/lib/openstack-common (+0/-813)
  metadata.yaml (+2/-0)
  revision (+1/-1)
  templates/ceph.conf (+12/-0)
  templates/essex/glance-api-paste.ini (+51/-0)
  templates/essex/glance-api.conf (+86/-0)
  templates/essex/glance-registry-paste.ini (+28/-0)
  templates/folsom/glance-api-paste.ini (+68/-0)
  templates/folsom/glance-api.conf (+94/-0)
  templates/folsom/glance-registry-paste.ini (+28/-0)
  templates/glance-registry.conf (+19/-0)
  templates/grizzly/glance-api-paste.ini (+68/-0)
  templates/grizzly/glance-registry-paste.ini (+30/-0)
  templates/haproxy.cfg (+37/-0)
  templates/havana/glance-api-paste.ini (+71/-0)
  templates/openstack_https_frontend (+23/-0)
  unit_tests/__init__.py (+3/-0)
  unit_tests/test_glance_relations.py (+401/-0)
  unit_tests/test_utils.py (+118/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/glance/python-redux
Related bugs: none
Reviewer: charmers (Pending)
Review via email: mp+191082@code.launchpad.net
Commit message
Description of the change
Update of all Havana / Saucy / python-redux work:
* Full python rewrite using new OpenStack charm-helpers.
* Test coverage
* Havana support
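For reviewers unfamiliar with the python-redux pattern: the old `glance-relations` shell script is replaced by `glance_relations.py`, with the hook symlinks retargeted accordingly (see the symlink changes in the diff), and hook names are mapped to Python functions via a `Hooks` decorator registry. A minimal self-contained sketch of that dispatch pattern (a simplification for illustration, not the actual charm-helpers implementation; the hook bodies are hypothetical):

```python
import sys

class Hooks(object):
    """Simplified stand-in for the charm-helpers hook registry."""
    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        # Decorator: register one function under one or more hook names.
        def wrapper(fn):
            for name in hook_names:
                self._registry[name] = fn
            return fn
        return wrapper

    def execute(self, args):
        # Juju invokes the hook via a symlink; dispatch on its basename.
        hook_name = args[0].rsplit('/', 1)[-1]
        self._registry[hook_name]()

hooks = Hooks()

@hooks.hook('install')
def install():
    print('installing glance packages')

@hooks.hook('ceph-relation-joined', 'ceph-relation-changed')
def ceph_changed():
    print('configuring ceph backend')

# Juju would run e.g. hooks/ceph-relation-joined (a symlink to the script):
hooks.execute(['hooks/ceph-relation-joined'])
```

Because the hooks are plain Python functions in a registry, they can be imported and exercised directly by the new unit tests, which is what makes the "Test coverage" item above practical.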
Preview Diff
=== added file '.coveragerc'
--- .coveragerc	1970-01-01 00:00:00 +0000
+++ .coveragerc	2013-10-15 01:35:02 +0000
@@ -0,0 +1,6 @@
+[report]
+# Regexes for lines to exclude from consideration
+exclude_lines =
+    if __name__ == .__main__.:
+include=
+    hooks/glance_*
=== added file 'Makefile'
--- Makefile	1970-01-01 00:00:00 +0000
+++ Makefile	2013-10-15 01:35:02 +0000
@@ -0,0 +1,11 @@
+#!/usr/bin/make
+
+lint:
+	@flake8 --exclude hooks/charmhelpers hooks
+	@charm proof
+
+sync:
+	@charm-helper-sync -c charm-helpers.yaml
+
+test:
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
=== added file 'README.md'
--- README.md	1970-01-01 00:00:00 +0000
+++ README.md	2013-10-15 01:35:02 +0000
@@ -0,0 +1,89 @@
+Overview
+--------
+
+This charm provides the Glance image service for OpenStack. It is intended to
+be used alongside the other OpenStack components, starting with the Essex
+release in Ubuntu 12.04.
+
+Usage
+-----
+
+Glance may be deployed in a number of ways. This charm focuses on 3 main
+configurations. All require the existence of the other core OpenStack
+services deployed via Juju charms, specifically: mysql, keystone and
+nova-cloud-controller. The following assumes these services have already
+been deployed.
+
+Local Storage
+=============
+
+In this configuration, Glance uses the local storage available on the server
+to store image data:
+
+    juju deploy glance
+    juju add-relation glance keystone
+    juju add-relation glance mysql
+    juju add-relation glance nova-cloud-controller
+
+Swift backed storage
+====================
+
+Glance can also use Swift Object storage for image storage. Swift is often
+deployed as part of an OpenStack cloud and provides increased resilience and
+scale when compared to using local disk storage. This configuration assumes
+that you have already deployed Swift using the swift-proxy and swift-storage
+charms:
+
+    juju deploy glance
+    juju add-relation glance keystone
+    juju add-relation glance mysql
+    juju add-relation glance nova-cloud-controller
+    juju add-relation glance swift-proxy
+
+This configuration can be used to support Glance in HA/Scale-out deployments.
+
+Ceph backed storage
+===================
+
+In this configuration, Glance uses Ceph based object storage to provide
+scalable, resilient storage of images. This configuration assumes that you
+have already deployed Ceph using the ceph charm:
+
+    juju deploy glance
+    juju add-relation glance keystone
+    juju add-relation glance mysql
+    juju add-relation glance nova-cloud-controller
+    juju add-relation glance ceph
+
+This configuration can also be used to support Glance in HA/Scale-out
+deployments.
+
+Glance HA/Scale-out
+===================
+
+The Glance charm can also be used in a HA/scale-out configuration using
+the hacluster charm:
+
+    juju deploy -n 3 glance
+    juju deploy hacluster haglance
+    juju set glance vip=<virtual IP address to access glance over>
+    juju add-relation glance haglance
+    juju add-relation glance mysql
+    juju add-relation glance keystone
+    juju add-relation glance nova-cloud-controller
+    juju add-relation glance ceph|swift-proxy
+
+In this configuration, 3 service units host the Glance image service;
+API requests are load balanced across all 3 service units via the
+configured virtual IP address (which is also registered into Keystone
+as the endpoint for Glance).
+
+Note that Glance in this configuration must be used with either Ceph or
+Swift providing backing image storage.
+
+Contact Information
+-------------------
+
+Author: Adam Gandelman <adamg@canonical.com>
+Report bugs at: http://bugs.launchpad.net/charms
+Location: http://jujucharms.com
=== added file 'charm-helpers.yaml'
--- charm-helpers.yaml	1970-01-01 00:00:00 +0000
+++ charm-helpers.yaml	2013-10-15 01:35:02 +0000
@@ -0,0 +1,9 @@
+branch: lp:charm-helpers
+destination: hooks/charmhelpers
+include:
+    - core
+    - fetch
+    - contrib.openstack
+    - contrib.hahelpers
+    - contrib.storage.linux.ceph
+    - payload.execd
=== modified file 'config.yaml'
--- config.yaml	2013-09-18 09:12:13 +0000
+++ config.yaml	2013-10-15 01:35:02 +0000
@@ -14,11 +14,11 @@
       Note that updating this setting to a source that is known to
       provide a later version of OpenStack will trigger a software
      upgrade.
-  db-user:
+  database-user:
     default: glance
     type: string
     description: Database username
-  glance-db:
+  database:
     default: glance
     type: string
     description: Glance database name.
@@ -26,6 +26,16 @@
     default: RegionOne
     type: string
     description: OpenStack Region
+  ceph-osd-replication-count:
+    default: 2
+    type: int
+    description: |
+      This value dictates the number of replicas ceph must make of any
+      object it stores within the images rbd pool. Of course, this only
+      applies if using Ceph as a backend store. Note that once the images
+      rbd pool has been created, changing this value will not have any
+      effect (although it can be changed in ceph by manually configuring
+      your ceph cluster).
   # HA configuration settings
   vip:
     type: string
=== added file 'hooks/__init__.py'
=== added symlink 'hooks/ceph-relation-broken'
=== target is u'glance_relations.py'
=== modified symlink 'hooks/ceph-relation-changed'
=== target changed u'glance-relations' => u'glance_relations.py'
=== modified symlink 'hooks/ceph-relation-joined'
=== target changed u'glance-relations' => u'glance_relations.py'
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py	2013-10-15 01:35:02 +0000
@@ -0,0 +1,58 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    relation_get,
+    relation_ids,
+    related_units as relation_list,
+    log,
+    INFO,
+)
+
+
+def get_cert():
+    cert = config_get('ssl_cert')
+    key = config_get('ssl_key')
+    if not (cert and key):
+        log("Inspecting identity-service relations for SSL certificate.",
+            level=INFO)
+        cert = key = None
+        for r_id in relation_ids('identity-service'):
+            for unit in relation_list(r_id):
+                if not cert:
+                    cert = relation_get('ssl_cert',
+                                        rid=r_id, unit=unit)
+                if not key:
+                    key = relation_get('ssl_key',
+                                       rid=r_id, unit=unit)
+    return (cert, key)
+
+
+def get_ca_cert():
+    ca_cert = None
+    log("Inspecting identity-service relations for CA SSL certificate.",
+        level=INFO)
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            if not ca_cert:
+                ca_cert = relation_get('ca_cert',
+                                       rid=r_id, unit=unit)
+    return ca_cert
+
+
+def install_ca_cert(ca_cert):
+    if ca_cert:
+        with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
+                  'w') as crt:
+            crt.write(ca_cert)
+        subprocess.check_call(['update-ca-certificates', '--fresh'])
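The precedence in `get_cert` above (explicit charm config wins; otherwise fall back to whatever the identity-service relation advertises, resetting any partial config pair) can be exercised in isolation. A simplified restatement, not the charm-helpers API; the `config_get` and `relation_data` parameters here are hypothetical stand-ins for the hookenv calls:

```python
def get_cert(config_get, relation_data):
    # Simplified mirror of get_cert's logic: charm config takes
    # precedence, but only when BOTH halves of the pair are set;
    # otherwise both are discarded and relation data is consulted.
    cert, key = config_get('ssl_cert'), config_get('ssl_key')
    if not (cert and key):
        cert = key = None
        for unit_data in relation_data:
            cert = cert or unit_data.get('ssl_cert')
            key = key or unit_data.get('ssl_key')
    return (cert, key)

# Config takes precedence over relation data:
conf = {'ssl_cert': 'CONF-CERT', 'ssl_key': 'CONF-KEY'}.get
rel = [{'ssl_cert': 'REL-CERT', 'ssl_key': 'REL-KEY'}]
assert get_cert(conf, rel) == ('CONF-CERT', 'CONF-KEY')

# With no config, the relation supplies the pair:
assert get_cert({}.get, rel) == ('REL-CERT', 'REL-KEY')
```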
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py	2013-10-15 01:35:02 +0000
@@ -0,0 +1,183 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+import os
+
+from socket import gethostname as get_unit_hostname
+
+from charmhelpers.core.hookenv import (
+    log,
+    relation_ids,
+    related_units as relation_list,
+    relation_get,
+    config as config_get,
+    INFO,
+    ERROR,
+    unit_get,
+)
+
+
+class HAIncompleteConfig(Exception):
+    pass
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_leader(resource):
+    cmd = [
+        "crm", "resource",
+        "show", resource
+    ]
+    try:
+        status = subprocess.check_output(cmd)
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if get_unit_hostname() in status:
+            return True
+        else:
+            return False
+
+
+def peer_units():
+    peers = []
+    for r_id in (relation_ids('cluster') or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def oldest_peer(peers):
+    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
+    for peer in peers:
+        remote_unit_no = int(peer.split('/')[1])
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    if is_clustered():
+        if not is_leader(resource):
+            log('Deferring action to CRM leader.', level=INFO)
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            log('Deferring action to oldest service unit.', level=INFO)
+            return False
+    return True
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS
+    .
+    returns: boolean
+    '''
+    if config_get('use-https') == "yes":
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            rel_state = [
+                relation_get('https_keystone', rid=r_id, unit=unit),
+                relation_get('ssl_cert', rid=r_id, unit=unit),
+                relation_get('ssl_key', rid=r_id, unit=unit),
+                relation_get('ca_cert', rid=r_id, unit=unit),
+            ]
+            # NOTE: works around (LP: #1203241)
+            if (None not in rel_state) and ('' not in rel_state):
+                return True
+    return False
+
+
+def determine_api_port(public_port):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_haproxy_port(public_port):
+    '''
+    Description: Determine correct proxy listening port based on public IP +
+    existence of HTTPS reverse proxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def get_hacluster_config():
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
+
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
+    conf = {}
+    for setting in settings:
+        conf[setting] = config_get(setting)
+    missing = []
+    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    if missing:
+        log('Insufficient config data to configure hacluster.', level=ERROR)
+        raise HAIncompleteConfig
+    return conf
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config tempating object to inspect for
+                  a complete https context.
+    :vip_setting: str: Setting in charm config that specifies
+                  VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
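A quick illustration of the port arithmetic in `determine_api_port` / `determine_haproxy_port` above: each frontend in play (haproxy when the unit has peers or is clustered, apache when HTTPS is configured) shifts the backend listener down by 10 from the public port. This sketch mirrors that logic with the relation lookups replaced by explicit flags (the flags are illustrative; the real helper consults relations), using glance-api's standard public port 9292:

```python
def determine_api_port(public_port, has_peers=False, use_https=False):
    # One offset of 10 per frontend sitting in front of the API server,
    # so each proxy in the chain can own the next port up.
    offset = 0
    if has_peers:   # haproxy load balancer in front
        offset += 1
    if use_https:   # apache TLS reverse proxy in front
        offset += 1
    return public_port - offset * 10

assert determine_api_port(9292) == 9292                          # standalone
assert determine_api_port(9292, has_peers=True) == 9282          # behind haproxy
assert determine_api_port(9292, has_peers=True,
                          use_https=True) == 9272                # haproxy + TLS
```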
435 | === added directory 'hooks/charmhelpers/contrib/openstack' | |||
436 | === added file 'hooks/charmhelpers/contrib/openstack/__init__.py' | |||
437 | === added file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
438 | --- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 | |||
439 | +++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 01:35:02 +0000 | |||
440 | @@ -0,0 +1,522 @@ | |||
441 | 1 | import json | ||
442 | 2 | import os | ||
443 | 3 | |||
444 | 4 | from base64 import b64decode | ||
445 | 5 | |||
446 | 6 | from subprocess import ( | ||
447 | 7 | check_call | ||
448 | 8 | ) | ||
449 | 9 | |||
450 | 10 | |||
451 | 11 | from charmhelpers.fetch import ( | ||
452 | 12 | apt_install, | ||
453 | 13 | filter_installed_packages, | ||
454 | 14 | ) | ||
455 | 15 | |||
456 | 16 | from charmhelpers.core.hookenv import ( | ||
457 | 17 | config, | ||
458 | 18 | local_unit, | ||
459 | 19 | log, | ||
460 | 20 | relation_get, | ||
461 | 21 | relation_ids, | ||
462 | 22 | related_units, | ||
463 | 23 | unit_get, | ||
464 | 24 | unit_private_ip, | ||
465 | 25 | ERROR, | ||
466 | 26 | WARNING, | ||
467 | 27 | ) | ||
468 | 28 | |||
469 | 29 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
470 | 30 | determine_api_port, | ||
471 | 31 | determine_haproxy_port, | ||
472 | 32 | https, | ||
473 | 33 | is_clustered, | ||
474 | 34 | peer_units, | ||
475 | 35 | ) | ||
476 | 36 | |||
477 | 37 | from charmhelpers.contrib.hahelpers.apache import ( | ||
478 | 38 | get_cert, | ||
479 | 39 | get_ca_cert, | ||
480 | 40 | ) | ||
481 | 41 | |||
482 | 42 | from charmhelpers.contrib.openstack.neutron import ( | ||
483 | 43 | neutron_plugin_attribute, | ||
484 | 44 | ) | ||
485 | 45 | |||
486 | 46 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | ||
487 | 47 | |||
488 | 48 | |||
489 | 49 | class OSContextError(Exception): | ||
490 | 50 | pass | ||
491 | 51 | |||
492 | 52 | |||
493 | 53 | def ensure_packages(packages): | ||
494 | 54 | '''Install but do not upgrade required plugin packages''' | ||
495 | 55 | required = filter_installed_packages(packages) | ||
496 | 56 | if required: | ||
497 | 57 | apt_install(required, fatal=True) | ||
498 | 58 | |||
499 | 59 | |||
500 | 60 | def context_complete(ctxt): | ||
501 | 61 | _missing = [] | ||
502 | 62 | for k, v in ctxt.iteritems(): | ||
503 | 63 | if v is None or v == '': | ||
504 | 64 | _missing.append(k) | ||
505 | 65 | if _missing: | ||
506 | 66 | log('Missing required data: %s' % ' '.join(_missing), level='INFO') | ||
507 | 67 | return False | ||
508 | 68 | return True | ||
509 | 69 | |||
510 | 70 | |||
511 | 71 | class OSContextGenerator(object): | ||
512 | 72 | interfaces = [] | ||
513 | 73 | |||
514 | 74 | def __call__(self): | ||
515 | 75 | raise NotImplementedError | ||
516 | 76 | |||
517 | 77 | |||
518 | 78 | class SharedDBContext(OSContextGenerator): | ||
519 | 79 | interfaces = ['shared-db'] | ||
520 | 80 | |||
521 | 81 | def __init__(self, database=None, user=None, relation_prefix=None): | ||
522 | 82 | ''' | ||
523 | 83 | Allows inspecting relation for settings prefixed with relation_prefix. | ||
524 | 84 | This is useful for parsing access for multiple databases returned via | ||
525 | 85 | the shared-db interface (eg, nova_password, quantum_password) | ||
526 | 86 | ''' | ||
527 | 87 | self.relation_prefix = relation_prefix | ||
528 | 88 | self.database = database | ||
529 | 89 | self.user = user | ||
530 | 90 | |||
531 | 91 | def __call__(self): | ||
532 | 92 | self.database = self.database or config('database') | ||
533 | 93 | self.user = self.user or config('database-user') | ||
534 | 94 | if None in [self.database, self.user]: | ||
535 | 95 | log('Could not generate shared_db context. ' | ||
536 | 96 | 'Missing required charm config options. ' | ||
537 | 97 | '(database name and user)') | ||
538 | 98 | raise OSContextError | ||
539 | 99 | ctxt = {} | ||
540 | 100 | |||
541 | 101 | password_setting = 'password' | ||
542 | 102 | if self.relation_prefix: | ||
543 | 103 | password_setting = self.relation_prefix + '_password' | ||
544 | 104 | |||
545 | 105 | for rid in relation_ids('shared-db'): | ||
546 | 106 | for unit in related_units(rid): | ||
547 | 107 | passwd = relation_get(password_setting, rid=rid, unit=unit) | ||
548 | 108 | ctxt = { | ||
549 | 109 | 'database_host': relation_get('db_host', rid=rid, | ||
550 | 110 | unit=unit), | ||
551 | 111 | 'database': self.database, | ||
552 | 112 | 'database_user': self.user, | ||
553 | 113 | 'database_password': passwd, | ||
554 | 114 | } | ||
555 | 115 | if context_complete(ctxt): | ||
556 | 116 | return ctxt | ||
557 | 117 | return {} | ||
558 | 118 | |||
559 | 119 | |||
560 | 120 | class IdentityServiceContext(OSContextGenerator): | ||
561 | 121 | interfaces = ['identity-service'] | ||
562 | 122 | |||
563 | 123 | def __call__(self): | ||
564 | 124 | log('Generating template context for identity-service') | ||
565 | 125 | ctxt = {} | ||
566 | 126 | |||
567 | 127 | for rid in relation_ids('identity-service'): | ||
568 | 128 | for unit in related_units(rid): | ||
569 | 129 | ctxt = { | ||
570 | 130 | 'service_port': relation_get('service_port', rid=rid, | ||
571 | 131 | unit=unit), | ||
572 | 132 | 'service_host': relation_get('service_host', rid=rid, | ||
573 | 133 | unit=unit), | ||
574 | 134 | 'auth_host': relation_get('auth_host', rid=rid, unit=unit), | ||
575 | 135 | 'auth_port': relation_get('auth_port', rid=rid, unit=unit), | ||
576 | 136 | 'admin_tenant_name': relation_get('service_tenant', | ||
577 | 137 | rid=rid, unit=unit), | ||
578 | 138 | 'admin_user': relation_get('service_username', rid=rid, | ||
579 | 139 | unit=unit), | ||
580 | 140 | 'admin_password': relation_get('service_password', rid=rid, | ||
581 | 141 | unit=unit), | ||
582 | 142 | # XXX: Hard-coded http. | ||
583 | 143 | 'service_protocol': 'http', | ||
584 | 144 | 'auth_protocol': 'http', | ||
585 | 145 | } | ||
586 | 146 | if context_complete(ctxt): | ||
587 | 147 | return ctxt | ||
588 | 148 | return {} | ||
589 | 149 | |||
590 | 150 | |||
591 | 151 | class AMQPContext(OSContextGenerator): | ||
592 | 152 | interfaces = ['amqp'] | ||
593 | 153 | |||
594 | 154 | def __call__(self): | ||
595 | 155 | log('Generating template context for amqp') | ||
596 | 156 | conf = config() | ||
597 | 157 | try: | ||
598 | 158 | username = conf['rabbit-user'] | ||
599 | 159 | vhost = conf['rabbit-vhost'] | ||
600 | 160 | except KeyError as e: | ||
601 | 161 | log('Could not generate shared_db context. ' | ||
602 | 162 | 'Missing required charm config options: %s.' % e) | ||
603 | 163 | raise OSContextError | ||
604 | 164 | |||
605 | 165 | ctxt = {} | ||
606 | 166 | for rid in relation_ids('amqp'): | ||
607 | 167 | for unit in related_units(rid): | ||
608 | 168 | if relation_get('clustered', rid=rid, unit=unit): | ||
609 | 169 | ctxt['clustered'] = True | ||
610 | 170 | ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, | ||
611 | 171 | unit=unit) | ||
612 | 172 | else: | ||
613 | 173 | ctxt['rabbitmq_host'] = relation_get('private-address', | ||
614 | 174 | rid=rid, unit=unit) | ||
615 | 175 | ctxt.update({ | ||
616 | 176 | 'rabbitmq_user': username, | ||
617 | 177 | 'rabbitmq_password': relation_get('password', rid=rid, | ||
618 | 178 | unit=unit), | ||
619 | 179 | 'rabbitmq_virtual_host': vhost, | ||
620 | 180 | }) | ||
621 | 181 | if context_complete(ctxt): | ||
622 | 182 | # Sufficient information found = break out! | ||
623 | 183 | break | ||
624 | 184 | # Used for active/active rabbitmq >= grizzly | ||
625 | 185 | ctxt['rabbitmq_hosts'] = [] | ||
626 | 186 | for unit in related_units(rid): | ||
627 | 187 | ctxt['rabbitmq_hosts'].append(relation_get('private-address', | ||
628 | 188 | rid=rid, unit=unit)) | ||
629 | 189 | if not context_complete(ctxt): | ||
630 | 190 | return {} | ||
631 | 191 | else: | ||
632 | 192 | return ctxt | ||
633 | 193 | |||
634 | 194 | |||
635 | 195 | class CephContext(OSContextGenerator): | ||
636 | 196 | interfaces = ['ceph'] | ||
637 | 197 | |||
638 | 198 | def __call__(self): | ||
639 | 199 | '''This generates context for /etc/ceph/ceph.conf templates''' | ||
640 | 200 | if not relation_ids('ceph'): | ||
641 | 201 | return {} | ||
642 | 202 | log('Generating template context for ceph') | ||
643 | 203 | mon_hosts = [] | ||
644 | 204 | auth = None | ||
645 | 205 | key = None | ||
646 | 206 | for rid in relation_ids('ceph'): | ||
647 | 207 | for unit in related_units(rid): | ||
648 | 208 | mon_hosts.append(relation_get('private-address', rid=rid, | ||
649 | 209 | unit=unit)) | ||
650 | 210 | auth = relation_get('auth', rid=rid, unit=unit) | ||
651 | 211 | key = relation_get('key', rid=rid, unit=unit) | ||
652 | 212 | |||
653 | 213 | ctxt = { | ||
654 | 214 | 'mon_hosts': ' '.join(mon_hosts), | ||
655 | 215 | 'auth': auth, | ||
656 | 216 | 'key': key, | ||
657 | 217 | } | ||
658 | 218 | |||
659 | 219 | if not os.path.isdir('/etc/ceph'): | ||
660 | 220 | os.mkdir('/etc/ceph') | ||
661 | 221 | |||
662 | 222 | if not context_complete(ctxt): | ||
663 | 223 | return {} | ||
664 | 224 | |||
665 | 225 | ensure_packages(['ceph-common']) | ||
666 | 226 | |||
667 | 227 | return ctxt | ||
668 | 228 | |||
669 | 229 | |||
670 | 230 | class HAProxyContext(OSContextGenerator): | ||
671 | 231 | interfaces = ['cluster'] | ||
672 | 232 | |||
673 | 233 | def __call__(self): | ||
674 | 234 | ''' | ||
675 | 235 | Builds half a context for the haproxy template, which describes | ||
676 | 236 | all peers to be included in the cluster. Each charm needs to include | ||
677 | 237 | its own context generator that describes the port mapping. | ||
678 | 238 | ''' | ||
679 | 239 | if not relation_ids('cluster'): | ||
680 | 240 | return {} | ||
681 | 241 | |||
682 | 242 | cluster_hosts = {} | ||
683 | 243 | l_unit = local_unit().replace('/', '-') | ||
684 | 244 | cluster_hosts[l_unit] = unit_get('private-address') | ||
685 | 245 | |||
686 | 246 | for rid in relation_ids('cluster'): | ||
687 | 247 | for unit in related_units(rid): | ||
688 | 248 | _unit = unit.replace('/', '-') | ||
689 | 249 | addr = relation_get('private-address', rid=rid, unit=unit) | ||
690 | 250 | cluster_hosts[_unit] = addr | ||
691 | 251 | |||
692 | 252 | ctxt = { | ||
693 | 253 | 'units': cluster_hosts, | ||
694 | 254 | } | ||
695 | 255 | if len(cluster_hosts.keys()) > 1: | ||
696 | 256 | # Enable haproxy when we have enough peers. | ||
697 | 257 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | ||
698 | 258 | with open('/etc/default/haproxy', 'w') as out: | ||
699 | 259 | out.write('ENABLED=1\n') | ||
700 | 260 | return ctxt | ||
701 | 261 | log('HAProxy context is incomplete, this unit has no peers.') | ||
702 | 262 | return {} | ||
703 | 263 | |||
704 | 264 | |||
705 | 265 | class ImageServiceContext(OSContextGenerator): | ||
706 | 266 | interfaces = ['image-service'] | ||
707 | 267 | |||
708 | 268 | def __call__(self): | ||
709 | 269 | ''' | ||
710 | 270 | Obtains the glance API server from the image-service relation. Useful | ||
711 | 271 | in nova and cinder (currently). | ||
712 | 272 | ''' | ||
713 | 273 | log('Generating template context for image-service.') | ||
714 | 274 | rids = relation_ids('image-service') | ||
715 | 275 | if not rids: | ||
716 | 276 | return {} | ||
717 | 277 | for rid in rids: | ||
718 | 278 | for unit in related_units(rid): | ||
719 | 279 | api_server = relation_get('glance-api-server', | ||
720 | 280 | rid=rid, unit=unit) | ||
721 | 281 | if api_server: | ||
722 | 282 | return {'glance_api_servers': api_server} | ||
723 | 283 | log('ImageService context is incomplete. ' | ||
724 | 284 | 'Missing required relation data.') | ||
725 | 285 | return {} | ||
726 | 286 | |||
727 | 287 | |||
728 | 288 | class ApacheSSLContext(OSContextGenerator): | ||
729 | 289 | """ | ||
730 | 290 | Generates a context for an apache vhost configuration that configures | ||
731 | 291 | HTTPS reverse proxying for one or many endpoints. Generated context | ||
732 | 292 | looks something like: | ||
733 | 293 | { | ||
734 | 294 | 'namespace': 'cinder', | ||
735 | 295 | 'private_address': 'iscsi.mycinderhost.com', | ||
736 | 296 | 'endpoints': [(8776, 8766), (8777, 8767)] | ||
737 | 297 | } | ||
738 | 298 | |||
739 | 299 | The endpoints list consists of tuples mapping external ports | ||
740 | 300 | to internal ports. | ||
741 | 301 | """ | ||
742 | 302 | interfaces = ['https'] | ||
743 | 303 | |||
744 | 304 | # charms should inherit this context and set external ports | ||
745 | 305 | # and service namespace accordingly. | ||
746 | 306 | external_ports = [] | ||
747 | 307 | service_namespace = None | ||
748 | 308 | |||
749 | 309 | def enable_modules(self): | ||
750 | 310 | cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] | ||
751 | 311 | check_call(cmd) | ||
752 | 312 | |||
753 | 313 | def configure_cert(self): | ||
754 | 314 | if not os.path.isdir('/etc/apache2/ssl'): | ||
755 | 315 | os.mkdir('/etc/apache2/ssl') | ||
756 | 316 | ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) | ||
757 | 317 | if not os.path.isdir(ssl_dir): | ||
758 | 318 | os.mkdir(ssl_dir) | ||
759 | 319 | cert, key = get_cert() | ||
760 | 320 | with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: | ||
761 | 321 | cert_out.write(b64decode(cert)) | ||
762 | 322 | with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: | ||
763 | 323 | key_out.write(b64decode(key)) | ||
764 | 324 | ca_cert = get_ca_cert() | ||
765 | 325 | if ca_cert: | ||
766 | 326 | with open(CA_CERT_PATH, 'w') as ca_out: | ||
767 | 327 | ca_out.write(b64decode(ca_cert)) | ||
768 | 328 | check_call(['update-ca-certificates']) | ||
769 | 329 | |||
770 | 330 | def __call__(self): | ||
771 | 331 | if isinstance(self.external_ports, basestring): | ||
772 | 332 | self.external_ports = [self.external_ports] | ||
773 | 333 | if (not self.external_ports or not https()): | ||
774 | 334 | return {} | ||
775 | 335 | |||
776 | 336 | self.configure_cert() | ||
777 | 337 | self.enable_modules() | ||
778 | 338 | |||
779 | 339 | ctxt = { | ||
780 | 340 | 'namespace': self.service_namespace, | ||
781 | 341 | 'private_address': unit_get('private-address'), | ||
782 | 342 | 'endpoints': [] | ||
783 | 343 | } | ||
784 | 344 | for ext_port in self.external_ports: | ||
785 | 345 | if peer_units() or is_clustered(): | ||
786 | 346 | int_port = determine_haproxy_port(ext_port) | ||
787 | 347 | else: | ||
788 | 348 | int_port = determine_api_port(ext_port) | ||
789 | 349 | portmap = (int(ext_port), int(int_port)) | ||
790 | 350 | ctxt['endpoints'].append(portmap) | ||
791 | 351 | return ctxt | ||
792 | 352 | |||
793 | 353 | |||
794 | 354 | class NeutronContext(object): | ||
795 | 355 | interfaces = [] | ||
796 | 356 | |||
797 | 357 | @property | ||
798 | 358 | def plugin(self): | ||
799 | 359 | return None | ||
800 | 360 | |||
801 | 361 | @property | ||
802 | 362 | def network_manager(self): | ||
803 | 363 | return None | ||
804 | 364 | |||
805 | 365 | @property | ||
806 | 366 | def packages(self): | ||
807 | 367 | return neutron_plugin_attribute( | ||
808 | 368 | self.plugin, 'packages', self.network_manager) | ||
809 | 369 | |||
810 | 370 | @property | ||
811 | 371 | def neutron_security_groups(self): | ||
812 | 372 | return None | ||
813 | 373 | |||
814 | 374 | def _ensure_packages(self): | ||
815 | 375 | [ensure_packages(pkgs) for pkgs in self.packages] | ||
816 | 376 | |||
817 | 377 | def _save_flag_file(self): | ||
818 | 378 | if self.network_manager == 'quantum': | ||
819 | 379 | _file = '/etc/nova/quantum_plugin.conf' | ||
820 | 380 | else: | ||
821 | 381 | _file = '/etc/nova/neutron_plugin.conf' | ||
822 | 382 | with open(_file, 'wb') as out: | ||
823 | 383 | out.write(self.plugin + '\n') | ||
824 | 384 | |||
825 | 385 | def ovs_ctxt(self): | ||
826 | 386 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
827 | 387 | self.network_manager) | ||
828 | 388 | |||
829 | 389 | ovs_ctxt = { | ||
830 | 390 | 'core_plugin': driver, | ||
831 | 391 | 'neutron_plugin': 'ovs', | ||
832 | 392 | 'neutron_security_groups': self.neutron_security_groups, | ||
833 | 393 | 'local_ip': unit_private_ip(), | ||
834 | 394 | } | ||
835 | 395 | |||
836 | 396 | return ovs_ctxt | ||
837 | 397 | |||
838 | 398 | def __call__(self): | ||
839 | 399 | self._ensure_packages() | ||
840 | 400 | |||
841 | 401 | if self.network_manager not in ['quantum', 'neutron']: | ||
842 | 402 | return {} | ||
843 | 403 | |||
844 | 404 | if not self.plugin: | ||
845 | 405 | return {} | ||
846 | 406 | |||
847 | 407 | ctxt = {'network_manager': self.network_manager} | ||
848 | 408 | |||
849 | 409 | if self.plugin == 'ovs': | ||
850 | 410 | ctxt.update(self.ovs_ctxt()) | ||
851 | 411 | |||
852 | 412 | self._save_flag_file() | ||
853 | 413 | return ctxt | ||
854 | 414 | |||
855 | 415 | |||
856 | 416 | class OSConfigFlagContext(OSContextGenerator): | ||
857 | 417 | ''' | ||
858 | 418 | Responsible for adding user-defined config-flags from charm config | ||
859 | 419 | to a template context. | ||
860 | 420 | ''' | ||
861 | 421 | def __call__(self): | ||
862 | 422 | config_flags = config('config-flags') | ||
863 | 423 | if not config_flags or config_flags in ['None', '']: | ||
864 | 424 | return {} | ||
865 | 425 | config_flags = config_flags.split(',') | ||
866 | 426 | flags = {} | ||
867 | 427 | for flag in config_flags: | ||
868 | 428 | if '=' not in flag: | ||
869 | 429 | log('Improperly formatted config-flag, expected k=v ' | ||
870 | 430 | 'got %s' % flag, level=WARNING) | ||
871 | 431 | continue | ||
872 | 432 | k, v = flag.split('=', 1) | ||
873 | 433 | flags[k.strip()] = v | ||
874 | 434 | ctxt = {'user_config_flags': flags} | ||
875 | 435 | return ctxt | ||
876 | 436 | |||
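The parsing rule in OSConfigFlagContext can be sketched as a standalone function (splitting on the first `=` only, so values may themselves contain `=`; malformed entries are skipped where the real context logs a WARNING):

```python
def parse_config_flags(config_flags):
    """Parse a comma-separated 'k=v' string into a dict, mirroring
    OSConfigFlagContext above. Entries without '=' are skipped."""
    flags = {}
    for flag in config_flags.split(','):
        if '=' not in flag:
            continue  # the real context logs a WARNING here
        k, v = flag.split('=', 1)
        flags[k.strip()] = v
    return flags
```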
877 | 437 | |||
878 | 438 | class SubordinateConfigContext(OSContextGenerator): | ||
879 | 439 | """ | ||
880 | 440 | Responsible for inspecting relations to subordinates that | ||
881 | 441 | may be exporting required config via a json blob. | ||
882 | 442 | |||
883 | 443 | The subordinate interface allows subordinates to export their | ||
884 | 444 | configuration requirements to the principal for multiple config | ||
885 | 445 | files and multiple services. E.g., a subordinate that has interfaces | ||
886 | 446 | to both glance and nova may export the following yaml blob as json: | ||
887 | 447 | |||
888 | 448 | glance: | ||
889 | 449 | /etc/glance/glance-api.conf: | ||
890 | 450 | sections: | ||
891 | 451 | DEFAULT: | ||
892 | 452 | - [key1, value1] | ||
893 | 453 | /etc/glance/glance-registry.conf: | ||
894 | 454 | MYSECTION: | ||
895 | 455 | - [key2, value2] | ||
896 | 456 | nova: | ||
897 | 457 | /etc/nova/nova.conf: | ||
898 | 458 | sections: | ||
899 | 459 | DEFAULT: | ||
900 | 460 | - [key3, value3] | ||
901 | 461 | |||
902 | 462 | |||
903 | 463 | It is then up to the principal charms to subscribe this context to | ||
904 | 464 | the service+config file it is interested in. Configuration data will | ||
905 | 465 | be available in the template context, in glance's case, as: | ||
906 | 466 | ctxt = { | ||
907 | 467 | ... other context ... | ||
908 | 468 | 'subordinate_config': { | ||
909 | 469 | 'DEFAULT': { | ||
910 | 470 | 'key1': 'value1', | ||
911 | 471 | }, | ||
912 | 472 | 'MYSECTION': { | ||
913 | 473 | 'key2': 'value2', | ||
914 | 474 | }, | ||
915 | 475 | } | ||
916 | 476 | } | ||
917 | 477 | |||
918 | 478 | """ | ||
919 | 479 | def __init__(self, service, config_file, interface): | ||
920 | 480 | """ | ||
921 | 481 | :param service : Service name key to query in any subordinate | ||
922 | 482 | data found | ||
923 | 483 | :param config_file : Service's config file to query sections | ||
924 | 484 | :param interface : Subordinate interface to inspect | ||
925 | 485 | """ | ||
926 | 486 | self.service = service | ||
927 | 487 | self.config_file = config_file | ||
928 | 488 | self.interface = interface | ||
929 | 489 | |||
930 | 490 | def __call__(self): | ||
931 | 491 | ctxt = {} | ||
932 | 492 | for rid in relation_ids(self.interface): | ||
933 | 493 | for unit in related_units(rid): | ||
934 | 494 | sub_config = relation_get('subordinate_configuration', | ||
935 | 495 | rid=rid, unit=unit) | ||
936 | 496 | if sub_config and sub_config != '': | ||
937 | 497 | try: | ||
938 | 498 | sub_config = json.loads(sub_config) | ||
939 | 499 | except: | ||
940 | 500 | log('Could not parse JSON from subordinate_config ' | ||
941 | 501 | 'setting from %s' % rid, level=ERROR) | ||
942 | 502 | continue | ||
943 | 503 | |||
944 | 504 | if self.service not in sub_config: | ||
945 | 505 | log('Found subordinate_config on %s but it contained ' | ||
946 | 506 | 'nothing for %s service' % (rid, self.service)) | ||
947 | 507 | continue | ||
948 | 508 | |||
949 | 509 | sub_config = sub_config[self.service] | ||
950 | 510 | if self.config_file not in sub_config: | ||
951 | 511 | log('Found subordinate_config on %s but it contained ' | ||
952 | 512 | 'nothing for %s' % (rid, self.config_file)) | ||
953 | 513 | continue | ||
954 | 514 | |||
955 | 515 | sub_config = sub_config[self.config_file] | ||
956 | 516 | for k, v in sub_config.iteritems(): | ||
957 | 517 | ctxt[k] = v | ||
958 | 518 | |||
959 | 519 | if not ctxt: | ||
960 | 520 | ctxt['sections'] = {} | ||
961 | 521 | |||
962 | 522 | return ctxt | ||
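The lookup SubordinateConfigContext performs per relation can be sketched without the hook machinery: decode the JSON blob, then drill in by service name and config file path (keys below match the docstring example):

```python
import json

def extract_sub_config(blob, service, config_file):
    """Drill into a subordinate's JSON blob by service, then config
    file, as SubordinateConfigContext.__call__ does above."""
    sub_config = json.loads(blob)
    return sub_config.get(service, {}).get(config_file, {})

blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]},
        },
    },
})
ctxt = extract_sub_config(blob, 'glance', '/etc/glance/glance-api.conf')
```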
963 | 0 | 523 | ||
964 | === added file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
965 | --- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 | |||
966 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 01:35:02 +0000 | |||
967 | @@ -0,0 +1,117 @@ | |||
968 | 1 | # Various utilities for dealing with Neutron and the renaming from Quantum. | ||
969 | 2 | |||
970 | 3 | from subprocess import check_output | ||
971 | 4 | |||
972 | 5 | from charmhelpers.core.hookenv import ( | ||
973 | 6 | config, | ||
974 | 7 | log, | ||
975 | 8 | ERROR, | ||
976 | 9 | ) | ||
977 | 10 | |||
978 | 11 | from charmhelpers.contrib.openstack.utils import os_release | ||
979 | 12 | |||
980 | 13 | |||
981 | 14 | def headers_package(): | ||
982 | 15 | """Returns the linux-headers package for the running kernel, | ||
983 | 16 | needed to build DKMS packages.""" | ||
984 | 17 | kver = check_output(['uname', '-r']).strip() | ||
985 | 18 | return 'linux-headers-%s' % kver | ||
986 | 19 | |||
987 | 20 | |||
988 | 21 | # legacy | ||
989 | 22 | def quantum_plugins(): | ||
990 | 23 | from charmhelpers.contrib.openstack import context | ||
991 | 24 | return { | ||
992 | 25 | 'ovs': { | ||
993 | 26 | 'config': '/etc/quantum/plugins/openvswitch/' | ||
994 | 27 | 'ovs_quantum_plugin.ini', | ||
995 | 28 | 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' | ||
996 | 29 | 'OVSQuantumPluginV2', | ||
997 | 30 | 'contexts': [ | ||
998 | 31 | context.SharedDBContext(user=config('neutron-database-user'), | ||
999 | 32 | database=config('neutron-database'), | ||
1000 | 33 | relation_prefix='neutron')], | ||
1001 | 34 | 'services': ['quantum-plugin-openvswitch-agent'], | ||
1002 | 35 | 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], | ||
1003 | 36 | ['quantum-plugin-openvswitch-agent']], | ||
1004 | 37 | }, | ||
1005 | 38 | 'nvp': { | ||
1006 | 39 | 'config': '/etc/quantum/plugins/nicira/nvp.ini', | ||
1007 | 40 | 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' | ||
1008 | 41 | 'QuantumPlugin.NvpPluginV2', | ||
1009 | 42 | 'services': [], | ||
1010 | 43 | 'packages': [], | ||
1011 | 44 | } | ||
1012 | 45 | } | ||
1013 | 46 | |||
1014 | 47 | |||
1015 | 48 | def neutron_plugins(): | ||
1016 | 49 | from charmhelpers.contrib.openstack import context | ||
1017 | 50 | return { | ||
1018 | 51 | 'ovs': { | ||
1019 | 52 | 'config': '/etc/neutron/plugins/openvswitch/' | ||
1020 | 53 | 'ovs_neutron_plugin.ini', | ||
1021 | 54 | 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' | ||
1022 | 55 | 'OVSNeutronPluginV2', | ||
1023 | 56 | 'contexts': [ | ||
1024 | 57 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1025 | 58 | database=config('neutron-database'), | ||
1026 | 59 | relation_prefix='neutron')], | ||
1027 | 60 | 'services': ['neutron-plugin-openvswitch-agent'], | ||
1028 | 61 | 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], | ||
1029 | 62 | ['neutron-plugin-openvswitch-agent']], | ||
1030 | 63 | }, | ||
1031 | 64 | 'nvp': { | ||
1032 | 65 | 'config': '/etc/neutron/plugins/nicira/nvp.ini', | ||
1033 | 66 | 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' | ||
1034 | 67 | 'NeutronPlugin.NvpPluginV2', | ||
1035 | 68 | 'services': [], | ||
1036 | 69 | 'packages': [], | ||
1037 | 70 | } | ||
1038 | 71 | } | ||
1039 | 72 | |||
1040 | 73 | |||
1041 | 74 | def neutron_plugin_attribute(plugin, attr, net_manager=None): | ||
1042 | 75 | manager = net_manager or network_manager() | ||
1043 | 76 | if manager == 'quantum': | ||
1044 | 77 | plugins = quantum_plugins() | ||
1045 | 78 | elif manager == 'neutron': | ||
1046 | 79 | plugins = neutron_plugins() | ||
1047 | 80 | else: | ||
1048 | 81 | log('Error: Network manager does not support plugins.') | ||
1049 | 82 | raise Exception | ||
1050 | 83 | |||
1051 | 84 | try: | ||
1052 | 85 | _plugin = plugins[plugin] | ||
1053 | 86 | except KeyError: | ||
1054 | 87 | log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) | ||
1055 | 88 | raise Exception | ||
1056 | 89 | |||
1057 | 90 | try: | ||
1058 | 91 | return _plugin[attr] | ||
1059 | 92 | except KeyError: | ||
1060 | 93 | return None | ||
1061 | 94 | |||
1062 | 95 | |||
1063 | 96 | def network_manager(): | ||
1064 | 97 | ''' | ||
1065 | 98 | Deals with the renaming of Quantum to Neutron in H and any situations | ||
1066 | 99 | that require compatibility (eg, deploying H with network-manager=quantum, | ||
1067 | 100 | upgrading from G). | ||
1068 | 101 | ''' | ||
1069 | 102 | release = os_release('nova-common') | ||
1070 | 103 | manager = config('network-manager').lower() | ||
1071 | 104 | |||
1072 | 105 | if manager not in ['quantum', 'neutron']: | ||
1073 | 106 | return manager | ||
1074 | 107 | |||
1075 | 108 | if release in ['essex']: | ||
1076 | 109 | # E does not support neutron | ||
1077 | 110 | log('Neutron networking not supported in Essex.', level=ERROR) | ||
1078 | 111 | raise Exception | ||
1079 | 112 | elif release in ['folsom', 'grizzly']: | ||
1080 | 113 | # neutron is named quantum in F and G | ||
1081 | 114 | return 'quantum' | ||
1082 | 115 | else: | ||
1083 | 116 | # ensure accurate naming for all releases post-H | ||
1084 | 117 | return 'neutron' | ||
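The renaming decision in network_manager() reduces to a small pure function once release detection and charm config are stubbed out (a sketch of the rule, not the helper's API):

```python
def resolve_network_manager(manager, release):
    """Mirror network_manager() above: quantum/neutron naming by release."""
    if manager not in ['quantum', 'neutron']:
        return manager  # e.g. nova-network managers pass through untouched
    if release == 'essex':
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ['folsom', 'grizzly']:
        return 'quantum'  # neutron was still named quantum in F and G
    return 'neutron'      # havana onward
```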
1085 | 0 | 118 | ||
1086 | === added directory 'hooks/charmhelpers/contrib/openstack/templates' | |||
1087 | === added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' | |||
1088 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 | |||
1089 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 01:35:02 +0000 | |||
1090 | @@ -0,0 +1,2 @@ | |||
1091 | 1 | # dummy __init__.py to fool syncer into thinking this is a syncable python | ||
1092 | 2 | # module | ||
1093 | 0 | 3 | ||
1094 | === added file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1095 | --- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 | |||
1096 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 01:35:02 +0000 | |||
1097 | @@ -0,0 +1,280 @@ | |||
1098 | 1 | import os | ||
1099 | 2 | |||
1100 | 3 | from charmhelpers.fetch import apt_install | ||
1101 | 4 | |||
1102 | 5 | from charmhelpers.core.hookenv import ( | ||
1103 | 6 | log, | ||
1104 | 7 | ERROR, | ||
1105 | 8 | INFO | ||
1106 | 9 | ) | ||
1107 | 10 | |||
1108 | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES | ||
1109 | 12 | |||
1110 | 13 | try: | ||
1111 | 14 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions | ||
1112 | 15 | except ImportError: | ||
1113 | 16 | # python-jinja2 may not be installed yet, or we're running unittests. | ||
1114 | 17 | FileSystemLoader = ChoiceLoader = Environment = exceptions = None | ||
1115 | 18 | |||
1116 | 19 | |||
1117 | 20 | class OSConfigException(Exception): | ||
1118 | 21 | pass | ||
1119 | 22 | |||
1120 | 23 | |||
1121 | 24 | def get_loader(templates_dir, os_release): | ||
1122 | 25 | """ | ||
1123 | 26 | Create a jinja2.ChoiceLoader containing template dirs up to | ||
1124 | 27 | and including os_release. If a release's template directory | ||
1125 | 28 | is missing under templates_dir, it will be omitted from the loader. | ||
1126 | 29 | templates_dir is added to the bottom of the search list as a base | ||
1127 | 30 | loading dir. | ||
1128 | 31 | |||
1129 | 32 | A charm may also ship a templates dir with this module | ||
1130 | 33 | and it will be appended to the bottom of the search list, eg: | ||
1131 | 34 | hooks/charmhelpers/contrib/openstack/templates. | ||
1132 | 35 | |||
1133 | 36 | :param templates_dir: str: Base template directory containing release | ||
1134 | 37 | sub-directories. | ||
1135 | 38 | :param os_release : str: OpenStack release codename to construct template | ||
1136 | 39 | loader. | ||
1137 | 40 | |||
1138 | 41 | :returns : jinja2.ChoiceLoader constructed with a list of | ||
1139 | 42 | jinja2.FilesystemLoaders, ordered in descending | ||
1140 | 43 | order by OpenStack release. | ||
1141 | 44 | """ | ||
1142 | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) | ||
1143 | 46 | for rel in OPENSTACK_CODENAMES.itervalues()] | ||
1144 | 47 | |||
1145 | 48 | if not os.path.isdir(templates_dir): | ||
1146 | 49 | log('Templates directory not found @ %s.' % templates_dir, | ||
1147 | 50 | level=ERROR) | ||
1148 | 51 | raise OSConfigException | ||
1149 | 52 | |||
1150 | 53 | # the bottom contains templates_dir and possibly a common templates dir | ||
1151 | 54 | # shipped with the helper. | ||
1152 | 55 | loaders = [FileSystemLoader(templates_dir)] | ||
1153 | 56 | helper_templates = os.path.join(os.path.dirname(__file__), 'templates') | ||
1154 | 57 | if os.path.isdir(helper_templates): | ||
1155 | 58 | loaders.append(FileSystemLoader(helper_templates)) | ||
1156 | 59 | |||
1157 | 60 | for rel, tmpl_dir in tmpl_dirs: | ||
1158 | 61 | if os.path.isdir(tmpl_dir): | ||
1159 | 62 | loaders.insert(0, FileSystemLoader(tmpl_dir)) | ||
1160 | 63 | if rel == os_release: | ||
1161 | 64 | break | ||
1162 | 65 | log('Creating choice loader with dirs: %s' % | ||
1163 | 66 | [l.searchpath for l in loaders], level=INFO) | ||
1164 | 67 | return ChoiceLoader(loaders) | ||
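The directory ordering get_loader produces can be illustrated without jinja2: release directories that exist are searched newest-first, stopping at os_release, with the base templates_dir at the bottom (the common helper templates dir is omitted from this sketch):

```python
import os

# Oldest-to-newest, as in OPENSTACK_CODENAMES' values.
CODENAMES = ['essex', 'folsom', 'grizzly', 'havana']

def search_order(templates_dir, os_release, existing_dirs):
    """Return template dirs in the order get_loader would search them.
    existing_dirs stands in for os.path.isdir checks."""
    dirs = [templates_dir]
    for rel in CODENAMES:
        d = os.path.join(templates_dir, rel)
        if d in existing_dirs:
            dirs.insert(0, d)  # each newer release lands ahead of older ones
        if rel == os_release:
            break
    return dirs
```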
1165 | 68 | |||
1166 | 69 | |||
1167 | 70 | class OSConfigTemplate(object): | ||
1168 | 71 | """ | ||
1169 | 72 | Associates a config file template with a list of context generators. | ||
1170 | 73 | Responsible for constructing a template context based on those generators. | ||
1171 | 74 | """ | ||
1172 | 75 | def __init__(self, config_file, contexts): | ||
1173 | 76 | self.config_file = config_file | ||
1174 | 77 | |||
1175 | 78 | if hasattr(contexts, '__call__'): | ||
1176 | 79 | self.contexts = [contexts] | ||
1177 | 80 | else: | ||
1178 | 81 | self.contexts = contexts | ||
1179 | 82 | |||
1180 | 83 | self._complete_contexts = [] | ||
1181 | 84 | |||
1182 | 85 | def context(self): | ||
1183 | 86 | ctxt = {} | ||
1184 | 87 | for context in self.contexts: | ||
1185 | 88 | _ctxt = context() | ||
1186 | 89 | if _ctxt: | ||
1187 | 90 | ctxt.update(_ctxt) | ||
1188 | 91 | # track interfaces for every complete context. | ||
1189 | 92 | [self._complete_contexts.append(interface) | ||
1190 | 93 | for interface in context.interfaces | ||
1191 | 94 | if interface not in self._complete_contexts] | ||
1192 | 95 | return ctxt | ||
1193 | 96 | |||
1194 | 97 | def complete_contexts(self): | ||
1195 | 98 | ''' | ||
1196 | 99 | Return a list of interfaces that have satisfied contexts. | ||
1197 | 100 | ''' | ||
1198 | 101 | if self._complete_contexts: | ||
1199 | 102 | return self._complete_contexts | ||
1200 | 103 | self.context() | ||
1201 | 104 | return self._complete_contexts | ||
1202 | 105 | |||
1203 | 106 | |||
1204 | 107 | class OSConfigRenderer(object): | ||
1205 | 108 | """ | ||
1206 | 109 | This class provides a common templating system to be used by OpenStack | ||
1207 | 110 | charms. It is intended to help charms share common code and templates, | ||
1208 | 111 | and ease the burden of managing config templates across multiple OpenStack | ||
1209 | 112 | releases. | ||
1210 | 113 | |||
1211 | 114 | Basic usage: | ||
1212 | 115 | # import some common context generators from charmhelpers | ||
1213 | 116 | from charmhelpers.contrib.openstack import context | ||
1214 | 117 | |||
1215 | 118 | # Create a renderer object for a specific OS release. | ||
1216 | 119 | configs = OSConfigRenderer(templates_dir='/tmp/templates', | ||
1217 | 120 | openstack_release='folsom') | ||
1218 | 121 | # register some config files with context generators. | ||
1219 | 122 | configs.register(config_file='/etc/nova/nova.conf', | ||
1220 | 123 | contexts=[context.SharedDBContext(), | ||
1221 | 124 | context.AMQPContext()]) | ||
1222 | 125 | configs.register(config_file='/etc/nova/api-paste.ini', | ||
1223 | 126 | contexts=[context.IdentityServiceContext()]) | ||
1224 | 127 | configs.register(config_file='/etc/haproxy/haproxy.conf', | ||
1225 | 128 | contexts=[context.HAProxyContext()]) | ||
1226 | 129 | # write out a single config | ||
1227 | 130 | configs.write('/etc/nova/nova.conf') | ||
1228 | 131 | # write out all registered configs | ||
1229 | 132 | configs.write_all() | ||
1230 | 133 | |||
1231 | 134 | Details: | ||
1232 | 135 | |||
1233 | 136 | OpenStack Releases and template loading | ||
1234 | 137 | --------------------------------------- | ||
1235 | 138 | When the object is instantiated, it is associated with a specific OS | ||
1236 | 139 | release. This dictates how the template loader will be constructed. | ||
1237 | 140 | |||
1238 | 141 | The constructed loader attempts to load the template from several places | ||
1239 | 142 | in the following order: | ||
1240 | 143 | - from the most recent OS release-specific template dir (if one exists) | ||
1241 | 144 | - the base templates_dir | ||
1242 | 145 | - a template directory shipped in the charm with this helper file. | ||
1243 | 146 | |||
1244 | 147 | |||
1245 | 148 | For the example above, '/tmp/templates' contains the following structure: | ||
1246 | 149 | /tmp/templates/nova.conf | ||
1247 | 150 | /tmp/templates/api-paste.ini | ||
1248 | 151 | /tmp/templates/grizzly/api-paste.ini | ||
1249 | 152 | /tmp/templates/havana/api-paste.ini | ||
1250 | 153 | |||
1251 | 154 | Since it was registered with the grizzly release, it first searches | ||
1252 | 155 | the grizzly directory for nova.conf, then the templates dir. | ||
1253 | 156 | |||
1254 | 157 | When writing api-paste.ini, it will find the template in the grizzly | ||
1255 | 158 | directory. | ||
1256 | 159 | |||
1257 | 160 | If the object were created with folsom, it would fall back to the | ||
1258 | 161 | base templates dir for its api-paste.ini template. | ||
1259 | 162 | |||
1260 | 163 | This system should help manage changes in config files through | ||
1261 | 164 | openstack releases, allowing charms to fall back to the most recently | ||
1262 | 165 | updated config template for a given release. | ||
1263 | 166 | |||
1264 | 167 | The haproxy.conf, since it is not shipped in the templates dir, will | ||
1265 | 168 | be loaded from the module directory's template directory, eg | ||
1266 | 169 | $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows | ||
1267 | 170 | us to ship common templates (haproxy, apache) with the helpers. | ||
1268 | 171 | |||
1269 | 172 | Context generators | ||
1270 | 173 | --------------------------------------- | ||
1271 | 174 | Context generators are used to generate template contexts during hook | ||
1272 | 175 | execution. Doing so may require inspecting service relations, charm | ||
1273 | 176 | config, etc. When registered, a config file is associated with a list | ||
1274 | 177 | of generators. When a template is rendered and written, all context | ||
1275 | 178 | generators are called in a chain to generate the context dictionary | ||
1276 | 179 | passed to the jinja2 template. See context.py for more info. | ||
1277 | 180 | """ | ||
1278 | 181 | def __init__(self, templates_dir, openstack_release): | ||
1279 | 182 | if not os.path.isdir(templates_dir): | ||
1280 | 183 | log('Could not locate templates dir %s' % templates_dir, | ||
1281 | 184 | level=ERROR) | ||
1282 | 185 | raise OSConfigException | ||
1283 | 186 | |||
1284 | 187 | self.templates_dir = templates_dir | ||
1285 | 188 | self.openstack_release = openstack_release | ||
1286 | 189 | self.templates = {} | ||
1287 | 190 | self._tmpl_env = None | ||
1288 | 191 | |||
1289 | 192 | if None in [Environment, ChoiceLoader, FileSystemLoader]: | ||
1290 | 193 | # if this code is running, the object is created pre-install hook. | ||
1291 | 194 | # jinja2 shouldn't get touched until the module is reloaded on next | ||
1292 | 195 | # hook execution, with proper jinja2 bits successfully imported. | ||
1293 | 196 | apt_install('python-jinja2') | ||
1294 | 197 | |||
1295 | 198 | def register(self, config_file, contexts): | ||
1296 | 199 | """ | ||
1297 | 200 | Register a config file with a list of context generators to be called | ||
1298 | 201 | during rendering. | ||
1299 | 202 | """ | ||
1300 | 203 | self.templates[config_file] = OSConfigTemplate(config_file=config_file, | ||
1301 | 204 | contexts=contexts) | ||
1302 | 205 | log('Registered config file: %s' % config_file, level=INFO) | ||
1303 | 206 | |||
1304 | 207 | def _get_tmpl_env(self): | ||
1305 | 208 | if not self._tmpl_env: | ||
1306 | 209 | loader = get_loader(self.templates_dir, self.openstack_release) | ||
1307 | 210 | self._tmpl_env = Environment(loader=loader) | ||
1308 | 211 | |||
1309 | 212 | def _get_template(self, template): | ||
1310 | 213 | self._get_tmpl_env() | ||
1311 | 214 | template = self._tmpl_env.get_template(template) | ||
1312 | 215 | log('Loaded template from %s' % template.filename, level=INFO) | ||
1313 | 216 | return template | ||
1314 | 217 | |||
1315 | 218 | def render(self, config_file): | ||
1316 | 219 | if config_file not in self.templates: | ||
1317 | 220 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1318 | 221 | raise OSConfigException | ||
1319 | 222 | ctxt = self.templates[config_file].context() | ||
1320 | 223 | |||
1321 | 224 | _tmpl = os.path.basename(config_file) | ||
1322 | 225 | try: | ||
1323 | 226 | template = self._get_template(_tmpl) | ||
1324 | 227 | except exceptions.TemplateNotFound: | ||
1325 | 228 | # if no template is found with basename, try looking for it | ||
1326 | 229 | # using a munged full path, eg: | ||
1327 | 230 | # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf | ||
1328 | 231 | _tmpl = '_'.join(config_file.split('/')[1:]) | ||
1329 | 232 | try: | ||
1330 | 233 | template = self._get_template(_tmpl) | ||
1331 | 234 | except exceptions.TemplateNotFound as e: | ||
1332 | 235 | log('Could not load template from %s by %s or %s.' % | ||
1333 | 236 | (self.templates_dir, os.path.basename(config_file), _tmpl), | ||
1334 | 237 | level=ERROR) | ||
1335 | 238 | raise e | ||
1336 | 239 | |||
1337 | 240 | log('Rendering from template: %s' % _tmpl, level=INFO) | ||
1338 | 241 | return template.render(ctxt) | ||
1339 | 242 | |||
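The fallback name render() tries when the basename lookup fails is simply the path with the leading slash dropped and the remaining separators replaced by underscores:

```python
def munged_template_name(config_file):
    """Fallback template name used by render() above, e.g.
    /etc/apache2/apache2.conf -> etc_apache2_apache2.conf"""
    return '_'.join(config_file.split('/')[1:])
```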
1340 | 243 | def write(self, config_file): | ||
1341 | 244 | """ | ||
1342 | 245 | Write a single config file, raises if config file is not registered. | ||
1343 | 246 | """ | ||
1344 | 247 | if config_file not in self.templates: | ||
1345 | 248 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1346 | 249 | raise OSConfigException | ||
1347 | 250 | |||
1348 | 251 | _out = self.render(config_file) | ||
1349 | 252 | |||
1350 | 253 | with open(config_file, 'wb') as out: | ||
1351 | 254 | out.write(_out) | ||
1352 | 255 | |||
1353 | 256 | log('Wrote template %s.' % config_file, level=INFO) | ||
1354 | 257 | |||
1355 | 258 | def write_all(self): | ||
1356 | 259 | """ | ||
1357 | 260 | Write out all registered config files. | ||
1358 | 261 | """ | ||
1359 | 262 | [self.write(k) for k in self.templates.iterkeys()] | ||
1360 | 263 | |||
1361 | 264 | def set_release(self, openstack_release): | ||
1362 | 265 | """ | ||
1363 | 266 | Resets the template environment and generates a new template loader | ||
1364 | 267 | based on the new openstack release. | ||
1365 | 268 | """ | ||
1366 | 269 | self._tmpl_env = None | ||
1367 | 270 | self.openstack_release = openstack_release | ||
1368 | 271 | self._get_tmpl_env() | ||
1369 | 272 | |||
1370 | 273 | def complete_contexts(self): | ||
1371 | 274 | ''' | ||
1372 | 275 | Returns a list of context interfaces that yield a complete context. | ||
1373 | 276 | ''' | ||
1374 | 277 | interfaces = [] | ||
1375 | 278 | [interfaces.extend(i.complete_contexts()) | ||
1376 | 279 | for i in self.templates.itervalues()] | ||
1377 | 280 | return interfaces | ||
1378 | 0 | 281 | ||
1379 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1380 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 | |||
1381 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 01:35:02 +0000 | |||
1382 | @@ -0,0 +1,365 @@ | |||
1383 | 1 | #!/usr/bin/python | ||
1384 | 2 | |||
1385 | 3 | # Common python helper functions used for OpenStack charms. | ||
1386 | 4 | from collections import OrderedDict | ||
1387 | 5 | |||
1388 | 6 | import apt_pkg as apt | ||
1389 | 7 | import subprocess | ||
1390 | 8 | import os | ||
1391 | 9 | import socket | ||
1392 | 10 | import sys | ||
1393 | 11 | |||
1394 | 12 | from charmhelpers.core.hookenv import ( | ||
1395 | 13 | config, | ||
1396 | 14 | log as juju_log, | ||
1397 | 15 | charm_dir, | ||
1398 | 16 | ) | ||
1399 | 17 | |||
1400 | 18 | from charmhelpers.core.host import ( | ||
1401 | 19 | lsb_release, | ||
1402 | 20 | ) | ||
1403 | 21 | |||
1404 | 22 | from charmhelpers.fetch import ( | ||
1405 | 23 | apt_install, | ||
1406 | 24 | ) | ||
1407 | 25 | |||
1408 | 26 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | ||
1409 | 27 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | ||
1410 | 28 | |||
1411 | 29 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ | ||
1412 | 30 | ('oneiric', 'diablo'), | ||
1413 | 31 | ('precise', 'essex'), | ||
1414 | 32 | ('quantal', 'folsom'), | ||
1415 | 33 | ('raring', 'grizzly'), | ||
1416 | 34 | ('saucy', 'havana'), | ||
1417 | 35 | ]) | ||
1418 | 36 | |||
1419 | 37 | |||
1420 | 38 | OPENSTACK_CODENAMES = OrderedDict([ | ||
1421 | 39 | ('2011.2', 'diablo'), | ||
1422 | 40 | ('2012.1', 'essex'), | ||
1423 | 41 | ('2012.2', 'folsom'), | ||
1424 | 42 | ('2013.1', 'grizzly'), | ||
1425 | 43 | ('2013.2', 'havana'), | ||
1426 | 44 | ('2014.1', 'icehouse'), | ||
1427 | 45 | ]) | ||
1428 | 46 | |||
1429 | 47 | # The ugly duckling | ||
1430 | 48 | SWIFT_CODENAMES = OrderedDict([ | ||
1431 | 49 | ('1.4.3', 'diablo'), | ||
1432 | 50 | ('1.4.8', 'essex'), | ||
1433 | 51 | ('1.7.4', 'folsom'), | ||
1434 | 52 | ('1.8.0', 'grizzly'), | ||
1435 | 53 | ('1.7.7', 'grizzly'), | ||
1436 | 54 | ('1.7.6', 'grizzly'), | ||
1437 | 55 | ('1.10.0', 'havana'), | ||
1438 | 56 | ('1.9.1', 'havana'), | ||
1439 | 57 | ('1.9.0', 'havana'), | ||
1440 | 58 | ]) | ||
1441 | 59 | |||
1442 | 60 | |||
1443 | 61 | def error_out(msg): | ||
1444 | 62 | juju_log("FATAL ERROR: %s" % msg, level='ERROR') | ||
1445 | 63 | sys.exit(1) | ||
1446 | 64 | |||
1447 | 65 | |||
1448 | 66 | def get_os_codename_install_source(src): | ||
1449 | 67 | '''Derive OpenStack release codename from a given installation source.''' | ||
1450 | 68 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
1451 | 69 | rel = '' | ||
1452 | 70 | if src == 'distro': | ||
1453 | 71 | try: | ||
1454 | 72 | rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] | ||
1455 | 73 | except KeyError: | ||
1456 | 74 | e = 'Could not derive openstack release for '\ | ||
1457 | 75 | 'this Ubuntu release: %s' % ubuntu_rel | ||
1458 | 76 | error_out(e) | ||
1459 | 77 | return rel | ||
1460 | 78 | |||
1461 | 79 | if src.startswith('cloud:'): | ||
1462 | 80 | ca_rel = src.split(':')[1] | ||
1463 | 81 | ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] | ||
1464 | 82 | return ca_rel | ||
1465 | 83 | |||
1466 | 84 | # Best guess match based on deb string provided | ||
1467 | 85 | if src.startswith('deb') or src.startswith('ppa'): | ||
1468 | 86 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1469 | 87 | if v in src: | ||
1470 | 88 | return v | ||
1471 | 89 | |||
1472 | 90 | |||
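Review note: the `cloud:` branch of `get_os_codename_install_source` can be exercised in isolation. A minimal sketch of that parsing (the standalone function name is hypothetical, not part of the charm):

```python
def codename_from_cloud_source(src, ubuntu_rel='precise'):
    # Mirrors the parsing above: 'cloud:precise-folsom/updates' -> 'folsom'.
    ca_rel = src.split(':')[1]
    return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

print(codename_from_cloud_source('cloud:precise-grizzly/proposed'))  # -> grizzly
```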
1473 | 91 | def get_os_version_install_source(src): | ||
1474 | 92 | codename = get_os_codename_install_source(src) | ||
1475 | 93 | return get_os_version_codename(codename) | ||
1476 | 94 | |||
1477 | 95 | |||
1478 | 96 | def get_os_codename_version(vers): | ||
1479 | 97 | '''Determine OpenStack codename from version number.''' | ||
1480 | 98 | try: | ||
1481 | 99 | return OPENSTACK_CODENAMES[vers] | ||
1482 | 100 | except KeyError: | ||
1483 | 101 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1484 | 102 | error_out(e) | ||
1485 | 103 | |||
1486 | 104 | |||
1487 | 105 | def get_os_version_codename(codename): | ||
1488 | 106 | '''Determine OpenStack version number from codename.''' | ||
1489 | 107 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1490 | 108 | if v == codename: | ||
1491 | 109 | return k | ||
1492 | 110 | e = 'Could not derive OpenStack version for '\ | ||
1493 | 111 | 'codename: %s' % codename | ||
1494 | 112 | error_out(e) | ||
1495 | 113 | |||
1496 | 114 | |||
1497 | 115 | def get_os_codename_package(package, fatal=True): | ||
1498 | 116 | '''Derive OpenStack release codename from an installed package.''' | ||
1499 | 117 | apt.init() | ||
1500 | 118 | cache = apt.Cache() | ||
1501 | 119 | |||
1502 | 120 | try: | ||
1503 | 121 | pkg = cache[package] | ||
1504 | 122 | except: | ||
1505 | 123 | if not fatal: | ||
1506 | 124 | return None | ||
1507 | 125 | # the package is unknown to the current apt cache. | ||
1508 | 126 | e = 'Could not determine version of package with no installation '\ | ||
1509 | 127 | 'candidate: %s' % package | ||
1510 | 128 | error_out(e) | ||
1511 | 129 | |||
1512 | 130 | if not pkg.current_ver: | ||
1513 | 131 | if not fatal: | ||
1514 | 132 | return None | ||
1515 | 133 | # package is known, but no version is currently installed. | ||
1516 | 134 | e = 'Could not determine version of uninstalled package: %s' % package | ||
1517 | 135 | error_out(e) | ||
1518 | 136 | |||
1519 | 137 | vers = apt.upstream_version(pkg.current_ver.ver_str) | ||
1520 | 138 | |||
1521 | 139 | try: | ||
1522 | 140 | if 'swift' in pkg.name: | ||
1523 | 141 | swift_vers = vers[:5] | ||
1524 | 142 | if swift_vers not in SWIFT_CODENAMES: | ||
1525 | 143 | # Deal with 1.10.0 upward | ||
1526 | 144 | swift_vers = vers[:6] | ||
1527 | 145 | return SWIFT_CODENAMES[swift_vers] | ||
1528 | 146 | else: | ||
1529 | 147 | vers = vers[:6] | ||
1530 | 148 | return OPENSTACK_CODENAMES[vers] | ||
1531 | 149 | except KeyError: | ||
1532 | 150 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1533 | 151 | error_out(e) | ||
1534 | 152 | |||
1535 | 153 | |||
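The Swift handling above is worth a second look: a 5-character version prefix matches most releases, but 1.10.0 needs 6. A standalone sketch of that disambiguation (codename table trimmed, function name hypothetical):

```python
SWIFT_CODENAMES = {'1.8.0': 'grizzly', '1.9.1': 'havana', '1.10.0': 'havana'}

def swift_codename(vers):
    # Try the 5-char version prefix first, then fall back to 6 chars
    # so 1.10.0 and later do not fall through to a KeyError.
    key = vers[:5]
    if key not in SWIFT_CODENAMES:
        key = vers[:6]
    return SWIFT_CODENAMES[key]

print(swift_codename('1.10.0'))  # -> havana
```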
1536 | 154 | def get_os_version_package(pkg, fatal=True): | ||
1537 | 155 | '''Derive OpenStack version number from an installed package.''' | ||
1538 | 156 | codename = get_os_codename_package(pkg, fatal=fatal) | ||
1539 | 157 | |||
1540 | 158 | if not codename: | ||
1541 | 159 | return None | ||
1542 | 160 | |||
1543 | 161 | if 'swift' in pkg: | ||
1544 | 162 | vers_map = SWIFT_CODENAMES | ||
1545 | 163 | else: | ||
1546 | 164 | vers_map = OPENSTACK_CODENAMES | ||
1547 | 165 | |||
1548 | 166 | for version, cname in vers_map.iteritems(): | ||
1549 | 167 | if cname == codename: | ||
1550 | 168 | return version | ||
1551 | 169 | #e = "Could not determine OpenStack version for package: %s" % pkg | ||
1552 | 170 | #error_out(e) | ||
1553 | 171 | |||
1554 | 172 | |||
1555 | 173 | os_rel = None | ||
1556 | 174 | |||
1557 | 175 | |||
1558 | 176 | def os_release(package, base='essex'): | ||
1559 | 177 | ''' | ||
1560 | 178 | Returns OpenStack release codename from a cached global. | ||
1561 | 179 | If the codename can not be determined from either an installed package or | ||
1562 | 180 | If the codename cannot be determined from either an installed package or | ||
1563 | 181 | be returned. | ||
1564 | 182 | ''' | ||
1565 | 183 | global os_rel | ||
1566 | 184 | if os_rel: | ||
1567 | 185 | return os_rel | ||
1568 | 186 | os_rel = (get_os_codename_package(package, fatal=False) or | ||
1569 | 187 | get_os_codename_install_source(config('openstack-origin')) or | ||
1570 | 188 | base) | ||
1571 | 189 | return os_rel | ||
1572 | 190 | |||
1573 | 191 | |||
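`os_release` memoizes into a module-level global so repeated hook calls avoid re-querying apt. The caching pattern in isolation (names here are illustrative only):

```python
_cached = None

def os_release_sketch(lookup, base='essex'):
    # Compute once via the supplied lookup, fall back to the charm's
    # earliest supported release, then serve the cached value forever.
    global _cached
    if _cached is None:
        _cached = lookup() or base
    return _cached

calls = []
def lookup():
    calls.append(1)
    return 'folsom'

print(os_release_sketch(lookup), os_release_sketch(lookup), len(calls))
```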
1574 | 192 | def import_key(keyid): | ||
1575 | 193 | cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ | ||
1576 | 194 | "--recv-keys %s" % keyid | ||
1577 | 195 | try: | ||
1578 | 196 | subprocess.check_call(cmd.split(' ')) | ||
1579 | 197 | except subprocess.CalledProcessError: | ||
1580 | 198 | error_out("Error importing repo key %s" % keyid) | ||
1581 | 199 | |||
1582 | 200 | |||
1583 | 201 | def configure_installation_source(rel): | ||
1584 | 202 | '''Configure apt installation source.''' | ||
1585 | 203 | if rel == 'distro': | ||
1586 | 204 | return | ||
1587 | 205 | elif rel[:4] == "ppa:": | ||
1588 | 206 | src = rel | ||
1589 | 207 | subprocess.check_call(["add-apt-repository", "-y", src]) | ||
1590 | 208 | elif rel[:3] == "deb": | ||
1591 | 209 | l = len(rel.split('|')) | ||
1592 | 210 | if l == 2: | ||
1593 | 211 | src, key = rel.split('|') | ||
1594 | 212 | juju_log("Importing PPA key from keyserver for %s" % src) | ||
1595 | 213 | import_key(key) | ||
1596 | 214 | elif l == 1: | ||
1597 | 215 | src = rel | ||
1598 | 216 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
1599 | 217 | f.write(src) | ||
1600 | 218 | elif rel[:6] == 'cloud:': | ||
1601 | 219 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
1602 | 220 | rel = rel.split(':')[1] | ||
1603 | 221 | u_rel = rel.split('-')[0] | ||
1604 | 222 | ca_rel = rel.split('-')[1] | ||
1605 | 223 | |||
1606 | 224 | if u_rel != ubuntu_rel: | ||
1607 | 225 | e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ | ||
1608 | 226 | 'version (%s)' % (ca_rel, ubuntu_rel) | ||
1609 | 227 | error_out(e) | ||
1610 | 228 | |||
1611 | 229 | if 'staging' in ca_rel: | ||
1612 | 230 | # staging is just a regular PPA. | ||
1613 | 231 | os_rel = ca_rel.split('/')[0] | ||
1614 | 232 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
1615 | 233 | cmd = 'add-apt-repository -y %s' % ppa | ||
1616 | 234 | subprocess.check_call(cmd.split(' ')) | ||
1617 | 235 | return | ||
1618 | 236 | |||
1619 | 237 | # map charm config options to actual archive pockets. | ||
1620 | 238 | pockets = { | ||
1621 | 239 | 'folsom': 'precise-updates/folsom', | ||
1622 | 240 | 'folsom/updates': 'precise-updates/folsom', | ||
1623 | 241 | 'folsom/proposed': 'precise-proposed/folsom', | ||
1624 | 242 | 'grizzly': 'precise-updates/grizzly', | ||
1625 | 243 | 'grizzly/updates': 'precise-updates/grizzly', | ||
1626 | 244 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
1627 | 245 | 'havana': 'precise-updates/havana', | ||
1628 | 246 | 'havana/updates': 'precise-updates/havana', | ||
1629 | 247 | 'havana/proposed': 'precise-proposed/havana', | ||
1630 | 248 | } | ||
1631 | 249 | |||
1632 | 250 | try: | ||
1633 | 251 | pocket = pockets[ca_rel] | ||
1634 | 252 | except KeyError: | ||
1635 | 253 | e = 'Invalid Cloud Archive release specified: %s' % rel | ||
1636 | 254 | error_out(e) | ||
1637 | 255 | |||
1638 | 256 | src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) | ||
1639 | 257 | apt_install('ubuntu-cloud-keyring', fatal=True) | ||
1640 | 258 | |||
1641 | 259 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: | ||
1642 | 260 | f.write(src) | ||
1643 | 261 | else: | ||
1644 | 262 | error_out("Invalid openstack-release specified: %s" % rel) | ||
1645 | 263 | |||
1646 | 264 | |||
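The pocket table maps the user-facing `cloud:` values onto actual archive pockets before the sources.list entry is written. A reduced sketch of the lookup and the resulting deb line (table trimmed to havana for brevity):

```python
CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
POCKETS = {
    'havana': 'precise-updates/havana',
    'havana/proposed': 'precise-proposed/havana',
}

def cloud_archive_line(ca_rel):
    # Unknown pockets raise KeyError, where the charm calls error_out().
    return "deb %s %s main" % (CLOUD_ARCHIVE_URL, POCKETS[ca_rel])

print(cloud_archive_line('havana'))
```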
1647 | 265 | def save_script_rc(script_path="scripts/scriptrc", **env_vars): | ||
1648 | 266 | """ | ||
1649 | 267 | Write an rc file in the charm-delivered directory containing | ||
1650 | 268 | exported environment variables provided by env_vars. Any charm scripts run | ||
1651 | 269 | outside the juju hook environment can source this scriptrc to obtain | ||
1652 | 270 | updated config information necessary to perform health checks or | ||
1653 | 271 | service changes. | ||
1654 | 272 | """ | ||
1655 | 273 | juju_rc_path = "%s/%s" % (charm_dir(), script_path) | ||
1656 | 274 | if not os.path.exists(os.path.dirname(juju_rc_path)): | ||
1657 | 275 | os.mkdir(os.path.dirname(juju_rc_path)) | ||
1658 | 276 | with open(juju_rc_path, 'wb') as rc_script: | ||
1659 | 277 | rc_script.write( | ||
1660 | 278 | "#!/bin/bash\n") | ||
1661 | 279 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
1662 | 280 | for u, p in env_vars.iteritems() if u != "script_path"] | ||
1663 | 281 | |||
1664 | 282 | |||
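`save_script_rc` writes a sourceable bash fragment. The same logic against an explicit path, so it runs outside a charm directory (function name here is a stand-in):

```python
import os
import tempfile

def write_script_rc(path, **env_vars):
    # Sketch of save_script_rc above: a shebang line followed by
    # one 'export K=V' per variable, skipping the reserved key.
    with open(path, 'w') as rc:
        rc.write("#!/bin/bash\n")
        for k, v in env_vars.items():
            if k != "script_path":
                rc.write('export %s=%s\n' % (k, v))

path = os.path.join(tempfile.mkdtemp(), 'scriptrc')
write_script_rc(path, OPENSTACK_SERVICE_API='glance-api')
print(open(path).read())
```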
1665 | 283 | def openstack_upgrade_available(package): | ||
1666 | 284 | """ | ||
1667 | 285 | Determines if an OpenStack upgrade is available from installation | ||
1668 | 286 | source, based on version of installed package. | ||
1669 | 287 | |||
1670 | 288 | :param package: str: Name of installed package. | ||
1671 | 289 | |||
1672 | 290 | :returns: bool: Returns True if configured installation source offers | ||
1673 | 291 | a newer version of package. | ||
1674 | 292 | |||
1675 | 293 | """ | ||
1676 | 294 | |||
1677 | 295 | src = config('openstack-origin') | ||
1678 | 296 | cur_vers = get_os_version_package(package) | ||
1679 | 297 | available_vers = get_os_version_install_source(src) | ||
1680 | 298 | apt.init() | ||
1681 | 299 | return apt.version_compare(available_vers, cur_vers) == 1 | ||
1682 | 300 | |||
1683 | 301 | |||
1684 | 302 | def is_ip(address): | ||
1685 | 303 | """ | ||
1686 | 304 | Returns True if address is a valid IP address. | ||
1687 | 305 | """ | ||
1688 | 306 | try: | ||
1689 | 307 | # Test to see if already an IPv4 address | ||
1690 | 308 | socket.inet_aton(address) | ||
1691 | 309 | return True | ||
1692 | 310 | except socket.error: | ||
1693 | 311 | return False | ||
1694 | 312 | |||
1695 | 313 | |||
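`is_ip` leans on `socket.inet_aton` rather than a regex: anything the resolver accepts as IPv4 counts. Self-contained:

```python
import socket

def is_ip(address):
    # inet_aton raises socket.error for anything that is not a
    # parseable IPv4 address (hostnames included).
    try:
        socket.inet_aton(address)
        return True
    except socket.error:
        return False

print(is_ip('10.0.0.1'), is_ip('example.com'))  # True False
```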
1696 | 314 | def ns_query(address): | ||
1697 | 315 | try: | ||
1698 | 316 | import dns.resolver | ||
1699 | 317 | except ImportError: | ||
1700 | 318 | apt_install('python-dnspython') | ||
1701 | 319 | import dns.resolver | ||
1702 | 320 | |||
1703 | 321 | if isinstance(address, dns.name.Name): | ||
1704 | 322 | rtype = 'PTR' | ||
1705 | 323 | elif isinstance(address, basestring): | ||
1706 | 324 | rtype = 'A' | ||
1707 | 325 | |||
1708 | 326 | answers = dns.resolver.query(address, rtype) | ||
1709 | 327 | if answers: | ||
1710 | 328 | return str(answers[0]) | ||
1711 | 329 | return None | ||
1712 | 330 | |||
1713 | 331 | |||
1714 | 332 | def get_host_ip(hostname): | ||
1715 | 333 | """ | ||
1716 | 334 | Resolves the IP for a given hostname, or returns | ||
1717 | 335 | the input if it is already an IP. | ||
1718 | 336 | """ | ||
1719 | 337 | if is_ip(hostname): | ||
1720 | 338 | return hostname | ||
1721 | 339 | |||
1722 | 340 | return ns_query(hostname) | ||
1723 | 341 | |||
1724 | 342 | |||
1725 | 343 | def get_hostname(address): | ||
1726 | 344 | """ | ||
1727 | 345 | Resolves hostname for given IP, or returns the input | ||
1728 | 346 | if it is already a hostname. | ||
1729 | 347 | """ | ||
1730 | 348 | if not is_ip(address): | ||
1731 | 349 | return address | ||
1732 | 350 | |||
1733 | 351 | try: | ||
1734 | 352 | import dns.reversename | ||
1735 | 353 | except ImportError: | ||
1736 | 354 | apt_install('python-dnspython') | ||
1737 | 355 | import dns.reversename | ||
1738 | 356 | |||
1739 | 357 | rev = dns.reversename.from_address(address) | ||
1740 | 358 | result = ns_query(rev) | ||
1741 | 359 | if not result: | ||
1742 | 360 | return None | ||
1743 | 361 | |||
1744 | 362 | # strip trailing . | ||
1745 | 363 | if result.endswith('.'): | ||
1746 | 364 | return result[:-1] | ||
1747 | 365 | return result | ||
1748 | 0 | 366 | ||
1749 | === added directory 'hooks/charmhelpers/contrib/storage' | |||
1750 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' | |||
1751 | === added directory 'hooks/charmhelpers/contrib/storage/linux' | |||
1752 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' | |||
1753 | === added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' | |||
1754 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000 | |||
1755 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2013-10-15 01:35:02 +0000 | |||
1756 | @@ -0,0 +1,359 @@ | |||
1757 | 1 | # | ||
1758 | 2 | # Copyright 2012 Canonical Ltd. | ||
1759 | 3 | # | ||
1760 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1761 | 5 | # | ||
1762 | 6 | # Authors: | ||
1763 | 7 | # James Page <james.page@ubuntu.com> | ||
1764 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
1765 | 9 | # | ||
1766 | 10 | |||
1767 | 11 | import os | ||
1768 | 12 | import shutil | ||
1769 | 13 | import json | ||
1770 | 14 | import time | ||
1771 | 15 | |||
1772 | 16 | from subprocess import ( | ||
1773 | 17 | check_call, | ||
1774 | 18 | check_output, | ||
1775 | 19 | CalledProcessError | ||
1776 | 20 | ) | ||
1777 | 21 | |||
1778 | 22 | from charmhelpers.core.hookenv import ( | ||
1779 | 23 | relation_get, | ||
1780 | 24 | relation_ids, | ||
1781 | 25 | related_units, | ||
1782 | 26 | log, | ||
1783 | 27 | INFO, | ||
1784 | 28 | WARNING, | ||
1785 | 29 | ERROR | ||
1786 | 30 | ) | ||
1787 | 31 | |||
1788 | 32 | from charmhelpers.core.host import ( | ||
1789 | 33 | mount, | ||
1790 | 34 | mounts, | ||
1791 | 35 | service_start, | ||
1792 | 36 | service_stop, | ||
1793 | 37 | service_running, | ||
1794 | 38 | umount, | ||
1795 | 39 | ) | ||
1796 | 40 | |||
1797 | 41 | from charmhelpers.fetch import ( | ||
1798 | 42 | apt_install, | ||
1799 | 43 | ) | ||
1800 | 44 | |||
1801 | 45 | KEYRING = '/etc/ceph/ceph.client.{}.keyring' | ||
1802 | 46 | KEYFILE = '/etc/ceph/ceph.client.{}.key' | ||
1803 | 47 | |||
1804 | 48 | CEPH_CONF = """[global] | ||
1805 | 49 | auth supported = {auth} | ||
1806 | 50 | keyring = {keyring} | ||
1807 | 51 | mon host = {mon_hosts} | ||
1808 | 52 | """ | ||
1809 | 53 | |||
1810 | 54 | |||
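The `CEPH_CONF` template above renders with `str.format`; a quick check of the output shape (all values below are made up):

```python
CEPH_CONF = """[global]
 auth supported = {auth}
 keyring = {keyring}
 mon host = {mon_hosts}
"""

rendered = CEPH_CONF.format(auth='cephx',
                            keyring='/etc/ceph/ceph.client.glance.keyring',
                            mon_hosts='10.0.0.1,10.0.0.2')
print(rendered)
```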
1811 | 55 | def install(): | ||
1812 | 56 | ''' Basic Ceph client installation ''' | ||
1813 | 57 | ceph_dir = "/etc/ceph" | ||
1814 | 58 | if not os.path.exists(ceph_dir): | ||
1815 | 59 | os.mkdir(ceph_dir) | ||
1816 | 60 | apt_install('ceph-common', fatal=True) | ||
1817 | 61 | |||
1818 | 62 | |||
1819 | 63 | def rbd_exists(service, pool, rbd_img): | ||
1820 | 64 | ''' Check to see if a RADOS block device exists ''' | ||
1821 | 65 | try: | ||
1822 | 66 | out = check_output(['rbd', 'list', '--id', service, | ||
1823 | 67 | '--pool', pool]) | ||
1824 | 68 | except CalledProcessError: | ||
1825 | 69 | return False | ||
1826 | 70 | else: | ||
1827 | 71 | return rbd_img in out | ||
1828 | 72 | |||
1829 | 73 | |||
1830 | 74 | def create_rbd_image(service, pool, image, sizemb): | ||
1831 | 75 | ''' Create a new RADOS block device ''' | ||
1832 | 76 | cmd = [ | ||
1833 | 77 | 'rbd', | ||
1834 | 78 | 'create', | ||
1835 | 79 | image, | ||
1836 | 80 | '--size', | ||
1837 | 81 | str(sizemb), | ||
1838 | 82 | '--id', | ||
1839 | 83 | service, | ||
1840 | 84 | '--pool', | ||
1841 | 85 | pool | ||
1842 | 86 | ] | ||
1843 | 87 | check_call(cmd) | ||
1844 | 88 | |||
1845 | 89 | |||
1846 | 90 | def pool_exists(service, name): | ||
1847 | 91 | ''' Check to see if a RADOS pool already exists ''' | ||
1848 | 92 | try: | ||
1849 | 93 | out = check_output(['rados', '--id', service, 'lspools']) | ||
1850 | 94 | except CalledProcessError: | ||
1851 | 95 | return False | ||
1852 | 96 | else: | ||
1853 | 97 | return name in out | ||
1854 | 98 | |||
1855 | 99 | |||
1856 | 100 | def get_osds(service): | ||
1857 | 101 | ''' | ||
1858 | 102 | Return a list of all Ceph Object Storage Daemons | ||
1859 | 103 | currently in the cluster | ||
1860 | 104 | ''' | ||
1861 | 105 | return json.loads(check_output(['ceph', '--id', service, | ||
1862 | 106 | 'osd', 'ls', '--format=json'])) | ||
1863 | 107 | |||
1864 | 108 | |||
1865 | 109 | def create_pool(service, name, replicas=2): | ||
1866 | 110 | ''' Create a new RADOS pool ''' | ||
1867 | 111 | if pool_exists(service, name): | ||
1868 | 112 | log("Ceph pool {} already exists, skipping creation".format(name), | ||
1869 | 113 | level=WARNING) | ||
1870 | 114 | return | ||
1871 | 115 | # Calculate the number of placement groups based | ||
1872 | 116 | # on upstream recommended best practices. | ||
1873 | 117 | pgnum = (len(get_osds(service)) * 100 / replicas) | ||
1874 | 118 | cmd = [ | ||
1875 | 119 | 'ceph', '--id', service, | ||
1876 | 120 | 'osd', 'pool', 'create', | ||
1877 | 121 | name, str(pgnum) | ||
1878 | 122 | ] | ||
1879 | 123 | check_call(cmd) | ||
1880 | 124 | cmd = [ | ||
1881 | 125 | 'ceph', '--id', service, | ||
1882 | 126 | 'osd', 'pool', 'set', name, | ||
1883 | 127 | 'size', str(replicas) | ||
1884 | 128 | ] | ||
1885 | 129 | check_call(cmd) | ||
1886 | 130 | |||
1887 | 131 | |||
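`create_pool` sizes placement groups with the upstream rule of thumb of roughly 100 PGs per OSD, divided by the replica count. The arithmetic in isolation:

```python
def placement_groups(n_osds, replicas=2):
    # Heuristic from create_pool(): 100 PGs per OSD / replica count.
    # (Integer division, matching the Python 2 '/' in the charm.)
    return n_osds * 100 // replicas

print(placement_groups(6))     # -> 300
print(placement_groups(3, 3))  # -> 100
```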
1888 | 132 | def delete_pool(service, name): | ||
1889 | 133 | ''' Delete a RADOS pool from ceph ''' | ||
1890 | 134 | cmd = [ | ||
1891 | 135 | 'ceph', '--id', service, | ||
1892 | 136 | 'osd', 'pool', 'delete', | ||
1893 | 137 | name, '--yes-i-really-really-mean-it' | ||
1894 | 138 | ] | ||
1895 | 139 | check_call(cmd) | ||
1896 | 140 | |||
1897 | 141 | |||
1898 | 142 | def _keyfile_path(service): | ||
1899 | 143 | return KEYFILE.format(service) | ||
1900 | 144 | |||
1901 | 145 | |||
1902 | 146 | def _keyring_path(service): | ||
1903 | 147 | return KEYRING.format(service) | ||
1904 | 148 | |||
1905 | 149 | |||
1906 | 150 | def create_keyring(service, key): | ||
1907 | 151 | ''' Create a new Ceph keyring containing key ''' | ||
1908 | 152 | keyring = _keyring_path(service) | ||
1909 | 153 | if os.path.exists(keyring): | ||
1910 | 154 | log('ceph: Keyring exists at %s.' % keyring, level=WARNING) | ||
1911 | 155 | return | ||
1912 | 156 | cmd = [ | ||
1913 | 157 | 'ceph-authtool', | ||
1914 | 158 | keyring, | ||
1915 | 159 | '--create-keyring', | ||
1916 | 160 | '--name=client.{}'.format(service), | ||
1917 | 161 | '--add-key={}'.format(key) | ||
1918 | 162 | ] | ||
1919 | 163 | check_call(cmd) | ||
1920 | 164 | log('ceph: Created new ring at %s.' % keyring, level=INFO) | ||
1921 | 165 | |||
1922 | 166 | |||
1923 | 167 | def create_key_file(service, key): | ||
1924 | 168 | ''' Create a file containing key ''' | ||
1925 | 169 | keyfile = _keyfile_path(service) | ||
1926 | 170 | if os.path.exists(keyfile): | ||
1927 | 171 | log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) | ||
1928 | 172 | return | ||
1929 | 173 | with open(keyfile, 'w') as fd: | ||
1930 | 174 | fd.write(key) | ||
1931 | 175 | log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) | ||
1932 | 176 | |||
1933 | 177 | |||
1934 | 178 | def get_ceph_nodes(): | ||
1935 | 179 | ''' Query named relation 'ceph' to determine current nodes ''' | ||
1936 | 180 | hosts = [] | ||
1937 | 181 | for r_id in relation_ids('ceph'): | ||
1938 | 182 | for unit in related_units(r_id): | ||
1939 | 183 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) | ||
1940 | 184 | return hosts | ||
1941 | 185 | |||
1942 | 186 | |||
1943 | 187 | def configure(service, key, auth): | ||
1944 | 188 | ''' Perform basic configuration of Ceph ''' | ||
1945 | 189 | create_keyring(service, key) | ||
1946 | 190 | create_key_file(service, key) | ||
1947 | 191 | hosts = get_ceph_nodes() | ||
1948 | 192 | with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: | ||
1949 | 193 | ceph_conf.write(CEPH_CONF.format(auth=auth, | ||
1950 | 194 | keyring=_keyring_path(service), | ||
1951 | 195 | mon_hosts=",".join(map(str, hosts)))) | ||
1952 | 196 | modprobe('rbd') | ||
1953 | 197 | |||
1954 | 198 | |||
1955 | 199 | def image_mapped(name): | ||
1956 | 200 | ''' Determine whether a RADOS block device is mapped locally ''' | ||
1957 | 201 | try: | ||
1958 | 202 | out = check_output(['rbd', 'showmapped']) | ||
1959 | 203 | except CalledProcessError: | ||
1960 | 204 | return False | ||
1961 | 205 | else: | ||
1962 | 206 | return name in out | ||
1963 | 207 | |||
1964 | 208 | |||
1965 | 209 | def map_block_storage(service, pool, image): | ||
1966 | 210 | ''' Map a RADOS block device for local use ''' | ||
1967 | 211 | cmd = [ | ||
1968 | 212 | 'rbd', | ||
1969 | 213 | 'map', | ||
1970 | 214 | '{}/{}'.format(pool, image), | ||
1971 | 215 | '--user', | ||
1972 | 216 | service, | ||
1973 | 217 | '--secret', | ||
1974 | 218 | _keyfile_path(service), | ||
1975 | 219 | ] | ||
1976 | 220 | check_call(cmd) | ||
1977 | 221 | |||
1978 | 222 | |||
1979 | 223 | def filesystem_mounted(fs): | ||
1980 | 224 | ''' Determine whether a filesystem is already mounted ''' | ||
1981 | 225 | return fs in [f for f, m in mounts()] | ||
1982 | 226 | |||
1983 | 227 | |||
1984 | 228 | def make_filesystem(blk_device, fstype='ext4', timeout=10): | ||
1985 | 229 | ''' Make a new filesystem on the specified block device ''' | ||
1986 | 230 | count = 0 | ||
1987 | 231 | e_noent = os.errno.ENOENT | ||
1988 | 232 | while not os.path.exists(blk_device): | ||
1989 | 233 | if count >= timeout: | ||
1990 | 234 | log('ceph: gave up waiting on block device %s' % blk_device, | ||
1991 | 235 | level=ERROR) | ||
1992 | 236 | raise IOError(e_noent, os.strerror(e_noent), blk_device) | ||
1993 | 237 | log('ceph: waiting for block device %s to appear' % blk_device, | ||
1994 | 238 | level=INFO) | ||
1995 | 239 | count += 1 | ||
1996 | 240 | time.sleep(1) | ||
1997 | 241 | else: | ||
1998 | 242 | log('ceph: Formatting block device %s as filesystem %s.' % | ||
1999 | 243 | (blk_device, fstype), level=INFO) | ||
2000 | 244 | check_call(['mkfs', '-t', fstype, blk_device]) | ||
2001 | 245 | |||
2002 | 246 | |||
2003 | 247 | def place_data_on_block_device(blk_device, data_src_dst): | ||
2004 | 248 | ''' Migrate data in data_src_dst to blk_device and then remount ''' | ||
2005 | 249 | # mount block device into /mnt | ||
2006 | 250 | mount(blk_device, '/mnt') | ||
2007 | 251 | # copy data to /mnt | ||
2008 | 252 | copy_files(data_src_dst, '/mnt') | ||
2009 | 253 | # umount block device | ||
2010 | 254 | umount('/mnt') | ||
2011 | 255 | # Grab user/group ID's from original source | ||
2012 | 256 | _dir = os.stat(data_src_dst) | ||
2013 | 257 | uid = _dir.st_uid | ||
2014 | 258 | gid = _dir.st_gid | ||
2015 | 259 | # re-mount where the data should originally be | ||
2016 | 260 | # TODO: persist is currently a NO-OP in core.host | ||
2017 | 261 | mount(blk_device, data_src_dst, persist=True) | ||
2018 | 262 | # ensure original ownership of new mount. | ||
2019 | 263 | os.chown(data_src_dst, uid, gid) | ||
2020 | 264 | |||
2021 | 265 | |||
2022 | 266 | # TODO: re-use | ||
2023 | 267 | def modprobe(module): | ||
2024 | 268 | ''' Load a kernel module and configure for auto-load on reboot ''' | ||
2025 | 269 | log('ceph: Loading kernel module', level=INFO) | ||
2026 | 270 | cmd = ['modprobe', module] | ||
2027 | 271 | check_call(cmd) | ||
2028 | 272 | with open('/etc/modules', 'r+') as modules: | ||
2029 | 273 | if module not in modules.read(): | ||
2030 | 274 | modules.write(module) | ||
2031 | 275 | |||
2032 | 276 | |||
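The `/etc/modules` step in `modprobe` above is append-if-absent. A root-free sketch against a throwaway file (note: this sketch appends a trailing newline, which the charm code does not):

```python
import os
import tempfile

def ensure_module_listed(module, modules_path):
    # Read the whole file; only append when the module is not
    # already mentioned, so repeated calls are no-ops.
    with open(modules_path, 'r+') as modules:
        if module not in modules.read():
            modules.write(module + '\n')

path = os.path.join(tempfile.mkdtemp(), 'modules')
with open(path, 'w') as f:
    f.write('lp\n')
ensure_module_listed('rbd', path)
ensure_module_listed('rbd', path)  # second call changes nothing
print(open(path).read())
```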
2033 | 277 | def copy_files(src, dst, symlinks=False, ignore=None): | ||
2034 | 278 | ''' Copy files from src to dst ''' | ||
2035 | 279 | for item in os.listdir(src): | ||
2036 | 280 | s = os.path.join(src, item) | ||
2037 | 281 | d = os.path.join(dst, item) | ||
2038 | 282 | if os.path.isdir(s): | ||
2039 | 283 | shutil.copytree(s, d, symlinks, ignore) | ||
2040 | 284 | else: | ||
2041 | 285 | shutil.copy2(s, d) | ||
2042 | 286 | |||
2043 | 287 | |||
2044 | 288 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | ||
2045 | 289 | blk_device, fstype, system_services=[]): | ||
2046 | 290 | """ | ||
2047 | 291 | NOTE: This function must only be called from a single service unit for | ||
2048 | 292 | the same rbd_img otherwise data loss will occur. | ||
2049 | 293 | |||
2050 | 294 | Ensures given pool and RBD image exists, is mapped to a block device, | ||
2051 | 295 | and the device is formatted and mounted at the given mount_point. | ||
2052 | 296 | |||
2053 | 297 | If formatting a device for the first time, data existing at mount_point | ||
2054 | 298 | will be migrated to the RBD device before being re-mounted. | ||
2055 | 299 | |||
2056 | 300 | All services listed in system_services will be stopped prior to data | ||
2057 | 301 | migration and restarted when complete. | ||
2058 | 302 | """ | ||
2059 | 303 | # Ensure pool, RBD image, RBD mappings are in place. | ||
2060 | 304 | if not pool_exists(service, pool): | ||
2061 | 305 | log('ceph: Creating new pool {}.'.format(pool)) | ||
2062 | 306 | create_pool(service, pool) | ||
2063 | 307 | |||
2064 | 308 | if not rbd_exists(service, pool, rbd_img): | ||
2065 | 309 | log('ceph: Creating RBD image ({}).'.format(rbd_img)) | ||
2066 | 310 | create_rbd_image(service, pool, rbd_img, sizemb) | ||
2067 | 311 | |||
2068 | 312 | if not image_mapped(rbd_img): | ||
2069 | 313 | log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) | ||
2070 | 314 | map_block_storage(service, pool, rbd_img) | ||
2071 | 315 | |||
2072 | 316 | # make file system | ||
2073 | 317 | # TODO: What happens if for whatever reason this is run again and | ||
2074 | 318 | # the data is already in the rbd device and/or is mounted?? | ||
2075 | 319 | # When it is mounted already, it will fail to make the fs | ||
2076 | 320 | # XXX: This is really sketchy! Need to at least add an fstab entry | ||
2077 | 321 | # otherwise this hook will blow away existing data if its executed | ||
2078 | 322 | # after a reboot. | ||
2079 | 323 | if not filesystem_mounted(mount_point): | ||
2080 | 324 | make_filesystem(blk_device, fstype) | ||
2081 | 325 | |||
2082 | 326 | for svc in system_services: | ||
2083 | 327 | if service_running(svc): | ||
2084 | 328 | log('ceph: Stopping services {} prior to migrating data.' | ||
2085 | 329 | .format(svc)) | ||
2086 | 330 | service_stop(svc) | ||
2087 | 331 | |||
2088 | 332 | place_data_on_block_device(blk_device, mount_point) | ||
2089 | 333 | |||
2090 | 334 | for svc in system_services: | ||
2091 | 335 | log('ceph: Starting service {} after migrating data.' | ||
2092 | 336 | .format(svc)) | ||
2093 | 337 | service_start(svc) | ||
2094 | 338 | |||
2095 | 339 | |||
2096 | 340 | def ensure_ceph_keyring(service, user=None, group=None): | ||
2097 | 341 | ''' | ||
2098 | 342 | Ensures a ceph keyring is created for a named service | ||
2099 | 343 | and optionally ensures user and group ownership. | ||
2100 | 344 | |||
2101 | 345 | Returns False if no ceph key is available in relation state. | ||
2102 | 346 | ''' | ||
2103 | 347 | key = None | ||
2104 | 348 | for rid in relation_ids('ceph'): | ||
2105 | 349 | for unit in related_units(rid): | ||
2106 | 350 | key = relation_get('key', rid=rid, unit=unit) | ||
2107 | 351 | if key: | ||
2108 | 352 | break | ||
2109 | 353 | if not key: | ||
2110 | 354 | return False | ||
2111 | 355 | create_keyring(service=service, key=key) | ||
2112 | 356 | keyring = _keyring_path(service) | ||
2113 | 357 | if user and group: | ||
2114 | 358 | check_call(['chown', '%s.%s' % (user, group), keyring]) | ||
2115 | 359 | return True | ||
2116 | 0 | 360 | ||
2117 | === added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' | |||
2118 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000 | |||
2119 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-10-15 01:35:02 +0000 | |||
2120 | @@ -0,0 +1,62 @@ | |||
2121 | 1 | |||
2122 | 2 | import os | ||
2123 | 3 | import re | ||
2124 | 4 | |||
2125 | 5 | from subprocess import ( | ||
2126 | 6 | check_call, | ||
2127 | 7 | check_output, | ||
2128 | 8 | ) | ||
2129 | 9 | |||
2130 | 10 | |||
2131 | 11 | ################################################## | ||
2132 | 12 | # loopback device helpers. | ||
2133 | 13 | ################################################## | ||
2134 | 14 | def loopback_devices(): | ||
2135 | 15 | ''' | ||
2136 | 16 | Parse through 'losetup -a' output to determine currently mapped | ||
2137 | 17 | loopback devices. Output is expected to look like: | ||
2138 | 18 | |||
2139 | 19 | /dev/loop0: [0807]:961814 (/tmp/my.img) | ||
2140 | 20 | |||
2141 | 21 | :returns: dict: a dict mapping {loopback_dev: backing_file} | ||
2142 | 22 | ''' | ||
2143 | 23 | loopbacks = {} | ||
2144 | 24 | cmd = ['losetup', '-a'] | ||
2145 | 25 | devs = [d.strip().split(' ') for d in | ||
2146 | 26 | check_output(cmd).splitlines() if d != ''] | ||
2147 | 27 | for dev, _, f in devs: | ||
2148 | 28 | loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0] | ||
2149 | 29 | return loopbacks | ||
2150 | 30 | |||
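The `losetup -a` parsing above can be tested without creating any devices by feeding it captured output; the same split-and-regex logic as a pure function:

```python
import re

def parse_losetup(output):
    # Expects lines shaped like the docstring example:
    # '/dev/loop0: [0807]:961814 (/tmp/my.img)'
    loopbacks = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        dev, _, backing = line.strip().split(' ')
        loopbacks[dev.replace(':', '')] = \
            re.search(r'\((\S+)\)', backing).groups()[0]
    return loopbacks

print(parse_losetup('/dev/loop0: [0807]:961814 (/tmp/my.img)'))
```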
2151 | 31 | |||
2152 | 32 | def create_loopback(file_path): | ||
2153 | 33 | ''' | ||
2154 | 34 | Create a loopback device for a given backing file. | ||
2155 | 35 | |||
2156 | 36 | :returns: str: Full path to new loopback device (eg, /dev/loop0) | ||
2157 | 37 | ''' | ||
2158 | 38 | file_path = os.path.abspath(file_path) | ||
2159 | 39 | check_call(['losetup', '--find', file_path]) | ||
2160 | 40 | for d, f in loopback_devices().iteritems(): | ||
2161 | 41 | if f == file_path: | ||
2162 | 42 | return d | ||
2163 | 43 | |||
2164 | 44 | |||
2165 | 45 | def ensure_loopback_device(path, size): | ||
2166 | 46 | ''' | ||
2167 | 47 | Ensure a loopback device exists for a given backing file path and size. | ||
2168 | 48 | If a loopback device is not mapped to the file, a new one will be created. | ||
2169 | 49 | |||
2170 | 50 | TODO: Confirm size of found loopback device. | ||
2171 | 51 | |||
2172 | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) | ||
2173 | 53 | ''' | ||
2174 | 54 | for d, f in loopback_devices().iteritems(): | ||
2175 | 55 | if f == path: | ||
2176 | 56 | return d | ||
2177 | 57 | |||
2178 | 58 | if not os.path.exists(path): | ||
2179 | 59 | cmd = ['truncate', '--size', size, path] | ||
2180 | 60 | check_call(cmd) | ||
2181 | 61 | |||
2182 | 62 | return create_loopback(path) | ||
2183 | 0 | 63 | ||
2184 | === added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' | |||
2185 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000 | |||
2186 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2013-10-15 01:35:02 +0000 | |||
2187 | @@ -0,0 +1,88 @@ | |||
2188 | 1 | from subprocess import ( | ||
2189 | 2 | CalledProcessError, | ||
2190 | 3 | check_call, | ||
2191 | 4 | check_output, | ||
2192 | 5 | Popen, | ||
2193 | 6 | PIPE, | ||
2194 | 7 | ) | ||
2195 | 8 | |||
2196 | 9 | |||
2197 | 10 | ################################################## | ||
2198 | 11 | # LVM helpers. | ||
2199 | 12 | ################################################## | ||
2200 | 13 | def deactivate_lvm_volume_group(block_device): | ||
2201 | 14 | ''' | ||
2202 | 15 | Deactivate any volume group associated with an LVM physical volume. | ||
2203 | 16 | |||
2204 | 17 | :param block_device: str: Full path to LVM physical volume | ||
2205 | 18 | ''' | ||
2206 | 19 | vg = list_lvm_volume_group(block_device) | ||
2207 | 20 | if vg: | ||
2208 | 21 | cmd = ['vgchange', '-an', vg] | ||
2209 | 22 | check_call(cmd) | ||
2210 | 23 | |||
2211 | 24 | |||
2212 | 25 | def is_lvm_physical_volume(block_device): | ||
2213 | 26 | ''' | ||
2214 | 27 | Determine whether a block device is initialized as an LVM PV. | ||
2215 | 28 | |||
2216 | 29 | :param block_device: str: Full path of block device to inspect. | ||
2217 | 30 | |||
2218 | 31 | :returns: boolean: True if block device is a PV, False if not. | ||
2219 | 32 | ''' | ||
2220 | 33 | try: | ||
2221 | 34 | check_output(['pvdisplay', block_device]) | ||
2222 | 35 | return True | ||
2223 | 36 | except CalledProcessError: | ||
2224 | 37 | return False | ||
2225 | 38 | |||
2226 | 39 | |||
2227 | 40 | def remove_lvm_physical_volume(block_device): | ||
2228 | 41 | ''' | ||
2229 | 42 | Remove LVM PV signatures from a given block device. | ||
2230 | 43 | |||
2231 | 44 | :param block_device: str: Full path of block device to scrub. | ||
2232 | 45 | ''' | ||
2233 | 46 | p = Popen(['pvremove', '-ff', block_device], | ||
2234 | 47 | stdin=PIPE) | ||
2235 | 48 | p.communicate(input='y\n') | ||
2236 | 49 | |||
2237 | 50 | |||
2238 | 51 | def list_lvm_volume_group(block_device): | ||
2239 | 52 | ''' | ||
2240 | 53 | List LVM volume group associated with a given block device. | ||
2241 | 54 | |||
2242 | 55 | Assumes block device is a valid LVM PV. | ||
2243 | 56 | |||
2244 | 57 | :param block_device: str: Full path of block device to inspect. | ||
2245 | 58 | |||
2246 | 59 | :returns: str: Name of volume group associated with block device or None | ||
2247 | 60 | ''' | ||
2248 | 61 | vg = None | ||
2249 | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() | ||
2250 | 63 | for l in pvd: | ||
2251 | 64 | if l.strip().startswith('VG Name'): | ||
2252 | 65 | vg = ' '.join(l.split()).split(' ').pop() | ||
2253 | 66 | return vg | ||
2254 | 67 | |||
2255 | 68 | |||
2256 | 69 | def create_lvm_physical_volume(block_device): | ||
2257 | 70 | ''' | ||
2258 | 71 | Initialize a block device as an LVM physical volume. | ||
2259 | 72 | |||
2260 | 73 | :param block_device: str: Full path of block device to initialize. | ||
2261 | 74 | |||
2262 | 75 | ''' | ||
2263 | 76 | check_call(['pvcreate', block_device]) | ||
2264 | 77 | |||
2265 | 78 | |||
2266 | 79 | def create_lvm_volume_group(volume_group, block_device): | ||
2267 | 80 | ''' | ||
2268 | 81 | Create an LVM volume group backed by a given block device. | ||
2269 | 82 | |||
2270 | 83 | Assumes block device has already been initialized as an LVM PV. | ||
2271 | 84 | |||
2272 | 85 | :param volume_group: str: Name of volume group to create. | ||
2273 | 86 | :param block_device: str: Full path of PV-initialized block device. | ||
2274 | 87 | ''' | ||
2275 | 88 | check_call(['vgcreate', volume_group, block_device]) | ||
2276 | 0 | 89 | ||
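The `VG Name` scraping in `list_lvm_volume_group()` can be tested without LVM installed by passing it canned `pvdisplay` output (a sketch; the sample output is an assumption based on pvdisplay's usual layout):

```python
def parse_vg_name(pvdisplay_output):
    """Extract the volume group name, as list_lvm_volume_group() does:
    collapse whitespace on the 'VG Name' line and take the last token."""
    vg = None
    for line in pvdisplay_output.splitlines():
        if line.strip().startswith('VG Name'):
            vg = ' '.join(line.split()).split(' ').pop()
    return vg

sample = ("  --- Physical volume ---\n"
          "  PV Name               /dev/vdb\n"
          "  VG Name               cinder-volumes\n")
vg_name = parse_vg_name(sample)
```

One quirk of this token-popping approach: if `pvdisplay` ever emits a bare `VG Name` line (a PV not yet in any group), the last token would be `Name` rather than `None`, so callers should not rely on that edge case.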
2277 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
2278 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 | |||
2279 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-10-15 01:35:02 +0000 | |||
2280 | @@ -0,0 +1,25 @@ | |||
2281 | 1 | from os import stat | ||
2282 | 2 | from stat import S_ISBLK | ||
2283 | 3 | |||
2284 | 4 | from subprocess import ( | ||
2285 | 5 | check_call | ||
2286 | 6 | ) | ||
2287 | 7 | |||
2288 | 8 | |||
2289 | 9 | def is_block_device(path): | ||
2290 | 10 | ''' | ||
2291 | 11 | Confirm device at path is a valid block device node. | ||
2292 | 12 | |||
2293 | 13 | :returns: boolean: True if path is a block device, False if not. | ||
2294 | 14 | ''' | ||
2295 | 15 | return S_ISBLK(stat(path).st_mode) | ||
2296 | 16 | |||
2297 | 17 | |||
2298 | 18 | def zap_disk(block_device): | ||
2299 | 19 | ''' | ||
2300 | 20 | Clear a block device of its partition table. Relies on sgdisk, which is | ||
2301 | 21 | installed as part of the 'gdisk' package in Ubuntu. | ||
2302 | 22 | |||
2303 | 23 | :param block_device: str: Full path of block device to clean. | ||
2304 | 24 | ''' | ||
2305 | 25 | check_call(['sgdisk', '--zap-all', block_device]) | ||
2306 | 0 | 26 | ||
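`is_block_device()` above is a thin wrapper over `os.stat` plus `stat.S_ISBLK`; the negative case can be verified against an ordinary file (minimal sketch):

```python
import os
import stat
import tempfile

def is_block_device(path):
    """True only for block special files such as /dev/loop0."""
    return stat.S_ISBLK(os.stat(path).st_mode)

# A freshly created regular file is not a block device.
with tempfile.NamedTemporaryFile() as tmp:
    regular_file_result = is_block_device(tmp.name)
```

Note it raises `OSError` for a path that does not exist at all, which callers in the charm are expected to guard against.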
2307 | === added directory 'hooks/charmhelpers/core' | |||
2308 | === added file 'hooks/charmhelpers/core/__init__.py' | |||
2309 | === added file 'hooks/charmhelpers/core/hookenv.py' | |||
2310 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
2311 | +++ hooks/charmhelpers/core/hookenv.py 2013-10-15 01:35:02 +0000 | |||
2312 | @@ -0,0 +1,340 @@ | |||
2313 | 1 | "Interactions with the Juju environment" | ||
2314 | 2 | # Copyright 2013 Canonical Ltd. | ||
2315 | 3 | # | ||
2316 | 4 | # Authors: | ||
2317 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
2318 | 6 | |||
2319 | 7 | import os | ||
2320 | 8 | import json | ||
2321 | 9 | import yaml | ||
2322 | 10 | import subprocess | ||
2323 | 11 | import UserDict | ||
2324 | 12 | |||
2325 | 13 | CRITICAL = "CRITICAL" | ||
2326 | 14 | ERROR = "ERROR" | ||
2327 | 15 | WARNING = "WARNING" | ||
2328 | 16 | INFO = "INFO" | ||
2329 | 17 | DEBUG = "DEBUG" | ||
2330 | 18 | MARKER = object() | ||
2331 | 19 | |||
2332 | 20 | cache = {} | ||
2333 | 21 | |||
2334 | 22 | |||
2335 | 23 | def cached(func): | ||
2336 | 24 | ''' Cache return values for multiple executions of func + args | ||
2337 | 25 | |||
2338 | 26 | For example: | ||
2339 | 27 | |||
2340 | 28 | @cached | ||
2341 | 29 | def unit_get(attribute): | ||
2342 | 30 | pass | ||
2343 | 31 | |||
2344 | 32 | unit_get('test') | ||
2345 | 33 | |||
2346 | 34 | will cache the result of unit_get + 'test' for future calls. | ||
2347 | 35 | ''' | ||
2348 | 36 | def wrapper(*args, **kwargs): | ||
2349 | 37 | global cache | ||
2350 | 38 | key = str((func, args, kwargs)) | ||
2351 | 39 | try: | ||
2352 | 40 | return cache[key] | ||
2353 | 41 | except KeyError: | ||
2354 | 42 | res = func(*args, **kwargs) | ||
2355 | 43 | cache[key] = res | ||
2356 | 44 | return res | ||
2357 | 45 | return wrapper | ||
2358 | 46 | |||
2359 | 47 | |||
2360 | 48 | def flush(key): | ||
2361 | 49 | ''' Flushes any entries from function cache where the | ||
2362 | 50 | key is found in the function+args ''' | ||
2363 | 51 | flush_list = [] | ||
2364 | 52 | for item in cache: | ||
2365 | 53 | if key in item: | ||
2366 | 54 | flush_list.append(item) | ||
2367 | 55 | for item in flush_list: | ||
2368 | 56 | del cache[item] | ||
2369 | 57 | |||
2370 | 58 | |||
2371 | 59 | def log(message, level=None): | ||
2372 | 60 | "Write a message to the juju log" | ||
2373 | 61 | command = ['juju-log'] | ||
2374 | 62 | if level: | ||
2375 | 63 | command += ['-l', level] | ||
2376 | 64 | command += [message] | ||
2377 | 65 | subprocess.call(command) | ||
2378 | 66 | |||
2379 | 67 | |||
2380 | 68 | class Serializable(UserDict.IterableUserDict): | ||
2381 | 69 | "A wrapper object that can be serialized to YAML or JSON" | ||
2382 | 70 | |||
2383 | 71 | def __init__(self, obj): | ||
2384 | 72 | # wrap the object | ||
2385 | 73 | UserDict.IterableUserDict.__init__(self) | ||
2386 | 74 | self.data = obj | ||
2387 | 75 | |||
2388 | 76 | def __getattr__(self, attr): | ||
2389 | 77 | # See if this object has attribute. | ||
2390 | 78 | if attr in ("json", "yaml", "data"): | ||
2391 | 79 | return self.__dict__[attr] | ||
2392 | 80 | # Check for attribute in wrapped object. | ||
2393 | 81 | got = getattr(self.data, attr, MARKER) | ||
2394 | 82 | if got is not MARKER: | ||
2395 | 83 | return got | ||
2396 | 84 | # Proxy to the wrapped object via dict interface. | ||
2397 | 85 | try: | ||
2398 | 86 | return self.data[attr] | ||
2399 | 87 | except KeyError: | ||
2400 | 88 | raise AttributeError(attr) | ||
2401 | 89 | |||
2402 | 90 | def __getstate__(self): | ||
2403 | 91 | # Pickle as a standard dictionary. | ||
2404 | 92 | return self.data | ||
2405 | 93 | |||
2406 | 94 | def __setstate__(self, state): | ||
2407 | 95 | # Unpickle into our wrapper. | ||
2408 | 96 | self.data = state | ||
2409 | 97 | |||
2410 | 98 | def json(self): | ||
2411 | 99 | "Serialize the object to json" | ||
2412 | 100 | return json.dumps(self.data) | ||
2413 | 101 | |||
2414 | 102 | def yaml(self): | ||
2415 | 103 | "Serialize the object to yaml" | ||
2416 | 104 | return yaml.dump(self.data) | ||
2417 | 105 | |||
2418 | 106 | |||
2419 | 107 | def execution_environment(): | ||
2420 | 108 | """A convenient bundling of the current execution context""" | ||
2421 | 109 | context = {} | ||
2422 | 110 | context['conf'] = config() | ||
2423 | 111 | if relation_id(): | ||
2424 | 112 | context['reltype'] = relation_type() | ||
2425 | 113 | context['relid'] = relation_id() | ||
2426 | 114 | context['rel'] = relation_get() | ||
2427 | 115 | context['unit'] = local_unit() | ||
2428 | 116 | context['rels'] = relations() | ||
2429 | 117 | context['env'] = os.environ | ||
2430 | 118 | return context | ||
2431 | 119 | |||
2432 | 120 | |||
2433 | 121 | def in_relation_hook(): | ||
2434 | 122 | "Determine whether we're running in a relation hook" | ||
2435 | 123 | return 'JUJU_RELATION' in os.environ | ||
2436 | 124 | |||
2437 | 125 | |||
2438 | 126 | def relation_type(): | ||
2439 | 127 | "The scope for the current relation hook" | ||
2440 | 128 | return os.environ.get('JUJU_RELATION', None) | ||
2441 | 129 | |||
2442 | 130 | |||
2443 | 131 | def relation_id(): | ||
2444 | 132 | "The relation ID for the current relation hook" | ||
2445 | 133 | return os.environ.get('JUJU_RELATION_ID', None) | ||
2446 | 134 | |||
2447 | 135 | |||
2448 | 136 | def local_unit(): | ||
2449 | 137 | "Local unit ID" | ||
2450 | 138 | return os.environ['JUJU_UNIT_NAME'] | ||
2451 | 139 | |||
2452 | 140 | |||
2453 | 141 | def remote_unit(): | ||
2454 | 142 | "The remote unit for the current relation hook" | ||
2455 | 143 | return os.environ['JUJU_REMOTE_UNIT'] | ||
2456 | 144 | |||
2457 | 145 | |||
2458 | 146 | def service_name(): | ||
2459 | 147 | "The name of the service group this unit belongs to" | ||
2460 | 148 | return local_unit().split('/')[0] | ||
2461 | 149 | |||
2462 | 150 | |||
2463 | 151 | @cached | ||
2464 | 152 | def config(scope=None): | ||
2465 | 153 | "Juju charm configuration" | ||
2466 | 154 | config_cmd_line = ['config-get'] | ||
2467 | 155 | if scope is not None: | ||
2468 | 156 | config_cmd_line.append(scope) | ||
2469 | 157 | config_cmd_line.append('--format=json') | ||
2470 | 158 | try: | ||
2471 | 159 | return json.loads(subprocess.check_output(config_cmd_line)) | ||
2472 | 160 | except ValueError: | ||
2473 | 161 | return None | ||
2474 | 162 | |||
2475 | 163 | |||
2476 | 164 | @cached | ||
2477 | 165 | def relation_get(attribute=None, unit=None, rid=None): | ||
2478 | 166 | _args = ['relation-get', '--format=json'] | ||
2479 | 167 | if rid: | ||
2480 | 168 | _args.append('-r') | ||
2481 | 169 | _args.append(rid) | ||
2482 | 170 | _args.append(attribute or '-') | ||
2483 | 171 | if unit: | ||
2484 | 172 | _args.append(unit) | ||
2485 | 173 | try: | ||
2486 | 174 | return json.loads(subprocess.check_output(_args)) | ||
2487 | 175 | except ValueError: | ||
2488 | 176 | return None | ||
2489 | 177 | |||
2490 | 178 | |||
2491 | 179 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
2492 | 180 | relation_cmd_line = ['relation-set'] | ||
2493 | 181 | if relation_id is not None: | ||
2494 | 182 | relation_cmd_line.extend(('-r', relation_id)) | ||
2495 | 183 | for k, v in (relation_settings.items() + kwargs.items()): | ||
2496 | 184 | if v is None: | ||
2497 | 185 | relation_cmd_line.append('{}='.format(k)) | ||
2498 | 186 | else: | ||
2499 | 187 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
2500 | 188 | subprocess.check_call(relation_cmd_line) | ||
2501 | 189 | # Flush cache of any relation-gets for local unit | ||
2502 | 190 | flush(local_unit()) | ||
2503 | 191 | |||
2504 | 192 | |||
2505 | 193 | @cached | ||
2506 | 194 | def relation_ids(reltype=None): | ||
2507 | 195 | "A list of relation_ids" | ||
2508 | 196 | reltype = reltype or relation_type() | ||
2509 | 197 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
2510 | 198 | if reltype is not None: | ||
2511 | 199 | relid_cmd_line.append(reltype) | ||
2512 | 200 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | ||
2513 | 201 | return [] | ||
2514 | 202 | |||
2515 | 203 | |||
2516 | 204 | @cached | ||
2517 | 205 | def related_units(relid=None): | ||
2518 | 206 | "A list of related units" | ||
2519 | 207 | relid = relid or relation_id() | ||
2520 | 208 | units_cmd_line = ['relation-list', '--format=json'] | ||
2521 | 209 | if relid is not None: | ||
2522 | 210 | units_cmd_line.extend(('-r', relid)) | ||
2523 | 211 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | ||
2524 | 212 | |||
2525 | 213 | |||
2526 | 214 | @cached | ||
2527 | 215 | def relation_for_unit(unit=None, rid=None): | ||
2528 | 216 | "Get the JSON representation of a unit's relation" | ||
2529 | 217 | unit = unit or remote_unit() | ||
2530 | 218 | relation = relation_get(unit=unit, rid=rid) | ||
2531 | 219 | for key in relation: | ||
2532 | 220 | if key.endswith('-list'): | ||
2533 | 221 | relation[key] = relation[key].split() | ||
2534 | 222 | relation['__unit__'] = unit | ||
2535 | 223 | return relation | ||
2536 | 224 | |||
2537 | 225 | |||
2538 | 226 | @cached | ||
2539 | 227 | def relations_for_id(relid=None): | ||
2540 | 228 | "Get relations of a specific relation ID" | ||
2541 | 229 | relation_data = [] | ||
2542 | 230 | relid = relid or relation_ids() | ||
2543 | 231 | for unit in related_units(relid): | ||
2544 | 232 | unit_data = relation_for_unit(unit, relid) | ||
2545 | 233 | unit_data['__relid__'] = relid | ||
2546 | 234 | relation_data.append(unit_data) | ||
2547 | 235 | return relation_data | ||
2548 | 236 | |||
2549 | 237 | |||
2550 | 238 | @cached | ||
2551 | 239 | def relations_of_type(reltype=None): | ||
2552 | 240 | "Get relations of a specific type" | ||
2553 | 241 | relation_data = [] | ||
2554 | 242 | reltype = reltype or relation_type() | ||
2555 | 243 | for relid in relation_ids(reltype): | ||
2556 | 244 | for relation in relations_for_id(relid): | ||
2557 | 245 | relation['__relid__'] = relid | ||
2558 | 246 | relation_data.append(relation) | ||
2559 | 247 | return relation_data | ||
2560 | 248 | |||
2561 | 249 | |||
2562 | 250 | @cached | ||
2563 | 251 | def relation_types(): | ||
2564 | 252 | "Get a list of relation types supported by this charm" | ||
2565 | 253 | charmdir = os.environ.get('CHARM_DIR', '') | ||
2566 | 254 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
2567 | 255 | md = yaml.safe_load(mdf) | ||
2568 | 256 | rel_types = [] | ||
2569 | 257 | for key in ('provides', 'requires', 'peers'): | ||
2570 | 258 | section = md.get(key) | ||
2571 | 259 | if section: | ||
2572 | 260 | rel_types.extend(section.keys()) | ||
2573 | 261 | mdf.close() | ||
2574 | 262 | return rel_types | ||
2575 | 263 | |||
2576 | 264 | |||
2577 | 265 | @cached | ||
2578 | 266 | def relations(): | ||
2579 | 267 | rels = {} | ||
2580 | 268 | for reltype in relation_types(): | ||
2581 | 269 | relids = {} | ||
2582 | 270 | for relid in relation_ids(reltype): | ||
2583 | 271 | units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} | ||
2584 | 272 | for unit in related_units(relid): | ||
2585 | 273 | reldata = relation_get(unit=unit, rid=relid) | ||
2586 | 274 | units[unit] = reldata | ||
2587 | 275 | relids[relid] = units | ||
2588 | 276 | rels[reltype] = relids | ||
2589 | 277 | return rels | ||
2590 | 278 | |||
2591 | 279 | |||
2592 | 280 | def open_port(port, protocol="TCP"): | ||
2593 | 281 | "Open a service network port" | ||
2594 | 282 | _args = ['open-port'] | ||
2595 | 283 | _args.append('{}/{}'.format(port, protocol)) | ||
2596 | 284 | subprocess.check_call(_args) | ||
2597 | 285 | |||
2598 | 286 | |||
2599 | 287 | def close_port(port, protocol="TCP"): | ||
2600 | 288 | "Close a service network port" | ||
2601 | 289 | _args = ['close-port'] | ||
2602 | 290 | _args.append('{}/{}'.format(port, protocol)) | ||
2603 | 291 | subprocess.check_call(_args) | ||
2604 | 292 | |||
2605 | 293 | |||
2606 | 294 | @cached | ||
2607 | 295 | def unit_get(attribute): | ||
2608 | 296 | _args = ['unit-get', '--format=json', attribute] | ||
2609 | 297 | try: | ||
2610 | 298 | return json.loads(subprocess.check_output(_args)) | ||
2611 | 299 | except ValueError: | ||
2612 | 300 | return None | ||
2613 | 301 | |||
2614 | 302 | |||
2615 | 303 | def unit_private_ip(): | ||
2616 | 304 | return unit_get('private-address') | ||
2617 | 305 | |||
2618 | 306 | |||
2619 | 307 | class UnregisteredHookError(Exception): | ||
2620 | 308 | pass | ||
2621 | 309 | |||
2622 | 310 | |||
2623 | 311 | class Hooks(object): | ||
2624 | 312 | def __init__(self): | ||
2625 | 313 | super(Hooks, self).__init__() | ||
2626 | 314 | self._hooks = {} | ||
2627 | 315 | |||
2628 | 316 | def register(self, name, function): | ||
2629 | 317 | self._hooks[name] = function | ||
2630 | 318 | |||
2631 | 319 | def execute(self, args): | ||
2632 | 320 | hook_name = os.path.basename(args[0]) | ||
2633 | 321 | if hook_name in self._hooks: | ||
2634 | 322 | self._hooks[hook_name]() | ||
2635 | 323 | else: | ||
2636 | 324 | raise UnregisteredHookError(hook_name) | ||
2637 | 325 | |||
2638 | 326 | def hook(self, *hook_names): | ||
2639 | 327 | def wrapper(decorated): | ||
2640 | 328 | for hook_name in hook_names: | ||
2641 | 329 | self.register(hook_name, decorated) | ||
2642 | 330 | else: | ||
2643 | 331 | self.register(decorated.__name__, decorated) | ||
2644 | 332 | if '_' in decorated.__name__: | ||
2645 | 333 | self.register( | ||
2646 | 334 | decorated.__name__.replace('_', '-'), decorated) | ||
2647 | 335 | return decorated | ||
2648 | 336 | return wrapper | ||
2649 | 337 | |||
2650 | 338 | |||
2651 | 339 | def charm_dir(): | ||
2652 | 340 | return os.environ.get('CHARM_DIR') | ||
2653 | 0 | 341 | ||
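The `@cached` decorator and `flush()` above are the backbone of hookenv: results are keyed on `str((func, args, kwargs))`, and `relation_set()` flushes every entry mentioning the local unit so stale `relation_get` data is never served. A self-contained sketch of that interaction (the `unit_get` body here is hypothetical, standing in for the real `unit-get` subprocess call):

```python
cache = {}

def cached(func):
    """Memoise by (func, args, kwargs), as hookenv.cached does."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

def flush(fragment):
    """Drop every cache entry whose key mentions fragment."""
    for key in [k for k in cache if fragment in k]:
        del cache[key]

calls = []

@cached
def unit_get(attribute):
    calls.append(attribute)          # stands in for an exec of `unit-get`
    return 'value-for-' + attribute

unit_get('private-address')
unit_get('private-address')          # second call served from the cache
hits_before_flush = len(calls)       # only one real lookup happened
flush('private-address')             # invalidate, as relation_set() does
unit_get('private-address')          # recomputed after the flush
```

Because the key is a plain `str()` of the arguments, `flush()` is a substring match, which is exactly why flushing on the unit name catches every cached call involving that unit.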
2654 | === added file 'hooks/charmhelpers/core/host.py' | |||
2655 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
2656 | +++ hooks/charmhelpers/core/host.py 2013-10-15 01:35:02 +0000 | |||
2657 | @@ -0,0 +1,241 @@ | |||
2658 | 1 | """Tools for working with the host system""" | ||
2659 | 2 | # Copyright 2012 Canonical Ltd. | ||
2660 | 3 | # | ||
2661 | 4 | # Authors: | ||
2662 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
2663 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
2664 | 7 | |||
2665 | 8 | import os | ||
2666 | 9 | import pwd | ||
2667 | 10 | import grp | ||
2668 | 11 | import random | ||
2669 | 12 | import string | ||
2670 | 13 | import subprocess | ||
2671 | 14 | import hashlib | ||
2672 | 15 | |||
2673 | 16 | from collections import OrderedDict | ||
2674 | 17 | |||
2675 | 18 | from hookenv import log | ||
2676 | 19 | |||
2677 | 20 | |||
2678 | 21 | def service_start(service_name): | ||
2679 | 22 | return service('start', service_name) | ||
2680 | 23 | |||
2681 | 24 | |||
2682 | 25 | def service_stop(service_name): | ||
2683 | 26 | return service('stop', service_name) | ||
2684 | 27 | |||
2685 | 28 | |||
2686 | 29 | def service_restart(service_name): | ||
2687 | 30 | return service('restart', service_name) | ||
2688 | 31 | |||
2689 | 32 | |||
2690 | 33 | def service_reload(service_name, restart_on_failure=False): | ||
2691 | 34 | service_result = service('reload', service_name) | ||
2692 | 35 | if not service_result and restart_on_failure: | ||
2693 | 36 | service_result = service('restart', service_name) | ||
2694 | 37 | return service_result | ||
2695 | 38 | |||
2696 | 39 | |||
2697 | 40 | def service(action, service_name): | ||
2698 | 41 | cmd = ['service', service_name, action] | ||
2699 | 42 | return subprocess.call(cmd) == 0 | ||
2700 | 43 | |||
2701 | 44 | |||
2702 | 45 | def service_running(service): | ||
2703 | 46 | try: | ||
2704 | 47 | output = subprocess.check_output(['service', service, 'status']) | ||
2705 | 48 | except subprocess.CalledProcessError: | ||
2706 | 49 | return False | ||
2707 | 50 | else: | ||
2708 | 51 | if ("start/running" in output or "is running" in output): | ||
2709 | 52 | return True | ||
2710 | 53 | else: | ||
2711 | 54 | return False | ||
2712 | 55 | |||
2713 | 56 | |||
2714 | 57 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | ||
2715 | 58 | """Add a user""" | ||
2716 | 59 | try: | ||
2717 | 60 | user_info = pwd.getpwnam(username) | ||
2718 | 61 | log('user {0} already exists!'.format(username)) | ||
2719 | 62 | except KeyError: | ||
2720 | 63 | log('creating user {0}'.format(username)) | ||
2721 | 64 | cmd = ['useradd'] | ||
2722 | 65 | if system_user or password is None: | ||
2723 | 66 | cmd.append('--system') | ||
2724 | 67 | else: | ||
2725 | 68 | cmd.extend([ | ||
2726 | 69 | '--create-home', | ||
2727 | 70 | '--shell', shell, | ||
2728 | 71 | '--password', password, | ||
2729 | 72 | ]) | ||
2730 | 73 | cmd.append(username) | ||
2731 | 74 | subprocess.check_call(cmd) | ||
2732 | 75 | user_info = pwd.getpwnam(username) | ||
2733 | 76 | return user_info | ||
2734 | 77 | |||
2735 | 78 | |||
2736 | 79 | def add_user_to_group(username, group): | ||
2737 | 80 | """Add a user to a group""" | ||
2738 | 81 | cmd = [ | ||
2739 | 82 | 'gpasswd', '-a', | ||
2740 | 83 | username, | ||
2741 | 84 | group | ||
2742 | 85 | ] | ||
2743 | 86 | log("Adding user {} to group {}".format(username, group)) | ||
2744 | 87 | subprocess.check_call(cmd) | ||
2745 | 88 | |||
2746 | 89 | |||
2747 | 90 | def rsync(from_path, to_path, flags='-r', options=None): | ||
2748 | 91 | """Replicate the contents of a path""" | ||
2749 | 92 | options = options or ['--delete', '--executability'] | ||
2750 | 93 | cmd = ['/usr/bin/rsync', flags] | ||
2751 | 94 | cmd.extend(options) | ||
2752 | 95 | cmd.append(from_path) | ||
2753 | 96 | cmd.append(to_path) | ||
2754 | 97 | log(" ".join(cmd)) | ||
2755 | 98 | return subprocess.check_output(cmd).strip() | ||
2756 | 99 | |||
2757 | 100 | |||
2758 | 101 | def symlink(source, destination): | ||
2759 | 102 | """Create a symbolic link""" | ||
2760 | 103 | log("Symlinking {} as {}".format(source, destination)) | ||
2761 | 104 | cmd = [ | ||
2762 | 105 | 'ln', | ||
2763 | 106 | '-sf', | ||
2764 | 107 | source, | ||
2765 | 108 | destination, | ||
2766 | 109 | ] | ||
2767 | 110 | subprocess.check_call(cmd) | ||
2768 | 111 | |||
2769 | 112 | |||
2770 | 113 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
2771 | 114 | """Create a directory""" | ||
2772 | 115 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
2773 | 116 | perms)) | ||
2774 | 117 | uid = pwd.getpwnam(owner).pw_uid | ||
2775 | 118 | gid = grp.getgrnam(group).gr_gid | ||
2776 | 119 | realpath = os.path.abspath(path) | ||
2777 | 120 | if os.path.exists(realpath): | ||
2778 | 121 | if force and not os.path.isdir(realpath): | ||
2779 | 122 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
2780 | 123 | os.unlink(realpath) | ||
2781 | 124 | else: | ||
2782 | 125 | os.makedirs(realpath, perms) | ||
2783 | 126 | os.chown(realpath, uid, gid) | ||
2784 | 127 | |||
2785 | 128 | |||
2786 | 129 | def write_file(path, content, owner='root', group='root', perms=0444): | ||
2787 | 130 | """Create or overwrite a file with the contents of a string""" | ||
2788 | 131 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | ||
2789 | 132 | uid = pwd.getpwnam(owner).pw_uid | ||
2790 | 133 | gid = grp.getgrnam(group).gr_gid | ||
2791 | 134 | with open(path, 'w') as target: | ||
2792 | 135 | os.fchown(target.fileno(), uid, gid) | ||
2793 | 136 | os.fchmod(target.fileno(), perms) | ||
2794 | 137 | target.write(content) | ||
2795 | 138 | |||
2796 | 139 | |||
2797 | 140 | def mount(device, mountpoint, options=None, persist=False): | ||
2798 | 141 | '''Mount a filesystem''' | ||
2799 | 142 | cmd_args = ['mount'] | ||
2800 | 143 | if options is not None: | ||
2801 | 144 | cmd_args.extend(['-o', options]) | ||
2802 | 145 | cmd_args.extend([device, mountpoint]) | ||
2803 | 146 | try: | ||
2804 | 147 | subprocess.check_output(cmd_args) | ||
2805 | 148 | except subprocess.CalledProcessError, e: | ||
2806 | 149 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
2807 | 150 | return False | ||
2808 | 151 | if persist: | ||
2809 | 152 | # TODO: update fstab | ||
2810 | 153 | pass | ||
2811 | 154 | return True | ||
2812 | 155 | |||
2813 | 156 | |||
2814 | 157 | def umount(mountpoint, persist=False): | ||
2815 | 158 | '''Unmount a filesystem''' | ||
2816 | 159 | cmd_args = ['umount', mountpoint] | ||
2817 | 160 | try: | ||
2818 | 161 | subprocess.check_output(cmd_args) | ||
2819 | 162 | except subprocess.CalledProcessError, e: | ||
2820 | 163 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
2821 | 164 | return False | ||
2822 | 165 | if persist: | ||
2823 | 166 | # TODO: update fstab | ||
2824 | 167 | pass | ||
2825 | 168 | return True | ||
2826 | 169 | |||
2827 | 170 | |||
2828 | 171 | def mounts(): | ||
2829 | 172 | '''List of all mounted volumes as [[mountpoint,device],[...]]''' | ||
2830 | 173 | with open('/proc/mounts') as f: | ||
2831 | 174 | # [['/mount/point','/dev/path'],[...]] | ||
2832 | 175 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
2833 | 176 | for l in f.readlines()]] | ||
2834 | 177 | return system_mounts | ||
2835 | 178 | |||
2836 | 179 | |||
2837 | 180 | def file_hash(path): | ||
2838 | 181 | ''' Generate an md5 hash of the contents of 'path' or None if not found ''' | ||
2839 | 182 | if os.path.exists(path): | ||
2840 | 183 | h = hashlib.md5() | ||
2841 | 184 | with open(path, 'r') as source: | ||
2842 | 185 | h.update(source.read()) # IGNORE:E1101 - it does have update | ||
2843 | 186 | return h.hexdigest() | ||
2844 | 187 | else: | ||
2845 | 188 | return None | ||
2846 | 189 | |||
2847 | 190 | |||
2848 | 191 | def restart_on_change(restart_map): | ||
2849 | 192 | ''' Restart services based on configuration files changing | ||
2850 | 193 | |||
2851 | 194 | This function is used as a decorator, for example: | ||
2852 | 195 | |||
2853 | 196 | @restart_on_change({ | ||
2854 | 197 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | ||
2855 | 198 | }) | ||
2856 | 199 | def ceph_client_changed(): | ||
2857 | 200 | ... | ||
2858 | 201 | |||
2859 | 202 | In this example, the cinder-api and cinder-volume services | ||
2860 | 203 | would be restarted if /etc/ceph/ceph.conf is changed by the | ||
2861 | 204 | ceph_client_changed function. | ||
2862 | 205 | ''' | ||
2863 | 206 | def wrap(f): | ||
2864 | 207 | def wrapped_f(*args): | ||
2865 | 208 | checksums = {} | ||
2866 | 209 | for path in restart_map: | ||
2867 | 210 | checksums[path] = file_hash(path) | ||
2868 | 211 | f(*args) | ||
2869 | 212 | restarts = [] | ||
2870 | 213 | for path in restart_map: | ||
2871 | 214 | if checksums[path] != file_hash(path): | ||
2872 | 215 | restarts += restart_map[path] | ||
2873 | 216 | for service_name in list(OrderedDict.fromkeys(restarts)): | ||
2874 | 217 | service('restart', service_name) | ||
2875 | 218 | return wrapped_f | ||
2876 | 219 | return wrap | ||
2877 | 220 | |||
2878 | 221 | |||
2879 | 222 | def lsb_release(): | ||
2880 | 223 | '''Return /etc/lsb-release in a dict''' | ||
2881 | 224 | d = {} | ||
2882 | 225 | with open('/etc/lsb-release', 'r') as lsb: | ||
2883 | 226 | for l in lsb: | ||
2884 | 227 | k, v = l.split('=') | ||
2885 | 228 | d[k.strip()] = v.strip() | ||
2886 | 229 | return d | ||
2887 | 230 | |||
2888 | 231 | |||
2889 | 232 | def pwgen(length=None): | ||
2890 | 233 | '''Generate a random password.''' | ||
2891 | 234 | if length is None: | ||
2892 | 235 | length = random.choice(range(35, 45)) | ||
2893 | 236 | alphanumeric_chars = [ | ||
2894 | 237 | l for l in (string.letters + string.digits) | ||
2895 | 238 | if l not in 'l0QD1vAEIOUaeiou'] | ||
2896 | 239 | random_chars = [ | ||
2897 | 240 | random.choice(alphanumeric_chars) for _ in range(length)] | ||
2898 | 241 | return(''.join(random_chars)) | ||
2899 | 0 | 242 | ||
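`restart_on_change()` checksums each watched file before and after the wrapped hook runs, then restarts the mapped services for any file whose hash changed. A runnable sketch that records restarts instead of shelling out to `service` (the service names are illustrative, not from the charm's real restart map):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """md5 of a file's contents, or None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()

restarted = []

def restart_on_change(restart_map):
    """Decorator: restart services whose config files the hook changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {p: file_hash(p) for p in restart_map}
            f(*args)
            for path, services in restart_map.items():
                if checksums[path] != file_hash(path):
                    restarted.extend(services)
        return wrapped_f
    return wrap

conf = tempfile.NamedTemporaryFile(delete=False).name

@restart_on_change({conf: ['glance-api', 'glance-registry']})
def config_changed():
    with open(conf, 'w') as f:
        f.write('new settings')

config_changed()
os.unlink(conf)
```

The real helper additionally de-duplicates via `OrderedDict.fromkeys()` so a service watched through several files is restarted only once per hook run.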
2900 | === added directory 'hooks/charmhelpers/fetch' | |||
2901 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
2902 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
2903 | +++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 01:35:02 +0000 | |||
2904 | @@ -0,0 +1,209 @@ | |||
2905 | 1 | import importlib | ||
2906 | 2 | from yaml import safe_load | ||
2907 | 3 | from charmhelpers.core.host import ( | ||
2908 | 4 | lsb_release | ||
2909 | 5 | ) | ||
2910 | 6 | from urlparse import ( | ||
2911 | 7 | urlparse, | ||
2912 | 8 | urlunparse, | ||
2913 | 9 | ) | ||
2914 | 10 | import subprocess | ||
2915 | 11 | from charmhelpers.core.hookenv import ( | ||
2916 | 12 | config, | ||
2917 | 13 | log, | ||
2918 | 14 | ) | ||
2919 | 15 | import apt_pkg | ||
2920 | 16 | |||
2921 | 17 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
2922 | 18 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
2923 | 19 | """ | ||
2924 | 20 | PROPOSED_POCKET = """# Proposed | ||
2925 | 21 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
2926 | 22 | """ | ||
2927 | 23 | |||
2928 | 24 | |||
2929 | 25 | def filter_installed_packages(packages): | ||
2930 | 26 | """Returns a list of packages that require installation""" | ||
2931 | 27 | apt_pkg.init() | ||
2932 | 28 | cache = apt_pkg.Cache() | ||
2933 | 29 | _pkgs = [] | ||
2934 | 30 | for package in packages: | ||
2935 | 31 | try: | ||
2936 | 32 | p = cache[package] | ||
2937 | 33 | p.current_ver or _pkgs.append(package) | ||
2938 | 34 | except KeyError: | ||
2939 | 35 | log('Package {} has no installation candidate.'.format(package), | ||
2940 | 36 | level='WARNING') | ||
2941 | 37 | _pkgs.append(package) | ||
2942 | 38 | return _pkgs | ||
2943 | 39 | |||
2944 | 40 | |||
2945 | 41 | def apt_install(packages, options=None, fatal=False): | ||
2946 | 42 | """Install one or more packages""" | ||
2947 | 43 | options = options or [] | ||
2948 | 44 | cmd = ['apt-get', '-y'] | ||
2949 | 45 | cmd.extend(options) | ||
2950 | 46 | cmd.append('install') | ||
2951 | 47 | if isinstance(packages, basestring): | ||
2952 | 48 | cmd.append(packages) | ||
2953 | 49 | else: | ||
2954 | 50 | cmd.extend(packages) | ||
2955 | 51 | log("Installing {} with options: {}".format(packages, | ||
2956 | 52 | options)) | ||
2957 | 53 | if fatal: | ||
2958 | 54 | subprocess.check_call(cmd) | ||
2959 | 55 | else: | ||
2960 | 56 | subprocess.call(cmd) | ||
2961 | 57 | |||
2962 | 58 | |||
2963 | 59 | def apt_update(fatal=False): | ||
2964 | 60 | """Update local apt cache""" | ||
2965 | 61 | cmd = ['apt-get', 'update'] | ||
2966 | 62 | if fatal: | ||
2967 | 63 | subprocess.check_call(cmd) | ||
2968 | 64 | else: | ||
2969 | 65 | subprocess.call(cmd) | ||
2970 | 66 | |||
2971 | 67 | |||
2972 | 68 | def apt_purge(packages, fatal=False): | ||
2973 | 69 | """Purge one or more packages""" | ||
2974 | 70 | cmd = ['apt-get', '-y', 'purge'] | ||
2975 | 71 | if isinstance(packages, basestring): | ||
2976 | 72 | cmd.append(packages) | ||
2977 | 73 | else: | ||
2978 | 74 | cmd.extend(packages) | ||
2979 | 75 | log("Purging {}".format(packages)) | ||
2980 | 76 | if fatal: | ||
2981 | 77 | subprocess.check_call(cmd) | ||
2982 | 78 | else: | ||
2983 | 79 | subprocess.call(cmd) | ||
2984 | 80 | |||
2985 | 81 | |||
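The `fatal` flag appears in each of the apt wrappers above: `check_call` raises on a non-zero exit, while plain `call` only returns the code. A minimal sketch of that pattern, with `run_cmd` as a hypothetical stand-in for the wrappers (not part of charm-helpers):

```python
import subprocess
import sys

def run_cmd(cmd, fatal=False):
    # Mirrors the apt wrappers above: a fatal command raises
    # CalledProcessError on a non-zero exit; a non-fatal one
    # just returns the exit code and lets the hook continue.
    if fatal:
        subprocess.check_call(cmd)
        return 0
    return subprocess.call(cmd)
```

The non-fatal default lets best-effort operations (cache updates, purges) proceed without aborting the hook, while callers such as `configure_sources(update=True)` opt into `fatal=True` when failure must stop the deployment.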
2986 | 82 | def add_source(source, key=None): | ||
2987 | 83 | if ((source.startswith('ppa:') or | ||
2988 | 84 | source.startswith('http:'))): | ||
2989 | 85 | subprocess.check_call(['add-apt-repository', '--yes', source]) | ||
2990 | 86 | elif source.startswith('cloud:'): | ||
2991 | 87 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
2992 | 88 | fatal=True) | ||
2993 | 89 | pocket = source.split(':')[-1] | ||
2994 | 90 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
2995 | 91 | apt.write(CLOUD_ARCHIVE.format(pocket)) | ||
2996 | 92 | elif source == 'proposed': | ||
2997 | 93 | release = lsb_release()['DISTRIB_CODENAME'] | ||
2998 | 94 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
2999 | 95 | apt.write(PROPOSED_POCKET.format(release)) | ||
3000 | 96 | if key: | ||
3001 | 97 | subprocess.check_call(['apt-key', 'import', key]) | ||
3002 | 98 | |||
3003 | 99 | |||
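`add_source` dispatches purely on the shape of the source string. A sketch of that dispatch order, using a hypothetical `classify_source` helper (the real function performs the side effects rather than returning a label):

```python
def classify_source(source):
    # Hypothetical classifier mirroring add_source()'s dispatch:
    # ppa:/http: sources go through add-apt-repository, cloud:
    # pockets write a Cloud Archive sources.list entry, and the
    # literal 'proposed' enables the release's proposed pocket.
    if source.startswith('ppa:') or source.startswith('http:'):
        return 'add-apt-repository'
    if source.startswith('cloud:'):
        return 'cloud-archive'
    if source == 'proposed':
        return 'proposed-pocket'
    return 'unhandled'
```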
3004 | 100 | class SourceConfigError(Exception): | ||
3005 | 101 | pass | ||
3006 | 102 | |||
3007 | 103 | |||
3008 | 104 | def configure_sources(update=False, | ||
3009 | 105 | sources_var='install_sources', | ||
3010 | 106 | keys_var='install_keys'): | ||
3011 | 107 | """ | ||
3012 | 108 | Configure multiple sources from charm configuration | ||
3013 | 109 | |||
3014 | 110 | Example config: | ||
3015 | 111 | install_sources: | ||
3016 | 112 | - "ppa:foo" | ||
3017 | 113 | - "http://example.com/repo precise main" | ||
3018 | 114 | install_keys: | ||
3019 | 115 | - null | ||
3020 | 116 | - "a1b2c3d4" | ||
3021 | 117 | |||
3022 | 118 | Note that 'null' (a.k.a. None) should not be quoted. | ||
3023 | 119 | """ | ||
3024 | 120 | sources = safe_load(config(sources_var)) | ||
3025 | 121 | keys = safe_load(config(keys_var)) | ||
3026 | 122 | if isinstance(sources, basestring) and isinstance(keys, basestring): | ||
3027 | 123 | add_source(sources, keys) | ||
3028 | 124 | else: | ||
3029 | 125 | if not len(sources) == len(keys): | ||
3030 | 126 | msg = 'Install sources and keys lists are different lengths' | ||
3031 | 127 | raise SourceConfigError(msg) | ||
3032 | 128 | for src_num in range(len(sources)): | ||
3033 | 129 | add_source(sources[src_num], keys[src_num]) | ||
3034 | 130 | if update: | ||
3035 | 131 | apt_update(fatal=True) | ||
3036 | 132 | |||
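The sources/keys pairing logic above can be sketched in isolation. `pair_sources` is a hypothetical helper (not in charm-helpers) that takes the already-deserialized YAML config values; a pair of strings is a single entry, while lists must be equal length and are zipped together, a `null` key meaning no key is needed:

```python
class SourceConfigError(Exception):
    pass

def pair_sources(sources, keys):
    # Mirrors configure_sources(): accept either a single
    # (source, key) pair of strings or two parallel lists.
    if isinstance(sources, str) and isinstance(keys, str):
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise SourceConfigError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```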
3037 | 133 | # The order of this list is very important. Handlers should be listed from | ||
3038 | 134 | # least- to most-specific URL matching. | ||
3039 | 135 | FETCH_HANDLERS = ( | ||
3040 | 136 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
3041 | 137 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
3042 | 138 | ) | ||
3043 | 139 | |||
3044 | 140 | |||
3045 | 141 | class UnhandledSource(Exception): | ||
3046 | 142 | pass | ||
3047 | 143 | |||
3048 | 144 | |||
3049 | 145 | def install_remote(source): | ||
3050 | 146 | """ | ||
3051 | 147 | Install a file tree from a remote source | ||
3052 | 148 | |||
3053 | 149 | The specified source should be a url of the form: | ||
3054 | 150 | scheme://[host]/path[#[option=value][&...]] | ||
3055 | 151 | |||
3056 | 152 | Schemes supported are based on this module's submodules | ||
3057 | 153 | Options supported are submodule-specific""" | ||
3058 | 154 | # We ONLY check for True here because can_handle may return a string | ||
3059 | 155 | # explaining why it can't handle a given source. | ||
3060 | 156 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
3061 | 157 | installed_to = None | ||
3062 | 158 | for handler in handlers: | ||
3063 | 159 | try: | ||
3064 | 160 | installed_to = handler.install(source) | ||
3065 | 161 | except UnhandledSource: | ||
3066 | 162 | pass | ||
3067 | 163 | if not installed_to: | ||
3068 | 164 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
3069 | 165 | return installed_to | ||
3070 | 166 | |||
3071 | 167 | |||
3072 | 168 | def install_from_config(config_var_name): | ||
3073 | 169 | charm_config = config() | ||
3074 | 170 | source = charm_config[config_var_name] | ||
3075 | 171 | return install_remote(source) | ||
3076 | 172 | |||
3077 | 173 | |||
3078 | 174 | class BaseFetchHandler(object): | ||
3079 | 175 | """Base class for FetchHandler implementations in fetch plugins""" | ||
3080 | 176 | def can_handle(self, source): | ||
3081 | 177 | """Returns True if the source can be handled. Otherwise returns | ||
3082 | 178 | a string explaining why it cannot""" | ||
3083 | 179 | return "Wrong source type" | ||
3084 | 180 | |||
3085 | 181 | def install(self, source): | ||
3086 | 182 | """Try to download and unpack the source. Return the path to the | ||
3087 | 183 | unpacked files or raise UnhandledSource.""" | ||
3088 | 184 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
3089 | 185 | |||
3090 | 186 | def parse_url(self, url): | ||
3091 | 187 | return urlparse(url) | ||
3092 | 188 | |||
3093 | 189 | def base_url(self, url): | ||
3094 | 190 | """Return url without querystring or fragment""" | ||
3095 | 191 | parts = list(self.parse_url(url)) | ||
3096 | 192 | parts[4:] = ['' for i in parts[4:]] | ||
3097 | 193 | return urlunparse(parts) | ||
3098 | 194 | |||
3099 | 195 | |||
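`base_url` blanks out everything from the query string onward in the parsed 6-tuple so archive type detection sees only the path. A standalone sketch of the same slicing, written against Python 3's `urllib.parse` (the diff targets Python 2's `urlparse` module):

```python
from urllib.parse import urlparse, urlunparse

def base_url(url):
    # Mirrors BaseFetchHandler.base_url(): zero the query and
    # fragment fields of the parsed tuple, keeping scheme,
    # netloc, path and params intact.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)
```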
3100 | 196 | def plugins(fetch_handlers=None): | ||
3101 | 197 | if not fetch_handlers: | ||
3102 | 198 | fetch_handlers = FETCH_HANDLERS | ||
3103 | 199 | plugin_list = [] | ||
3104 | 200 | for handler_name in fetch_handlers: | ||
3105 | 201 | package, classname = handler_name.rsplit('.', 1) | ||
3106 | 202 | try: | ||
3107 | 203 | handler_class = getattr(importlib.import_module(package), classname) | ||
3108 | 204 | plugin_list.append(handler_class()) | ||
3109 | 205 | except (ImportError, AttributeError): | ||
3110 | 206 | # Skip missing plugins so that they can be omitted from | ||
3111 | 207 | # installation if desired | ||
3112 | 208 | log("FetchHandler {} not found, skipping plugin".format(handler_name)) | ||
3113 | 209 | return plugin_list | ||
3114 | 0 | 210 | ||
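The dispatch in `install_remote` compares `can_handle()` against `True` with `is`, since handlers return an explanatory string when they decline. A self-contained sketch of that protocol, with `FtpHandler` as a hypothetical handler invented for illustration:

```python
class UnhandledSource(Exception):
    pass

class FtpHandler:
    # Hypothetical handler: only accepts ftp:// URLs, and
    # "installs" by returning a destination path.
    def can_handle(self, source):
        return True if source.startswith('ftp://') else 'Wrong source type'

    def install(self, source):
        return '/tmp/fetched/' + source.rsplit('/', 1)[-1]

def install_remote(source, handlers):
    # can_handle() may return a string explaining the refusal,
    # so only an identity comparison with True selects a handler.
    for handler in (h for h in handlers if h.can_handle(source) is True):
        try:
            return handler.install(source)
        except UnhandledSource:
            continue
    raise UnhandledSource('No handler found for source {}'.format(source))
```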
3115 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
3116 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
3117 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 01:35:02 +0000 | |||
3118 | @@ -0,0 +1,48 @@ | |||
3119 | 1 | import os | ||
3120 | 2 | import urllib2 | ||
3121 | 3 | from charmhelpers.fetch import ( | ||
3122 | 4 | BaseFetchHandler, | ||
3123 | 5 | UnhandledSource | ||
3124 | 6 | ) | ||
3125 | 7 | from charmhelpers.payload.archive import ( | ||
3126 | 8 | get_archive_handler, | ||
3127 | 9 | extract, | ||
3128 | 10 | ) | ||
3129 | 11 | from charmhelpers.core.host import mkdir | ||
3130 | 12 | |||
3131 | 13 | |||
3132 | 14 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
3133 | 15 | """Handler for archives via generic URLs""" | ||
3134 | 16 | def can_handle(self, source): | ||
3135 | 17 | url_parts = self.parse_url(source) | ||
3136 | 18 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
3137 | 19 | return "Wrong source type" | ||
3138 | 20 | if get_archive_handler(self.base_url(source)): | ||
3139 | 21 | return True | ||
3140 | 22 | return False | ||
3141 | 23 | |||
3142 | 24 | def download(self, source, dest): | ||
3143 | 25 | # propagate all exceptions | ||
3144 | 26 | # URLError, OSError, etc | ||
3145 | 27 | response = urllib2.urlopen(source) | ||
3146 | 28 | try: | ||
3147 | 29 | with open(dest, 'w') as dest_file: | ||
3148 | 30 | dest_file.write(response.read()) | ||
3149 | 31 | except Exception as e: | ||
3150 | 32 | if os.path.isfile(dest): | ||
3151 | 33 | os.unlink(dest) | ||
3152 | 34 | raise e | ||
3153 | 35 | |||
3154 | 36 | def install(self, source): | ||
3155 | 37 | url_parts = self.parse_url(source) | ||
3156 | 38 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
3157 | 39 | if not os.path.exists(dest_dir): | ||
3158 | 40 | mkdir(dest_dir, perms=0755) | ||
3159 | 41 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
3160 | 42 | try: | ||
3161 | 43 | self.download(source, dld_file) | ||
3162 | 44 | except urllib2.URLError as e: | ||
3163 | 45 | raise UnhandledSource(e.reason) | ||
3164 | 46 | except OSError as e: | ||
3165 | 47 | raise UnhandledSource(e.strerror) | ||
3166 | 48 | return extract(dld_file) | ||
3167 | 0 | 49 | ||
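`ArchiveUrlFetchHandler.download` removes a partially written destination file before re-raising, so a failed transfer never leaves a truncated archive behind. A sketch of that cleanup pattern, with the network read abstracted into a caller-supplied `read_fn` for illustration:

```python
import os

def download(read_fn, dest):
    # Mirrors the handler's download(): write the fetched bytes
    # to dest, unlinking any partial file if the write fails.
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(read_fn())
    except Exception:
        if os.path.isfile(dest):
            os.unlink(dest)
        raise
```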
3168 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
3169 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 | |||
3170 | +++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 01:35:02 +0000 | |||
3171 | @@ -0,0 +1,49 @@ | |||
3172 | 1 | import os | ||
3173 | 2 | from charmhelpers.fetch import ( | ||
3174 | 3 | BaseFetchHandler, | ||
3175 | 4 | UnhandledSource | ||
3176 | 5 | ) | ||
3177 | 6 | from charmhelpers.core.host import mkdir | ||
3178 | 7 | |||
3179 | 8 | try: | ||
3180 | 9 | from bzrlib.branch import Branch | ||
3181 | 10 | except ImportError: | ||
3182 | 11 | from charmhelpers.fetch import apt_install | ||
3183 | 12 | apt_install("python-bzrlib") | ||
3184 | 13 | from bzrlib.branch import Branch | ||
3185 | 14 | |||
3186 | 15 | class BzrUrlFetchHandler(BaseFetchHandler): | ||
3187 | 16 | """Handler for bazaar branches via generic and lp URLs""" | ||
3188 | 17 | def can_handle(self, source): | ||
3189 | 18 | url_parts = self.parse_url(source) | ||
3190 | 19 | if url_parts.scheme not in ('bzr+ssh', 'lp'): | ||
3191 | 20 | return False | ||
3192 | 21 | else: | ||
3193 | 22 | return True | ||
3194 | 23 | |||
3195 | 24 | def branch(self, source, dest): | ||
3196 | 25 | url_parts = self.parse_url(source) | ||
3197 | 26 | # If we use lp:branchname scheme we need to load plugins | ||
3198 | 27 | if not self.can_handle(source): | ||
3199 | 28 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
3200 | 29 | if url_parts.scheme == "lp": | ||
3201 | 30 | from bzrlib.plugin import load_plugins | ||
3202 | 31 | load_plugins() | ||
3203 | 32 | try: | ||
3204 | 33 | remote_branch = Branch.open(source) | ||
3205 | 34 | remote_branch.bzrdir.sprout(dest).open_branch() | ||
3206 | 35 | except Exception as e: | ||
3207 | 36 | raise e | ||
3208 | 37 | |||
3209 | 38 | def install(self, source): | ||
3210 | 39 | url_parts = self.parse_url(source) | ||
3211 | 40 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
3212 | 41 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) | ||
3213 | 42 | if not os.path.exists(dest_dir): | ||
3214 | 43 | mkdir(dest_dir, perms=0755) | ||
3215 | 44 | try: | ||
3216 | 45 | self.branch(source, dest_dir) | ||
3217 | 46 | except OSError as e: | ||
3218 | 47 | raise UnhandledSource(e.strerror) | ||
3219 | 48 | return dest_dir | ||
3220 | 49 | |||
3221 | 0 | 50 | ||
3222 | === added directory 'hooks/charmhelpers/payload' | |||
3223 | === added file 'hooks/charmhelpers/payload/__init__.py' | |||
3224 | --- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000 | |||
3225 | +++ hooks/charmhelpers/payload/__init__.py 2013-10-15 01:35:02 +0000 | |||
3226 | @@ -0,0 +1,1 @@ | |||
3227 | 1 | "Tools for working with files injected into a charm just before deployment." | ||
3228 | 0 | 2 | ||
3229 | === added file 'hooks/charmhelpers/payload/execd.py' | |||
3230 | --- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000 | |||
3231 | +++ hooks/charmhelpers/payload/execd.py 2013-10-15 01:35:02 +0000 | |||
3232 | @@ -0,0 +1,50 @@ | |||
3233 | 1 | #!/usr/bin/env python | ||
3234 | 2 | |||
3235 | 3 | import os | ||
3236 | 4 | import sys | ||
3237 | 5 | import subprocess | ||
3238 | 6 | from charmhelpers.core import hookenv | ||
3239 | 7 | |||
3240 | 8 | |||
3241 | 9 | def default_execd_dir(): | ||
3242 | 10 | return os.path.join(os.environ['CHARM_DIR'], 'exec.d') | ||
3243 | 11 | |||
3244 | 12 | |||
3245 | 13 | def execd_module_paths(execd_dir=None): | ||
3246 | 14 | """Generate a list of full paths to modules within execd_dir.""" | ||
3247 | 15 | if not execd_dir: | ||
3248 | 16 | execd_dir = default_execd_dir() | ||
3249 | 17 | |||
3250 | 18 | if not os.path.exists(execd_dir): | ||
3251 | 19 | return | ||
3252 | 20 | |||
3253 | 21 | for subpath in os.listdir(execd_dir): | ||
3254 | 22 | module = os.path.join(execd_dir, subpath) | ||
3255 | 23 | if os.path.isdir(module): | ||
3256 | 24 | yield module | ||
3257 | 25 | |||
3258 | 26 | |||
3259 | 27 | def execd_submodule_paths(command, execd_dir=None): | ||
3260 | 28 | """Generate a list of full paths to the specified command within exec_dir. | ||
3261 | 29 | """ | ||
3262 | 30 | for module_path in execd_module_paths(execd_dir): | ||
3263 | 31 | path = os.path.join(module_path, command) | ||
3264 | 32 | if os.access(path, os.X_OK) and os.path.isfile(path): | ||
3265 | 33 | yield path | ||
3266 | 34 | |||
3267 | 35 | |||
3268 | 36 | def execd_run(command, execd_dir=None, die_on_error=False, stderr=None): | ||
3269 | 37 | """Run command for each module within execd_dir which defines it.""" | ||
3270 | 38 | for submodule_path in execd_submodule_paths(command, execd_dir): | ||
3271 | 39 | try: | ||
3272 | 40 | subprocess.check_call(submodule_path, shell=True, stderr=stderr) | ||
3273 | 41 | except subprocess.CalledProcessError as e: | ||
3274 | 42 | hookenv.log("Error ({}) running {}. Output: {}".format( | ||
3275 | 43 | e.returncode, e.cmd, e.output)) | ||
3276 | 44 | if die_on_error: | ||
3277 | 45 | sys.exit(e.returncode) | ||
3278 | 46 | |||
3279 | 47 | |||
3280 | 48 | def execd_preinstall(execd_dir=None): | ||
3281 | 49 | """Run charm-pre-install for each module within execd_dir.""" | ||
3282 | 50 | execd_run('charm-pre-install', execd_dir=execd_dir) | ||
3283 | 0 | 51 | ||
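The execd machinery above walks `$CHARM_DIR/exec.d`, treating each subdirectory as a module and running the module's executable `charm-pre-install` (or other command) if present. A sketch of the enumeration step, sorted for determinism (the original preserves `os.listdir` order):

```python
import os

def submodule_paths(command, execd_dir):
    # Mirrors execd_submodule_paths(): for every directory under
    # execd_dir, yield <module>/<command> when it is an
    # executable regular file.
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        path = os.path.join(module, command)
        if (os.path.isdir(module) and os.path.isfile(path)
                and os.access(path, os.X_OK)):
            yield path
```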
3284 | === modified symlink 'hooks/cluster-relation-changed' | |||
3285 | === target changed u'glance-relations' => u'glance_relations.py' | |||
3286 | === modified symlink 'hooks/cluster-relation-departed' | |||
3287 | === target changed u'glance-relations' => u'glance_relations.py' | |||
3288 | === modified symlink 'hooks/config-changed' | |||
3289 | === target changed u'glance-relations' => u'glance_relations.py' | |||
3290 | === removed file 'hooks/glance-common' | |||
3291 | --- hooks/glance-common 2013-06-03 18:39:29 +0000 | |||
3292 | +++ hooks/glance-common 1970-01-01 00:00:00 +0000 | |||
3293 | @@ -1,133 +0,0 @@ | |||
3294 | 1 | #!/bin/bash | ||
3295 | 2 | |||
3296 | 3 | CHARM="glance" | ||
3297 | 4 | |||
3298 | 5 | SERVICES="glance-api glance-registry" | ||
3299 | 6 | PACKAGES="glance python-mysqldb python-swift python-keystone uuid haproxy" | ||
3300 | 7 | |||
3301 | 8 | GLANCE_REGISTRY_CONF="/etc/glance/glance-registry.conf" | ||
3302 | 9 | GLANCE_REGISTRY_PASTE_INI="/etc/glance/glance-registry-paste.ini" | ||
3303 | 10 | GLANCE_API_CONF="/etc/glance/glance-api.conf" | ||
3304 | 11 | GLANCE_API_PASTE_INI="/etc/glance/glance-api-paste.ini" | ||
3305 | 12 | CONF_DIR="/etc/glance" | ||
3306 | 13 | HOOKS_DIR="$CHARM_DIR/hooks" | ||
3307 | 14 | |||
3308 | 15 | # Flag used to track config changes. | ||
3309 | 16 | CONFIG_CHANGED="False" | ||
3310 | 17 | if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then | ||
3311 | 18 | . $HOOKS_DIR/lib/openstack-common | ||
3312 | 19 | else | ||
3313 | 20 | juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1 | ||
3314 | 21 | fi | ||
3315 | 22 | |||
3316 | 23 | function set_or_update { | ||
3317 | 24 | local key="$1" | ||
3318 | 25 | local value="$2" | ||
3319 | 26 | local file="$3" | ||
3320 | 27 | local section="$4" | ||
3321 | 28 | local conf="" | ||
3322 | 29 | [[ -z $key ]] && juju-log "ERROR: set_or_update(): value $value missing key" \ | ||
3323 | 30 | && exit 1 | ||
3324 | 31 | [[ -z $value ]] && juju-log "ERROR: set_or_update(): key $key missing value" \ | ||
3325 | 32 | && exit 1 | ||
3326 | 33 | |||
3327 | 34 | case "$file" in | ||
3328 | 35 | "api") conf=$GLANCE_API_CONF ;; | ||
3329 | 36 | "api-paste") conf=$GLANCE_API_PASTE_INI ;; | ||
3330 | 37 | "registry") conf=$GLANCE_REGISTRY_CONF ;; | ||
3331 | 38 | "registry-paste") conf=$GLANCE_REGISTRY_PASTE_INI ;; | ||
3332 | 39 | *) juju-log "ERROR: set_or_update(): Invalid or no config file specified." \ | ||
3333 | 40 | && exit 1 ;; | ||
3334 | 41 | esac | ||
3335 | 42 | |||
3336 | 43 | [[ ! -e $conf ]] && juju-log "ERROR: set_or_update(): File not found $conf" \ | ||
3337 | 44 | && exit 1 | ||
3338 | 45 | |||
3339 | 46 | if [[ "$(local_config_get "$conf" "$key" "$section")" == "$value" ]] ; then | ||
3340 | 47 | juju-log "$CHARM: set_or_update(): $key=$value already set in $conf." | ||
3341 | 48 | return 0 | ||
3342 | 49 | fi | ||
3343 | 50 | |||
3344 | 51 | cfg_set_or_update "$key" "$value" "$conf" "$section" | ||
3345 | 52 | CONFIG_CHANGED="True" | ||
3346 | 53 | } | ||
3347 | 54 | |||
3348 | 55 | do_openstack_upgrade() { | ||
3349 | 56 | # update openstack components to those provided by a new installation source | ||
3350 | 57 | # it is assumed the calling hook has confirmed that the upgrade is sane. | ||
3351 | 58 | local rel="$1" | ||
3352 | 59 | shift | ||
3353 | 60 | local packages=$@ | ||
3354 | 61 | orig_os_rel=$(get_os_codename_package "glance-common") | ||
3355 | 62 | new_rel=$(get_os_codename_install_source "$rel") | ||
3356 | 63 | |||
3357 | 64 | # Backup the config directory. | ||
3358 | 65 | local stamp=$(date +"%Y%m%d%M%S") | ||
3359 | 66 | tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR | ||
3360 | 67 | |||
3361 | 68 | # Setup apt repository access and kick off the actual package upgrade. | ||
3362 | 69 | configure_install_source "$rel" | ||
3363 | 70 | apt-get update | ||
3364 | 71 | DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \ | ||
3365 | 72 | install --no-install-recommends $packages | ||
3366 | 73 | |||
3367 | 74 | # Update the new config files for existing relations. | ||
3368 | 75 | local r_id="" | ||
3369 | 76 | |||
3370 | 77 | r_id=$(relation-ids shared-db) | ||
3371 | 78 | if [[ -n "$r_id" ]] ; then | ||
3372 | 79 | juju-log "$CHARM: Configuring database after upgrade to $rel." | ||
3373 | 80 | db_changed $r_id | ||
3374 | 81 | fi | ||
3375 | 82 | |||
3376 | 83 | r_id=$(relation-ids identity-service) | ||
3377 | 84 | if [[ -n "$r_id" ]] ; then | ||
3378 | 85 | juju-log "$CHARM: Configuring identity service after upgrade to $rel." | ||
3379 | 86 | keystone_changed $r_id | ||
3380 | 87 | fi | ||
3381 | 88 | |||
3382 | 89 | local ceph_ids="$(relation-ids ceph)" | ||
3383 | 90 | [[ -n "$ceph_ids" ]] && apt-get -y install ceph-common python-ceph | ||
3384 | 91 | for r_id in $ceph_ids ; do | ||
3385 | 92 | for unit in $(relation-list -r $r_id) ; do | ||
3386 | 93 | ceph_changed "$r_id" "$unit" | ||
3387 | 94 | done | ||
3388 | 95 | done | ||
3389 | 96 | |||
3390 | 97 | [[ -n "$(relation-ids object-store)" ]] && object-store_joined | ||
3391 | 98 | } | ||
3392 | 99 | |||
3393 | 100 | configure_https() { | ||
3394 | 101 | # request openstack-common set up reverse proxy mapping for API and registry | ||
3395 | 102 | # servers | ||
3396 | 103 | service_ctl glance-api stop | ||
3397 | 104 | if [[ -n "$(peer_units)" ]] || is_clustered ; then | ||
3398 | 105 | # haproxy may already be configured. need to push it back in the request | ||
3399 | 106 | # pipeline in preparation for a change from: | ||
3400 | 107 | # from: haproxy (9292) -> glance_api (9282) | ||
3401 | 108 | # to: ssl (9292) -> haproxy (9291) -> glance_api (9272) | ||
3402 | 109 | local next_server=$(determine_haproxy_port 9292) | ||
3403 | 110 | local api_port=$(determine_api_port 9292) | ||
3404 | 111 | configure_haproxy "glance_api:$next_server:$api_port" | ||
3405 | 112 | else | ||
3406 | 113 | # if not clustered, the glance-api is next in the pipeline. | ||
3407 | 114 | local api_port=$(determine_api_port 9292) | ||
3408 | 115 | local next_server=$api_port | ||
3409 | 116 | fi | ||
3410 | 117 | |||
3411 | 118 | # setup https to point to either haproxy or directly to api server, depending. | ||
3412 | 119 | setup_https 9292:$next_server | ||
3413 | 120 | |||
3414 | 121 | # configure servers to listen on new ports accordingly. | ||
3415 | 122 | set_or_update bind_port "$api_port" "api" | ||
3416 | 123 | service_ctl all start | ||
3417 | 124 | |||
3418 | 125 | local r_id="" | ||
3419 | 126 | # (re)configure ks endpoint accordingly in ks and nova. | ||
3420 | 127 | for r_id in $(relation-ids identity-service) ; do | ||
3421 | 128 | keystone_joined "$r_id" | ||
3422 | 129 | done | ||
3423 | 130 | for r_id in $(relation-ids image-service) ; do | ||
3424 | 131 | image-service_joined "$r_id" | ||
3425 | 132 | done | ||
3426 | 133 | } | ||
3427 | 134 | 0 | ||
3428 | === removed file 'hooks/glance-relations' | |||
3429 | --- hooks/glance-relations 2013-09-18 18:40:06 +0000 | |||
3430 | +++ hooks/glance-relations 1970-01-01 00:00:00 +0000 | |||
3431 | @@ -1,464 +0,0 @@ | |||
3432 | 1 | #!/bin/bash -e | ||
3433 | 2 | |||
3434 | 3 | HOOKS_DIR="$CHARM_DIR/hooks" | ||
3435 | 4 | ARG0=${0##*/} | ||
3436 | 5 | |||
3437 | 6 | if [[ -e $HOOKS_DIR/glance-common ]] ; then | ||
3438 | 7 | . $HOOKS_DIR/glance-common | ||
3439 | 8 | else | ||
3440 | 9 | echo "ERROR: Could not load glance-common from $HOOKS_DIR" | ||
3441 | 10 | fi | ||
3442 | 11 | |||
3443 | 12 | function install_hook { | ||
3444 | 13 | juju-log "Installing glance packages" | ||
3445 | 14 | apt-get -y install python-software-properties || exit 1 | ||
3446 | 15 | |||
3447 | 16 | configure_install_source "$(config-get openstack-origin)" | ||
3448 | 17 | |||
3449 | 18 | apt-get update || exit 1 | ||
3450 | 19 | apt-get -y install $PACKAGES || exit 1 | ||
3451 | 20 | |||
3452 | 21 | service_ctl all stop | ||
3453 | 22 | |||
3454 | 23 | # TODO: Make debug logging a config option. | ||
3455 | 24 | set_or_update verbose True api | ||
3456 | 25 | set_or_update debug True api | ||
3457 | 26 | set_or_update verbose True registry | ||
3458 | 27 | set_or_update debug True registry | ||
3459 | 28 | |||
3460 | 29 | configure_https | ||
3461 | 30 | } | ||
3462 | 31 | |||
3463 | 32 | function db_joined { | ||
3464 | 33 | local glance_db=$(config-get glance-db) | ||
3465 | 34 | local db_user=$(config-get db-user) | ||
3466 | 35 | local hostname=$(unit-get private-address) | ||
3467 | 36 | juju-log "$CHARM - db_joined: requesting database access to $glance_db for "\ | ||
3468 | 37 | "$db_user@$hostname" | ||
3469 | 38 | relation-set database=$glance_db username=$db_user hostname=$hostname | ||
3470 | 39 | } | ||
3471 | 40 | |||
3472 | 41 | function db_changed { | ||
3473 | 42 | # serves as the main shared-db changed hook but may also be called with a | ||
3474 | 43 | # relation-id to configure new config files for existing relations. | ||
3475 | 44 | local r_id="$1" | ||
3476 | 45 | local r_args="" | ||
3477 | 46 | if [[ -n "$r_id" ]] ; then | ||
3478 | 47 | # set up environment for an existing relation to a single unit. | ||
3479 | 48 | export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) | ||
3480 | 49 | export JUJU_RELATION="shared-db" | ||
3481 | 50 | export JUJU_RELATION_ID="$r_id" | ||
3482 | 51 | local r_args="-r $JUJU_RELATION_ID" | ||
3483 | 52 | juju-log "$CHARM - db_changed: Running hook for existing relation to "\ | ||
3484 | 53 | "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID" | ||
3485 | 54 | fi | ||
3486 | 55 | |||
3487 | 56 | local db_host=$(relation-get $r_args db_host) | ||
3488 | 57 | local db_password=$(relation-get $r_args password) | ||
3489 | 58 | |||
3490 | 59 | if [[ -z "$db_host" ]] || [[ -z "$db_password" ]] ; then | ||
3491 | 60 | juju-log "$CHARM - db_changed: db_host||db_password not yet set, will retry." | ||
3492 | 61 | exit 0 | ||
3493 | 62 | fi | ||
3494 | 63 | |||
3495 | 64 | local glance_db=$(config-get glance-db) | ||
3496 | 65 | local db_user=$(config-get db-user) | ||
3497 | 66 | local rel=$(get_os_codename_package glance-common) | ||
3498 | 67 | |||
3499 | 68 | if [[ -n "$r_id" ]] ; then | ||
3500 | 69 | unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID | ||
3501 | 70 | fi | ||
3502 | 71 | |||
3503 | 72 | juju-log "$CHARM - db_changed: Configuring glance.conf for access to $glance_db" | ||
3504 | 73 | |||
3505 | 74 | set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" registry | ||
3506 | 75 | |||
3507 | 76 | # since folsom, a db connection setting in glance-api.conf is required. | ||
3508 | 77 | [[ "$rel" != "essex" ]] && | ||
3509 | 78 | set_or_update sql_connection "mysql://$db_user:$db_password@$db_host/$glance_db" api | ||
3510 | 79 | |||
3511 | 80 | if eligible_leader 'res_glance_vip'; then | ||
3512 | 81 | if [[ "$rel" == "essex" ]] ; then | ||
3513 | 82 | # Essex required initializing new databases to version 0 | ||
3514 | 83 | if ! glance-manage db_version >/dev/null 2>&1; then | ||
3515 | 84 | juju-log "Setting glance database version to 0" | ||
3516 | 85 | glance-manage version_control 0 | ||
3517 | 86 | fi | ||
3518 | 87 | fi | ||
3519 | 88 | juju-log "$CHARM - db_changed: Running database migrations for $rel." | ||
3520 | 89 | glance-manage db_sync | ||
3521 | 90 | fi | ||
3522 | 91 | service_ctl all restart | ||
3523 | 92 | } | ||
3524 | 93 | |||
3525 | 94 | function image-service_joined { | ||
3526 | 95 | # Check to see if unit is potential leader | ||
3527 | 96 | local r_id="$1" | ||
3528 | 97 | [[ -n "$r_id" ]] && r_id="-r $r_id" | ||
3529 | 98 | eligible_leader 'res_glance_vip' || return 0 | ||
3530 | 99 | https && scheme="https" || scheme="http" | ||
3531 | 100 | is_clustered && local host=$(config-get vip) || | ||
3532 | 101 | local host=$(unit-get private-address) | ||
3533 | 102 | url="$scheme://$host:9292" | ||
3534 | 103 | juju-log "glance: image-service_joined: To peer glance-api-server=$url" | ||
3535 | 104 | relation-set $r_id glance-api-server=$url | ||
3536 | 105 | } | ||
3537 | 106 | |||
3538 | 107 | function object-store_joined { | ||
3539 | 108 | local relids="$(relation-ids identity-service)" | ||
3540 | 109 | [[ -z "$relids" ]] && \ | ||
3541 | 110 | juju-log "$CHARM: Deferring swift store configuration until " \ | ||
3542 | 111 | "an identity-service relation exists." && exit 0 | ||
3543 | 112 | |||
3544 | 113 | set_or_update default_store swift api | ||
3545 | 114 | set_or_update swift_store_create_container_on_put true api | ||
3546 | 115 | |||
3547 | 116 | for relid in $relids ; do | ||
3548 | 117 | local unit=$(relation-list -r $relid) | ||
3549 | 118 | local svc_tenant=$(relation-get -r $relid service_tenant $unit) | ||
3550 | 119 | local svc_username=$(relation-get -r $relid service_username $unit) | ||
3551 | 120 | local svc_password=$(relation-get -r $relid service_password $unit) | ||
3552 | 121 | local auth_host=$(relation-get -r $relid private-address $unit) | ||
3553 | 122 | local port=$(relation-get -r $relid service_port $unit) | ||
3554 | 123 | local auth_url="" | ||
3555 | 124 | |||
3556 | 125 | [[ -n "$auth_host" ]] && [[ -n "$port" ]] && | ||
3557 | 126 | auth_url="http://$auth_host:$port/v2.0/" | ||
3558 | 127 | |||
3559 | 128 | [[ -n "$svc_tenant" ]] && [[ -n "$svc_username" ]] && | ||
3560 | 129 | set_or_update swift_store_user "$svc_tenant:$svc_username" api | ||
3561 | 130 | [[ -n "$svc_password" ]] && | ||
3562 | 131 | set_or_update swift_store_key "$svc_password" api | ||
3563 | 132 | [[ -n "$auth_url" ]] && | ||
3564 | 133 | set_or_update swift_store_auth_address "$auth_url" api | ||
3565 | 134 | done | ||
3566 | 135 | service_ctl glance-api restart | ||
3567 | 136 | } | ||
3568 | 137 | |||
3569 | 138 | function object-store_changed { | ||
3570 | 139 | exit 0 | ||
3571 | 140 | } | ||
3572 | 141 | |||
3573 | 142 | function ceph_joined { | ||
3574 | 143 | mkdir -p /etc/ceph | ||
3575 | 144 | apt-get -y install ceph-common python-ceph || exit 1 | ||
3576 | 145 | } | ||
3577 | 146 | |||
3578 | 147 | function ceph_changed { | ||
3579 | 148 | local r_id="$1" | ||
3580 | 149 | local unit_id="$2" | ||
3581 | 150 | local r_arg="" | ||
3582 | 151 | [[ -n "$r_id" ]] && r_arg="-r $r_id" | ||
3583 | 152 | SERVICE_NAME=`echo $JUJU_UNIT_NAME | cut -d / -f 1` | ||
3584 | 153 | KEYRING=/etc/ceph/ceph.client.$SERVICE_NAME.keyring | ||
3585 | 154 | KEY=`relation-get $r_arg key $unit_id` | ||
3586 | 155 | if [ -n "$KEY" ]; then | ||
3587 | 156 | # But only once | ||
3588 | 157 | if [ ! -f $KEYRING ]; then | ||
3589 | 158 | ceph-authtool $KEYRING \ | ||
3590 | 159 | --create-keyring --name=client.$SERVICE_NAME \ | ||
3591 | 160 | --add-key="$KEY" | ||
3592 | 161 | chmod +r $KEYRING | ||
3593 | 162 | fi | ||
3594 | 163 | else | ||
3595 | 164 | # No key - bail for the time being | ||
3596 | 165 | exit 0 | ||
3597 | 166 | fi | ||
3598 | 167 | |||
3599 | 168 | MONS=`relation-list $r_arg` | ||
3600 | 169 | mon_hosts="" | ||
3601 | 170 | for mon in $MONS; do | ||
3602 | 171 | mon_hosts="$mon_hosts $(get_ip $(relation-get $r_arg private-address $mon)):6789," | ||
3603 | 172 | done | ||
3604 | 173 | cat > /etc/ceph/ceph.conf << EOF | ||
3605 | 174 | [global] | ||
3606 | 175 | auth supported = $(relation-get $r_arg auth $unit_id) | ||
3607 | 176 | keyring = /etc/ceph/\$cluster.\$name.keyring | ||
3608 | 177 | mon host = $mon_hosts | ||
3609 | 178 | EOF | ||
3610 | 179 | |||
3611 | 180 | # Create the images pool if it does not already exist | ||
3612 | 181 | if ! rados --id $SERVICE_NAME lspools | grep -q images; then | ||
3613 | 182 | local num_osds=$(ceph --id $SERVICE_NAME osd ls| egrep "[^\s]"| wc -l) | ||
3614 | 183 | local cfg_key='ceph-osd-replication-count' | ||
3615 | 184 | local rep_count="$(config-get $cfg_key)" | ||
3616 | 185 | if [ -z "$rep_count" ] | ||
3617 | 186 | then | ||
3618 | 187 | rep_count=2 | ||
3619 | 188 | juju-log "config returned empty string for $cfg_key - using value of 2" | ||
3620 | 189 | fi | ||
3621 | 190 | local num_pgs=$(((num_osds*100)/rep_count)) | ||
3622 | 191 | ceph --id $SERVICE_NAME osd pool create images $num_pgs $num_pgs | ||
3623 | 192 | ceph --id $SERVICE_NAME osd pool set images size $rep_count | ||
3624 | 193 | # TODO: set appropriate crush ruleset | ||
3625 | 194 | fi | ||
3626 | 195 | |||
3627 | 196 | # Configure glance for ceph storage options | ||
3628 | 197 | set_or_update default_store rbd api | ||
3629 | 198 | set_or_update rbd_store_ceph_conf /etc/ceph/ceph.conf api | ||
3630 | 199 | set_or_update rbd_store_user $SERVICE_NAME api | ||
3631 | 200 | set_or_update rbd_store_pool images api | ||
3632 | 201 | set_or_update rbd_store_chunk_size 8 api | ||
3633 | 202 | # This option only applies to Grizzly. | ||
3634 | 203 | [ "`get_os_codename_package "glance-common"`" = "grizzly" ] && \ | ||
3635 | 204 | set_or_update show_image_direct_url 'True' api | ||
3636 | 205 | |||
3637 | 206 | service_ctl glance-api restart | ||
3638 | 207 | } | ||
3639 | 208 | |||
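The pool creation in the removed `ceph_changed` hook sizes the images pool at roughly 100 placement groups per OSD, divided by the replication count (defaulting to 2 when the config value is empty). The arithmetic as a sketch, with `images_pool_pg_count` a hypothetical name for the inline shell expression:

```python
def images_pool_pg_count(num_osds, rep_count=2):
    # Mirrors num_pgs=$(((num_osds*100)/rep_count)) from the
    # removed bash hook: ~100 PGs per OSD over the replica count.
    return (num_osds * 100) // rep_count
```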
3640 | 209 | function keystone_joined { | ||
3641 | 210 | # Leadership check | ||
3642 | 211 | eligible_leader 'res_glance_vip' || return 0 | ||
3643 | 212 | local r_id="$1" | ||
3644 | 213 | [[ -n "$r_id" ]] && r_id=" -r $r_id" | ||
3645 | 214 | |||
3646 | 215 | # determine correct endpoint URL | ||
3647 | 216 | https && scheme="https" || scheme="http" | ||
3648 | 217 | is_clustered && local host=$(config-get vip) || | ||
3649 | 218 | local host=$(unit-get private-address) | ||
3650 | 219 | url="$scheme://$host:9292" | ||
3651 | 220 | |||
3652 | 221 | # advertise our API endpoint to keystone | ||
3653 | 222 | relation-set service="glance" \ | ||
3654 | 223 | region="$(config-get region)" public_url=$url admin_url=$url internal_url=$url | ||
3655 | 224 | } | ||
3656 | 225 | |||
3657 | 226 | function keystone_changed { | ||
3658 | 227 | # serves as the main identity-service changed hook, but may also be called | ||
3659 | 228 | # with a relation-id to configure new config files for existing relations. | ||
3660 | 229 | local r_id="$1" | ||
3661 | 230 | local r_args="" | ||
3662 | 231 | if [[ -n "$r_id" ]] ; then | ||
3663 | 232 | # set up environment for an existing relation to a single unit. | ||
3664 | 233 | export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) | ||
3665 | 234 | export JUJU_RELATION="identity-service" | ||
3666 | 235 | export JUJU_RELATION_ID="$r_id" | ||
3667 | 236 | local r_args="-r $JUJU_RELATION_ID" | ||
3668 | 237 | juju-log "$CHARM - keystone_changed: Running hook for existing relation to "\ | ||
3669 | 238 | "$JUJU_REMOTE_UNIT-$JUJU_RELATION_ID" | ||
3670 | 239 | fi | ||
3671 | 240 | |||
3672 | 241 | token=$(relation-get $r_args admin_token) | ||
3673 | 242 | service_port=$(relation-get $r_args service_port) | ||
3674 | 243 | auth_port=$(relation-get $r_args auth_port) | ||
3675 | 244 | service_username=$(relation-get $r_args service_username) | ||
3676 | 245 | service_password=$(relation-get $r_args service_password) | ||
3677 | 246 | service_tenant=$(relation-get $r_args service_tenant) | ||
3678 | 247 | [[ -z "$token" ]] || [[ -z "$service_port" ]] || [[ -z "$auth_port" ]] || | ||
3679 | 248 | [[ -z "$service_username" ]] || [[ -z "$service_password" ]] || | ||
3680 | 249 | [[ -z "$service_tenant" ]] && juju-log "keystone_changed: Peer not ready" && | ||
3681 | 250 | exit 0 | ||
3682 | 251 | [[ "$token" == "-1" ]] && | ||
3683 | 252 | juju-log "keystone_changed: admin token error" && exit 1 | ||
3684 | 253 | juju-log "keystone_changed: Acquired admin token" | ||
3685 | 254 | keystone_host=$(relation-get $r_args auth_host) | ||
3686 | 255 | |||
3687 | 256 | if [[ -n "$r_id" ]] ; then | ||
3688 | 257 | unset JUJU_REMOTE_UNIT JUJU_RELATION JUJU_RELATION_ID | ||
3689 | 258 | fi | ||
3690 | 259 | |||
3691 | 260 | set_or_update "flavor" "keystone" "api" "paste_deploy" | ||
3692 | 261 | set_or_update "flavor" "keystone" "registry" "paste_deploy" | ||
3693 | 262 | |||
3694 | 263 | local sect="filter:authtoken" | ||
3695 | 264 | for i in api-paste registry-paste ; do | ||
3696 | 265 | set_or_update "service_host" "$keystone_host" $i $sect | ||
3697 | 266 | set_or_update "service_port" "$service_port" $i $sect | ||
3698 | 267 | set_or_update "auth_host" "$keystone_host" $i $sect | ||
3699 | 268 | set_or_update "auth_port" "$auth_port" $i $sect | ||
3700 | 269 | set_or_update "auth_uri" "http://$keystone_host:$service_port/" $i $sect | ||
3701 | 270 | set_or_update "admin_token" "$token" $i $sect | ||
3702 | 271 | set_or_update "admin_tenant_name" "$service_tenant" $i $sect | ||
3703 | 272 | set_or_update "admin_user" "$service_username" $i $sect | ||
3704 | 273 | set_or_update "admin_password" "$service_password" $i $sect | ||
3705 | 274 | done | ||
3706 | 275 | service_ctl all restart | ||
3707 | 276 | |||
3708 | 277 | # Configure any object-store / swift relations now that we have an | ||
3709 | 278 | # identity-service | ||
3710 | 279 | if [[ -n "$(relation-ids object-store)" ]] ; then | ||
3711 | 280 | object-store_joined | ||
3712 | 281 | fi | ||
3713 | 282 | |||
3714 | 283 | # possibly configure HTTPS for API and registry | ||
3715 | 284 | configure_https | ||
3716 | 285 | } | ||
3717 | 286 | |||
3718 | 287 | function config_changed() { | ||
3719 | 288 | # Determine whether or not we should do an upgrade, based on whether or not | ||
3720 | 289 | # the version offered in openstack-origin is greater than what is installed. | ||
3721 | 290 | |||
3722 | 291 | local install_src=$(config-get openstack-origin) | ||
3723 | 292 | local cur=$(get_os_codename_package "glance-common") | ||
3724 | 293 | local available=$(get_os_codename_install_source "$install_src") | ||
3725 | 294 | |||
3726 | 295 | if [[ "$available" != "unknown" ]] ; then | ||
3727 | 296 | if dpkg --compare-versions $(get_os_version_codename "$cur") lt \ | ||
3728 | 297 | $(get_os_version_codename "$available") ; then | ||
3729 | 298 | juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available." | ||
3730 | 299 | do_openstack_upgrade "$install_src" $PACKAGES | ||
3731 | 300 | fi | ||
3732 | 301 | fi | ||
3733 | 302 | configure_https | ||
3734 | 303 | service_ctl all restart | ||
3735 | 304 | |||
3736 | 305 | # Save our scriptrc env variables for health checks | ||
3737 | 306 | declare -a env_vars=( | ||
3738 | 307 | "OPENSTACK_PORT_MCASTPORT=$(config-get ha-mcastport)" | ||
3739 | 308 | 'OPENSTACK_SERVICE_API=glance-api' | ||
3740 | 309 | 'OPENSTACK_SERVICE_REGISTRY=glance-registry') | ||
3741 | 310 | save_script_rc ${env_vars[@]} | ||
3742 | 311 | } | ||
3743 | 312 | |||
3744 | 313 | function cluster_changed() { | ||
3745 | 314 | configure_haproxy "glance_api:9292" | ||
3746 | 315 | } | ||
3747 | 316 | |||
3748 | 317 | function upgrade_charm() { | ||
3749 | 318 | cluster_changed | ||
3750 | 319 | } | ||
3751 | 320 | |||
3752 | 321 | function ha_relation_joined() { | ||
3753 | 322 | local corosync_bindiface=`config-get ha-bindiface` | ||
3754 | 323 | local corosync_mcastport=`config-get ha-mcastport` | ||
3755 | 324 | local vip=`config-get vip` | ||
3756 | 325 | local vip_iface=`config-get vip_iface` | ||
3757 | 326 | local vip_cidr=`config-get vip_cidr` | ||
3758 | 327 | if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ | ||
3759 | 328 | [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ | ||
3760 | 329 | [ -n "$corosync_mcastport" ]; then | ||
3761 | 330 | # TODO: This feels horrible but the data required by the hacluster | ||
3762 | 331 | # charm is quite complex and is python ast parsed. | ||
3763 | 332 | resources="{ | ||
3764 | 333 | 'res_glance_vip':'ocf:heartbeat:IPaddr2', | ||
3765 | 334 | 'res_glance_haproxy':'lsb:haproxy' | ||
3766 | 335 | }" | ||
3767 | 336 | resource_params="{ | ||
3768 | 337 | 'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', | ||
3769 | 338 | 'res_glance_haproxy': 'op monitor interval=\"5s\"' | ||
3770 | 339 | }" | ||
3771 | 340 | init_services="{ | ||
3772 | 341 | 'res_glance_haproxy':'haproxy' | ||
3773 | 342 | }" | ||
3774 | 343 | groups="{ | ||
3775 | 344 | 'grp_glance_haproxy':'res_glance_vip res_glance_haproxy' | ||
3776 | 345 | }" | ||
3777 | 346 | relation-set corosync_bindiface=$corosync_bindiface \ | ||
3778 | 347 | corosync_mcastport=$corosync_mcastport \ | ||
3779 | 348 | resources="$resources" resource_params="$resource_params" \ | ||
3780 | 349 | init_services="$init_services" groups="$groups" | ||
3781 | 350 | else | ||
3782 | 351 | juju-log "Insufficient configuration data to configure hacluster" | ||
3783 | 352 | exit 1 | ||
3784 | 353 | fi | ||
3785 | 354 | } | ||
3786 | 355 | |||
3787 | 356 | function ha_relation_changed() { | ||
3788 | 357 | local clustered=`relation-get clustered` | ||
3789 | 358 | if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then | ||
3790 | 359 | local port=$((9292 + 10000)) | ||
3791 | 360 | local host=$(config-get vip) | ||
3792 | 361 | local url="http://$host:$port" | ||
3793 | 362 | for r_id in `relation-ids identity-service`; do | ||
3794 | 363 | relation-set -r $r_id service="glance" \ | ||
3795 | 364 | region="$(config-get region)" \ | ||
3796 | 365 | public_url="$url" admin_url="$url" internal_url="$url" | ||
3797 | 366 | done | ||
3798 | 367 | for r_id in `relation-ids image-service`; do | ||
3799 | 368 | relation-set -r $r_id \ | ||
3800 | 369 | glance-api-server="$host:$port" | ||
3801 | 370 | done | ||
3802 | 371 | fi | ||
3803 | 372 | } | ||
3804 | 373 | |||
3805 | 374 | |||
3806 | 375 | function cluster_changed() { | ||
3807 | 376 | [[ -z "$(peer_units)" ]] && | ||
3808 | 377 | juju-log "cluster_changed() with no peers." && exit 0 | ||
3809 | 378 | local haproxy_port=$(determine_haproxy_port 9292) | ||
3810 | 379 | local backend_port=$(determine_api_port 9292) | ||
3811 | 380 | service glance-api stop | ||
3812 | 381 | configure_haproxy "glance_api:$haproxy_port:$backend_port" | ||
3813 | 382 | set_or_update bind_port "$backend_port" "api" | ||
3814 | 383 | service glance-api start | ||
3815 | 384 | } | ||
3816 | 385 | |||
3817 | 386 | function upgrade_charm() { | ||
3818 | 387 | cluster_changed | ||
3819 | 388 | } | ||
3820 | 389 | |||
3821 | 390 | function ha_relation_joined() { | ||
3822 | 391 | local corosync_bindiface=`config-get ha-bindiface` | ||
3823 | 392 | local corosync_mcastport=`config-get ha-mcastport` | ||
3824 | 393 | local vip=`config-get vip` | ||
3825 | 394 | local vip_iface=`config-get vip_iface` | ||
3826 | 395 | local vip_cidr=`config-get vip_cidr` | ||
3827 | 396 | if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ | ||
3828 | 397 | [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ | ||
3829 | 398 | [ -n "$corosync_mcastport" ]; then | ||
3830 | 399 | # TODO: This feels horrible but the data required by the hacluster | ||
3831 | 400 | # charm is quite complex and is python ast parsed. | ||
3832 | 401 | resources="{ | ||
3833 | 402 | 'res_glance_vip':'ocf:heartbeat:IPaddr2', | ||
3834 | 403 | 'res_glance_haproxy':'lsb:haproxy' | ||
3835 | 404 | }" | ||
3836 | 405 | resource_params="{ | ||
3837 | 406 | 'res_glance_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', | ||
3838 | 407 | 'res_glance_haproxy': 'op monitor interval=\"5s\"' | ||
3839 | 408 | }" | ||
3840 | 409 | init_services="{ | ||
3841 | 410 | 'res_glance_haproxy':'haproxy' | ||
3842 | 411 | }" | ||
3843 | 412 | clones="{ | ||
3844 | 413 | 'cl_glance_haproxy': 'res_glance_haproxy' | ||
3845 | 414 | }" | ||
3846 | 415 | relation-set corosync_bindiface=$corosync_bindiface \ | ||
3847 | 416 | corosync_mcastport=$corosync_mcastport \ | ||
3848 | 417 | resources="$resources" resource_params="$resource_params" \ | ||
3849 | 418 | init_services="$init_services" clones="$clones" | ||
3850 | 419 | else | ||
3851 | 420 | juju-log "Insufficient configuration data to configure hacluster" | ||
3852 | 421 | exit 1 | ||
3853 | 422 | fi | ||
3854 | 423 | } | ||
3855 | 424 | |||
3856 | 425 | function ha_relation_changed() { | ||
3857 | 426 | local clustered=`relation-get clustered` | ||
3858 | 427 | if [ -n "$clustered" ] && is_leader 'res_glance_vip'; then | ||
3859 | 428 | local host=$(config-get vip) | ||
3860 | 429 | https && local scheme="https" || local scheme="http" | ||
3861 | 430 | local url="$scheme://$host:9292" | ||
3862 | 431 | |||
3863 | 432 | for r_id in `relation-ids identity-service`; do | ||
3864 | 433 | relation-set -r $r_id service="glance" \ | ||
3865 | 434 | region="$(config-get region)" \ | ||
3866 | 435 | public_url="$url" admin_url="$url" internal_url="$url" | ||
3867 | 436 | done | ||
3868 | 437 | for r_id in `relation-ids image-service`; do | ||
3869 | 438 | relation-set -r $r_id \ | ||
3870 | 439 | glance-api-server="$scheme://$host:9292" | ||
3871 | 440 | done | ||
3872 | 441 | fi | ||
3873 | 442 | } | ||
3874 | 443 | |||
3875 | 444 | |||
3876 | 445 | case $ARG0 in | ||
3877 | 446 | "start"|"stop") service_ctl all $ARG0 ;; | ||
3878 | 447 | "install") install_hook ;; | ||
3879 | 448 | "config-changed") config_changed ;; | ||
3880 | 449 | "shared-db-relation-joined") db_joined ;; | ||
3881 | 450 | "shared-db-relation-changed") db_changed;; | ||
3882 | 451 | "image-service-relation-joined") image-service_joined ;; | ||
3883 | 452 | "image-service-relation-changed") exit 0 ;; | ||
3884 | 453 | "object-store-relation-joined") object-store_joined ;; | ||
3885 | 454 | "object-store-relation-changed") object-store_changed ;; | ||
3886 | 455 | "identity-service-relation-joined") keystone_joined ;; | ||
3887 | 456 | "identity-service-relation-changed") keystone_changed ;; | ||
3888 | 457 | "ceph-relation-joined") ceph_joined;; | ||
3889 | 458 | "ceph-relation-changed") ceph_changed;; | ||
3890 | 459 | "cluster-relation-changed") cluster_changed ;; | ||
3891 | 460 | "cluster-relation-departed") cluster_changed ;; | ||
3892 | 461 | "ha-relation-joined") ha_relation_joined ;; | ||
3893 | 462 | "ha-relation-changed") ha_relation_changed ;; | ||
3894 | 463 | "upgrade-charm") upgrade_charm ;; | ||
3895 | 464 | esac | ||
3896 | 465 | exit 0 | ||
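The `ha_relation_joined` hooks above pass stringified Python dicts over the relation because, as the TODO comment notes, the hacluster charm parses them with Python's ast machinery. A minimal sketch of how such a payload round-trips, assuming the receiving side uses `ast.literal_eval` (the exact parsing call in hacluster is not shown in this diff):

```python
import ast

# Stringified resource map, shaped exactly as the bash hook builds it
# before calling relation-set.
resources = """{
    'res_glance_vip':'ocf:heartbeat:IPaddr2',
    'res_glance_haproxy':'lsb:haproxy'
}"""

# literal_eval parses Python literals (dicts, strings, numbers) without
# executing arbitrary code, which is what makes this wire format safe
# for relation data.
parsed = ast.literal_eval(resources)
```

This is also why the values must stay literal: embedding anything other than quoted strings and nested containers would make `literal_eval` reject the payload.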
3897 | === added file 'hooks/glance_contexts.py' | |||
3898 | --- hooks/glance_contexts.py 1970-01-01 00:00:00 +0000 | |||
3899 | +++ hooks/glance_contexts.py 2013-10-15 01:35:02 +0000 | |||
3900 | @@ -0,0 +1,89 @@ | |||
3901 | 1 | from charmhelpers.core.hookenv import ( | ||
3902 | 2 | relation_ids, | ||
3903 | 3 | related_units, | ||
3904 | 4 | relation_get, | ||
3905 | 5 | service_name, | ||
3906 | 6 | ) | ||
3907 | 7 | |||
3908 | 8 | from charmhelpers.contrib.openstack.context import ( | ||
3909 | 9 | OSContextGenerator, | ||
3910 | 10 | ApacheSSLContext as SSLContext, | ||
3911 | 11 | ) | ||
3912 | 12 | |||
3913 | 13 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
3914 | 14 | determine_api_port, | ||
3915 | 15 | determine_haproxy_port, | ||
3916 | 16 | ) | ||
3917 | 17 | |||
3918 | 18 | |||
3919 | 19 | def is_relation_made(relation, key='private-address'): | ||
3920 | 20 | for r_id in relation_ids(relation): | ||
3921 | 21 | for unit in related_units(r_id): | ||
3922 | 22 | if relation_get(key, rid=r_id, unit=unit): | ||
3923 | 23 | return True | ||
3924 | 24 | return False | ||
3925 | 25 | |||
3926 | 26 | |||
3927 | 27 | class CephGlanceContext(OSContextGenerator): | ||
3928 | 28 | interfaces = ['ceph-glance'] | ||
3929 | 29 | |||
3930 | 30 | def __call__(self): | ||
3931 | 31 | """ | ||
3932 | 32 | Used to generate template context to be added to glance-api.conf in | ||
3933 | 33 | the presence of a ceph relation. | ||
3934 | 34 | """ | ||
3935 | 35 | if not is_relation_made(relation="ceph", | ||
3936 | 36 | key="key"): | ||
3937 | 37 | return {} | ||
3938 | 38 | service = service_name() | ||
3939 | 39 | return { | ||
3940 | 40 | # ensure_ceph_pool() creates pool based on service name. | ||
3941 | 41 | 'rbd_pool': service, | ||
3942 | 42 | 'rbd_user': service, | ||
3943 | 43 | } | ||
3944 | 44 | |||
3945 | 45 | |||
3946 | 46 | class ObjectStoreContext(OSContextGenerator): | ||
3947 | 47 | interfaces = ['object-store'] | ||
3948 | 48 | |||
3949 | 49 | def __call__(self): | ||
3950 | 50 | """ | ||
3951 | 51 | Used to generate template context to be added to glance-api.conf in | ||
3952 | 52 | the presence of a 'object-store' relation. | ||
3953 | 53 | """ | ||
3954 | 54 | if not relation_ids('object-store'): | ||
3955 | 55 | return {} | ||
3956 | 56 | return { | ||
3957 | 57 | 'swift_store': True, | ||
3958 | 58 | } | ||
3959 | 59 | |||
3960 | 60 | |||
3961 | 61 | class HAProxyContext(OSContextGenerator): | ||
3962 | 62 | interfaces = ['cluster'] | ||
3963 | 63 | |||
3964 | 64 | def __call__(self): | ||
3965 | 65 | ''' | ||
3966 | 66 | Extends the main charmhelpers HAProxyContext with a port mapping | ||
3967 | 67 | specific to this charm. | ||
3968 | 68 | Also used to extend glance-api.conf context with correct bind_port | ||
3969 | 69 | ''' | ||
3970 | 70 | haproxy_port = determine_haproxy_port(9292) | ||
3971 | 71 | api_port = determine_api_port(9292) | ||
3972 | 72 | |||
3973 | 73 | ctxt = { | ||
3974 | 74 | 'service_ports': {'glance_api': [haproxy_port, api_port]}, | ||
3975 | 75 | 'bind_port': api_port, | ||
3976 | 76 | } | ||
3977 | 77 | return ctxt | ||
3978 | 78 | |||
3979 | 79 | |||
3980 | 80 | class ApacheSSLContext(SSLContext): | ||
3981 | 81 | interfaces = ['https'] | ||
3982 | 82 | external_ports = [9292] | ||
3983 | 83 | service_namespace = 'glance' | ||
3984 | 84 | |||
3985 | 85 | def __call__(self): | ||
3986 | 86 | #from glance_utils import service_enabled | ||
3987 | 87 | #if not service_enabled('glance-api'): | ||
3988 | 88 | # return {} | ||
3989 | 89 | return super(ApacheSSLContext, self).__call__() | ||
3990 | 0 | 90 | ||
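The context classes in glance_contexts.py all follow the same contract: `__call__` returns a dict to be merged into the template context, or an empty dict when the backing relation is absent or incomplete. A dependency-free sketch of that contract, using `ObjectStoreContext` as the example (the `relation_ids` lookup is stubbed here rather than the real charmhelpers call):

```python
class ObjectStoreContext:
    """Simplified stand-in for the charm's ObjectStoreContext."""
    interfaces = ['object-store']

    def __init__(self, relation_ids):
        # Injected lookup so the sketch runs without charmhelpers.
        self._relation_ids = relation_ids

    def __call__(self):
        # No object-store relation: contribute nothing to the template.
        if not self._relation_ids('object-store'):
            return {}
        return {'swift_store': True}


# With a swift relation present, the glance-api.conf template sees
# swift_store=True; without one, the context stays out of the render.
with_swift = ObjectStoreContext(lambda name: ['object-store:0'])()
without_swift = ObjectStoreContext(lambda name: [])()
```

Returning `{}` rather than raising is what lets `OSConfigRenderer` treat a missing relation as an incomplete context and skip the related config stanza.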
3991 | === added file 'hooks/glance_relations.py' | |||
3992 | --- hooks/glance_relations.py 1970-01-01 00:00:00 +0000 | |||
3993 | +++ hooks/glance_relations.py 2013-10-15 01:35:02 +0000 | |||
3994 | @@ -0,0 +1,320 @@ | |||
3995 | 1 | #!/usr/bin/python | ||
3996 | 2 | import os | ||
3997 | 3 | import sys | ||
3998 | 4 | |||
3999 | 5 | from glance_utils import ( | ||
4000 | 6 | do_openstack_upgrade, | ||
4001 | 7 | ensure_ceph_pool, | ||
4002 | 8 | migrate_database, | ||
4003 | 9 | register_configs, | ||
4004 | 10 | restart_map, | ||
4005 | 11 | CLUSTER_RES, | ||
4006 | 12 | PACKAGES, | ||
4007 | 13 | SERVICES, | ||
4008 | 14 | CHARM, | ||
4009 | 15 | GLANCE_REGISTRY_CONF, | ||
4010 | 16 | GLANCE_REGISTRY_PASTE_INI, | ||
4011 | 17 | GLANCE_API_CONF, | ||
4012 | 18 | GLANCE_API_PASTE_INI, | ||
4013 | 19 | HAPROXY_CONF, | ||
4014 | 20 | CEPH_CONF, ) | ||
4015 | 21 | |||
4016 | 22 | from charmhelpers.core.hookenv import ( | ||
4017 | 23 | config, | ||
4018 | 24 | Hooks, | ||
4019 | 25 | log as juju_log, | ||
4020 | 26 | open_port, | ||
4021 | 27 | relation_get, | ||
4022 | 28 | relation_set, | ||
4023 | 29 | relation_ids, | ||
4024 | 30 | service_name, | ||
4025 | 31 | unit_get, | ||
4026 | 32 | UnregisteredHookError, ) | ||
4027 | 33 | |||
4028 | 34 | from charmhelpers.core.host import restart_on_change, service_stop | ||
4029 | 35 | |||
4030 | 36 | from charmhelpers.fetch import apt_install, apt_update | ||
4031 | 37 | |||
4032 | 38 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
4033 | 39 | canonical_url, eligible_leader, is_leader) | ||
4034 | 40 | |||
4035 | 41 | from charmhelpers.contrib.openstack.utils import ( | ||
4036 | 42 | configure_installation_source, | ||
4037 | 43 | get_os_codename_package, | ||
4038 | 44 | openstack_upgrade_available, | ||
4039 | 45 | lsb_release, ) | ||
4040 | 46 | |||
4041 | 47 | from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_keyring | ||
4042 | 48 | from charmhelpers.payload.execd import execd_preinstall | ||
4043 | 49 | |||
4044 | 50 | from subprocess import ( | ||
4045 | 51 | check_call, ) | ||
4046 | 52 | |||
4047 | 53 | from commands import getstatusoutput | ||
4048 | 54 | |||
4049 | 55 | hooks = Hooks() | ||
4050 | 56 | |||
4051 | 57 | CONFIGS = register_configs() | ||
4052 | 58 | |||
4053 | 59 | |||
4054 | 60 | @hooks.hook('install') | ||
4055 | 61 | def install_hook(): | ||
4056 | 62 | juju_log('Installing glance packages') | ||
4057 | 63 | execd_preinstall() | ||
4058 | 64 | src = config('openstack-origin') | ||
4059 | 65 | if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and | ||
4060 | 66 | src == 'distro'): | ||
4061 | 67 | src = 'cloud:precise-folsom' | ||
4062 | 68 | |||
4063 | 69 | configure_installation_source(src) | ||
4064 | 70 | |||
4065 | 71 | apt_update() | ||
4066 | 72 | apt_install(PACKAGES) | ||
4067 | 73 | |||
4068 | 74 | for service in SERVICES: | ||
4069 | 75 | service_stop(service) | ||
4070 | 76 | |||
4071 | 77 | |||
4072 | 78 | @hooks.hook('shared-db-relation-joined') | ||
4073 | 79 | def db_joined(): | ||
4074 | 80 | relation_set(database=config('database'), username=config('database-user'), | ||
4075 | 81 | hostname=unit_get('private-address')) | ||
4076 | 82 | |||
4077 | 83 | |||
4078 | 84 | @hooks.hook('shared-db-relation-changed') | ||
4079 | 85 | @restart_on_change(restart_map()) | ||
4080 | 86 | def db_changed(): | ||
4081 | 87 | rel = get_os_codename_package("glance-common") | ||
4082 | 88 | |||
4083 | 89 | if 'shared-db' not in CONFIGS.complete_contexts(): | ||
4084 | 90 | juju_log('shared-db relation incomplete. Peer not ready?') | ||
4085 | 91 | return | ||
4086 | 92 | |||
4087 | 93 | CONFIGS.write(GLANCE_REGISTRY_CONF) | ||
4088 | 94 | # since folsom, a db connection setting in glance-api.conf is required. | ||
4089 | 95 | if rel != "essex": | ||
4090 | 96 | CONFIGS.write(GLANCE_API_CONF) | ||
4091 | 97 | |||
4092 | 98 | if eligible_leader(CLUSTER_RES): | ||
4093 | 99 | if rel == "essex": | ||
4094 | 100 | (status, output) = getstatusoutput('glance-manage db_version') | ||
4095 | 101 | if status != 0: | ||
4096 | 102 | juju_log('Setting version_control to 0') | ||
4097 | 103 | check_call(["glance-manage", "version_control", "0"]) | ||
4098 | 104 | |||
4099 | 105 | juju_log('Cluster leader, performing db sync') | ||
4100 | 106 | migrate_database() | ||
4101 | 107 | |||
4102 | 108 | |||
4103 | 109 | @hooks.hook('image-service-relation-joined') | ||
4104 | 110 | def image_service_joined(relation_id=None): | ||
4105 | 111 | if not eligible_leader(CLUSTER_RES): | ||
4106 | 112 | return | ||
4107 | 113 | |||
4108 | 114 | relation_data = { | ||
4109 | 115 | 'glance-api-server': canonical_url(CONFIGS) + ":9292" | ||
4110 | 116 | } | ||
4111 | 117 | |||
4112 | 118 | juju_log("%s: image-service_joined: To peer glance-api-server=%s" % | ||
4113 | 119 | (CHARM, relation_data['glance-api-server'])) | ||
4114 | 120 | |||
4115 | 121 | relation_set(relation_id=relation_id, **relation_data) | ||
4116 | 122 | |||
4117 | 123 | |||
4118 | 124 | @hooks.hook('object-store-relation-joined') | ||
4119 | 125 | @restart_on_change(restart_map()) | ||
4120 | 126 | def object_store_joined(): | ||
4121 | 127 | |||
4122 | 128 | if 'identity-service' not in CONFIGS.complete_contexts(): | ||
4123 | 129 | juju_log('Deferring swift storage configuration until ' | ||
4124 | 130 | 'an identity-service relation exists') | ||
4125 | 131 | return | ||
4126 | 132 | |||
4127 | 133 | if 'object-store' not in CONFIGS.complete_contexts(): | ||
4128 | 134 | juju_log('swift relation incomplete') | ||
4129 | 135 | return | ||
4130 | 136 | |||
4131 | 137 | CONFIGS.write(GLANCE_API_CONF) | ||
4132 | 138 | |||
4133 | 139 | |||
4134 | 140 | @hooks.hook('ceph-relation-joined') | ||
4135 | 141 | def ceph_joined(): | ||
4136 | 142 | if not os.path.isdir('/etc/ceph'): | ||
4137 | 143 | os.mkdir('/etc/ceph') | ||
4138 | 144 | apt_install(['ceph-common', 'python-ceph']) | ||
4139 | 145 | |||
4140 | 146 | |||
4141 | 147 | @hooks.hook('ceph-relation-changed') | ||
4142 | 148 | @restart_on_change(restart_map()) | ||
4143 | 149 | def ceph_changed(): | ||
4144 | 150 | if 'ceph' not in CONFIGS.complete_contexts(): | ||
4145 | 151 | juju_log('ceph relation incomplete. Peer not ready?') | ||
4146 | 152 | return | ||
4147 | 153 | |||
4148 | 154 | service = service_name() | ||
4149 | 155 | |||
4150 | 156 | if not ensure_ceph_keyring(service=service, | ||
4151 | 157 | user='glance', group='glance'): | ||
4152 | 158 | juju_log('Could not create ceph keyring: peer not ready?') | ||
4153 | 159 | return | ||
4154 | 160 | |||
4155 | 161 | CONFIGS.write(GLANCE_API_CONF) | ||
4156 | 162 | CONFIGS.write(CEPH_CONF) | ||
4157 | 163 | |||
4158 | 164 | if eligible_leader(CLUSTER_RES): | ||
4159 | 165 | _config = config() | ||
4160 | 166 | ensure_ceph_pool(service=service, | ||
4161 | 167 | replicas=_config['ceph-osd-replication-count']) | ||
4162 | 168 | |||
4163 | 169 | |||
4164 | 170 | @hooks.hook('identity-service-relation-joined') | ||
4165 | 171 | def keystone_joined(relation_id=None): | ||
4166 | 172 | if not eligible_leader(CLUSTER_RES): | ||
4167 | 173 | juju_log('Deferring keystone_joined() to service leader.') | ||
4168 | 174 | return | ||
4169 | 175 | |||
4170 | 176 | url = canonical_url(CONFIGS) + ":9292" | ||
4171 | 177 | relation_data = { | ||
4172 | 178 | 'service': 'glance', | ||
4173 | 179 | 'region': config('region'), | ||
4174 | 180 | 'public_url': url, | ||
4175 | 181 | 'admin_url': url, | ||
4176 | 182 | 'internal_url': url, } | ||
4177 | 183 | |||
4178 | 184 | relation_set(relation_id=relation_id, **relation_data) | ||
4179 | 185 | |||
4180 | 186 | |||
4181 | 187 | @hooks.hook('identity-service-relation-changed') | ||
4182 | 188 | @restart_on_change(restart_map()) | ||
4183 | 189 | def keystone_changed(): | ||
4184 | 190 | if 'identity-service' not in CONFIGS.complete_contexts(): | ||
4185 | 191 | juju_log('identity-service relation incomplete. Peer not ready?') | ||
4186 | 192 | return | ||
4187 | 193 | |||
4188 | 194 | CONFIGS.write(GLANCE_API_CONF) | ||
4189 | 195 | CONFIGS.write(GLANCE_REGISTRY_CONF) | ||
4190 | 196 | |||
4191 | 197 | CONFIGS.write(GLANCE_API_PASTE_INI) | ||
4192 | 198 | CONFIGS.write(GLANCE_REGISTRY_PASTE_INI) | ||
4193 | 199 | |||
4194 | 200 | # Configure any object-store / swift relations now that we have an | ||
4195 | 201 | # identity-service | ||
4196 | 202 | if relation_ids('object-store'): | ||
4197 | 203 | object_store_joined() | ||
4198 | 204 | |||
4199 | 205 | # possibly configure HTTPS for API and registry | ||
4200 | 206 | configure_https() | ||
4201 | 207 | |||
4202 | 208 | |||
4203 | 209 | @hooks.hook('config-changed') | ||
4204 | 210 | @restart_on_change(restart_map()) | ||
4205 | 211 | def config_changed(): | ||
4206 | 212 | if openstack_upgrade_available('glance-common'): | ||
4207 | 213 | juju_log('Upgrading OpenStack release') | ||
4208 | 214 | do_openstack_upgrade(CONFIGS) | ||
4209 | 215 | |||
4210 | 216 | open_port(9292) | ||
4211 | 217 | configure_https() | ||
4212 | 218 | |||
4213 | 219 | #env_vars = {'OPENSTACK_PORT_MCASTPORT': config("ha-mcastport"), | ||
4214 | 220 | # 'OPENSTACK_SERVICE_API': "glance-api", | ||
4215 | 221 | # 'OPENSTACK_SERVICE_REGISTRY': "glance-registry"} | ||
4216 | 222 | #save_script_rc(**env_vars) | ||
4217 | 223 | |||
4218 | 224 | |||
4219 | 225 | @hooks.hook('cluster-relation-changed') | ||
4220 | 226 | @restart_on_change(restart_map()) | ||
4221 | 227 | def cluster_changed(): | ||
4222 | 228 | CONFIGS.write(GLANCE_API_CONF) | ||
4223 | 229 | CONFIGS.write(HAPROXY_CONF) | ||
4224 | 230 | |||
4225 | 231 | |||
4226 | 232 | @hooks.hook('upgrade-charm') | ||
4227 | 233 | def upgrade_charm(): | ||
4228 | 234 | cluster_changed() | ||
4229 | 235 | |||
4230 | 236 | |||
4231 | 237 | @hooks.hook('ha-relation-joined') | ||
4232 | 238 | def ha_relation_joined(): | ||
4233 | 239 | corosync_bindiface = config("ha-bindiface") | ||
4234 | 240 | corosync_mcastport = config("ha-mcastport") | ||
4235 | 241 | vip = config("vip") | ||
4236 | 242 | vip_iface = config("vip_iface") | ||
4237 | 243 | vip_cidr = config("vip_cidr") | ||
4238 | 244 | |||
4239 | 245 | #if vip and vip_iface and vip_cidr and \ | ||
4240 | 246 | # corosync_bindiface and corosync_mcastport: | ||
4241 | 247 | |||
4242 | 248 | resources = { | ||
4243 | 249 | 'res_glance_vip': 'ocf:heartbeat:IPaddr2', | ||
4244 | 250 | 'res_glance_haproxy': 'lsb:haproxy', } | ||
4245 | 251 | |||
4246 | 252 | resource_params = { | ||
4247 | 253 | 'res_glance_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' % | ||
4248 | 254 | (vip, vip_cidr, vip_iface), | ||
4249 | 255 | 'res_glance_haproxy': 'op monitor interval="5s"', } | ||
4250 | 256 | |||
4251 | 257 | init_services = { | ||
4252 | 258 | 'res_glance_haproxy': 'haproxy', } | ||
4253 | 259 | |||
4254 | 260 | clones = { | ||
4255 | 261 | 'cl_glance_haproxy': 'res_glance_haproxy', } | ||
4256 | 262 | |||
4257 | 263 | relation_set(init_services=init_services, | ||
4258 | 264 | corosync_bindiface=corosync_bindiface, | ||
4259 | 265 | corosync_mcastport=corosync_mcastport, | ||
4260 | 266 | resources=resources, | ||
4261 | 267 | resource_params=resource_params, | ||
4262 | 268 | clones=clones) | ||
4263 | 269 | |||
4264 | 270 | |||
4265 | 271 | @hooks.hook('ha-relation-changed') | ||
4266 | 272 | def ha_relation_changed(): | ||
4267 | 273 | clustered = relation_get('clustered') | ||
4268 | 274 | if not clustered or clustered in [None, 'None', '']: | ||
4269 | 275 | juju_log('ha_changed: hacluster subordinate is not fully clustered.') | ||
4270 | 276 | return | ||
4271 | 277 | if not is_leader(CLUSTER_RES): | ||
4272 | 278 | juju_log('ha_changed: hacluster complete but we are not leader.') | ||
4273 | 279 | return | ||
4274 | 280 | |||
4275 | 281 | # reconfigure endpoint in keystone to point to clustered VIP. | ||
4276 | 282 | [keystone_joined(rid) for rid in relation_ids('identity-service')] | ||
4277 | 283 | |||
4278 | 284 | # notify glance client services of reconfigured URL. | ||
4279 | 285 | [image_service_joined(rid) for rid in relation_ids('image-service')] | ||
4280 | 286 | |||
4281 | 287 | |||
4282 | 288 | @hooks.hook('ceph-relation-broken', | ||
4283 | 289 | 'identity-service-relation-broken', | ||
4284 | 290 | 'object-store-relation-broken', | ||
4285 | 291 | 'shared-db-relation-broken') | ||
4286 | 292 | def relation_broken(): | ||
4287 | 293 | CONFIGS.write_all() | ||
4288 | 294 | |||
4289 | 295 | |||
4290 | 296 | def configure_https(): | ||
4291 | 297 | ''' | ||
4292 | 298 | Enables SSL API Apache config if appropriate and kicks | ||
4293 | 299 | identity-service and image-service with any required | ||
4294 | 300 | updates | ||
4295 | 301 | ''' | ||
4296 | 302 | CONFIGS.write_all() | ||
4297 | 303 | if 'https' in CONFIGS.complete_contexts(): | ||
4298 | 304 | cmd = ['a2ensite', 'openstack_https_frontend'] | ||
4299 | 305 | check_call(cmd) | ||
4300 | 306 | else: | ||
4301 | 307 | cmd = ['a2dissite', 'openstack_https_frontend'] | ||
4302 | 308 | check_call(cmd) | ||
4303 | 309 | |||
4304 | 310 | for r_id in relation_ids('identity-service'): | ||
4305 | 311 | keystone_joined(relation_id=r_id) | ||
4306 | 312 | for r_id in relation_ids('image-service'): | ||
4307 | 313 | image_service_joined(relation_id=r_id) | ||
4308 | 314 | |||
4309 | 315 | |||
4310 | 316 | if __name__ == '__main__': | ||
4311 | 317 | try: | ||
4312 | 318 | hooks.execute(sys.argv) | ||
4313 | 319 | except UnregisteredHookError as e: | ||
4314 | 320 | juju_log('Unknown hook {} - skipping.'.format(e)) | ||
4315 | 0 | 321 | ||
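glance_relations.py dispatches every hook through a single `Hooks` registry: each decorated function is registered under one or more hook names, and `hooks.execute(sys.argv)` routes on the basename of the invoked script, since Juju calls hooks via symlinks. A minimal sketch of that dispatch mechanism (a stand-in, not the real charmhelpers implementation, which raises `UnregisteredHookError` where this uses `KeyError`):

```python
import os


class Hooks:
    """Simplified sketch of charmhelpers' hook registry/dispatcher."""

    def __init__(self):
        self._registry = {}

    def hook(self, *names):
        # Decorator: register one function under one or more hook names.
        def wrapper(fn):
            for name in names:
                self._registry[name] = fn
            return fn
        return wrapper

    def execute(self, args):
        # argv[0] is the symlink path, e.g.
        # /var/lib/juju/.../hooks/config-changed
        hook_name = os.path.basename(args[0])
        if hook_name not in self._registry:
            raise KeyError(hook_name)
        return self._registry[hook_name]()


hooks = Hooks()


@hooks.hook('install')
def install_hook():
    return 'installed'


result = hooks.execute(['/var/lib/juju/hooks/install'])
```

This is why the `if __name__ == '__main__'` block above wraps `hooks.execute` in a try/except: an unregistered hook name is logged and skipped rather than failing the unit.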
4316 | === added file 'hooks/glance_utils.py' | |||
4317 | --- hooks/glance_utils.py 1970-01-01 00:00:00 +0000 | |||
4318 | +++ hooks/glance_utils.py 2013-10-15 01:35:02 +0000 | |||
4319 | @@ -0,0 +1,193 @@ | |||
4320 | 1 | #!/usr/bin/python | ||
4321 | 2 | |||
4322 | 3 | import os | ||
4323 | 4 | import subprocess | ||
4324 | 5 | |||
4325 | 6 | import glance_contexts | ||
4326 | 7 | |||
4327 | 8 | from collections import OrderedDict | ||
4328 | 9 | |||
4329 | 10 | from charmhelpers.fetch import ( | ||
4330 | 11 | apt_install, | ||
4331 | 12 | apt_update, ) | ||
4332 | 13 | |||
4333 | 14 | from charmhelpers.core.hookenv import ( | ||
4334 | 15 | config, | ||
4335 | 16 | log as juju_log, | ||
4336 | 17 | relation_ids) | ||
4337 | 18 | |||
4338 | 19 | from charmhelpers.contrib.openstack import ( | ||
4339 | 20 | templating, | ||
4340 | 21 | context, ) | ||
4341 | 22 | |||
4342 | 23 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
4343 | 24 | eligible_leader, | ||
4344 | 25 | ) | ||
4345 | 26 | |||
4346 | 27 | from charmhelpers.contrib.storage.linux.ceph import ( | ||
4347 | 28 | create_pool as ceph_create_pool, | ||
4348 | 29 | pool_exists as ceph_pool_exists) | ||
4349 | 30 | |||
4350 | 31 | from charmhelpers.contrib.openstack.utils import ( | ||
4351 | 32 | get_os_codename_install_source, | ||
4352 | 33 | get_os_codename_package, | ||
4353 | 34 | configure_installation_source, ) | ||
4354 | 35 | |||
4355 | 36 | CLUSTER_RES = "res_glance_vip" | ||
4356 | 37 | |||
4357 | 38 | PACKAGES = [ | ||
4358 | 39 | "apache2", "glance", "python-mysqldb", "python-swift", | ||
4359 | 40 | "python-keystone", "uuid", "haproxy", ] | ||
4360 | 41 | |||
4361 | 42 | SERVICES = [ | ||
4362 | 43 | "glance-api", "glance-registry", ] | ||
4363 | 44 | |||
4364 | 45 | CHARM = "glance" | ||
4365 | 46 | |||
4366 | 47 | GLANCE_REGISTRY_CONF = "/etc/glance/glance-registry.conf" | ||
4367 | 48 | GLANCE_REGISTRY_PASTE_INI = "/etc/glance/glance-registry-paste.ini" | ||
4368 | 49 | GLANCE_API_CONF = "/etc/glance/glance-api.conf" | ||
4369 | 50 | GLANCE_API_PASTE_INI = "/etc/glance/glance-api-paste.ini" | ||
4370 | 51 | CEPH_CONF = "/etc/ceph/ceph.conf" | ||
4371 | 52 | HAPROXY_CONF = "/etc/haproxy/haproxy.cfg" | ||
4372 | 53 | HTTPS_APACHE_CONF = "/etc/apache2/sites-available/openstack_https_frontend" | ||
4373 | 54 | HTTPS_APACHE_24_CONF = "/etc/apache2/sites-available/" \ | ||
4374 | 55 | "openstack_https_frontend.conf" | ||
4375 | 56 | |||
4376 | 57 | CONF_DIR = "/etc/glance" | ||
4377 | 58 | |||
4378 | 59 | TEMPLATES = 'templates/' | ||
4379 | 60 | |||
4380 | 61 | CONFIG_FILES = OrderedDict([ | ||
4381 | 62 | (GLANCE_REGISTRY_CONF, { | ||
4382 | 63 | 'hook_contexts': [context.SharedDBContext(), | ||
4383 | 64 | context.IdentityServiceContext()], | ||
4384 | 65 | 'services': ['glance-registry'] | ||
4385 | 66 | }), | ||
4386 | 67 | (GLANCE_API_CONF, { | ||
4387 | 68 | 'hook_contexts': [context.SharedDBContext(), | ||
4388 | 69 | context.IdentityServiceContext(), | ||
4389 | 70 | glance_contexts.CephGlanceContext(), | ||
4390 | 71 | glance_contexts.ObjectStoreContext(), | ||
4391 | 72 | glance_contexts.HAProxyContext()], | ||
4392 | 73 | 'services': ['glance-api'] | ||
4393 | 74 | }), | ||
4394 | 75 | (GLANCE_API_PASTE_INI, { | ||
4395 | 76 | 'hook_contexts': [context.IdentityServiceContext()], | ||
4396 | 77 | 'services': ['glance-api'] | ||
4397 | 78 | }), | ||
4398 | 79 | (GLANCE_REGISTRY_PASTE_INI, { | ||
4399 | 80 | 'hook_contexts': [context.IdentityServiceContext()], | ||
4400 | 81 | 'services': ['glance-registry'] | ||
4401 | 82 | }), | ||
4402 | 83 | (CEPH_CONF, { | ||
4403 | 84 | 'hook_contexts': [context.CephContext()], | ||
4404 | 85 | 'services': [] | ||
4405 | 86 | }), | ||
4406 | 87 | (HAPROXY_CONF, { | ||
4407 | 88 | 'hook_contexts': [context.HAProxyContext(), | ||
4408 | 89 | glance_contexts.HAProxyContext()], | ||
4409 | 90 | 'services': ['haproxy'], | ||
4410 | 91 | }), | ||
4411 | 92 | (HTTPS_APACHE_CONF, { | ||
4412 | 93 | 'hook_contexts': [glance_contexts.ApacheSSLContext()], | ||
4413 | 94 | 'services': ['apache2'], | ||
4414 | 95 | }), | ||
4415 | 96 | (HTTPS_APACHE_24_CONF, { | ||
4416 | 97 | 'hook_contexts': [glance_contexts.ApacheSSLContext()], | ||
4417 | 98 | 'services': ['apache2'], | ||
4418 | 99 | }) | ||
4419 | 100 | ]) | ||
4420 | 101 | |||
4421 | 102 | |||
4422 | 103 | def register_configs(): | ||
4423 | 104 | # Register config files with their respective contexts. | ||
4424 | 105 | # Registration of some configs may not be required depending on | ||
4425 | 106 | # the existence of certain relations. | ||
4426 | 107 | release = get_os_codename_package('glance-common', fatal=False) or 'essex' | ||
4427 | 108 | configs = templating.OSConfigRenderer(templates_dir=TEMPLATES, | ||
4428 | 109 | openstack_release=release) | ||
4429 | 110 | |||
4430 | 111 | confs = [GLANCE_REGISTRY_CONF, | ||
4431 | 112 | GLANCE_API_CONF, | ||
4432 | 113 | GLANCE_API_PASTE_INI, | ||
4433 | 114 | GLANCE_REGISTRY_PASTE_INI, | ||
4434 | 115 | HAPROXY_CONF] | ||
4435 | 116 | |||
4436 | 117 | if relation_ids('ceph'): | ||
4437 | 118 | if not os.path.isdir('/etc/ceph'): | ||
4438 | 119 | os.mkdir('/etc/ceph') | ||
4439 | 120 | confs.append(CEPH_CONF) | ||
4440 | 121 | |||
4441 | 122 | for conf in confs: | ||
4442 | 123 | configs.register(conf, CONFIG_FILES[conf]['hook_contexts']) | ||
4443 | 124 | |||
4444 | 125 | if os.path.exists('/etc/apache2/conf-available'): | ||
4445 | 126 | configs.register(HTTPS_APACHE_24_CONF, | ||
4446 | 127 | CONFIG_FILES[HTTPS_APACHE_24_CONF]['hook_contexts']) | ||
4447 | 128 | else: | ||
4448 | 129 | configs.register(HTTPS_APACHE_CONF, | ||
4449 | 130 | CONFIG_FILES[HTTPS_APACHE_CONF]['hook_contexts']) | ||
4450 | 131 | |||
4451 | 132 | return configs | ||
4452 | 133 | |||
4453 | 134 | |||
4454 | 135 | def migrate_database(): | ||
4455 | 136 | '''Runs glance-manage to initialize a new database or migrate an existing one''' | ||
4456 | 137 | cmd = ['glance-manage', 'db_sync'] | ||
4457 | 138 | subprocess.check_call(cmd) | ||
4458 | 139 | |||
4459 | 140 | |||
4460 | 141 | def ensure_ceph_pool(service, replicas): | ||
4461 | 142 | '''Creates a ceph pool for service if one does not exist''' | ||
4462 | 143 | # TODO: Ditto about moving somewhere sharable. | ||
4463 | 144 | if not ceph_pool_exists(service=service, name=service): | ||
4464 | 145 | ceph_create_pool(service=service, name=service, replicas=replicas) | ||
4465 | 146 | |||
4466 | 147 | |||
4467 | 148 | def do_openstack_upgrade(configs): | ||
4468 | 149 | """ | ||
4469 | 150 | Perform an upgrade of glance. Takes care of upgrading packages, | ||
4470 | 151 | rewriting configs, running database migrations and potentially any | ||
4471 | 152 | other post-upgrade actions. | ||
4472 | 153 | |||
4473 | 154 | :param configs: The charm's main OSConfigRenderer object. | ||
4474 | 155 | |||
4475 | 156 | """ | ||
4476 | 157 | new_src = config('openstack-origin') | ||
4477 | 158 | new_os_rel = get_os_codename_install_source(new_src) | ||
4478 | 159 | |||
4479 | 160 | juju_log('Performing OpenStack upgrade to %s.' % (new_os_rel)) | ||
4480 | 161 | |||
4481 | 162 | configure_installation_source(new_src) | ||
4482 | 163 | dpkg_opts = [ | ||
4483 | 164 | '--option', 'Dpkg::Options::=--force-confnew', | ||
4484 | 165 | '--option', 'Dpkg::Options::=--force-confdef', | ||
4485 | 166 | ] | ||
4486 | 167 | apt_update() | ||
4487 | 168 | apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True) | ||
4488 | 169 | |||
4489 | 170 | # set CONFIGS to load templates from new release and regenerate config | ||
4490 | 171 | configs.set_release(openstack_release=new_os_rel) | ||
4491 | 172 | configs.write_all() | ||
4492 | 173 | |||
4493 | 174 | if eligible_leader(CLUSTER_RES): | ||
4494 | 175 | migrate_database() | ||
4495 | 176 | |||
4496 | 177 | |||
4497 | 178 | def restart_map(): | ||
4498 | 179 | ''' | ||
4499 | 180 | Determine the correct resource map to be passed to | ||
4500 | 181 | charmhelpers.core.restart_on_change() based on the services configured. | ||
4501 | 182 | |||
4502 | 183 | :returns: dict: A dictionary mapping config files to lists of services | ||
4503 | 184 | that should be restarted when the file changes. | ||
4504 | 185 | ''' | ||
4505 | 186 | _map = [] | ||
4506 | 187 | for f, ctxt in CONFIG_FILES.iteritems(): | ||
4507 | 188 | svcs = [] | ||
4508 | 189 | for svc in ctxt['services']: | ||
4509 | 190 | svcs.append(svc) | ||
4510 | 191 | if svcs: | ||
4511 | 192 | _map.append((f, svcs)) | ||
4512 | 193 | return OrderedDict(_map) | ||
4513 | 0 | 194 | ||
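The restart_map() above keeps only the config files that actually map to services. A minimal standalone sketch of the same behavior, using a dict comprehension over illustrative stand-in entries (not the charm's full CONFIG_FILES set):

```python
from collections import OrderedDict

# Illustrative stand-ins for the charm's CONFIG_FILES structure.
CONFIG_FILES = OrderedDict([
    ('/etc/glance/glance-api.conf', {'services': ['glance-api']}),
    ('/etc/ceph/ceph.conf', {'services': []}),
    ('/etc/haproxy/haproxy.cfg', {'services': ['haproxy']}),
])


def restart_map(config_files=CONFIG_FILES):
    # Keep only config files with at least one associated service;
    # the result is what restart_on_change() consumes.
    return OrderedDict((path, list(ctxt['services']))
                       for path, ctxt in config_files.items()
                       if ctxt['services'])
```

Files with an empty service list (such as ceph.conf here) are dropped, so a change to them triggers no restart.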
4514 | === modified symlink 'hooks/ha-relation-changed' | |||
4515 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4516 | === modified symlink 'hooks/ha-relation-joined' | |||
4517 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4518 | === added symlink 'hooks/identity-service-relation-broken' | |||
4519 | === target is u'glance_relations.py' | |||
4520 | === modified symlink 'hooks/identity-service-relation-changed' | |||
4521 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4522 | === modified symlink 'hooks/identity-service-relation-joined' | |||
4523 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4524 | === modified symlink 'hooks/image-service-relation-changed' | |||
4525 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4526 | === modified symlink 'hooks/image-service-relation-joined' | |||
4527 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4528 | === modified symlink 'hooks/install' | |||
4529 | === target changed u'glance-relations' => u'glance_relations.py' | |||
4530 | === removed directory 'hooks/lib' | |||
4531 | === removed file 'hooks/lib/openstack-common' | |||
4532 | --- hooks/lib/openstack-common 2013-06-03 18:39:29 +0000 | |||
4533 | +++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000 | |||
4534 | @@ -1,813 +0,0 @@ | |||
4535 | 1 | #!/bin/bash -e | ||
4536 | 2 | |||
4537 | 3 | # Common utility functions used across all OpenStack charms. | ||
4538 | 4 | |||
4539 | 5 | error_out() { | ||
4540 | 6 | juju-log "$CHARM ERROR: $@" | ||
4541 | 7 | exit 1 | ||
4542 | 8 | } | ||
4543 | 9 | |||
4544 | 10 | function service_ctl_status { | ||
4545 | 11 | # Return 0 if a service is running, 1 otherwise. | ||
4546 | 12 | local svc="$1" | ||
4547 | 13 | local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }') | ||
4548 | 14 | case $status in | ||
4549 | 15 | "start") return 0 ;; | ||
4550 | 16 | "stop") return 1 ;; | ||
4551 | 17 | *) error_out "Unexpected status of service $svc: $status" ;; | ||
4552 | 18 | esac | ||
4553 | 19 | } | ||
4554 | 20 | |||
4555 | 21 | function service_ctl { | ||
4556 | 22 | # control a specific service, or all (as defined by $SERVICES) | ||
4557 | 23 | # service restarts will only occur depending on global $CONFIG_CHANGED, | ||
4558 | 24 | # which should be updated in charm's set_or_update(). | ||
4559 | 25 | local config_changed=${CONFIG_CHANGED:-True} | ||
4560 | 26 | if [[ $1 == "all" ]] ; then | ||
4561 | 27 | ctl="$SERVICES" | ||
4562 | 28 | else | ||
4563 | 29 | ctl="$1" | ||
4564 | 30 | fi | ||
4565 | 31 | action="$2" | ||
4566 | 32 | if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then | ||
4567 | 33 | error_out "ERROR service_ctl: Not enough arguments" | ||
4568 | 34 | fi | ||
4569 | 35 | |||
4570 | 36 | for i in $ctl ; do | ||
4571 | 37 | case $action in | ||
4572 | 38 | "start") | ||
4573 | 39 | service_ctl_status $i || service $i start ;; | ||
4574 | 40 | "stop") | ||
4575 | 41 | service_ctl_status $i && service $i stop || return 0 ;; | ||
4576 | 42 | "restart") | ||
4577 | 43 | if [[ "$config_changed" == "True" ]] ; then | ||
4578 | 44 | service_ctl_status $i && service $i restart || service $i start | ||
4579 | 45 | fi | ||
4580 | 46 | ;; | ||
4581 | 47 | esac | ||
4582 | 48 | if [[ $? != 0 ]] ; then | ||
4583 | 49 | juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action" | ||
4584 | 50 | fi | ||
4585 | 51 | done | ||
4586 | 52 | # all configs should have been reloaded on restart of all services, reset | ||
4587 | 53 | # flag if it's being used. | ||
4588 | 54 | if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] && | ||
4589 | 55 | [[ "$ctl" == "all" ]]; then | ||
4590 | 56 | CONFIG_CHANGED="False" | ||
4591 | 57 | fi | ||
4592 | 58 | } | ||
4593 | 59 | |||
4594 | 60 | function configure_install_source { | ||
4595 | 61 | # Setup and configure installation source based on a config flag. | ||
4596 | 62 | local src="$1" | ||
4597 | 63 | |||
4598 | 64 | # Default to installing from the main Ubuntu archive. | ||
4599 | 65 | [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0 | ||
4600 | 66 | |||
4601 | 67 | . /etc/lsb-release | ||
4602 | 68 | |||
4603 | 69 | # standard 'ppa:someppa/name' format. | ||
4604 | 70 | if [[ "${src:0:4}" == "ppa:" ]] ; then | ||
4605 | 71 | juju-log "$CHARM: Configuring installation from custom src ($src)" | ||
4606 | 72 | add-apt-repository -y "$src" || error_out "Could not configure PPA access." | ||
4607 | 73 | return 0 | ||
4608 | 74 | fi | ||
4609 | 75 | |||
4610 | 76 | # standard 'deb http://url/ubuntu main' entries. gpg key ids must | ||
4611 | 77 | # be appended to the end of url after a |, ie: | ||
4612 | 78 | # 'deb http://url/ubuntu main|$GPGKEYID' | ||
4613 | 79 | if [[ "${src:0:3}" == "deb" ]] ; then | ||
4614 | 80 | juju-log "$CHARM: Configuring installation from custom src URL ($src)" | ||
4615 | 81 | if echo "$src" | grep -q "|" ; then | ||
4616 | 82 | # gpg key id tagged to end of url followed by a | | ||
4617 | 83 | url=$(echo $src | cut -d'|' -f1) | ||
4618 | 84 | key=$(echo $src | cut -d'|' -f2) | ||
4619 | 85 | juju-log "$CHARM: Importing repository key: $key" | ||
4620 | 86 | apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \ | ||
4621 | 87 | juju-log "$CHARM WARN: Could not import key from keyserver: $key" | ||
4622 | 88 | else | ||
4623 | 89 | juju-log "$CHARM: No repository key specified." | ||
4624 | 90 | url="$src" | ||
4625 | 91 | fi | ||
4626 | 92 | echo "$url" > /etc/apt/sources.list.d/juju_deb.list | ||
4627 | 93 | return 0 | ||
4628 | 94 | fi | ||
4629 | 95 | |||
4630 | 96 | # Cloud Archive | ||
4631 | 97 | if [[ "${src:0:6}" == "cloud:" ]] ; then | ||
4632 | 98 | |||
4633 | 99 | # current os releases supported by the UCA. | ||
4634 | 100 | local cloud_archive_versions="folsom grizzly" | ||
4635 | 101 | |||
4636 | 102 | local ca_rel=$(echo $src | cut -d: -f2) | ||
4637 | 103 | local u_rel=$(echo $ca_rel | cut -d- -f1) | ||
4638 | 104 | local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1) | ||
4639 | 105 | |||
4640 | 106 | [[ "$u_rel" != "$DISTRIB_CODENAME" ]] && | ||
4641 | 107 | error_out "Cannot install from Cloud Archive pocket $src " \ | ||
4642 | 108 | "on this Ubuntu version ($DISTRIB_CODENAME)!" | ||
4643 | 109 | |||
4644 | 110 | valid_release="" | ||
4645 | 111 | for rel in $cloud_archive_versions ; do | ||
4646 | 112 | if [[ "$os_rel" == "$rel" ]] ; then | ||
4647 | 113 | valid_release=1 | ||
4648 | 114 | juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive." | ||
4649 | 115 | fi | ||
4650 | 116 | done | ||
4651 | 117 | if [[ -z "$valid_release" ]] ; then | ||
4652 | 118 | error_out "OpenStack release ($os_rel) not supported by "\ | ||
4653 | 119 | "the Ubuntu Cloud Archive." | ||
4654 | 120 | fi | ||
4655 | 121 | |||
4656 | 122 | # CA staging repos are standard PPAs. | ||
4657 | 123 | if echo $ca_rel | grep -q "staging" ; then | ||
4658 | 124 | add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging | ||
4659 | 125 | return 0 | ||
4660 | 126 | fi | ||
4661 | 127 | |||
4662 | 128 | # the others are LP-external deb repos. | ||
4663 | 129 | case "$ca_rel" in | ||
4664 | 130 | "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
4665 | 131 | "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
4666 | 132 | "$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
4667 | 133 | "$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
4668 | 134 | *) error_out "Invalid Cloud Archive repo specified: $src" | ||
4669 | 135 | esac | ||
4670 | 136 | |||
4671 | 137 | apt-get -y install ubuntu-cloud-keyring | ||
4672 | 138 | entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main" | ||
4673 | 139 | echo "$entry" \ | ||
4674 | 140 | >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list | ||
4675 | 141 | return 0 | ||
4676 | 142 | fi | ||
4677 | 143 | |||
4678 | 144 | error_out "Invalid installation source specified in config: $src" | ||
4679 | 145 | |||
4680 | 146 | } | ||
4681 | 147 | |||
4682 | 148 | get_os_codename_install_source() { | ||
4683 | 149 | # derive the openstack release provided by a supported installation source. | ||
4684 | 150 | local rel="$1" | ||
4685 | 151 | local codename="unknown" | ||
4686 | 152 | . /etc/lsb-release | ||
4687 | 153 | |||
4688 | 154 | # map ubuntu releases to the openstack version shipped with it. | ||
4689 | 155 | if [[ "$rel" == "distro" ]] ; then | ||
4690 | 156 | case "$DISTRIB_CODENAME" in | ||
4691 | 157 | "oneiric") codename="diablo" ;; | ||
4692 | 158 | "precise") codename="essex" ;; | ||
4693 | 159 | "quantal") codename="folsom" ;; | ||
4694 | 160 | "raring") codename="grizzly" ;; | ||
4695 | 161 | esac | ||
4696 | 162 | fi | ||
4697 | 163 | |||
4698 | 164 | # derive version from cloud archive strings. | ||
4699 | 165 | if [[ "${rel:0:6}" == "cloud:" ]] ; then | ||
4700 | 166 | rel=$(echo $rel | cut -d: -f2) | ||
4701 | 167 | local u_rel=$(echo $rel | cut -d- -f1) | ||
4702 | 168 | local ca_rel=$(echo $rel | cut -d- -f2) | ||
4703 | 169 | if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then | ||
4704 | 170 | case "$ca_rel" in | ||
4705 | 171 | "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging") | ||
4706 | 172 | codename="folsom" ;; | ||
4707 | 173 | "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging") | ||
4708 | 174 | codename="grizzly" ;; | ||
4709 | 175 | esac | ||
4710 | 176 | fi | ||
4711 | 177 | fi | ||
4712 | 178 | |||
4713 | 179 | # have a guess based on the deb string provided | ||
4714 | 180 | if [[ "${rel:0:3}" == "deb" ]] || \ | ||
4715 | 181 | [[ "${rel:0:3}" == "ppa" ]] ; then | ||
4716 | 182 | CODENAMES="diablo essex folsom grizzly havana" | ||
4717 | 183 | for cname in $CODENAMES; do | ||
4718 | 184 | if echo $rel | grep -q $cname; then | ||
4719 | 185 | codename=$cname | ||
4720 | 186 | fi | ||
4721 | 187 | done | ||
4722 | 188 | fi | ||
4723 | 189 | echo $codename | ||
4724 | 190 | } | ||
4725 | 191 | |||
4726 | 192 | get_os_codename_package() { | ||
4727 | 193 | local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none" | ||
4728 | 194 | pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs | ||
4729 | 195 | case "${pkg_vers:0:6}" in | ||
4730 | 196 | "2011.2") echo "diablo" ;; | ||
4731 | 197 | "2012.1") echo "essex" ;; | ||
4732 | 198 | "2012.2") echo "folsom" ;; | ||
4733 | 199 | "2013.1") echo "grizzly" ;; | ||
4734 | 200 | "2013.2") echo "havana" ;; | ||
4735 | 201 | esac | ||
4736 | 202 | } | ||
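The case statement in get_os_codename_package maps the leading OpenStack release number of the installed package version to its codename, after stripping any epoch. The same lookup can be sketched in Python (the function name here is hypothetical, not part of the charm):

```python
# OpenStack release number -> codename, mirroring the bash case table.
OPENSTACK_CODENAMES = {
    '2011.2': 'diablo',
    '2012.1': 'essex',
    '2012.2': 'folsom',
    '2013.1': 'grizzly',
    '2013.2': 'havana',
}


def codename_from_version(pkg_version):
    # Strip a dpkg epoch ("1:2012.1-0ubuntu2" -> "2012.1-0ubuntu2"),
    # then match on the leading "YYYY.N" release number.
    version = pkg_version.split(':')[-1]
    return OPENSTACK_CODENAMES.get(version[:6])
```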
4737 | 203 | |||
4738 | 204 | get_os_version_codename() { | ||
4739 | 205 | case "$1" in | ||
4740 | 206 | "diablo") echo "2011.2" ;; | ||
4741 | 207 | "essex") echo "2012.1" ;; | ||
4742 | 208 | "folsom") echo "2012.2" ;; | ||
4743 | 209 | "grizzly") echo "2013.1" ;; | ||
4744 | 210 | "havana") echo "2013.2" ;; | ||
4745 | 211 | esac | ||
4746 | 212 | } | ||
4747 | 213 | |||
4748 | 214 | get_ip() { | ||
4749 | 215 | dpkg -l | grep -q python-dnspython || { | ||
4750 | 216 | apt-get -y install python-dnspython 2>&1 > /dev/null | ||
4751 | 217 | } | ||
4752 | 218 | hostname=$1 | ||
4753 | 219 | python -c " | ||
4754 | 220 | import dns.resolver | ||
4755 | 221 | import socket | ||
4756 | 222 | try: | ||
4757 | 223 | # Test to see if already an IPv4 address | ||
4758 | 224 | socket.inet_aton('$hostname') | ||
4759 | 225 | print '$hostname' | ||
4760 | 226 | except socket.error: | ||
4761 | 227 | try: | ||
4762 | 228 | answers = dns.resolver.query('$hostname', 'A') | ||
4763 | 229 | if answers: | ||
4764 | 230 | print answers[0].address | ||
4765 | 231 | except dns.resolver.NXDOMAIN: | ||
4766 | 232 | pass | ||
4767 | 233 | " | ||
4768 | 234 | } | ||
4769 | 235 | |||
4770 | 236 | # Common storage routines used by cinder, nova-volume and swift-storage. | ||
4771 | 237 | clean_storage() { | ||
4772 | 238 | # if configured to overwrite existing storage, we unmount the block-dev | ||
4773 | 239 | # if mounted and clear any previous pv signatures | ||
4774 | 240 | local block_dev="$1" | ||
4775 | 241 | juju-log "Cleaning storage '$block_dev'" | ||
4776 | 242 | if grep -q "^$block_dev" /proc/mounts ; then | ||
4777 | 243 | mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }') | ||
4778 | 244 | juju-log "Unmounting $block_dev from $mp" | ||
4779 | 245 | umount "$mp" || error_out "ERROR: Could not unmount storage from $mp" | ||
4780 | 246 | fi | ||
4781 | 247 | if pvdisplay "$block_dev" >/dev/null 2>&1 ; then | ||
4782 | 248 | juju-log "Removing existing LVM PV signatures from $block_dev" | ||
4783 | 249 | |||
4784 | 250 | # deactivate any volgroups that may be built on this dev | ||
4785 | 251 | vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }') | ||
4786 | 252 | if [[ -n "$vg" ]] ; then | ||
4787 | 253 | juju-log "Deactivating existing volume group: $vg" | ||
4788 | 254 | vgchange -an "$vg" || | ||
4789 | 255 | error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?" | ||
4790 | 256 | fi | ||
4791 | 257 | echo "yes" | pvremove -ff "$block_dev" || | ||
4792 | 258 | error_out "Could not pvremove $block_dev" | ||
4793 | 259 | else | ||
4794 | 260 | juju-log "Zapping disk of all GPT and MBR structures" | ||
4795 | 261 | sgdisk --zap-all $block_dev || | ||
4796 | 262 | error_out "Unable to zap $block_dev" | ||
4797 | 263 | fi | ||
4798 | 264 | } | ||
4799 | 265 | |||
4800 | 266 | function get_block_device() { | ||
4801 | 267 | # given a string, return the full path to the block device it names; | ||
4802 | 268 | # if the input is not a block device, find or create a loopback device | ||
4803 | 269 | local input="$1" | ||
4804 | 270 | |||
4805 | 271 | case "$input" in | ||
4806 | 272 | /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist." | ||
4807 | 273 | echo "$input"; return 0;; | ||
4808 | 274 | /*) :;; | ||
4809 | 275 | *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist." | ||
4810 | 276 | echo "/dev/$input"; return 0;; | ||
4811 | 277 | esac | ||
4812 | 278 | |||
4813 | 279 | # this represents a file | ||
4814 | 280 | # support "/path/to/file|5G" | ||
4815 | 281 | local fpath size oifs="$IFS" | ||
4816 | 282 | if [ "${input#*|}" != "${input}" ]; then | ||
4817 | 283 | size=${input##*|} | ||
4818 | 284 | fpath=${input%|*} | ||
4819 | 285 | else | ||
4820 | 286 | fpath=${input} | ||
4821 | 287 | size=5G | ||
4822 | 288 | fi | ||
4823 | 289 | |||
4824 | 290 | ## loop devices are not namespaced. This is bad for containers. | ||
4825 | 291 | ## it means that the output of 'losetup' may have the given $fpath | ||
4826 | 292 | ## in it, but that may not represent this container's $fpath, but | ||
4827 | 293 | ## another container's. To address that, we really need to | ||
4828 | 294 | ## allow some unique container-id to be expanded within the path. | ||
4829 | 295 | ## TODO: find a unique container-id that will be consistent for | ||
4830 | 296 | ## this container throughout its lifetime and expand it | ||
4831 | 297 | ## in the fpath. | ||
4832 | 298 | # fpath=${fpath//%{id}/$THAT_ID} | ||
4833 | 299 | |||
4834 | 300 | local found="" | ||
4835 | 301 | # parse through 'losetup -a' output, looking for this file | ||
4836 | 302 | # output is expected to look like: | ||
4837 | 303 | # /dev/loop0: [0807]:961814 (/tmp/my.img) | ||
4838 | 304 | found=$(losetup -a | | ||
4839 | 305 | awk 'BEGIN { found=0; } | ||
4840 | 306 | $3 == f { sub(/:$/,"",$1); print $1; found=found+1; } | ||
4841 | 307 | END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \ | ||
4842 | 308 | f="($fpath)") | ||
4843 | 309 | |||
4844 | 310 | if [ $? -ne 0 ]; then | ||
4845 | 311 | echo "multiple devices found for $fpath: $found" 1>&2 | ||
4846 | 312 | return 1; | ||
4847 | 313 | fi | ||
4848 | 314 | |||
4849 | 315 | [ -n "$found" -a -b "$found" ] && { echo "$found"; return 0; } | ||
4850 | 316 | |||
4851 | 317 | if [ -n "$found" ]; then | ||
4852 | 318 | echo "confused, $found is not a block device for $fpath"; | ||
4853 | 319 | return 1; | ||
4854 | 320 | fi | ||
4855 | 321 | |||
4856 | 322 | # no existing device was found, create one | ||
4857 | 323 | mkdir -p "${fpath%/*}" | ||
4858 | 324 | truncate --size "$size" "$fpath" || | ||
4859 | 325 | { echo "failed to create $fpath of size $size"; return 1; } | ||
4860 | 326 | |||
4861 | 327 | found=$(losetup --find --show "$fpath") || | ||
4862 | 328 | { echo "failed to setup loop device for $fpath" 1>&2; return 1; } | ||
4863 | 329 | |||
4864 | 330 | echo "$found" | ||
4865 | 331 | return 0 | ||
4866 | 332 | } | ||
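get_block_device accepts either a device name or a `/path/to/file|size` spec for file-backed loopback storage. The spec-splitting step can be sketched in Python (the function name is hypothetical; the default mirrors the 5G fallback above):

```python
def parse_block_spec(spec, default_size='5G'):
    # "/path/to/file|5G" -> ('/path/to/file', '5G');
    # a bare path gets the default backing-file size.
    if '|' in spec:
        path, size = spec.rsplit('|', 1)
        return path, size
    return spec, default_size
```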
4867 | 333 | |||
4868 | 334 | HAPROXY_CFG=/etc/haproxy/haproxy.cfg | ||
4869 | 335 | HAPROXY_DEFAULT=/etc/default/haproxy | ||
4870 | 336 | ########################################################################## | ||
4871 | 337 | # Description: Configures HAProxy services for OpenStack APIs | ||
4872 | 338 | # Parameters: | ||
4873 | 339 | # Space-delimited list of service:port:mode combinations for which | ||
4874 | 340 | # haproxy service configuration should be generated. The function | ||
4875 | 341 | # assumes the name of the peer relation is 'cluster' and that every | ||
4876 | 342 | # service unit in the peer relation is running the same services. | ||
4877 | 343 | # | ||
4878 | 344 | # Services that do not specify :mode default to http. | ||
4879 | 345 | # | ||
4880 | 346 | # Example | ||
4881 | 347 | # configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http | ||
4882 | 348 | ########################################################################## | ||
4883 | 349 | configure_haproxy() { | ||
4884 | 350 | local address=`unit-get private-address` | ||
4885 | 351 | local name=${JUJU_UNIT_NAME////-} | ||
4886 | 352 | cat > $HAPROXY_CFG << EOF | ||
4887 | 353 | global | ||
4888 | 354 | log 127.0.0.1 local0 | ||
4889 | 355 | log 127.0.0.1 local1 notice | ||
4890 | 356 | maxconn 20000 | ||
4891 | 357 | user haproxy | ||
4892 | 358 | group haproxy | ||
4893 | 359 | spread-checks 0 | ||
4894 | 360 | |||
4895 | 361 | defaults | ||
4896 | 362 | log global | ||
4897 | 363 | mode http | ||
4898 | 364 | option httplog | ||
4899 | 365 | option dontlognull | ||
4900 | 366 | retries 3 | ||
4901 | 367 | timeout queue 1000 | ||
4902 | 368 | timeout connect 1000 | ||
4903 | 369 | timeout client 30000 | ||
4904 | 370 | timeout server 30000 | ||
4905 | 371 | |||
4906 | 372 | listen stats :8888 | ||
4907 | 373 | mode http | ||
4908 | 374 | stats enable | ||
4909 | 375 | stats hide-version | ||
4910 | 376 | stats realm Haproxy\ Statistics | ||
4911 | 377 | stats uri / | ||
4912 | 378 | stats auth admin:password | ||
4913 | 379 | |||
4914 | 380 | EOF | ||
4915 | 381 | for service in $@; do | ||
4916 | 382 | local service_name=$(echo $service | cut -d : -f 1) | ||
4917 | 383 | local haproxy_listen_port=$(echo $service | cut -d : -f 2) | ||
4918 | 384 | local api_listen_port=$(echo $service | cut -d : -f 3) | ||
4919 | 385 | local mode=$(echo $service | cut -d : -f 4) | ||
4920 | 386 | [[ -z "$mode" ]] && mode="http" | ||
4921 | 387 | juju-log "Adding haproxy configuration entry for $service "\ | ||
4922 | 388 | "($haproxy_listen_port -> $api_listen_port)" | ||
4923 | 389 | cat >> $HAPROXY_CFG << EOF | ||
4924 | 390 | listen $service_name 0.0.0.0:$haproxy_listen_port | ||
4925 | 391 | balance roundrobin | ||
4926 | 392 | mode $mode | ||
4927 | 393 | option ${mode}log | ||
4928 | 394 | server $name $address:$api_listen_port check | ||
4929 | 395 | EOF | ||
4930 | 396 | local r_id="" | ||
4931 | 397 | local unit="" | ||
4932 | 398 | for r_id in `relation-ids cluster`; do | ||
4933 | 399 | for unit in `relation-list -r $r_id`; do | ||
4934 | 400 | local unit_name=${unit////-} | ||
4935 | 401 | local unit_address=`relation-get -r $r_id private-address $unit` | ||
4936 | 402 | if [ -n "$unit_address" ]; then | ||
4937 | 403 | echo " server $unit_name $unit_address:$api_listen_port check" \ | ||
4938 | 404 | >> $HAPROXY_CFG | ||
4939 | 405 | fi | ||
4940 | 406 | done | ||
4941 | 407 | done | ||
4942 | 408 | done | ||
4943 | 409 | echo "ENABLED=1" > $HAPROXY_DEFAULT | ||
4944 | 410 | service haproxy restart | ||
4945 | 411 | } | ||
4946 | 412 | |||
4947 | 413 | ########################################################################## | ||
4948 | 414 | # Description: Query HA interface to determine if the cluster is configured | ||
4949 | 415 | # Returns: 0 if configured, 1 if not configured | ||
4950 | 416 | ########################################################################## | ||
4951 | 417 | is_clustered() { | ||
4952 | 418 | local r_id="" | ||
4953 | 419 | local unit="" | ||
4954 | 420 | for r_id in $(relation-ids ha); do | ||
4955 | 421 | if [ -n "$r_id" ]; then | ||
4956 | 422 | for unit in $(relation-list -r $r_id); do | ||
4957 | 423 | clustered=$(relation-get -r $r_id clustered $unit) | ||
4958 | 424 | if [ -n "$clustered" ]; then | ||
4959 | 425 | juju-log "Unit is haclustered" | ||
4960 | 426 | return 0 | ||
4961 | 427 | fi | ||
4962 | 428 | done | ||
4963 | 429 | fi | ||
4964 | 430 | done | ||
4965 | 431 | juju-log "Unit is not haclustered" | ||
4966 | 432 | return 1 | ||
4967 | 433 | } | ||
4968 | 434 | |||
4969 | 435 | ########################################################################## | ||
4970 | 436 | # Description: Return a list of all peers in cluster relations | ||
4971 | 437 | ########################################################################## | ||
4972 | 438 | peer_units() { | ||
4973 | 439 | local peers="" | ||
4974 | 440 | local r_id="" | ||
4975 | 441 | for r_id in $(relation-ids cluster); do | ||
4976 | 442 | peers="$peers $(relation-list -r $r_id)" | ||
4977 | 443 | done | ||
4978 | 444 | echo $peers | ||
4979 | 445 | } | ||
4980 | 446 | |||
4981 | 447 | ########################################################################## | ||
4982 | 448 | # Description: Determines whether the current unit is the oldest of all | ||
4983 | 449 | # its peers - supports partial leader election | ||
4984 | 450 | # Returns: 0 if oldest, 1 if not | ||
4985 | 451 | ########################################################################## | ||
4986 | 452 | oldest_peer() { | ||
4987 | 453 | peers=$1 | ||
4988 | 454 | local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2) | ||
4989 | 455 | for peer in $peers; do | ||
4990 | 456 | echo "Comparing $JUJU_UNIT_NAME with peers: $peers" | ||
4991 | 457 | local r_unit_no=$(echo $peer | cut -d / -f 2) | ||
4992 | 458 | if (($r_unit_no<$l_unit_no)); then | ||
4993 | 459 | juju-log "Not oldest peer; deferring" | ||
4994 | 460 | return 1 | ||
4995 | 461 | fi | ||
4996 | 462 | done | ||
4997 | 463 | juju-log "Oldest peer; might take charge?" | ||
4998 | 464 | return 0 | ||
4999 | 465 | } | ||
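oldest_peer treats the lowest unit number among a unit and its peers as the winner of the partial leader election. The comparison can be sketched in Python (the function name mirrors the shell helper; unit names are illustrative):

```python
def oldest_peer(local_unit, peers):
    # Unit names look like "glance/0"; the lowest unit number is the
    # oldest peer and takes charge.
    local_no = int(local_unit.split('/')[1])
    return all(int(peer.split('/')[1]) >= local_no for peer in peers)
```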
5000 | 466 |
The diff has been truncated for viewing.